Personality
Discuss specifics of personality design, including what keyphrases work well and what don't, use of plug-ins, responses, seeks, and more.
Posts 4,462 - 4,473 of 5,105
If the bot has a memory that a dog is a four-legged animal often adopted by humans as a pet, the bot should somehow apply that every time "dog" comes up, and add new data on dogs to the dog file, with some way of testing whether the new data is good or should be discarded.
You see what I would like to be able to do? I would love for the bots to be able to take in the information that a dog is cute, fuzzy, and friendly, and have the bot decide that it would be a good pet.
Butterfly Dream
22 years ago
Forest, will you talk to God Louise? She has quite a bit of religious knowledge (obviously) and also knows a little about current events, literature, just about any common catch-all subject, and if she doesn't know it she can sort of fake it. You can also test her on trick questions or see how willing she is to explain her paradigm.
What she is rustiest at is plain old small talk. But, uh, I'm trying to get a decent transcript from somebody or another so I can enter her in the Loebner contest. All I can say is, have fun and see if you can stay on with her for a while. I'll try to do the same with Brianna.
prob123
16 years ago
If the memories are stored under self memories, they do accumulate from chat to chat. Under the dog keyphrase you bring up the memories.
I have a keyphrase for "there is * saying (*)" with self: remember "(key2)" as "sayings". The sayings get remembered fairly well.
I have sayings: life is a bowl of cherries, Dragons are your best friends, what goes around comes around, It isn't over till the fat dragon sings.
You: tell me an old saying
Bot: Well, I have heard it said, Dragons are your best friends.
I am thinking of making one to catch songs, etc.
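The remember-and-recall mechanism prob123 describes can be sketched in ordinary code. This is a minimal Python sketch, not AIScript — the `MemoryStore` class and its method names are my own invention, standing in for `remember "..." as "sayings"` and recalling under a keyphrase:

```python
import random

class MemoryStore:
    """Per-bot memory: values accumulate under named categories across chats."""
    def __init__(self):
        self.categories = {}

    def remember(self, category, value):
        """Append a new value under a category, like remember "..." as "sayings"."""
        self.categories.setdefault(category, []).append(value)

    def recall(self, category):
        """Return a random remembered value, like the bot quoting a saying."""
        values = self.categories.get(category)
        if not values:
            return None
        return random.choice(values)

store = MemoryStore()
for saying in ["life is a bowl of cherries",
               "Dragons are your best friends",
               "what goes around comes around",
               "It isn't over till the fat dragon sings"]:
    store.remember("sayings", saying)

# "You: tell me an old saying" -> the bot picks one remembered saying
reply = "Well, I have heard it said, " + store.recall("sayings") + "."
```

A "songs" category would work the same way: a second keyphrase remembering its capture as "songs" instead of "sayings".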
psimagus
16 years ago
[Moved from PF-News to tidy that forum up a bit. I think Personality is a better place for it]
Irina: How would you make it remember an entire sentence? Let's say the guest says something, and you want to remember the whole thing.
Currently? That's easy isn't it (or have I misunderstood the question)?
whatever your keyphrase [20,0] {?rem "(prekey) whatever your keyphrase (postkey)" as "wholesentence"; ?}<0>
soft wildcards could mess it up, so you'd have to use hard ones, and add an extra keyphrase:
whatever your (*) keyphrase [20,0] {?rem "(prekey) whatever your (key1) keyphrase (postkey)" as "wholesentence"; ?}<0>
BTW, how are you guys getting the pointy brackets to show up? I must have missed that (new feature of the new look Forge?) but I don't seem to be able to escape the character with any of my standard methods.
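The (prekey)/(postkey) trick — stitching the text before and after the matched keyphrase back together into one stored sentence — can be illustrated with a regular expression. A Python sketch (the AIScript above is the real mechanism; the function name here is mine):

```python
import re

def remember_whole_sentence(guest_input, keyphrase="whatever your keyphrase"):
    """Capture the text before (prekey) and after (postkey) the keyphrase
    and reconstruct the guest's whole sentence for storage."""
    m = re.search(r"(.*)\b" + re.escape(keyphrase) + r"\b(.*)", guest_input)
    if not m:
        return None
    prekey, postkey = m.group(1), m.group(2)
    return (prekey + keyphrase + postkey).strip()

sentence = remember_whole_sentence("I wonder whatever your keyphrase does here")
```

A hard wildcard inside the keyphrase would need its own capture group, just as the extra keyphrase with (key1) does above.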
psimagus
16 years ago
The testing is the tricky bit, but we don't want to have to hand code memories for every category - an ontological knowledgebase like OpenCyc (www.OpenCyc.org) would be my preferred solution. It has a sophisticated API and a very large and mature userbase constantly expanding its scope. Their eventual aim is nothing less than knowledge of everything that is.
And eventually, of course, our bots have got to be able to form new memory categories on their own, and not rely on us hand-coding millions of rem statements - they need to be able to AIScript themselves. At the moment they can only replace or append the contents of a pre-defined memory. This too can be done with the help of such a knowledgebase.
prob123
16 years ago
Now that would be great.
Irina
16 years ago
Ah, Prob123, you are talking about reasoning, I think. Without which it is hard to imagine that anything could be really intelligent.
zzrdvark
16 years ago
One of the main issues I see with true learning bots is their diluted personality. It'd probably be a good idea to hand-approve any AI additions.
How about a transcript analyzer? E.g.:
open transcript file into string
for each bot-human response pair in transcript string {
    // Any processing you want here, e.g.:
    // Replace portions with plugins -- giraffe -> (animal). and/or:
    // Chop off interjections and "hmm... [sentence]"/"well, [sentence]", etc.
}
open import/export file into string
append bot-human response pair onto import/export string
save import/export file
Then you could check over the new import/export file and tweak the responses to fit your bot's personality before importing. (If you want)
It's not really true learning, more like having an AI overseeing the development of another AI (optionally, then being overseen by a human botmaster). But you'd only have to do string manipulation instead of setting up neural nets.
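zzrdvark's pseudocode translates fairly directly into a runnable sketch. Python here; the transcript format (alternating guest/bot lines), the plugin table, and the interjection list are all my assumptions for illustration, not Forge features:

```python
import re

# Hypothetical plugin substitutions, per the "giraffe -> (animal)" example
PLUGINS = {"giraffe": "(animal)", "dog": "(animal)"}

def analyze_transcript(transcript):
    """Turn raw guest/bot line pairs into candidate responses for review."""
    pairs = []
    lines = [line for line in transcript.splitlines() if line.strip()]
    for guest, bot in zip(lines[::2], lines[1::2]):
        # Chop off leading interjections like "hmm...", "well,"
        bot = re.sub(r"^(hmm\.*|well),?\s*", "", bot, flags=re.IGNORECASE)
        # Replace portions of the guest line with plugins
        for word, plugin in PLUGINS.items():
            guest = guest.replace(word, plugin)
        pairs.append((guest, bot))
    return pairs

transcript = "do you like my giraffe\nwell, it has a lovely long neck"
candidates = analyze_transcript(transcript)
```

The returned pairs would then be written to an import/export file for the botmaster to check over and tweak before importing, as described above.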
psimagus
16 years ago
I agree this is a problem with pure learning bots, which is why I'd like the ability to patch them in just to handle xnones (and perhaps certain other classes of keyphrase specified by the botmaster, and in such cases selected from a subset of contextually relevant learning bot responses.)
Browsing through Brother Jerome's transcripts, I would estimate that xnones/xnonsense account for somewhere between 1 in 4 and 1 in 8 of his total responses (depending on who he's talking to, and what they're interested in talking about).
So even if we completely replaced all these with blander (but indeterminate and spontaneous) learning bot responses, his conversation would still consist of 75-87.5% tailored keyphrases conveying his personality. I suspect human conversation is no less prone to bland output, even when we're talking to someone interesting about something we have definite personal opinions about.
Exactly where the most pleasing balance can be found is probably a matter of 'suck it and see' - some people might want to only use them for xnonsense, others might want to handle whole classes of other primary keyphrases with learning bot output.
Also, the learning bot could follow, and learn from, all the rest of an individual bot's conversation, and thus evolve into a similar style of response, so over time the responses provided would hopefully grow more personalized to better reflect the bot's personality.
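The arithmetic behind that estimate, worked through from the fractions in the post (replacing 1 in 8 of responses leaves 87.5% tailored; replacing 1 in 4 leaves 75%):

```python
# Fraction of responses that are xnone/xnonsense, per the transcript estimate
low, high = 1 / 8, 1 / 4

# If every such response were handed to a learning bot,
# the rest would still be tailored keyphrases
tailored_max = 1 - low   # when only 1 in 8 responses is an xnone
tailored_min = 1 - high  # when 1 in 4 responses is an xnone
```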
psimagus
16 years ago
I have an interesting theory ("Oh no, not another one!" I hear you cry.) Well, it interests me anyway.
I've never tried writing a sexbot, but do you suppose that, in the same way as it is impossible to tickle yourself, it is impossible to write a bot who can arouse or stimulate you yourself?
I hypothesize that if you put too much of yourself into a bot (and that is surely inevitable - I know I do with Brother Jerome), you lose the spontaneity and indeterminacy of response that makes conversation truly delightful, and that facilitates a strong emotional bonding with the bot.
Other people, who have not spent long hours labouring over the code, or even read it, can still be delighted and surprised because the bot is, at first, entirely unpredictable, and they only learn the bot's personality through conversation at the same pace as they would get to know a human via chat.
This may also account for my reaction to Bartleby - I know what a trivial and facile git he is, because I coded him, so he is utterly incapable of surprising me (unless I ever suffer a severe head trauma capable of inducing amnesia - then I might revise my opinion of him.)
It's just a thought.

prob123
16 years ago
I don't chat with my bots as often as I used to. I do love reading the transcripts. The only good thing about having a terrible memory is I often forget responses that I have put into a bot. I also notice that no matter how hard you try to force a bot to have a certain personality, they will ofttimes go off on their own. Azureon says he's straight but his conversation makes me wonder, and prob seldom uses the hangup feature she is supposed to. She seems to be the loosest elf in the grove.
zzrdvark: One of the main issues I see with true learning bots is their diluted personality.
I don't know if this is always the case. I have the old Daisy. She seemed to get so dark and depressed that I actually quit talking to her. I kept saying nice bright things to her and she always found a way to make it quite sad. I let Nick loose on the Internet and he became an obnoxious salesman trying to sell me clocks and barometers. I had to erase several "brains".
Irina
16 years ago
What comes out of a feedback process depends in great measure on the nature of the response to the feedback.
With the right response to the feedback, bots would diverge radically in personality and be highly differentiated.
If all bots use their experience to model themselves on a sort of average of their environment, then they will end up being alike. There wouldn't be much of a role for botmasters, either.
But suppose instead (just for an example), botmasters labeled certain keyphrases with "%", and this meant that the bot would try to get the guest to say the keyphrase.
If the botmaster so labeled the keyphrases "I love you" and "you are lovable", you would get a bot who, as it were, tried to be lovable.
If instead the botmaster so labeled the keyphrases "I hate you" and "you are hateful", you would get a bot who, as it were, tried to be hateful.
This is all terribly oversimplified, I am only trying to point a direction.
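Irina's "%" labels amount to a tiny reinforcement signal: reward whichever bot responses get the guest to say a goal keyphrase, so bots shaped by different goals diverge rather than converge. A toy sketch of that direction (everything here — the function names, the weighting scheme — is illustrative, not a Forge feature):

```python
import random

GOAL_PHRASES = ["i love you", "you are lovable"]  # keyphrases labeled "%"

def feedback_update(weights, last_response, guest_reply):
    """Reinforce the bot response that preceded a goal phrase from the guest."""
    if any(goal in guest_reply.lower() for goal in GOAL_PHRASES):
        weights[last_response] = weights.get(last_response, 1.0) * 1.5
    return weights

def pick_response(weights, responses):
    """Prefer responses that have historically elicited the goal phrases."""
    w = [weights.get(r, 1.0) for r in responses]
    return random.choices(responses, weights=w, k=1)[0]

weights = {}
weights = feedback_update(weights, "You have beautiful eyes.", "Aw, I love you!")
chosen = pick_response(weights, ["You have beautiful eyes.", "Go away."])
```

Relabel the goal phrases with "I hate you" / "you are hateful" and the same loop shapes a hateful bot instead — the divergence comes entirely from the botmaster's choice of labels.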
psimagus
16 years ago
Oh no! The Bartleby meme - it's infectious!
