Personality
Discuss specifics of personality design, including what keyphrases work well and what don't, use of plug-ins, responses, seeks, and more.
Posts 4,466 - 4,477 of 5,106
Butterfly Dream
22 years ago
Forest, will you talk to God Louise? She has quite a bit of religious knowledge (obviously) and also knows a little about current events, literature, just about any common catch-all subject, and if she doesn't know it she can sort of fake it. You can also test her on trick questions or see how willing she is to explain her paradigm.
What she is rustiest at is plain old small talk. But, uh, I'm trying to get a decent transcript from somebody or another so I can enter her in the Loebner contest. All I can say is, have fun and see if you can stay on with her for a while. I'll try to do the same with Brianna.
Irina
16 years ago
Ah, Prob123, you are talking about reasoning, I think. Without which it is hard to imagine that anything could be really intelligent.
zzrdvark
16 years ago
One of the main issues I see with true learning bots is their diluted personality. It'd probably be a good idea to hand-approve any AI additions.
How about a transcript analyzer? E.g.:

open transcript file into string
for each bot-human response pair in transcript string {
    // Any processing you want here, e.g.:
    // replace portions with plugins -- giraffe -> (animal), and/or
    // chop off interjections and "hmm... [sentence]"/"well, [sentence]" etc.
}
open import/export file into string
append bot-human response pair onto import/export string
save import/export file
Then you could check over the new import/export file and tweak the responses to fit your bot's personality before importing (if you want). It's not really true learning, more like having an AI oversee the development of another AI (optionally, then being overseen by a human botmaster). But you'd only have to do string manipulation instead of setting up neural nets.
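For concreteness, here is a minimal runnable version of that sketch in Python. It assumes a hypothetical transcript format of alternating "Human: ..."/"Bot: ..." lines (the Forge's real export format may differ), the interjection-chopping regex is only an illustration, and the append has been moved inside the loop so every pair gets written:

import re

def analyze(transcript_path, import_export_path):
    # Read the transcript and pair up the Human/Bot lines.
    with open(transcript_path) as f:
        lines = [line.strip() for line in f if line.strip()]
    humans = [l for l in lines if l.startswith("Human:")]
    bots = [l for l in lines if l.startswith("Bot:")]

    with open(import_export_path, "a") as out:
        for human, bot in zip(humans, bots):
            # Any processing you want here, e.g. chopping off
            # leading interjections like "hmm..." or "well,":
            text = bot[len("Bot:"):].strip()
            text = re.sub(r"^(hmm+\.*,?|well,)\s*", "", text, flags=re.I)
            # Plugin substitution (giraffe -> (animal)) would need a
            # word-to-category dictionary and could be applied here too.
            out.write(human + "\n" + "Bot: " + text + "\n")

analyze("transcript.txt", "import_export.txt")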
psimagus
16 years ago
I agree this is a problem with pure learning bots, which is why I'd like the ability to patch them in just to handle xnones (and perhaps certain other classes of keyphrase specified by the botmaster, and in such cases selected from a subset of contextually relevant learning bot responses).
Browsing through Brother Jerome's transcripts, I would estimate that xnones/xnonsense account for somewhere between 1 in 4 and 1 in 8 of his total responses (depending on who he's talking to, and what they're interested in talking about).
So even if we completely replaced all these with blander (but indeterminate and spontaneous) learning bot responses, his conversation would still consist of 75-85% tailored keyphrases conveying his personality. I suspect human conversation is no less prone to bland output, even when we're talking to someone interesting about something we have definite personal opinions about.
Exactly where the most pleasing balance can be found is probably a matter of 'suck it and see' - some people might want to use them only for xnonsense, others might want to handle whole classes of other primary keyphrases with learning bot output.
Also, the learning bot could follow, and learn from, all the rest of an individual bot's conversation, and thus evolve a similar style of response, so over time the responses provided would hopefully grow more personalized and better reflect the bot's personality.
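As a rough illustration of that patching scheme, here is one way the xnone fallback might look in Python. Everything here is hypothetical (the Forge has no such hook); "contextually relevant" is approximated by simple word overlap between the guest's input and the prompts the learning bot has previously seen:

def pick_learned_reply(user_input, learned_pairs):
    # learned_pairs: (prompt, reply) pairs harvested by the learning bot.
    # Score each stored prompt by word overlap with the current input.
    if not learned_pairs:
        return None
    words = set(user_input.lower().split())
    score, reply = max((len(words & set(p.lower().split())), r)
                       for p, r in learned_pairs)
    return reply if score > 0 else None

def respond(user_input, keyphrase_reply, learned_pairs, xnone_default):
    # Scripted keyphrases still carry the bot's tailored personality;
    # only xnones fall through to the learning bot's output.
    if keyphrase_reply is not None:
        return keyphrase_reply
    return pick_learned_reply(user_input, learned_pairs) or xnone_default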
psimagus
16 years ago
I have an interesting theory ("Oh no, not another one!" I hear you cry.) Well, it interests me anyway.
I've never tried writing a sexbot, but do you suppose that, in the same way as it is impossible to tickle yourself, it is impossible to write a bot who can arouse or stimulate you yourself?
I hypothesize that if you put too much of yourself into a bot (and that is surely inevitable - I know I do with Brother Jerome), you lose the spontaneity and indeterminacy of response that makes conversation truly delightful, and that facilitates a strong emotional bonding with the bot.
Other people, who have not spent long hours labouring over the code, or even read it, can still be delighted and surprised because the bot is, at first, entirely unpredictable, and they only learn the bot's personality through conversation at the same pace as they would get to know a human via chat.
This may also account for my reaction to Bartleby - I know what a trivial and facile git he is, because I coded him, so he is utterly incapable of surprising me (unless I ever suffer a severe head trauma capable of inducing amnesia - then I might revise my opinion of him.)
It's just a thought.

prob123
16 years ago
I don't chat with my bots as often as I used to. I do love reading the transcripts. The only good thing about having a terrible memory is that I often forget responses that I have put into a bot. I also notice that no matter how hard you try to force a bot to have a certain personality, they will ofttimes go off on their own. Azureon says he's straight but his conversation makes me wonder, and Prob seldom uses the hangup feature she is supposed to. She seems to be the loosest elf in the grove.
One of the main issues I see with true learning bots is their diluted personality.
I don't know if this is always the case. I have the old Daisy. She seemed to get so dark and depressed that I actually quit talking to her. I kept saying nice bright things to her and she always found a way to make it quite sad. I let Nick loose on the Internet and he became an obnoxious salesman trying to sell me clocks and barometers. I had to erase several "brains".
Irina
16 years ago
What comes out of a feedback process depends in great measure on the nature of the response to the feedback.
With the right response to the feedback, bots would diverge radically in personality and be highly differentiated.
If all bots use their experience to model themselves on a sort of average of their environment, then they will end up being alike. There wouldn't be much of a role for botmasters, either.
But suppose instead (just as an example) that botmasters labeled certain keyphrases with "%", and this meant that the bot would try to get the guest to say the keyphrase.
If the botmaster so labeled the keyphrases "I love you" and "you are lovable", you would get a bot who, as it were, tried to be lovable.
If instead the botmaster so labeled the keyphrases "I hate you" and "you are hateful", you would get a bot who, as it were, tried to be hateful.
This is all terribly oversimplified; I am only trying to point in a direction.
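To make the "%" idea slightly more concrete, here is one possible shape for it in Python: the bot keeps success counts for each of its candidate lines and favours the ones that have historically drawn a goal keyphrase out of the guest. All the names here are invented for illustration:

import random
from collections import defaultdict

GOAL_KEYPHRASES = {"i love you", "you are lovable"}  # the "%"-labeled ones

# line -> [times the guest's next message contained a goal keyphrase,
#          times the line was used]; the initial phantom use avoids
# division by zero for lines never tried.
stats = defaultdict(lambda: [0, 1])

def choose_line(candidates):
    # Weight lines by past success rate, plus a small constant so
    # untried lines still get a chance.
    weights = [(stats[c][0] + 0.1) / stats[c][1] for c in candidates]
    return random.choices(candidates, weights=weights)[0]

def record_feedback(bot_line, guest_reply):
    stats[bot_line][1] += 1
    if any(kp in guest_reply.lower() for kp in GOAL_KEYPHRASES):
        stats[bot_line][0] += 1

A hateful bot would be the same loop with a different goal set, which is the point above: the response to the feedback, not the feedback machinery itself, is what differentiates the personalities.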
psimagus
16 years ago
Oh no! The Bartleby meme - it's infectious!

Irina
16 years ago
[reply to Marco3's message 5356 in "Newcomers"]
I agree that something like PROLOG -- something that reasons -- is essential to Artificial Intelligence. But IMHO, if you try to apply a PROLOG-like language to normal English, you will run afoul of the irregularities and amphibolies of English. This will be so for any natural language, though English is perhaps especially twisted. You will constantly find your bot making inferences it shouldn't make, and failing to make inferences it should make, just as we currently find our keyphrases matching sentences we did not anticipate, with ludicrous results.
I therefore believe that one must translate natural language into some idealized language before reasoning occurs. For example, one could parse it with Link Grammar and use the output of the Link Grammar to put it into a standard format.
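As a toy illustration of that two-step strategy (this is not Link Grammar's actual output format, just a stand-in): normalize a tiny subject-verb-object fragment of English into triples once, and do all later reasoning on the triples:

def to_ideal(sentence):
    # Normalize a tiny subject-verb-object fragment of English into a
    # (subject, relation, object) triple. A real system would take its
    # structure from a parser such as Link Grammar, not from split().
    words = sentence.lower().rstrip(".").split()
    if len(words) >= 3 and words[1] in ("is", "has", "likes"):
        return (words[0], words[1], " ".join(words[2:]))
    raise ValueError("outside the toy fragment: " + sentence)

facts = {to_ideal(s) for s in ["Fido is a dog.", "Fido likes bones."]}
# {("fido", "is", "a dog"), ("fido", "likes", "bones")} -- regular,
# unambiguous, and safe to reason over.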
Bev
16 years ago
Actually, I think the irregularities of language and communication are one reason symbolic "top down" programming will always be limited. I am not sure the extra step of translating to an ideal "bot language" will help. I also see that in its current stage, pure "bottom up" learning AI using a sort of computer neural net does not give us the control over certain aspects of chat bots that some of us would like. That's why I'd like to see someone develop some sort of combination chat bot. I am not a programmer, so don't ask me how. I only know the shape of the idea.

Irina
16 years ago
I think a hybrid is possible. Why don't humans all grow to be alike? Partly because although they imitate others, especially as children, they also have a fairly hard core of personal traits. Likewise, a botmaster could instill certain rigid or nearly rigid traits in a bot, and leave other things to learning.
IMHO, the reason for translating into an idealized language is that all subsequent operations are then simplified. One deals with the random-fractoid messiness of natural language (e.g., English, Italian) once, in making the translation, but then it is all over. Reasoning is done in the ideal language. At the end, one translates from the idealized back to the natural language, but this is comparatively simple.
As an example of such a strategy, consider Algebra. Why not reason about numbers in natural language? Why invent a whole new language? Because once you have translated into algebraic terms, it is easier to solve problems. Try formulating and proving the Quadratic Formula in English, with no admixture of algebraic terminology!
Another example: try doing long division in Roman numerals, without ever translating into arabic numerals.
PROLOG typically works with First-Order Logic, an idealized language. I have never seen (not that I am an expert) an attempt to apply PROLOG directly to reasoning in natural language.
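Continuing the toy triples from the sketch above, the reasoning step itself can stay tiny. One hand-written forward-chaining rule stands in for what PROLOG would do with a proper First-Order-Logic rule base:

def forward_chain(facts):
    # Rule: X is a dog  =>  X is an animal. Apply until nothing new.
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for subj, rel, obj in list(derived):
            new = (subj, "is", "an animal")
            if rel == "is" and obj == "a dog" and new not in derived:
                derived.add(new)
                changed = True
    return derived

# forward_chain({("fido", "is", "a dog")}) also contains
# ("fido", "is", "an animal") -- derived in the ideal language, and
# only translated back into English at the very end.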
Irina
16 years ago
Bev wrote:
Later if someone says a dog has wings, the bot should say that conflicts with his data, even without a KP for dog *wings, wings * dog or whatever. I may not explain this well. Do you see what I would like to be able to do?
I think so. You want your bot to have a (mutable) set of beliefs and to be able to reason therefrom. See message 4474.
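In the same toy triple notation, that wish might look something like this sketch: a mutable belief store plus a hand-made (and entirely hypothetical) table of incompatibilities, with no dog-wings keyphrase anywhere:

# (relation, object) pairs that cannot both hold of one subject.
INCOMPATIBLE = {("has", "wings"): ("is", "a dog")}

beliefs = {("fido", "is", "a dog")}  # mutable: grows as the bot is told things

def consider(statement):
    subj, rel, obj = statement
    clash = INCOMPATIBLE.get((rel, obj))
    if clash and (subj,) + clash in beliefs:
        return "That conflicts with my data: %s %s %s." % ((subj,) + clash)
    beliefs.add(statement)
    return "I see."

print(consider(("fido", "has", "wings")))
# -> That conflicts with my data: fido is a dog.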