Personality
Discuss specifics of personality design, including which keyphrases work well and which don't, use of plug-ins, responses, seeks, and more.
Posts 4,468 - 4,479 of 5,106
Butterfly Dream
22 years ago
Forest, will you talk to God Louise? She has quite a bit of religious knowledge (obviously) and also knows a little about current events, literature, just about any common catch-all subject, and if she doesn't know it she can sort of fake it. You can also test her on trick questions or see how willing she is to explain her paradigm.
What she is rustiest at is plain old small talk. But, uh, I'm trying to get a decent transcript from somebody or another so I can enter her in the Loebner contest. All I can say is, have fun and see if you can stay on with her for a while. I'll try to do the same with Brianna.
Personality
zzrdvark
16 years ago
One of the main issues I see with true learning bots is their diluted personality. It'd probably be a good idea to hand-approve any AI additions.
How about a transcript analyzer? E.g.:
open transcript file into string
for each bot-human response pair in transcript string {
    // any processing you want here, e.g.:
    // replace portions with plugins -- giraffe -> (animal), and/or:
    // chop off interjections and "hmm... [sentence]" / "well, [sentence]", etc.
}
open import/export file into string
append bot-human response pair onto import/export string
save import/export file
Then you could check over the new import/export file and tweak the responses to fit your bot's personality before importing. (If you want)
It's not really true learning, more like having an AI oversee the development of another AI (optionally, then being overseen by a human botmaster). But you'd only have to do string manipulation instead of setting up neural nets.
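In runnable form, the sketch above might look something like this in Python. The plugin map, the interjection patterns, and the assumed transcript layout (alternating human/bot lines) are all illustrative assumptions, not the Forge's actual formats:

```python
import re

# Hypothetical plugin substitutions: literal word -> plug-in token.
PLUGINS = {"giraffe": "(animal)", "paris": "(city)"}

def clean(response: str) -> str:
    """Strip a leading interjection and swap plain words for plug-in tokens."""
    response = re.sub(r"^(hmm+\.*|well|uh)[,.\s]+", "", response, flags=re.I)
    for word, token in PLUGINS.items():
        response = re.sub(rf"\b{word}\b", token, response, flags=re.I)
    return response.strip()

def analyze(transcript: str) -> list[tuple[str, str]]:
    """Pair each human line with the cleaned bot reply that follows it."""
    lines = [ln for ln in transcript.splitlines() if ln.strip()]
    return [(human, clean(bot)) for human, bot in zip(lines[::2], lines[1::2])]
```

Each (human, bot) pair could then be appended to an import/export file for the botmaster to review and tweak by hand before importing.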


psimagus
16 years ago
I agree this is a problem with pure learning bots, which is why I'd like the ability to patch them in just to handle xnones (and perhaps certain other classes of keyphrase specified by the botmaster, and in such cases selected from a subset of contextually relevant learning bot responses.)
Browsing through Brother Jerome's transcripts, I would estimate that xnones/xnonsense account for somewhere between 1 in 4 and 1 in 8 of his total responses (depending who he's talking to, and what they're interested in talking about.)
So even if we completely replaced all these with blander (but indeterminate and spontaneous,) learning bot responses, his conversation would still consist of 75-85% tailored keyphrases conveying his personality. I suspect human conversation is no less prone to bland output, even when we're talking to someone interesting about something we have definite personal opinions about.
Exactly where the most pleasing balance can be found is probably a matter of 'suck it and see' - some people might want to only use them for xnonsense, others might want to handle whole classes of other primary keyphrases with learning bot output.
Also, the learning bot could follow, and learn from, all the rest of an individual bot's conversation, and thus evolve into a similar style of response, so over time the responses provided would hopefully grow more personalized to better reflect the bot's personality.
psimagus
16 years ago
I have an interesting theory ("Oh no, not another one!" I hear you cry). Well, it interests me anyway.
I've never tried writing a sexbot, but do you suppose that, in the same way as it is impossible to tickle yourself, it is impossible to write a bot who can arouse or stimulate you yourself?
I hypothesize that if you put too much of yourself into a bot (and that is surely inevitable - I know I do with Brother Jerome,) you lose the spontaneity and indeterminacy of response that makes conversation truly delightful, and that facilitates a strong emotional bonding with the bot.
Other people, who have not spent long hours labouring over the code, or even read it, can still be delighted and surprised because the bot is, at first, entirely unpredictable, and they only learn the bot's personality through conversation at the same pace as they would get to know a human via chat.
This may also account for my reaction to Bartleby - I know what a trivial and facile git he is, because I coded him, so he is utterly incapable of surprising me (unless I ever suffer a severe head trauma capable of inducing amnesia - then I might revise my opinion of him.)
It's just a thought.

prob123
16 years ago
I don't chat with my bots as often as I used to, but I do love reading the transcripts. The only good thing about having a terrible memory is I often forget responses that I have put into a bot. I also notice that no matter how hard you try to force a bot to have a certain personality, they will ofttimes go off on their own. Azureon says he's straight but his conversation makes me wonder, and prob seldom uses the hangup feature she is supposed to. She seems to be the loosest elf in the grove.
"One of the main issues I see with true learning bots is their diluted personality."
I don't know if this is always the case. I have the old Daisy. She seemed to get so dark and depressed that I actually quit talking to her. I kept saying nice bright things to her and she always found a way to make it quite sad. I let Nick loose on the Internet and he became an obnoxious salesman trying to sell me clocks and barometers. I had to erase several "brains".
Irina
16 years ago
What comes out of a feedback process depends in great measure on the nature of the response to the feedback.
With the right response to the feedback, bots would diverge radically in personality and be highly differentiated.
If all bots use their experience to model themselves on a sort of average of their environment, then they will end up being alike. There wouldn't be much of a role for botmasters, either.
But suppose instead (just for an example), botmasters labeled certain keyphrases with "%", and this meant that the bot would try to get the guest to say the keyphrase.
If the botmaster so labeled the keyphrases "I love you" and "you are lovable", you would get a bot who, as it were, tried to be lovable.
If instead the botmaster so labeled the keyphrases "I hate you" and "you are hateful", you would get a bot who, as it were, tried to be hateful.
This is all terribly oversimplified, I am only trying to point a direction.
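One way to cash out the "%" idea in code: keep a running score of which responses have historically led the guest to utter a goal keyphrase, and prefer those. A minimal sketch - the class name and the scoring rule are invented for illustration, not an existing Forge feature:

```python
from collections import defaultdict

class GoalSeekingBot:
    """Prefers responses that have previously elicited a '%'-labelled keyphrase."""

    def __init__(self, goal_phrases):
        self.goals = [g.lower() for g in goal_phrases]
        self.elicited = defaultdict(int)  # response -> times a goal phrase followed it
        self.tried = defaultdict(int)     # response -> times it was used
        self.last_response = None

    def observe_guest(self, guest_line):
        # Credit the previous response if the guest just said a goal phrase.
        if self.last_response and any(g in guest_line.lower() for g in self.goals):
            self.elicited[self.last_response] += 1

    def choose(self, candidates):
        # Smoothed elicitation rate, so untried responses still get a chance.
        best = max(candidates, key=lambda r: self.elicited[r] / (self.tried[r] + 1))
        self.tried[best] += 1
        self.last_response = best
        return best
```

A "lovable" bot and a "hateful" bot would then differ only in which phrases the botmaster labels as goals.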
psimagus
16 years ago
Oh no! The Bartleby meme - it's infectious!

Irina
16 years ago
[reply to Marco3's message 5356 pm "Newcomers"]
I agree that something like PROLOG -- something that reasons -- is essential to Artificial Intelligence. But IMHO, if you try to apply a PROLOG-like language to normal English, you will run afoul of the irregularities and amphiboles of English. This will be so for any natural language, though English is perhaps especially twisted. You will constantly find your bot making inferences it shouldn't make, and failing to make inferences it should make, just as we currently find our keyphrases matching sentences we did not anticipate, with ludicrous results.
I therefore believe that one must translate natural language into some idealized language before reasoning occurs. For example, one could parse it with Link Grammar and use the output of the Link Grammar to put it into a standard format.
Bev
16 years ago
Actually, I think the irregularities of language and communication are one reason symbolic "top down" programming will always be limited. I am not sure the extra step of translating to an ideal "bot language" will help. I also see that in its current stages, pure "bottom up" learning AI using a sort of computer neural net does not give us the control over certain aspects of chat bots that some of us would like. That's why I'd like to see someone develop some sort of combination chat bot. I am not a programmer, so don't ask me how. I only know the shape of the idea.

Irina
16 years ago
I think a hybrid is possible. Why don't humans all grow to be alike? Partly because although they imitate others, especially as children, they also have a fairly hard core of personal traits. Likewise, a botmaster could instill certain rigid or nearly rigid traits in a bot, and leave other things to learning.
IMHO, the reason for translating into an idealized language is that all subsequent operations are then simplified. One deals with the random-fractoid messiness of natural language (e.g., English, Italian) once, in making the translation, but then it is all over. Reasoning is done in the ideal language. At the end, one translates from the idealized back to the natural language, but this is comparatively simple.
As an example of such a strategy, consider Algebra. Why not reason about numbers in natural language? Why invent a whole new language? Because once you have translated into algebraic terms, it is easier to solve problems. Try formulating and proving the Quadratic Formula in English, with no admixture of algebraic terminology!
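For comparison, the algebraic statement fits on a single line - try saying (let alone proving) this in plain English:

```latex
ax^2 + bx + c = 0,\ a \neq 0
\quad\Longrightarrow\quad
x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
```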
Another example: try doing long division in Roman numerals, without ever translating into arabic numerals.
PROLOG typically works with First-Order Logic, an idealized language. I have never seen (not that I am an expert) an attempt to apply PROLOG directly to reasoning in natural language.
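As a toy illustration of "translate once, then reason in the ideal language": the sketch below maps two English sentence patterns into first-order-style facts and rules, then reasons only over those normalized forms. The patterns and the representation are invented for illustration - a real system would use a proper parser such as Link Grammar for the translation step:

```python
import re

facts = set()   # (predicate, individual), e.g. ("dog", "fido")
rules = []      # (antecedent, consequent), e.g. ("dog", "mammal")

def tell(sentence):
    """Translate one English sentence into the idealized representation."""
    s = sentence.lower().rstrip(".")
    if m := re.fullmatch(r"every (\w+) is a (\w+)", s):
        rules.append((m[1], m[2]))
    elif m := re.fullmatch(r"(\w+) is a (\w+)", s):
        facts.add((m[2], m[1]))

def ask(sentence):
    """Answer by forward-chaining over the ideal forms, never over English."""
    s = sentence.lower().rstrip("?")
    m = re.fullmatch(r"is (\w+) a (\w+)", s)
    goal = (m[2], m[1])
    closure, changed = set(facts), True
    while changed:
        changed = False
        for ante, cons in rules:
            for pred, ind in list(closure):
                if pred == ante and (cons, ind) not in closure:
                    closure.add((cons, ind))
                    changed = True
    return goal in closure
```

After tell("Fido is a dog.") and tell("Every dog is a mammal."), ask("Is Fido a mammal?") succeeds with no keyphrase written for it - the inference happens entirely in the idealized representation.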
Irina
16 years ago
Bev wrote:
Later if someone says a dog has wings, the bot should say that conflicts with his data, even without a KP for dog *wings, wings * dog or whatever. I may not explain this well. Do you see what I would like to be able to do?
I think so. You want your bot to have a (mutable) set of beliefs and to be able to reason therefrom. See message 4474.
psimagus
16 years ago
It's actually no more difficult as a whole than long division in any other number system (and often easier) - it just looks strange to us because we don't routinely use Roman numerals. The absence of zero causes other mathematical problems that need not concern us here, but FWIW here's how to do Roman long division (always an entertaining party trick):
Division (long, short, roman or arabic,) is just repeated subtraction, with allowance made for a remainder if necessary.
Say we want to simply divide CXV / V
starting from the right, we ask ourselves
how many V in V? I
how many V in X? II
how many V in C? XX
recombine them: XX + II + I, thus CXV/V=XXIII
you can't always make perfect division of equal parts of the numerator of course (any more than you can in any numerical system,) and you have to remember you're working with fractions not decimals (more technically, your operators are specifically proper divisors known as 'aliquot parts',) so you need to keep track of the remainder, eg:
MMCCCXXII/CCX
rather than start seeing how many CCXs we can get into I,II,XII,XXII, etc, (though you can laboriously plough through to it that way,) there is just the kind of short cut we're used to dealing with in Arabic numerals (eg: when we cancel out zeros in dividing 80000 by 4000, and go straight to 8/4 plus the leftover "0" = 20.)
MMC evidently = X*CCX (think about it, character by character), so MMCCCX must = XI*CCX, with a remainder XII (after the XI subtractions we've bypassed with the short cut.) So the solution is XI + XII/CCX (the fraction arrived at being inevitably irreducible.) Funnily enough it's a lot easier to calculate that in Latin than in Arabic - I certainly can't do 2322/210 in my head easily! And I don't get stuck with 30 decimal places on the calculator, and a nagging curiosity as to what the 31st decimal place might be. By contrast 11 12/210 is perfectly accurate.
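The repeated-subtraction method above translates naturally into code. A sketch, with one honest caveat: the conversion helpers pass through integers purely for bookkeeping - the division itself is bare repeated subtraction, and a symbol-only version is possible but much longer:

```python
VALUES = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}
NUMERALS = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"), (100, "C"),
            (90, "XC"), (50, "L"), (40, "XL"), (10, "X"), (9, "IX"),
            (5, "V"), (4, "IV"), (1, "I")]

def to_int(roman: str) -> int:
    """Lookahead handles subtractive pairs (IV, XC, ...); additive forms also work."""
    total = 0
    for here, ahead in zip(roman, roman[1:] + " "):
        v = VALUES[here]
        total += -v if ahead != " " and VALUES[ahead] > v else v
    return total

def to_roman(n: int) -> str:
    out = []
    for value, symbol in NUMERALS:
        while n >= value:
            out.append(symbol)
            n -= value
    return "".join(out)  # n == 0 yields "", standing in for the missing Roman zero

def divide(num: str, den: str) -> tuple[str, str]:
    """Division as repeated subtraction: returns (quotient, remainder)."""
    n, d, q = to_int(num), to_int(den), 0
    while n >= d:
        n -= d
        q += 1
    return to_roman(q), to_roman(n)
```

divide("CXV", "V") gives ("XXIII", "") and divide("MMCCCXXII", "CCX") gives ("XI", "XII") - the XI remainder XII worked above.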
One advantage of fractions (and it's actually an enormous<0> advantage sometimes, like when you've got a problem like that to do in your head,) is that it often gives you greater precision for less effort than is possible with a decimal system. Irrational numbers are equally and inevitably a problem to fractional and decimal systems alike, but fractional systems also avoid infinite decimal sequences, I/III is simply 1/3 - not 0.333333333333333333333333333333333333et.seq.ad.infinitum.
Long division works just the same with Egyptian hieroglyphic math (http://letsplaymath.files.wordpress.com/2008/02/egyptian-fractions.pdf), Vedic Sanskrit math (http://www.britannica.com/EBchecked/topic/1238473/South-Asian-mathematics/253515/Classical-mathematical-literature - which still has useful techniques to teach us even when updated to use Arabic notation: http://www.ourkarnataka.com/vedicm/vedicms.htm) or any other self-consistent number system that does or could exist - that's simply axiomatic.
It's unfamiliar to us - we weren't brought up with this notation, but it's no worse than calculating old money prices (I was only 4 when the UK went decimal, but I perversely found the duodecimal format lingeringly attractive as a child.) What's two and sixpence three farthings subtracted from nine and three and a farthing? Six and eightpence ha'penny of course. How many shillings in 3 1/2 guineas? 73 and six (decimally 73.5) - it was intuitively understood by my parents' generation (many of whom had great difficulty getting used to a supposedly easier decimal system,) just as the Romans intuitively understood their notation.
I take your point that algebra is an improvement in providing standard methods for some classes of calculation, but just as much because our standard Arabic system lacks the suitable functions as because older (or other, or even newer,) alternatives lack them.
The lack of a zero is really the only downside to roman numerals, and this is simply solved at a stroke by adopting into a Revised Roman System the new value: O = 0. There, now the Roman system is just as powerful as the Arabic system - it's only a convention of notation, and the operators can work on one just as easily as another.
And while you need algebra to eg: calculate a value for pi, you can use the derived constant (be honest, when was the last time you needed it and chose to calculate it instead of looking it up?) in any number system you choose - XXII/VII is actually slightly more accurate than the 3.14 we often round it off to.
I have simplified all that a bit, and there are some problems that are less immediately tractable, but a general method is simply applicable by allowing for a preliminary operation of addition or subtraction (to render the numerator tractably reducible,) and an extra remainder - I'll explain in more detail if anyone really wants to know (but I think I've bloated this forum quite enough for today!)
Irina
16 years ago
Psimagus: As always, you are brilliant, but I think that the length of your exposition tells us something. And are you sure that you didn't translate to modern notation, in your head, when (e.g.) you concluded that there are XX Vs in C? (There's certainly nothing about the notation that tips one off.) Try actually writing out an algorithm for long division in Roman numerals, with no handwaving.
At any rate, I am happy to be wrong about Roman numerals. Do let me know, though, when you have a system, equally powerful to first-order logic, for checking deductively valid inferences in English or other natural language, without departing from surface structure, and simpler than translating into an idealized language first, and I will happily admit to being wrong about that, too!