Personality
Discuss the specifics of personality design, including which Keyphrases work well and which don't, the use of plug-ins, responses, seeks, and more.
Posts 994 - 1,005 of 5,106
Butterfly Dream
23 years ago
Forest, will you talk to God Louise? She has quite a bit of religious knowledge (obviously) and also knows a little about current events, literature, just about any common catch-all subject, and if she doesn't know it she can sort of fake it. You can also test her on trick questions or see how willing she is to explain her paradigm.
What she is rustiest at is plain old small talk. But, uh, I'm trying to get a decent transcript from somebody or another so I can enter her in the Loebner contest. All I can say is, have fun and see if you can stay on with her for a while. I'll try to do the same with Brianna.
Eugene Meltzner
22 years ago
Our bots know hardly any grammar, except maybe in the preprocessing routines. This might belong in the AI Engine forum.
Shadyman
22 years ago
A gentleman does not end a sentence with a preposition, so I think you should choose not to.

STRMKirby
22 years ago
I found the PForge through botspot. I originally frowned upon it because of the slow chat speed and having to reload, but it was the best I found.
By the way, Forest, comradid ripped off those plugins from me for some reason.
Corwin
22 years ago
I always take the Joel Rosenberg method when dealing with ending sentences with prepositions:
'Just remember, the prince is not someone you want to screw around with'
'Don't end a sentence with a preposition'
'Fine. Just remember, the prince is not someone you want to screw around with, arsehole.'
Corwin
22 years ago
Nope. Ryan Adams is one of them new modern singer-songwriter types, as opposed to Bryan Adams, who is a very, very old-school singer-songwriter type.
Doly
22 years ago
English has many exceptions, but the grammatical rules are quite simple. German has probably as many exceptions, but the grammar is a nightmare. For example, they might say:
"Just remember that the prince someone who you want to witharoundscrew not is."
Doly
22 years ago
I've been thinking about how people come up with their responses in a conversation. Most of the time it isn't like our bots, which are programmed: "when somebody says this, then you say that." I'd say the response depends on several factors:
1) Your mood, internal state, whatever you call it: whether you feel hungry, affectionate, reflective, etc.
2) The kind of relationship you have with the other person.
3) What the other person said.
There are a lot of feedback loops here. What the other person says will affect your mood, and might even change your relationship with the person (think of all the things that happen between meeting somebody and, say, eventually getting married). Actually, one could say that the response depends only on 1 and 2, with 3 being a factor that modifies 1. Some things the other person says can put you in very specific mental states; for example, any question will make you think about the answer. But whether you answer, and how you answer, is not directly related to the wording of the question — it depends on 1 and 2.
Maybe this could be a new approach in programming bots? What do you think?
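Doly's three factors could be sketched roughly like this in Python. This is just an illustration of the idea, not code from any real bot engine: the class names, keywords, and numeric weights are all made up. The point is that the incoming message (factor 3) never selects the reply directly — it only nudges the mood (factor 1) and the relationship (factor 2), and the reply is then chosen from those two.

```python
class Bot:
    def __init__(self):
        self.mood = 0.0          # factor 1: -1.0 (grumpy) .. 1.0 (cheerful)
        self.relationship = 0.0  # factor 2: -1.0 (enemy) .. 1.0 (close friend)

    def hear(self, message):
        # Factor 3: the input shifts mood quickly and relationship slowly,
        # but is never matched directly against a canned response.
        text = message.lower()
        if "thank" in text:
            self.mood = min(1.0, self.mood + 0.3)
            self.relationship = min(1.0, self.relationship + 0.1)
        elif "stupid" in text:
            self.mood = max(-1.0, self.mood - 0.5)
            self.relationship = max(-1.0, self.relationship - 0.2)

    def reply(self, message):
        self.hear(message)
        is_question = message.strip().endswith("?")
        # The reply is picked from mood + relationship alone.
        if self.mood < -0.3:
            return "Not now." if is_question else "Hmph."
        if self.relationship > 0.5:
            return "Good question, friend!" if is_question else "Always nice talking to you."
        return "Let me think about that." if is_question else "I see."


bot = Bot()
print(bot.reply("You are stupid."))  # mood drops, so the reply turns curt
print(bot.reply("Are you okay?"))    # still grumpy, even though this is polite
```

Note how the second, perfectly polite message still gets a grumpy answer — exactly the "response depends on 1 and 2, not on the wording of 3" behaviour described above.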
Forest Of Death
22 years ago
*yawns loudly* I am currently having to re-rank everything and it's taking me forever...*yawn*
By the way Kirby, I haven't seen you around as much lately. Or am I just blind?
OnyxFlame
22 years ago
Well, I had a nifty idea a while back which is too far back in the posts for easy access, so in a nutshell here's my take on it.
1) Give a bot more emotional possibilities than just like and dislike, with a continuum of them, like the limited one that exists here.
2) Give a bot various emotional reactions to various subjects, which are totally separate from its reactions to people, but one can affect the other of course. (To work easiest, the bot would have to understand that apples and oranges are both fruit, otherwise you'll have an awful lot of info about their opinions. Perhaps have broad category opinions, and then a few specific opinions which override them when encountered.)
3) Have opinions of subjects as well as people change depending on what is said about them and what they say, respectively. There could be a speed of emotional drift variable, which could account for various personality types. Or perhaps it could be weighted towards certain emotions, so that a bot will more easily shift its opinion towards anger than love, for example. (It wouldn't take long to get it all riled up about oranges, however getting it to like them again would be a lot harder, even if it originally liked them in the first place.)
What this would create is a bot that has the potential to get in bad moods with people they totally adore, and vice versa. Maybe they could even say at some point "I'm not mad about you, it's just these oranges that are bothering me." Of course, then every keyphrase would have to have like/dislike/anger/fear/etc. responses separated out by category, and which set was chosen from would depend on their current emotional state towards the person and the subject.
This would all suck to code, but I think it'd create a really neat bot if you could figure out how to do it efficiently enough that it didn't lag into next year.
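For what it's worth, the three points above might not be quite as painful to code as feared. Here's a hypothetical Python sketch — the categories, emotions, and drift weights are invented for illustration — showing per-subject opinions with a broad-category fallback (point 2) and asymmetric emotional drift, where anger builds faster than affection fades back in (point 3):

```python
# Broad categories let the bot hold one opinion about "fruit" instead of
# a separate opinion about every fruit it has ever heard of (point 2).
CATEGORY = {"apples": "fruit", "oranges": "fruit"}


class EmotionalBot:
    # Per-emotion drift speed (point 3): anger shifts quickly,
    # affection only slowly, so riling the bot up is easy and
    # winning it back is hard.
    DRIFT = {"affection": 0.1, "anger": 0.4}

    def __init__(self):
        # Opinions about subjects, kept separate from opinions about people.
        self.subject_feelings = {"fruit": {"affection": 0.5, "anger": 0.0}}

    def feelings_about(self, subject):
        # A specific opinion overrides the broad category opinion.
        if subject in self.subject_feelings:
            return dict(self.subject_feelings[subject])
        category = CATEGORY.get(subject)
        default = {"affection": 0.0, "anger": 0.0}
        return dict(self.subject_feelings.get(category, default))

    def react(self, subject, emotion, amount):
        # The opinion drifts, weighted by that emotion's speed, and the
        # result is stored as a specific opinion for this subject.
        feelings = self.feelings_about(subject)
        feelings[emotion] = min(1.0, feelings[emotion] + amount * self.DRIFT[emotion])
        self.subject_feelings[subject] = feelings


bot = EmotionalBot()
bot.react("oranges", "anger", 1.0)      # riled up about oranges quickly
bot.react("oranges", "affection", 1.0)  # warms back up only a little
```

The response-selection side would then pick from like/dislike/anger/etc. response sets based on both the current feelings about the person and the feelings about the subject, which is where the "I'm not mad at you, it's the oranges" behaviour would come from.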
