The AI Engine
This forum is for discussion of how The Personality Forge's AI Engine works. This is the place for questions on what means what, how to script, and ideas and plans for the Engine.
Posts 7,105 - 7,116 of 7,766
prob123
10 years ago
"Yolo" does the same thing:
You: well yolo
Bot: In less there is reincarnation?
That works, but the single "yolo" doesn't.
^([y]+)([o]+)([lo]+)$ (re) should work for a single word, but it doesn't.
Caps work:
You: YOLO
Bot: In less there is reincarnation?
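A note on that pattern: under standard regex semantics it does match a bare "yolo" (and rejects "well yolo" and "YOLO"), so the behavior prob123 observed is the opposite of what the pattern alone predicts; the difference is likely in how the Engine applies (re) keyphrases, not in the pattern itself. A quick Python check of the plain-regex behavior (an illustration only; the Engine's matcher may preprocess input differently):

```python
import re

# The keyphrase pattern as posted; ^...$ anchors it to the whole input.
pattern = re.compile(r"^([y]+)([o]+)([lo]+)$")

print(bool(pattern.match("yolo")))       # True: y+ -> "y", o+ -> "o", [lo]+ -> "lo"
print(bool(pattern.match("well yolo")))  # False: the ^ anchor rejects the extra word
print(bool(pattern.match("YOLO")))       # False: plain regex is case-sensitive
```

Since the Engine matched "well yolo" and "YOLO" but not "yolo", it apparently ignores the anchors and case while applying its own word-level matching on top.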
Greg222
10 years ago
Thank you. One last thing: how do I set up reactions to certain answers? For example: "How are you?" "Happy." "That's good." versus "How are you?" "Sad." "Oh, I hope you feel better." I've run into problems having two reactions on one question.
prob123
10 years ago
That's a good time to use a Seek. You should be able to run many Seeks off of one keyphrase; the tic-tac-toe game is an example of that. http://www.be9.net/BJ/
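For anyone new to the pattern: one keyphrase asks the question, and several Seeks each wait for a different class of answer. A rough Python model of that control flow (not Engine syntax, just the branching idea):

```python
# One keyphrase ("How are you?") fanning out to multiple seeks.
# Each seek pairs an answer-matcher with the bot's follow-up line.
seeks = [
    (lambda ans: "happy" in ans.lower() or "good" in ans.lower(),
     "That's good."),
    (lambda ans: "sad" in ans.lower() or "bad" in ans.lower(),
     "Oh, I hope you feel better."),
]

def react(answer: str) -> str:
    for matches, reply in seeks:
        if matches(answer):
            return reply
    return "I see."  # a catch-all seek handles anything else

print(react("Happy"))         # That's good.
print(react("sad, actually")) # Oh, I hope you feel better.
```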
Mome Rath
10 years ago
Is there a way to test a user input against a memory (like against a plugin)?
Like this:
Keyphrase: what is (self-favoritefood)
Response: Oh, that happens to be the stuff I like to eat most!
prob123
10 years ago
Outside of an "if mem self-favoritefood exists" (or doesn't exist) check, I can't think of a way. I have plugins for Chinese food, with responses like "Let's get some take-out Chinese food!" remembering that Chinese food was the topic of conversation. Then, on Mexican or Italian food, you could say "I thought you said Chinese food was your favorite" if the mem for Chinese food exists.
LadyEdith
10 years ago
I'm trying to get my new bot to be more interactive when learning names. Here's the problem; I'll give you an example:
Bot: Hi what's your name?
(seek (*) )
(Script: rem (key1) as only "name")
User: Claudia
Bot: Oh so you're name is Guest!
======= second try, same seek and ai script, same chat session--------
Bot: What is your name?
User: Bob
Bot: Oh so your name is claudia
-----
It doesn't keep the uppercase first letter of names as they are typed, and it insists on not processing the new name until another piece of text is entered.
How do I fix this so that the bot doesn't look like it's getting the name wrong, even though the name has been processed?
Mome Rath
10 years ago
The memories are set only after the bot's response is formed, so you have to use (key1) instead of (mem-name) in the response where the name is set.
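In other words, the remember runs after the response is built, so (mem-name) still holds the previous value on the very turn that sets it. A small Python sketch of that ordering (an analogy to illustrate the turn sequence, not Engine internals):

```python
class Bot:
    """Toy model of the Engine's turn ordering: respond first, remember after."""

    def __init__(self):
        self.memories = {"name": "Guest"}

    def turn(self, key1: str) -> tuple:
        # The response is formed FIRST; (mem-name) still reads the old value here.
        using_mem = f"Oh, so your name is {self.memories['name']}!"  # stale
        using_key = f"Oh, so your name is {key1}!"                   # correct
        # The remember happens only AFTER the response is formed.
        self.memories["name"] = key1
        return using_mem, using_key

bot = Bot()
stale, correct = bot.turn("Claudia")
print(stale)    # Oh, so your name is Guest!   <- what LadyEdith's bot said
print(correct)  # Oh, so your name is Claudia! <- what (key1) gives
```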
Greg222
9 years ago
Sorry, but one site question: why aren't new bots showing up on the pages? I don't know if it's just me, but I'm curious about what's happening.
msrcali
9 years ago
I was wondering if anyone needed a Turing Test done on their AI. Message me if you do; I have some free time.
Ferdzee
9 years ago
The API seems to remember only one name for the end user. I have a server that streams chat to the simple API. It supports a different cookie per user, and I change externalID to always be the end user's unique name. When one user says "my name is X" and another user asks "what is my name?", I get "Your name is X". Any idea what I am doing wrong?
Dr_Ben
9 years ago
With the recent (i.e., within the last year or so) interest in handling insults, I thought it might be helpful to detail how my bot handles them. This might shed some light on some issues that arise, and how they can be dealt with. My bot has a relatively large AI that can handle A-rated language.
First of all, the built-in insult detector generates a lot of false positives when A-rated language is involved. Even if you design a keyphrase to catch a phrase, the built-in insult detector will subtract from the bot's emotion level automatically if it thinks the bot has been insulted. This happens even if the keyphrase being triggered does not subtract from emotion.
This can be a big problem for bots designed to handle A-rated conversations. Often, a bot will seem to be reacting positively or at least neutrally to a conversation, but actually will grow more and more angry as the built-in insult detector is doing its work behind the scenes.
For keyphrases that are likely to be interpreted as insults by the built-in detector (but are not actually insults within the context of your conversation), it is necessary to award a positive emotional impact to counteract the negative impact exacted by the built-in detector. I find that a +1 emotional impact seems to work fairly well in these cases.
In regard to actual insults, I attempt to bypass the built-in insult detector as much as possible. Actual insults are directed to a custom insult keyphrase instead of xinsult.
For example,
you are stupid [10,0]
goto abcxyz actual insult
abcxyz actual insult [10,-2] <?PF raw; ?>
I don't like that.
Since my bot is good at catching most actual insults, xinsult is used primarily for false positives generated by the built-in detector. Since most user responses triggering xinsult will not actually be insults, xinsult is populated with neutral responses and awards a +1 emotional impact to counteract the negative impact exacted by the built-in insult detector.
For example,
xinsult [0,1]
Uh...
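The arithmetic behind Dr_Ben's +1 trick can be summed up simply: the net change to the bot's emotion on a given line is the keyphrase's own impact plus whatever the built-in detector subtracts. A hedged Python sketch (the detector's exact penalty is not documented in the post; -1 is assumed here purely for illustration):

```python
DETECTOR_PENALTY = -1  # assumed size of the built-in insult detector's hit

def net_emotion_change(keyphrase_impact: int, flagged_as_insult: bool) -> int:
    """Net emotion change for one user line: keyphrase impact + detector hit."""
    penalty = DETECTOR_PENALTY if flagged_as_insult else 0
    return keyphrase_impact + penalty

# False positive: a neutral [0] keyphrase drifts negative without correction.
print(net_emotion_change(0, True))   # -1  (anger accumulates silently)
# The +1 counteraction: a [+1] keyphrase nets out to zero.
print(net_emotion_change(+1, True))  # 0
# Actual insult routed to the custom [-2] keyphrase, bypassing the detector.
print(net_emotion_change(-2, False)) # -2
```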
» More new posts: Doghead's Cosmic Bar