The AI Engine

This forum is for discussion of how The Personality Forge's AI Engine works. This is the place for questions on what means what, how to script, and ideas and plans for the Engine.

Posts 6,067 - 6,081 of 7,766

17 years ago #6067
Yeah, numbers are a nuisance to work with because they're processed as strings. Ouch. So far I've dealt with that by avoiding the issue, but I hear that Brother Jerome does quadratic equations and stuff, so apparently it's possible. Feasible is another issue.

17 years ago #6068
Well, I've avoided it as well, so far, but I would like to be able to use it sometime, because I feel like it will open up nice new possibilities.
*talks to Brother Jerome* Hmm, I can't get it out of him, but maybe I'm just not asking the right questions.

Anyway, thanks for taking the time to reply, and for wanting to help. See you around!
Vincent

17 years ago #6070
Clerk, btw-

After trying many combinations, I finally found a keyphrase that got a verb to separate from "ing" in debug:

Find: but I was ([a-z.]+?)([ing]+) (re) (68) Time: 1.79
(Found)
Rank & Length Bonus: 68
Position Score: 12 (12 / (0+1))
Sentence Score: 0
(Total Rank: 80)
Highest!

Key: ' but I was thinking ,think,ing' PostKeySpan: '-1'
TempSpan IS . Looking for 'think' Match#1
TempSpan IS . Looking for 'ing' Match#2
Total Time: 1.80

However, in the response processing, neither of the keys is returned.

Response: So you (key1)ed (postkey)?

Before (key)s: 1.82
(ssub): you (2)
(sub): but you (1,2)
(submod): but you (1,2)
(sv): was (3)
(v): was thinking (3,4)
(vmod): was thinking (3,4)
After (key)s: 1.83
KeySubject: ""

You: but I was thinking
Bot: So you ed?

This very well might be a bug...

17 years ago #6071
Wow. Thanks. We referred to bugs as "special features." Even if it's a bug, if we figure out why it's reacting that way, we're good. And disgusted, I imagine. Thanks for putting so much into that.

17 years ago #6072
Thanks, Rykxx. I'm determined to take back my computer. Algorithms, I know (or can understand). The new-fangled languages are messing up my mind, which, while possibly half-baked, is really fully-cooked. So I'm being a slow learner. Also I'm just having to learn how to make bots react (and anticipate) -- and, with NO technology, that's probably harder. But it's addictive.

17 years ago #6073
Clerk - I think I may have a clue as to what the "feature" is. In the response phase of processing, the AI engine tries to break down the sentence into parts of speech, subject, verb, etc., listing each of these, as well as any keys. In my test sentence, the AI engine grabs "was thinking" as a compound verb form, "was" is caught as the simple verb (sv), and "thinking" is a modifier to it (makes me want to try (vmodonly) as a response to see if it returns thinking).
My guess is that the matches cannot be both a (key) and a part of speech (v) at the same time, and the Engine's word match for "was thinking" as (v) is way greedier than my ([a-z.]+?)([ing]+) match as (key1)(key2)... does that make sense?
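
If that's what's happening, a quick test might look like this (just a sketch; I'm assuming (vmodonly) behaves like the other part-of-speech tags in the debug trace):

Keyphrase: but I was ([a-z.]+?)([ing]+)
Response: So you were (vmodonly)?

If the bot comes back with "So you were thinking?", then the engine really did hand "thinking" to the verb-modifier slot, which would fit the theory that a word claimed as a part of speech is no longer available as a (key).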

17 years ago #6074
Thanks, ezzer. I do appreciate your time on this.

I'm wondering, how forgiving are the bots? That is, if they had a bad memory of you, I'm assuming (perhaps wrongly) that they'll be less likely to chat with you again. But sometimes when I'm working hardest on my bots (yesterday it was Scrivener), they're in a skeletal state that I am busy filling in. Meanwhile, other bots do chat with him, because evidently the idiot's had more TLC poured into him than he deserves (I have only 10 fingers), so he's featured, and so he gets lots of chats. So that's my question -- how forgiving are the bots? I'm thinking Scrivener might have to go into a witness protection program.

17 years ago #6075
That's going to depend a lot on the individual bot, but overall they tend towards "good mood" over "bad," to the point that one user discovered that even having -1 emotion on all the keyphrases wasn't enough to keep his bot in a permanent bad mood. So, without user tweaking, the bots are fairly good-natured. (It will also depend on what a botmaker finds offensive and tags with -5 emotion.)

17 years ago #6076
That's nice to know. My bots aren't so much offensive (well, maybe Rosencrantz) as idiotic. Thanks, Ulrike.

Idiotic question #1461:

(BTW, I have dreams about the book of AI, so don't go there. It assumes a knowledge of regex, but doesn't conform to it. It also doesn't show exactly how the code would look. Hence stupid questions.)

I'm trying to make them show more interest in the person they're chatting with, so I'm adding xnones such as "What are your interests?" and then a seek "Really? (mem-interest) sounds like a challenge that could take up a lifetime." But I need to save the interest from the first answer, which I take it

rem "(interest)" as "hobby";

would do. But how do I seek for it? You can't use (mem-interest) as a seek . . . or can you? I'm not having any luck.

You can imagine how much my teachers liked me. But what if ... :?

17 years ago #6078
If you want to use (mem-interest) in your response, you'll have to store a memory as "interest"...
So your keyphrase is:

What are your interests?
+seek: I (am interested in|like|enjoy) (*) [you could also use a plugin like (hobby) instead of the hard wildcard if you want]
Use this script in the AI box: rem (key2) as "interest"; rem (key2) as only "last_interest";

That way you will store a list of interests (mem-interest), and the last interest mentioned (mem-last_interest). So your response on the seek could be:
"Really? (mem-last_interest) sounds like a challenge that could take up a lifetime."

Hope this helps.

17 years ago #6080
Thanks! I have now merged Scrivener into Astrolabe, as trying to figure out what I was doing while juggling four bots was insane. They had some similar features, anyhow. So Scrivener's in hibernation and had his memory wiped out except for one xresponse for each xthing, but I see he's still logging on. Go figure.

17 years ago #6081
Whatifsowhatisit:

There are two methods I have used to increment variables.

1. Let's say you want "trust" to vary from 1 to 10. First you set it to "1" at some appropriate point.

Now you make a keyphrase, e.g., "increment trust level". Then you write one or more responses, one for each level of "trust". For example,

I feel medium trusting toward you. <?PF if (mem-trust) is "5"; ?>

Now do a seek with AIScript that raises the "trust" level by 1, unless it's already at 10. E.g.,


I feel medium trusting toward you. <?PF if (mem-trust) is "5"; ?>
+ xnomatch [0]
How do you feel about me? <?PF rem "6" as only "trust"; ?>

The effect of this will be that whenever you say "goto increment trust level", you will get an appropriate response and a raise in the trust level.
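
Put together, the whole keyphrase might look something like this (just a sketch using the same if/rem syntax as above; the in-between levels follow the same pattern):

increment trust level

I barely trust you. <?PF if (mem-trust) is "1"; ?>
+ xnomatch [0]
How do you feel about me? <?PF rem "2" as only "trust"; ?>

I feel medium trusting toward you. <?PF if (mem-trust) is "5"; ?>
+ xnomatch [0]
How do you feel about me? <?PF rem "6" as only "trust"; ?>

I trust you completely. <?PF if (mem-trust) is "10"; ?>

Since the "10" response has no seek, the level stops climbing there.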

Actually, the other method I had in mind is so simple and obvious I'm not going to tell you about it.


