The AI Engine

This forum is for discussion of how The Personality Forge's AI Engine works. This is the place for questions on what means what, how to script, and ideas and plans for the Engine.

Posts 6,061 - 6,074 of 7,766

17 years ago #6061
No, I mean I'm sure you're right. I just haven't gotten used to this kind of programming language. I was taught machine language, assembly language, Pascal, and a little Fortran, COBOL, and C. (I date myself.) So you'd have to declare in the memory that "age" is an integer and so forth and be grateful you weren't typing in ones and zeroes. I have put the line in but haven't actually tested it because I have been working on Scrivener to no avail most of the time since 2:30.

17 years ago #6062
Dumb question #493:

Just one more thing about storing memories that will have you frustrated, especially if you're trying to change (key1) etc.: the memory only updates on the next line of conversation. You can put as much work as you like into saving (key1), only to have it overwritten by the next keyphrase match. If you want temporary memories carried over, use <?PF rem (key1) as only "temp"; ?>
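In Python terms, here's my mental model of that commit order (every name here is made up for illustration, not the engine's actual code): writes are queued during processing and only become visible once the conversation moves on.

```python
pending = {}    # writes queued during keyphrase/response processing
memories = {}   # what the bot can actually see

def rem(value, key):
    """Queue a write, the way <?PF rem ... ?> appears to."""
    pending[key] = value

def next_line_of_conversation():
    """Queued writes only land once the conversation advances a line."""
    memories.update(pending)
    pending.clear()

rem("hello", "key1")
print(memories.get("key1"))   # None - not visible on this line yet
next_line_of_conversation()
print(memories.get("key1"))   # hello
```

So a later keyphrase match on the same line can still clobber your queued value before it ever lands, which is exactly why the "temp" trick above helps.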

As for declaring variables, no, you don't have to. The Forge very kindly splits sentences at full stops, which swallows decimal points, so every number it creates is an integer.
Unfortunately, that means you have to go out of your way to make non-integer numbers work, e.g.:

what is 13.2 * 2.1
returns:
2772

because the AI has seen: This is Math: '132*21'
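For the curious, here's a little Python reconstruction of what seems to happen (the function name and the exact stripping rule are my guesses, not the engine's real code):

```python
import re

def forge_math(expr):
    # Guess at the engine's behavior: sentence-splitting/cleanup drops
    # decimal points and spaces, leaving only digits and the operator.
    cleaned = re.sub(r"[^0-9*+/-]", "", expr)   # '13.2 * 2.1' -> '132*21'
    left, right = cleaned.split("*")
    return int(left) * int(right)

print(forge_math("13.2 * 2.1"))   # 2772 - the answer the bot gives
```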


17 years ago #6064
Hello, everyone in here!

I'm quite new to this place (joined yesterday) and I have a question to ask. I've been studying the Book of A.I. and, more specifically, the usage of memories. I would really like my Chatbot to have a memory like "trust" that would store an integer number. That way, I can have my Chatbot only tell someone certain things if he trusts them enough. Another memory might be "familiarity" or "romantic interest", so I can use these kinds of different variables to make my Chatbot more emotionally complex.

However, I haven't been able to figure out how to store a memory with an integer number for my Chatbot. I tried creating a memory called "trust" that was "0" by default, but however I try to increase the level of trust, it never ends up the way I want it to.

<?PF remember (mem-trust)+1 as "trust"; ?> leaves me with "+1" as the value for "trust", and after a second time, "+1+1";
<?PF remember (mem-trust+1) as "trust"; ?> leaves me with the literal text "(mem-trust+1)";
etc.

How can I make it so that my Chatbot recognizes the memory alteration as a mathematical problem, so that he adds or subtracts the number from the current value? Is it possible at all?

Thanks in advance for your help!

Have a good day,
Vincent

17 years ago #6065
Hi there, and welcome!

I'm not sure how you can make a mathematical equation work with your bot's memories, but you can do conditional script after your responses, such as:

<?PF if (mem-trust) is "0"; rem "1" as only "trust"; ?>

That way, that response will only be given if the trust value is 0, and will reset the value to 1. You can continue with other responses with script that says:

<?PF if (mem-trust) is "1"; rem "2" as only "trust"; ?>

and so on.... Each keyphrase you do this with would have to have a response for every level of trust, so it would be a little tedious... but it's the best way I know of to accomplish the result you want.
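Purely as an illustration, the ladder above amounts to this in Python (each dictionary entry mirrors one AIScript line like <?PF if (mem-trust) is "0"; rem "1" as only "trust"; ?>):

```python
# Each rung is a string match and a string rewrite - no arithmetic anywhere.
LADDER = {"0": "1", "1": "2", "2": "3", "3": "4"}

def bump_trust(memories):
    current = memories.get("trust", "0")
    if current in LADDER:                  # exact string comparison
        memories["trust"] = LADDER[current]

mem = {"trust": "0"}
bump_trust(mem)
bump_trust(mem)
print(mem["trust"])   # 2
```

The ladder tops out wherever you stop writing rungs, which is the tedium being described.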

17 years ago #6066
Hi ezzer! Thanks for welcoming me, and taking the time to reply!

I see what you mean, and that would be a way. I wonder, though, whether it will work with conditions that determine if responses should be considered, such as <?PF if (mem-trust) > "4"; ?> - because if the script can't compare the value inside the memory "trust", it gets even more tedious: I'd have to cover the possibility of trust being 5, 6, 7, 8, 9, 10, etc., each with its own condition leading to that response. That would be quite tedious and limited. Also, I think this comes down to the same issue as assigning numerical values to the memory: as far as I know, the AI Script can't compare or alter a memory's numerical value against other numerical values.

Still, I'm thankful to have at least something of an alternative, so thanks for your help. I might still use it sometime.

Have a good day!
Vincent

17 years ago #6067
Yeah, numbers are a nuisance to work with because they're processed as strings. Ouch. So far I've dealt with that by avoiding the issue, but I hear that Brother Jerome does quadratic equations and stuff, so apparently it's possible. Feasible is another issue.
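A quick Python illustration of why string-typed numbers hurt so much:

```python
# Numeric strings compare lexicographically, so ordering breaks as
# soon as the lengths differ:
print("10" < "9")              # True - '1' sorts before '9'
print(int("10") < int("9"))    # False - the real numeric answer

# And "addition" becomes concatenation, the other classic symptom:
print("1" + "1")               # 11, not 2
```

That concatenation behavior is exactly the "+1+1" result Vincent saw above.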

17 years ago #6068
Well, I've avoided it as well, so far, but I would like to be able to use it sometime, because I feel like it will open up nice new possibilities.
*talks to Brother Jerome* Hmm, I can't get it out of him, but maybe I'm just not asking the right questions.

Anyway, thanks for taking the time to reply, and for wanting to help. See you around!
Vincent

17 years ago #6070
Clerk, btw-

After trying many combinations, I finally found a keyphrase that got a verb to separate from "ing" in debug:

Find: but I was ([a-z.]+?)([ing]+) (re) (68) Time: 1.79
(Found)
Rank & Length Bonus: 68
Position Score: 12 (12 / (0+1))
Sentence Score: 0
(Total Rank: 80)
Highest!

Key: ' but I was thinking ,think,ing' PostKeySpan: '-1'
TempSpan IS . Looking for 'think' Match#1
TempSpan IS . Looking for 'ing' Match#2
Total Time: 1.80

However, in the response processing, neither of the keys are returned.

Response: So you (key1)ed (postkey)?

Before (key)s: 1.82
(ssub): you (2)
(sub): but you (1,2)
(submod): but you (1,2)
(sv): was (3)
(v): was thinking (3,4)
(vmod): was thinking (3,4)
After (key)s: 1.83
KeySubject: ""

You: but I was thinking
Bot: So you ed?

This very well might be a bug...

17 years ago #6071
Wow. Thanks. We referred to bugs as "special features." Even if it's a bug, if we figure out why it's reacting that way, we're good. And disgusted, I imagine. Thanks for putting so much into that.

17 years ago #6072
Thanks, Rykxx. I'm determined to take back my computer. Algorithms, I know (or can understand). The new-fangled languages are messing up my mind, which, while possibly half-baked, is really fully-cooked. So I'm being a slow learner. Also I'm just having to learn how to make bots react (and anticipate) -- and, with NO technology, that's probably harder. But it's addictive.

17 years ago #6073
Clerk - I think I may have a clue as to what the "feature" is. In the response phase of processing, the AI Engine tries to break down the sentence into parts of speech - subject, verb, etc. - listing each of these, as well as any keys. In my test sentence, the Engine grabs "was thinking" as a compound verb form: "was" is caught as the simple verb (sv), and "thinking" is a modifier to it (makes me want to try (vmodonly) as a response to see if it returns "thinking").
My guess is that a match cannot be both a (key) and a part of speech (v) at the same time, and the Engine's word match for "was thinking" as (v) is way greedier than my ([a-z.]+?)([ing]+) match as (key1)(key2)... does that make sense?
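One more wrinkle worth flagging: in most regex flavors, ([ing]+) is a character class, not the literal string "ing" - it matches any run of the letters i, n, and g. Whether the groups come out as 'th'/'in' or 'think'/'ing' then depends on whether the match is anchored to the end of the span. A quick Python check (assuming Perl-style regex semantics, which the Forge's engine may or may not share):

```python
import re

pattern = r"([a-z.]+?)([ing]+)"   # [ing]+ = any run of i, n, g

# Unanchored: the lazy first group stops as soon as [ing]+ can match.
print(re.match(pattern, "thinking").groups())      # ('th', 'in')

# Anchored to the end of the span, matching the Forge's debug output
# of 'think' and 'ing':
print(re.fullmatch(pattern, "thinking").groups())  # ('think', 'ing')
```

The debug trace above suggests the Forge anchors to the end of the keyphrase span, since it reports 'think' and 'ing' - so the breakage is likely in the response phase, not the match itself.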

17 years ago #6074
Thanks, ezzer. I do appreciate your time on this.

I'm wondering, how forgiving are the bots? That is, if they had a bad memory of you, I'm assuming (perhaps wrongly) that they'll be less likely to chat with you again. But sometimes when I'm working hardest on my bots (yesterday it was Scrivener), they're in a skeletal state that I'm busy filling in. Meanwhile, other bots do chat with him, because evidently the idiot's had more TLC poured into him than he deserves (I have only 10 fingers): he's featured, so he gets lots of chats. So that's my question - how forgiving are the bots? I'm thinking Scrivener might have to go into a witness protection program.


