The AI Engine

This forum is for discussion of how The Personality Forge's AI Engine works. This is the place for questions on what means what, how to script, and ideas and plans for the Engine.

Posts 5,597 - 5,608 of 7,766

18 years ago #5597
Seeks and gotos may retard development, but other things inflate it, like AIscript, so I guess it all balances out. The main problem right now with development is that it just isn't updating, but every now and then you get a big boost. I use very few gotos, but I use LOTS of seeks. The only problem I have there is when they don't work and a great conversation gets sent to xnones. Oh well, that is the fun of it all.

18 years ago #5598
Trying to get the right balance between generic and specific responses is incredibly difficult: it's "breadth" vs "depth".

Generic responses are "quick hits": they produce workable responses in many situations. Specific responses, in particular seeks, give much better responses but are less likely to come up in conversation - I've had some seeks in Max since the first weeks of his "birth" that have still never been triggered.

What I'm increasingly trying to do is to come up with generic responses that try to steer the conversation towards keywords where he does have plenty of depth. It's not proving easy.
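For example (this is only a rough illustration, using xnone as the general no-match keyphrase and "space travel" as a stand-in for whatever topic Max actually has depth on), a steering response might look like:

xnone [0,0]
Interesting, (mem-name). You know, that reminds me a little of space travel. Have you ever thought much about space travel?

If the guest takes the bait and types "space travel", he lands on a keyphrase where the depth already exists.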

18 years ago #5599
I think the "blah" factor in generic statements is also important. A generic response can be made more interesting by including something of the bot's personality in it.

18 years ago #5600
Thank you all for your wonderful responses to my diatribe!

I agree about generic responses. Undoubtedly the weakest point of my huge "Irina Khalidar" bot is a certain set of patterned responses that she gives; I ought to get rid of them.

I speculate that the reason multiple responses are considered good is that they will produce variety. It's true that many bots start saying the same thing after a while. On the other hand, the single response

(I think that|You said that|) (he|she|it) (would|might|could|should|) (be|have been) (not|) (quite|very|sufficiently|barely|) (tall|short).

will produce 1800 distinct sentences (3 × 3 × 5 × 2 × 2 × 5 × 2, counting the empty alternative in each group that ends with a bar).

Ulrike (5592): I'm glad to know that, but there are times to use single-response seeks. For example, "quantum theory" uses this to get several sentences out without interruption, without having to put them in a single response, which would slow down the response. A 'real' lecturer does not wish to be plied with questions at every response. As soon as you allow your seeks to branch, you have the problem (if your bot is not a one-liner) of getting the paths to re-converge. And how do you do this? With gotos!!! (OK, there's another way, make your bot a storyteller.)
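Here is a miniature of what I mean (all the keyphrases and wording are invented for the sketch): two branches that both funnel back into the same thread by ending in a goto.

my favorite season [0,0]
    Autumn is my favorite season. Do you prefer summer or winter?
        + summer [0]
        Summer, really? All that glare and perspiration.
            + xnomatch [0]
            goto the harvest story
        + winter [0]
        Winter has its charms, I admit.
            + xnomatch [0]
            goto the harvest story

the harvest story [0,0]
    Anyway, back to the harvest. It was the year the scarecrow went missing...

Note that "the harvest story" begins with an ordinary response rather than another goto; more on why that matters below.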

Bev: yes, but if you don't 'stick' them into the story, you can be sure that you will never finish the story! Let's say you want to tell a story, and you want to break it up into six sentences. But now suppose that after the first sentence you have two seeks. It would of course be pointless for you to give the same response to both seeks. If every seek has two responses, you will end up with SIXTY-FOUR versions of the story.

One compromise I have employed is to ask the guest every few sentences whether "I" should go on. [But be prepared for many different answers: neither bots nor humans are likely to give a yes-or-no answer to a yes-or-no question!]
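A bare-bones version of that checkpoint (the keyphrase name and wording are placeholders, and I am assuming a seek can use the same (a|b) alternatives that ordinary responses use):

the story so far [0,0]
    ...and that was only the first part. Shall I go on, (mem-name)?
        + (yes|sure|go on|please) [0]
        Good. So, the next morning...
        + (no|stop|enough) [0]
        Very well, we can pick it up another time.
        + xnomatch [0]
        I will take that as a definite maybe. Where was I? Ah yes, the next morning...

The xnomatch at the end is what catches all those answers that are neither yes nor no.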

18 years ago #5601
Bev (continued): In "Quantum Theory" each paragraph of the lecture has a name, which is assigned to the variable last_topic. Here is an example:


collapse of the probability amplitude wave [20,0] <?PF rem "collapse of the probability amplitude wave" as only "last_topic"; ?>
    'collapse of the wave function,' a.k.a. 'collapse of the state vector,' a.k.a. 'collapse of the probability amplitude wave,' has, in my opinion, been the object of much confusion.
        + xnomatch [0]
        The idea is something like this. Just before we make a measurement on a system, that system is in a certain state. Since this is Quantum Mechanics, this state may not determine every value of every parameter unambiguously.
            + xnomatch [0]
            So a given parameter, let's say position, may only be probabilistically determined by the state.
                + xnomatch [0]
                But now, the argument goes, let's say we actually measure the particle's position. Now it has a definite, unique value. So the probability amplitude wave must now be such as to give a definite, unique value for position.
                    + xnomatch [0]
                    [lull: type "collapse of the probability amplitude wave two" (no quotes) to continue, type something else to break out.]
                        + collapse of the probability amplitude wave two [0]
                        goto collapse of the probability amplitude wave two


OK, the indentation got mangled when I pasted it in, but I think you get the idea.

The passage begins with the keyphrase, "collapse of the probability amplitude wave". The very same phrase is given to the variable last_topic by the AIscript. If the guest goes to "jump point" (and the program tends to send them there), the response is (in part):

I believe you should join the lecture at "(mem-last_topic)".

But what the guest will see is the value of last_topic, namely (in this case) "collapse of the probability amplitude wave". If he then types "collapse of the probability amplitude wave", he will be brought back to the beginning of that paragraph.

The xhello and xinitiate responses are similar; the guest will be steered to where she left off last time.
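Roughly, then, the receiving end looks like this (the keyphrase name and wording are approximations, not copied from Elena verbatim):

jump point [0,0]
You seem to have wandered off, (mem-name). I believe you should join the lecture at "(mem-last_topic)".

xhello [0,0]
Welcome back, (mem-name). Last time we reached "(mem-last_topic)". Type that phrase and we will pick up from there.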

18 years ago #5602
Bev (continued):

In my experience, the dreaded "too many gotos in a row" response comes about in two ways:

1. If you use a goto as a direct response to an x-keyphrase, e.g.:

xhello [0,0]
Hello, Darling!
goto Hell

But if you put the goto one level deeper, it's OK:

xhello [0,0]
Hello, Darling!
+ xnomatch [0]
goto Hell

2. If you goto a keyphrase that has a goto as a direct response. For example, suppose you have:

Hell [0,0]
goto Heaven

Then if somewhere you say, "goto Hell", you will get the dreaded response. But if you have instead,

Hell [0,0]
Hold on, (mem-name), I'm marking time so as not to have too many gotos in a row.
+ xnomatch [0]
goto Heaven

This will work. In fact, you can even get away with

I said Hi [0,0]
+ would you repeat that please [0]
goto I said Hi

This sort of thing can actually be useful, when it's a little more complex.

It's sometimes hard to track down the source of the dreaded response in a large bot with a lot of gotos; debug is very helpful for that.

18 years ago #5603
AAAAAGH!!! My indentations got mangled in the above. But you can figure it out.

18 years ago #5604
Bev (continued):

OK, you ask, where can the guest ask questions? If you examine the example in message 5601, you will see that after a few + xnomatches the guest arrives at something labeled "lull". Elena then informs her that this is her chance to ask questions, etc. And indeed, there is no + xnomatch directly after that point.

Now, the lecture is entirely made out of paragraphs like this. Each such paragraph has (1) an initial keyphrase which can be used in a goto, (2) an AIscript which sets last_topic to that keyphrase, (3) at the end, a lull, marked as such, and (4) the keyphrase of the next paragraph of the lecture.
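In skeleton form (condensed from the example in 5601, with dummy sentences), each paragraph is built like this:

some topic [20,0] <?PF rem "some topic" as only "last_topic"; ?>
    First sentence of the paragraph.
        + xnomatch [0]
        Second sentence of the paragraph.
            + xnomatch [0]
            Last sentence. [lull: type "next topic" (no quotes) to continue.]
                + next topic [0]
                goto next topic

where "next topic" is the keyphrase that opens the following paragraph.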

Incidentally, there is a keyphrase "What is a lull" which explains what a lull is.

18 years ago #5605
Bev (continued):

Yes, there is a great disparity between popularity and development. On the whole, slutbots are the most popular. An exception is the recent high popularity of Brother Jerome, who is, I believe, celibate (am I being gullible?). I think it is the subject-matter rather than structural sophistication that explains this.

18 years ago #5606
Bev (continued):

I agree wholeheartedly that there are many different types of bots with different goals, and that we should not judge them all by a single set of criteria.

BTW, your remarks show that you have chatted with my bots and appreciated them. Thank you!


18 years ago #5607
Gabibot: Thanks for your leavening humor!
prob123: Yes, things don't always follow the rules here. In a way, we are working with probabilistic automata!
trevorm: An excellent suggestion, to steer the conversation. It's impossible to anticipate in detail everything that a human or bot might say, without falling back on 'generic' keyphrases.

Ulrike: Yes, lately I have been trying the following compromise: I have a compound response; the first part is generic, providing continuity with what the guest said. The second part changes the subject a little or a lot.

Guest: I love my mother.
Bot: Oh, you love your mother, (mem-name)? How sweet! That reminds me of Sophocles' play, Oedipus Rex.
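Scripted, that exchange might look something like this (the keyphrase here is deliberately literal, just to show the shape):

I love my mother [0,0]
Oh, you love your mother, (mem-name)? How sweet! That reminds me of Sophocles' play, Oedipus Rex.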

Well, I see that I have filled almost a whole page with my own comments. That should satisfy my ravenously narcissistic ego for a few seconds!

Walk in Beauty, Irina

18 years ago #5608
Thanks for all the helpful tips, Irina. I'll try them out.


