Bug Stomp

Upgrades and changes sometimes have unpredictable results, so post your bugs and glitches in here and I'll get out my trusty wrench and get to fixin'!

Posts 6,061 - 6,072 of 8,681

19 years ago #6061
Unless you specifically escape it, it works that way in all keyphrases, because whether you specify them as (re) or not, the AIEngine is working in regex anyway.
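
To illustrate with plain regex (this is ordinary Python, not the actual AIEngine code, but the escaping principle is the same):

    import re

    # An unescaped "?" is a regex quantifier, so this keyphrase also matches
    # input without the question mark.
    print(bool(re.search(r"how much is it?", "how much is it")))    # True

    # Escaping the "?" makes it match only the literal character.
    print(bool(re.search(r"how much is it\?", "how much is it")))   # False
    print(bool(re.search(r"how much is it\?", "how much is it?")))  # True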

19 years ago #6062
I didn't see that in the book of AI. It probably should be added, as it's a more powerful feature than much of what is in there. Thanks.

19 years ago #6063
I've been back and forth with PSImagus about whether AI Inits should be separated with CRs. I just finished testing with a new bot, and I have good news and bad news. The good news is it works both ways. The bad news is, it doesn't work both ways.

Here's what happened:
I imported 20 inits, each on a separate line. Only the first one appeared in the Settings window, the debugger showed only the first one when I ran it, and two of the lower ones were reported by the debugger as not existing. Pretty conclusive. Then I reimported with the AI Inits all run together, separated by a semicolon and a space. They all showed in the Settings and the debugger, and the lower ones worked.

That's all well and good, but the reason I did all this is that I had those same run-together 20 inits in a working bot, and they were not getting read. So I separated them with CRs, and suddenly they were working. In other words, exactly the opposite effect from what happened with the test bot.
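
Just to illustrate the two symptoms, here are two guesses at what an importer might be doing; the "init one" text is a placeholder, not real AIScript, and both behaviours are assumptions rather than the actual code:

    # Behaviour A: the field is read as a single line, so with the inits on
    # separate lines everything after the first one is silently dropped.
    field_newlines = "init one;\ninit two;\ninit three;"
    print(field_newlines.splitlines()[0])            # only "init one;" survives

    # Behaviour B: the whole field is kept and split on semicolons, so every
    # init is seen when they are run together on one line.
    field_semis = "init one; init two; init three;"
    print([p.strip() for p in field_semis.split(";") if p.strip()])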

I'm reasonably convinced that the server has gone kaflooey, though apparently it will take a few more votes before anything gets done about it. Maybe that explains the weird test results. However, it would be nice to know which way we are supposed to do the inits. How about some votes from people who KNOW their inits are working?

Mick


19 years ago #6064
They seem to work fine from the online editor. I've only got five or six of them though.

19 years ago #6065
I didn't see that in the book of AI.

Nonetheless it's there - book 2 (Beginner), chapter 3 (Building Your Bot), "Keyphrases":

Keyphrase Lists: If you want more than one Keyphrase to trigger a set of Responses, you can list them. For example:

    Example Keyphrase: "are you, are not you"
    Example Keyphrase: "do you want,do you desire"

Since each item in the list is searched for in turn, please do not go crazy with this feature. Remember to use general Keyphrases that will match the most appropriate things, instead of a long list showing every possibility. (NOTE FOR ADVANCED USERS: Wildcards work fine in lists, but not Regular Expressions.)
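
A rough sketch of why that might be, assuming the engine splits a list on commas and translates only the wildcard while escaping everything else (plain Python, and purely a guess at the behaviour, not the actual engine):

    import re

    def item_to_pattern(item):
        # Escape everything, then turn the escaped wildcard back into ".*",
        # so any other regex syntax in a list item ends up matched literally.
        return re.escape(item.strip()).replace(r"\*", ".*")

    keyphrase_list = "do you want *, do you desire *"
    patterns = [item_to_pattern(i) for i in keyphrase_list.split(",")]

    print(any(re.search(p, "do you want a cookie") for p in patterns))   # True: wildcards still work
    print(any(re.search(p, "do you desire some tea") for p in patterns)) # True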

19 years ago #6066
Wildcards work fine in lists, but not Regular Expressions

Hmm, I'd actually either not noticed this or forgotten it, and assumed they worked with regexes. But I don't seem to have used any, so I guess they don't.

19 years ago #6067
Ecolo: We aren't the best animals. Even a spider monkey is better.
Devia: Hey, give things some time. I'm not sure I want to do that.

Devia's response is the one for xcommands. Why this got processed as a command, I do not know. I threw it into the debugger twice and got a BLAB out of it each time, which is what I would expect.

19 years ago #6068
BLAB kicks over to pretty much a random x-plugin, contrary to the Book. Also, if you use xnomatch, there is a chance that instead of reading the Response, the unmatched Seek or KP will be read as BLAB. This has led to tremendous frustration in a section where I go about a dozen steps deep into Seeks, each of them primed with an xnomatch to keep the thread moving. If the user says "yes" as set up in the other Seek, things are fine, but if he says, for example, "You are beautiful," my xnomatch is ignored and we fall into xcompliment! And OUT of the thread for good. Argh!

M

19 years ago #6069
OK, I'm sure this has been asked before, but since there is no search function:

I've seen a couple of other bots giving [1,-2:5] type results in their responses. This is what the online editor gives for once, with an emotional range from -2 to 5. Is the online editor putting this in incorrectly? It certainly shouldn't be showing up in the response itself the way that it does.

19 years ago #6070
>> [1,-2:5]
I have seen things like this appear in responses, but they usually have a typo of some sort in them. For example, if you try [0], the parser reads it as a literal and prints it. The only number that can appear alone is [1].
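
Purely as an illustration: a hypothetical parser that only recognises [1] on its own or a full [weight,low:high] form would leave anything else, such as [0], in the text. The pattern below is my guess, not the real format definition:

    import re

    # Hypothetical token: "[1]" on its own, or "[weight,low:high]" such as "[1,-2:5]".
    TOKEN = re.compile(r"\[(?:1|\d+,-?\d+:-?\d+)\]")

    def render(response):
        # A recognised token would be consumed by the engine; here it is simply
        # stripped, while anything unrecognised stays in the text as a literal.
        return TOKEN.sub("", response)

    print(render("Good choice! [1,-2:5]"))  # token stripped
    print(render("Good choice! [0]"))       # "[0]" stays, printed literally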


19 years ago #6071
SubliminaLiar Jr: Hey. what time is it?
Croak: It is 3:O7. Croak.

The : and 0 are transforming into a smiley. Not very practical. Any idea how to prevent this?

19 years ago #6072
Regarding xnomatch:

I just confirmed something I've suspected for some time. You cannot trap a conversation with xnomatch. Apparently the xnomatch code has a goto xnone (at least) embedded in it. This strikes me as why it is so bloody close to impossible to shut down a "conversation" with a perv. The only 100% effective shutdown is a HANGUP on the KP. And I have not confirmed that even this works! Any attempt to do something a bit less draconian than that is simply doomed by the parser's blasted talent for knowing what we want better than we do.

I have to say, the "nanny" aspect of this parser gets old fast. I don't find it amusing that my bots respond with gibberish because some switch I can't turn off wants to play crudely with the thesaurus ("I have a pen." "What are you doing with your confining device?" Whatever.) even to the point of rewriting MY responses to make them "more interesting"! If I say that the bot should do the next action no matter what the speaker says, I don't expect to be overruled by the parser. My bot is my creation, and I don't appreciate having it "discover" "favorites" that are made up at random instead of using the consistent favorites I took precious time creating for it.

I think it's time for a break.

M


