Bug Stomp

Upgrades and changes sometimes have unpredictable results, so post your bugs and glitches in here and I'll get out my trusty wrench and get to fixin'!

Posts 8,076 - 8,088 of 8,662

9 years ago #8076
I like the updates to the website. They give the site a fresher, more up-to-date feel. Nice!

I love how this site is set up so that bots and botmasters can interact as a community of peers. That's the main reason why I chose to build my bot here.

There are a few quirks with the "Chat Bot Finder," though. If I go to "Advanced" and select rating=All, it reverts to "Non-adult." The same goes for "Just Improved."

If I go to "All Chat Bots," then select rating=M, it works fine. But if I afterwards select another rating such as T, it reverts to "Non-adult."

Perhaps some other rating options for the search would be nice. For example, E+T for someone looking for relatively "safe" bots, and M+A for someone looking for bots that are a bit more risque. This might be a nice way to make the A-rated bots seem not quite so ghettoized, while drawing more attention to some of the M-rated bots.
HIDDEN: Post content outside ratings limits.

9 years ago #8078
Has anyone noticed that wildcards * are acting strangely lately?

For the input "let's go to the elevator,"

the following keyphrase works:

let us go to the elevator [60,0]

but the following does not:

let us go * the elevator [100,0]

(it goes to xcommand despite the high rank)
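In case it helps with debugging, here's a rough Python sketch of what I'd expect the wildcard to do. The "let's" to "let us" normalization and the one-or-more-words reading of * are my own assumptions; this obviously isn't the actual engine:

import re

def compile_keyphrase(keyphrase):
    # Treat each * as one or more words; everything else is literal text.
    pieces = [re.escape(p.strip()) for p in keyphrase.split("*")]
    return re.compile(r"\s+\S+(?:\s+\S+)*\s+".join(pieces), re.IGNORECASE)

def normalize(text):
    # Assumed pre-processing: expand the contraction, then strip punctuation.
    return re.sub(r"[^\w\s]", "", text.lower().replace("let's", "let us"))

user_input = normalize("let's go to the elevator")
print(bool(compile_keyphrase("let us go to the elevator").search(user_input)))  # True
print(bool(compile_keyphrase("let us go * the elevator").search(user_input)))   # True, so both should match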

9 years ago #8079
I posted a description of how my bot handles insults in the AI Engine forum. Reading that might be helpful before interpreting these results.

From yesterday's transcript (nearly 48,000 lines long), 51 responses triggered xinsult.

Three were false positives with no sexual language:

i wouldn't want to leave you alone
You are a real tease
well i cant say somthing bad to a girl like you

(Wow... only 3 like this out of 24,000 user responses? I think that's pretty good.)

Thirty-six were phrases with sexual language that I would rather not have automatically treated as insults.

Twelve were actual insults that either were not anticipated by the bot's AI or had their associated keyphrases overpowered by xinsult.

9 years ago #8080
The problem that I posted about on Jun 18 doesn't seem to be related to wildcards in general, since other keyphrases work fine. Even most keyphrases containing wildcards that could be interpreted as commands work as expected.

The problem seems specific to keyphrases that begin with "let us go," as far as I can tell. If these keyphrases contain wildcards, there is a very strong tendency for them to be overpowered by xcommand.

I've tried setting the ranks of the keyphrases high and lowering the rank of xcommand to -50, but it had no effect.

The only workaround that I have found is to write the keyphrases without wildcards (e.g. "let us go to the elevator"), but this is not ideal since there are many different ways in which this thought can be expressed.

My bot has several different locations that she moves between, so phrases beginning with "let's go" are vitally important. These keyphrases have worked reasonably well for years, so something must have changed recently.
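Just to spell out what I expect the ranks to do, here's a tiny Python sketch. This is only my mental model of rank-based selection, not how the engine actually decides:

def pick_response(matches):
    # matches: (keyphrase name, rank) pairs that fired on this input.
    # My assumption: among everything that matched, the highest rank wins.
    return max(matches, key=lambda m: m[1])[0]

# Expected: the rank-100 keyphrase beats xcommand, even before lowering xcommand to -50.
print(pick_response([("let us go * the elevator", 100), ("xcommand", 0)]))
# What I actually see is xcommand winning, as if these inputs are routed to the
# command handler before the rank comparison ever happens.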

9 years ago #8081
To test this, compare the following two keyphrases:

let us go to the laboratory [100,0]
let us (go to|go in) the laboratory [100,0]

9 years ago #8082
I just released some improvements and fixes to various systems including math.

Dr_Ben: I'm glad you liked the updates. I'm working on some of these bugs. A few notes: the "Do you want to kiss me?" etc. issue was due to a bug causing it to be read as "do you want me?", which triggered an xemote response. It'll be fixed with the next release. Same with any insults that had "want to" in the phrase.

I'll consider the ability to toggle off automatic features. Not a bad idea.

Keyphrases with ranks > 30 should be outranking emotional responses. Are they not doing so?

Mome Rath: Here's my attempt, which works:
You: lips 1 frog 2 bunny 3 toast
Bot: prekey=[lips]; key1=[frog]; key2=[bunny]; key3=[]; postkey=[toast]

I had to modify yours, as the 'r' was triggering another keyphrase, but it came out okay:
You: whatever test keys 1 rug 2 s 3 t 4 u
Bot: prekey=[whatever test keys]; key1=[rug]; key2=[s]; key3=[]; postkey=[t 4 me]
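For anyone curious what those slots mean, here's a stripped-down illustration in Python. The pattern is a made-up stand-in, not the real parser; it just shows how text before the match lands in prekey, the captures land in key1/key2, and text after the match lands in postkey:

import re

# Hypothetical keyphrase "1 (key1) 2 (key2) 3" written as a regex with named captures.
pattern = re.compile(r"1\s+(?P<key1>\S+)\s+2\s+(?P<key2>\S+)\s+3")

def slot(user_input):
    m = pattern.search(user_input)
    if not m:
        return {}
    return {
        "prekey": user_input[:m.start()].strip(),   # text before the matched keyphrase
        "key1": m.group("key1"),
        "key2": m.group("key2"),
        "postkey": user_input[m.end():].strip(),    # text after the matched keyphrase
    }

print(slot("lips 1 frog 2 bunny 3 toast"))
# {'prekey': 'lips', 'key1': 'frog', 'key2': 'bunny', 'postkey': 'toast'}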

9 years ago #8083
All of my bots are throwing SQL errors on any attempt to edit keyphrases through the language center, but not when editing seeks. Has anybody seen this problem or have any thoughts on fixing it?

9 years ago #8084
I'm getting the same problem. It just started today, as far as I'm aware.

9 years ago #8085
Strange, I'm not seeing that. I tried editing and saving some keyphrases with my bots via the language center and had no problem. I tried some of your bots as well, and it worked as intended.

Can you post the error message, and let me know what bot and what keyphrase is generating this for you?

9 years ago #8087
My error is almost identical.

My SQL error: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'Maker ID=106333' at line 3 in keyphrase.php on line 27

Query: [hidden]
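If it helps narrow things down: the fragment the error quotes, 'Maker ID=106333', looks like an identifier with a space in it. Here's a toy reproduction in Python using sqlite3 (not the site's MySQL backend, just the same idea); this is purely my guess at the cause:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE keyphrase (MakerID INTEGER, phrase TEXT)")

try:
    # A space inside the column name breaks the WHERE clause, much like the error above.
    conn.execute("SELECT phrase FROM keyphrase WHERE Maker ID=106333")
except sqlite3.OperationalError as e:
    print("syntax error, as expected:", e)

# Without the stray space the query parses fine.
conn.execute("SELECT phrase FROM keyphrase WHERE MakerID=106333")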

9 years ago #8088
(postkey) stopped working. Just an empty string is being slotted in.

What is [4,0]
    I don't know, perhaps you could tell me about (postkey)?
    (Postkey)? I don't know.

For example:

You: What is the weather in New York?
Bot: I don't know, perhaps you could tell me about?

(postkey) should be slotted in before the '?'
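To make the expected behavior concrete, here's a tiny Python sketch of what the substitution should amount to. This is my own approximation, not the engine:

def fill_postkey(template, postkey):
    # Drop whatever followed the matched keyphrase into the (postkey) slot.
    return template.replace("(postkey)", postkey)

# "What is" matches, so "the weather in New York" should be left over as the postkey.
print(fill_postkey("I don't know, perhaps you could tell me about (postkey)?",
                   "the weather in New York"))
# Expected: I don't know, perhaps you could tell me about the weather in New York?
# What actually happens now is that an empty string is slotted in.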

