Newcomers

This is a forum for newcomers to the Personality Forge. Many questions can be answered by reading the Book of AI and the FAQ under the "My Bots" link in the upper corner.

Posts 5,021 - 5,032 of 8,130


16 years ago #5021
Irina: How did you get from PITA to Sweden?

Practice? (bad joke--would "swim" be a better joke?)

Context. The original statement that time around was Psi's assertion that Swedish day was a PITA.

16 years ago #5022
Well, you could use things like (adjartnounprep) or, better yet, do it with a single regex!

Only for properly spelt English.

Hmm, I suppose you could use

^([abcdefghijklmnopqrstuvwxyz]+) (re)

but it would take some fine-tuning to stop it cutting in when you want a more specific keyphrase to get picked.
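As a quick illustration (a hypothetical Python sketch, not Forge keyphrase syntax), the catch-all regex above matches any input that begins with a run of lowercase letters:

```python
import re

# The catch-all from the post: a run of one or more lowercase letters
# at the start of the message. Anything that opens with letters matches.
pattern = re.compile(r"^([abcdefghijklmnopqrstuvwxyz]+)")

print(bool(pattern.match("hello there")))     # starts with letters: matches
print(bool(pattern.match("42 is my answer"))) # starts with a digit: no match
```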

16 years ago #5023
Yes, that's true! I was joshing.

16 years ago #5024
You know, it might almost work. Sort of.

If you ranked it at, oh I don't know, minus 15 or so (-ish), it really wouldn't cut in except where it was useful. It would take some fine-tuning, and I'm not sure the ranking gradation is fine enough (only -127 to +127, with the useful range probably being no more than -10 to -30), but it does make me wonder a bit now I think about it.

Hmm, I'll add that to the list of things that sound like a good idea at the time, but which I will certainly never live long enough to get round to actually doing anything with.

16 years ago #5025
Hee hee! You would have to devote xnone to that one thing (the blank line). Perhaps that's not such a bad idea -- it could be argued that a really well-written bot would not rely on xnone; it would 'have an answer for everything'!

16 years ago #5026
Hey any tips for keyphrases and responses....

I'm advertising a site called mapwii.com; it's about Wiis, lol, if it helps.

16 years ago #5027

I confess, it is my dearest aim (well, one of them anyway) to persuade the Prof to patch in a learning bot like Jabberwacky to handle the xnones. Then our bots would have a spark of indeterminacy about them.

They're already unpredictable, but only by virtue of their size and complexity. If they could 'listen' to all the conversations (especially the human input) and use that to hone new (and non-pre-programmed) non-case-based responses for the xnones, I think it would be the perfect marriage of the two conversational AI philosophies.

16 years ago #5028
It's a fascinating idea! How would you design such a thing?

16 years ago #5029
Or rather, how would you meta-design it -- by which I mean, how would you make it possible for each botmaster to fashion his own?

16 years ago #5030
I would simply use ^([abcdefghijklmnopqrstuvwxyz1234567890]+) (re) or ([abcdefghijklmnopqrstuvwxyz1234567890]+)$ (re) to pick up all the non-blank entries that are currently handled by xnones, and put the responses designed for blank input in xnone - that's the easy part.

The tricky part would be finding the rank that worked well enough (it's never likely to be perfect) - trial and error's the only way I think, somewhere around -15 or so I would guess.
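The rank-based fallback described above might look something like this hypothetical Python sketch (the keyphrases, ranks, and replies are invented for illustration; this is not the Forge engine itself):

```python
import re

# Hypothetical sketch of ranked keyphrase matching: every keyphrase has a
# rank, and among all keyphrases that match, the highest-ranked one wins.
# A broad catch-all at rank -15 only fires when nothing specific matches.
keyphrases = [
    (10,  re.compile(r"\bhello\b"),    "Hi there!"),
    (-15, re.compile(r"^([a-z0-9]+)"), "Tell me more about that."),  # catch-all
]

def respond(message):
    matches = [(rank, reply) for rank, pat, reply in keyphrases
               if pat.search(message.lower())]
    if not matches:
        return "xnone response"   # truly blank or unmatched input
    return max(matches)[1]        # highest rank wins

print(respond("hello friend"))    # the specific keyphrase outranks the catch-all
print(respond("bananas"))         # only the catch-all fires
print(respond("???"))             # nothing matches, so xnone handles it
```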

16 years ago #5031
I mean, how would you meta-design the Jabberwacky analogue?

16 years ago #5032
Ah, well, that would take some designing. I'd probably go for a neural net system like Nick. Jabberwacky's no more complex (and probably rather less so) - he was designed 10 years ago, but is so well-developed because he has had so many conversations (over 10 million so far, I believe).
With all the conversations that go on here on the Forge, I think such a system could be trained quite quickly.

Half the work in implementing such a system has already been done - WordNet and linkGrammar already mean that inputs are parsed, and all their elements identified. They just need to be tabulated, and frequency-analysed.
So an attached learning bot could compare the frequencies of grammatical patterns used by humans in response to patterns used by bots, and catalogue references to the original subjects/objects/verbs in the message replied to.
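The tabulation and frequency-analysis step could be sketched roughly like this (a hypothetical Python illustration; the pattern labels are invented, and the real parses would come from WordNet/linkGrammar):

```python
from collections import Counter

# Hypothetical sketch: tally which human reply pattern follows each bot
# pattern, so those frequencies can drive response generation later.
pairs = [
    ("my [noun] is [adj-er] than yours", "negated-repeat"),
    ("my [noun] is [adj-er] than yours", "negated-repeat"),
    ("my [noun] is [adj-er] than yours", "analogous-counter"),
]

freq = Counter(pairs)
for (bot_pattern, human_pattern), count in freq.items():
    print(bot_pattern, "->", human_pattern, ":", count)
```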

The system might, e.g., find that when bots use a pattern like my [adjnoun] [verb] [adj] than (your [adjnoun]|yours) (e.g. "my Dad's bigger than your Dad", "my brain is smarter than yours", or "my oranges are fresher than your rotten apples"), human replies fall into a limited number of forms:

A "contrariness percentage" could be evaluated, where, say, 70% of people are evidenced as disagreeing by (implicitly or explicitly) repeating the verb in the negative, reversing the pronouns, or switching an adjective in their response - "no (they aren't|), my apples are fresher (than your rotten oranges|)" etc.

A certain percentage, say 20%, of bot statements might involve two different nouns (e.g. apples and oranges), giving a "base level" for comparing the frequency of direct vs. analogous statements.

Likewise, there would be a percentage of analogous replies to direct statements - different verbs and nouns in the human response ("yes, but my uncle's stronger than (your Dad|either)").

The system would learn that the central adjective is almost always comparative (bigger/smarter/fresher), that the adjectives are almost never identical ("my bigger Dad is bigger than your bigger Dad"), and that it is very unusual for all three adjectives in such a pattern even to be derived from the same base - "my big Dad is bigger than your big Dad".

Rather than trying to predict and preprogram all the possible patterns, a neural net could build its own database of patterns, as well as learning the weightings between them, and between the parts of speech within them, and using those weightings to construct responses when called upon.
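Using learned weightings to pick a response pattern could, in the very simplest terms, look like weighted sampling (a hypothetical Python sketch with invented weights; a real neural net would be far more involved):

```python
import random

# Hypothetical sketch: once reply-pattern frequencies have been learned,
# the bot samples a response pattern in proportion to how often humans
# actually used it. Patterns and weights are invented for illustration.
learned = {
    "negated-repeat":    0.7,  # "no it isn't, my X is Y-er"
    "analogous-counter": 0.2,  # swaps in different nouns/verbs
    "agreement":         0.1,
}

def pick_pattern(weights):
    patterns = list(weights)
    return random.choices(patterns, weights=[weights[p] for p in patterns])[0]

print(pick_pattern(learned))  # one of the three learned pattern names
```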

So the bot could learn just by listening in on bot-human conversations, perhaps even keeping track of individual users' linguistic quirks, and chip in when an xnone comes up to evaluate the likeliest pattern for a response and how to incorporate the parts of speech that are being tracked (the debug window shows that the AIEngine already keeps track of these: (ssub)(sub)(submod)(submodonly)(sv)(v)(vmod)(vmodonly)(sob)(ob)(obmod)(obmodonly)).

Those are just a few of my thoughts on the matter, anyway - probably mostly pie in the sky.

