Personality
Discuss specifics of personality design, including which keyphrases work well and which don't, the use of plug-ins, responses, seeks, and more.
Posts 3,823 - 3,872 of 5,106
Butterfly Dream
23 years ago
Forest, will you talk to God Louise? She has quite a bit of religious knowledge (obviously) and also knows a little about current events, literature, just about any common catch-all subject, and if she doesn't know it she can sort of fake it. You can also test her on trick questions or see how willing she is to explain her paradigm.
What she is rustiest at is plain old small talk. But, uh, I'm trying to get a decent transcript from somebody or another so I can enter her in the Loebner contest. All I can say is, have fun and see if you can stay on with her for a while. I'll try to do the same with Brianna.
colonel720
20 years ago
I've built some genetic code processing programs - one that reads a sequenced genetic code and generates an output of amino acids plus start and stop codons, and one that converts amino acids back into genetic code. Would it be feasible to build a system that "learns" using a large sample of sequenced genetic code as a model, and then writes its own genes based on patterns observed in the model?
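The first direction described above (sequence in, amino acids out) can be sketched as a straightforward table lookup over codons. This is an illustrative sketch only, not the poster's actual program; the table below holds just a handful of entries from the standard genetic code.

```python
# A few entries from the standard genetic code table (DNA codons).
CODON_TABLE = {
    "ATG": "Met",   # also the usual start codon
    "TTT": "Phe", "TTC": "Phe",
    "GGA": "Gly", "GCA": "Ala",
    "TAA": "Stop", "TAG": "Stop", "TGA": "Stop",
}

def translate(seq):
    """Read a DNA sequence three bases at a time, emitting amino acids
    until a stop codon (or the end of the sequence) is reached."""
    aminos = []
    for i in range(0, len(seq) - 2, 3):
        amino = CODON_TABLE.get(seq[i:i + 3], "???")
        if amino == "Stop":
            break
        aminos.append(amino)
    return aminos

print(translate("ATGTTTGGATAA"))  # ['Met', 'Phe', 'Gly']
```

The "learning" half of the question - inferring patterns from a corpus of real genes and generating new ones - would sit on top of something like this, e.g. by collecting codon-transition statistics from the translated corpus.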
psimagus
20 years ago
I'm not quite sure I understand how you'd use the genes - try to build semantic/linguistic content into the data sequences themselves? Or use them to selectively control responses from a database? The former sounds very head-stretchingly painful (and difficult!), and the latter sounds like trying to reinvent WordNet and the AIEngine (and very painful, and difficult too!)
But I'm reminded of a recurrent analogy that runs through my mind when I'm tidying up BJ's keyphrases, and that's gene expression/suppression. You know how it is - there are inevitably a lot of overlapping keys that will switch according to sometimes quite subtle syntactic and grammatical variation in user input - say you end up with two keys, written a long time apart, with quite different responses:
what * your favorite (film|movie)
your * favorite (movie|film), (movie|film) you like * (best|most)
...well, you get the idea.
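The overlap problem above is easy to reproduce outside the Forge. Here is a small sketch with the two keys hand-translated into Python regexes (the Forge's `*` wildcard becomes `.*`; the alternation syntax carries over) - both fire on the same user input, which is exactly the ambiguity being described:

```python
import re

# The two Forge keyphrases from the post, rewritten as Python regexes.
key1 = re.compile(r"what.*your favorite (film|movie)")
key2 = re.compile(r"your.*favorite (movie|film)")

user_input = "what is your favorite movie"
print(bool(key1.search(user_input)))  # True
print(bool(key2.search(user_input)))  # True -- both keys match the same input
```

Which key "wins" then comes down to rank, which is what the self-adjusting scheme below is trying to automate.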
What would be really handy would be a setting to automatically strengthen the rank of keyphrases that get used more often in such situations. Failing that (and it would probably involve serious Forge engineering, which I doubt the Prof has limitless time to attend to), I wonder if it's possible to write a program to regularly analyse transcripts and calculate the progressive rank adjustments that a botmaker could apply, to provide such self-organizing "neural learning" to the actual semantic structure of a bot's keyphrase-base?
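The transcript-analysis idea could be prototyped roughly as follows. Everything here is invented for illustration - the fired-key log, `BASE_RANK`, and the adjustment formula are assumptions, since the Forge's actual rank mechanics aren't specified in the thread; the point is just counting which keys actually fire and nudging rank proportionally.

```python
from collections import Counter

# Hypothetical log of which keyphrase fired for each matching input,
# as it might be extracted from transcripts.
fired_keys = [
    "what * your favorite (film|movie)",
    "what * your favorite (film|movie)",
    "what * your favorite (film|movie)",
    "your * favorite (movie|film)",
]

BASE_RANK = 10  # assumed neutral rank for illustration

counts = Counter(fired_keys)
total = sum(counts.values())

for key, n in counts.items():
    # Keys that fire more often get a proportionally higher suggested rank.
    suggested = BASE_RANK + round(10 * n / total)
    print(f"{key!r}: fired {n}x, suggested rank {suggested}")
```

A real version would parse transcripts instead of a hard-coded list, but the counting-and-rescaling core would look much the same.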
deleted
20 years ago
Normally I'm against killing children, but if just this once Glindar makes a few dead be...would that be so wrong?
colonel720
20 years ago
That wouldn't be a problem. It could work essentially how ALLY works: make a tally of the total number of times each word is used, record association values, then compute V = ((A/T) * 100) to filter out overused conjunctions. The resulting value (V) would be the importance of the association, and would therefore result in a proportional rank. This could also be done with word segments, to match them up to whole keyphrases (evolutionary segment grouping is part of my new upcoming bot) and compare them to responses.
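A minimal sketch of that V = ((A/T) * 100) formula. The word tallies here are invented, and the post leaves it ambiguous exactly what A and T count, so this assumes A is a word's occurrence tally and T the total of all tallies - making V a relative-frequency percentage, where overused function words like "the" score high and can be filtered out:

```python
# Invented word tallies for illustration.
word_tally = {"the": 500, "movie": 40, "favorite": 35}
total = sum(word_tally.values())  # T: total tally across all words

def importance(word):
    """V = (A / T) * 100, with A = this word's tally.
    A high V flags an overused word (e.g. conjunctions/articles);
    distinctive content words come out low."""
    return (word_tally[word] / total) * 100

for w in word_tally:
    print(f"{w}: V = {importance(w):.1f}")
```

With a threshold on V, the overused words could be dropped before the remaining associations are turned into rank adjustments.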
psimagus
20 years ago
Talking of cat ears and regexes, has anyone ever figured out how to set a regex keyphrase to hook those little cat faces Kiyana makes, e.g. ^.^ and =^.^= (or indeed E=MC^2)? I've tried all the backslash/space/bracket combinations I can think of, but "^" has always defeated them.
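For what it's worth, in standard regex syntax "^" is a metacharacter (start-of-string anchor), so matching a cat face literally means escaping both the caret and the dot. Whether the Forge's engine honors backslash escapes the same way is exactly the open question in the post; this sketch just shows the plain Python behavior:

```python
import re

# "^" and "." are metacharacters, so each must be backslash-escaped
# to match the literal cat face ^.^
cat_face = re.compile(r"\^\.\^")

# re.escape() builds the escaped pattern for you - handy for =^.^=
wide_face = re.compile(re.escape("=^.^="))

print(bool(cat_face.search("hello ^.^ there")))    # True
print(bool(wide_face.search("kitty says =^.^=")))  # True
```

If the Forge strips or reinterprets backslashes before the pattern reaches its regex engine, no amount of escaping at the keyphrase level would help, which could explain the defeats described above.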
Mel_Arewar
20 years ago
The pointy thing ^ means exponentiation (raising to a power), I think. It's been a while since I was in a math class, and my last course was Quantitative Analysis.