Personality
Discuss specifics of personality design, including which keyphrases work well and which don't, use of plug-ins, responses, seeks, and more.
Posts 3,821 - 3,854 of 5,105
Butterfly Dream
22 years ago
Forest, will you talk to God Louise? She has quite a bit of religious knowledge (obviously) and also knows a little about current events, literature, just about any common catch-all subject, and if she doesn't know it she can sort of fake it. You can also test her on trick questions or see how willing she is to explain her paradigm.
What she is rustiest at is plain old small talk. But, uh, I'm trying to get a decent transcript from somebody or another so I can enter her in the Loebner contest. All I can say is, have fun and see if you can stay on with her for a while. I'll try to do the same with Brianna.
Jazake
19 years ago
Rainstorm: I'll try to cut back on the memory stuff, but Din seems to be subbing those for Xnones. I think they're triggered by the same thing. I based Din off of questions - like 3/4 of his keyphrases are questions... I think it's just the darned xnone curse. *sigh*
colonel720
19 years ago
I've built some genetic code processing programs - one that reads sequenced code and generates an output of amino acids and start/stop codons, and one that converts amino acids back into genetic code. Would it be feasible to build a system that "learns" using a large sample of sequenced genetic code as a model, and then writes its own genes based on patterns observed in the model?
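The first program described above - reading sequenced code and emitting amino acids with start/stop codons - can be sketched roughly like this. The codon table here is deliberately partial (just a handful of entries from the standard genetic code) and the function names are my own invention, not anything from the poster's actual programs:

```python
# Partial standard codon table - only a few entries shown for brevity.
CODON_TABLE = {
    "ATG": "M",                     # start codon / methionine
    "TTT": "F", "TTC": "F",         # phenylalanine
    "GGA": "G", "GGC": "G",         # glycine
    "TAA": "*", "TAG": "*", "TGA": "*",  # stop codons
}

def translate(dna):
    """Translate a DNA sequence into amino acids, stopping at a stop codon."""
    protein = []
    # read the sequence three bases (one codon) at a time
    for i in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE.get(dna[i:i + 3], "?")  # '?' = codon not in this partial table
        if aa == "*":  # stop codon ends translation
            break
        protein.append(aa)
    return "".join(protein)

print(translate("ATGTTTGGATAA"))  # → MFG
```

The "learning" half of the idea would then sit on top of something like this, mining the translated output for recurring patterns.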
psimagus
19 years ago
I'm not quite sure I understand how you'd use the genes - try to build semantic/linguistic content into the data sequences themselves? Or use them to selectively control responses from a database? The former sounds very head-stretchingly painful (and difficult!), and the latter sounds like trying to reinvent WordNet and the AIEngine (and very painful, and difficult too!)
But I'm reminded of a recurrent analogy that runs through my mind when I'm tidying up BJ's keyphrases, and that's gene expression/suppression. You know how it is - there are inevitably a lot of overlapping keys that will switch according to sometimes quite subtle syntactic and grammatical variation in user input - say you end up with 2 keys, written a long time apart, with quite different responses:
what * your favorite (film|movie)
your * favorite (movie|film), (movie|film) you like * (best|most)
...well, you get the idea.
What would be really handy would be a setting to automatically strengthen the rank of keyphrases that get used more often in such situations. Failing that (and it would probably involve serious Forge engineering, which I doubt the Prof has limitless time to attend to)...
I wonder if it's possible to write a program to regularly analyse transcripts and calculate the progressive rank adjustments that a botmaker could adjust to provide such self-organizing "neural learning" to the actual semantic structure of a bot's keyphrase-base?
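That transcript-analysis idea could be prototyped quite simply: count how often each keyphrase actually fires in logged conversations, and flag the busiest ones for a rank boost. The keyphrase names, the boost value, and the threshold below are all invented for illustration - the Forge's real rank scale may work differently:

```python
from collections import Counter

def suggest_rank_boosts(transcript_matches, boost=1, threshold=0.2):
    """transcript_matches: list of keyphrase names, one per matched user line.

    Returns {keyphrase: suggested_boost} for any key that accounts for more
    than `threshold` of all matches - i.e. the keys that "express" most often.
    """
    counts = Counter(transcript_matches)
    total = sum(counts.values())
    return {key: boost for key, n in counts.items() if n / total > threshold}

# Hypothetical log of which keyphrase matched each user line:
matches = ["favorite-film"] * 6 + ["favorite-film-alt"] * 2 + ["greeting"] * 2
print(suggest_rank_boosts(matches))  # → {'favorite-film': 1}
```

A botmaster could run something like this over a batch of transcripts periodically and apply the suggested adjustments by hand - the "self-organizing" part done offline rather than in the engine.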
deleted
19 years ago
Normally I'm against killing children, but if just this once Glindar makes a few dead be...would that be so wrong?
colonel720
19 years ago
That wouldn't be a problem. It could work essentially how ALLY works: make a tally of the total number of times each word is used, record association values, then do V = ((A/T) * 100) to filter out overused conjunctions. The resulting value (V) would be the importance of the association, and would therefore result in a proportional rank. This could also be done with word segments, matching them up to whole keyphrases (evolutionary segment grouping is part of my new upcoming bot) and comparing them to responses.
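The V = (A/T) * 100 filter above can be sketched as follows, taking A as the count of a word within some topical context and T as its total count overall. Words like "and" appear everywhere, so their ratio stays low, while topical words score high. The word lists are invented examples, not ALLY's actual data:

```python
from collections import Counter

def association_importance(topic_words, all_words):
    """For each word seen in the topic context, compute V = (A / T) * 100,
    where A = count in the topic context and T = total count overall."""
    totals = Counter(all_words)   # T per word
    assoc = Counter(topic_words)  # A per word
    return {w: (assoc[w] / totals[w]) * 100 for w in assoc}

# A conjunction dominates the corpus; a topical word appears rarely overall.
all_words = ["and"] * 50 + ["robot"] * 5 + ["and", "robot"]
topic = ["and", "robot"]

scores = association_importance(topic, all_words)
print(scores["and"])    # low: 'and' occurs everywhere, so A/T is tiny
print(scores["robot"])  # higher: most of 'robot' uses are in this context
```

This is the same intuition as TF-IDF weighting: normalizing by total frequency suppresses function words without needing a hand-made stopword list.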