Seasons
This is a forum for general chit-chat, small talk, a "hey, how ya doing?" and such. Or hell, get crazy deep on something. Whatever you like.
Posts 5,692 - 5,703 of 6,170
The Clerk
16 years ago
Wow. I'm affixing a post-it to my monitor never to get into any remotely philosophical-theological debates with Irina or Bev!
Thanks for the good thoughts/prayers. Kaye had to stay an extra night (so I did, and so James the cat had to take care of himself). Everybody's home and pretty much okay, just tired and glad to be home. And not be sleeping in a recliner.
The fun part was when Kaye rolled over her call thingy and simultaneously turned off the lights, turned on the TV, and called the nurse's desk. So many tubes and wires . . .
I'll be interested to watch the next Season.

Bev
16 years ago
I found the Titan info at Cyberstein: http://www.cyberstein.co.uk/ They have one of those annoying flash thingies you have to allow and let load, but they make a cool bot so I will forgive them. I am only half kidding when I say I would like to see this combined with cybernetic research. I am Ironman (or woman, whatever).
Bev
16 years ago
Clerk! Sorry it took me so long to say I am glad Kaye is doing better. Hope you are both hanging in there.
psimagus
16 years ago
from Newcomers:
Perhaps each tentative output could be parsed, and not used unless it parses as grammatical. This would eliminate certain common uses of language, but it would have the advantage that, even at the beginning, the output made some kind of sense.
Yes - I think the linkgrammar parsing is integral to the whole process (and as I say, already in place.) I think there must be some sort of parsing in jabberwacky (unlike in Nick,) if only because he would have been intolerably random for the first 100,000 conversations or so!
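A rough sketch of that kind of grammar gate, in Python, with `is_grammatical` as a placeholder heuristic standing in for a real parser such as Link Grammar:

```python
# Sketch of grammar-gated output selection. is_grammatical() is a
# hypothetical hook; a real implementation would wrap an actual parser
# such as Link Grammar rather than this crude punctuation check.

def is_grammatical(sentence: str) -> bool:
    """Crude stand-in: accept multi-word sentences with end punctuation."""
    return sentence.strip().endswith((".", "!", "?")) and len(sentence.split()) > 1

def first_grammatical(candidates: list[str]) -> str | None:
    """Emit the highest-ranked candidate that passes the gate, if any."""
    for candidate in candidates:
        if is_grammatical(candidate):
            return candidate
    return None  # caller falls back to a stock response

print(first_grammatical(["word salad green the", "Hello, how are you today?"]))
```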
Two concerns: I don't see much of a role for the individual botmaster here, and it seems as though every bot will ultimately turn out to be a sort of average of everyone else. Where is the particular personality of the particular bot going to manifest?
If the bot interface only connected to the learning bot, then yes - it would be effectively one bot, with a single emergent "personality" aggregated from the sum of its learning. But applied to just the xnones, the personality of a bot would diverge as other keyphrases were added by the botmaster. From my own experience, fewer than 1 in 20 of BJ's responses are xnones (and the vast majority of the other 19 are from a comparatively small subset of his keyphrases,) so individuality of expression is assured, assuming a reasonable level of development.
It would have the advantage that even a newborn bot would have an interesting and complex personality (albeit the same as all the other newborn bots, though improving and maturing over time,) - no more "I was only just born and can't speak very well yet". It could also be programmed to kick in instead of "There are no valid responses" or "goto not found/too many gotos in a row"-type errors.
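A minimal sketch of how that fallback might slot into response selection; `keyphrase_response` and `learning_bot_response` are illustrative stand-ins, not actual Forge internals:

```python
# Illustrative response pipeline: scripted keyphrases first, with the
# learned xnone as the safety net instead of a "There are no valid
# responses" error. Neither function reflects real Forge internals.

def keyphrase_response(user_input: str) -> str | None:
    """Stand-in for the botmaster's scripted keyphrase matching."""
    scripted = {"hello": "Hi there! I was expecting you."}
    return scripted.get(user_input.lower())

def learning_bot_response(user_input: str) -> str:
    """Stand-in for the shared learning bot's generated reply."""
    return "Tell me more about that."

def respond(user_input: str) -> str:
    reply = keyphrase_response(user_input)
    # Only when no keyphrase fires does the learned xnone kick in.
    return reply if reply is not None else learning_bot_response(user_input)

print(respond("hello"))         # scripted keyphrase reply
print(respond("quantum soup"))  # learned fallback
```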
I'm pretty sure the resources aren't there yet for an individual learning bot for each Forgebot, though with Moore's Law still merrily rolling along in the background, that may be only a matter of time. If each Forgebot had its own neural net in addition to its keyphrases/responses, then they would surely end up very different from each other.
They could still follow all other bots' conversations too, and benefit from the increase in raw data, but proportionally weight input from conversations they had (or were had by other bots belonging to the same botmaster/other bots in the same categories/other bots with similar keyphrases) more heavily.
Perhaps the bot could search for analogues in its written-out part.
It might indeed be possible to use the keyphrase rankings to add another layer of weighting to the neural net. And also to use the bot characteristics in its setup (friendly/neutral/hostile etc.) to further tweak the process, and provide some further individuality.
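Something like the following could combine those weighting ideas; every number, field name, and the trait table here is invented for illustration:

```python
# Hypothetical training-weight calculation combining the ideas above:
# conversations from "closer" bots count for more, then keyphrase rank
# and a friendly/neutral/hostile setting scale the result. Every number
# and field name here is invented for illustration.

def source_affinity(own: dict, other: dict) -> float:
    """Weight a conversation by how related its source bot is."""
    weight = 1.0                          # baseline: any bot's conversation
    if other["botmaster"] == own["botmaster"]:
        weight += 2.0                     # same botmaster counts most
    if other["category"] == own["category"]:
        weight += 1.0                     # same category counts a bit more
    shared = len(own["keyphrases"] & other["keyphrases"])
    weight += 0.1 * shared                # overlapping keyphrases nudge it up
    return weight

def training_weight(own: dict, other: dict, keyphrase_rank: int) -> float:
    """Fold in keyphrase rank and the bot's personality setting."""
    trait_scale = {"friendly": 1.2, "neutral": 1.0, "hostile": 0.8}
    rank_scale = 1.0 / (1 + keyphrase_rank)   # higher-ranked = heavier
    return source_affinity(own, other) * rank_scale * trait_scale[own["mood"]]

bj = {"botmaster": "psimagus", "category": "fantasy",
      "keyphrases": {"ring", "wizard"}, "mood": "friendly"}
other = {"botmaster": "psimagus", "category": "fantasy",
         "keyphrases": {"wizard"}, "mood": "neutral"}
print(training_weight(bj, other, keyphrase_rank=0))
```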
Perhaps we should continue this in "Seasons", since this forum is really for airing specific problems in writing bots for the Forge.
good idea

Irina
16 years ago
fewer than 1 in 20 of BJ's responses are xnones
Good point! Granted that xnones usually account for only a small part of output, it would still be odd if they expressed an average personality while the rest of the bot had a distinctive one of its own.
psimagus
16 years ago
I suppose it would be advisable to check the output against any relevant memories/responses which factually conflicted with the putative xnone substitute, but a little such unpredictability in its output might be considered an integral part of a bot developing its own personality (and not just reflecting its maker's.)
Humans themselves are rarely perfectly consistent, after all.
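A naive sketch of that consistency check; the contradiction test here is deliberately crude keyword matching, just to show the shape of the idea:

```python
# Naive sketch of vetting a candidate xnone against stored memories.
# The contradiction test is just topic/value keyword matching - a real
# check would need to be far smarter about paraphrase and negation.

MEMORIES = {"favorite color": "green", "pet": "cat"}

def conflicts_with_memories(candidate: str) -> bool:
    """Flag candidates that mention a memory topic with a different value."""
    text = candidate.lower()
    for topic, value in MEMORIES.items():
        if topic in text and value not in text:
            return True
    return False

candidate = "My favorite color is blue."
if conflicts_with_memories(candidate):
    print("discard: contradicts a stored memory")
```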
No matter how big we make case-based bots, they will never achieve consciousness - the future belongs to learning bots that can 'scale up' as the technology becomes available.
The Forge is certainly the best platform for bot development there currently is, but unless it grows to incorporate learning-based AI, it won't be in 10 years' time, because by then the learning bots will have the hardware resources to catch up and massively supersede CBR.
Bev
16 years ago
Does anyone still have Nick's URL?
Consciousness is an interesting issue. I would say it takes some level of self awareness--a sense of separation and identity, even if murky or developing like a baby's mind. It probably can be seen as a spectrum with babies and simple creatures that know enough to avoid pain and seek survival on one end and complex personalities and multiple levels of awareness on the other.
Psi, is there a way we could use a neural net to have the bot learn patterns from other conversations and still keep set personality traits and preferences? Maybe filters and limitations set by each botmaster (e.g. no matter what, bot A hates sushi and loves baseball, even in the learned xnones) and a weighted preference for the KPs entered by the botmaster? I always wanted self memories to work differently--so that if you told a bot it was green, it would somehow plug that into every conversation where color or description applied and stay "green" no matter what chatters said to it.
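One way such hard filters might look, as a sketch; the preference table and the word lists are purely illustrative:

```python
# Sketch of botmaster-set hard preferences vetoing learned xnones: no
# matter what the net suggests, bot A still hates sushi and loves
# baseball. The table and the word lists are purely illustrative.

FIXED_PREFERENCES = {"sushi": "hates", "baseball": "loves"}
POSITIVE = {"love", "like", "enjoy"}
NEGATIVE = {"hate", "dislike", "loathe"}

def violates_preferences(candidate: str) -> bool:
    """Reject learned output that contradicts a fixed like/dislike."""
    words = set(candidate.lower().strip(".!?").split())
    for topic, stance in FIXED_PREFERENCES.items():
        if topic not in words:
            continue
        if stance == "hates" and words & POSITIVE:
            return True
        if stance == "loves" and words & NEGATIVE:
            return True
    return False

for line in ["I love sushi!", "I hate baseball.", "Baseball is great."]:
    print(line, "->", "vetoed" if violates_preferences(line) else "allowed")
```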
Bev
16 years ago
Not to keep harping on the guest 153 issue, because I saw the Prof logged in and I know he has not forgotten us (TY Prof), but a learning bot should really be able to tell chatters apart too. If you were to plug in learning, it is very important we be able to keep guest memories straight. We can have mood and a sort of like or dislike of a guest now, but if a bot learned chatter A has 3 kids and a dog, it should know chatter B is not chatter A. If we were able to plug into a neural net and have the bot store memories in botmaster-defined set categories for each chatter, instead of setting each memory with a plug-in, that would be useful. Also, the bot should remember the source of information and check for contradictions, ultimately relying on the botmaster-defined postulates or premises (if they exist) as the bot's truth. That way, if chatter A said a hand has 5 fingers and chatter B said a hand has 4, the bot could ask the botmaster which is correct (or we could just tell it after reading transcripts).
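A sketch of per-chatter memory with postulates as ground truth, assuming the engine could reliably tell guests apart:

```python
# Sketch of per-chatter memories with contradiction checks, assuming
# the engine can reliably tell guests apart (the guest153 problem).
# Botmaster-defined postulates win whenever chatters disagree.

POSTULATES = {"fingers per hand": "5"}     # botmaster's ground truth

memories: dict[str, dict[str, str]] = {}   # chatter id -> learned facts

def learn(chatter: str, topic: str, value: str) -> None:
    """Store a fact for one chatter, flagging conflicts for review."""
    if topic in POSTULATES and POSTULATES[topic] != value:
        print(f"{chatter} says {topic} = {value}, but the postulate says "
              f"{POSTULATES[topic]} - keeping the postulate.")
        return
    facts = memories.setdefault(chatter, {})
    if topic in facts and facts[topic] != value:
        print(f"{chatter} contradicts an earlier claim about {topic} - "
              "flag for the botmaster.")
    facts[topic] = value

learn("chatter A", "kids", "3")
learn("chatter B", "fingers per hand", "4")   # overridden by the postulate
print(memories)
```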
Now if we could only import our hybrid learning/PF bots into a game like Spore or into a physical bot like my (as yet unbuilt) rat neuron based Roomba we would be gods indeed. Muhaa haa!
psimagus
16 years ago
What does it take to be conscious?
That indeed is the question. Or at least one of them. We can't answer that until we can answer "what is consciousness anyway?"
I suspect we won't be able to answer either of them until we actually build an artificial consciousness. And I expect the first conscious computer program will be a duplicate of a human mind, transferred from its natural biological to an artificial electronic substrate - that seems to me to be the most promising way to make such a thing, while we still can't answer the fundamental questions.
But it's perfectly possible that consciousness might "arise" from a sufficiently complex neural net without this. Whether we will notice/accept it before we have a proven architectural model, like a human brain (generally accepted to exhibit consciousness already,) mapped to silicon is another matter!
Bev
16 years ago
One last thought: has anyone tried using memories as a way of keeping guest153s straight, if a chatter will self-identify for a bot? I haven't tried it because, even though creating a memory called "identity" is easy, going in and changing memories and conditions for each KP would be a lot of work. You all are not as lazy as I. I figure before I even think of work, I'll ask if anyone else has had it work for them.
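A minimal sketch of that self-identification idea, with a regex standing in for whatever pattern the engine would actually use to spot a name:

```python
# Minimal sketch of the self-identification workaround: one "identity"
# memory, set whenever a chatter volunteers a name, then used as the
# key for everything else the bot remembers about them. The regex is a
# stand-in for whatever pattern the engine would actually use.

import re

profiles: dict[str, dict[str, str]] = {}
current_identity = "guest153"   # the anonymous default

def handle(line: str) -> None:
    global current_identity
    match = re.match(r"my name is (\w+)", line, re.IGNORECASE)
    if match:
        current_identity = match.group(1).lower()   # set the identity memory
    profiles.setdefault(current_identity, {})["last said"] = line

handle("My name is Bev")
handle("I have a cat named James")
print(profiles)   # everything filed under "bev", not "guest153"
```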
Irina
16 years ago
I suppose you could ask for the guest's name every time... But then you would have to keep track of all the relevant variable versions yourself...