Seasons

This is a forum for general chit-chat, small talk, a "hey, how ya doing?" and such. Or hell, get crazy deep on something. Whatever you like.

Posts 5,698 - 5,709 of 6,170

16 years ago #5698
What does it take to be conscious?

16 years ago #5699
Does anyone still have Nick's URL?

Consciousness is an interesting issue. I would say it takes some level of self-awareness--a sense of separation and identity, even if murky or developing like a baby's mind. It can probably be seen as a spectrum, with babies and simple creatures that know just enough to avoid pain and seek survival at one end, and complex personalities with multiple levels of awareness at the other.

Psi, is there a way we could use a neural net to have the bot learn patterns from other conversations and still keep set personality traits and preferences? Maybe filters and limitations set by each botmaster (e.g. no matter what, bot A hates sushi and loves baseball, even in the learned xnones) and a weighted preference for the KPs entered by the botmaster? I always wanted self memories to work differently--so that if you told a bot it was green, it would somehow plug that into every conversation where color or description applied and stay "green" no matter what chatters said to it.

16 years ago #5700
Not to keep harping on the Guest153 issue (I saw the Prof logged in, and I know he has not forgotten us--TY Prof), but a learning bot should really be able to tell chatters apart too. If you were to plug in learning, it is very important that we be able to keep guest memories straight. We can set a mood and a sort of like or dislike of a guest now, but if a bot learned that chatter A has 3 kids and a dog, it should know chatter B is not chatter A. If we were able to plug into a neural net and have the bot store memories in botmaster-defined categories for each chatter, instead of setting each memory with a plugin, that would be useful. The bot should also remember the source of each piece of information and check for contradictions, ultimately relying on botmaster-defined postulates or premises (if they exist) as the bot's truth. That way, if chatter A said a hand has 5 fingers and chatter B said a hand has 4, the bot could ask the botmaster which is correct (or we could just tell it after reading the transcripts).
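
Something like this, sketched in plain Python (this has nothing to do with how the Forge actually works inside--ChatterMemory and the POSTULATES table are names I made up for illustration):

    # Botmaster-defined postulates: the bot's ground truth.
    POSTULATES = {"fingers per hand": "5"}

    class ChatterMemory:
        """Keeps each chatter's facts separate and remembers their source."""
        def __init__(self):
            self.facts = {}  # (chatter_id, key) -> (value, source)

        def learn(self, chatter_id, key, value):
            # A postulate always outranks whatever a chatter claims.
            if key in POSTULATES and value != POSTULATES[key]:
                return "Flag for botmaster: %s says %s = %s, but my postulate says %s" % (
                    chatter_id, key, value, POSTULATES[key])
            stored = self.facts.get((chatter_id, key))
            if stored and stored[0] != value:
                # Contradiction from the same chatter: ask rather than overwrite.
                return "Which is right, %s or %s?" % (stored[0], value)
            self.facts[(chatter_id, key)] = (value, chatter_id)

    mem = ChatterMemory()
    mem.learn("chatter A", "kids", "3")  # chatter A's facts...
    mem.learn("chatter B", "kids", "0")  # ...never bleed into chatter B's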

Now if we could only import our hybrid learning/PF bots into a game like Spore, or into a physical bot like my (as yet unbuilt) rat-neuron-based Roomba, we would be gods indeed. Muhaa haa!

16 years ago #5701
What does it take to be conscious?

That indeed is the question. Or at least one of them. We can't answer that until we can answer "what is consciousness anyway?"

I suspect we won't be able to answer either of them until we actually build an artificial consciousness. And I expect the first conscious computer program will be a duplicate of a human mind, transferred from its natural biological substrate to an artificial electronic one - that seems to me the most promising way to make such a thing while we still can't answer the fundamental questions.
But it's perfectly possible that consciousness might "arise" from a sufficiently complex neural net without this. Whether we will notice/accept it before we have a proven architectural model, like a human brain (generally accepted to exhibit consciousness already) mapped to silicon, is another matter!

16 years ago #5702
One last thought: has anyone tried using memories as a way of keeping Guest153s straight when a chatter will self-identify to the bot? I haven't tried it because even though creating a memory called "identity" is easy, going in and changing memories and conditions for each KP would be a lot of work. You all are not as lazy as I am. I figure before I even think of the work, I'll ask whether anyone else has had it work for them.

16 years ago #5703
I suppose you could ask for the guest's name every time... But then you would have to keep track of all the relevant variable versions yourself...

16 years ago #5704
Yep, Irina, that was my concern. Also, for storytelling there would be endless strings of "if not Joe, Sally, Jerry..." to stop it from telling the same story over and over to one named chatter. And for each KP you have to start entering "if Joe, Sally, hasdog" or "if Jane, Sue, hascat" and hope it's worth it, because chatters seldom come back and use the same name. IPs work much better.

16 years ago #5705
Does anyone still have Nick's URL?

It used to be http://www.geocities.com/nickthebot/nick.html - sadly, the link is no longer active.

Consciousness is an interesting issue. I would say it takes some level of self-awareness--a sense of separation and identity, even if murky or developing like a baby's mind. It can probably be seen as a spectrum, with babies and simple creatures that know just enough to avoid pain and seek survival at one end, and complex personalities with multiple levels of awareness at the other.

Agreed.

Psi, is there a way we could use a neural net to have the bot learn patterns from other conversations and still keep set personality traits and preferences? Maybe filters and limitations set by each botmaster (e.g. no matter what, bot A hates sushi and loves baseball, even in the learned xnones) and a weighted preference for the KPs entered by the botmaster?

In principle there is no limit to the varying degrees to which the two approaches could be meshed. We know the human brain has a similar meshing of conscious/unconscious processing (and often a marked lack of consistency that either results from it, or is at least associated with it*), so it's only a matter of adding enough rules to govern any given situation. Of course, if rules have to be explicitly coded by hand in advance, we are back at the central problem of case-based systems: you need a ludicrous amount of resources to specify deterministic rules for every possible eventuality.
There comes a point where you have to let the system develop algorithms for its own rules, and adapt them as it learns.
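
As a toy illustration of the meshing (plain Python; respond() and learned_model are stand-ins I made up, not anything in the Forge):

    # Hand-coded personality rules are checked first; anything they
    # don't cover falls through to the learned model.
    OVERRIDES = {
        "sushi": "Ugh, no. I hate sushi.",   # bot A's fixed dislike
        "baseball": "I love baseball!",      # bot A's fixed like
    }

    def respond(user_input, learned_model):
        for topic, reply in OVERRIDES.items():
            if topic in user_input.lower():
                return reply                 # the set trait always wins
        return learned_model(user_input)     # otherwise, defer to learning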

I always wanted self memories to work differently--so that if you told a bot it was green, it would somehow plug that into every conversation where color or description applied and stay "green" no matter what chatters said to it.

That would be easier with a learning bot, I think--you make sure the greenness is remembered in pre-weighted circuits that are (almost or entirely) incapable of being changed by future learning. The initial weightings can be pre-configured as desired.
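
In rough NumPy terms it might look like this (an illustrative sketch only, with a made-up three-weight "network"):

    import numpy as np

    weights = np.array([0.9, 0.1, -0.3])      # 0.9 is the "I am green" circuit
    frozen  = np.array([True, False, False])  # protect that connection forever

    def update(weights, gradient, lr=0.01):
        # Zero the gradient on frozen connections, so no amount of
        # chatter can ever overwrite the pre-configured weighting.
        gradient = np.where(frozen, 0.0, gradient)
        return weights - lr * gradient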

* - Regarding human contrariness, consider that people will often say they like the same things as people they are friendly to, and dislike what their "enemies" like. So in response to "I like chocolate. Do you?" I am more likely to say "yes, I love chocolate!" to a friend, while I might say "nah, it makes you fat and spotty" to someone I don't like (a very simplistic example).
What factors affect these tendencies? Peer pressure, inherent "contrariness", a desire to appear strong-willed and independently minded, an empathic desire to appear supportive of a friend--all manner of forces are at play in human-human conversation, and they shift from moment to moment as circumstances change.

16 years ago #5706
Then again, the SL chatters would still all have the same IP, right, Psi? Do I still need the complex identity-memory thing? Maybe a simple auto-reset between Guest153 chats, so we don't have to hand-clear memories, would help?

16 years ago #5707
Thanks, Psi. Did anyone DL Nick, and would you be willing to compress him (zip, rar, whatever) and email him to me? It's not a big deal, but I missed my chance, and I didn't realize he used a neural net at the time. Now, too late, I am more curious.

16 years ago #5708
Then again, the SL chatters would still all have the same IP, right, Psi? Do I still need the complex identity-memory thing?

I don't know. It's a good question, and probably depends on where the bot is deployed. I don't know how the server farms allocate IP#s, or whether it's the same on the mainland and the estates. Certainly it is not going to distinguish between the individuals it's talking to, but the IP# might not be entirely consistent even for one bot permanently installed in one place. The IP# could change after region restarts, or even randomly, so you might need even more complexity than having them all be Guest153 requires.
I'm afraid the Forge was never designed to be (ab)used in this kind of mashup with a third-party virtual world!

Maybe a simple auto-reset between Guest153 chats, so we don't have to hand-clear memories, would help?

It would be even better to get the distinct IPs back (I'll hassle the Prof about it once the Loebner's over).
You can reset memories between chats in the 'AIScript Initialization', of course, and it's not too arduous--just a matter of adding a line there for each of your memories. And once that's done, you can forget about it.

16 years ago #5709
OK Psi, what line do I add to make them reset between each guest chat? I don't see it in the Book of AI (sorry if I missed it).


