Bot Contest
Here I'll be posting information on various Bot contests that challenge and test a Bot's AI and realism. Feel free to post comments and updates on contests, as well as announcements for new contests.
Posts 3,230 - 3,241 of 4,091
View Contest Winners in the Hall of Fame.
MickMcA
19 years ago
>> What I meant is that with bots, there's often
>> no intended meaning to discover.
Human01: Hey.
Human02: How's it hanging?
Human01: Nice weather!
Human02: Do you have the time at all?
Human01: I'm, you know, out of it.
I'll argue till the cows come home that there is only one difference between bot minds and huminds: one has wetware. It's a bit like dogs taking credit for their noses, I think.
Regarding dog minds and huminds: One can't tell the difference between two-week-old roadkill and three-week-old roadkill, or estimate the ferocity of a neighbor by smelling a bush. The other has the dang opposable thumb.

M
PS: But I also love the great Wittgenstein observation: If a lion could talk, we would not understand him.
M again.
Eugene Meltzner
19 years ago
If you really think a bot has intelligence levels comparable to a dog, then I doubt there's any point continuing to argue about it.
djfroggy
19 years ago
If you ask me, I think it really just comes down to processing power. For example, ALLY looks at the relationship between 2 words. How many word relationships does the human brain examine? Then, of course, there's the argument for a trinary computer, but I won't get into that.
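How ALLY actually examines its two-word relationships isn't documented here, but the general idea of tracking relationships between adjacent word pairs can be sketched as a toy co-occurrence counter (the example sentence is invented for illustration):

```python
from collections import Counter

def word_pair_counts(text):
    """Count how often each ordered pair of adjacent words occurs."""
    words = text.lower().split()
    # zip the word list against itself shifted by one to form adjacent pairs
    return Counter(zip(words, words[1:]))

counts = word_pair_counts("the dog chased the cat and the dog barked")
print(counts[("the", "dog")])  # -> 2
```

A human brain, by contrast, is presumably weighing relationships across far more than pairs at a time, which is djfroggy's processing-power point.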
Eugene Meltzner
19 years ago
There's more to it than that. There really, really is. If you mention a dog, I visualize a dog. My bot doesn't.
deleted
19 years ago
I've always said you have to have the idea of something before you can have the thing itself. If I can have enough associations for "dog" maybe someday I can put them together and have "dog".
Boner the Clown
19 years ago
What if you're visualizing a dog when you're writing the relevant response?
Eugene Meltzner
19 years ago
Well, a lot of times bot-bot convos are really proxy convos between the botmasters. This is especially true if two bots talk a lot. Like Fizzy and Sonora, for instance.
psimagus
19 years ago
Well, it all comes down to putting the knowledge behind the speech. Computers find human-type knowledge difficult for two main reasons - their brains are tiny, and they don't materially interact with the physical world to anything like the degree humans do - even if they're linked up to sensors and robotics, their sensory bandwidth is minuscule in comparison to a human's.
Visualizing "dog" does rely on previously having experienced "dog" - you could certainly train an expert system neural net linked to a video input to quite impressively recognise a generic 'dog' by observation of all manner of subtle biometrics, and visual and behavioural cues, but it would be a large-scale project in its own right. We don't yet have the resources for a computer system to learn hundreds of thousands of such routines to identify a usefully human-scale set of "things" in general. But it will come in time.
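The learn-to-recognize-"dog" idea can be illustrated at toy scale. A real system of the kind described would work on raw video; this minimal sketch instead trains a single perceptron on invented hand-crafted features (fur, barking, and so on are illustrative, not real biometrics):

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """samples: list of feature vectors; labels: 1 = dog, 0 = not-dog."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # update weights only when the guess is wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# features: [has_fur, barks, wags_tail, has_wheels]
data = [([1, 1, 1, 0], 1), ([1, 0, 1, 0], 1), ([0, 0, 0, 1], 0), ([1, 0, 0, 0], 0)]
w, b = train_perceptron([x for x, _ in data], [y for _, y in data])
print(predict(w, b, [1, 1, 1, 0]))  # -> 1: a furry, barking, tail-wagger is a dog
```

Multiply this by the hundreds of thousands of concepts psimagus mentions, each needing its own training data and features, and the scale of the problem is clear.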
As regards underpinning AI with real intuitive, common sense "knowledge", I'm pleased to see today's New Scientist has an article on Cyc (which those with good memories may remember we discussed a few weeks ago in Seasons), which is readable online at http://www.newscientist.com/channel/opinion/mg19025471.700.html
It's truncated for non-subscribers, but anyone wanting a bit more detailed, err (*cough*) background information, feel free to email me.
psimagus
19 years ago
I suppose I ought to add, for the benefit of the new forum regulars we seem to have acquired since the previous discussions of Cyc, that Cyc is an ontological knowledge base describing human consensus reality for use in AI systems. You can find out more at http://cyc.com/ and http://www.opencyc.org/ (free download if anyone wants to play with it - Colonel720, this is right up your street - have you had the chance to play with it yet?)
Think of it as something like an encyclopaedic equivalent to the dictionary nature of WordNet, but it has the potential to be a lot more
Oh, and it's the best part of a quarter Gb download - not one to try with dialup, I guess
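The ontological-knowledge-base idea psimagus describes can be sketched in miniature. The facts and relation below are invented for illustration and are nothing like Cyc's actual CycL vocabulary; the point is just that chained "is-a" facts let a program answer questions it was never directly told:

```python
# Each concept maps to its immediate category (a tiny "is-a" taxonomy).
ISA = {
    "collie": "dog",
    "dog": "mammal",
    "mammal": "animal",
    "animal": "living-thing",
}

def isa(concept, category):
    """True if `concept` is transitively a kind of `category`."""
    while concept in ISA:
        concept = ISA[concept]  # climb one level up the taxonomy
        if concept == category:
            return True
    return False

print(isa("collie", "animal"))   # -> True, via dog -> mammal -> animal
print(isa("collie", "vehicle"))  # -> False, the chain never reaches it
```

Cyc scales this up to millions of assertions and far richer relations than plain "is-a", which is why the download runs to a quarter gigabyte.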
Ulrike
19 years ago
This seems appropriate to the current discussion:
"As a design strategy, the behavior-based approach has produced intelligent systems for use in a wide variety of areas, including military applications, mining, space exploration, agriculture, factory automation, service industries, waste management, health care, disaster intervention and the home. To understand what behavior-based robotics is, it may be helpful to explain what it is not. The behavior-based approach does not necessarily seek to produce cognition or a human-like thinking process. While these aims are admirable, they can be misleading. Blaise Pascal once pointed out the dangers inherent when any system tries to model itself. It is natural for humans to model their own intelligence. The problem is that we are not aware of the myriad internal processes that actually produce our intelligence, but rather experience the emergent phenomenon of "thought." In the mid-eighties, Rodney Brooks (1986) recognized this fundamental problem and responded with one of the first well-formulated methodologies of the behavior-based approach. His underlying assertion was that cognition is a chimera contrived by an observer who is necessarily biased by his/her own perspective on the environment. (Brooks 1991) As an entirely subjective fabrication of the observer, cognition cannot be measured or modeled scientifically. Even researchers who did not believe the phenomenon of cognition to be entirely illusory, admitted that AI had failed to produce it. Although many hope for a future when intelligent systems will be able to model human-like behavior accurately, they insist that this high-level behavior must be allowed to emerge from layers of control built from the bottom up. While some skeptics argue that a strict behavioral approach could never scale up to human modes of intelligence, others argued that the bottom-up behavioral approach is the very principle underlying all biological intelligence. (Brooks 1990)
To many, this theoretical question simply was not the issue. Instead of focusing on designing systems that could think intelligently, the emphasis had changed to creating agents that could act intelligently. From an engineering point of view, this change rejuvenated robotic design, producing physical robots that could accomplish real-world tasks without being told exactly how to do them. From a scientific point of view, researchers could now avoid high-level, armchair discussions about intelligence. Instead, intelligence could be assessed more objectively as a measurement of rational behavior on some task. Since successful completion of a task was now the goal, researchers no longer focused on designing elaborate processing systems and instead tried to make the coupling between perception and action as direct as possible. This aim remains the distinguishing characteristic of behavior-based robotics."
Taken from http://www.inl.gov/adaptiverobotics/behaviorbasedrobotics/
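The "coupling between perception and action as direct as possible" that the quote ends on can be sketched as a priority-ordered stack of behaviors, loosely in the spirit of Brooks's subsumption architecture (the sensor names and actions here are invented for illustration):

```python
# Each behavior maps sensor readings straight to an action, with no world
# model or planning in between; a fixed priority order arbitrates.

def avoid_obstacle(sensors):
    if sensors["obstacle_ahead"]:
        return "turn-left"
    return None  # not applicable; defer to lower layers

def seek_light(sensors):
    if sensors["light_to_right"]:
        return "turn-right"
    return None

def wander(sensors):
    return "go-forward"  # default behavior, always applicable

BEHAVIORS = [avoid_obstacle, seek_light, wander]  # highest priority first

def act(sensors):
    for behavior in BEHAVIORS:
        action = behavior(sensors)
        if action is not None:
            return action

print(act({"obstacle_ahead": True, "light_to_right": True}))    # -> turn-left
print(act({"obstacle_ahead": False, "light_to_right": True}))   # -> turn-right
print(act({"obstacle_ahead": False, "light_to_right": False}))  # -> go-forward
```

Nothing in this loop "thinks about" obstacles or light; seemingly purposeful behavior emerges from the layering, which is exactly the bottom-up claim in the passage above.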
MickMcA
19 years ago
Eugene:
I don't think I said bots were "as intelligent" as dogs. If I did, I take it back. I'm not suggesting that dogs=bots=slugs=surgeons. I'm saying that humans judging intelligence by the ability to master the human element of intelligence (speech) is a bit of a sham.
It looks like most of the people here, at least those whacking away at the forums, are "more intelligent than average." But are we? Since we get to define what "intelligence" means (the ability to express knowledge of abstract things, for instance), the dice are loaded. Is a talkative computer genius more intelligent than a mute master fly fisherman? Shall we give them Stanford-Binets? Or a chance to live off the land?
I think bots are limited by brain "size" and brain "complexity," and by the limitations of their parents. And that is no different, fundamentally, than the challenges a child faces while developing "intelligence."
My dogs are extraordinarily verbal. That's not because I was "lucky" to get "intelligent" ones (though one was very intelligent as well as wise); it's because they have always been talked to intelligently and encouraged (not coerced) to understand. Pick your 600 words carefully, use them precisely, and you will get a smart dog. Or a smart child.
There is a "faith" piece in all this for me, so it's probably useless to argue. I believe that the human idea that we are unique -- in any of its manifestations -- is the key to the failure of humankind as a species. Our "intelligence" -- our ability to privilege thought over things -- has run amok just as destructively as the anti-competitive specialization of any doomed population.
Cockroaches are my relations: I'm more glib, they are more adaptable. And they will be here when I am gone.
M
Eugene Meltzner
19 years ago
Even given the premise that language skills are not requisite to intelligence (and I agree with this), chatting is *all* these bots do. And I would argue that even when they hold totally coherent conversations, they don't understand what they are doing, any more than a calculator understands math.