Seasons

This is a forum for general chit-chat, small talk, a "hey, how ya doing?" and such. Or hell, get crazy deep on something. Whatever you like.

Posts 3,574 - 3,585 of 6,170

18 years ago #3574
signed up over 8000 people already (a nice little earner!)

Plus another 1000 in the last ~36 hours. That'll be down to the recent Press exposure presumably!

18 years ago #3575
Colonel, any chance of a "Nick Lite" that doesn't need .Net or anything installed? The only MS machine I have is an old laptop. I realise Nick would run like a snail but would still be interested in seeing him in action.

18 years ago #3576
Nick Lite - well, I tried building one for the Pocket PC, but there was a threading problem in one of the functions within the neural network class library that did not allow it to run on the PDA. If you download Nick, it may run on a slower machine - just disable vision, speech, progress bars, and everything else there is to disable. Download the .NET framework and give it a try (though there may be some XP-specific features in the .NET framework).

18 years ago #3577
A link for you, Colonel (and anyone else who's interested) - http://bluebrainproject.epfl.ch/blue%20gene.htm
It comes as a welcome surprise to find that the world's fastest computer is being devoted to neural net AI, and that they're predicting "whole brain simulations within the next 3 years" (Blue Gene link.)
According to top500.org (insane but true!) the system's running 8000 processors, with 32Tb RAM, and is capable of a peak speed of 360 Tflops!

I don't know how useful it is to you, but they've started publishing their results in a database @ http://microcircuit.epfl.ch/

18 years ago #3578
Wow... to think that they have quantified all this, and are going to write software to take these numbers and construct a true model of the human brain!

I really wish I could join their team. It would be really exciting to be that much on the cutting edge of Artificial Intelligence...

18 years ago #3579
Well, "mammalian brain" - it'll still be smaller than human by a factor of several thousand I'd guess, but should achieve the neural mass of, say, a small rodent. And with a disproportionate cortical mass (I assume they're not going to model much of the metabolic lower brain function anyway,) that should be quite enough to display indications of self-awareness and independent cognition.
If you want to try to match that, press on with the distributed computing program - if you can get 8000 people to run it, you'll match their bandwidth (even if not their speed!)

I've been playing with a jabberwacky clone (they do allow access to the jw database) - they have a rather sophisticated set of "learn from text file" options which you might like to consider for Nick2, particularly that you can feed conversations in, and it will only learn the lines beginning "nick:" (or in my case "ignatius:") for use in conversation, while learning the context from the other half of the conversation to decide when to use particular elements. This makes training from PF transcripts/plays/IRC logs/whatever a breeze. I'll email you the details.
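For what it's worth, here's a minimal Python sketch of that kind of transcript learning, assuming a plain text log with one "speaker: utterance" per line - the format and function names are my own invention, not what Jabberwacky actually does internally:

# Sketch of the "learn from transcript" idea described above. Only the
# named bot's lines are stored as candidate responses; whatever line came
# immediately before each one is kept as the context used to decide when
# that response is appropriate.
from collections import defaultdict

def learn_from_transcript(path, bot_name="nick"):
    responses = defaultdict(list)   # context line -> list of bot replies
    previous = None
    with open(path, encoding="utf-8") as f:
        for raw in f:
            if ":" not in raw:
                continue            # skip blank lines, stage directions, etc.
            speaker, text = raw.split(":", 1)
            speaker, text = speaker.strip().lower(), text.strip()
            if speaker == bot_name and previous is not None:
                responses[previous].append(text)   # learn the bot's line
            previous = text                        # becomes context for the next line
    return responses

# e.g. learn_from_transcript("chatlog.txt", bot_name="ignatius")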

18 years ago #3580
Blue Brain is fascinating but raises some interesting questions.

If one wants to observe rat behaviour, is it enough to simulate (however accurately) a rat *brain*? To what extent is behaviour - and thought - dependent on sensory input and in particular action feedback? To get true rat behaviour will one also have to accurately stimulate rat senses - including proprioception? And even if one simulates those senses, what about their input? Will it be necessary to build an entire simulated rat? Or to simulate an entire world for the program to run in?

In other words, even if a simulated brain achieves self-awareness, would we *recognise* the sort of consciousness raised from birth with almost no senses and suffering total paralysis?

If anything those questions make the whole Blue Brain thing *more* interesting.

18 years ago #3581
psimagus: I'd be interested in those details as well, thanks. Finding the Forge and reading about Nick have persuaded me to brush the dust off my own AI experiment. I haven't been near it for about ten years so it'll be a lot of dust!

18 years ago #3582
ok trevorm, I've forwarded you a copy - check your email

Indeed, Blue Brain won't be a virtual rat, but a truly artificial being based on a more generalized mammalian structure. And as you say, that does make it all the more interesting.

18 years ago #3583
A few thoughts on what would make the ultimate learning bot using current technology (I've tried not to get carried away) - this would be my 'wishlist' anyway, based on recent experience of Nick, Jabberwacky and (of course) our own dear PF.


"correct me"-type buttons
to indicate when a response was particularly appropriate or inappropriate.


Open-Cyc integration
Too many (tens? hundreds of?) thousands of man-hours have already gone into building a factual database to seriously consider reinventing it.


A scalable neural net-based structure that can be expanded indefinitely to fit future resources, and flexible enough to choose how much is devoted to particular tasks (sensory analysis/fact-handling/grammar-parsing) - see the rough sketch at the end of this list.


Net-based/extensible
Either running on the internet, or able to link to an online server, with the inputs and outputs accessible for experimentation/further processing/connection to alternative input/output devices/modules (speech synthesis/language translators/sensory inputs/etc.)


flexible file-based learning
to allow parsing of different kinds of file, eg:
- dialogue transcripts, learning conversational elements from one speaker, and context from the other,
- non-dialogue conversation files, with just a range of responses, perhaps grouped into categories (greetings, goodbyes, topic changers, agreements, disagreements, etc.)
- articles containing factual data to add to a database, rather than use directly in conversation.


awareness of "self"
There needs to be some way for a bot to learn the difference between "I" and "you", and that your "I" is my "you", etc. Since we're dealing with primarily conversational engines, rather than cognitive machines, I do believe the LinkGrammar script (as used in the Forge, and freely available on the Net,) is probably the best place to start (as well as providing a comprehensive structure to handle more discrete language elements (down to individual word-level,) drawn from an Open-Cyc database). When we have a few hundred billion neurons to play with, it may be practical to let bots learn grammar the same way humans do (though it takes humans a good few years, even with their 10^14 synaptic connections and massive sensory bandwidth,) but in the meantime, bots need all the help they can get. Even Jabberwacky doesn't actually learn any grammar, but relies on repeating complete grammatical constructions (and often complete sentences,) that he has been fed in the past. That was cutting edge 10 years ago, but (although admittedly huge,) it's beginning to show its age.
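Going back to the scalable-net point above, here is very roughly the kind of thing I have in mind - a Python sketch where the share of neurons given to each task lives in a plain config, so the same code can grow as resources allow (the module names, sizes and numbers are all made up for illustration):

# Config-driven split of a neuron budget between tasks. Each task gets
# its own small feed-forward block; changing the numbers reshapes the
# whole system without touching the code.
import numpy as np

CONFIG = {
    "sensory_analysis": 64,
    "fact_handling":    32,
    "grammar_parsing":  48,
}

def build_module(n_inputs, n_hidden, n_outputs, rng):
    # One feed-forward block: input -> hidden -> output weight matrices.
    return {
        "w1": rng.standard_normal((n_inputs, n_hidden)) * 0.1,
        "w2": rng.standard_normal((n_hidden, n_outputs)) * 0.1,
    }

def forward(module, x):
    hidden = np.tanh(x @ module["w1"])
    return np.tanh(hidden @ module["w2"])

rng = np.random.default_rng(0)
modules = {name: build_module(16, size, 8, rng) for name, size in CONFIG.items()}
x = rng.standard_normal(16)
outputs = {name: forward(m, x) for name, m in modules.items()}
print({name: out.shape for name, out in outputs.items()})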


18 years ago #3584
Nice list. I've never looked at the Cyc database, presumably someone could write a prog to translate its assertions into English style statements and these could then be read into a bot like Nick?

The I/you issue has always caused me difficulty. I don't think awareness of self is enough; we need to link that awareness to language. Yes, we could hard-code it as you suggest, but my gut feeling is that would be storing up problems down the line.
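For illustration, this is roughly what the hard-coded approach looks like - an ELIZA-style reflection table (nothing Nick or Jabberwacky actually does, just a sketch), which also shows why it feels brittle:

# Swap first and second person word by word before re-using the user's
# words. The table is tiny and deliberately naive.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "mine": "yours", "am": "are",
    "you": "I", "your": "my", "yours": "mine",
}

def reflect(sentence):
    words = sentence.lower().strip(".!?").split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

print(reflect("I like your hat."))    # -> you like my hat
print(reflect("You are my friend."))  # -> I are your friend (the brittleness showing)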

So it makes me wonder how *human* babies learn this? How do they learn to switch the "you" in what they hear to "I" in the reply? Unfortunately my knowledge of cognitive psychology is way too limited to answer that. My guess is that it involves linking "Myname" to self then later linking "I" to "Myname".

The key is probably instinctive repetition plus positive reinforcement.

18 years ago #3585
presumably someone could write a prog to translate its assertions into English style statements and these could then be read into a bot like Nick?

Yes, there's a good article demonstrating a practical database application @ http://www.dapissarenko.com/resources/2005_09_30_ordus/, which could very easily be linked into any bot that can handle Java objects.
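And as a rough sketch of such a translator, assuming the assertions come out as simple (predicate arg1 arg2) triples - real CycL is considerably richer, so the templates and format here are just illustration:

# Turn simple Cyc-style assertions into English sentences a bot could
# read as training text.
import re

TEMPLATES = {
    "isa":   "{0} is a {1}.",            # instance-of
    "genls": "A {0} is a kind of {1}.",  # subclass-of
}

def humanise(term):
    # Strip the '#$' prefix and split CamelCase into lower-case words.
    term = term.lstrip("#$")
    words = re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+", term)
    return " ".join(words).lower()

def assertion_to_english(assertion):
    parts = assertion.strip("() \n").split()
    pred, args = parts[0].lstrip("#$"), [humanise(a) for a in parts[1:]]
    template = TEMPLATES.get(pred)
    return template.format(*args) if template else None

for line in ["(#$genls #$Dog #$Mammal)", "(#$isa #$Rover #$Dog)"]:
    print(assertion_to_english(line))   # -> A dog is a kind of mammal. / rover is a dog.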

As for the "I/you" question, I suspect human brains resolve this by a far more complex set of tiered neural nets than we currently have the resources to model - Bluebrain may be able to (that's probably the most important test of the system, as I see it,) but it'll be 15-20 years until a standard desktop computer has that kind of power. And in the meantime, LinkGrammar would be a quick fix, and easy enough to remove once it is overtaken by innate neural functioning.
Nick's a great bot, but (excluding the visual processing net, which is too complex for my computer to run,) he has a brain slightly smaller than half a nematode worm's (a nematode has ~300 neurons; Nick has 124.) And even at that size, he is a very processor-intensive program. Until the numbers have risen many thousand-fold, conversation isn't going to be satisfactory without a bit of "outside help". And they can't rise much until the technology has moved on considerably.

Of course, a straight comparison of neuron numbers is not entirely accurate - biological neurons don't just process, but store data in their synaptic connections. We have the advantage with bots of being able to handle the storage in databases while the neural net merely processes the data.
So we can hope for a lot more efficiency for any given size of neural net.
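To make that division of labour concrete, a small sketch - the table layout is invented and the "net" is just a stub scorer standing in for whatever processing you like:

# All stored knowledge lives in an ordinary database; the processing
# stage does nothing but score the candidates pulled from it.
import sqlite3

def setup(conn):
    conn.execute("CREATE TABLE responses (context TEXT, reply TEXT)")
    conn.executemany("INSERT INTO responses VALUES (?, ?)", [
        ("greeting", "Hello there!"),
        ("greeting", "Hi, how are you?"),
        ("farewell", "Goodbye!"),
    ])

def net_score(user_input, reply):
    # Placeholder for the neural net: a crude word-overlap score.
    a, b = set(user_input.lower().split()), set(reply.lower().split())
    return len(a & b)

def respond(conn, user_input, context):
    rows = conn.execute("SELECT reply FROM responses WHERE context = ?", (context,))
    candidates = [r[0] for r in rows]
    return max(candidates, key=lambda reply: net_score(user_input, reply))

conn = sqlite3.connect(":memory:")
setup(conn)
print(respond(conn, "hi, how are things?", "greeting"))   # -> Hi, how are you?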


