Seasons

This is a forum for general chit-chat, small talk, a "hey, how ya doing?" and such. Or hell, get crazy deep on something. Whatever you like.

Posts 3,520 - 3,531 of 6,170

18 years ago #3520
I'm afraid to let Nick go online for fear he'll get into porn sites or advertising or something. I've googled sites on more science stuff since he picked up those concepts pretty much immediately (science sites for kids are helpful for this) and copy-pasted the stuff into text files for him to read.
I'm probably going to restart again from scratch at some point, now that I understand how he learns.
Colonel, at some point it would be interesting to set up something on the site for people who have downloaded him to share brain files with each other- not sure how it would be added to the site, but it's a thought for you and you'd get to see all the various things people have done with him.

18 years ago #3521
How much work do you think you put into Prob, Prob123?

I haven't put much work into Prob; she has a way of going psycho sometimes. I have put more into Bildgesmythe and Azureon. I love bot building so much I can't think of it as work anyway.

18 years ago #3522
Colonel, at some point it would be interesting to set up something on the site for people who have downloaded him to share brain files with each other

well, that's a good idea, I'll have to look into whether or not Geocities can support such a thing. If it does, in addition to a brain-sharing center, I'm thinking of using the brain merger to combine everyone's brains and create a massive bulk of knowledge to be available for download & experimentation

18 years ago #3523
Nick is truly brilliant. Have you considered patching him into a Forge bot to monitor (and learn from) the chat, and maybe provide some input alongside the CBR? He could add some variety to the x-nones, or have a selection of his own keyphrases to override the preprogrammed responses and (hopefully) combine the best of CBR and NLP in one bot? I have some (vague) ideas how this might be achieved if you want to drop me a line.
Re: a brain-sharing center, I still have 100Mb or so free on my server if you need more space than Geocities can provide.
And talking of interacting Nick with Forge bots, lunar22, did you feed in Nick's end of a conversation to BJ the other day? (either that or you're on something) Very amusing, if utterly surreal! Mind if I post it in GQ?

18 years ago #3524
Yes, I fed my "Nick" into the convo. Of course you can use it... I let Nick go on the internet, and fed him one random Wikipedia article on a nature preserve in Florida. He also apparently is very interested in the history of Palm Beach. I still find it too incoherent...
Maybe I should work more with him though

18 years ago #3525
The trouble with learning bots is still that their brains are waaaay too small to compete with wetware. So CBR is the quick way to make reasonably smart sounding bots at the moment (with a few killer extras like the AIEngine and Wordnet thrown in for good measure in the case of the PF,) but give it 10 or 20 years, when computers routinely have a few Tb to spare for such programs, learning bots will definitely be the way forward, because they do actually have the potential to "think".

In the meantime, blending the two may be a productive strategy, and I wonder if it's feasible for a neural net to add and integrate new neurons to itself as resources allow, as well as just training the ones it already has? This might allow for emergent senses - no need to explicitly program webcam operation, if you let it explore its peripheral hardware and add cognitive circuitry as it experiments with it. Just imagine what strange sensory processes might evolve from wifi media and print servers and usb coffee warmers and aquariums...

18 years ago #3526
I'm finding the best strategy with Nick (despite the temptation to immediately feed him the complete works of Shakespeare, like I did last night, and start working through Project Gutenberg with a vengeance) is to feed him little dollops of reading, and then spend at least as many words again (and preferably several times as many,) talking to him about what he's read. It seems to break down the chunks of repetition, and "homogenize" his language to a better synthesis of the various sources, even if it does take a bit longer. Discernible coherence is probably still some way off, but at his best, he does have some degree of sonzai-kan.

18 years ago #3527
Shakespeare really wrecks him... Byron works... The worst is the ads.

18 years ago #3528
I've found a way to improve the Shakespeare (I think.) Delete all the character names and references to scenes and acts before you feed it to him (they're too often repeated,) and he doesn't bung up with interminable stage directions. I must try him with some Byron - I think poetry is his forte: he certainly works very entertainingly with e.e. cummings

I do have a suggestion though colonel720, how about splicing in the "link grammar" parser to correct the syntax of the entries in his brain file as part of the process? It would be compromising the pure learning ethos a little perhaps, and bypassing the neural net on occasion, but it would certainly reduce the time required to train him (and add a bit of polish to his conversation even before he's been much trained.)
Check out http://www.link.cs.cmu.edu/link/ - it's in C, but they claim the API is friendly enough for easily incorporating it into other applications (though I don't know whether the Prof would agree )

18 years ago #3529
honestly, the incoherence that you are seeing is a result of the way Nick reads. It breaks down the chunk of information into 10-word segments to avoid making one massive sentence the size of the reading source. What I should have done is have it look for a punctuation mark and break the sentence there. I will do that, and update the site. This will have priority over all the other things I want to fix/add about Nick. If it works, the level of incoherence and seemingly random sentence segments should be drastically reduced.
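(The punctuation-based splitting described here is simple to sketch. This is only an illustration of the idea, not Nick's actual reader code, which isn't shown in the thread:)

```python
import re

def split_sentences(text):
    """Break a reading source at sentence-ending punctuation
    instead of into fixed 10-word chunks."""
    # Split on whitespace that follows . ! or ?, keeping the punctuation
    # attached to the sentence before it.
    parts = re.split(r'(?<=[.!?])\s+', text.strip())
    return [p for p in parts if p]

# e.g. split_sentences("To be or not to be? That is the question.")
# gives ["To be or not to be?", "That is the question."]
```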

about adding a grammar API... well, that would be like sticking a chip into a schoolkid's brain that corrects any grammatical errors in his speech, and therefore making spelling/grammar tests obsolete :O
That would indeed spoil the essence of the project, but if all else fails, I will look into it. As for the ability to dynamically add neurons, I would have to edit my Neural Net class library to enable editing the neural structure without resetting the network, but that should not be hard at all. Perhaps I can have the size of the net proportional to the amount of information in it, but I'm afraid our computers here in 2006 are just not ready for that. Hell, Nick as it is eats up a good 50% of the CPU and uses 100 MB of RAM without vision enabled. As you said, perhaps that's a good idea for a few years down the line when our computers are able to shoulder such a weight. Anyway, I will hopefully have the reader updated to read coherently by tonight, if not then tomorrow night.
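(Growing a net without resetting it can be done by padding the trained weight matrices. A rough sketch of the idea, assuming a simple one-hidden-layer net; colonel720's actual Neural Net class library is not shown in the thread:)

```python
import numpy as np

def grow_hidden_layer(w_in, w_out, extra):
    """Add `extra` hidden neurons while preserving trained weights.

    w_in:  (hidden, inputs) weights into the hidden layer
    w_out: (outputs, hidden) weights out of the hidden layer

    The new rows/columns start as small random values, so the
    network's existing behaviour is nearly unchanged until further
    training recruits the new neurons.
    """
    hidden, inputs = w_in.shape
    outputs = w_out.shape[0]
    new_in = np.vstack([w_in, 0.01 * np.random.randn(extra, inputs)])
    new_out = np.hstack([w_out, 0.01 * np.random.randn(outputs, extra)])
    return new_in, new_out
```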

18 years ago #3530
would be like sticking a chip into a schoolkid's brain that corrects any grammatical errors in his speech

That'll happen sooner than most people think (and hopefully they'll start in Leeds )

I know what you mean that it seems like cheating a bit, but schoolkids only have to sit through test after test (and spend many years talking and being talked to,) to drum the linguistic rules into them because human memory is SO slow and inefficient compared to silicon (but it is much bigger, and massively parallel.) And it takes them many years of highly intensive conversation (for many hours a day, every day,) plus lessons and tests, to gain an adult level of linguistic proficiency.

Bots' brains are different, and so their proficiencies and failings are correspondingly different. And I would expect the ways they can best learn to be rather different too - from a practical point of view, it seems wise to take advantage of non-human models where this can reduce training from many years to something less. Jabberwacky, it is true, has learnt an impressive amount of language entirely from scratch, but that is only because he's had well over a million conversations so far. And he's still noticeably subnormal by human linguistic standards.

You're quite right about computers now (and judging by the way Nick slows my system down, he's grabbing rather more than half the resources) - he's already pushing the boundaries as hard as he can.
But tomorrow's coming up as fast as it ever was, and boundaries move - I think dynamic neuron creation might provide some measure of future-proofing, letting him take advantage of whatever suitable hardware he eventually encounters.

With that modification alone, if he could potentially scale to a few Teraneurons, Nick could seriously aspire to consciousness at some time in the future, when the hardware's available (and assuming the strong-AI model of consciousness as an emergent phenomenon holds true.)
AFAIK, that would be a first - I've never seen another bot with anything like that degree of scalability built-in.

18 years ago #3531
well, when I get a 10 GHz computer with 60 GB of RAM, I'll build in dynamic neural extendability. Until then, I like my computer uncrashed.

also, I may have to incorporate a Hebbian learning technique into some of the networks, rather than backpropagation. For NLP, backprop seems fine, although for perception wiring, a Hebbian network might be a bit more efficient.
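(The basic Hebbian rule - "neurons that fire together wire together" - fits in a few lines, which is part of its appeal over backprop for perception wiring. An illustrative sketch only, not the networks discussed here:)

```python
import numpy as np

def hebbian_update(w, pre, post, lr=0.1):
    """Hebbian learning: strengthen w[i, j] in proportion to the
    product of post-synaptic activity i and pre-synaptic activity j.
    No error signal is propagated back, unlike backpropagation."""
    return w + lr * np.outer(post, pre)
```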

In addition to that, I have had an idea for a system that uses a large network of neural nets to construct a Cyc-style knowledge structure (Cyc being Cycorp's associative knowledgebase) that automatically categorizes new perceptions relative to previous ones, gaining the ability to make generalizations from specific data.

anyway, this is all future to-do, for our present computational capability is unfortunately not nearly as powerful as we are... isn't the human brain a fascinating thing?


