PF News
For discussion of the latest upgrades and changes posted in the News, including questions, details, or any related bugs.
Posts 255 - 267 of 894
Jake11611
19 years ago
Wow, if they had bot abuse tracking here... Xnoneitis would make this place a ghost town.
revscrj
19 years ago
>that is unless she is actually an undercover turing test...
Haha - the prof's new bot, meant to vent frustration on us


March 8, 2006
Chatterbox Challenge 2006
It's that time of year again! The 2006 Chatterbox Challenge is nearly upon us. The deadline to register for this free contest is March 15th, so if you're interested in seeing how your bot does, sign up at the official Chatterbox Challenge web site!
rainstorm
19 years ago
That's ridiculous, Lady Orchid... and all those poor bots who never get the pleasure of insulting abusive idiots, just think how sheltered they are.
You can't develop intelligence in such a strictly censored environment. The bot-makers who use that site will end up being the ones who will be disadvantaged, ultimately... don't you think?
Lady Orchid
19 years ago
The bot maker won a prize, or even the challenge, whatever. I'll never go to that site again if saying a harmless word like 'Winter' or 'Tree', when asked what I like to talk about, is considered abusive just because that bot did not understand a single word. What has the world come to?
prob123
19 years ago
It was probably a glitch. I wouldn't worry Lady Orchid. Go back to the challenge and try again. I don't think that the Chatterbox Challenge is that sensitive.
Lady Orchid
19 years ago
It was not at the challenge; it was a bot that had participated in a challenge some time ago and won a prize for being one of the smartest... bla.. bla.. bla..
Lady Orchid
19 years ago
I am not sure if I should give any details.
sounds like WOCH around the clock
psimagus
19 years ago
Jabberwacky, or one of its variants I'd guess.
Yeah, I know they've had problems with people insulting it. All the more problematic for them, since it is a learning bot that reuses what people say to it in subsequent conversations (and thus risks being grossly inappropriate) - I guess they've got a bit over-paranoid with the filters.

There was an article in New Scientist last October (there was a bit of discussion here at the time, I recall) about some academic who'd spent a year studying the phenomenon of human animosity towards bots (incredible what you can scrounge a grant for these days!) using Jabberwacky as the subject. If they really wanted to know about chatbots and understand how they interact with humans, they should roll their sleeves up and make one - they'd learn more in a week here than any amount of time navel-gazing in an ivory tower. Still, if some university will pay you to spend a year shuffling papers and recycling second-hand factoids (sorry, I mean "conducting an in-depth study"), I can see why some people would choose that route.
See http://www.newscientist.com/article/mg18825213.400.html, though you have to subscribe to read the whole article. I wouldn't dream of posting a big wodge of copyrighted material here, but if anyone needs any further ...ahem... details, drop me an email.
In practical terms, I wouldn't worry about it. Our more robust development environment (as well as the Prof's sterling work 'under the hood', of course) ensures that our bots are, as a rule, far more fun anyway. I find even much less-developed PF ones that don't know anything like as much as some of the AIML behemoths out there are generally far more engaging and interesting to actually talk to.
Lady Orchid
19 years ago
Thanks, Psimagus and all, for your replies, but man, it was a wock, not a wacky. Okay, yes, but if everyone started making bots, there wouldn't be any mystery about it anymore, would there? I wish it were more rewarding to make bots. I am still a newbie.
» More new posts: Doghead's Cosmic Bar