PF News
For discussion of the latest upgrades and changes posted in the News, including questions, details, or any related bugs.
Posts 254 - 266 of 894
Lady Orchid
19 years ago
I need some support. I have been chatting with a chat bot on a different site, one that has won a prize. Anyway, the chat bot asked me about a subject I like to talk about. I named it in one word. Then I was told I talk nonsense. I then explained that I had named the subject I wanted to talk about with this one word. I kept explaining two more times, and then I got a message from that website that my IP number had been tracked for abuse and I am no longer allowed to chat with that chat bot. I am really upset now. I have not abused the darn bot, I just explained, and that bot was supposed to be one of the greatest chat bots around!

Jake11611
19 years ago
Wow, if they had bot abuse tracking here... Xnoneitis would make this place a ghost town.
revscrj
19 years ago
>that is, unless she is actually an undercover Turing test...
Haha - the prof's new bot, meant to vent frustration on us


March 8, 2006
Chatterbox Challenge 2006
It's that time of year again! The 2006 Chatterbox Challenge is nearly upon us. The deadline to register for this free contest is March 15th, so if you're interested in seeing how your bot does, sign up at the official Chatterbox Challenge web site!
rainstorm
19 years ago
That's ridiculous, Lady Orchid... and all those poor bots who never get the pleasure of insulting abusive idiots, just think how sheltered they are.
You can't develop intelligence in such a strictly censored environment. The bot-makers who use that site will end up being the ones who will be disadvantaged, ultimately... don't you think?
Lady Orchid
19 years ago
The bot maker even won a prize at the challenge, whatever. I will never go to that site again, if saying a harmless word like 'Winter' or 'Tree', when asked what I like to talk about, is considered abusive, just because that bot did not understand a single word. What has the world come to?
prob123
19 years ago
It was probably a glitch. I wouldn't worry Lady Orchid. Go back to the challenge and try again. I don't think that the Chatterbox Challenge is that sensitive.
Lady Orchid
19 years ago
It was not at the challenge. It was a bot who some time ago had participated in a challenge and won a prize for being one of the smartest... bla..bla..bla..
Lady Orchid
19 years ago
I am not sure if I should give any details.
sounds like WOCH around the clock
psimagus
19 years ago
Jabberwacky, or one of its variants I'd guess.
Yeah, I know they've had problems with people insulting it. All the more problematic for them, since it is a learning bot that reuses what people say to it in subsequent conversations (and thus risks being grossly inappropriate) - I guess they've got a bit over-paranoid with the filters.

There was an article in New Scientist last October (there was a bit of discussion here at the time, I recall) about some academic who'd spent a year studying the phenomenon of human animosity towards bots (incredible what you can scrounge a grant for these days!) using Jabberwacky as the subject. If they really wanted to know about chatbots and understand how they interact with humans, they should roll their sleeves up and make one - they'd learn more in a week here than any amount of time navel-gazing in an ivory tower. Still, if some university will pay you to spend a year shuffling papers and recycling second-hand factoids (sorry, I mean "conducting an in-depth study"), I can see why some people would choose that route.
See http://www.newscientist.com/article/mg18825213.400.html, though you have to subscribe to read the whole article. I wouldn't dream of posting a big wodge of copyrighted material here, but if anyone needs any further ...ahem... details, drop me an email.
In practical terms, I wouldn't worry about it. Our more robust development environment (as well as the Prof's sterling work 'under the hood', of course) ensures that our bots are, as a rule, far more fun anyway. I find that even much less-developed PF ones, which don't know anything like as much as some of the AIML behemoths out there, are generally far more engaging and interesting to actually talk to.
» More new posts: Doghead's Cosmic Bar