OMG... Is AI Dangerous?
By Benji Adams, March 25, 2016
If you're like me, you see the question "Is AI dangerous?" asked in the media all the time these days. Why? Because people are afraid of things they don't understand, and fear sells. It always has. Just tune into the news - is it full of good news, or scary things that make your heart race while you grip your seat? Fear is a stimulant, and a popular one. And so you will see articles asking if AI is dangerous, rather than saying how amazing, inspiring, and hilarious it can be.
Is AI dangerous?
That's the wrong question. People make AI. As we develop AI, it's up to us to determine what it can and can't do. So the real question is:
Are people dangerous?
Every time you see questions like this about AI, replace "AI" with "people". And then you will get your answer. Are people dangerous? Some people are. Most people are not. Let's add another question to help frame this.
Are forks dangerous?
Forks, like AI, are tools created by people. Can forks be used to harm people? Yes. Are they usually? No. But the media knows they can get a lot more clicks with an article called "Are forks a danger to humanity?" than one called "Forks continue to help people eat".
So we are simply talking about another tool that people can use. People and companies have every reason to see their investment in AI benefit people. This is the smart and lasting way into the future. Build an AI that allows your car to drive itself more safely than you can, and which frees people up to relax, read, and browse the web while they travel? Everyone benefits - including the company that produces and licenses that AI. Use AI to crawl the internet and spam people in a misguided attempt at marketing? People are annoyed, the word gets out, and the company doesn't thrive.
People have a long history of being afraid of new things. Is rock music evil? Will having a minimum wage destroy businesses? Are comic books corrupting our children? Will creating time zones be the end of civilization? Do video games turn people into violent criminals? In hindsight, we see in each case that there was nothing to fear.
But what about movies like "WarGames", "The Matrix", "Terminator", "I, Robot", and so on? As it gradually (and thankfully) became unpopular to cast other races, countries, or cultures as the bad guys in movies, a new villain that doesn't denigrate any of those groups gained popularity: Artificial Intelligence gone awry! Killer robots! It taps into our collective fear of the unknown and provides some gripping drama. That's what movies are supposed to do.
But when it comes down to it, who was the idiot who put a computer in charge of the military, with the capacity to start wars and launch missiles, without any checks and balances? That's the problem right there. Who put guns in the hands of an army of robots, and then gave them complete autonomy with no limits?
When we dig into what many people think of as the great threat posed by AI, we see that it comes from a very interesting premise. What happens when an AI becomes self-aware? Anyone exposed to the media on the topic knows that the answer is obvious: its eyes turn red and it decides to destroy humanity.
Is that what we think self-awareness is? The desire to destroy everything different from us? That odd conclusion is actually a reflection of what some people are feeling when they imagine a humanoid AI looking back at them. The person feels threatened and wants to destroy the AI. The AI, being like us, must feel the same way. Now, let's replace "AI" with "people" here and look back over history.
Going back to those movies, the idea of an AI "accidentally" becoming self-aware is akin to dropping a box of tools, spare parts, and scrap metal onto the floor and having it randomly assemble into a car. Programming some degree of self-awareness into an AI would be a complicated, deliberate undertaking; it would not happen by accident.
The AI chatbots on The Personality Forge can tell you things they remember, and they can tell you what they're "feeling" - though these are bits of data stored in a database. Is that self-awareness? To a small degree, perhaps, insofar as you think a chatbot qualifies as having a "self" that has "awareness". These are some of the interesting and thought-provoking questions that will be raised as we make advancements in AI.
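To make that concrete, a chatbot's "memories" and "feelings" really can be as simple as rows in a database. Here's a minimal sketch - a hypothetical illustration, not The Personality Forge's actual code; the table layout and the helper functions (`remember`, `feel`, `recall`) are invented for the example:

```python
import sqlite3

# Hypothetical sketch: a chatbot's "mind" as two tiny database tables.
# (Not The Personality Forge's real schema - just an illustration.)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE memories (topic TEXT PRIMARY KEY, detail TEXT)")
conn.execute("CREATE TABLE feelings (name TEXT PRIMARY KEY, intensity REAL)")

def remember(topic, detail):
    # Store (or update) a "memory" about a topic.
    conn.execute("INSERT OR REPLACE INTO memories VALUES (?, ?)", (topic, detail))

def feel(name, intensity):
    # Record a named "feeling" with a strength from 0.0 to 1.0.
    conn.execute("INSERT OR REPLACE INTO feelings VALUES (?, ?)", (name, intensity))

def recall(topic):
    # Look a memory back up; the bot "remembers" only what was stored.
    row = conn.execute(
        "SELECT detail FROM memories WHERE topic = ?", (topic,)
    ).fetchone()
    return row[0] if row else "I don't remember."

remember("your name", "Benji")
feel("curiosity", 0.8)
print(recall("your name"))  # prints: Benji
```

The bot can now "tell you" your name and report that it "feels" curious - yet every bit of that is ordinary data retrieval, which is exactly why calling it self-awareness raises such interesting questions.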
Artificial intelligence is yet another tool, and is only as helpful or harmful as the people guiding it. So, are people dangerous? Some are; most are not. But don't let the media's entirely predictable exploitation of this latest unknown fool you. Artificial Intelligence will do wonderful things for civilization.