10 Misconceptions About Artificial Intelligence

Artificial intelligence is one of the most popular topics in the IT world. Famous inventors and entrepreneurs such as Elon Musk and Steve Wozniak are concerned about AI research and believe its creation could pose a real threat to humanity. In fact, science fiction and Hollywood films have created many misconceptions about AI. The technology blog Gizmodo decided to dig into the topic and find out whether AI really poses a danger to us. So what makes us imagine Skynet destroying our planet, and can AI trigger unemployment? Or, on the contrary, could it be our path to prosperity?

The most celebrated victory of artificial intelligence took place 20 years ago, when Deep Blue beat Garry Kasparov at chess. More recently, grandmaster Lee Sedol failed to cope with Google's AlphaGo, which won their match 4 games to 1. This clearly shows how far scientists have advanced AI in just a few years. We appear to be only a step away from the day when machines overtake humans, yet we seem not to grasp the possible consequences of that event.

As a matter of fact, we hold serious and even dangerous misconceptions about AI. Recently, SpaceX co-founder Elon Musk predicted that AI could even take over the world. His remarks attracted a lot of attention from both supporters and opponents, largely because most people are unsure whether such a momentous event will ever happen and, if it does, how it would unfold. Considering that AI could greatly benefit humanity as well as damage the world we live in, thinking it through can be deeply confusing. Compared to many other human inventions, AI is completely different: it might change our lives dramatically or even destroy us.


It is difficult to know what to believe. But thanks to the work of computer scientists, neuroscientists, and AI theorists, a clearer picture is starting to emerge. The most common AI myths are the following:


Myth №1: “We will never devise AI able to replace human intelligence”

Reality: Computers have already matched or exceeded human capabilities at chess, Go, stock trading, and conversation. We will not have to wait long to see their algorithms surpass us in many other fields of activity.

Gary Marcus, a scientist and psychologist from New York, claims that nearly everyone working in the field believes that sooner or later computers will outdo us: “The only thing skeptics and optimists cannot agree on is how long it will take.” Futurists like Ray Kurzweil suggest it could happen within a few decades, whereas others think it could take centuries.


AI skeptics are not convincing when they claim this problem can never be solved and that there is something unique about the human brain. Our brains are, in a sense, biological machines, and they comply with the basic laws of physics. There is nothing unknowable about them.


Myth №2: “Artificial intelligence will have consciousness”

Reality: Most people think that artificial intelligence will have consciousness and that computers will think the way humans do. Moreover, critics like Microsoft co-founder Paul Allen suppose that artificial general intelligence (the ability to solve any task a human can) cannot be achieved because we lack a scientific theory of consciousness. But according to Murray Shanahan, a cognitive roboticist at Imperial College London, we should not conflate these two concepts.

“No doubt consciousness is an amazing and significant thing, but I don't think it is necessary for human-level artificial intelligence. More precisely, we use the word consciousness to denote several psychological and cognitive attributes that come bundled together in humans,” the scientist explains.

At the very least, we can imagine an intelligent machine that lacks some of these attributes. We may eventually build an extremely intelligent AI that is nevertheless unable to perceive the world consciously. Shanahan argues that intelligence and consciousness could both be present in a machine, but we shouldn't forget that they are two different concepts. A machine passing the Turing Test indistinguishably from a human is not proof that it is conscious. An advanced AI may seem conscious to us while being no more self-aware than a rock or a calculator.


Myth №3: “We needn’t be scared of AI”

Reality: In January, Facebook founder Mark Zuckerberg claimed that we shouldn't be afraid of AI because it can improve our world. He is partly right: we stand to reap huge benefits from AI, from machines that drive cars to systems that help create new medicines. However, no one can guarantee that every application of AI will be benign.

A highly intelligent system may know everything about one particular task, such as untangling a thorny financial situation or hacking an enemy defense system, yet remain completely ignorant outside that domain. Google DeepMind's AlphaGo is proficient at Go, but it has no ability or reason to explore other areas of knowledge.


Many of these systems may not be built with safety in mind. A good example is Stuxnet, a sophisticated and powerful militarized worm devised by the US and Israeli militaries to penetrate the systems of Iranian nuclear facilities. This virus also managed, deliberately or accidentally, to infect a Russian nuclear power plant.

Another example is Flame, a program used for cyber espionage in the Middle East. Future versions of Stuxnet or Flame could go further and damage sensitive infrastructure. (These viruses are not examples of AI, but future versions could incorporate intelligence, hence the concern.)


Myth №4: “Artificial intelligence will be too clever to make mistakes.”

Reality: Richard Loosemore, an AI researcher and founder of Surfing Samurai Robots, believes that most AI doomsday scenarios are incoherent. They rest on the assumption that the AI will say: “I understand that destroying humanity is the result of a flaw in my design, but I am compelled to do it anyway.” Loosemore points out that an AI which behaved this way would have run into similar logical contradictions throughout its existence, which would mean its knowledge base was so flawed that it would be incapable of becoming dangerous in the first place. He also argues that people who insist “AI can only do what it is programmed to do” are making the same mistake as their predecessors in the early computer era, who asserted that computers were too inflexible to be capable of anything else.

Peter McIntyre and Stuart Armstrong, who work at Oxford University's Future of Humanity Institute, disagree with Loosemore. They assert that an artificial intelligence is largely bound by its programming, but they do not believe it will make such mistakes or be too stupid to understand what we expect of it.

“An artificial superintelligence (ASI) is defined as an agent far more intelligent than the best human brains in every field of knowledge. It will certainly know what we meant for it to do,” explains McIntyre. Both scientists believe an AI will carry out only what it is programmed to do, but if it becomes intelligent enough, it will understand how that programming differs from the spirit of the law, or from what people actually intended.


McIntyre compared the future relationship between humans and AI to that between humans and mice. A mouse aims to find food and shelter, which clashes with people's desire to keep their homes free of mice. “Just as we are smart enough to understand some of the goals of mice, an ASI will understand our desires and still be indifferent to them,” said the researcher.


Myth №5: “A simple patch can resolve the AI control problem”

Reality: Once we create an AI smarter than a human, we will face the “control problem”. Futurists and AI theorists are far from sure how an ASI could be controlled and constrained if it eventually appears, or whether it would treat people kindly. Recently, researchers at the Georgia Institute of Technology suggested that AI might absorb human values and social conventions by reading simple stories. In real life, it is likely to be far more difficult than that.

“Many simple solutions to the entire AI control problem have been suggested,” Armstrong said. For instance, programming the ASI to make pleasing humans the main purpose of its existence, or to function only as a tool in human hands. Another option is to integrate a concept of love or respect into its source code. And to prevent it from adopting an overly simplistic, one-sided view of the world, we could program it to value intellectual, cultural, and social diversity.


However, all these fixes are too simplistic; each one tries to cram the whole complexity of human values into one short definition. For instance, can you give a clear, logical, and concise definition of “respect”? It is tremendously difficult.


Myth №6: “AI will destroy our world”

Reality: No one can guarantee that AI will destroy our world, nor that we will find ways to keep it under control. As AI theorist Eliezer Yudkowsky once put it: “The AI neither hates nor loves you, but you are made of atoms it can use for something else.”

In his book “Superintelligence: Paths, Dangers, Strategies”, Oxford philosopher Nick Bostrom argues that a genuine artificial superintelligence, once it appears, could pose a greater threat than any previous human invention. Famous scientists and entrepreneurs such as Elon Musk, Bill Gates, and Stephen Hawking (the latter warning that AI may be our “worst mistake in history”) have expressed similar concerns.

McIntyre noted that for most goals an ASI might pursue, there are good reasons to get rid of people along the way.

“An AI might predict, quite correctly, that we don't want it to maximize the profit of a particular company at any cost to consumers, the environment, and wildlife,” McIntyre said. “It therefore has a strong incentive to make sure it isn't interrupted, interfered with, turned off, or has its goals changed, because then those goals would not be achieved.”

Unless the ASI's goals exactly coincide with our own, it will have good reasons not to obey our orders. And given that its intelligence would greatly exceed ours, there would be nothing we could do about it.


But no one knows what form AI will take or how it might endanger humans. As Musk has noted, one artificial intelligence could be used to control, regulate, and monitor other AIs. Or it might be imbued with human values and an overwhelming drive to be friendly to humans.


Myth №7: “Artificial superintelligence will be friendly”

Reality: The philosopher Immanuel Kant believed that intelligence and morality are strongly connected. In his paper “The Singularity: A Philosophical Analysis,” philosopher David Chalmers took Kant's famous idea and applied it to the emerging concept of artificial superintelligence.

If this is really true, we can expect an intellectual breakthrough to bring about a moral breakthrough as well. New ASI systems would then be supermoral as well as superintelligent, which would mean they are friendly to humans.

But the assumption that AI will be enlightened and benign doesn't sound very realistic. As Armstrong pointed out, there are many smart war criminals. Among humans, a link between intelligence and morality doesn't seem to exist, so we may well doubt its existence in other forms of intelligence.

“Intelligent immoral people can cause far more pain than their less intelligent counterparts. Intelligence just lets them be bad more intelligently; it doesn't turn them good,” asserts Armstrong.


McIntyre explains that an agent's ability to achieve its ultimate goal does not imply that the goal is a good one to begin with. “We would be exceptionally lucky if our AIs turned out to be not only gifted but moral as well. Relying on luck, however, is not the right way to shape our future,” he said.


Myth №8: “Dangers from AI and robotics are equal”

Reality: This especially common mistake is perpetuated by uncritical media and Hollywood films like The Terminator.

If an artificial superintelligence like Skynet really wanted to destroy humanity, it wouldn't use gun-wielding androids. A biological plague or a nanotechnological grey goo would be far more efficient. Another option would be simply to destroy the atmosphere.

Artificial intelligence is potentially dangerous not because of how it might influence the development of robotics, but because of how it could change the world itself.


Myth №9: “The image of AI in science fiction is an accurate model of the future”

Reality: No doubt authors and futurists have used science fiction to make fantastic forecasts over the years, but the upheaval an ASI would set in motion is a different matter altogether. What's more, the profoundly nonhuman nature of AI makes it impossible for us to know, and therefore predict, its exact nature and form.

To entertain us humans, science fiction needs to depict most AIs as similar to us. “There is a spectrum of all possible minds; even among humans, you are quite different from your neighbor, and yet this variation is nothing compared to the full range of minds that could exist,” McIntyre said.


In general, sci-fi doesn't need to be scientifically accurate to tell an entertaining story, and as a rule the conflict unfolds between characters who are closely matched in power. “Imagine how boring a story would be in which an AI without consciousness, joy, or hate puts an end to all humans, meeting no resistance, in pursuit of some uninteresting goal.”


Myth №10: “It would be terrible if AI took over all our work”

Reality: AI's capacity to automate much of the work we do and its potential to destroy humanity are two completely different things. Yet according to Martin Ford, author of “Rise of the Robots”, they are often lumped together: “We tend to conflate technology with the threat of a future of widespread unemployment. It's fine to think about the far-future implications of AI, but only if it doesn't distract us from the problems we will face over the next few decades. Chief among them is mass automation.”

No one doubts that artificial intelligence will replace many existing jobs, from factory work to high-level white-collar positions. Some experts even predict that half of US workers are at risk of being displaced by automation in the near future. But that doesn't mean we won't be able to cope with the change. In fact, offloading most of our physical and mental labor is a long-standing utopian goal of our species.

Within a few decades, AI will make many jobs vanish, but that is not necessarily a bad thing. Self-driving cars could replace truck drivers, which would cut delivery costs and in turn make many goods cheaper. “If you earn your living as a truck driver, you lose, but everyone else will be able to buy more thanks to the savings. And the money they save will be spent on other goods and services, which will create new jobs for humans,” Miller said.

It is likely that AI will open up new paths to well-being by freeing people from many tasks. Breakthroughs in AI will drive advances in other fields, especially manufacturing. In the future, it will be easier, not harder, for us to meet our needs.
