The enlightened minds of mathematicians, cryptographers, engineers, physicists, inventors and others have shaped the computer and the Internet into what we know today. Some of them also caught a glimpse of the future and envisioned the technology we are using now or are about to see. Keeping an eye on the visionaries helps us prepare for the future.
Alan Mathison Turing is broadly acknowledged as the father of artificial intelligence – the human-like intelligence exhibited by machines and software. He was born 102 years ago, on the 23rd of June. On the occasion of his anniversary, we have selected some essential facts and trivia on artificial intelligence. Enjoy reading!
How could we tell if a machine possessed intelligence?
According to Turing, a computer can be considered to “think” if, in a conversation between a human and a machine, the human could not tell whether he’s talking to a human or a computer. An intelligent machine would also be able to perceive its environment and take actions to maximize its success.
Educate or replicate?
Turing believed that, instead of building a complex program to mimic the adult mind, it would be better to create a simple one to simulate a child’s mind and then educate it.
Can computers pretend to be human? Stop them with CAPTCHA
Widely used on the Internet, the CAPTCHA test is based on a reversed form of the Turing test: both aim to distinguish a human from a computer, but in a CAPTCHA the judge is itself a machine.
Note: A Turing test consists of blind, five-minute text conversations between human judges on one side and computers or humans on the other. If at least 30 percent of the human judges cannot tell the machine from a human, the computer is said to have passed the test.
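The 30-percent threshold above is simple to express in code. Here is a toy sketch (not an official scoring procedure) in which each judge's verdict records whether that judge believed the machine was human:

```python
# Toy scoring of the Turing-test threshold described above:
# a machine "passes" if it fools at least 30 percent of the judges.

def passes_turing_test(judge_verdicts):
    """judge_verdicts[i] is True if judge i believed the machine was human."""
    fooled = sum(judge_verdicts)
    return fooled / len(judge_verdicts) >= 0.30

# With 30 judges, fooling 10 (about 33%) is enough to pass...
print(passes_turing_test([True] * 10 + [False] * 20))  # True
# ...but fooling only 5 (about 17%) is not.
print(passes_turing_test([True] * 5 + [False] * 25))   # False
```

The judge counts here are illustrative; actual contest panels vary in size.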
WHAT ABOUT AI TODAY?
First computer program to pass the Turing test – June 2014
Eugene Goostman, a computer program pretending to be a 13-year-old Ukrainian boy, convinced enough judges it was human to pass the Turing test in June – the first program to do so, as reported by the Independent.
Security software on your PC is artificially intelligent
It may be hard to imagine, but a form of artificial intelligence is making decisions for you on your computer or smartphone while you are reading this text. For instance, Bitdefender communicates with a data center where artificial intelligence uses complex mathematical algorithms to process huge amounts of data and separate malicious files from clean ones.
These technologies use machine learning techniques – decision trees, neural networks, and Boltzmann machines – to analyze enormous volumes of data, evaluate file characteristics, separate malicious software and behavior from clean, and make associations and comparisons without human intervention. On top of that, artificial intelligence supervises other artificial intelligence implementations to make sure everything works as planned. Welcome to the future!
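To make the decision-tree idea above concrete, here is a minimal, hand-built sketch – emphatically not Bitdefender's actual engine. The feature names and thresholds (file entropy, network imports, digital signature) are illustrative assumptions only:

```python
# A hand-built decision tree over simple file features.
# All feature names and thresholds are illustrative assumptions,
# not a real antimalware engine's logic.

def classify_file(features):
    """Walk a tiny decision tree and return a verdict string."""
    # Branch 1: very high entropy often indicates packing or encryption.
    if features["entropy"] > 7.5:
        # Packed files that also import network APIs look riskier.
        if features["imports_network_api"]:
            return "malicious"
        return "suspicious"
    # Branch 2: files carrying a valid digital signature are trusted.
    if features["signed"]:
        return "clean"
    # Everything else is flagged for deeper (e.g. behavioral) analysis.
    return "suspicious"

samples = [
    {"entropy": 7.9, "imports_network_api": True,  "signed": False},
    {"entropy": 3.2, "imports_network_api": False, "signed": True},
    {"entropy": 6.1, "imports_network_api": False, "signed": False},
]
for s in samples:
    print(classify_file(s))  # malicious, clean, suspicious
```

In a real system the tree would be learned automatically from millions of labeled samples rather than written by hand, which is exactly where the machine learning described above comes in.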
Stephen Hawking on the benefits and risks of AI technology
Debates about artificial intelligence are as fervent today as they were in Turing’s time. Apart from the obvious benefits, physicist Stephen Hawking also grasps the risks of such complex technology.
“Recent landmarks such as self-driving cars, a computer winning at “Jeopardy!,” and the digital personal assistants Siri, Google Now, and Cortana are merely symptoms of an IT arms race fuelled by unprecedented investments and building on an increasingly mature theoretical foundation,” the physicist says in a recent Business Insider article.
“One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”
Can artificial intelligence turn evil? Scientist Omohundro expresses concerns
Autonomous robots of the future “are likely to behave in anti-social and harmful ways unless they are very carefully designed,” scientist Steve Omohundro writes in a paper in the Journal of Experimental & Theoretical Artificial Intelligence.
“When roboticists are asked by nervous onlookers about safety, a common answer is ‘We can always unplug it!’ But imagine this outcome from the chess robot’s point of view. A future in which it is unplugged is a future in which it cannot play or win any games of chess. This has very low utility and so expected utility maximization will cause the creation of the instrumental subgoal of preventing itself from being unplugged. If the system believes the roboticist will persist in trying to unplug it, it will be motivated to develop the subgoal of permanently stopping the roboticist. Because nothing in the simple chess utility function gives a negative weight to murder, the seemingly harmless chess robot will become a killer out of the drive for self-protection.”
Image credit: 1. Wikipedia, 2. BBC