What is Artificial Intelligence?
Artificial intelligence refers to the simulation of human intelligence in machines that are programmed to mimic the learning, perception, and problem-solving abilities displayed by the human mind.
It works by learning from examples and experience, understanding and responding to language, recognising objects, and combining these and other capabilities to perform tasks that would otherwise require human intelligence. A toy illustration of learning from examples is sketched below.
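As a rough, hypothetical sketch of what "learning from examples" can mean, the following Python snippet implements a toy nearest-neighbour classifier: it stores a few labelled examples (the data here is invented purely for illustration) and labels a new input by copying the label of the most similar stored example. Real systems are vastly more sophisticated, but the basic idea of generalising from labelled examples is similar.

```python
import math

# A tiny, invented training set of labelled examples:
# each entry is ((feature_1, feature_2), label).
examples = [
    ((1.0, 1.2), "cat"),
    ((0.9, 0.8), "cat"),
    ((4.1, 4.3), "dog"),
    ((3.8, 4.0), "dog"),
]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def predict(point):
    """Label a new point with the label of its nearest stored example."""
    _, label = min(examples, key=lambda ex: distance(ex[0], point))
    return label

print(predict((1.1, 1.0)))  # prints "cat": closest to the "cat" examples
print(predict((4.0, 4.2)))  # prints "dog": closest to the "dog" examples
```

The "learning" here is nothing more than remembering labelled examples and comparing new inputs against them; modern AI systems replace this simple comparison with statistical models trained on far larger datasets.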
Artificial intelligence is a branch of computer science and an endeavour that aims to replicate human intelligence in machines. Its use has sparked many debates and raised many unanswered questions, to the point that there is no single, universally accepted definition of artificial intelligence. One widely cited definition comes from Stuart Russell and Peter Norvig, who in their book Artificial Intelligence: A Modern Approach describe it as “the study of agents that receive percepts from the environment and perform actions.”
Background of Artificial Intelligence
Alan Turing, a British mathematician, explored the mathematical possibility of artificial intelligence. He reasoned that if humans can use the information available to them, together with reason, to solve problems and make decisions, why couldn’t machines do the same? He laid out this idea in his 1950 paper Computing Machinery and Intelligence, in which he considered how to build intelligent machines and how to test their intelligence.
What initially stood in Turing’s way was that computers lacked a key prerequisite for intelligence: before 1949, they could execute commands but could not store and remember them. On top of that, leasing a computer was extremely expensive, and securing funding required both a proof of concept and advocacy from high-profile figures.
The Logic Theorist, funded by the RAND Corporation and designed to imitate the problem-solving skills of humans, is widely regarded as the first artificial intelligence programme. It played an important role in catalysing the next 20 years of AI research.
Despite what we might think, artificial intelligence programmes are all around us. They generally fall into two categories: Narrow AI and Artificial General Intelligence (AGI).
Narrow AI operates within a limited context and typically performs a single task very well. Although these systems can seem highly intelligent, they work under tight constraints and are far more limited than even the most basic human intelligence. Narrow AI is the most common form; it has been very successful over the last decade and has yielded considerable societal benefits. Examples include search engines such as Google, personal assistants like Siri and Alexa, and image recognition software.
The other broad category is AGI: a machine with general intelligence that, much like a human, can apply it to solve a wide range of problems. This is the kind of AI we tend to see in fiction, such as the robots in Westworld or the Terminator in The Terminator.
It is important to note, however, that AGI does not exist, and the quest to build it has met with great difficulty. A “universal algorithm for learning and acting in any environment” is extremely difficult to devise, and creating a machine or program with a complete set of cognitive abilities remains a nearly impossible task.
Are there risks to Artificial Intelligence?
There is no doubt that AI has been revolutionary and world-changing; however, it is not without risks and drawbacks.
“Mark my words, AI is far more dangerous than nukes.” This was a comment made by Tesla and SpaceX CEO Elon Musk at the South by Southwest tech conference in Austin, Texas. “I am really quite close… to the cutting edge in AI, and it scares the hell out of me,” he told his SXSW audience. “It’s capable of vastly more than almost anyone knows, and the rate of improvement is exponential.” Although these ideas may seem extreme and far-fetched to some, Musk is not alone in holding them. The late physicist Stephen Hawking told an audience in Portugal that the impact of AI could be catastrophic if its rapid development is not controlled ethically and strictly. “Unless we learn how to prepare for, and avoid, the potential risks,” he explained, “AI could be the worst event in the history of our civilization.”
How exactly would AI get to this point? Cognitive scientist and author Gary Marcus shed some light on this in a 2013 New Yorker essay, “Why We Should Think About the Threat of Artificial Intelligence.” He argued that as machines become smarter, their goals could change. “Once computers can effectively reprogram themselves, and successively improve themselves, leading to a so-called ‘technological singularity’ or ‘intelligence explosion,’ the risks of machines outwitting humans in battles for resources and self-preservation cannot simply be dismissed.”
It should be noted that these risks are associated with AGI, which has not yet come to fruition, so for the moment they remain a hypothetical threat. Although AI sits at the centre of much dystopian science fiction, most experts agree that it is not something we need to worry about any time soon. For now, the benefits of AI far outweigh the drawbacks: it has improved the quality of life for many people, reduced the time required for many tasks, and enabled multitasking. And because its decisions are drawn from previously gathered information, errors can be reduced significantly.
As with most technological inventions, AI has both advantages and disadvantages, and it is important that we consider these issues with care. Ultimately, we must harness the benefits AI provides for the betterment of society.