Katie Atkinson joins a roomful of skeptics to talk about "Do Computer Scientists Dream of Electric Sheep?". Or, in other words, she's going to try to answer the question: "when are the robots coming to take over?"
Many within the Artificial Intelligence (AI) community feel that the ultimate goal of AI research is to build a person. This is the "hard" view. The "soft" view is that AI is about making computers do things that people are currently better at. For example, driverless cars: humans are better at driving at the minute, but 97% of accidents are due to human error. AI is the science of understanding intelligent entities and the engineering of building them. An intelligent entity is an agent, and autonomy is central to the notion of agency: agents decide for themselves what to do.
AI has been central to the success of things like natural language processing in systems like Siri. Natural language is difficult for computers to deal with because of its ambiguity. NASA are now using autonomous planning and scheduling for their spacecraft, and in 2018 the ESA rover will have autonomous control. Google are now working on self-driving cars, and in 2007 the Defense Advanced Research Projects Agency (DARPA) held its "Urban Challenge", a mock urban cityscape course for autonomous vehicles to complete.
How will our laws and infrastructure need to change to cope with self-driving cars? For example, if a self-driving car crashes, who/what is at fault? We’ll probably see a phased introduction of the technology, from driver assist to partial to full autonomy, which we could perhaps start implementing as early as 2020.
One of the first developments in AI was game playing. Back in 1997, Deep Blue, developed by IBM, beat world champion Garry Kasparov at chess. By 2002, the Fritz chess program running on an ordinary PC was drawing with world champion Vladimir Kramnik. In 2011, IBM's Watson won the Jeopardy TV quiz, and researchers are now developing intelligent "Angry Birds" agents.
AI was used in logistics planning for the first Gulf War in 1991. The US government stated that this single application paid back DARPA’s 30-year investment.
"Machines will be capable, within 20 years, of doing any work that a man can do." That sounds like a simultaneously thrilling and scary future until you discover that Herb Simon said it in 1965. These days, there are a number of AI hot topics:
Perceiving the environment
Problem solving by searching
Ability to represent knowledge
Reasoning with that knowledge to make decisions (in order to reach design objectives)
Planning what to do
Learning to improve performance and operate in unseen situations
Ability to communicate
Verification of autonomous systems
Hardware considerations for robots
Often there is no direct way to solve a problem, for example a Rubik's Cube, so you need to look at strategies. Once you can program these, you can go beyond mimicking how humans do things and instead look for the best way. Robots are now faster than humans at solving a Rubik's Cube: a robot called CubeStormer managed it in just 3.25 seconds in 2014.
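The "problem solving by searching" idea can be sketched in a few lines. This is a hypothetical stand-in for a Rubik's Cube solver, not anything from the talk: a breadth-first search over the states of a 3×3 sliding puzzle, which explores all positions one move away, then two moves away, and so on, guaranteeing it finds a shortest solution.

```python
from collections import deque

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)  # 0 represents the blank tile

def neighbours(state):
    """Yield every state reachable by sliding one tile into the blank."""
    i = state.index(0)
    row, col = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            j = r * 3 + c
            s = list(state)
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def solve(start):
    """Breadth-first search: expands states in order of distance from
    the start, so the first path reaching GOAL is a shortest one."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == GOAL:
            return path
        for nxt in neighbours(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None  # unreachable from this start state

# A start state two slides away from the goal
start = (1, 2, 3, 4, 5, 6, 0, 7, 8)
path = solve(start)
print(len(path))  # prints 2: the shortest solution has two moves
```

The same pattern (a frontier of unexplored states plus a visited set) underlies far more capable solvers; real Rubik's Cube programs swap the blind queue for heuristic-guided search so they don't have to enumerate the cube's enormous state space.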
How can we be sure that AI systems will act safely and legally? Testing and simulation can provide partial solutions. Formal verification provides analysis and mathematical proof that a system will always meet its formal requirements.
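The difference between testing and verification can be illustrated with a toy example (my own, not the speaker's): a two-way traffic-light controller modelled as a finite state machine. Because the state space is finite, exhaustively exploring every reachable state and checking a safety property in each one constitutes a proof that the property always holds, which is the core idea behind model checking.

```python
# Toy model: a two-way traffic-light controller as a finite state machine.
# Each state is a (north_south, east_west) pair of light colours, and
# TRANSITIONS maps each state to its successor.
TRANSITIONS = {
    ("green", "red"): ("amber", "red"),
    ("amber", "red"): ("red", "green"),
    ("red", "green"): ("red", "amber"),
    ("red", "amber"): ("green", "red"),
}

def safe(state):
    """Safety property: both directions must never be green at once."""
    return state != ("green", "green")

def verify(initial):
    """Exhaustively explore every reachable state and check the property.
    Unlike a test, which samples some runs, this covers all of them."""
    seen, stack = set(), [initial]
    while stack:
        state = stack.pop()
        if state in seen:
            continue
        seen.add(state)
        if not safe(state):
            return False, state  # counterexample found
        stack.append(TRANSITIONS[state])
    return True, None  # property proven for all reachable states

ok, counterexample = verify(("green", "red"))
print(ok)  # prints True: no reachable state shows green both ways
```

Real verification tools apply the same reachability-plus-property idea to vastly larger (sometimes infinite) state spaces, using clever symbolic representations rather than explicit enumeration.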
So, what next? What laws are required to regulate autonomous systems, for example military systems? Will robotic systems take jobs from humans? Should an autonomous system for legal reasoning have to pass the same exams as a human?
Previously unpublished – first written up in 2017.