
Google's AI reasons its way around the London Underground

DeepMind’s latest technique uses external memory to solve tasks that require logic and reasoning — a step toward more human-like AI.


Mary Evans Picture Library
DeepMind's AI uses external memory to accomplish tasks that require reasoning, such as learning to navigate the London Underground.
Artificial-intelligence (AI) systems known as neural networks can recognize images, translate languages and even master the ancient game of Go. But their limited ability to represent complex relationships between data or variables has prevented them from conquering tasks that require logic and reasoning.
In a paper published in Nature on 12 October [1], the Google-owned company DeepMind in London reveals that it has taken a step towards overcoming this hurdle by creating a neural network with an external memory. The combination allows the network not only to learn, but also to use memory to store and recall facts and make inferences, much as a conventional algorithm does. This in turn enables it to tackle problems such as navigating the London Underground without prior knowledge, and to solve logic puzzles. Although solving such problems would be unremarkable for an algorithm programmed to do so, the hybrid system manages it without any predefined rules.
Although the approach is not entirely new — DeepMind itself reported attempting a similar feat in a preprint in 2014 [2] — “the progress made in this paper is remarkable”, says Yoshua Bengio, a computer scientist at the University of Montreal in Canada.

Memory magic


DeepMind's new system — which it calls a 'differentiable neural computer' — can make sense of a map it has never seen before. It first trains its neural network on randomly generated map-like structures (which could represent stations connected by lines, or other relationships), in the process learning both how to store descriptions of these relationships in its external memory and how to answer questions about them. Confronted with a new map, the system can write the new relationships — connections between Underground stations, in one example from the paper — to memory, and recall them to plan a route.
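To see what the route-planning task involves, here is a hand-written sketch of it in Python. This is not the DNC — the point of DeepMind's result is that the network learns this behaviour from examples rather than running a programmed search — and the station connections below are illustrative, not an accurate tube map:

```python
from collections import deque

# Illustrative station connections (not an accurate Underground map).
edges = [
    ("Oxford Circus", "Bond Street"),
    ("Oxford Circus", "Tottenham Court Road"),
    ("Bond Street", "Baker Street"),
    ("Tottenham Court Road", "Holborn"),
    ("Holborn", "King's Cross"),
]

# Build an undirected adjacency list from the edge pairs.
graph = {}
for a, b in edges:
    graph.setdefault(a, []).append(b)
    graph.setdefault(b, []).append(a)

def route(start, goal):
    """Breadth-first search: returns a shortest station-to-station path."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no route exists

print(route("Baker Street", "King's Cross"))
```

A conventional program like this needs the search procedure spelled out in advance; the DNC instead discovers an equivalent read-plan-answer strategy during training, storing the edges in its external memory as it reads them.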
A neural network learns by strengthening connections between virtual neuron-like units. Without a memory, such a network might need to see a specific London Underground map thousands of times to learn the best way to navigate the tube.
DeepMind’s AI system used the same technique to tackle puzzles that require reasoning. After training on 20 different types of question-and-answer problems, it learnt to make accurate deductions. For example, the system deduced correctly that a ball is in a playground, having been informed that “John picked up the football” and “John is in the playground”.  It got such problems right more than 96% of the time. The system performed better than ‘recurrent neural networks’, which also have a memory, but one that is in the fabric of the network itself, and so is less flexible than an external memory.
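The football deduction above can likewise be mimicked by a couple of hand-coded rules, shown here purely to make the task concrete. The fact format is an assumption for illustration; the DNC learns to chain facts like these from training examples rather than from rules a programmer wrote:

```python
# Hand-written fact-chaining for the kind of deduction described above.
# Facts are (subject, relation, object) triples; the format is illustrative.
facts = [
    ("John", "picked up", "football"),
    ("John", "is in", "playground"),
]

def locate(obj):
    """Infer an object's location: it is wherever its holder is."""
    holder = next(
        (s for s, r, o in facts if r == "picked up" and o == obj), None
    )
    if holder is None:
        return None
    return next(
        (o for s, r, o in facts if s == holder and r == "is in"), None
    )

print(locate("football"))  # -> playground
```

The hard part, which this sketch sidesteps, is that the DNC receives the facts as plain text and must learn for itself which stored facts to combine to answer a question.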
Although the DeepMind technique has proven itself on only artificial problems, it could be applied to real-world tasks that involve making inferences from huge amounts of data. This could solve questions whose answers are not explicitly stated in the data set, says Alex Graves, a computer scientist at DeepMind and a co-author on the paper. For example, to determine whether two people lived in the same country at the same time, the system might collate facts from their respective Wikipedia pages.
Although the puzzles tackled by DeepMind’s AI are simple, Bengio sees the paper as a signal that neural networks are advancing beyond mere pattern recognition to human-like tasks such as reasoning. “This extension is very important if we want to approach human-level AI.”
