The deep-learning revolution: How understanding the brain will let us supercharge AI
Professor Terry Sejnowski. Image: Salk Institute

Video: Cracking open deep learning's black box

If anyone is qualified to talk about the machine-learning revolution currently underway, it's Terry Sejnowski.

Long before the virtual assistant Alexa was a glint in Amazon's eye or self-driving cars were considered remotely feasible, Professor Sejnowski was laying the foundations for the field of deep learning.

Sejnowski was one of a small group of researchers in the 1980s who challenged the prevailing approach to building artificial intelligence and proposed using mathematical models that could learn skills from data.


Today those brain-inspired, deep-learning neural networks have led to major breakthroughs in machine learning, giving rise to virtual assistants that increasingly predict what we want, on-demand translation, and computer-vision systems that allow self-driving cars to "see" the world around them.
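The core idea is worth seeing in miniature. Below is a small sketch, entirely illustrative rather than code from any system mentioned here, of the kind of model Sejnowski and his fellow connectionists championed: a tiny two-layer neural network that learns the XOR function from examples via gradient descent instead of being explicitly programmed. The task, layer sizes, learning rate, and variable names are all assumptions chosen for demonstration.

```python
# Minimal sketch of a network that learns a skill from data rather than
# being hand-programmed. XOR is the classic toy task: a single-layer
# network cannot solve it, but a two-layer network trained by gradient
# descent can. All numbers here are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

# Training examples: inputs and their XOR labels.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights and biases, randomly initialized.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    # Forward pass: compute the network's predictions.
    h = sigmoid(X @ W1 + b1)      # hidden-layer activations
    out = sigmoid(h @ W2 + b2)    # output predictions

    # Backward pass: gradients of the squared error via the chain rule.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent update: nudge every parameter to reduce the error.
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # approaches [[0], [1], [1], [0]] after training
```

Nothing about the network is specific to XOR; swap in different examples and the same update rule learns a different skill, which is precisely what made this approach so different from hand-coded AI.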

But Sejnowski says machine learning is very much in its infancy, comparing it to the rudimentary aircraft that the Wright brothers flew in the US town of Kitty Hawk at the turn of the 20th century. While a landmark achievement, that early machine appears impossibly crude today next to the commercial jets that would follow in its wake.

"What we've done is, I think, is solve the difficult problems that are precursors to intelligence. Being able to talk on the telephone, and respond to queries and so forth, is just the first layer of intelligence. I think we're taking our first steps," he says.

Sejnowski compares the neural networks of today to the early steam engines developed by the engineer James Watt at the dawn of the Industrial Age: remarkable tools that clearly work, even though no one fully understands why.

"This is exactly what happened in the steam engines. 'My god, we've got this artifact that works. There must be some explanation for why it works and some way to understand it'.

"There's a tremendous amount of theoretical mathematical exploration occurring to really try to understand and build a theory for a deep learning."

If research into deep learning follows the same trajectory as that spurred by the steam engine, Sejnowski predicts, society is at the start of a journey of discovery that will prove transformative. He cites how the first steam engines "attracted the attention of the physicists and mathematicians who developed a theory called thermodynamics, which then allowed them to improve the performance of the steam engine, and led to many innovative improvements that continued over the next hundred years, that led to these massive steam engines that pulled trains across the continent."

One payoff of looking to the brain, Sejnowski notes, is the temporal-difference learning algorithm, a reinforcement-learning method inspired by animal learning that closely mirrors how the brain's dopamine system predicts rewards. This temporal-difference algorithm was used together with deep learning by Google's game-playing AI AlphaGo, says Sejnowski, and played a role in helping the system beat the world's leading champion at Go, a game so complex that the total number of positions that can be played is more than the number of atoms in the universe.
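To give a flavor of the method, here is a minimal sketch of a temporal-difference (TD(0)) value update on a toy problem. The environment (a five-state random walk), step size, discount, and variable names are illustrative assumptions, not details of AlphaGo, which combined TD-style value learning with deep networks at vastly larger scale.

```python
# Minimal sketch of TD(0) learning on a 5-state chain: start in the
# middle, step left or right at random, and receive reward 1 only upon
# reaching the right end. The agent learns a value estimate for each
# state from its own predictions (bootstrapping). All constants are
# illustrative choices.
import random

N_STATES = 5          # states 0..4; states 0 and 4 are terminal
ALPHA = 0.1           # learning rate (step size)
GAMMA = 1.0           # discount factor

V = [0.0] * N_STATES  # value estimate per state, initially zero

for episode in range(5000):
    s = 2  # start in the middle of the chain
    while True:
        s_next = s + random.choice([-1, 1])
        reward = 1.0 if s_next == 4 else 0.0
        done = s_next in (0, 4)

        # TD(0) update: move V[s] toward reward + discounted next value.
        target = reward + (0.0 if done else GAMMA * V[s_next])
        V[s] += ALPHA * (target - V[s])

        if done:
            break
        s = s_next

# Non-terminal values approach the true win probabilities 0.25/0.5/0.75.
print([round(v, 2) for v in V])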

However, looking to nature for inspiration also exposes the gulf in complexity between natural systems and even the largest deep-learning neural networks of today.

"Look into the brain, and what do you see? Well, deep learning turns out to be a tiny part of what goes on in the brain, tiny. The biggest deep learning networks have on the order of a billion connections, a billion parameters. Well, if you look into your brain and look at one cubic millimeter of the brain, it has about a billion synapses," he says.

"What we have now is kind of, like an almost minuscule little bit of the brain, that we are beginning to master in terms of how to use it to represent things and solve problems."

Even if society did build a neural network with a number of connections comparable to a human brain's, we would still be missing information about how that network should be structured to give rise to the general intelligence found in humans.

"There's the rest of the brain right? The brain doesn't consist just of the cerebral cortex, right, say there are a million deep learning networks in our brain. How do you connect them up? How do they integrate?"

His belief that machine learning researchers should look to nature is echoed by Demis Hassabis, the co-founder of Google DeepMind.

"Studying animal cognition and its neural implementation also has a vital role to play, as it can provide a window into various important aspects of higher-level general intelligence," he wrote in a paper last year.


This article is republished from www.techrepublic.com under a Creative Commons license.
