Deep learning: a move towards fully securing man-machine coexistence?


Deep learning, the flagship technique of artificial intelligence, today enables machines to understand an image or a sound. The stakes are high, especially for the GAFA companies (Google, Apple, Facebook, Amazon), which have been investing heavily in this technology for the past ten years. The team at Blaxtair, experts in this technology, gives us their testimony.

No, deep learning is not just about building a computer that can win a game of Go. Today, this technology enables machines to understand a voice or an image. Siri, Google Now, Facebook's detection of content that violates its terms of use… we owe all of these remarkable functions to deep learning.

Like neurons

Technically, this machine learning approach is based on successive layers of units, each performing simple calculations whose results are used by the next layer of units. It is from this neuron-like organization that the machine derives its ability to learn largely on its own.
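The layered computation described above can be sketched in a few lines of plain Python. This is a toy illustration, not Blaxtair's implementation: each unit computes a weighted sum of the previous layer's outputs and applies a simple nonlinearity, and each layer's results feed the next layer.

```python
def layer(inputs, weights, biases):
    """One layer of units: each unit performs a simple calculation (a
    weighted sum plus a nonlinearity) on the previous layer's outputs."""
    outputs = []
    for unit_weights, bias in zip(weights, biases):
        total = sum(w * x for w, x in zip(unit_weights, inputs)) + bias
        outputs.append(max(0.0, total))  # ReLU, a very simple nonlinearity
    return outputs

def forward(x, network):
    """Feed the input through successive layers; each layer's results
    are used by the next one."""
    for weights, biases in network:
        x = layer(x, weights, biases)
    return x

# A tiny two-layer network with hand-picked weights (illustrative only;
# in deep learning these weights are learned from data).
network = [
    ([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]),  # hidden layer: 2 units
    ([[1.0, 1.0]], [0.0]),                    # output layer: 1 unit
]
print(forward([2.0, 1.0], network))  # → [2.5]
```

Real networks have millions of such units, but the principle is the same: simple calculations, stacked in depth.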

“As with older machine learning algorithms (SVM, cascade classifiers), there is always a supervised learning phase where we give them clues. But then the computer begins to learn on its own, by rapidly processing large amounts of data,” explains Sabri Bayoudh, technical director at BLAXTAIR.
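The supervised phase Bayoudh describes can be illustrated with a toy perceptron (an assumption for illustration only, not one of the algorithms Blaxtair uses): the labeled examples are the "clues", and a simple update rule lets the program adjust itself from data.

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    """Supervised learning in miniature: each example comes with a label
    (the 'clue'); weights are adjusted whenever a prediction is wrong."""
    n = len(examples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for features, label in examples:
            pred = 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0
            error = label - pred
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

# Labeled data: the machine learns a simple AND-like rule from examples alone.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train_perceptron(data)
predict = lambda x: 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
print([predict(x) for x, _ in data])  # → [0, 0, 0, 1]
```

The point of the quote is the second phase: once the rule is learned, no human needs to spell it out for each new input.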

The machine analyzes the scene

It was the explosion in the amount of available data and in computing power in the mid-2000s that enabled this concept, which was actually born at the end of the 1980s, to finally take off.

In the field of imaging, the proliferation of surveillance cameras has made it possible to archive billions of images that the algorithm can use to learn how to analyze a scene. After a phase of supervised learning, it can then identify and locate the ground, the sky, a pedestrian, a car, and so on, and determine what to do. “Schematically,” explains Sabri Bayoudh, “we no longer need to tell it that there is a car by describing the car in the image; it searches each image itself for the discernible features we have given it (wheels, for example), determines on its own whether a car is there, and decides what it needs to do.”
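The idea of "searching each image for discernible features" can be sketched with a small convolution, the basic operation of image-analysis networks. In this toy example the filter weights are hand-written; in a real deep network they are learned during training.

```python
def detect_feature(image, kernel):
    """Slide a small filter over the image; high responses mark the
    places where the feature the filter encodes is present."""
    kh, kw = len(kernel), len(kernel[0])
    response = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            total = sum(kernel[a][b] * image[i + a][j + b]
                        for a in range(kh) for b in range(kw))
            row.append(total)
        response.append(row)
    return response

# A 4x4 'image' with a bright region on the right, and a hand-written
# vertical-edge filter: it responds strongly only where dark meets bright.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
edge_filter = [[-1, 1], [-1, 1]]
print(detect_feature(image, edge_filter))  # → [[0, 2, 0], [0, 2, 0], [0, 2, 0]]
```

Stacking many such filters in successive layers lets the network combine simple features (edges) into complex ones (wheels, then whole cars).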

Great news for highly deformable objects

So if deep learning saves valuable time for fixed-form objects such as cars, the benefits are even greater for highly deformable objects such as people. It is simply not “humanly” possible to supply a machine learning algorithm with the infinite variety of shapes and stances a living person can adopt. Human detection therefore has everything to gain from this technology.

13 June 2016