Wednesday, November 30, 2016

Google DeepMind, not only chess and journalism: the AI now reads lips – Intelligonews

The goal for the future is to create an artificial intelligence able to replace not only professionals but also, in part, human beings. This requires an AI that is ever more capable and can come closer to human intelligence. And so Google has pulled out of the hat an artificial intelligence able to observe and learn like any human being. The creator of these wonders is the DeepMind laboratory.

The DeepMind laboratories, specializing in artificial intelligence, first made headlines thanks to the game of Go. But now their super-intelligent computer does more than play. The labs' latest feat is teaching the machine to read human lips. The computer was trained for the purpose by exceptional humans: the television journalists of the BBC. A full 5,000 hours of TV news programmes and bulletins, conducted by experts of the word, professionals able, by trade, to articulate every single word clearly. At the end of training, Google DeepMind's software achieved a success rate of 46.8% over the course of the tests. In the same test, a human candidate achieved only 12.4%.

For comparison with tests carried out in the past: LipNet was able to recognize 51 words. The collaboration with Google has multiplied those results, identifying 110 thousand phrases and 17,500 words.

"The goal of this work," say the researchers, "is to recognize phrases and expressions regardless of whether audio is present. Compared to previous work, lip reading has been tested on spontaneous video", that is, footage not staged in the lab but taken from the outside world. The study also indicates some possible applications of such a technology: it could, for example, easily transcribe silent films, generate subtitles for the hearing impaired, and cover entire conferences and events (even when voices overlap). In the future we will then be able to dictate instructions or messages to our smartphones even in a noisy environment, or when it is not possible to speak aloud: digital assistants like Siri, Cortana or Google Assistant will pick up a command from the movement of our mouth alone. So, after the AI able to "help" the journalist write a piece within 5 minutes of receiving the news (as happened on 17 March 2014 with Quakebot, the software developed by journalist Ken Schwencke), now comes the computer that can understand without listening.

