DEEP LEARNING
The idea is to develop a truly intelligent computer: one that can understand human language and then draw inferences and make decisions on its own.
Such an effort would require nothing less than Google-scale data and computing power.
Deep learning mimics the activity of neurons in the neocortex, the roughly 80 percent of the brain where thinking occurs. The software learns, in a very real sense, to recognize patterns in digital representations of sounds, images, and other kinds of data.
The fundamental idea, that software can simulate the neocortex's large array of neurons in an artificial "neural network," is decades old, and it has led to as many disappointments as breakthroughs. But thanks to improvements in mathematical formulas and increasingly powerful computers, scientists can now create many layers of virtual neurons.
Building a Brain
There have been many competing approaches to these challenges. One of them is to feed machines information and rules about the world, but that takes enormous amounts of time and still leaves the systems unable to deal with ambiguous data; they remain limited to narrow, controlled applications such as phone menu systems that ask you to make queries by saying specific words.
Neural networks looked promising to AI researchers because they attempt to simulate the way the brain works, though in a greatly simplified form. A program maps out a set of virtual neurons and then assigns random numerical values, or "weights," to the connections between them. These weights determine how each simulated neuron responds.
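As a rough sketch of this idea in Python (the layer sizes, the use of NumPy, and the ReLU response function are illustrative assumptions, not details from the article):

    import numpy as np

    rng = np.random.default_rng(0)

    # Two "layers" of simulated neurons. Each neuron is a row of random
    # weights; the weights determine how strongly it responds to each
    # feature of the digitized input (e.g., pixel intensities).
    w1 = rng.normal(scale=0.01, size=(16, 784))   # 784 inputs -> 16 neurons
    w2 = rng.normal(scale=0.01, size=(10, 16))    # 16 neurons -> 10 outputs

    def respond(x):
        """Each layer takes a weighted sum of its inputs, then a simple
        nonlinearity decides how strongly each neuron fires."""
        h = np.maximum(0.0, w1 @ x)     # first layer of virtual neurons
        return np.maximum(0.0, w2 @ h)  # second layer builds on the first

    image = rng.random(784)             # stand-in for a digitized 28x28 image
    print(respond(image))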
Coders would train a neural network to detect an object or a phoneme by feeding the network digitized versions of images containing those objects or sound waves containing those phonemes. If the network failed to accurately recognize a particular pattern, an algorithm would adjust the weights.
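Here is a toy version of that adjustment step: a single simulated neuron whose weights are nudged whenever it misclassifies an example. The data, learning rate, and perceptron-style update rule are illustrative assumptions; real deep-learning systems rely on more elaborate algorithms such as backpropagation.

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy "digitized" training data: 2-feature inputs with labels 0 or 1.
    X = rng.random((100, 2))
    y = (X[:, 0] + X[:, 1] > 1.0).astype(float)   # a simple pattern to learn

    w = rng.normal(size=2)   # random starting weights
    b = 0.0
    lr = 0.1                 # learning rate (illustrative value)

    for epoch in range(20):
        for x, target in zip(X, y):
            prediction = 1.0 if w @ x + b > 0 else 0.0
            error = target - prediction
            if error != 0:              # pattern not recognized correctly,
                w += lr * error * x     # so adjust the weights
                b += lr * error

    accuracy = np.mean([(1.0 if w @ x + b > 0 else 0.0) == t
                        for x, t in zip(X, y)])
    print(f"training accuracy: {accuracy:.2f}")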
The goal of this training is to get the network to consistently recognize the patterns in speech or sets of images that we humans recognize, much as a child learns by noticing the details of an object.
What stunned some AI experts, though, was the magnitude of the improvement in image recognition, a task that is challenging even for most humans. When the software was asked to sort images into 1,000 more general categories, its accuracy rose above 50 percent.
Big Data
Training the many layers of virtual neurons in the experiment took around 16,000 computer processors, the kind of computing infrastructure that Google has developed for its search engine and other services. About 80 percent of the recent advances in AI can be credited to the availability of more computing power.
Deep learning has changed voice search on our devices. Because the extra layers of neurons allow for more precise training on the many variants of a sound, a device can recognize scraps of sound more reliably, especially in noisy environments such as subway platforms. Since it is likelier to understand what was actually uttered, the result it returns is likelier to be accurate as well.
Not everyone thinks deep learning can move AI toward something rivaling human intelligence. Some critics say deep learning and AI in general ignore too much of how the brain actually works in favor of brute-force computing.
What’s Next?
Google has already started thinking about future applications, and the prospects are fascinating. Better image search would help YouTube, for instance, and more sophisticated image recognition could make Google's self-driving cars much better. Then there are search and the ads that underwrite it; both could improve from any technology that is better and faster at recognizing what people are really looking for, maybe even before they realize it.
The goal is to pin down the exact meanings of words, phrases, and sentences, which always trip up computers. That, in turn, will require a more sophisticated way to graph the syntax of sentences, and Google has already started using this kind of analysis to improve grammar in translations. Language understanding will also require machines to grasp what we humans think of as common-sense meaning. Finally, there are plans to apply such algorithms to help computers deal with the "boundaries in language."
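As a rough illustration of what "graphing the syntax of a sentence" can mean, here is a toy dependency-style structure built by hand in Python; the sentence and labels are made up for illustration, and real systems derive such graphs automatically with statistical parsers.

    # A toy syntax graph for "The dog chased the ball": each word points
    # to the word it depends on (its "head") and the grammatical role it
    # plays, the kind of structure a parser would produce automatically.
    sentence = ["The", "dog", "chased", "the", "ball"]

    # (word, head index, relation) -- head index -1 marks the root verb.
    dependencies = [
        ("The",    1, "determiner"),
        ("dog",    2, "subject"),
        ("chased", -1, "root"),
        ("the",    4, "determiner"),
        ("ball",   2, "object"),
    ]

    for word, head, relation in dependencies:
        head_word = "ROOT" if head == -1 else sentence[head]
        print(f"{word:>7} --{relation}--> {head_word}")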
Self Driving Car
Microsoft also has promising research on likely uses of deep learning in machine vision. It has also envisioned personal sensors whose data deep neural networks could use to predict medical problems, and sensors placed around a city might feed deep-learning systems that could predict where traffic jams might occur.
In a field that attempts something as profound as modeling the human brain, it is certain that no single technique will solve every problem. But for now, deep learning is the one leading the way in artificial intelligence, and it is a really powerful metaphor for learning about the world.