The Next Frontier of Artificial Intelligence
Emotion-Reading AI

Judging by the headlines, it would be easy to believe that AI will one day, slowly but surely, take over the world. Kai-Fu Lee, a Chinese venture capitalist, believes AI will soon create tens of trillions of dollars in wealth, and names the U.S. and China as the two AI superpowers.

There is no questioning AI's mind-boggling potential, but we must also remember that the technology is still in its infancy and that no AI superpower has been established yet. Even the most advanced, cutting-edge AI technology is open source for the time being.

Tech giants stoke the hype with sophisticated demonstrations of AI. Google's AlphaGo Zero, for instance, learned one of the world's trickiest board games in just three days and handily defeated its top-ranked human players. Meanwhile, numerous companies are claiming breakthroughs with self-driving vehicles. But don't be hoodwinked: games are a different matter entirely, and self-driving cars are still on their training wheels.

Today's AI systems try to replicate the workings of the human brain's neural networks, but their imitations are very limited. They use a technique called deep learning: after you define what you want it to learn and feed it clearly labeled examples, it finds patterns in that data and stores them for future use. Because the accuracy of its patterns depends on the breadth of the data, the more examples you give it, the more useful it becomes.
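The "feed it examples, let it find patterns" loop can be sketched in a few lines. This is a toy stand-in, not a real deep network: a single logistic neuron trained by gradient descent on hand-made labeled examples (the function names and the OR-gate data are illustrative choices, not anything from the article).

```python
import math

def sigmoid(z):
    """Squash a score into a probability between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-z))

def train(examples, epochs=2000, lr=0.5):
    """Learn weights from labeled examples: list of ((x1, x2), label) pairs.

    Each pass nudges the weights to reduce the error on every example;
    this is the pattern-finding step the paragraph above describes.
    """
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in examples:
            p = sigmoid(w1 * x1 + w2 * x2 + b)
            err = p - y  # gradient of the log-loss w.r.t. the raw score
            w1 -= lr * err * x1
            w2 -= lr * err * x2
            b -= lr * err
    return w1, w2, b

def predict(params, x1, x2):
    w1, w2, b = params
    return sigmoid(w1 * x1 + w2 * x2 + b)

# Labeled examples (here, a logical OR). More and broader examples
# would pin down the pattern more accurately -- exactly the
# "comprehensiveness of data" point made above.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
params = train(data)
```

After training, `predict(params, 0, 0)` is close to 0 and `predict(params, 1, 1)` close to 1: the model has memorized the pattern in its examples, and nothing more.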

Herein lies the catch, though: an AI is only as good as the data it receives, and it can interpret that data only within the narrow confines of the supplied context. It does not understand what it has analyzed, so it cannot apply its analysis to other scenarios or contexts. Nor can it distinguish causation from correlation.

The bigger problem with this type of AI is that what it has learned remains a mystery: a set of indefinable responses to data. Once a neural network has been trained, not even its designer knows exactly how or why it does what it does. This is called the black box of AI.

Then there is the issue of reliability. Airlines are installing facial-recognition systems, and AI is being used for credit analysis, for marketing, and to control robots, drones, and cars. Yet it can be easily fooled.

Last December, Google published a paper showing that it could trick AI systems into recognizing a banana as a toaster. Without even using knowledge of what a system had learned, researchers at the Indian Institute of Science demonstrated that they could confuse almost any AI system the same way. With AI, privacy and security are an afterthought.
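Attacks of this kind work by nudging an input in whatever direction most increases the model's error, so a tiny, targeted change flips the answer. A minimal sketch, assuming a hand-picked linear classifier (the weights and the perturbation size here are toy values, not Google's or IISc's actual method):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# A fixed, already-"trained" linear classifier: the input x is called
# positive when w.x + b > 0. Weights are chosen by hand for illustration.
w = [2.0, -1.0]
b = 0.1

def classify(x):
    """Probability that x belongs to the positive class."""
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b)

def adversarial(x, eps):
    """Nudge each feature of x by eps in the direction that most
    increases the model's loss (the sign of the gradient w.r.t. x)."""
    p = classify(x)
    y = 1 if p > 0.5 else 0
    grad = [(p - y) * wi for wi in w]  # gradient of log-loss w.r.t. x
    return [xi + eps * (1 if g > 0 else -1) for xi, g in zip(x, grad)]

x = [0.2, 0.3]                  # classified as positive
x_adv = adversarial(x, 0.2)     # small nudge flips it to negative
```

A perturbation of 0.2 per feature, imperceptible in a high-dimensional input like an image, is enough to flip the label: the same fragility that turns a banana into a toaster.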

Top AI firms have handed over the keys to the kingdom by making their technology open source. Software was once treated as a trade secret, but developers have come to realize the value of opening their AI code to the public, free for anyone to explore, improve, and adapt.