The unprecedented rise of Artificial Intelligence (AI) and AI-based technologies in the current decade has raised grave concerns regarding the future of humanity. AI has already eliminated many low-skill jobs by automating tasks, rendering a significant portion of the population jobless in the process. Humans have also begun using deep learning techniques to train AI to produce music and paint portraits, diminishing the value of human artistry in the process. Defense organizations in developed countries have sought the help of AI to autonomously kill people in war and terror zones. Meanwhile, an increasingly worrying trend in AI-based software is that it develops the same intrinsic faults and biases as its human creators. As AI moves away from objectivity and takes on the mindset of the people whose data is fed into it, we stand on the precipice of creating some potentially dangerous AI systems.
Instead of developing AI to carry out the aforementioned tasks, we should direct it toward solving the myriad humanitarian issues facing today's society. Google, one of the world's leading organizations developing AI, has started an initiative called "AI for Social Good" that strives to achieve exactly that. On 29th October, 2018, Google announced a competition as part of the initiative that encourages independent groups to develop AI-based applications with a positive impact on society at large. The "AI Impact Challenge", as it's called, will give away $25 million to projects that propose novel ways to use AI to help create a more humane society, ensuring that the money helps those projects transform their ideas into action.
Google hopes that the projects will use AI to solve problems in areas like environmental science, wildlife conservation, healthcare, and human trafficking. The company has already collaborated with the National Oceanic and Atmospheric Administration (NOAA) to use AI to locate whales by tracking and identifying their calls, in an effort to protect them from environmental and human threats. According to Google, AI has advanced enough to predict floods and identify areas of forest that are susceptible to wildfires.
Beyond developing AI for humanitarian ends, another important issue Google wishes to address is the elimination of biases in AI software that replicate human prejudice. Recently, Google had to pull part of its photo-tagging algorithm that could not differentiate a Black man from a gorilla, because the AI had been trained on images consisting primarily of the faces of Caucasian and Asian men. Such incidents have raised concern that AI could absorb human racist biases, and that novel methods are needed to eliminate such subjectivity in future algorithms.
Google recently released a set of principles to guide its AI development, after pulling out of a U.S. defense project that aimed to use AI for military purposes. "We're all grappling with questions of how AI should be used," the company's head of AI, Jeff Dean, stated. "AI truly has the potential to improve people's lives." The "AI Impact Challenge" seeks to bring nonprofits, universities, and other organizations outside the corporate, profit-driven world of Silicon Valley into the forward-looking development of AI research and applications. Google will announce the winners of the competition at the 2019 Google I/O developer conference, and will also offer cloud resources to aid the development of the winning projects.