Some of the biggest hurdles in the field of artificial intelligence involve preventing AI software from developing the same intrinsic faults and biases as its human creators, and using AI to solve social issues instead of simply automating tasks. Now, Google, one of the world’s leading organizations developing AI software today, is launching a global competition to help spur the development of applications and research that have positive impacts on the field and society at large.

The competition, called the AI Impact Challenge, was announced today at an event called AI for Social Good held at the company’s Sunnyvale, California office, and it’s being overseen and managed by the company’s Google.org charitable arm. Google is positioning it as a way to bring nonprofits, universities, and other organizations outside the corporate, profit-driven world of Silicon Valley into the forward-looking development of AI research and applications. The company says it will award up to $25 million to a number of grantees to “help transform the best ideas into action.” As part of the contest, Google will offer cloud resources for the projects, and applications open today. Accepted grantees will be announced at next year’s Google I/O developer conference.

Top of mind for Google with this initiative is using AI to solve problems in areas like environmental science, health care, and wildlife conservation. Google says AI is already used to help pin down the location of whales by tracking and identifying whale sounds, data that can then be used to help protect them from environmental and human threats. The company says AI can also be used to predict floods and to identify areas of forest that are especially susceptible to wildfires.

Another big area for Google is eliminating biases in AI software that could replicate the blind spots and prejudices of human beings. One notable and recent example was Google admitting in January that it couldn’t fix its photo-tagging algorithm, which had been identifying black people in photos as gorillas, a failure initially attributed to a largely white and Asian workforce unable to foresee how its image recognition software could make such fundamental mistakes. (Google’s workforce is only 2.5 percent black.) Instead of figuring out a solution, Google simply removed the ability to search for certain primates on Google Photos. It’s those kinds of problems, the ones Google says it has trouble foreseeing and needs help solving, that the company hopes its contest can address.

The competition, alongside Google’s new AI for Social Good program, follows a public pledge published in early June, in which the company said it would never develop AI weaponry and that its AI research and product development would be guided by a set of ethical principles. As part of those principles, Google said it would not work on AI surveillance projects that violate “internationally accepted norms,” and that its research would follow “widely accepted principles of international law and human rights.” The company also said its AI research would primarily focus on projects that are “socially beneficial.”

In recent months, many of technology’s biggest players, Google included, have grappled with the ethics of developing technology and products that may be used by the military, or that could contribute to the development of surveillance states in the US and abroad. Many of these technologies, like facial and image recognition, involve sophisticated uses of AI. Google, in particular, has found itself embroiled in controversies around its participation in a US Department of Defense drone initiative called Project Maven, and around its secret plans to launch a search and algorithmic news product for the Chinese market.

After severe internal backlash, external criticism, and employee resignations, Google agreed to pull back from its work with Project Maven following the fulfillment of its contract. Yet Google has said it’s still actively exploring a product for the Chinese market, despite concerns it could be used to surveil Chinese citizens and tie their offline activities to their online behavior. Google has also said it still plans to work with the military, and its controversial Google Duplex service, which uses AI to mimic a human and make calls on a user’s behalf, will begin rolling out on Pixel devices next month.

Jeff Dean, the head of the company’s Google Brain AI division and a senior research fellow, says the AI Impact Challenge is not really a reaction to the company’s recent controversies around military and surveillance-related work. “This has been in the works for quite some time. We’ve been doing work in the search space that is socially beneficial and not directly related to commercial applications,” he told a group of reporters after the event. “It’s really important for us to show what the potential for AI and machine learning can be, and to lead by example.”

This article originally appeared on The Verge.