Amazon and National Science Foundation earmark $10 million for AI fairness research
Amazon aims to promote the development of “fair” AI systems that minimize bias and address issues of transparency and accountability. Toward that end, it today announced that it’ll work with the National Science Foundation (NSF) to commit up to $10 million in research grants over the next three years focused on fairness in AI and machine learning.
“With the increasing use of AI in everyday life, fairness in artificial intelligence is a topic of increasing importance across academia, government, and industry,” wrote Prem Natarajan, vice president of natural understanding in the Alexa AI group, in a blog post. “Here at Amazon, the fairness of the machine learning systems we build to support our businesses is critical to establishing and maintaining our customers’ trust.”
Amazon’s partnership with NSF will specifically target explainability, potential adverse biases and effects, mitigation strategies, validation of fairness, and considerations of inclusivity, with the goal of enabling “broadened acceptance” of AI systems and allowing the U.S. to “further capitalize” on the potential of AI technologies. The two organizations expect proposals, which they’re accepting from today through May 10, to result in new open source tools, publicly available datasets, and publications.
Amazon and the NSF say they’ll continue the program in 2020 and 2021 with additional calls for letters of intent.
“We are excited to announce this new collaboration with Amazon to fund research focused on fairness in AI,” said Jim Kurose, NSF’s head of computer and information science and engineering. “This program will support research related to the development and implementation of trustworthy AI systems that incorporate transparency, fairness, and accountability into the design from the beginning.”
With today’s announcement, Amazon joins a growing number of corporations, academic institutions, and consortiums engaged in the study of ethical AI. Already, their work has produced algorithmic bias mitigation tools that promise to accelerate progress toward more impartial AI.
In May, Facebook announced Fairness Flow, which automatically warns if an algorithm is making an unfair judgment about a person based on his or her race, gender, or age. Accenture released a toolkit that automatically detects bias in AI algorithms and helps data scientists mitigate that bias. Microsoft launched a solution of its own in May, and in September, Google debuted the What-If Tool, a bias-detecting feature of the TensorBoard web dashboard for its TensorFlow machine learning framework.
Not to be outdone, IBM in the fall released AI Fairness 360, a cloud-based, fully automated suite that “continually provides [insights]” into how AI systems make their decisions and recommends adjustments, such as algorithmic tweaks or counterbalancing data, that might lessen the impact of prejudice. And recent research from its Watson and Cloud Platforms group has focused on mitigating bias in AI models, particularly those used for facial recognition.
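To make the idea concrete, the snippet below is a minimal sketch of that detect-then-mitigate loop using aif360, the open source Python package underlying AI Fairness 360. The toy hiring data, column names, and the choice of reweighing as the mitigation step are illustrative assumptions, not a description of how IBM’s cloud suite works internally.

```python
# A minimal sketch of a detect-then-mitigate workflow with aif360. The toy
# hiring data, column names, and group definitions below are invented for
# illustration; only the aif360 classes and methods come from the library.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy data: 'sex' is the protected attribute (1 = privileged group) and
# 'label' is the favorable outcome (1 = hired).
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "score": [0.9, 0.8, 0.7, 0.4, 0.9, 0.6, 0.5, 0.3],
    "label": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)
groups = dict(privileged_groups=[{"sex": 1}],
              unprivileged_groups=[{"sex": 0}])

# Statistical parity difference: the favorable-outcome rate of the
# unprivileged group minus that of the privileged group (0 means parity).
print("before:", BinaryLabelDatasetMetric(dataset, **groups)
      .statistical_parity_difference())

# Reweighing is one form of "counterbalancing data": it assigns instance
# weights so group membership and outcome become statistically independent.
reweighed = Reweighing(**groups).fit_transform(dataset)
print("after:", BinaryLabelDatasetMetric(reweighed, **groups)
      .statistical_parity_difference())
```

Reweighing is a preprocessing approach; aif360 also ships in-processing and post-processing algorithms that intervene at training or prediction time instead.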