A handful of companies dominates not only how artificial intelligence is developed but also how it is critiqued. It's time for that to change.

Timnit Gebru, a giant in the world of AI and then co-leader of Google's AI ethics team, was pushed out of her job in December. Gebru had been fighting with Google over a research paper she had coauthored, which explored the risks of the AI models that the search giant uses to power its core products, including almost every English-language query on Google. The paper highlighted the potential biases (racial, gender-based, and others) of these language models, as well as the outsize carbon emissions required to compute them. Google wanted the paper retracted; Gebru refused. After the company abruptly announced her departure, Google AI chief Jeff Dean insinuated that her work was not up to snuff, despite Gebru's credentials. The backlash was immediate: thousands of Googlers and outside researchers signed a protest letter and called out Google for attempting to marginalize its critics, particularly those from underrepresented backgrounds.

A champion of diversity and equity in AI, Gebru is a Black woman and was one of the few in Google's research organization. In the aftermath, Alphabet CEO Sundar Pichai pledged an investigation, the results of which have not yet been released. (Google declined to comment for this story.)

To many who work in AI ethics, Gebru’s ouster was a shock but not a surprise, and served as a stark reminder of how Big Tech dominates their field. A handful of giant companies are able to use their money to direct the development of AI and decide who gets to critique it.

At stake is the equitable development of a technology that already underpins many of our most important automated systems.

From credit scoring and criminal sentencing to healthcare access and whether you get a job interview, AI algorithms are making life-altering decisions for people, with no oversight or transparency. The harms these models can cause out in the world are becoming apparent: false arrests based on biased facial recognition technology, discriminatory hiring systems, racist predictive-policing dashboards. For AI to work for all members of society, the power dynamics across the industry have to change. The people most likely to be harmed by algorithms, those in marginalized communities, need a say in its development. "If the right people are not at the table, it's not going to work," Gebru says. "And in order for the right people to be at the table, they have to have power."

Big Tech's influence over AI ethics is near total. It begins with companies' ability to lure top minds to industry research labs with the promise of prestige, computational resources, in-house data, and cold, hard cash. And it extends throughout academia, to an extraordinary degree. A 2020 study of four top universities found that a majority of AI ethics researchers whose funding sources are known have accepted money from a tech giant. Indeed, one of the largest pools of money dedicated to AI ethics is a joint grant funded by the National Science Foundation and Amazon, presenting a classic conflict of interest. "Amazon has a lot to lose from some of the suggestions that are coming out of the ethics in AI community," says Rediet Abebe, a computer science professor at UC Berkeley who cofounded the organization Black in AI with Gebru to provide support for Black researchers in an overwhelmingly white field. Perhaps unsurprisingly, nine of the first 10 principal investigators awarded grant money from the NSF-Amazon pool are male, and all are white or Asian. (Amazon did not respond to a request for comment.)

Meanwhile, it's not clear whether in-house AI ethics researchers have any say in what their employers are developing. Large tech companies are typically more focused on shipping products quickly than on understanding the potential impacts of their AI. Many watchdogs believe that Big Tech's investments in AI ethics, including in-house teams like the one Gebru used to lead, are little more than PR. "This [problem] is bigger than just Timnit," says Safiya Noble, a professor at UCLA and the cofounder and co-director of the Center for Critical Internet Inquiry.

____________________________________________________________________________________

Author: Katharine Schwab.