Elon Musk fired Twitter’s Ethical AI team

As more and more problems with AI have surfaced, including biases around race, gender and age, many tech companies have installed “ethical AI” teams supposedly dedicated to identifying and mitigating such problems.

Twitter’s META unit has been more forthcoming than most, releasing details about problems with the company’s AI systems and allowing outside researchers to probe its algorithms for new problems.

Last year, after Twitter users noticed that the photo-cropping algorithm appeared to favor white faces when choosing how to crop images, Twitter made the unusual decision to let its META unit publish details of the bias it had discovered. The group also launched one of the first ever “bias bounty” contests, which allowed outside researchers to test the algorithm for other problems. Last October, Chowdhury’s team also published details of unintentional political bias on Twitter, showing that right-leaning news sources were, in fact, promoted more than left-leaning ones.

Many outside researchers saw the layoffs as a blow, not just to Twitter, but to efforts to improve AI. “What a tragedy,” Kate Starbird, an associate professor at the University of Washington who studies online misinformation, wrote on Twitter.

“The META team was one of the few good case studies of a tech company running an AI ethics group that communicates with the public and academia with significant credibility,” says Ali Alkhatib, director of the Center for Applied Data Ethics at the University of San Francisco.

Alkhatib says Chowdhury is incredibly well-regarded within the ethical AI community, and her team has done truly valuable work holding Big Tech accountable. “There aren’t many corporate ethics teams worth taking seriously,” he says. “This was one of those whose work I taught in classes.”

Mark Riedl, a professor who studies artificial intelligence at Georgia Tech, says the algorithms used by Twitter and other social media giants have a huge impact on people’s lives and should be studied. “Whether META had any impact on Twitter is hard to tell from the outside, but the promise was there,” he says.

Riedl adds that letting outsiders examine Twitter’s algorithms is an important step toward greater transparency and understanding of AI-related issues. “They were becoming a watchdog that could help the rest of us understand how AI affects us,” he says. “The researchers at META had outstanding credentials with a long history of studying AI for social good.”

As for Musk’s idea of open-sourcing the Twitter algorithm, the reality would be far more complicated. Many different algorithms affect how information surfaces, and they are hard to understand without the real-time data, tweets, views, and likes, being fed into them.

The idea that there is a single algorithm with an explicit political affiliation oversimplifies a system that may contain more insidious biases and problems. Uncovering those is exactly the kind of work Twitter’s META group had been doing. “There aren’t many groups that rigorously study the biases and errors of their own algorithms,” says Alkhatib of the University of San Francisco. “META did.” And now, it no longer does.
