MIT Created a Racist AI But the Researchers Don’t Know How it Works

Scientists found that AI programs can determine someone's race with over 90% accuracy just from their X-rays!

Scientists at Harvard and MIT are part of an international team of researchers who found that artificial intelligence programs can determine someone's race with over 90% accuracy just from their X-rays. But there is a problem: no one knows how the AI programs do it.

Artificial intelligence has a racism problem. Look no further than the chatbots that go on racist tirades, the facial recognition technology that refuses to see Black people, or the discriminatory hiring bots that won't hire people of color. It's a pernicious issue plaguing the world of neural networks and machine learning, one that not only reinforces existing biases and racist thinking but also worsens the effects of racist behavior towards communities of color everywhere.

And when it's coupled with the existing racism in the medical world, it can be a recipe for disaster.

That's what's so concerning about a new study published last week in The Lancet Digital Health by a team of researchers from MIT and Harvard Medical School, which created an AI that could accurately identify a patient's self-reported race from medical images such as X-rays alone.

The miseducation of algorithms is a critical problem; when artificial intelligence mirrors the unconscious thoughts, racism, and biases of the humans who generated these algorithms, it can lead to serious harm. Computer programs, for example, have wrongly flagged Black defendants as twice as likely to re-offend as someone who's white. When an AI used cost as a proxy for health needs, it falsely labeled Black patients as healthier than equally sick white ones, because less money was spent on them. Even an AI used to write a play relied on harmful stereotypes for casting.

Examples of bias in natural language processing are boundless, but MIT scientists have investigated another important and largely underexplored modality: medical images. Using both private and public datasets, the team found that AI can accurately predict the self-reported race of patients from medical images alone. Using imaging data from chest X-rays, limb X-rays, chest CT scans, and mammograms, the team trained a deep learning model to identify race as white, Black, or Asian, even though the images themselves contained no explicit mention of the patient's race. This is a feat even the most seasoned physicians cannot perform, and it is not clear how the model was able to do it.
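For readers who want a rough sense of what that kind of setup looks like in practice, the sketch below shows, in PyTorch, how an off-the-shelf image classifier can be fine-tuned to predict a three-way self-reported race label from images. It is purely illustrative: the DenseNet-121 backbone, the synthetic placeholder data, and the training settings are assumptions for demonstration, not the study's actual code, datasets, or hyperparameters.

```python
# Minimal illustrative sketch (not the study's code): fine-tune a standard CNN
# to predict a 3-way self-reported race label from chest X-ray images.
# The data below is synthetic random tensors used as a stand-in; the real study
# used large public and private chest X-ray, limb X-ray, CT, and mammogram sets.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

NUM_CLASSES = 3  # white, Black, Asian, matching the label set described above

# Placeholder data: 32 fake "X-rays" as 3x224x224 tensors with random labels.
images = torch.randn(32, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (32,))
loader = DataLoader(TensorDataset(images, labels), batch_size=8, shuffle=True)

# DenseNet-121 is a common backbone for chest X-ray work (an assumption here);
# swap its classifier head for a 3-class output.
model = models.densenet121(weights=None)
model.classifier = nn.Linear(model.classifier.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(2):  # a real run would train far longer on real images
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.3f}")
```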

In an attempt to tease out the enigmatic "how" of it all, the researchers ran a slew of experiments. To investigate possible mechanisms of race detection, they examined variables such as differences in anatomy, bone density, and image resolution, among many others, and the models still detected race from chest X-rays with high accuracy. "These results were initially confusing because the members of our research team could not come anywhere close to identifying a good proxy for this task," says paper co-author Marzyeh Ghassemi, an assistant professor in the MIT Department of Electrical Engineering and Computer Science and the Institute for Medical Engineering and Science (IMES), who is an affiliate of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and of the MIT Jameel Clinic. "Even when you filter medical images past where the images are recognizable as medical images at all, deep models maintain a very high performance. That is concerning because superhuman capacities are generally much more difficult to control, regulate, and prevent from harming people."
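One of the ablation styles described above, degrading the images until they are barely recognizable and then checking whether the model's performance holds up, can be sketched roughly as follows. The blur and downsampling settings and the helper function name are illustrative assumptions, not the paper's exact experimental protocol.

```python
# Illustrative sketch of one kind of degradation test: heavily blur and
# downsample the images, then re-measure how well the trained model still
# predicts the race label. Settings here are assumptions for illustration.
import torch
from torchvision.transforms import functional as TF

def degraded_accuracy(model, loader, blur_kernel=21, low_res=32):
    """Blur and down/up-sample each batch, then compute prediction accuracy."""
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for x, y in loader:
            x = TF.gaussian_blur(x, kernel_size=blur_kernel)      # heavy blur
            x = TF.resize(x, [low_res, low_res], antialias=True)  # discard detail
            x = TF.resize(x, [224, 224], antialias=True)          # restore input size
            preds = model(x).argmax(dim=1)
            correct += (preds == y).sum().item()
            total += y.numel()
    return correct / total

# Example, reusing the model and loader from the previous sketch:
# print(degraded_accuracy(model, loader))
```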

"The fact that algorithms 'see' race, as the authors convincingly document, can be dangerous. But an important and related fact is that, when used carefully, algorithms can also work to counter bias," says Ziad Obermeyer, associate professor at the University of California at Berkeley, whose research focuses on AI applied to health. "In our own work, led by computer scientist Emma Pierson at Cornell, we show that algorithms that learn from patients' pain experiences can find new sources of knee pain in X-rays that disproportionately affect Black patients and are disproportionately missed by radiologists. So just like any tool, algorithms can be a force for evil or a force for good which one depends on us, and the choices we make when we build algorithms."
