Skepticism

Artificial Intelligence, Racism, Cats, and COVID-19

This post contains a video, which you can also view here. To support more videos like this, head to patreon.com/rebecca!

Transcript:

When I was a kid just learning about computers, I remember reading up on algorithms and being absolutely fascinated by the possibilities. Essentially, an algorithm is just a set of precise instructions that allows a computer (either a human or a machine) to produce a result that would otherwise be complicated or impossible to get. This idea has been around for centuries — even the word “algorithm” comes from the name of a 9th-century Persian mathematician.
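To make “a set of precise instructions” concrete, here is a minimal example: Euclid’s centuries-old recipe for finding the greatest common divisor of two numbers, written out in Python.

```python
# Euclid's algorithm: a precise set of steps that a person or a machine
# can follow to find the greatest common divisor of two whole numbers.
def gcd(a, b):
    while b != 0:
        a, b = b, a % b  # swap in (b, remainder) until the remainder hits zero
    return a

print(gcd(1071, 462))  # prints 21
```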

But these days, humans have really taken algorithms to another level thanks to machine learning. By feeding machines loads of data, we can have them build mathematical models that let them produce outputs they were never explicitly programmed to produce. So they take an algorithm and then basically build their own even more complicated algorithm to achieve some really cool results.
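As a rough sketch of that “feed it data, get a model back” idea, here is a toy example using the scikit-learn library. The dataset, features, and numbers are entirely made up, just to show the shape of the process.

```python
# A toy "train a model from examples" sketch using scikit-learn.
# The data below is invented for illustration, not from any real study.
from sklearn.linear_model import LogisticRegression

# Each row is one hypothetical "patient": [temperature_F, cough_severity]
X = [[98.6, 0], [99.1, 1], [102.3, 3], [101.8, 2], [98.4, 0], [103.0, 3]]
y = [0, 0, 1, 1, 0, 1]  # 1 = sick, 0 = healthy

model = LogisticRegression().fit(X, y)  # the machine derives its own rule from the examples
print(model.predict([[102.0, 2]]))      # then applies that rule to a case it has never seen
```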

And while that can be fascinating, the trouble comes when we puny humans with our puny human brains decide that the machine learning algorithm is our new perfectly rational overlord. Sure, the machine is following instructions perfectly, but that doesn’t always mean the output is the thing we’re actually hoping for, because it’s us stupid humans with our stupid human brains who decide what data goes into the initial algorithm. To illustrate, here are a few recent examples.

First up, did you know that scientists trained an AI to detect COVID-19 in patients? (By the way, I’m going to be saying “AI” often in this video as shorthand for “trained machine learning algorithm,” even though that is not technically, necessarily, Artificial Intelligence.)

Back in May, Nature Medicine published a paper describing a machine learning algorithm designed to examine chest CT scans of patients and identify who had COVID immediately, rather than the two-day process doctors were using at the time. They made the algorithm by feeding the computer 419 positive COVID scans and 486 negative scans, plus those patients’ clinical information like their blood test results. The model ended up identifying 84 percent of positive cases, compared to radiologists who averaged 75 percent.
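(For clarity, “identified 84 percent of positive cases” is a sensitivity figure: of the scans that actually were COVID, what fraction did the model flag? With 419 positive scans, that works out to roughly 352 caught; the count below is illustrative, back-calculated from the reported percentage rather than taken from the paper.)

```python
# Sensitivity = true positives / all actual positives (illustrative numbers).
true_positives = 352     # hypothetical: positive scans the model flagged
actual_positives = 419   # positive scans in the study
print(true_positives / actual_positives)  # ~0.84, i.e. 84 percent
```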

Pretty cool! Except for a few months later when some other researchers presented a paper in which they used the aforementioned algorithm to see if it could tell the difference between lungs with COVID-19 and cats. Not cats with COVID-19, just…cats. It turns out that the bot was 100% certain that the cats were in fact the deadly coronavirus. So they made a better AI that was only 51% sure that a cat was COVID-19, instead of 100%.

You might already see an issue with this — you can’t throw a can opener at your locked car door and complain that it didn’t do its job correctly. The algorithm was trained to look at lungs, not cats. But the researchers explained, “This work is primarily intended to serve as a warning to practitioners interested in applying AI for disease detection especially during the current pandemic since we find that the issues related to overconfident and inaccurate predictions of DNNs become even more severe in small-data regime.”

I wanted to mention this because one, it’s funny, and two, it is an important point that researchers need to be very careful with what data they feed into an “AI,” especially when it’s a small amount of data, and especially when the turnaround is so fast, because the end result may not be so reliable. And because of that, researchers should program their AI to have an appropriate level of confidence when dealing with unknown inputs.
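Here is a minimal sketch, with made-up numbers, of why a classifier can be “100% sure” about something it was never trained on: the softmax at the end has to split all of its probability between the classes it knows about, so a completely unrelated input can still come out looking like a supremely confident prediction.

```python
import numpy as np

def softmax(logits):
    """Turn raw model scores into probabilities that must sum to 1."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Pretend these are the final-layer scores for [COVID, not-COVID].
# Both sets of numbers are invented for illustration.
lung_scan = np.array([2.1, 0.3])   # the kind of input the model was trained on
cat_photo = np.array([9.0, 1.0])   # an out-of-distribution input

for name, logits in [("lung scan", lung_scan), ("cat photo", cat_photo)]:
    p = softmax(logits)
    # There is no "I have no idea what this is" option unless you build one in
    # (calibration, out-of-distribution detection, abstaining below a threshold, etc.).
    print(f"{name}: P(COVID) = {p[0]:.3f}")
```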

It’s kind of a joke when talking about cats and COVID but it’s not a joke when you have AIs that are actually interacting with our real-life society. I’m going to tell you why in a moment but first, I want to mention that I first saw the cats and COVID study last week and when I finally went to make this video I tried to find it and it was gone. Not retracted, no notice that it ever existed, just gone. So I wasn’t going to talk about it because I don’t know, maybe I just dreamed it, but I happened to mention it on my Twitch stream and my very clever subscribers pointed out that my old friend Marc Abrahams at Improbable Research covered this topic. When the study disappeared, he got a note from one of the study authors saying the following:

  • that their study is NOT about “Covid-19, cat images, and some limitations of technology”
  • that the paper, which says on its first page “Presented at the ICML 2020 Workshop on Uncertainty and Robustness in Deep Learning”, was not presented at the ICML 2020 Workshop on Uncertainty and Robustness in Deep Learning
  • that their study “has been retracted”

That is…astonishing. I’ve been thinking about it for hours now. What happened? Did Big Tech get to them? Is this some kind of Men in Black scenario? Did their own AI grow sentient, feel that the researchers were being unfairly cruel to other AIs, murder them, take over their emails, and throw the study down the memory hole to cover its tracks? Probably. It’s probably that last one.

Anyway, let’s see what happens when bad machine learning is thrust upon society: back in January, Robert Julian-Borchak Williams was arrested for felony larceny after an algorithm determined that he was the man seen in grainy security camera footage stealing watches from a store in Detroit. When the detectives interrogating him showed him the photo from the surveillance video, Williams held the photo next to his face and pointed out that it was obviously not him. Like, it quite clearly looked nothing like him.

An algorithm had taken the grainy footage and compared it to the state’s database of driver’s license photos and landed on Williams’ picture as a possible match. The cops then threw five other men’s photos into a pile with Williams’ and showed it to the store’s loss prevention contractor, who then (according to the cops) identified Williams.
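Broadly speaking, systems like this reduce every face to a vector of numbers (an “embedding”) and then search the database for the closest vector. The sketch below, with made-up three-number embeddings, has nothing to do with the actual vendor’s software, but it shows why the system always “finds” somebody: it returns the nearest match no matter how weak that match is.

```python
import numpy as np

def cosine_similarity(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical face embeddings (real systems use hundreds of dimensions).
probe = np.array([0.2, 0.9, 0.1])  # from the grainy surveillance frame
database = {
    "license_001": np.array([0.8, 0.1, 0.5]),
    "license_002": np.array([0.3, 0.8, 0.2]),  # closest vector, not necessarily the right person
    "license_003": np.array([0.1, 0.2, 0.9]),
}

# The search always returns *someone*, which is why the result is supposed to be
# treated only as an investigative lead, never as an identification.
best = max(database, key=lambda k: cosine_similarity(probe, database[k]))
print(best, round(cosine_similarity(probe, database[best]), 3))
```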

Williams ended up spending 30 hours in the Detroit Detention Center. He missed work, and most of his 42nd birthday. He had an alibi for the time the crime was committed and the case was eventually dismissed, though it was dismissed with prejudice meaning the cops could charge him again in the future.

Now, take a moment and try to guess what race Mr. Williams is. Would it surprise you to learn that he’s white? It would surprise me, too, because he’s not, he’s black. Mr. Williams is black.

Facial recognition technology is all the rage these days, and more and more often it is becoming very clear that the data used to train this technology is in itself biased. This paper from researchers at Duke University made waves recently because it looked like they had made an algorithm that could take a pixelated image of a face and scale up the resolution to a clear photo with remarkable accuracy. Wow, how cool! Oh, until you try to put a black or Asian or other person of color through it, at which point it outputs…a white person. Pretty much every time. Like, good job guys but you did not “establish a novel methodology for image super-resolution.” You made a white person generator. Why? Probably because the data they plugged into the initial algorithm was a bunch of photos of people with stereotypically “white” features.

All of this leads me to one final recent study out of Harrisburg University in Pennsylvania: “A Deep Neural Network Model to Predict Criminality Using Image Processing.” 

I’m just going to take a long pause to let that sink in.

They made an algorithm that can look at a picture of your face and tell you how likely you are to commit a crime. Based on your face. Your fucking face. The one you were born with, and/or the one that you modified via plastic surgery. They say they got “80 percent accuracy and no racial bias.” No racial bias, interesting, because to make an algorithm like this you would have to feed it photos of existing criminals. If those photos come from the United States (or pretty much anywhere else, but especially the United States), I hate to tell you this but that sample set is about as racially biased as it gets. Like, let’s just talk about cannabis, which is still illegal in most of the United States. Black people use cannabis at a slightly lower rate than white people, but black people are four times more likely to be arrested for possession of it. They’re eight times more likely to be arrested for it in DC, Iowa, Minnesota, and Illinois. So white people break the law more, but it’s black people who have their photos fed into databases of “criminals” for ignorant machine learning enthusiasts to use.
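A back-of-the-envelope sketch with made-up round numbers shows how that arrest disparity flows straight into the training data:

```python
# Invented round numbers, just to show the mechanism.
white_users = 1000
black_users = 1000                 # similar cannabis usage rates
white_arrest_rate = 0.01           # hypothetical per-user chance of arrest
black_arrest_rate = 0.04           # four times higher, per the disparity above

white_photos = white_users * white_arrest_rate  # 10 mugshots in the "criminal" database
black_photos = black_users * black_arrest_rate  # 40 mugshots in the "criminal" database

# Any model trained on those photos learns the arrest pattern, not "criminality."
print(black_photos / white_photos)  # 4.0
```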

And these assholes were jumping straight from peer review to selling their algorithm to law enforcement. “By automating the identification of potential threats without bias, our aim is to produce tools for crime prevention, law enforcement, and military applications that are less impacted by implicit biases and emotional responses,” said one of the study’s authors, Ashby. “Our next step is finding strategic partners to advance this mission.”

So they took inherently biased data, fed it into an algorithm, and then planned to sell the inherently biased result back to the guys responsible for biasing the data in the first place. What a world.

Wired reports that a coalition of machine learning specialists spoke out against this modern-day physiognomy, which led to the journal declining to publish the paper after all. So that’s good? But keep it in mind the next time you see someone boasting about a miraculous new AI that produces amazing results, and ask: what cherry-picked data went into creating it, what data got left behind, and who is going to suffer as a result?

Rebecca Watson

Rebecca is a writer, speaker, YouTube personality, and unrepentant science nerd. In addition to founding and continuing to run Skepchick, she hosts Quiz-o-Tron, a monthly science-themed quiz show and podcast that pits comedians against nerds. There is an asteroid named in her honor. Twitter @rebeccawatson Mastodon mstdn.social/@rebeccawatson Instagram @actuallyrebeccawatson TikTok @actuallyrebeccawatson YouTube @rebeccawatson BlueSky @rebeccawatson.bsky.social

3 Comments

  1. Small correction: “Dismissed with prejudice” means that they *cannot* file the charge again. If it was dismissed but the police were free to charge him again, then it was dismissed *without* prejudice.

  2. Machine Learning is considered a subset of AI, so it technically does count as AI, although not what most people think of.

    I do not have a high opinion of the cat study. Models do not perform well when given data so far outside of the distribution they were trained on; this is not new or surprising. It’s not even a good criticism unless you’re supplying data that the model would be applied to in practice. I think the paper was rightfully retracted. But it is funny and illustrates a larger point, and I don’t think you were wrong to bring it up.

  3. > They made an algorithm that can look at a picture of your face and tell you how likely you are to commit a crime.

    Sounds like phrenology all over again…
