Algorithmic accountability
Learnings from Stanford ETPP
The Stanford University extension course, “Ethics, Technology, and Public Policy for Practitioners,” explores the “ethical and social impacts of technological innovation.” This 7-week course was bookended by two speculative fiction stories: The Ones Who Walk Away from Omelas (1973), by Ursula K. Le Guin, and The Ones Who Stay and Fight (2020), by N.K. Jemisin. Jemisin’s story was written as a response to Le Guin’s. Although I strive to bring humanistic thinking and learning into my technological life, it’s easy to draw an artificial division between the humanities and technology. These stories, which envision almost-utopias with glaring flaws, both invite the reader to consider how she would respond to the societal injustice that makes the thoroughly blessed lives of each society’s citizens possible.
What do I take from these stories? We all know we live in flawed societies. Policies and regulations related to technology always carry imperfections, and indeed likely injustices, even when crafted with the most fair-minded intentions. In this essay, I look at issues around the use of algorithms, the harms that can follow from their unfairness, and possible solutions.
Hiring by algorithm: a case study
Let’s take a look at one of the case studies used in the course: Hiring by Machine. As background, consider the current job application landscape, at least in the United States. Recruiters and applicants alike report that any given job posting can draw hundreds of applications within a day, especially at well-known companies. Recruiters are left with the impossible task of fairly sorting through a mountain of applications, many of which do not meet even a fraction of the stated requirements. Companies employ various scanning and filtering strategies, most of which shouldn’t even be called AI: they simply search for specific keywords and reject any application that lacks them.
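As a concrete illustration, here is a minimal sketch of what such a keyword screen might look like. The required keywords and resume snippets are invented, and real applicant-tracking systems are more elaborate, but the core logic is often this blunt.

```python
# Minimal sketch of a naive keyword screen (hypothetical keywords and resumes).
REQUIRED_KEYWORDS = {"python", "kubernetes", "aws"}  # invented job requirements

def passes_keyword_screen(resume_text: str) -> bool:
    """Keep an application only if every required keyword appears in the resume."""
    text = resume_text.lower()
    return all(keyword in text for keyword in REQUIRED_KEYWORDS)

applications = [
    "Senior engineer with Python, Kubernetes, and AWS experience.",
    "Veteran, team lead, strong distributed-systems background.",  # rejected: no keyword hits
]

shortlist = [app for app in applications if passes_keyword_screen(app)]
print(shortlist)  # only the first application survives
```

A screen like this never evaluates whether a candidate could do the job; it only checks whether they used the right words.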
The company in the “Hiring by Machine” case study is Strategeion, which was founded by a group of Army veterans to create an online platform for the benefit of veterans. Strategeion became a well-known company, one that attracted applicants because of its virtues, and it was soon flooded with applications. The company turned to a home-grown assessment model that used machine learning to study previous successful candidates and seek out similar candidates in the pile of applications. When an applicant who had been disabled all her life applied, she was rejected. Upon investigation, the model was found to have rejected her because her profile lacked markers of past athleticism, which were very common among the company’s previous successful candidates, even though some of those employees had later become disabled by wartime injuries.
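The case study doesn’t spell out how Strategeion’s model worked, but a plausible reading of a model that studies previous successful hires and seeks out similar candidates is a similarity-based screen: represent the resumes of past hires as vectors and rank new applicants by how closely they resemble them. The sketch below is my own illustration of that idea with invented resume snippets, not Strategeion’s actual system.

```python
# Sketch of a similarity-based screen in the spirit of the case study (not the real model):
# score each applicant by TF-IDF similarity to the resumes of past successful hires.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_hires = [  # invented profiles of previously successful candidates
    "varsity rowing captain, army logistics officer, python developer",
    "marathon runner, platoon leader, built internal web tools",
]
applicants = [  # invented profiles of new applicants
    "wheelchair user, award-winning accessibility engineer, python and web development",
    "marathon runner, army veteran, python developer",
]

vectorizer = TfidfVectorizer()
hire_vectors = vectorizer.fit_transform(past_hires)
applicant_vectors = vectorizer.transform(applicants)

# Score each applicant by mean similarity to the past-hire profiles, highest first.
scores = cosine_similarity(applicant_vectors, hire_vectors).mean(axis=1)
for applicant, score in sorted(zip(applicants, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {applicant}")
```

The second applicant outranks the first simply because their profile echoes the athletic and military vocabulary of past hires, which is exactly the failure mode the case study describes.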
What is the fair solution to this situation? The recruiters were already swamped and unable to properly assess the resumes they received, which was why the model was developed in the first place. The model had apparently worked well up to that point: testing had shown that the applications it selected were similar to those chosen by human recruiters. My suspicion is that most technical recruiters would admit, if asked anonymously, that they cannot guarantee absolute fairness in their own decisions to keep or discard applications.
What many companies now do to bypass the problem of too many resumes is to reward referrals from employees within the company. Not all new employees come from referrals, but it’s well accepted that a referral gives a candidate a significant edge. However, this solution tends to produce candidates who resemble the existing employees, just as Strategeion’s assessment model did.
When I look for solutions to this situation, all of them come up short. Requiring Strategeion to employ enough recruiters to individually and fairly review every resume would eat up enormous resources, which are arguably better spent on its mission. Strategeion, once made aware of a flaw in its assessment model, can try to correct it and try again, but what is the appeal process for rejected applicants? Strategeion could rely on employee referrals for new candidates, a strategy that is at least as open to bias as the other methods. Ultimately, no matter which method is chosen, a huge number of qualified candidates will not be hired by Strategeion, simply for reasons of logistical capacity. One could argue that the real problem arises when the same unjust and inaccurate rejection patterns are repeated across multiple companies, because that means some candidates are effectively shut out indefinitely. Given that racism, sexism, ableism, and classism are pervasive, shutting out the same candidates repeatedly is a major concern.
Accountability for algorithmic harms
How do we account for algorithmic harms? Algorithmic Accountability: A Primer (“the primer”) sets out some guidelines. The primer defines an algorithm this way: “An algorithm is a set of instructions for how a computer should accomplish a particular task.” Although it’s common for people and companies to offload blame onto “the computer” or “the system” for unfairness, algorithms ultimately exist because of inputs from humans. Even algorithms defined by machine learning rely on data sets that are derived and scoped by humans. The algorithms can be so complex that no human can untangle their workings, but that doesn’t mean humans aren’t ultimately responsible. Cathy O’Neil, in her book Weapons of Math Destruction, defines an algorithm as an “opinion embedded in mathematics.”
The primer looks at the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) system, which has been used extensively by judges to assess the risk of recidivism for defendants in pretrial hearings. Because COMPAS did not use any data overtly labeled by race, its proponents argued it was free of racial bias. Yet the actual results from COMPAS showed a definite racial bias in favor of white defendants and against Black defendants, and COMPAS was found to be inaccurate in its recidivism predictions. Although Northpointe, the creator of COMPAS, has not released details of how its algorithm works, we know that zip codes, for example, can serve as an imperfect proxy for race, and with enough such data points, defendants can be categorized nearly as effectively as if race had been used overtly.
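An outside audit is how this bias came to light: ProPublica compared COMPAS’s predictions with actual outcomes and found that Black defendants who did not reoffend were far more likely than white defendants to have been labeled high risk. Below is a minimal sketch of that style of audit, comparing error rates across groups; the column names are hypothetical, and the real analysis used ProPublica’s published dataset of Broward County scores and outcomes.

```python
# Sketch of a group error-rate audit for a risk-score system (hypothetical column names).
import pandas as pd

def error_rates_by_group(df: pd.DataFrame) -> pd.DataFrame:
    """Compute false-positive and false-negative rates for each racial group.

    Expects boolean columns 'predicted_high_risk' and 'reoffended',
    plus a 'race' column; all names are hypothetical.
    """
    rows = []
    for race, group in df.groupby("race"):
        did_not_reoffend = ~group["reoffended"]
        reoffended = group["reoffended"]
        false_positives = (group["predicted_high_risk"] & did_not_reoffend).sum()
        false_negatives = (~group["predicted_high_risk"] & reoffended).sum()
        rows.append({
            "race": race,
            # Share of people who did NOT reoffend yet were labeled high risk.
            "false_positive_rate": false_positives / max(did_not_reoffend.sum(), 1),
            # Share of people who DID reoffend yet were labeled low risk.
            "false_negative_rate": false_negatives / max(reoffended.sum(), 1),
        })
    return pd.DataFrame(rows).set_index("race")

# Usage, given a hypothetical DataFrame of scores and outcomes:
# print(error_rates_by_group(scores))
```

Notice that an audit like this needs only the predictions and the outcomes, not the algorithm’s internals; the trade-secret problem is about explaining why the disparity exists, not detecting it.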
We cannot directly judge the bias of an algorithm without information that its creators consider trade secrets. If we are going to let algorithms make decisions crucial to people’s lives, we have to consider mechanisms to reveal their inner workings. We also have to consider who is accountable for faulty algorithms. In the case of COMPAS, no single person could be held accountable for its racial bias, despite the literal loss of freedom involved when COMPAS made a wrong prediction.
The primer suggests data journalism and government regulation as two methods of seeking greater algorithmic accountability. Data journalism is of immense value, but it typically requires a significant investment of time and effort, and unless the journalist works unpaid, it requires a sponsor of some sort, which can introduce its own bias. In the worst case, government regulation triggers an arms race with companies that seek to skirt the rules. In principle, companies can agree among themselves to self-regulate, but they cannot control rogue companies. Government regulation also tends to lag behind the actual technology.
What I can do
My actions, going forward, will be to advocate for transparent algorithms that can be audited for fairness. I will seek out data showing what results these algorithms actually produce. I will ask about the data sets used to train them. I will think critically about how algorithms directly affect me, even in such banal actions as choosing a film or a dress. And I will lobby for ways to appeal algorithmic decisions, and for ways to make such appeals fair.

