Constraining inequity in algorithm and policy design
It’s become something of a cliche that algorithms are everywhere. An enormous number of papers—including many of my own!—begin with some variation on the following:
Algorithmic decision-making systems are increasingly used to make decisions that affect people's lives in health care, criminal justice, lending, college admissions, hiring, and more. Like the human decision-makers they've replaced, these systems can exhibit biases that harm racial and gender minorities and other marginalized groups.
The cliche remains relevant because the problem is real: algorithms do make increasingly important decisions, sometimes to disastrous effect. The question, though, is what we should do about it.
People are jailed for unpaid court debts tens of thousands of times each year
Debtors’ prisons were once a central feature of the American legal system—so much so that even sitting Supreme Court justices could find themselves locked up for bad debts. Widely reviled—John Stuart Mill called them “barbarous expedients of a rude age, repugnant to justice as well as humanity”—debtors’ prisons were outlawed in nearly every American state by the mid-1800s. Nevertheless, over the last decade, reports have emerged of people going to jail for unpaid debts in Ferguson, Missouri; Corinth, Mississippi; Jackson, Mississippi; and other cities and states.
Police departments enforce speed limits in only a handful of areas, our research finds — and those areas are often disproportionately Black neighborhoods.
U.S. police officers pull over tens of thousands of drivers every day, making traffic stops the most common way people interact with law enforcement. They’re also a frequent source of tension: Black drivers are stopped and searched at higher rates than White drivers, and Black drivers are more likely than White drivers to say they were stopped for illegitimate reasons. Traffic stops can result in fines or the loss of a driver’s license even in the best cases, and can escalate to violence in the worst.
Do polling place movement and reassignment affect voter turnout?
Americans have fiercely debated closing and moving polling places in recent years. Civil rights groups like the Leadership Conference Education Fund have opposed high-profile polling location closures and relocations, charging that the changes represent a “particularly pernicious way to disenfranchise voters of color.” Many election administrators and other public officials have defended the changes as necessary concessions to efficiency, to ensure compliance with regulations like the ADA, or as a natural response to declining numbers of in-person voters.
A very brief introduction to prevalence and the mathematics of causal fairness
Can algorithmic fairness ever be harmful, even to the people it’s intended to protect? For some time, researchers have known that adhering to common algorithmic fairness criteria like equalized false positive rates can, counterintuitively, lead to worse outcomes for marginalized groups. The crux of the issue, however, is not whether such fairness criteria can be harmful, but how often. In our new paper on causal fairness, “Causal Conceptions of Fairness and their Consequences,” Hamed Nilforoshan, Ravi Shroff, Sharad Goel, and I tried to make rigorous the intuition that, for a growing number of fairness definitions incorporating causal reasoning, the answer is “almost always.”
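To make the "equalized false positive rates" criterion mentioned above concrete, here is a minimal sketch using tiny hypothetical data (the labels and predictions are invented for illustration, not drawn from the paper). A group's false positive rate is FP / (FP + TN): the share of truly-negative individuals the classifier flags as positive. The criterion demands this rate be equal across groups.

```python
# A group's false positive rate: among individuals whose true label is 0,
# what fraction did the classifier predict as 1?
def false_positive_rate(y_true, y_pred):
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return fp / (fp + tn)

# Hypothetical toy data: the same classifier applied to two groups.
# Group A: 1 of its 4 true negatives is flagged; Group B: 2 of 4.
y_true_a = [0, 0, 0, 0, 1, 1]
y_pred_a = [1, 0, 0, 0, 1, 1]
y_true_b = [0, 0, 0, 0, 1, 1]
y_pred_b = [1, 1, 0, 0, 1, 0]

print(false_positive_rate(y_true_a, y_pred_a))  # 0.25
print(false_positive_rate(y_true_b, y_pred_b))  # 0.5 -> criterion violated
```

When the rates differ, satisfying the criterion means changing decisions for one group or the other, and that adjustment, not the disparity itself, is where the counterintuitive harms can enter.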
Fast derivatives for functions you can't write down
One of the things that makes Stan powerful is that—in addition to a large library of standard mathematical functions (e.g., \(\exp(x)\), \(x^y\), \(x + y\), \(\Gamma(x)\))—it also supports the use of higher-order functions, such as solving a user-specified system of ODEs. This greatly expands the range of Bayesian models Stan can handle.
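The core trick behind differentiating through a higher-order function like an ODE solver can be illustrated with a minimal sketch (Stan itself uses reverse-mode automatic differentiation in C++; the forward-mode dual-number version below is a simplified stand-in, and all names are hypothetical). Even though the caller never writes down a closed form for the solution, the derivative propagates exactly through every step of the numerical solver.

```python
class Dual:
    """Forward-mode autodiff: a number a + b*eps with eps^2 = 0,
    carrying a value and its derivative through arithmetic."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__
    def __neg__(self):
        return Dual(-self.val, -self.der)

def solve_ode(a, n_steps=100, t_end=1.0):
    """Euler-integrate dy/dt = -a*y with y(0) = 1. The caller sees only
    a numerical procedure, not a closed-form solution."""
    h = t_end / n_steps
    y = Dual(1.0)
    for _ in range(n_steps):
        y = y + h * (-a * y)  # one Euler step; derivatives flow through
    return y

a = Dual(0.5, 1.0)   # seed: da/da = 1
y1 = solve_ode(a)
print(y1.val, y1.der)  # y(1) ~ exp(-0.5); y1.der is the exact d y(1)/d a
```

Because each Euler step is ordinary arithmetic, the dual numbers compute the exact derivative of the discrete solver's output, which is precisely what gradient-based samplers like Stan's HMC need.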