# Johann D. Gaebler

## Police stop Black drivers more often than Whites. We found out why. (Washington Post)

### Departments enforce speed limits in a handful of areas, our research finds — and those are often disproportionately Black.

U.S. police officers pull over tens of thousands of drivers every day, making traffic stops the most common way people interact with law enforcement. They’re also a frequent source of tension: Black drivers are stopped and searched at higher rates than White drivers, and Black drivers are more likely than White drivers to say they were stopped for illegitimate reasons. Traffic stops can result in fines or the loss of a driver’s license even in the best cases, and can escalate to violence in the worst.

## Blocks as Geographic Discontinuities

### Do polling place movement and reassignment affect voter turnout?

Americans have fiercely debated closing and moving polling places in recent years. Civil rights groups like the Leadership Conference Education Fund have opposed high-profile polling location closures and relocations, charging that the changes represent a “particularly pernicious way to disenfranchise voters of color.” Many election administrators and other public officials have defended the changes as necessary concessions to efficiency, to ensure compliance with regulations like the ADA, or as a natural response to declining numbers of in-person voters.

## Slicing Infinity

### A very brief introduction to prevalence and the mathematics of causal fairness

Can algorithmic fairness ever be harmful, even to the people it’s intended to protect? For some time, researchers have known that adhering to common algorithmic fairness criteria like equalized false positive rates can, counterintuitively, lead to worse outcomes for marginalized groups. The crux of the issue, however, is not whether such fairness criteria can be harmful, but how often. In our new paper on causal fairness, “Causal Conceptions of Fairness and their Consequences,” Hamed Nilforoshan, Ravi Shroff, Sharad Goel, and I tried to make rigorous the intuition that, for a growing number of fairness definitions incorporating causal reasoning, the answer is “almost always.”
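To make the "equalized false positive rates" criterion mentioned above concrete, here is a minimal sketch (not from the paper; the toy data and function names are hypothetical) of what the criterion checks: among individuals whose true outcome is negative, each group should be flagged by the classifier at the same rate.

```python
# Sketch of the equalized-false-positive-rates criterion.
# All data below is made up for illustration.

def false_positive_rate(y_true, y_pred):
    """FPR = flagged true negatives / all true negatives."""
    flags_on_negatives = [p for t, p in zip(y_true, y_pred) if t == 0]
    return sum(flags_on_negatives) / len(flags_on_negatives)

# Hypothetical outcomes (y_true) and classifier decisions (y_pred)
# for two demographic groups.
group_a = {"y_true": [0, 0, 0, 1, 1], "y_pred": [1, 0, 0, 1, 0]}
group_b = {"y_true": [0, 0, 1, 1, 0], "y_pred": [0, 1, 1, 0, 0]}

fpr_a = false_positive_rate(**group_a)  # 1 flag among 3 true negatives
fpr_b = false_positive_rate(**group_b)  # 1 flag among 3 true negatives

# The criterion is satisfied when the two rates are equal, as here;
# the paper's point is that enforcing such parity constraints can
# nonetheless produce worse outcomes for the protected group.
```

Note that the criterion constrains only error *rates*, not who bears the errors or what happens downstream, which is where the counterintuitive harms arise.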

## Autodiff for Implicit Functions in Stan

### Fast derivatives for functions you can't write down

One of the things that makes Stan powerful is that—in addition to a large library of standard mathematical functions (e.g., $$\exp(x)$$, $$x^y$$, $$x + y$$, $$\Gamma(x)$$, etc.)—it also supports the use of higher-order functions, such as solving a user-specified system of ODEs. This greatly expands the range of Bayesian models Stan can handle.
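The phrase "functions you can't write down" can be made concrete with the implicit function theorem, which is the standard mathematical tool behind differentiating such solver outputs. The sketch below is plain Python, not Stan's actual implementation: if $$x(\theta)$$ is defined only implicitly by $$f(x, \theta) = 0$$, its derivative is $$dx/d\theta = -f_\theta / f_x$$, computable without ever writing $$x(\theta)$$ in closed form. The quintic example and helper names here are illustrative choices, not anything from Stan.

```python
def f(x, theta):
    # Implicitly defines x(theta) via f(x, theta) = 0.
    # A quintic in x, so x(theta) has no closed form in general.
    return x**5 + x - theta

def solve_x(theta, lo=0.0, hi=10.0):
    """Find the root x(theta) by simple bisection on [lo, hi]."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo, theta) * f(mid, theta) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def dx_dtheta(x):
    # Implicit function theorem: dx/dtheta = -f_theta / f_x,
    # with f_theta = -1 and f_x = 5 x^4 + 1 for the f above.
    return 1.0 / (5.0 * x**4 + 1.0)

x = solve_x(2.0)      # x(2) = 1 exactly, since 1**5 + 1 = 2
grad = dx_dtheta(x)   # exact value at theta = 2 is 1/6
```

The key point mirrors what an autodiff system needs: the derivative comes from one linear solve at the already-computed root, rather than from differentiating through the iterations of the solver itself.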