In the 1980s, researchers tested a job training program called JOBSTART in 13 U.S. cities. In 12 locations, the program had minimal benefit. But in San Jose, California, the results were striking: after a few years, participants earned about $6,500 more per year than peers who did not take part.
So in the 1990s, researchers from the U.S. Department of Labor implemented the program in 12 other cities. The results, however, were not replicated, and San Jose's initial numbers remained an outlier.
This scenario could be a consequence of what experts call the "winner's curse": when programs, policies, or ideas are tested, even in rigorous randomized experiments, the thing that works best one time may work less well the next time. (The term "winner's curse" also refers to overly high winning bids at auctions, a different but related phenomenon.)
This winner's curse poses a problem for public officials, private-sector business leaders, and even scientists: by choosing the option that seems proven to work, they risk setting themselves up for disappointment. What goes up will often come down.
“In cases where people have multiple options, they choose the one that seems best to them, often based on the results of a randomized trial,” says Isaiah Andrews, an economist at MIT. “What you will find is that if you try this program again, it will tend to be disappointing compared to the initial estimate that led people to choose it.”
Andrews is a co-author of a study that examines this phenomenon and develops new tools for analyzing it, tools that could also help people avoid the curse.
The paper, "Inference on Winners," appears in the February issue of The Quarterly Journal of Economics. The authors are Andrews, a professor in MIT's Department of Economics and a specialist in econometrics, the field's statistical methods; Toru Kitagawa, a professor of economics at Brown University; and Adam McCloskey, an associate professor of economics at the University of Colorado.
Discerning the differences
The type of winner's curse addressed in this study dates back a few decades as a concept in the social sciences, and it appears in the natural sciences as well: as the researchers note in the paper, the winner's curse has been observed in genome-wide association studies, which attempt to link genes to traits.
When seemingly notable results fail to hold up, there can be a variety of reasons. Sometimes experiments or programs genuinely do not work the same way when people try to replicate them. In other cases, random variation alone can create the appearance of a standout result.
"Imagine a world in which all of these programs are exactly equally effective," Andrews says. "Well, just by chance, one of them will look better, and you'll tend to choose that one. That means you've overstated its effectiveness compared to the other options." Good data analysis can help determine whether an outlier result reflects true differences in effectiveness or mere random fluctuation.
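To make that mechanism concrete, here is a minimal Python simulation of the pure-chance case Andrews describes. All the numbers are hypothetical: 13 equally effective programs are each evaluated with noisy estimates, and the best-looking one is "chosen."

```python
# Hypothetical setup: 13 programs with IDENTICAL true effects; each
# trial estimate is the true effect plus sampling noise. We always
# "choose" the program with the best estimate and record its estimate.
import numpy as np

rng = np.random.default_rng(0)
n_programs, n_sims = 13, 100_000
true_effect, noise_sd = 1.0, 1.0   # every program is equally effective

estimates = true_effect + noise_sd * rng.standard_normal((n_sims, n_programs))
winners = estimates.max(axis=1)    # the estimate that drives the choice

print(f"true effect of every program: {true_effect:.2f}")
print(f"average estimate of the chosen 'winner': {winners.mean():.2f}")
# The winner's average estimate lands well above 1.0, so replicating
# the chosen program will look disappointing purely by chance.
```

Even though no program is actually better than any other here, the estimate that drives the choice averages far above the true effect, which is exactly the curse.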
To distinguish between these two possibilities, Andrews, Kitagawa, and McCloskey developed new methods of analyzing experimental results. In particular, they propose new estimators (ways of projecting effects from the data) that are "median-unbiased": equally likely to overestimate or underestimate effectiveness, even in winner's-curse settings.
The methods also produce confidence intervals that quantify the uncertainty surrounding these estimates. In addition, the researchers propose "hybrid" inference approaches, which combine multiple ways of weighing the evidence and, as they show, often yield more precise conclusions than the alternatives.
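As a rough illustration of the conditional logic behind such estimators, the sketch below works out the simplest textbook case: independent normal estimates with a known standard error. Conditional on an arm winning (given the other arms' observed values), the winner's estimate follows a normal distribution truncated below at the runner-up's value; solving for the mean that places the observed value at the conditional median gives a median-unbiased estimate, and solving for the tail percentiles gives a conditional confidence interval. This is a stylized version of the conditional approach only, not the authors' full hybrid procedure, and every number in it is made up.

```python
# A stylized sketch of conditional inference on a "winner": independent
# normal estimates, known standard error. Illustrative only.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import truncnorm

def conditional_inference_on_winner(estimates, se, level=0.95):
    """Median-unbiased estimate and conditional CI for the winning arm."""
    x = np.asarray(estimates, dtype=float)
    j = int(np.argmax(x))          # the arm that "won"
    cutoff = np.sort(x)[-2]        # runner-up's estimate

    def cond_cdf(mu):
        # Conditional on arm j winning (given the other arms' values),
        # its estimate is normal with mean mu, truncated below at the
        # runner-up's value; evaluate that truncated CDF at the data.
        a = (cutoff - mu) / se
        return truncnorm.cdf(x[j], a, np.inf, loc=mu, scale=se)

    def solve(target):
        # cond_cdf decreases in mu, so bracket a root and bisect.
        lo, hi = x[j] - 8 * se, x[j] + 8 * se
        return brentq(lambda mu: cond_cdf(mu) - target, lo, hi)

    alpha = 1 - level
    return j, solve(0.5), (solve(1 - alpha / 2), solve(alpha / 2))

# Toy usage: arm 2 wins, but only narrowly, so the correction is large.
j, mu_med, (lo, hi) = conditional_inference_on_winner(
    [0.2, 1.1, 1.4, 0.3], se=0.5)
print(f"winner: arm {j}; naive estimate: 1.40; "
      f"median-unbiased: {mu_med:.2f}; 95% CI: [{lo:.2f}, {hi:.2f}]")
```

In this toy example the naive estimate of 1.40 shrinks to roughly 1.0 once the selection is accounted for, and the conditional interval is wide when the win is narrow; taming such wide intervals is part of what the hybrid approach is designed to do.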
With these new methods, Andrews, Kitagawa, and McCloskey can draw sharper and more reliable conclusions from experimental data, including confidence intervals, median-unbiased estimates, and more. To test the viability of their approach, the researchers applied it to several cases from social science research, starting with the JOBSTART experiment.
Interestingly, among the various ways experimental results can become outliers, the researchers found that JOBSTART's San Jose result was probably not a product of simple chance: the results differ enough that real differences in how the program was administered, or in its setting, likely set it apart from the other sites.
The Seattle test
To further test the hybrid inference method, Andrews, Kitagawa, and McCloskey then applied it to another research topic: programs that provide housing vouchers to help people move to neighborhoods offering greater economic mobility.
Nationwide economic studies have shown that some areas generate greater economic mobility than others, all else being equal. Encouraged by these findings, other researchers collaborated with officials in King County, Washington, to develop a program helping voucher recipients move to higher-opportunity areas. However, predictions about how such programs will perform may be subject to a winner's curse, since each neighborhood's level of opportunity is estimated imperfectly.
Andrews, Kitagawa, and McCloskey therefore applied the hybrid inference method to these data at the neighborhood level, across 50 "commuting zones" (primarily metropolitan areas) in the United States. The hybrid method once again helped them gauge how certain earlier estimates really were.
Simple estimates in this context suggest that for children growing up in households at the 25th percentile of annual income in the United States, relocation programs would create a gain of 12.25 percentage points in adult income. The hybrid inference method suggests that there would instead be a gain of 10.27 percentage points, a smaller but still substantial impact.
Indeed, as the authors write in the paper, "even this smaller estimate is economically important," and they conclude that targeting areas based on estimated opportunity succeeds, on average, in selecting areas with higher opportunity. At the same time, the researchers found that applying their method made a meaningful difference to the estimates.
Overall, Andrews says, "the ways we measure uncertainty can actually become unreliable themselves." This problem is compounded, he notes, "when the data tells us very little, but we are wrongly overconfident and think the data tells us a lot. … Ideally, you would like something that is both reliable and tells us as much as possible."
More information:
Isaiah Andrews et al, Inference on Winners, The Quarterly Journal of Economics (2023). DOI: 10.1093/qje/qjad043
Provided by the Massachusetts Institute of Technology
This story is republished courtesy of MIT News (web.mit.edu/newsoffice/), a popular site that covers news about MIT research, innovation and education.