I assemble a novel dataset to examine the long-term consequences of blacklisting, a Soviet policy used to deter market-oriented behavior through collective punishment of Ukrainian villages in 1932-33. Under blacklisting, all village residents could be banned from trade and the provision of crucial goods, prohibited from moving, and subjected to harsh in-kind fines. Formally, the policy was meant to punish communities underperforming in state food procurement (similar to in-kind taxation), on the premise that local procurement shortfalls were a consequence of intentional, profit-seeking behavior. Using a weather-based instrument for a locality’s blacklisting status, I document that blacklisting significantly reduced present-day nightlight intensity (a proxy for economic development). Additional evidence points to entrepreneurship and trust as channels for this effect. My results support the notion that policies that suppress economic freedoms and disrupt social structure can have persistent negative effects on economic performance.
Newly developed large language models (LLMs)—because of how they are trained and designed—are implicit computational models of humans: a homo silicus. These models can be used the same way economists use homo economicus: they can be given endowments, information, preferences, and so on, and their behavior can then be explored in scenarios via simulation. I demonstrate this approach using OpenAI’s GPT-3 with experiments derived from Charness and Rabin (2002), Kahneman, Knetsch and Thaler (1986), and Samuelson and Zeckhauser (1988). The findings are qualitatively similar to the original results, but it is also trivially easy to try variations that offer fresh insights. Departing from the traditional laboratory paradigm, I also create a hiring scenario in which an employer faces applicants who differ in experience and wage ask, and then analyze how a minimum wage affects realized wages and the extent of labor-labor substitution.
Lots has been written here, as you can see from my systematic literature review (attached) and update here link, but many questions remain unanswered. Descriptively, what is the average acceptance rate of CON applicants by state? What predicts successful vs. unsuccessful CON applications? There’s a lot of variety in what types of facilities and equipment require CON in different states; AHPA lists 28 types of CON restrictions. Many of these types have been the focus of zero papers. In terms of the effects of CON, some big outcomes have not been addressed since 1998: hospital beds per capita, HHI, profits. My paper (Bailey, Hamami, McCorry, 2017) on how CON affects prices is more recent, but the price data we used was far from ideal; you could probably do much better now. I found (link) that CON states have higher overall Medicare spending, which is puzzling given that Medicare prices are mostly set nationally; you could use claims data to figure out what drives this (quantity effects? differential upcoding? Part C?). Outcomes CON may affect that I believe have zero papers: insurance premiums, hospital utilization rates, self-reported health, most types of morbidity, nursing home abuse, and hospital openings and closures by local-area income. On the identification side, this is one of many literatures full of old papers that could be redone in light of the new literature on staggered adoption and two-way fixed effects.
Implicit marginal tax rates sometimes exceed 100% when you consider lost subsidies as well as higher taxes. This could be trapping many people in poverty, but we don’t have a good idea of how many, because so many of the relevant subsidies operate at the state and local level. Descriptive work such as cataloguing where all these “benefits cliffs” are and how many people they affect would be hugely valuable. You could also study how people react to benefits cliffs using the data we do have (https://benefitscliffs.org).
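To make the over-100% arithmetic concrete, here is a stylized calculation. Every parameter — the 15% flat tax rate, the $6,000 subsidy, the $20,000 eligibility cliff — is a hypothetical illustration, not an actual program rule:

```python
def net_income(earnings, tax_rate=0.15, subsidy=6000, cliff=20000):
    """Stylized budget constraint: a flat tax plus a subsidy that
    disappears entirely once earnings cross the eligibility cliff.
    All parameters are hypothetical illustrations."""
    benefit = subsidy if earnings <= cliff else 0
    return earnings * (1 - tax_rate) + benefit

def implicit_mtr(e0, e1, **kwargs):
    """Implicit marginal tax rate over an earnings increase: the share
    of the extra earnings lost to taxes and forgone benefits."""
    gain = net_income(e1, **kwargs) - net_income(e0, **kwargs)
    return 1 - gain / (e1 - e0)

# A $1,000 raise that crosses the cliff: net income falls from
# $23,000 to $17,850, so the implicit marginal tax rate is 615%.
print(round(implicit_mtr(20_000, 21_000), 2))  # → 6.15
```

The point of the sketch is that a cliff (a benefit lost discretely rather than phased out) mechanically pushes the implicit rate far above 100% for any earnings change that crosses it.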
“Evidence from the Introduction of Medicare” (Finkelstein, 2007) is a great paper built on weak data. There’s an AER waiting for anyone who could do the archival work to dig up the data to re-do it properly. Most importantly, can you get 1960s health spending data by payer at the state level or lower? Can you also get data on pre-Medicare public insurance programs like Kerr-Mills or Medical Assistance for the Aged?
Atul Gawande argues that one major driver of variation in health care spending across the US is variation in physician greed; some towns have a “culture of money” or “entrepreneurial spirit” among physicians. How could we get a good quantitative proxy for physician greed to test this? Ideas: the number of (non-medical?) businesses the average physician has a stake in, the proportion of physicians with business degrees, physician spending on cars or luxury goods, the proportion of physicians taking pharma money (Sunshine Act data).
States have passed over a hundred different types of mandated benefits, but the vast majority have zero papers focused on them. Many likely effects of the laws have also never been studied for any mandate or combination of mandates. Do they actually reduce uncompensated hospital care, as Summers (1989) predicts? Do mandates cause higher deductibles and copays, less coverage of non-mandated care, or narrower networks? How do mandates affect the income and employment of relevant providers? Can mandates be used as an instrument to determine the effectiveness of a treatment? On the identification side, redoing older papers using a dataset like MEPS-IC where self-insured firms can be used as a control would be a major advance.
Screening based on personalities gives job applicants incentives to misrepresent themselves. If groups misrepresent themselves in different fashions, then biases in the hiring process may arise. Using a within-subject, laboratory experiment comparing personality measures with and without incentives for misrepresentation, we find evidence of racial differences in faking behavior, but no evidence of gender differences. Faking attenuates gender differences evident in unincentivized personality measures but leads to racial differences where no differences exist in unincentivized measures. Our findings indicate that selection based on incentivized personality measures has the potential to adversely impact racial minorities in hiring.
Loneliness is increasingly recognized as an important public health issue, leading to the appointment of ministers of loneliness in Japan and the UK. It affects both young adults (Ellard, Dennison, and Tuomainen, 2022) and older people. For example, the Stress in America 2020 survey finds that 73% of US adults aged 18-23 reported feeling lonely within the last two weeks. Do people value meeting new people, and if so, why don’t they meet more of them to deal with loneliness? Hypothesis: many people place significant value on new connections, but establishing new connections is hard, because cold matching proposals are disproportionately likely to come from lemons (criminals, social outcasts). This means that most connection attempts with strangers are rejected. The proposed experiment would elicit a monetary valuation of a 10-minute meeting with a stranger using the BDM approach. There are two subjects in the experiment, who assume themselves to be strangers. Both receive an endowment of 10 USD. Each subject independently states a secret WTP. One subject of the pair is randomly chosen as the proposer. If the proposer’s WTP is above a randomly drawn price, the match is made (and the proposer pays the drawn price). Matched subjects are invited to spend 10 minutes in a room or behind a meeting table within a larger room. Treatments: opposite sexes/same sex, online/offline meetings(?). The proposed experimental design eliminates adverse selection, because the probability of a match for a proposer depends only on their own WTP: if you are the proposer, your WTP matters and your match’s WTP does not.
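The proposer’s side of the mechanism is easy to simulate. A minimal sketch of one BDM round, where the 10 USD endowment follows the design above but the uniform price draw on [0, 10], the function name, and the interface are my own assumptions:

```python
import random

def bdm_meeting_offer(wtp, endowment=10.0, max_price=10.0, rng=random):
    """One BDM round for the proposer: state a willingness to pay
    (wtp) for the 10-minute meeting, then draw a price uniformly at
    random. If wtp >= price, the match is made and the proposer pays
    the drawn price; otherwise there is no match and no payment.
    Returns (matched, payoff)."""
    price = rng.uniform(0.0, max_price)
    if wtp >= price:
        return True, endowment - price
    return False, endowment

# The match probability is wtp / max_price, regardless of the
# partner's WTP -- the adverse-selection-free property the
# design relies on.
```

Because the price is drawn independently of the stated WTP, truthfully reporting one’s valuation is a dominant strategy for the proposer, which is what makes BDM an elicitation device rather than a bargaining game.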