This paper isolates the role of conflict or disagreement in inflation in two ways. In the first part of the paper, we present a stylized model, kept purposefully distant from traditional macro models. Inflation arises despite the complete absence of money, credit, interest rates, production, and employment. Inflation is due to conflict: it cannot be explained by monetary policy or by departures from a natural rate of output or employment. In contrast, the second part of the paper develops a flexible framework that nests many traditional macroeconomic models. We include both goods and labor to study the interaction of price and wage inflation. Our main results provide a decomposition of inflation into “adjustment” and “conflict” inflation, highlighting the essential nature of the latter. Conflict should be viewed as the proximate cause of inflation, fed by other root causes. Our framework sits on top of a wide set of particular models that can endogenize conflict.
There is a large productivity gap between rural and urban areas in most developing countries (Gollin et al., 2014). It is not entirely clear why this gap exists, and multiple explanations have been offered. Policymakers can choose either to support rural areas or to promote migration to cities. Rural workfare programs such as NREGA in India take the first approach, but their long-term equilibrium effects on participants and their communities are not clear. For example, we know that workfare programs generate large welfare gains for workers but distort local labor markets (Imbert and Papp, 2015). It is plausible that they also cause suboptimal skill accumulation or discourage workers' migration to urban areas with more productive jobs, which existing studies do not explore. A potential study would examine workers' outcomes within 10 years of exposure to a workfare program: migration, occupation, income. Variation in workfare exposure could come either from an additional field experiment (potentially built on NREGA) with random selection of treated and control administrative units, or from linking data on individuals affected by the NREGA program in the past (e.g., through linked census records). It is important that any such study track both workers who stay in the affected locality and those who migrate to other areas. This idea is inspired by a conference presentation by Ahmed Mobarak: (link)
It is hard to measure the influence of media coverage on economic outcomes. The rollout of 5G in Australia (link), and the anxiety surrounding it, provides a unique setting in which to do so. There are many videos and articles online claiming that 5G networks damage human health. Media coverage can be measured by the number of Twitter posts on the topic or by the number of followers of a Facebook group discussing the harm of 5G towers. The distance from a property to a visible 5G tower provides a second source of variation. Together, the coverage and distance variation allow a difference-in-differences design to measure the drop in real estate prices. (Alternatively, one could use the release of the Chernobyl television series as an exogenous shock to measure the change in prices of properties located near nuclear power stations, say, in the USA.)
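A minimal sketch of the proposed difference-in-differences design, on simulated data: compare log price changes for properties near a visible 5G tower vs. far from one, before vs. after a spike in negative media coverage. All variable names, magnitudes, and the data itself are illustrative assumptions, not estimates from real property records.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000
near = rng.integers(0, 2, n)          # 1 = property within sight of a 5G tower (hypothetical)
post = rng.integers(0, 2, n)          # 1 = after the media-coverage spike (hypothetical)
log_price = (
    12.00
    + 0.10 * near                     # permanent level difference near towers
    + 0.03 * post                     # common market-wide time trend
    - 0.05 * near * post              # assumed "5G anxiety" effect: a 5% price drop
    + rng.normal(0, 0.10, n)          # idiosyncratic noise
)

def cell_mean(t, p):
    """Average log price in one treatment-by-period cell."""
    return log_price[(near == t) & (post == p)].mean()

# DiD estimate: price change for near-tower properties minus the change for the rest
did = (cell_mean(1, 1) - cell_mean(1, 0)) - (cell_mean(0, 1) - cell_mean(0, 0))
print(round(did, 3))
```

The double difference nets out both the level difference near towers and the common time trend, recovering (an estimate of) the assumed 5% effect; a real application would add controls and cluster standard errors by locality.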
The paper describes a potential platform to facilitate academic peer review, with an emphasis on early-stage research. This platform aims to make peer review more accurate and timely by rewarding reviewers on the basis of peer prediction algorithms. The algorithm uses a variation of Peer Truth Serum for Crowdsourcing (Radanovic et al., 2016), with human raters competing against a machine learning benchmark. We explain how our approach addresses two large productive inefficiencies in science: the mismatch between research ideas and teams, and publication bias. Better peer review for early-stage research creates additional incentives to share it, which simplifies matching ideas to teams and makes negative results and p-hacking more visible.
Why do relatively few women work part-time in the U.S. (and Canada) if it seems like a natural way to combine child and home care (which women in the U.S. still do disproportionately) with employment? This question is inspired by an insightful Twitter thread by Alice Evans (King's College London, Yale). The U.S. labor force participation rate for women is comparable to most European countries, but relatively few employed women work part-time (only 20%, vs. 60% for the Netherlands or 40% for Germany). Potential explanations/hypotheses: (1) Women work full-time because they need to preserve health insurance, which would otherwise be very costly in the US. Counterargument: women also work full-time in Canada, which does not rely on private health insurance. (2) Lower taxes in the US incentivize more work (Prescott, 2004). Counterargument: taxes explain less of the variation in hours if one uses more standard values for the elasticity of labor supply (Alesina, Glaeser and Sacerdote, 2005). (3) American at-will separation policies and a thinner safety net create stronger incentives to invest in goodwill with one's employer; working full-time signals that you are a loyal, productive worker and increases on-the-job learning. (4) Higher returns to education shift the labor-leisure tradeoff toward working more. (5) Higher returns to work experience create incentives to work more. Two potential approaches to test these: 1) a structural model that incorporates multiple mechanisms/explanations and is matched to the data, or 2) a good reduced-form identification of just one channel based on experimental or quasi-experimental variation. It would be wise to start by exploring the recent literature on these channels and by studying the available secondary data to narrow down the list of hypotheses (e.g., returns to education and experience in the US, Canada, and European countries; the obvious large variation in the full-time work percentage within Europe).
Large language models (LLMs) such as ChatGPT have the potential to revolutionize research in economics and other disciplines. I describe 25 use cases along six domains in which LLMs are starting to become useful as both research assistants and tutors: ideation, writing, background research, data analysis, coding, and mathematical derivations. I provide general instructions and demonstrate specific examples for how to take advantage of each of these, classifying the LLM capabilities from experimental to highly useful. I hypothesize that ongoing advances will improve the performance of LLMs across all of these domains, and that economic researchers who take advantage of LLMs to automate micro tasks will become significantly more productive. Finally, I speculate on the longer-term implications of cognitive automation via LLMs for economic research.
I assemble a novel dataset to examine the long-term consequences of blacklisting, a Soviet policy used to deter market-oriented behavior through collective punishment of Ukrainian villages in 1932-33. Under blacklisting, all village residents could be banned from trade and from the provision of crucial goods, prohibited from moving, and subjected to harsh in-kind fines. Formally, the policy was meant to punish communities underperforming in terms of state food procurement (similar to in-kind taxation), because local procurement shortfalls supposedly were a consequence of intentional, profit-seeking behavior. Using a weather-based instrument for the locality’s blacklisting status, I document that blacklisting significantly reduced present-day nightlight intensity (a proxy measure for economic development). Additional evidence points to entrepreneurship and trust as channels for this effect. My results support the notion that policies that suppress economic freedoms and disrupt social structure can have persistent negative effects on economic performance.
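The logic of the weather-based instrument can be sketched on simulated data: weather shocks shift the probability of blacklisting but are plausibly unrelated to unobserved local factors, so two-stage least squares recovers the causal effect that a naive OLS regression misses. The variable names, effect sizes, and data below are illustrative assumptions, not the paper's actual estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
rain_shock = rng.normal(0, 1, n)       # instrument: weather shock (hypothetical units)
confound = rng.normal(0, 1, n)         # unobserved local factor
# Blacklisting is more likely after bad weather (procurement shortfalls),
# but also responds to the unobserved confounder
blacklisted = ((-0.8 * rain_shock + confound + rng.normal(0, 1, n)) > 0).astype(float)
# Assumed true causal effect of blacklisting on log nightlights: -0.3;
# the confounder also raises nightlights, biasing naive OLS upward
nightlights = -0.3 * blacklisted + 0.5 * confound + rng.normal(0, 1, n)

def ols_slope(y, x):
    """Slope from a bivariate OLS regression with an intercept."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

naive = ols_slope(nightlights, blacklisted)   # contaminated by the confounder
# 2SLS by hand: first stage projects blacklisting on the instrument,
# then the outcome is regressed on the fitted treatment
X1 = np.column_stack([np.ones(n), rain_shock])
fitted = X1 @ np.linalg.lstsq(X1, blacklisted, rcond=None)[0]
iv = ols_slope(nightlights, fitted)
print(round(naive, 2), round(iv, 2))
```

The IV estimate lands near the assumed -0.3 while the naive slope is pulled toward the confounder's positive influence; the real analysis would of course add controls and account for spatial correlation.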
Newly-developed large language models (LLM)—because of how they are trained and designed—are implicit computational models of humans—a homo silicus. These models can be used the same way economists use homo economicus: they can be given endowments, information, preferences, and so on and then their behavior can be explored in scenarios via simulation. I demonstrate this approach using OpenAI’s GPT3 with experiments derived from Charness and Rabin (2002), Kahneman, Knetsch and Thaler (1986) and Samuelson and Zeckhauser (1988). The findings are qualitatively similar to the original results, but it is also trivially easy to try variations that offer fresh insights. Departing from the traditional laboratory paradigm, I also create a hiring scenario where an employer faces applicants that differ in experience and wage ask and then analyze how a minimum wage affects realized wages and the extent of labor-labor substitution.
Lots has been written here, as you can see from my systematic literature review (attached) and an update here: link, but many questions remain unanswered. Descriptively, what is the average acceptance rate of CON applicants by state? What predicts successful vs. unsuccessful CON applications? There is a lot of variety in which types of facilities and equipment require a CON in different states; AHPA lists 28 types of CON restrictions. Many of these types have been the focus of zero papers. In terms of the effects of CON, some big outcomes not addressed since 1998: hospital beds per capita, HHI, profits. My paper (Bailey, Hamami, McCorry, 2017) on how CON affects prices is more recent, but the price data we used was far from ideal; you could probably do much better now. I found (link) that CON states have higher overall Medicare spending, which is puzzling given that Medicare prices are mostly set nationally; you could use claims data to figure out what drives this (quantity effects? differential upcoding? Part C?). Outcomes CON may affect that I believe have zero papers: insurance premiums, hospital utilization rates, self-reported health, most types of morbidity, nursing home abuse, hospital openings and closures by local-area income. On the identification side, this is one of many literatures full of old papers that could be redone in light of the new literature on staggered adoption and two-way fixed effects.