Gustavo Schwenkler

Associate Professor of Finance

"Time-varying Media Coverage and Stock Returns" (with H. Zheng).

We show that financial news editors have time-varying reporting preferences that point investors to risky assets. Using New York Times data and natural language processing techniques, we estimate the loadings of news coverage on common firm features and extract dynamic editorial preferences for different types of firms. We find that firms with high editor preference earn higher monthly returns than those with low editor preference. An annualized excess return of 12% due to editor preferences cannot be explained by standard risk factors. Our findings empirically support recent theories positing that, when investors face attention constraints and delegate their information collection to news publishers, editorial reporting choices signal to investors which assets carry high risk premia.

"News-Driven Peer Co-Movement in Crypto Markets" (with H. Zheng). Online Appendix. Data.

Best Presentation Award at 3rd UWA Blockchain, Cryptocurrency and FinTech Conference.

When large idiosyncratic shocks hit a cryptocurrency, some of its peers experience unusually large returns of the opposite sign. The co-movement is concentrated among peers that are co-mentioned with shocked cryptos in the news and that are listed on the same exchanges as shocked cryptos. It is a form of mispricing that vanishes after several weeks, giving rise to predictable returns. We propose a profitable trading strategy that exploits this predictability, and explain our results with a slow information-processing mechanism. To establish our results, we develop a novel natural language processing technology that identifies crypto peers from news data. Our results highlight the news as a key driver of co-movement among peer assets.

"The Network of Firms Implied by the News" (with H. Zheng). Online Appendix. Codes. Data.

We show that the news is a rich source of data on distressed firm links that drive firm-level risks and aggregate fluctuations. We find that the news tends to report about links in which a less popular firm is distressed and may contaminate a more popular firm. This flow of information constitutes a contagion effect that results in predictable stock returns and credit downgrades. On an aggregate level, we show that shocks to the degree of connectivity in the news-implied firm network predict increases in aggregate volatilities, credit spreads, and default rates, as well as persistent declines in output. To obtain our results, we develop a machine learning methodology that takes text data as input and outputs a network of firm links implied by the data. The results of this paper enable the accurate estimation of risks that drive business cycle fluctuations.
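The paper's methodology is a full machine-learning pipeline; the basic idea of inferring firm links from joint mentions in news text can nevertheless be sketched with a toy co-occurrence counter. The firm names, sentences, and threshold below are illustrative inventions, not the paper's data or estimator.

```python
# Toy sketch of a news-implied firm network: count co-mentions of firms
# within sentences, then keep links whose count clears a threshold.
# Firms, sentences, and the threshold are hypothetical.
from itertools import combinations
from collections import Counter

firms = {"Acme", "Globex", "Initech"}          # hypothetical firm names
sentences = [
    "Acme warned that its troubles may spill over to Globex.",
    "Globex and Acme share a key supplier.",
    "Initech reported record earnings.",
]

links = Counter()
for s in sentences:
    mentioned = sorted(f for f in firms if f in s)
    for pair in combinations(mentioned, 2):    # undirected co-mention edge
        links[pair] += 1

network = [pair for pair, n in links.items() if n >= 2]
print(network)  # [('Acme', 'Globex')]
```

In this toy example only the Acme–Globex pair is co-mentioned often enough to survive the threshold; the paper's method additionally classifies the direction and distress content of each reported link.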

"Preventing COVID-19 Fatalities: State versus Federal Policies" (with J.-P. Renne and Guillaume Roussellet).

Are COVID-19 fatalities large when a federal government does not enforce containment policies and instead allows states to implement their own policies? We answer this question by developing a stochastic extension of a SIRD epidemiological model for a country composed of multiple states. Our model allows for interstate mobility. We consider three policies: mask mandates, stay-at-home orders, and interstate travel bans. We fit our model to daily U.S. state-level COVID-19 death counts and exploit our estimates to produce various policy counterfactuals. While the restrictions imposed by some states prevented a significant number of virus deaths, we find that more than two-thirds of U.S. COVID-19 deaths could have been prevented by late November 2020 had the federal government enforced federal mandates as early as some of the earliest states did. Our results quantify the benefits of early actions by a federal government for the containment of a pandemic.
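The core mechanics of a multi-state SIRD model with interstate mobility can be sketched in a few lines. The sketch below is a deterministic Euler discretization with two states and invented parameter values; the paper's model is stochastic and estimated from daily U.S. state-level death counts.

```python
# Minimal deterministic SIRD sketch for two states linked by a mobility
# matrix. All parameter values are illustrative, not estimates.
import numpy as np

beta  = np.array([0.30, 0.20])   # state-specific infection rates (per day)
gamma = 0.10                     # recovery rate
mu    = 0.002                    # death rate
M = np.array([[0.95, 0.05],      # row-stochastic mobility matrix:
              [0.05, 0.95]])     # M[i, j] = share of state i residents in j

S = np.array([0.99, 1.00])       # susceptible shares
I = np.array([0.01, 0.00])       # infected shares (outbreak starts in state 1)
R = np.zeros(2)                  # recovered shares
D = np.zeros(2)                  # dead shares

dt = 1.0
for _ in range(200):             # simulate 200 days
    new_inf = beta * S * (M.T @ I)        # infection pressure via mobility
    dS = -new_inf
    dI = new_inf - (gamma + mu) * I
    dR = gamma * I
    dD = mu * I
    S += dt * dS; I += dt * dI; R += dt * dR; D += dt * dD

print(D)                         # cumulative death shares by state
```

Even though the outbreak starts in state 1 only, mobility transmits infection pressure to state 2, which is the channel through which one state's policy choices affect deaths elsewhere.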

"Estimating the Dynamics of Consumption Growth"

We estimate models of consumption growth that allow for long-run risks and disasters using data for a series of countries over a time span of 200 years. Our estimates indicate that a model with small and frequent disasters that arrive at a mean-reverting rate best fits international consumption data. The implied posterior disaster intensity in such a model predicts equity returns without compromising the unpredictability of consumption growth. It also generates time-varying excess stock volatility, empirically validating key economic mechanisms often assumed in consumption-based asset pricing models.

"The Systemic Effects of Benchmarking" (with D. Duarte and K. Lee)

We show that an institutional investor whose performance is evaluated relative to a narrow benchmark trades in ways that expose a retail investor to higher risks and welfare losses. In our model, the institutional investor differs from the retail investor because she derives higher utility when her benchmark outperforms. This forces the institutional investor to overreact (underreact) to cash flow news in bad (good) states of the world, increasing individual and aggregate volatilities. While asset prices and wealth are higher in the presence of benchmarking, the retail investor is worse off due to the exposure to higher risks. We empirically validate the mechanisms in our model using data on U.S. equity mutual funds with sector-specific mandates.

"Efficient Estimation and Filtering for Multivariate Jump-Diffusions" (with F. Guay). Codes. Journal of Econometrics 223 (2021), 251-275.

This paper develops estimators of the transition density, filters, and parameters of multivariate jump-diffusions with latent components. The drift, volatility, jump intensity, and jump magnitude are allowed to be general functions of the state. Our density and filter estimators converge at the canonical square-root rate, implying computational efficiency. Our parameter estimators have the same asymptotic properties as true maximum likelihood estimators, implying statistical efficiency. Numerical experiments highlight the superior performance of our estimators.

"Inference for Large Financial Systems" (with K. Giesecke and J. Sirignano), Mathematical Finance 30 (2020), 3-46.

We consider the problem of parameter estimation for large interacting stochastic systems where data is available on the aggregate state of the system. Parameter inference is computationally challenging due to the scale and complexity of such systems. Weak convergence results, similar in spirit to a law of large numbers and a central limit theorem, can be used to approximate large systems in distribution. We exploit these weak convergence results in order to develop approximate maximum likelihood estimators for such systems. The approximate estimators are shown to converge to the true parameters and are asymptotically normal as the number of observations and the size of the system become large. Numerical studies demonstrate the computational efficiency and accuracy of the approximate MLEs. Although our approach is widely applicable to large systems in many fields, we are particularly motivated by examples arising in finance such as systemic risk in banking systems and large portfolios of loans.
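The role of the law-of-large-numbers approximation can be illustrated with the simplest possible case: a large pool of exchangeable loans whose default fraction concentrates around a deterministic limit, so a parameter can be estimated from the aggregate state alone. Everything below (the exponential default-time model, the numbers) is an invented toy, not the paper's estimator.

```python
# Hedged sketch: for a pool of N exchangeable loans with default intensity
# theta, the default fraction by time t concentrates around
# p(t) = 1 - exp(-theta * t) as N grows. An approximate MLE recovers theta
# from the aggregate default fraction, with no loan-level data needed.
import numpy as np

rng = np.random.default_rng(0)
theta_true, N, t = 0.05, 100_000, 5.0

defaults = rng.random(N) < 1.0 - np.exp(-theta_true * t)  # simulate pool
frac = defaults.mean()                                    # aggregate state

# The binomial likelihood based on the LLN limit is maximized in closed form:
theta_hat = -np.log(1.0 - frac) / t
print(theta_hat)
```

With N this large the aggregate fraction is nearly deterministic, so the approximate estimator lands close to the true intensity; the paper establishes consistency and asymptotic normality of such estimators for far richer interacting systems.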

"Simulated Likelihood Estimators for Discretely-Observed Jump-Diffusions" (with K. Giesecke), Journal of Econometrics 213 (2019), 297–320. Codes

This paper develops an unbiased Monte Carlo approximation to the transition density of a jump-diffusion process with state-dependent drift, volatility, jump intensity, and jump magnitude. The approximation is used to construct a likelihood estimator of the parameters of a jump-diffusion observed at fixed time intervals that need not be short. The estimator is asymptotically unbiased for any sample size. It has the same large-sample asymptotic properties as the true but uncomputable likelihood estimator. Numerical results illustrate its properties.
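For intuition only, here is a naive simulated-likelihood sketch for a discretely observed diffusion (an Ornstein-Uhlenbeck process): simulate Euler paths over the observation interval and smooth the simulated endpoints with a Gaussian kernel to approximate the transition density. This crude estimator is biased for any fixed bandwidth; the paper's contribution is an unbiased Monte Carlo approximation, which this sketch does not reproduce.

```python
# Naive simulated transition density for dX = -kappa*X dt + sigma dW,
# via Euler sub-steps plus a Gaussian kernel over simulated endpoints.
# Biased for fixed bandwidth -- for illustration only.
import numpy as np

rng = np.random.default_rng(1)

def trans_density(x0, x1, dt, kappa, sigma, n_paths=5000, n_sub=20):
    """Approximate p(x1 | x0) over horizon dt."""
    h = dt / n_sub
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(n_sub):                   # Euler scheme
        x += -kappa * x * h + sigma * np.sqrt(h) * rng.standard_normal(n_paths)
    bw = 1.06 * x.std() * n_paths ** (-0.2)  # Silverman's rule bandwidth
    kernel = np.exp(-0.5 * ((x1 - x) / bw) ** 2) / (bw * np.sqrt(2 * np.pi))
    return kernel.mean()

p = trans_density(x0=1.0, x1=0.8, dt=1.0, kappa=0.5, sigma=0.3)
print(p)
```

Summing the log of such density estimates over consecutive observations yields a simulated likelihood; the paper's unbiased construction removes the kernel bias so that the resulting estimator inherits the large-sample properties of true maximum likelihood.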

"Exploring the Sources of Default Clustering" (with K. Giesecke and S. Azizpour), Journal of Financial Economics 129 (2018), 154-183.

We study the sources of corporate default clustering in the United States. We reject the hypothesis that firms’ default times are correlated only because their conditional default rates depend on observable and latent systematic factors. By contrast, we find strong evidence that contagion, through which the default by one firm has a direct impact on the health of other firms, is a significant clustering source. The amount of clustering that cannot be explained by contagion and firms’ exposure to observable and latent systematic factors is insignificant. Our results have important implications for the pricing and management of correlated default risk.

"Filtered Likelihood for Point Processes" (with K. Giesecke), Journal of Econometrics 204 (2018), 33-53. Codes

Point processes are used to model the timing of defaults, market transactions, births, unemployment and many other events. We develop and study likelihood estimators of the parameters of a marked point process and of incompletely observed explanatory factors that influence the arrival intensity and mark distribution. We establish an approximation to the likelihood and analyze the convergence and large-sample properties of the associated estimators. Numerical results highlight the computational efficiency of our estimators, and show that they can outperform EM Algorithm estimators.
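The likelihood object at the heart of this literature is simple to state: for event times t_1 < ... < t_n on [0, T] with intensity lam(t), the Poisson log-likelihood is sum_i log lam(t_i) minus the integral of lam over [0, T]. The sketch below fits a constant intensity, where the maximizer is n / T in closed form; the paper treats far richer marked point processes with incompletely observed explanatory factors.

```python
# Toy point-process MLE: constant intensity on [0, T], where the
# log-likelihood n*log(lam) - lam*T is maximized at lam = n / T.
# Parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(2)
lam_true, T = 2.0, 500.0
n = rng.poisson(lam_true * T)                 # number of events on [0, T]
times = np.sort(rng.uniform(0.0, T, size=n))  # event times given the count

def loglik(lam):
    return len(times) * np.log(lam) - lam * T

lam_hat = len(times) / T                      # closed-form maximizer
print(lam_hat)
```

When the intensity depends on latent factors, this closed form disappears and the likelihood must be filtered over the unobserved factor paths, which is the approximation problem the paper solves.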