Gustavo Schwenkler

Associate Professor of Finance

"A Study Of The Reliability of Crypto Data Provision" (with A. Shah & D. Yang). 

Online Appendix.

Weekly crypto market, size, and momentum factor data

We analyze the supply of cryptocurrency data from 20 leading providers and uncover significant quality issues. We document the impact of these issues on empirical analysis, emphasizing the risks of relying on a single source. We propose a methodology that aggregates data across providers to ensure consistency and reliability, and demonstrate its benefits in an empirical asset pricing study. We recommend the adoption of a unified identification system for cryptocurrencies, akin to those in traditional financial markets, to enhance data reliability. We also flag a potential need for regulatory oversight to ensure consumer protection in the market for crypto data.
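
As a rough illustration of the kind of cross-provider aggregation the paper proposes, the sketch below keeps a daily price only when enough providers agree with the cross-provider median. The function name, tolerance rule, and data layout are illustrative assumptions, not the paper's methodology.

```python
import pandas as pd

def aggregate_prices(provider_frames, min_sources=3, tolerance=0.01):
    """Aggregate one coin's daily prices across data providers.

    provider_frames: dict mapping provider name -> DataFrame indexed
    by date with a 'price' column. A date survives only if at least
    `min_sources` providers quote within `tolerance` (relative) of
    the cross-provider median for that date.
    """
    panel = pd.DataFrame({name: df["price"] for name, df in provider_frames.items()})
    median = panel.median(axis=1)
    # Relative deviation of each provider's quote from the daily median.
    deviation = panel.sub(median, axis=0).abs().div(median, axis=0)
    agree = deviation <= tolerance
    # Consensus price: median over the providers that agree.
    consensus = panel.where(agree).median(axis=1)
    return consensus[agree.sum(axis=1) >= min_sources]
```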

"Why Does News Coverage Predict Returns? Evidence From The Underlying Editor Preferences For Risky Stocks" (with H. Zheng). 

Data & Codes.

We challenge the consensus that financial news coverage predicts returns solely through behavioral channels by introducing a novel rational mechanism in which news coverage signals risky stocks with high expected returns. Using New York Times business news coverage from 1995 to 2015, we show that a significant portion of the predictive power of news coverage can be attributed to its role as a proxy for priced risk. Our findings suggest that editor preferences provide a timely signal that helps investors identify high-risk, high-return investments. They extend our understanding of the informational content of news coverage and its impact on return dynamics.

"A Model of Venture Capital Investing With Subscription Lines" (with J. Bocks). 

Model implementations.

We introduce a model of venture capital investing in the presence of a subscription line, which we highlight as a critical component of the entrepreneurial growth engine of an economy. If an investment opportunity arises early, the fund can choose to draw on the subscription line rather than call capital from its investors. Although the subscription line is costly, we find in several calibrations that the fund often prefers drawing the line to calling capital because a capital call is risky and distorts the fund's performance metrics. In equilibrium, the bank internalizes that the fund has implicit market power because it can choose not to make an investment. This yields an inverted risk-reward relationship in the subscription line market. We find that the availability of subscription lines boosts the payoff for venture capital investors. It also magnifies the economic impact of venture capitalism by increasing the probability that startups get funded.
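
The attraction of delaying capital calls can be seen in stylized IRR mechanics. This is an illustration consistent with the mechanism described above, not the paper's model; all symbols are assumptions.

```latex
% A fund invests I at time 0 and exits at T with value V. If capital is
% called upfront, the reported continuously compounded IRR solves
% I e^{IRR * T} = V. If the investment is bridged by a subscription line
% at rate r_L and capital is called only at time s, investors contribute
% I e^{r_L s} at s, and the IRR solves I e^{r_L s} e^{IRR (T - s)} = V.
\[
\mathrm{IRR}_{\text{upfront}} = \frac{1}{T}\ln\frac{V}{I},
\qquad
\mathrm{IRR}_{\text{line}} = \frac{1}{T-s}\ln\frac{V}{I\,e^{r_L s}} .
\]
% For moderate r_L, shortening the period over which investor capital is
% outstanding raises the reported IRR even though investors net less cash.
```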

"News-Driven Peer Co-Movement in Crypto Markets" (with H. Zheng). 

Online Appendix.

Data.

Best Presentation Award at the 3rd UWA Blockchain, Cryptocurrency and FinTech Conference.

When large idiosyncratic shocks hit a cryptocurrency, some of its peers experience unusually large returns of the opposite sign. The co-movement is concentrated among peers that are co-mentioned with shocked cryptos in the news and that are listed on the same exchanges as shocked cryptos. It is a form of mispricing that vanishes after several weeks, giving rise to predictable returns. We propose a profitable trading strategy that exploits this predictability and explain our results with a slow information-processing mechanism. To establish our results, we develop a novel natural language processing technology that identifies crypto peers from news data. Our results highlight the news as a key driver of co-movement among peer assets.
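
A minimal baseline for identifying peers from news co-mentions might look as follows. The paper's NLP technology is more sophisticated; the function and thresholds here are illustrative assumptions.

```python
from collections import Counter
from itertools import combinations

def comention_peers(articles, min_count=5):
    """Identify peer pairs from news co-mentions.

    articles: iterable of lists, each containing the crypto tickers
    mentioned in one news article. Returns pairs co-mentioned at
    least `min_count` times.
    """
    pair_counts = Counter()
    for mentioned in articles:
        # Count each unordered pair once per article.
        for a, b in combinations(sorted(set(mentioned)), 2):
            pair_counts[(a, b)] += 1
    return {pair: n for pair, n in pair_counts.items() if n >= min_count}

# Example usage:
# peers = comention_peers([["BTC", "ETH"], ["ETH", "LTC", "BTC"]], min_count=1)
```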

"The Network of Firms Implied by the News" (with H. Zheng).  

Online Appendix.  

Codes.  

Data.

We show that the news is a rich source of data on distressed firm links that drive firm-level risks and aggregate fluctuations. We find that the news tends to report on links in which a less popular firm is distressed and may contaminate a more popular firm. This flow of information constitutes a contagion effect that results in predictable stock returns and credit downgrades. At the aggregate level, we show that shocks to the degree of connectivity in the news-implied firm network predict increases in aggregate volatilities, credit spreads, and default rates, as well as persistent declines in output. To obtain our results, we develop a machine learning methodology that takes text data as input and outputs a network of firm links implied by the data. The results of this paper enable the accurate estimation of risks that drive business cycle fluctuations.
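
For intuition, a simple aggregate connectivity series on a news-implied network could be computed as below. This is illustrative; the paper's connectivity measure and network construction may differ.

```python
import networkx as nx

def monthly_connectivity(edges_by_month):
    """Average degree of the news-implied firm network, month by month.

    edges_by_month: dict mapping month -> list of (firm_a, firm_b)
    links extracted from that month's news.
    """
    series = {}
    for month, edges in edges_by_month.items():
        g = nx.Graph()
        g.add_edges_from(edges)
        n = g.number_of_nodes()
        # Average degree equals twice the edge count per node.
        series[month] = 2.0 * g.number_of_edges() / n if n else 0.0
    return series
```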

"Preventing COVID-19 Fatalities: State versus Federal Policies" (with J.-P. Renne and Guillaume Roussellet). 

Are COVID-19 fatalities large when a federal government does not enforce containment policies and instead allows states to implement their own? We answer this question by developing a stochastic extension of a SIRD epidemiological model for a country composed of multiple states. Our model allows for interstate mobility. We consider three policies: mask mandates, stay-at-home orders, and interstate travel bans. We fit our model to daily U.S. state-level COVID-19 death counts and exploit our estimates to produce various policy counterfactuals. While the restrictions imposed by some states averted a significant number of virus deaths, we find that more than two-thirds of U.S. COVID-19 deaths could have been prevented by late November 2020 had the federal government enforced mandates as early as the earliest-acting states did. Our results quantify the benefits of early action by a federal government for the containment of a pandemic.
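
For reference, the deterministic single-region SIRD skeleton underlying the model reads as follows; the paper's version is stochastic, covers multiple states, and allows interstate mobility. Here β, γ, and δ denote the infection, recovery, and death rates, and N the population size.

```latex
\begin{aligned}
\frac{dS}{dt} &= -\beta\,\frac{S I}{N}, &
\frac{dI}{dt} &= \beta\,\frac{S I}{N} - (\gamma + \delta)\, I, \\
\frac{dR}{dt} &= \gamma I, &
\frac{dD}{dt} &= \delta I,
\end{aligned}
\qquad N = S + I + R + D .
```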

"Estimating the Dynamics of Consumption Growth"

We estimate models of consumption growth that allow for long-run risks and disasters using data for a series of countries over a time span of 200 years. Our estimates indicate that a model with small and frequent disasters that arrive at a mean-reverting rate best fits international consumption data. The implied posterior disaster intensity in such a model predicts equity returns without compromising the unpredictability of consumption growth. It also generates time-varying excess stock volatility, empirically validating key economic mechanisms often assumed in consumption-based asset pricing models.
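
A stylized version of the preferred specification, with small and frequent disasters arriving at a mean-reverting rate, can be written as follows; the notation is illustrative rather than the paper's exact model.

```latex
% Consumption growth with Gaussian shocks and Poisson disaster arrivals:
\[
\Delta c_{t+1} = \mu + \sigma\,\varepsilon_{t+1} - \sum_{i=1}^{N_{t+1}} Z_i,
\qquad N_{t+1} \mid \lambda_t \sim \mathrm{Poisson}(\lambda_t),
\]
% with a mean-reverting disaster intensity (0 < rho < 1):
\[
\lambda_{t+1} = \bar\lambda + \rho\,(\lambda_t - \bar\lambda) + \eta_{t+1},
\]
% where eps is i.i.d. standard normal and the Z_i > 0 are small disaster sizes.
```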

"The Systemic Effects of Benchmarking" (with D. Duarte and K. Lee)

We show that an institutional investor whose performance is evaluated relative to a narrow benchmark trades in ways that expose a retail investor to higher risks and welfare losses. In our model, the institutional investor differs from the retail investor because she derives higher utility when her benchmark outperforms. This forces the institutional investor to overreact (underreact) to cash flow news in bad (good) states of the world, increasing individual and aggregate volatilities. While asset prices and wealth are higher in the presence of benchmarking, the retail investor is worse off due to her exposure to higher risks. We empirically validate the mechanisms in our model using data on U.S. equity mutual funds with sector-specific mandates.
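
One common way to formalize such preferences, shown below, lets the institutional investor value terminal wealth W_T relative to a benchmark B_T while the retail investor has standard CRRA utility. This is illustrative; the paper's specification may differ.

```latex
\[
U_{\text{inst}} = \mathbb{E}\!\left[\frac{(W_T / B_T)^{1-\gamma}}{1-\gamma}\right],
\qquad
U_{\text{retail}} = \mathbb{E}\!\left[\frac{W_T^{1-\gamma}}{1-\gamma}\right].
\]
% The institutional investor's marginal utility rises when the benchmark
% outperforms, tilting her trades relative to the retail investor's.
```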

"Efficient Estimation and Filtering for Multivariate Jump-Diffusions" (with F. Guay).  Journal of Econometrics 223 (2021), 251-275.  

Codes

This paper develops estimators of the transition density, filters, and parameters of multivariate jump-diffusions with latent components. The drift, volatility, jump intensity, and jump magnitude are allowed to be general functions of the state. Our density and filter estimators converge at the canonical square-root rate, implying computational efficiency. Our parameter estimators have the same asymptotic properties as true maximum likelihood estimators, implying statistical efficiency. Numerical experiments highlight the superior performance of our estimators.
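
The class of processes covered can be written as the jump-diffusion below, where all coefficients may depend on the state and some components of X may be latent; the notation is illustrative.

```latex
\[
dX_t = \mu(X_t)\,dt + \sigma(X_t)\,dW_t + dJ_t,
\]
% W is a Brownian motion; J is a pure-jump process whose jumps arrive with
% state-dependent intensity \lambda(X_{t-}) and have magnitudes drawn from
% a state-dependent distribution \nu(\,\cdot\,; X_{t-}).
```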

"Inference for Large Financial Systems" (with K. Giesecke and J. Sirignano), Mathematical Finance 30 (2020), 3-46.

We consider the problem of parameter estimation for large interacting stochastic systems when data are available on the aggregate state of the system. Parameter inference is computationally challenging due to the scale and complexity of such systems. Weak convergence results, similar in spirit to a law of large numbers and a central limit theorem, can be used to approximate large systems in distribution. We exploit these weak convergence results to develop approximate maximum likelihood estimators for such systems. The approximate estimators are shown to converge to the true parameters and to be asymptotically normal as the number of observations and the size of the system grow large. Numerical studies demonstrate the computational efficiency and accuracy of the approximate MLEs. Although our approach is widely applicable to large systems in many fields, we are particularly motivated by examples arising in finance, such as systemic risk in banking systems and large portfolios of loans.
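
Schematically, the approach replaces the intractable finite-N likelihood with one built from the weak-convergence limits; this is a sketch of the idea in the abstract, with illustrative notation.

```latex
% For a system of size N, the observed aggregate state behaves like
\[
\bar X^{N}_{t} \;\approx\; \bar x_{t}(\theta) \;+\; \frac{1}{\sqrt{N}}\, G_{t}(\theta),
\]
% where \bar x(\theta) is the deterministic law-of-large-numbers limit and
% G(\theta) the Gaussian central-limit fluctuation, so observations of the
% aggregate can be treated as approximately Gaussian around \bar x(\theta)
% when forming the approximate likelihood.
```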

"Simulated Likelihood Estimators for Discretely-Observed Jump-Diffusions" (with K. Giesecke), Journal of Econometrics 213 (2019), 297–320.  

Codes.

This paper develops an unbiased Monte Carlo approximation to the transition density of a jump-diffusion process with state-dependent drift, volatility, jump intensity, and jump magnitude. The approximation is used to construct a likelihood estimator of the parameters of a jump-diffusion observed at fixed time intervals that need not be short. The estimator is asymptotically unbiased for any sample size. It has the same large-sample asymptotic properties as the true but uncomputable likelihood estimator. Numerical results illustrate its properties.
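
Schematically, with observations X_{t_0}, ..., X_{t_n} at fixed spacing Δ, the Markov property factorizes the likelihood into transition densities, each of which is estimated without bias by Monte Carlo; the notation is illustrative.

```latex
\[
L(\theta) = \prod_{i=1}^{n} p_{\Delta}\!\left(X_{t_i} \mid X_{t_{i-1}}; \theta\right),
\qquad
\mathbb{E}\big[\hat p_{\Delta}\big] = p_{\Delta},
\]
% so replacing each transition density p with its unbiased Monte Carlo
% estimate \hat p yields the simulated likelihood.
```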

"Exploring the Sources of Default Clustering" (with K. Giesecke and S. Azizpour), Journal of Financial Economics 129 (2018), 154-183.

We study the sources of corporate default clustering in the United States. We reject the hypothesis that firms’ default times are correlated only because their conditional default rates depend on observable and latent systematic factors. By contrast, we find strong evidence that contagion, through which the default by one firm has a direct impact on the health of other firms, is a significant clustering source. The amount of clustering that cannot be explained by contagion and firms’ exposure to observable and latent systematic factors is insignificant. Our results have important implications for the pricing and management of correlated default risk.
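
A stylized intensity decomposition consistent with the three channels the paper tests, namely observable factors X_t, a latent frailty Y_t, and contagion from past defaults, is shown below; the exact specification is in the paper.

```latex
\[
\lambda_t =
\underbrace{e^{\beta' X_t}}_{\text{observable factors}}
+ \underbrace{Y_t}_{\text{latent frailty}}
+ \underbrace{\gamma \sum_{T_k \le t} \ell_k\, e^{-\kappa (t - T_k)}}_{\text{contagion}},
\]
% where the T_k are past default times and the \ell_k their impacts,
% which decay at rate \kappa.
```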

"Filtered Likelihood for Point Processes" (with K. Giesecke), Journal of Econometrics 204 (2018), 33-53.  

Codes.

Point processes are used to model the timing of defaults, market transactions, births, unemployment, and many other events. We develop and study likelihood estimators of the parameters of a marked point process and of incompletely observed explanatory factors that influence the arrival intensity and mark distribution. We establish an approximation to the likelihood and analyze the convergence and large-sample properties of the associated estimators. Numerical results highlight the computational efficiency of our estimators and show that they can outperform EM algorithm estimators.
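
For observed event times and marks (T_k, Z_k) on [0, T], the complete-information log-likelihood takes the standard point-process form below; the paper's contribution is an approximation when the factors driving the intensity and mark distribution are only partially observed.

```latex
\[
\log L(\theta) =
\sum_{T_k \le T} \Big[ \log \lambda_{T_k}(\theta)
+ \log \phi\big(Z_k \mid \mathcal{F}_{T_k-}; \theta\big) \Big]
- \int_0^T \lambda_s(\theta)\, ds ,
\]
% where \lambda is the arrival intensity and \phi the conditional
% mark density given the pre-event information set.
```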