# Research articles for 2020-06-07

arXiv

We estimate capital and labor income Pareto exponents across 348 country-year observations that span 51 countries over half a century. We document two stylized facts: (i) capital income is more unequally distributed than labor income; namely, the capital exponent (1-3) is smaller than the labor exponent (2-5); and (ii) capital and labor exponents are nearly uncorrelated. To explain these findings, we build an incomplete-market model with job ladders and capital income risk that gives rise to a capital income Pareto exponent smaller than, but nearly unrelated to, the labor exponent. Our results suggest the importance of distinguishing between income and wealth inequality.
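
A standard way to estimate a Pareto exponent from the upper tail of an income sample (not necessarily the estimator used in the paper) is the Hill estimator. The sketch below applies it to synthetic Pareto data; the sample size, tail fraction, and true exponent are all illustrative.

```python
import numpy as np

def hill_estimator(incomes, k):
    """Hill estimator of the Pareto exponent alpha from the k largest observations."""
    x = np.sort(np.asarray(incomes, dtype=float))[::-1]  # descending order
    logs = np.log(x[:k]) - np.log(x[k])                  # log-excesses over the k-th largest
    return k / logs.sum()

rng = np.random.default_rng(0)
alpha_true = 1.5                                 # within the capital-income range (1-3)
sample = rng.pareto(alpha_true, size=100_000) + 1.0  # classical Pareto with x_min = 1
alpha_hat = hill_estimator(sample, k=5_000)
```

A smaller estimated exponent means a heavier tail, i.e. more inequality at the top, which is the sense in which the abstract calls capital income "more unequally distributed."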

arXiv

Residential segregation in the United States has recently shifted toward class- or income-based segregation, and neighborhoods are undergoing significant changes over time, such as in commuting patterns. To better understand commuting inequality across neighborhoods of different income levels, this research analyzes commuting variability (in both distance and time) across wage groups, as well as its stability over time, using CTPP data for 1990-2010 in Baton Rouge. Compared with previous work, commuting distance is estimated more accurately by Monte Carlo simulation of individual trips to mitigate aggregation error and the scale effect. The results based on neighborhoods' mean wage rates indicate that commuting behaviors vary across areas of different wage rates, and this variability follows a convex shape: affluent neighborhoods tended to commute more, but the highest-wage neighborhoods pulled back toward less commuting. This trend remains relatively stable over time despite overall transportation improvements. A complementary analysis based on the distribution of wage groups provides more detailed insights and uncovers the lasting poor mobility (e.g., fewer location and transport options) of the lowest-wage workers in 1990-2010.
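
The Monte Carlo idea mentioned above, replacing centroid-to-centroid distances with distances between randomly sampled trip ends inside each zone, can be sketched as follows; the rectangular zones and distances are purely hypothetical stand-ins for census geography.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_mean_commute(origin_box, dest_box, n_trips=10_000):
    """Average straight-line distance between random points in two rectangular
    zones -- a stand-in for sampling individual trip ends inside zones
    instead of measuring centroid-to-centroid distance."""
    ox = rng.uniform(origin_box[0], origin_box[2], n_trips)
    oy = rng.uniform(origin_box[1], origin_box[3], n_trips)
    dx = rng.uniform(dest_box[0], dest_box[2], n_trips)
    dy = rng.uniform(dest_box[1], dest_box[3], n_trips)
    return float(np.mean(np.hypot(dx - ox, dy - oy)))

# two hypothetical 1 km x 1 km zones whose centroids are 5 km apart
mc_dist = simulate_mean_commute((0, 0, 1, 1), (5, 0, 6, 1))
centroid_dist = 5.0  # what a naive aggregate estimate would report
```

By Jensen's inequality the simulated mean exceeds the centroid distance, which is one face of the aggregation error the paper mitigates.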

arXiv

We propose a deep neural network-based algorithm to identify the Markovian Nash equilibrium of general large $N$-player stochastic differential games. Following the idea of fictitious play, we recast the $N$-player game into $N$ decoupled decision problems (one for each player) and solve them iteratively. Each individual decision problem is characterized by a semilinear Hamilton-Jacobi-Bellman equation, which we solve with the recently developed deep BSDE method. The resulting algorithm can solve large $N$-player games for which conventional numerical methods would suffer from the curse of dimensionality. Multiple numerical examples involving identical or heterogeneous agents, with risk-neutral or risk-sensitive objectives, are tested to validate the accuracy of the proposed algorithm in large group games. Even for a fifty-player game in the presence of common noise, the proposed algorithm still finds the approximate Nash equilibrium accurately, which, to the best of our knowledge, is difficult to achieve with other numerical algorithms.
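
The fictitious-play loop (each player best-responds to the empirical average of the others' past actions) can be illustrated on a toy two-player zero-sum matrix game. This shows only the game-theoretic iteration, not the paper's deep BSDE solver for stochastic differential games.

```python
import numpy as np

# Fictitious play on matching pennies: each round, a player best-responds
# to the empirical frequency of the opponent's past actions.  The same
# iterate-against-the-others'-history idea underlies recasting an
# N-player game as N decoupled decision problems.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])  # row player's payoff matrix

counts = [np.ones(2), np.ones(2)]          # action counts (uniform prior)
for _ in range(20_000):
    beliefs = [c / c.sum() for c in counts]
    br_row = int(np.argmax(A @ beliefs[1]))       # best reply to column's history
    br_col = int(np.argmax(-(A.T) @ beliefs[0]))  # zero-sum: column receives -A
    counts[0][br_row] += 1
    counts[1][br_col] += 1

freq_row = counts[0] / counts[0].sum()  # approaches the (1/2, 1/2) mixed equilibrium
```

In zero-sum games fictitious play is known to converge in empirical frequencies, which is why the loop above settles near the mixed equilibrium.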

arXiv

The current crisis, at the time of writing, has had a profound impact on the financial world, introducing the need for creative approaches to revitalising the economy at the micro level as well as the macro level. In this informal analysis and design proposal, we describe how infrastructure for digital assets can serve as a useful monetary and fiscal policy tool and an enabler of existing tools in the future, particularly during crises, while aligning the trajectory of financial technology innovation toward a brighter future. We propose an approach to digital currency that would allow people without banking relationships to transact electronically and privately, including both internet purchases and point-of-sale purchases that are required to be cashless. We also propose an approach to digital currency that would allow for more efficient and transparent clearing and settlement, implementation of monetary and fiscal policy, and management of systemic risk. The digital currency could be implemented as central bank digital currency (CBDC), or it could be issued by the government and collateralised by public funds or Treasury assets. Our proposed architecture allows both manifestations and would be operated by banks and other money services businesses, operating within a framework overseen by government regulators. We argue that now is the time for action to undertake development of such a system, not only because of the current crisis but also in anticipation of future crises resulting from geopolitical risks, the continued globalisation of the digital economy, and the changing value and risks that technology brings.

SSRN

We use party-identifying language, such as "Liberal Media" and "MAGA", to identify Republican users on the investor social platform StockTwits. Using a difference-in-differences design, we find that the beliefs of partisan Republicans about equities remain relatively unfazed during the COVID-19 pandemic, while other users become considerably more pessimistic. In cross-sectional tests, we find Republicans become relatively more optimistic about stocks that suffered the most from COVID-19, but more pessimistic about Chinese stocks. Finally, stocks with the greatest partisan disagreement on StockTwits have significantly more trading in the broader market, which explains 20% of the increase in stock turnover during the pandemic.
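
The difference-in-differences design can be sketched on simulated data: regress a belief measure on a treatment indicator, a period indicator, and their interaction, whose coefficient is the DiD estimate. All variables and effect sizes below are made up.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy difference-in-differences: "Republican" (treated) vs other users,
# before vs after the pandemic onset, on a hypothetical sentiment index.
n = 4_000
treated = rng.integers(0, 2, n)     # 1 = Republican user (hypothetical label)
post = rng.integers(0, 2, n)        # 1 = pandemic period
true_did = 1.2                      # hypothetical relative-optimism effect
y = (0.5 - 2.0 * post               # everyone turns more pessimistic...
     + true_did * treated * post    # ...but the treated group less so
     + rng.normal(0, 1, n))

X = np.column_stack([np.ones(n), treated, post, treated * post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
did_estimate = beta[3]              # coefficient on the interaction term
```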

arXiv

We suggest the Doubly Multiplicative Error class of models (DMEM) for modeling and forecasting realized volatility, which combines two components that accommodate low- and high-frequency features in the data, respectively. We derive the theoretical properties of the Maximum Likelihood and Generalized Method of Moments estimators. Two such models are then proposed: the Component-MEM, which uses daily data for both components, and the MEM-MIDAS, which exploits the logic of MIxed-DAta Sampling (MIDAS). The empirical application involves the S&P 500, NASDAQ, FTSE 100 and Hang Seng indices: irrespective of the market, both DMEMs outperform the HAR and other relevant GARCH-type models.
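
A minimal sketch of the single-component multiplicative error model (MEM) that the DMEM class builds on, with hypothetical parameters: the realized measure is the product of a GARCH-like conditional mean and a unit-mean positive innovation.

```python
import numpy as np

rng = np.random.default_rng(1)

# One-component MEM: x_t = mu_t * eps_t with E[eps_t] = 1 and
# mu_t = omega + alpha * x_{t-1} + beta * mu_{t-1}.
omega, alpha, beta = 0.1, 0.2, 0.7    # hypothetical parameters, alpha + beta < 1
T = 200_000
x = np.empty(T)
mu = omega / (1 - alpha - beta)       # start at the unconditional mean
for t in range(T):
    eps = rng.exponential(1.0)        # unit-mean positive innovation
    x[t] = mu * eps
    mu = omega + alpha * x[t] + beta * mu

# Taking expectations in the recursion: E[x] = omega / (1 - alpha - beta) = 1.0
sample_mean = x.mean()
```

A DMEM would multiply a second, slow-moving component into `mu` (daily in the Component-MEM, MIDAS-aggregated in the MEM-MIDAS); this sketch shows only the shared high-frequency building block.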

arXiv

In the financial sector, a reliable forecast of a company's future financial performance is of great importance for investors' investment decisions. In this paper we compare long short-term memory (LSTM) networks to temporal convolutional networks (TCNs) in the prediction of future earnings per share (EPS). The experimental analysis is based on quarterly financial reporting data and daily stock market returns. For a broad sample of US firms, we find that LSTMs outperform the naive persistent model with up to 30.0% more accurate predictions, while TCNs achieve an improvement of 30.8%. Both types of networks are at least as accurate as analysts and exceed them by up to 12.2% (LSTM) and 13.2% (TCN).
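
The building block that distinguishes a TCN from a recurrent LSTM is the dilated causal convolution, in which each output depends only on current and past inputs. A minimal NumPy sketch (not the paper's architecture):

```python
import numpy as np

def causal_conv1d(x, w, dilation=1):
    """Dilated causal 1-D convolution, the core TCN operation:
    y[t] depends only on x[t], x[t-d], x[t-2d], ... (zero-padded on the left)."""
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([sum(w[j] * xp[t + pad - j * dilation] for j in range(k))
                     for t in range(len(x))])

x = np.arange(6, dtype=float)                    # toy input series
y = causal_conv1d(x, w=[0.5, 0.5], dilation=1)   # two-tap causal moving average
```

Causality is what makes such a network usable for forecasting: changing a future input leaves all earlier outputs untouched, and stacking layers with growing dilation enlarges the receptive field without recurrence.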

arXiv

We study the effects of financial shocks on the United States economy by using a Bayesian structural vector autoregressive (SVAR) model that exploits the non-normalities in the data. We use this method to uniquely identify the model and employ inequality constraints to single out financial shocks. The results point to the existence of two distinct financial shocks that have opposing effects on inflation, which supports the idea that financial shocks are transmitted to the real economy through both demand and supply side channels.

arXiv

In this study, a numerical quadrature for the generalized inverse Gaussian distribution is derived from the Gauss--Hermite quadrature by exploiting its relationship with the normal distribution. The proposed quadrature is not Gaussian, but it exactly integrates polynomials of both positive and negative orders. Using the quadrature, the generalized hyperbolic distribution is efficiently approximated as a finite normal variance-mean mixture. Therefore, expectations under the distribution, such as the cumulative distribution function and the European option price, are accurately computed as weighted sums of those under normal distributions. Random variates from the generalized hyperbolic distribution can also be sampled in a straightforward manner. The accuracy of the methods is illustrated with numerical examples.
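
The starting point the abstract mentions, Gauss--Hermite quadrature for expectations under a normal distribution, can be sketched with NumPy; the paper derives its GIG quadrature from this, but the code below shows only the standard normal case.

```python
import numpy as np

# Gauss--Hermite quadrature (physicists' convention) rescaled so that it
# computes E[f(Z)] for Z ~ N(0, 1) as a weighted sum over a few nodes.
nodes, weights = np.polynomial.hermite.hermgauss(20)
z = np.sqrt(2.0) * nodes                 # change of variable to N(0, 1) nodes
w = weights / np.sqrt(np.pi)             # weights now sum to 1

approx = np.sum(w * np.exp(z))           # E[exp(Z)], i.e. a lognormal mean
exact = np.exp(0.5)                      # closed form: e^{1/2}
```

Once the mixing distribution is also discretized on such nodes, an expectation under a normal variance-mean mixture becomes a double weighted sum of normal expectations, which is the mechanism the abstract describes for CDFs and option prices.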

arXiv

The prediction of stock prices is an important task in economics, investment and financial decision-making. This has, for decades, spurred the interest of many researchers in designing accurate stock price predictive models, some of which have been used to predict the next-day opening and closing prices of stock indices. This paper proposes the design and implementation of a hybrid symbiotic organisms search trained feedforward neural network model for effective and accurate stock price prediction. The symbiotic organisms search algorithm is used as an efficient optimization technique to train the feedforward neural networks, and the resulting training process is used to build a better stock price prediction model. Furthermore, the study also presents a comparative performance evaluation of three different stock price forecasting models: the particle swarm optimization trained feedforward neural network model, the genetic algorithm trained feedforward neural network model and the well-known ARIMA model. The system developed in support of this study utilizes sixteen stock indices as time series datasets for training and testing purposes. Three statistical evaluation measures are used to compare the results of the implemented models, namely the root mean squared error, the mean absolute percentage error and the mean absolute deviation. The computational results reveal that the symbiotic organisms search trained feedforward neural network model exhibits outstanding predictive performance compared to the other models. However, the performance study also shows that all three metaheuristic-trained feedforward neural network models have promising predictive competence for high-dimensional nonlinear time series data, which are difficult to capture with traditional models.
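
A simplified sketch of symbiotic organisms search, implementing only the mutualism and commensalism phases and minimizing a toy sphere function rather than training network weights; population size, iteration count, and bounds are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def sos_minimize(f, dim, bounds, pop_size=30, iters=200):
    """Simplified Symbiotic Organisms Search (mutualism + commensalism only).
    In the paper, the decision vector would be the feedforward network's weights."""
    lo, hi = bounds
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(iters):
        best = pop[np.argmin(fit)]
        for i in range(pop_size):
            # mutualism: organisms i and j move toward the best via their mutual vector
            j = rng.integers(pop_size)
            mutual = (pop[i] + pop[j]) / 2
            bf1, bf2 = rng.integers(1, 3, size=2)   # benefit factors in {1, 2}
            cand_i = pop[i] + rng.random(dim) * (best - bf1 * mutual)
            cand_j = pop[j] + rng.random(dim) * (best - bf2 * mutual)
            for k, cand in ((i, cand_i), (j, cand_j)):
                c = np.clip(cand, lo, hi)
                fc = f(c)
                if fc < fit[k]:                      # greedy acceptance
                    pop[k], fit[k] = c, fc
            # commensalism: i benefits from a random partner, the partner is unaffected
            j = rng.integers(pop_size)
            c = np.clip(pop[i] + rng.uniform(-1, 1, dim) * (best - pop[j]), lo, hi)
            fc = f(c)
            if fc < fit[i]:
                pop[i], fit[i] = c, fc
    return pop[np.argmin(fit)], float(fit.min())

sphere = lambda x: float(np.sum(x * x))              # stand-in for the training loss
x_best, f_best = sos_minimize(sphere, dim=5, bounds=(-5.0, 5.0))
```

The full algorithm adds a parasitism phase; it is omitted here for brevity, and training a network would simply replace `sphere` with the prediction error as a function of the flattened weights.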

arXiv

The problem of portfolio allocation in the context of stocks evolving in random environments, that is, with volatility and returns depending on random factors, has attracted a lot of attention. The problem of maximizing a power utility at a terminal time with only one random factor can be linearized thanks to a classical distortion transformation. In the present paper, we address the problem with several factors using a perturbation technique around the case where these factors are perfectly correlated, which reduces the problem to the case with a single factor. We illustrate our result with a particular model for which we have explicit formulas. A rigorous accuracy result is also derived using a verification result for the HJB equation involved. To keep the notation as explicit as possible, we treat the case with one stock and two factors and describe an extension to the case with two stocks and two factors.

SSRN

We show that the news is a rich source of data on distressed firm links that drive firm-level and aggregate risks. The news tends to report about links in which a less popular firm is distressed and may contaminate a more popular firm. This constitutes a contagion channel that yields predictable returns and downgrades. Shocks to the degree of news-implied firm connectivity predict increases in aggregate volatilities, credit spreads, and default rates, and declines in output. To obtain our results, we propose a machine learning methodology that takes text data as input and outputs a data-implied firm network.
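
As a contrast to the paper's learned text-to-network methodology, the simplest version of a news-implied firm network is a co-mention count over headlines; the firms and headlines below are invented for illustration.

```python
from collections import Counter
from itertools import combinations

# Naive co-mention baseline: firms appearing in the same headline get a
# weighted edge.  The paper's machine learning method goes well beyond
# this, but the output has the same shape: a data-implied firm network.
headlines = [  # hypothetical news snippets
    "Supplier AlphaCo distress may spill over to BetaCorp",
    "BetaCorp and GammaInc extend credit line amid AlphaCo troubles",
    "GammaInc earnings beat expectations",
]
firms = {"AlphaCo", "BetaCorp", "GammaInc"}

edges = Counter()
for h in headlines:
    mentioned = sorted(f for f in firms if f in h)
    for pair in combinations(mentioned, 2):
        edges[pair] += 1

# per-firm connectivity; aggregate shocks to this kind of measure are the
# predictor of volatilities, spreads, and defaults described above
degree = Counter()
for (a, b), weight in edges.items():
    degree[a] += weight
    degree[b] += weight
```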