The presentations at the well-attended Axioma conference dealt with the quantification of macro risks, the use of fundamental data, such as ESG and accounting data, in a systematic investment process, and the performance of multi-factor portfolios. BNP Paribas Investment Partners presented its recent paper on funding ratio risk decomposition.
Quantifying macro risks
The forum was kicked off by Sebastian Ceria, Axioma’s CEO. The traditional way of stress-testing consists of taking the current portfolio back to a number of stressful historical periods. The main disadvantage of this approach is that historical periods may not reflect expected dynamics. Ceria discussed “risk resolution”, a novel way of stress-testing equity portfolios via an already existing equity factor model. Typically, the pricing factors in such models are purely equity-specific, such as the overall equity market return, equity sectors and equity style variables (like size, value and momentum). The key to the risk resolution approach is to define the relationship between macro factors and the pricing factors in the equity model. For instance, once one has established the relationship between the 10-year interest rate and the equity pricing factors, one can shock the interest rate and translate it into shocks of the equity factors, which are then channeled into the existing equity model, yielding the shocked value of the equity portfolio. Mr. Ceria emphasised that estimating the relation between the already existing factors in the model and the macro variables can be non-trivial. In particular, the choice of period used for estimating these equations is crucial, because the relations can be highly non-stationary.
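The two-step logic described above (first estimate the macro-to-factor link, then propagate a macro shock through the existing factor model) can be sketched as follows. This is a minimal illustration on synthetic data, not Axioma's actual methodology: the factor sensitivities, sample length and portfolio loadings are all invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative monthly data: changes in the 10-year rate (the macro factor)
# and returns of three equity pricing factors that load on it.
T = 120
rate_change = rng.normal(0.0, 0.5, T)                # in percentage points
true_link = np.array([-2.0, 0.5, -0.3])              # assumed sensitivities
factor_returns = rate_change[:, None] * true_link + rng.normal(0, 0.3, (T, 3))

# Step 1: estimate the macro -> equity-factor relationship by OLS.
X = np.column_stack([np.ones(T), rate_change])
est_link = np.linalg.lstsq(X, factor_returns, rcond=None)[0][1]

# Step 2: translate a macro shock into equity-factor shocks.
rate_shock = -1.0                                    # rates down 100 bp
factor_shocks = est_link * rate_shock

# Step 3: channel the factor shocks through the existing equity model
# via the portfolio's factor exposures.
exposures = np.array([1.0, 0.2, -0.5])               # portfolio factor loadings
portfolio_impact = float(exposures @ factor_shocks)
```

The difficulty Ceria pointed to, non-stationarity of the macro-to-factor relation, shows up here in the choice of the sample behind `est_link`: estimated over a different window, the same shock can imply a very different portfolio impact.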
The event’s organisers kindly gave me the opportunity to present our paper “Decomposing Funding Ratio Risk: Providing pension funds with key insights into their liabilities hedge mismatch and other factor exposures”. This describes an innovative method to show how a given pension fund’s funding ratio is exposed to macro risks, such as interest rate, inflation, credit spread and pure equity risks. For further details, see our earlier blogs “Understanding funding ratio risk better, part 1 and part 2”.
Using fundamental data in a systematic fashion
Three presentations dealt with this topic. These were given by Gerben de Zwart and Maarten Smit, both of APG Asset Management, Yin Luo of Deutsche Bank, and Söhnke Bartram of Warwick Business School.
De Zwart and Smit talked about APG’s efforts to integrate environmental, social and governance (ESG) principles into their systematic equity strategies. This, according to them, is an area full of challenges. ESG data providers, for instance, each work with their own definitions, methodology and data, which means their ESG rankings are often only weakly correlated. The data also tend to change slowly over time. Nor is the literature always clear in its conclusions on the link between ESG factors and expected company returns. However, the link between ESG and risk, namely that low ESG rankings tend to increase future risk, is more broadly accepted. In this context some new APG regression results were shown. Firstly, current rankings for slowly-moving ESG variables appeared to be negatively correlated with 12-month-ahead stock-specific risk. Secondly, a fast-moving variable measuring bad company news was examined as well; the conclusion was that bad news, defined in terms of news message wordings such as “bribery”, “corruption” and “scandal”, is expected to result in a subsequent increase in stock-specific risk.
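The first APG finding, a negative cross-sectional relation between current ESG rankings and subsequent stock-specific risk, amounts to a simple regression. The sketch below uses simulated data in which the sign of the relation is assumed; the scale of the ranking and all coefficients are invented, not APG's:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
esg_rank = rng.uniform(0.0, 1.0, n)      # 1 = best ESG ranking (assumed scale)
# Simulate the hypothesised relation: better ESG -> lower 12-month-ahead
# stock-specific risk, plus noise.
future_specific_risk = 0.30 - 0.10 * esg_rank + rng.normal(0, 0.02, n)

# OLS of future stock-specific risk on the current ESG ranking.
X = np.column_stack([np.ones(n), esg_rank])
intercept, slope = np.linalg.lstsq(X, future_specific_risk, rcond=None)[0]
```

A negative `slope` is the analogue of APG's result that current ESG rankings are negatively correlated with future stock-specific risk.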
Luo showed how the Black and Litterman (BL) model can be applied in portfolio construction. The basic idea behind the BL model is to take into account the confidence one has in one’s return expectations. If one is, for instance, expecting a very high return for a given stock, but is not very confident in this, the BL model will push this expectation down towards a more normal level. The less confident one is, the larger this shrinkage will be. Luo’s main example dealt with combining pure quant signals, made on the basis of proprietary models, with fundamental signals based on sell-side analyst recommendations. The BL model was found to produce good back-test results in terms of metrics like Sharpe ratio, information ratio and tracking error, and it outperformed a collection of rival, more conventional portfolio construction techniques.
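The shrinkage mechanism can be illustrated with the standard Black-Litterman posterior formula. The numbers below are illustrative, not from Luo's presentation: a single stock with a 5% equilibrium return and a 20% view, evaluated once with high and once with low confidence in the view.

```python
import numpy as np

def bl_posterior(pi, tau_sigma, P, q, omega):
    """Black-Litterman posterior mean: prior pi blended with views (P, q),
    where omega encodes how (un)confident one is in the views."""
    A = P @ tau_sigma @ P.T + omega
    return pi + tau_sigma @ P.T @ np.linalg.solve(A, q - P @ pi)

# Single stock: equilibrium ("normal") return 5%, our view says 20%.
pi = np.array([0.05])              # prior expected return
tau_sigma = np.array([[0.01]])     # uncertainty around the prior
P = np.array([[1.0]])              # the view is on this one stock
q = np.array([0.20])               # the view itself

high_conf = bl_posterior(pi, tau_sigma, P, q, omega=np.array([[0.001]]))[0]
low_conf  = bl_posterior(pi, tau_sigma, P, q, omega=np.array([[0.10]]))[0]
```

With low confidence (large `omega`) the 20% view is shrunk almost all the way back towards the 5% prior; with high confidence it is barely shrunk at all, exactly the behaviour Luo described.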
Professor Bartram presented joint work with Mark Grinblatt, professor at UCLA Anderson School of Management. Bartram started by asking himself a simple question, but one that is thrilling in the context of the active/passive investing debate: “Can I, as a statistician who doesn’t know much about accounting figures, use publicly available data from the quarterly balance sheets and income statements and make a profit out of it?” This question was approached by building a regression-based fair value model that fits actual market capitalisations with a set of 28 commonly reported accounting items. Each month, each stock’s actual price was compared with the model’s fair value price, and the resulting signal was used in trading strategies. Depending on the precise set-up of the trading strategy and on the applied risk-adjustment method, such a strategy could earn a risk-adjusted abnormal return of 4 to 9% per year. This suggests that the equity market does not fully reflect the available information in accounting data and thus that the market is inefficient. It is a pity, though, that the analysis does not take trading costs into account.
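A stylised version of this signal construction is sketched below. Everything is synthetic and simplified: five invented accounting items stand in for the paper's 28, the valuation weights are made up, and in the actual study the regression is re-fitted monthly on reported data.

```python
import numpy as np

rng = np.random.default_rng(2)
n_stocks, n_items = 200, 5
# Synthetic accounting items and an assumed "true" valuation rule.
accounting = rng.lognormal(0.0, 1.0, (n_stocks, n_items))
true_weights = np.array([2.0, 1.0, 0.5, 3.0, 1.5])
fundamental_value = accounting @ true_weights
# Observed market caps deviate noisily from fundamental value.
market_cap = fundamental_value * np.exp(rng.normal(0, 0.2, n_stocks))

# Cross-sectional regression: market cap on the accounting items.
X = np.column_stack([np.ones(n_stocks), accounting])
beta = np.linalg.lstsq(X, market_cap, rcond=None)[0]
fair_value = X @ beta

# Mispricing signal: positive = priced below model fair value ("cheap").
signal = (fair_value - market_cap) / market_cap
longs  = signal >= np.quantile(signal, 0.8)   # buy the cheapest quintile
shorts = signal <= np.quantile(signal, 0.2)   # short the richest quintile
```

The quintile long-short split at the end is one simple way to turn the signal into a strategy; the paper's 4 to 9% range reflects different set-ups and risk-adjustment methods.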
Combining factors in a portfolio
Lastly, Jennifer Bender of State Street Global Advisors and Nick Baltas of UBS Investment Bank examined multi-factor portfolios.
The reason to combine multiple factors in a portfolio lies in the diversification gains that result from generally low factor correlations. Jennifer Bender addressed the issue of how to construct a multi-factor portfolio in the equity space using value, quality, momentum and low volatility as factors. Should one start with individual stocks and allocate most to the stocks that score best in the multi-dimensional sense on all four factors? Or would it be better to first construct the four one-dimensional factor portfolios and then allocate to these? The key difference between the two approaches lies in the interaction effects of the factors at the individual stock level. Bender’s conclusion was that the outcomes of the two approaches really differ and that the first method offers more possibilities to fine-tune the overall risk-return trade-off. However, in my opinion the results of the two approaches did not differ that much. Furthermore, the analysis gave no overview of the factor exposures generated by the two approaches, nor of their market betas. Could it not be that differences in one or more factor exposures and/or market betas explain the observed performance differences?
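The two construction routes can be sketched side by side. The random scores, top-decile selection and equal weighting below are my simplifying assumptions, not Bender's actual methodology:

```python
import numpy as np

rng = np.random.default_rng(3)
n_stocks, n_factors, top = 100, 4, 10
# Illustrative standardised scores per stock for value, quality, momentum
# and low volatility (higher = better).
scores = rng.normal(0.0, 1.0, (n_stocks, n_factors))

# Route 1 (bottom-up): rank stocks on a composite of all four scores and
# hold the best decile, so factor interactions at stock level matter.
composite = scores.mean(axis=1)
bottom_up = np.zeros(n_stocks)
bottom_up[np.argsort(composite)[-top:]] = 1.0 / top

# Route 2 (sleeves): build one top-decile portfolio per factor first,
# then allocate equally across the four one-factor sleeves.
sleeves = np.zeros((n_factors, n_stocks))
for j in range(n_factors):
    sleeves[j, np.argsort(scores[:, j])[-top:]] = 1.0 / top
sleeve_mix = sleeves.mean(axis=0)
```

Both portfolios are fully invested, but they generally hold different stocks: the bottom-up route rewards stocks that are decent on all factors at once, while the sleeve route can end up holding a stock that is excellent on one factor and poor on the other three.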
Using a collection of individual futures per asset class, Baltas examined combinations of return factors across four asset classes: equities, government bonds, commodities and FX. Working with factors like value, momentum and trend-following, he made long-short quantile portfolios per asset class and combined these per factor. So, for instance, the “cross-asset momentum” factor is a combo of four long-short momentum quantile portfolios, one for each asset class. Baltas also made joint combos, such as the “cross-asset value plus trend-following” factor. Lastly, he examined the low volatility factor within each of the four asset classes, but did not make a cross-asset combo here. The main conclusions of his extensive work are the following: 1) the cross-asset value and cross-asset value plus momentum factor returns were weak during the recent decade, 2) the cross-asset trend-following factor had the strongest pattern, but it was flat post-2008, and 3) within equities and government bonds there were strong low volatility patterns.
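The building block of these constructions, a long-short quantile portfolio per asset class combined equally across classes, can be sketched as follows. The universe sizes, the quantile cut-off and the random signals are all illustrative, not Baltas's data:

```python
import numpy as np

def long_short_quantile(signal, q=0.25):
    """Equal-weight long the top quantile of the signal, short the
    bottom quantile, zero weight elsewhere."""
    n = len(signal)
    k = max(1, int(n * q))
    order = np.argsort(signal)
    w = np.zeros(n)
    w[order[-k:]] = 1.0 / k      # long the strongest signals
    w[order[:k]] = -1.0 / k      # short the weakest
    return w

rng = np.random.default_rng(4)
universes = {"equities": 20, "government bonds": 10, "commodities": 15, "fx": 8}

# "Cross-asset momentum": one long-short momentum portfolio per asset
# class, equally weighted into a single combo return.
combo_return = 0.0
for name, n in universes.items():
    momentum = rng.normal(0.0, 1.0, n)        # illustrative momentum signals
    next_period = rng.normal(0.0, 0.02, n)    # illustrative next-period returns
    w = long_short_quantile(momentum)
    combo_return += (w @ next_period) / len(universes)
```

By construction each per-class portfolio is dollar-neutral (weights sum to zero), so the combo isolates the factor rather than the asset classes' market returns.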