Big data leading to big changes in asset management

Please note that this article may contain technical language. For this reason, it is not recommended to readers without professional investment experience.

Uber has become a symbol of the way new technologies can rapidly transform a long-established business, simultaneously creating chaos and major opportunities. In asset management, there is a marked increase in the use of the internet and social media to create a more interactive relationship with clients. After all, why go to the cost of printing booklets if clients can access all the information they need online – even in video format – and use it to directly compare providers?

Going a step further, some financial technology (fintech) start-ups are proposing ‘robo-advising’ with the ultimate goal of automating content provision. They dream of Siri – Apple’s voice-controlled personal assistant application – being able to answer, in any language, questions such as: “Siri, where should I invest today to secure my retirement?” Needless to say, this functionality is still some way off, although IBM has already developed Watson, a system that uses “natural language processing and machine learning to reveal insights from large amounts of unstructured data.”

Elsewhere, in the ‘big data’ sphere, companies such as Amazon and Netflix can predict which book or film a person will like, based on that person’s previous choices and a large database of earlier clicks or purchases. It seems fairly probable that the same kind of extrapolation can also be applied to past investment choices.
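To make the idea concrete, below is a minimal sketch of this recommendation logic transposed to fund selection: an item-based similarity score computed from a matrix of past purchases. The investors, funds and the recommend helper are hypothetical assumptions for illustration, not a description of any provider’s actual system.

```python
# A minimal sketch of recommendation logic applied to fund selection.
# All data here is hypothetical; real systems use far larger purchase
# histories and more sophisticated models.
import numpy as np

# Rows = investors, columns = funds; 1 = the investor bought that fund.
purchases = np.array([
    [1, 1, 0, 0],   # investor A bought funds 0 and 1
    [1, 0, 1, 0],   # investor B bought funds 0 and 2
    [0, 1, 1, 1],   # investor C bought funds 1, 2 and 3
], dtype=float)

# Item-item cosine similarity: funds bought by the same investors score high.
norms = np.linalg.norm(purchases, axis=0, keepdims=True)
similarity = (purchases.T @ purchases) / (norms.T @ norms)

def recommend(investor_row: np.ndarray, top_n: int = 2) -> np.ndarray:
    """Score unowned funds by their similarity to funds already held."""
    scores = similarity @ investor_row
    scores[investor_row > 0] = -np.inf   # never re-recommend current holdings
    return np.argsort(scores)[::-1][:top_n]

print(recommend(purchases[0]))  # funds most like those investor A already owns
```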

Smart beta and factor investing: merely the latest ‘Uberisation’ steps

However, it is not only the distribution side of our industry that is changing, but also the production side, where researchers and managers define and implement investment strategies. This shift has been underway for some time now, and smart beta and factor investing are merely the latest steps in the ‘Uberisation’ process.

This all started with what would now be called ‘big data’, in Chicago, in 1960. The Center for Research in Security Prices (CRSP) began providing a comprehensive database of security prices, which paved the way for academic research into anomalies in the way such prices evolve. Among others, the low-volatility anomaly was identified by Robert Haugen through analysis of data from this database. In 1995, the database was extended to mutual funds, enabling studies of the persistence of mutual fund performance (Carhart, 1997) that first established a link between exposure to factors and the recurring performance of asset managers.
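The standard way to measure that link is to regress a fund’s excess returns on the factor returns: the resulting betas are the fund’s factor exposures, and the intercept is its alpha. The sketch below uses randomly generated return series purely for illustration; actual studies such as Carhart’s draw on databases like CRSP.

```python
# A hedged sketch of the factor-exposure idea behind Carhart (1997): regress a
# fund's excess returns on four factors (market, size, value, momentum).
# The return series below are randomly generated placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_months = 120

# Hypothetical monthly factor returns: market, SMB, HML, momentum (WML).
factors = rng.normal(0.0, 0.03, size=(n_months, 4))

# A hypothetical fund: strong market exposure, a momentum tilt, small alpha.
true_betas = np.array([1.0, 0.2, -0.1, 0.4])
fund_excess = 0.001 + factors @ true_betas + rng.normal(0, 0.01, n_months)

# OLS: fund_excess = alpha + betas . factors + noise
X = np.column_stack([np.ones(n_months), factors])
coef, *_ = np.linalg.lstsq(X, fund_excess, rcond=None)
alpha, betas = coef[0], coef[1:]

print(f"estimated alpha: {alpha:.4f}")           # skill left after factor exposure
print(f"estimated betas: {np.round(betas, 2)}")  # the fund's factor exposures
```

A fund whose past performance is fully explained by its betas has little recurring alpha; that is precisely the kind of distinction this regression makes possible.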

Big data: a feature of asset management for years

Taking this a step further, the Institutional Brokers’ Estimate System (IBES) began to transform the job of analysts by providing access to the averaged predictions of thousands of analysts covering more than 40,000 companies. No single manager could possibly integrate all this content without at least some quantitative filter, or ‘data science’, to use the current fintech terminology.
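As a simple illustration of such a filter, the sketch below aggregates per-analyst earnings estimates into a consensus and flags companies where analysts disagree widely. The tickers, estimates and 10% dispersion threshold are hypothetical assumptions chosen for the example, not IBES output.

```python
# A minimal quantitative filter over analyst estimates: compute a consensus
# per company and flag unusually wide disagreement for manual review.
# Data and thresholds are hypothetical.
from statistics import mean, stdev

# Hypothetical earnings-per-share estimates, keyed by company ticker.
estimates = {
    "AAA": [2.10, 2.15, 2.05, 2.20],
    "BBB": [0.50, 1.40, 0.45, 1.35],   # analysts strongly disagree
    "CCC": [3.00, 3.02, 2.98],
}

for ticker, eps in estimates.items():
    consensus = mean(eps)
    dispersion = stdev(eps) / abs(consensus)   # disagreement, scaled
    flag = "review manually" if dispersion > 0.10 else "ok"
    print(f"{ticker}: consensus {consensus:.2f}, "
          f"dispersion {dispersion:.2f} -> {flag}")
```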

To some extent, quantitative asset management integrates all these data management techniques into a form of ‘robo-management’, just as ‘robo-advising’ integrates information technologies to enhance the sale of mutual funds.

Etienne Vincent

Head of Global Quantitative Management, THEAM
