This is my first blog at Geospatial Insight – exciting times!
Prior to joining Geospatial Insight, I worked with modellers across the financial services industry, including the best and worst of quants, both buy-side and sell-side, as well as traders, portfolio managers, economists, commodities specialists, risk managers and actuaries - essentially anyone building, implementing and using models. Many of those models affect your daily life, such as the Bank of England Forecasting Model. If you are interested in learning more, read this.
Here is why I’m fascinated by - and now work at - Geospatial Insight.
In the 1990s and 2000s, firms competed on having the best, fastest and most quickly implemented models. Whoever had access to the best portfolio optimizer, the latest pricing method, or the fastest test environment for the broadest suite of moving-average models tended to win out.
That has changed. Post-2008, the playing field levelled. Models (rightly) came under scrutiny from a plethora of regulators, particularly in the larger institutions. Hardware became cheaper, lowering the barriers to entry for industrial-scale modelling at smaller institutions. Model suites and development environments also became far more readily available, many instantly downloadable - for example, convex optimizers (Stanford CVX), backtesting engines (Quantopian), pricers and simulators (QuantLib), loss models (OASIS), macro-economic analysis (Dynare and IRIS) and of course machine and deep learning tools (Scikit-Learn, XGBoost, TensorFlow, Caffe, Torch) - all callable from standard and dedicated languages: Python, R, MATLAB, C++, Java, C#, Julia and more.

Today, hobbyist day-traders can, within a few hours, access data, visualize and analyse it, construct and (ideally) test a model, and even deploy it into a trading environment - a workflow that fifteen years ago would have taken months or years and was the proprietary preserve of only the wealthiest institutions. Model building has been democratized.
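To make that concrete, here is a minimal sketch of such a workflow - purely illustrative, assuming nothing more than a hypothetical prices.csv of daily closing prices with date and close columns - using pandas to build and crudely backtest a moving-average crossover:

```python
# A minimal sketch of the "hobbyist day-trader" workflow described above.
# Assumes a hypothetical prices.csv with 'date' and 'close' columns of daily closes.
import pandas as pd

# 1. Access the data
prices = pd.read_csv("prices.csv", parse_dates=["date"], index_col="date")

# 2. Construct the model: a simple moving-average crossover
fast = prices["close"].rolling(20).mean()
slow = prices["close"].rolling(100).mean()
signal = (fast > slow).astype(int)          # long when the fast average sits above the slow

# 3. Test it: lag the signal by one day to avoid look-ahead bias
daily_returns = prices["close"].pct_change()
strategy_returns = signal.shift(1) * daily_returns

print("Buy-and-hold return:      ", (1 + daily_returns).prod() - 1)
print("Crossover strategy return:", (1 + strategy_returns.fillna(0)).prod() - 1)
```

Fifteen years ago, each of those three steps was a project in its own right; today it is an afternoon with freely available tools.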
Today, firms compete by accessing new “alternative” datasets to identify novel trading signals, capturing trends not immediately revealed in traditional terminal-supplied market, economic and fundamentals datasets. In a future blog, I will explore the alternative data ecosystem – and there’s a heck of a lot of alternative data around right now – and the role geospatial data and thus Geospatial Insight play in it.
For now, I want to highlight the importance of the intersection of data and model. First, data and model must intersect to be useful. For a fabulous example, see this Swiss Re Internal Capital Adequacy Management (ICAM) system video [view from 3 minutes in], which demonstrates loss and liability calculations - and thus regulatory and operating capital - responding to an unfolding pandemic, in this case driven by simulated data, though, as the slides and interface show, the system can do the same with real captured data from floods or other climatic events.
Second, critically assess your data and model universes. Quoting Ben Steiner, of BNP Paribas and a Director of the Society of Quantitative Analysts in New York:
“‘Not everything that matters can be measured; not everything that can be measured matters.’ Invariably, we build models where we have data. Not all ‘risk’ is easily quantifiable using historical data. As quants, we ignore this at our peril.”
Just because we have data, we are tempted to model it, useful or not. Equally, we must not forget the value of accounting for instances where we don’t (think we) have data.
At Geospatial Insight, I am swimming in a wealth of data from the satellite archive, augmented by current and future satellite, aerial, drone and other data, offering new time series and datasets hitherto inaccessible to modellers. It’s tempting to think there is extractable, model-based meaning in all of it, but as Steiner reminds us, we might be fooling ourselves much of the time. In some cases, however, certain signals from our (vast) dataset can directly drive a trading strategy; others can serve as useful proxy factors within portfolio factor universes, flag candidates for credit default, or feed insurance risk scenarios. We will consider tangible examples in future blogs.
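Before those tangible examples arrive, here is a flavour of the “proxy factor” idea in toy form - all numbers below are invented, and the “weekly car counts at a retailer’s car parks” series is purely hypothetical - showing how an imagery-derived time series might be standardized into a signal:

```python
# Illustrative only: turning a hypothetical satellite-derived time series
# (e.g. weekly car counts observed at a retailer's car parks) into a
# standardized signal that could feed a factor model or a risk scenario.
import pandas as pd

# Hypothetical weekly observations extracted from imagery (invented numbers)
car_counts = pd.Series(
    [1520, 1480, 1610, 1590, 1700, 1655, 1820, 1790, 1940, 1905],
    index=pd.date_range("2018-01-07", periods=10, freq="W"),
)

# Z-score each observation against its own recent history, so the output
# reads as "how unusual is this week's activity?" rather than a raw count.
window = 4
rolling_mean = car_counts.rolling(window).mean()
rolling_std = car_counts.rolling(window).std()
signal = (car_counts - rolling_mean) / rolling_std

print(signal.dropna())
```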
For now, I will offer this comment. Models and analyses in quantitative finance - I’ll pick on Black-Scholes and CAPM - assume markets act rationally, and in so doing simply abstract out the complexity of geography. Even the most elaborate asset-liability models often do not account for geographic difference. A firm’s market value, it is assumed, will naturally capture geographic concentrations and risks. However, take a step back. Any sizeable company will have asset bases, customer universes and liabilities that are geographically contingent. A sudden catastrophe in one place, a labour strike in another, a quiet rise in consumer debt and an announced tax change in a third can all impact a firm’s value, but the market may not react immediately, may react inaccurately given limited information, and in some cases may miss “unknown” events entirely.
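To see how little room geography gets in the standard setup, consider a hedged sketch: a CAPM-style regression of a firm’s excess returns on the market alone, versus the same regression extended with one hypothetical, geospatially derived factor (say, an index of physical disruption across the firm’s asset locations). Everything below is simulated purely for illustration:

```python
# A hedged sketch, not a production model: a single-factor (CAPM-style)
# regression of a firm's excess returns on the market, then the same
# regression extended with one hypothetical geospatially derived factor.
# All data below is simulated for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 250  # trading days

market_excess = rng.normal(0.0003, 0.01, n)      # market excess returns
geo_disruption = rng.normal(0.0, 1.0, n)         # hypothetical geospatial disruption index
firm_excess = (0.9 * market_excess               # market beta
               - 0.002 * geo_disruption          # sensitivity to geographic disruption
               + rng.normal(0.0, 0.008, n))      # idiosyncratic noise

# CAPM alone: regress on the market only
X_capm = np.column_stack([np.ones(n), market_excess])
beta_capm, *_ = np.linalg.lstsq(X_capm, firm_excess, rcond=None)

# Extended model: market plus the geospatial disruption factor
X_geo = np.column_stack([np.ones(n), market_excess, geo_disruption])
beta_geo, *_ = np.linalg.lstsq(X_geo, firm_excess, rcond=None)

print("CAPM:      alpha, beta_mkt           =", np.round(beta_capm, 4))
print("GeoFactor: alpha, beta_mkt, beta_geo =", np.round(beta_geo, 4))
```

The point is not the toy regression itself, but that geography only enters the model if you give it a factor to enter through.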
Models have indeed become commoditized, but machine learning, big data and cloud computation, alongside emerging geographical information, offer an opportunity to construct more accurate and, dare I say it, more truthful financial models. Such models, with their accompanying insights, will in some cases facilitate valuation, in others risk/liability forecasting, and in others still reveal opportunities for arbitrage.
We call this GeoFinance.
https://www.geospatial-insight.com/markets/investment-market/