The hypocrisy of sustainability demands and ESG investing

I find ESG concerns in the world of investing to be interesting, not least because they force stakeholders to ask some rather candid and introspective questions of themselves about the type of work being done and what sort of role they should play in the larger scheme of things. Ultimately, if you reduce these questions down to their underlying assumptions, you’ll find that they’re rooted in rather hypocritical views: A lot of ESG-related concerns boil down to people asking you to be a better person than they manage to be themselves.

The types of ESG topics that see interest from investors and their clients often relate to issues around global warming, the impact of globalisation on developing countries, pollution, and other phenomena that are best described as externalities of economic activity. That is, they are issues that arise because of the economic activity of stakeholders and that would need internalisation for a humanitarian and economically positive outcome to result.

With regard to externalities, the main question that often needs to be asked is whose responsibility it really is to ensure that the externalities are accounted for and rectified. Within the ESG framework, a lot of these questions are funnelled to companies and their management teams, after the responsibility for them has been abdicated by investors and their clients (and their clients’ clients) through an industry-wide process of passing the blame. I find this unconscionable.

Let’s consider an issue like the negative externalities of the fast-fashion industry. Not only does fast fashion contribute to environmental issues like waste and pollution, but there are labour-related issues to take into consideration as well. This is because fast fashion is—and has to be—cheap. This allows the companies offering the fashion to sell to a large customer base, enjoying the benefits of scale and the large addressable markets that investors prize so highly. However, the low cost of these near-disposable fast-fashion items comes at the cost of poor working conditions for the people involved in the items’ manufacture.

Clothing requires dexterity to manufacture, making this task hard to automate and requiring an army of human hands. High wages in developed countries and the unwillingness of customers in the same countries to shoulder the higher costs have pushed fashion brands to move more of their manufacturing abroad. (This is the case also for companies that historically have prioritised local production to allow for faster turnaround times and leaner inventories.) In other words, the use of low-cost labour in the manufacture of fast fashion is an externality of globalisation: If low-cost labour is available and transportation costs won’t eat away at the savings, it makes economic sense to move manufacturing abroad, especially when customers prioritise low prices at the expense of local craftsmanship.

The markets that offer low-cost labour for manufacturing are, however, less well-developed socially, and workers in these markets typically enjoy fewer protections and suffer greater exploitation in addition to the low pay that they typically earn. While it can be argued that this is a temporary symptom of early stages of economic development (working conditions in the industrialising West were typically not particularly rosy either), there is another argument to be made that people in the West have a humanitarian responsibility not to exploit people unnecessarily or to expect anyone—poor or not—to suffer on their behalf. (This latter point should really not have to be made; it should be obvious that anyone following classical liberal principles should strive to better the lot of their fellow humans and that international companies can have a very important role to play here.)

However, bettering other people’s lots quickly becomes complicated in practice. Even if companies in the fast-fashion industry chose to pay their workers more than the average local wage, and so better their lot, this might cause local issues by artificially inflating wages (which would be economically unproductive or destabilising). In addition, the companies will also have to balance these labour and production costs against the proportion of such costs that they can pass on to their customers.

This is where ESG-related concerns become interesting, because the end-customers are often very unwilling to shoulder the increased costs. Instead, these customers will vote with their feet and patronise brands and retailers that can offer lower prices (often at the expense of the condition of their frontline manufacturing workers and their own profit margins). This demonstrates that the customers are an important part of the ESG equation. Investors are too, however, in their strong emphasis on companies growing their margins and thereby offering improved shareholder returns. Altogether, this leaves the companies themselves between a rock and a hard place: By passing the costs of their labour-related externalities on to their customers they run the risk of declining sales, but by internalising these costs, they run the risk of alienating investors.

This makes for one of the most interesting dilemmas of ESG concerns in the investment context: How can an emphasis on ESG-related issues be balanced with investor and company return requirements? While I don’t have a good answer to this question myself, it should really be one of the main questions being asked in the space to allow stakeholders to scale their expectations accordingly. 

Another question that is interesting to ask in this context is whose responsibility the solutions to these issues actually are. As discussed above, a lot of the labour-related issues in the fast-fashion industry have their roots in the economic reality of globalisation. The strategies pursued by agents in economic systems are typically optimised for specific environments, which are themselves the result of various positive and negative incentives. Ultimately, these incentives are set by the expectations of customers, investors, and governments. When it comes to issues caused by globalisation, I wonder if governments shouldn’t have a larger role to play here, creating better incentives that force customers to internalise the human costs of their consumption by setting things like minimum prices on one hand and minimum wages on the other, thus creating the incentives for a higher pass-through rate of customer cash to worker pockets. This is a discussion that I think should be had.

Ultimately, these issues have grown out of the inconsistencies and conflicts between customers’ demands and their actual behaviour. On the one hand, fast-fashion customers want low-cost clothing (which they’ve shown by supporting a large industry focussed on satisfying this demand and by prioritising low cost over local manufacturing). On the other hand, these same customers ask their retailers to offer more sustainable alternatives—somehow expecting the price to stay the same even as the cost increases for the manufacturer. In other words, the customers want to have their cake and eat it too. Of course, this creates an opportunity for manufacturers that are able to satisfy both of these demands in the form of low-cost garments that are manufactured to a high humanitarian standard. This is however a very hard balance to strike, and I can think of only a handful of players able to walk this very narrow line (most of which still offer product at a higher-than-average price point).

Another set of issues, however, grows out of investors projecting their own—only partially overlapping—demands onto the companies in their never-ending hunt for yield. (Even if this, of course, is fuelled by their clients’ demands for higher and higher yields to help fund their own liabilities and obligations.) While this is understandable, it makes any investor’s emphasis on ESG-related issues a bit hypocritical. I don’t think it’s fair for investors to project these issues onto the companies they invest in, asking them to do the right thing and to pay their workers more while at the same time hungering for yield. In this context, investors are the same as the customers; they want to have their cake and eat it too. The companies themselves are stuck in the middle of these concerns, with very little firepower to actually do anything about these conflicting demands.

For this reason, my instinct is to side with the companies. While I am aware that there are many companies that are poorly run and that have chosen to do the wrong thing, I oftentimes think this is just one way for them to optimise their strategy according to the incentives that they’re operating under (where better incentives would lead to better behaviours across the board). This however makes the companies doing the right thing—and doing so successfully—all the more impressive as they’ve found a way to thrive under quite complicated conditions. Such companies deserve a lot of respect, not least from investors. 

This is where I find the emphasis on ESG-style investing to be very misplaced, because it creates additional incentives for companies that I’m not sure are going to be helpful or productive, as it incentivises investors to ask the impossible of companies. Again, this is a version of shareholder activism, where investors project their own failures onto companies and ask them to do right because the investors themselves have failed. This is wrong; any conversation around ESG-type investing or similar concerns needs to form around an appreciation of the reality of the demands being made. For example, the only investors who could—in good faith—expect the best of companies are the ones who have chosen to internalise the cost of their own wishes by communicating to their clients that ESG-style investing comes at a price: the risk of a lower return.

I don’t think this attitude is common in the investment community, as you can see in the widening valuation skew between CAPEX-heavy and CAPEX-light industries and the resultant investor interest. While CAPEX-heavy industries like heavy manufacturing actively invest in their local economies and often create products with a tangible economic value, this comes at the cost of lower free cash flows and thus lower investor returns. Conversely, CAPEX-light businesses like software companies skim value away from global economies because of the zero-sum products they create and because of their scalable business models; this also leads to strong free cash flows and high investor returns. Given these two choices, rational investors will of course choose to invest in the higher-yielding alternative, but it needs to be recognised that this comes with negative economic externalities in the form of growing local income inequality and stunted domestic economic growth.
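The yield gap driving this preference can be made concrete with a stylised free-cash-flow comparison. (A sketch with made-up numbers; `fcf_yield` is my own illustrative helper, not a standard formula from any particular source.)

```python
# A stylised comparison: two companies with identical operating cash flow,
# differing only in capital intensity. Free cash flow (FCF) is operating
# cash flow minus CAPEX; the FCF yield is FCF over the company's market value.

def fcf_yield(operating_cash_flow: float, capex: float, market_value: float) -> float:
    """Free cash flow as a fraction of market value."""
    return (operating_cash_flow - capex) / market_value

# Same operating cash flow and market value; only CAPEX differs.
heavy = fcf_yield(operating_cash_flow=100.0, capex=80.0, market_value=500.0)  # manufacturer
light = fcf_yield(operating_cash_flow=100.0, capex=10.0, market_value=500.0)  # software firm
print(heavy, light)  # 0.04 0.18
```

All else equal, the CAPEX-light business throws off a several-times-higher cash yield, which is exactly the skew the valuation gap reflects.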

What is cheeky is for such high-grossing investors to first pocket the returns from their low-CAPEX investments (on behalf of their clients) and then turn around and ask the companies to better themselves by increasing CAPEX (or OPEX, to be consistent with the labour-cost example) when they wouldn’t actually invest in such companies because it’s not consistent with their investment mandate and client demands. Instead, any ESG investor worth the name would need to communicate this trade-off to their clients clearly, and be willing to stand by and watch their clients (who often are the ones pushing for ESG-type investing) turn around and find an investor offering better returns. An ‘ESG’ investor is therefore not worth the name if—when you press them on the topic—they confess that lower-grossing companies don’t have a place in their portfolios. This, I suspect, is however the attitude taken by the vast majority of investment professionals, since ‘ESG’ is really—at the end of the day—just a marketing term.

Altogether, this shows that what’s really needed to get to the bottom of these issues is the recognition that if ESG investing were easy, everybody would already be doing it. But it’s not easy, and I think all stakeholders need to wake up to this fact. Good ESG investing requires honest expectations from clients, just like companies doing good requires honest understanding from their customers. While good and clear communication (from companies and governments alike) is needed to increase the recognition of this low-cost-high-wages dilemma, ultimately, it’s people like you and me who need to internalise the costs of our expectations: If we want to see a better world where companies are actively doing good and where investors reward companies for doing so, we need to be willing to pay extra for that luxury. There are no free lunches. Rather, everything comes at a price, and before you ask for something, you need to look yourself in the mirror and ask if you’re willing to pay that price before asking anyone else to pay it on your behalf.

Why does America feel so broken?

What ecology can teach us about economy & politics

Over the last few years it has become increasingly common to look at the state of the world and to throw one’s hands up in disbelief, dejectedly exclaiming something along the lines of:

“The system is broken. America is broken. Everything is broken.”

Many of us have, at least on occasion, shared this sentiment. It’s hard to look at a system suffering from growing income inequality and political upheaval without feeling it’s all coming apart at the seams. This feeling can however be hard to reconcile with the more-objective observation that the US—on the whole—remains a decent place to live. (Because poverty is relative, being poor in America still affords you riches in absolute, global terms.) Altogether, these discrepancies raise the interesting question of why all of this is taking place: Why does it feel like America is so broken (when there are so many worse places to live)?

Strangely, ecology might hold the answer. To understand this, we must however first take a detour into physics. When it all comes together in the end, we’ll have a more-productive framework with which to interpret what’s going on in the American system, including the economy and its contemporary politics. 

(I’ll be simplifying a bit in the following, in the interest of space. If you want to skip straight to the conclusion, scroll down to ‘Why does the (American) system feel so broken?’)

Complex systems are organised by energy flows

In thermodynamics (or non-equilibrium thermodynamics, to be precise) there is a concept that has been popularised as “Nature abhors a gradient”. What this means is that whenever a gradient arises, it will be dissipated. If the gradient is strong enough, a self-organising system will result, actively feeding off the gradient that birthed it until it has been dissipated. In other words, whenever a gradient arises, a system will arise to feed off the gradient and dissipate it. The stronger the gradient, the more complex the system. The more persistent the gradient, the more persistent the system.

Companies are examples of complex, dissipative systems, except that they self-organise where there is a money-gradient to exploit and diffuse. Several companies (and other money-gradient-diffusers) working together in an interconnected network is what we refer to as an ‘economy’.

Economics is the study of such money-powered systems, just like ecology is the study of organisms (energy-powered systems) as they organise into ecosystems. Seen from this perspective, economics and ecology are sub-disciplines of the branch of non-equilibrium thermodynamics that concerns itself with the study of these self-organising, gradient-dissipating systems. As such, many of the tools developed for use within either economics or ecology can be adapted for use also within the other sub-discipline (and indeed, such cross-fertilisation of tools and ideas has happened very productively in the past). 

Altogether, this provides us with the first part of the framework that we’re building in this post: Economies are ecosystems assembled from money-gradient dissipative systems.

Next, let’s learn a bit about ecology: How ecosystems are organised and how they function. (If you want to learn more about this, and beyond what I’m offering in this blog post, I really recommend Eric Schneider and Dorion Sagan’s book Into the Cool [link].)

Ecosystems are supported by energy-capturing producers

The best illustration of how an ecosystem is organised is a ‘food pyramid’. These pyramids show how organisms organise into (trophic) layers and how these layers organise into hierarchies.

An example of a simple food pyramid, showing how the producers make up the bulk of the system, with each trophic layer supporting the layer above it, running from producers to tiered layers of consumers.

The supporting layer of a trophic hierarchy is that of the producers. Importantly, the producers in an ecological hierarchy are not the source of the energy. Rather, the producer-layer is the layer where the energy (which comes from outside the system) is first captured and brought into the system. In a biological ecosystem, the producers would (typically) be plants, which capture energy (sunlight) through photosynthesis. Herbivores would then consume the energy captured by the plants, while carnivores consume the energy consumed by the herbivores.

The pyramid shape of these trophic hierarchies results from the energy transfer between layers not being perfect. Energy is lost from and between layers, reducing the amount available to support each successive layer. As a result, most ecosystems are made up of no more than five or six trophic layers. The more mature the ecosystem is, the better it is at capturing and retaining energy within the system, allowing it to support more layers.

The pyramid shape comes about because the transfer of energy between layers is not perfect (sketched in in grey), meaning each layer has access to less energy than the layer preceding it.
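The five-or-six-layer limit can be sketched numerically using ecology’s rule-of-thumb ‘ten percent rule’ (roughly a tenth of one layer’s energy reaches the next). The function and numbers below are my own illustration, not from any specific ecological dataset:

```python
# Illustrative sketch: with ~10% of each trophic layer's energy passed on
# to the next layer, count how many layers stay above a minimum viable
# energy level before the flow peters out.

def trophic_layers(producer_energy: float,
                   transfer_efficiency: float = 0.10,
                   viable: float = 1.0) -> list[float]:
    """Return the energy available at each successive trophic layer."""
    layers = []
    energy = producer_energy
    while energy >= viable:
        layers.append(energy)
        energy *= transfer_efficiency  # imperfect transfer to the next layer
    return layers

layers = trophic_layers(producer_energy=100_000.0)
# 100000 -> 10000 -> 1000 -> 100 -> 10 -> 1: after six layers, what's
# left is too little to support a seventh.
print(len(layers))  # 6
```

Raising either the producer energy or the transfer efficiency adds layers, which is the maturity effect described above: better capture and retention supports a taller pyramid.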

Because the producers make energy available to the rest of the system, the size of the producer layer (which is itself determined by its ability to capture energy) becomes a proxy for the size of the overall system. As a result, an ecosystem with producers that can capture a lot of energy will have more energy available to it than an ecosystem where the producers capture less energy.

Altogether, the ecosystem is a networked system of individual gradient-dissipators (organised into layers) where the system as a whole acts as a better gradient-dissipator than any individual organism or layer would on its own. The more trophic layers the ecosystem contains, the more efficient it becomes at both capturing and retaining energy within the system. Such ecosystems are ‘mature’, operating at the very peak of their gradient-dissipating potential.

Ecosystems mature through a process of increasing specialisation. When a new niche (source of energy) opens up, ‘pioneer’ species will migrate into the niche to dissipate the energy gradients. (Consider, for example, the emergence of a new island from the sea and how the first species to take root will be producers like plants which only later attract animal-consumers.) This newly-colonised ecosystem is immature and less able to dissipate (capture and retain) energy than a more mature ecosystem. The pioneer species however create niches for more specialist species, allowing these to take root and the ecosystem to mature further. The ecosystem will be fully mature at the point where it reaches a steady-state of sorts, where it dissipates gradients as efficiently as possible given the continued influx of energy (at the producer layer) and in the absence of any external disruption or stressors (like the introduction of an invasive new species or a catastrophic, destabilising event).

This provides us with the second piece of the framework we’re building: More energy means greater complexity.

Stressed ecosystems lose energy and grow more immature

Mature ecosystems operate at a high level of complexity, with the complexity supported by the system’s high capacity for capturing and retaining energy. Whatever energy is lost as it passes from one trophic layer to the next (or through the activities happening within each layer) will be replaced by more energy being captured at the producer layer. This is what allows the ecosystem to operate at a stable state, in which the amount of energy flowing through the system is constant.

The introduction of a stressor will however upset this delicate balance. An invasive species is ‘invasive’ because it is (for some reason or another) better-able to capture or make use of energy to support itself than other organisms in the system (perhaps because there are no predators to keep it in check). This dynamic allows the invasive species to thrive in the invaded ecosystem, capturing and diverting energy away from other layers. This causes a disruption to the flow of energy through the system, which stresses it. A catastrophic event (like a flood or a forest fire) will similarly reduce the amount of energy flowing through the system.

With less energy available to it, the ecosystem will then revert to a more immature state. The stressed ecosystem will ‘leak’ energy, reducing the system’s ability to dissipate the original energy gradient. These stressed ecosystems are no longer stable at the high level they were at previously. Rather, the systems regress as they oscillate towards a new, lower-energy state that better reflects the amount of energy flowing through them.

This provides us with the third piece of the framework: Stressors cause complex systems to grow less complex or collapse.

To make this ecosystem analogy more relevant to economics or finance, let’s paint the picture again, but replacing organisms with companies and niches with industries.

Economies are organised much like ecosystems

In the money-powered ecosystems we know as economies, niches are populated by companies that compete on their ability to dissipate money-gradients and to use the captured energy (cash) to grow and maintain their internal organisation. It’s this dynamic that we measure when we look at company-specific metrics like, for example, revenue growth (their ability to consume more money-energy) and ROIC (their ability to capture more money-energy to support themselves).

The economy per se is the product of many interconnected companies and niches, feeding off and supporting each other, much like organisms in nature. From this perspective, the organisation of an economy is fractal: Regardless of what level you look at, you’ll see the same layered organisation of producers supporting tiers of consumers (within companies, within sectors, within industries, and within economies).

As a niche (a sub-economy) matures, the system goes through the same progression from pioneers to specialists as biological ecosystems do. By way of example, we can consider something like the Internet (which represented the emergence of a new niche). This new niche was colonised by pioneers like Amazon and Facebook and Google; generalist-producers whose presence attracted consumers to the niche.

As the Internet-economy ecosystem has matured and become able to capture more and more money-energy, it has also been able to support more and more trophic layers, allowing the ecosystem to grow increasingly complex. Today, this system continues to mature, and increased money-gradient-dissipating efficiencies are added at the specialist fringes (for example in the form of direct-to-consumer brands powered by producers like Amazon/Google/Facebook).

From this perspective, it wouldn’t be possible to find something like ‘the next Amazon’, as that part of the ecosystem (niche) is already populated. Instead, it makes more sense to look for companies that can find money-gradients that remain to be dissipated (similar to how the relative Internet late-comer Shopify identified the money-gradient created by the direct-to-consumer businesses enabled by producers like Google and Facebook). As the ecosystem grows more mature, any remaining money-gradients will however grow increasingly hard to find. More fruitful areas to scour would be wholly new niches with the potential to evolve into bona fide ecosystems.

All of this is not to say that a mature ecosystem (or economy) is in any way static. ‘Stability’ implies something fixed and unchanging, but that’s not the dynamic at play here. No, rather, an ecosystem can be mature at the system level, even as that system (and its maturity) is formed by many small fluxes happening within it as new organisations rise and fall. In this way, the top-level stability of an ecosystem or economy is much like that of the stock market, where the ‘market price’ is the product of the rising and falling stock prices of the thousands and thousands of companies within. In other words, even as the emergent system is itself stable, the organisation within is anything but.

Just as ecosystems are supported by the flow of energy within them, with greater energy-flows translating into more complex ecosystems able to support increasingly specialised organisms, so economies are supported by money-flows. Because of this dependency on flow, economies can—just like ecosystems—grow stressed. In economies, this stress is the result of diverted or reduced money-flows that cause less money to flow through the economy overall. These reduced money-flows are less able to support the economy at a given level of complexity, causing it to regress. In other words, stressed economies will revert to a more primitive state.

By replacing ‘ecosystems’ with ‘economies’, our framework is complete:

  • Economies dissipate money-gradients (through the action of their component organisations).
  • Greater money-flows translate into more complex and mature economies.
  • Stressors reduce money-flows, which causes economies to regress.

Armed with this framework we can now (finally!) begin to answer the original question:

Why does the (American) system feel so broken?

The short answer to this question is that the American economy is stressed. As such, the system is in the process of reverting to a lower-energy stable state. This process will be painful and disruptive until the system has settled into the state that will be the most stable for its current level of money-flows. 

The longer answer is, well, longer.

A booming manufacturing sector grows the American economy-pie

Up to the final decades of the 20th century, the American economy was growing, powered by an unprecedented burst of technological innovation and manufacturing prowess. (I like Robert J. Gordon’s The Rise and Fall of American Growth for an in-depth overview of this process [link].) With increasing productivity in the manufacturing sector, the people employed therein—in the foundational, productive layers of the economy—saw their incomes rise. This great wave of newly-created wealth propelled the economy forward as people employed in manufacturing spent their earnings, sending money-energy flowing through the economy. Some of this energy was captured by the managerial (service-based) layer higher up in the hierarchy, allowing the primary service-based sectors to grow. This layer, in turn, fuelled the growth of further service-based layers above. The net result was that the total economy-sized pie was growing bigger, allowing everyone to capture a bigger share. Times were good. The economy was booming. Importantly, however, this growth was fuelled by the growth of the productive layer of the economy: The manufacturing sector.

(We’ll look further into the debate over whether manufacturing or services is the main source of wealth in a future blog post, but for now, suffice it to say that money-energy is captured by the producer layer, while the higher-up service-based layers only consume the money-energy flow in the form of zero-sum transactions.)

Growing the producer layer fuels the growth of the ecosystem/economy as a larger foundational layer can support larger consumer layers and greater specialisation: The ecosystem is growing more efficient and more mature.

Globalisation has reduced the money-energy capture of the manufacturer-producer layer

Globalisation (a disruptive event) took off in the 1950s and 1960s as incremental parts of the global logistics machine fell into place. (I like Mark Levinson’s The Box for a digestible overview of the history of shipping and globalisation [link].) As globalisation grew more dominant, more and more manufacturing moved off-shore. While this allowed goods to be manufactured at lower cost (reducing the price of finished goods to the benefit of retailers and consumers), it also gutted a large part of the domestic producer layer (as the producer layer went global). The people employed in the manufacturing sector found themselves on average making less (if lucky enough to hang on to a job at all). In other words, globalisation caused the domestic producer layer to shrink. As a result, less money-energy was being captured by this foundational layer of the economy, and the ecosystem grew proportionally heavier at the top. In other words, the economy grew imbalanced. It grew stressed.

A shrinking producer layer translates into lower flows for the consumer layers, causing them to shrink in turn. Overall, the system is losing energy, reverting back to a smaller, more immature, more primitive state.

(It is worth noting here that economies like Germany avoided the globalisation-induced stress by allowing the manufacturing sector to evolve as the manufacturing workforce grew more skilled. Low-skilled jobs were exported with globalisation while high-skilled manufacturing jobs were retained. This allowed the producer layer to keep growing, supporting the economy and allowing it to keep maturing. The mistake the US made was in allowing so much of its manufacturing to move off-shore, making little distinction between high- and low-skilled manufacturing jobs. [I think Dan Wang [link] writes very well on this topic and its implications for the manufacturing sector.])

For individuals, the producer-consumer dynamics are very similar, and best expressed using Maslow’s Hierarchy of Needs. In this hierarchy, the supportive needs (survival and belonging) need to be satisfied before self-actualisation becomes possible. If a person is struggling economically, they will revert back to a lower level in the hierarchy, showing an increased focus on satisfying those more basal needs. If an economy is to be as productive as possible, we want everyone in the economy to operate at the level of self-actualisation, since this allows them to be friendly, proactive, cooperative, and innovative. 

A simplified Maslow’s Hierarchy of Needs. If the supporting needs in the hierarchy are unfulfilled, the successive needs cannot be satisfied.

The contemporary American economy is very stressed

The economic stress caused by the gutting of the money-capturing capacity of the productive layers of the American economy has likely been further exacerbated by the Baumol effect and the asset-price inflation that I wrote about in a previous post [link]. Let’s quickly look at each, in turn.

First, as the price of manufactured goods started dropping (because of the positive impact of globalisation and continued technological innovation in the consumer-goods sector), the proportion of the household budget spent on goods went down. In other words, as less money bought more goods, more goods would be needed to buy a fixed amount of services. This is the Baumol effect. (I find this publication [link] to be a good introduction to and discussion of the Baumol effect.)
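The budget-share arithmetic behind this can be illustrated with a toy household basket. (The numbers and the `budget_shares` helper are my own, purely for illustration of the mechanism.)

```python
# Illustrative sketch of the Baumol-style budget shift: hold the household's
# consumption of goods and services fixed, halve the price of goods, and
# watch the share of spending going to services rise.

def budget_shares(goods_qty: int, goods_price: float,
                  services_qty: int, services_price: float) -> tuple[float, float]:
    """Return (goods_share, services_share) of total household spending."""
    goods_spend = goods_qty * goods_price
    services_spend = services_qty * services_price
    total = goods_spend + services_spend
    return goods_spend / total, services_spend / total

# Same basket before and after goods prices halve; service prices unchanged.
before = budget_shares(goods_qty=10, goods_price=10.0, services_qty=10, services_price=10.0)
after = budget_shares(goods_qty=10, goods_price=5.0, services_qty=10, services_price=10.0)
print(before)  # (0.5, 0.5)
print(after)   # goods share falls to ~1/3, services share rises to ~2/3
```

Nothing about the services changed; they simply became relatively more expensive, measured in goods, once goods got cheaper.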

Second, as more money (in dollar terms) has been flooding into the economy following the Great Financial Crisis, the total value captured by each dollar has been going down. Now, if this were happening on an economy-wide level, nobody would be any the wiser; we’d just call this good old-fashioned inflation. However, if this inflation is happening in a specific asset class (like, say, US stocks), it would seem (from the level of the economy) as if the price of that asset class was going up. This means that people whose incomes were tied to these rising prices would see their incomes go up. Conversely, people without incomes tied to this price-inflation would see their incomes go down.

Putting all this together, we now have a situation where the producer layer in the American economy is less productive (because so much manufacturing has moved off-shore). This has led to less money being captured by the producer layer, which would otherwise flow from there to support the rest of the economy. In addition, asset-price inflation has seen more wealth get concentrated at the top layers of the economy, from where very little trickles down. (There is a limit to how much money someone like Jeff Bezos can spend on what they actually need, causing money to pool and not being spent. As a result, the money-flows coagulate.) Altogether, this creates a topsy-turvy distribution of money-energy in the economy, effectively standing the pyramid on its head: A lot of money-energy has pooled in the upper layers, with very little of it flowing down to support the rest of the economy.

This pyramid is broken.

Altogether, this is why it feels like the American system is broken. The flow of money through the system has been interrupted, and with less money-energy flowing, the gradient that supports the system has been depleted. There is less money-energy to support the system. The system is stressed, and it’s reverting to a more primitive, immature state (which is more appropriate for its current level of money-flows). While this process is happening, it feels like everything is going wrong when, in fact, it’s just a system re-tracing its path back into a state that it had previously left behind. It’s a bit like the air going out of a balloon.

If you understand the underlying logic, all of this makes sense: Everything that’s happening in America is part of a greater, predictable process, rooted in thermodynamics and how energy gradients give rise to complex systems.

Importantly, not only does this mean that we know what is likely to happen (the system’s progression is predictable), it also means that we have the tools required to understand how to stop the process and to set the economy back on its original path. All that’s required is a bit of political will and enough sensible people in government to recognise that what’s needed is to rebalance the Great American Pyramid to make the American economy less stressed. The sooner we do this and set the economy back on its original path, the less time it will take to get back to where we started—50 years ago.

Failing this, we’ll see more economic and political upheaval and increased polarisation of the political discourse. This, however, is all part of the process. A stressed economy will see more and more of a push to revert to equilibrium, where the money-energy is evenly distributed throughout. The have-nots will cry louder that we need to eat the rich to survive, and the rich will seek increasing protection behind lenient bodies in the Senate and the House, hoping to protect what they have. Ultimately, a government that promises a more forceful and authoritarian redistribution of wealth is likely to be favoured. We might not like this, but it will re-balance the economy and set the machine chugging forward once again. The more socialist economies of Scandinavia offer good examples: While their manufacturing has moved off-shore, they are much more principled in using taxation to redistribute the wealth. Ultimately, it’s up to each individual economy and political system to decide how they want to re-balance the pyramid. The path chosen is less important than the ultimate result.

At the end of the day, people need very few things to be happy. Among these is the need to earn a not-too-small proportion of the income of the very richest in society. This balances the distribution of wealth in the economy, allowing more of the money to flow. When the money flows, the economy is stable (and people are happy). Happy people will also go out and happen to things, which is how innovation happens. Innovation causes the economy-pie to grow. America knows how to do this part very well. For America to stop being broken, we need to let the money flow once again.

Approximating reality

What follows is a stream of thoughts on the nature of making sense of the world. It might not be perfectly coherent, but it’s a love letter to knowledge and also a mission-statement of mine, of sorts.

Understanding reality (even if only by approximation) can help us understand the goal of our own lives and work better: The better our approximations of reality, the better able we’ll be to make sense of the world. Part of what I’m hoping to achieve with this blog is to demonstrate just how interconnected reality is, and how you can dart and weave between topics as different as physics and biology and finance to better approximate your understanding of reality as a whole.

*

Reality is the underlying fabric of our universe, weaving a tapestry that is both complex and unseen. Yet, the movement of the weave and the processes that have laid the threads into place are subject to physical law; laws that are possible for us to understand and translate into words that make sense to human eyes and ears. Ultimately, the better we understand these laws and the tapestry painted under their influence, the more we will know, the deeper we will see, and the better we’ll be able to make sense of what’s happening in the world. 

Making sense of the world is the aim of each profession. As such, growing more knowledgeable is a prerequisite for doing each profession well. This is because knowledge is a framework by which we can approximate reality—a useful simplification of something that otherwise would be far too complex—lovingly stitched together one piece of data at a time.  

Each nugget of information is a piece of data, binary and blind. Data, however, springs to life as it grows from one to two and from two to many. Indeed, the more data that we accumulate, the more complex and intricate the patterns that start to appear. Linear at first, the patterns will twist and weave in on themselves with an increasing frenzy until they run across the tapestry to connect distant motifs and reveal them as part of one underlying theme. 

It’s by seeing and recognising these patterns that we make sense of the world; deriving patterns from data and then patterns from first-order patterns. Ultimately, these patterns mature into frameworks; more theoretical models of cause and effect that help us leverage insight to predict what comes next. At this point, our understanding has evolved from the passive seeing of revealed patterns to the active hunt for new ones, as if the framework is the flashlight that allows us to peer at parts of the tapestry still hidden in the dark. Each new pattern so discovered adds more power to our framework-flashlight, allowing us to peer deeper and deeper into the not-yet-seen.

Armed with a growing library of patterns, we can start forming hypotheses about how the world works. The better we understand the underlying processes, the more refined our frameworks will be—and the more informed the resultant hypotheses. Such informed hypotheses will, in turn, generate more accurate predictions. As such, a good framework is one that is a good approximation of reality; a useful simplification of something that otherwise would be far too complex. We keep iterating on these frameworks by asking a never-ending series of ‘Why’s, not resting until we’ve touched the ground at the very bottom of things.

From this it follows that the more accurate your approximations of reality, the more successful you will be. Indeed, success is nothing but the product of a long line of hypotheses and predictions that turned out to be less wrong than the average prediction, if not all-out correct. For this reason everyone owes it to themselves to become the most knowledgeable person they can be; to keep hungering for information and to keep iterating their frameworks so they approximate reality more and more. (The more often we’re surprised, the more we’ve been blindly guessing, either because our framework isn’t a good approximation of reality or because we didn’t trust our approximations in the first place.)

Every piece of information tells us something, even if that ‘something’ might not be evident from the start. The more you learn and the deeper the patterns that you see, the easier it will become to put each piece of data into its proper place. And with each piece of data in its proper place, the easier it will be to generate predictions from smaller and more disparate nuggets of data. The more you’ve seen, the more you’ll see that each new scenario is just an iteration of a story that’s already played out in some shape.

The beautiful thing about the tapestry of reality is that the laws governing its weave have specified the use of a fixed palette of threads. Every motif that appears in the weave will make use of these threads; if not all of them at once, then at least we will see no threads that are new. Thus, while we cannot predict what exact motif will appear at any given point of the tapestry, we can use our approximated understanding of reality to predict what average motif is likely to appear and what sub-palette of threads is likely to have been (and will be) used.

The most (consistently) successful people among us have a better ‘feel’ for the weave of the tapestry than everyone else. They simply understand reality better; reality just makes sense to them. Such successful people also have enough confidence in their approximations to trust them not to lead them astray. That is not to say that all things are knowable. Oh no, there is enough randomness in the world to keep things interesting; not everything in life will be predictable. Often, though, a vague hunch about the nature or likelihood of something is more than enough.

Everything that we do is hypothesis-driven, and success depends on having the most accurate understanding of how the world works. The better you understand the world, the more everything that happens will start to make sense. For the person who knows everything there is to know, randomness is the only thing left to surprise.

Stocks, inflation, and Big Macs

I’m intrigued by inflation; it’s yet another output of a complicated, nonlinear system that plays an important role in the economy and that we don’t understand very well.

Baskets and burgers as a measure of inflation

One of the challenges of working with inflation is that it’s hard to measure across economies and through time. One common way to measure inflation is to use the consumer price index (CPI). The CPI is constructed by tracking a basket of common items (food, gasoline, rent, etc.) across regions to provide an idea of how prices differ between them and how they develop over time. Inflation can be calculated as the change in the CPI between periods (CPI inflation).
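
As a minimal sketch of the calculation (with made-up index values, not official figures), CPI inflation between periods is just the percentage change in the index:

```python
# Year-on-year CPI inflation from an index series.
# The index values below are made up for illustration, not official figures.
cpi = {2018: 251.1, 2019: 255.7, 2020: 258.8}

def cpi_inflation(index, year):
    """Percentage change in the CPI from the previous year to `year`."""
    return (index[year] / index[year - 1] - 1) * 100

print(round(cpi_inflation(cpi, 2019), 2))  # → 1.83 (percent) for these toy values
```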

CPI inflation is however only a rough measure of inflation since it’s partially region- and time-dependent. For example, people living in different regions consider different items to be ‘common’ (the weekly needs of an older rural farmer will be different from those of the young socialite couple in the city), which makes it hard to compare like with like. Similarly, the needs of people living at different times will also be different; the items that we consider common today (TVs, air travel, convenience foods) were less common in the past (the weekly needs of a young suburban family in the 1950s are different from the needs of the same family today). The CPI can accommodate such differences to some extent by creating different baskets for rural and urban consumers, and by acknowledging that basket substitutions are inevitable as consumer needs change over time. 

Economic growth and technological change will however also affect how inflation is measured. Today, for example, we benefit from both the low-cost labour afforded by globalisation and low-cost technology as a result of technological innovation. As a result, many of the common items that are tracked by the CPI have actually declined in price. This trend is thought to be at least partially responsible for the low inflation we’ve been seeing in the US over the last 25 years or so.

Average annual US urban CPI normalised to 1982 – 1984 and annual US urban CPI year-on-year change (inflation). The graph plotting inflation is dominated by the 1970s period of high inflation, and the rate of inflation has been steadily declining ever since.

CPI inflation may be low, but this doesn’t rhyme well with a lot of people’s experiences of escalating living costs. For example, housing is more expensive today than it used to be. Education is also more expensive, as is childcare. Healthcare costs are spiralling out of control. If we define inflation as the loss of a currency’s purchasing power over time (as more monetary units are needed to acquire the same product or service), how can we reconcile these different observations: that living costs are going up while the purchasing power of the currency (as measured) isn’t going down particularly fast? The simplest answer to this question is that we’re not measuring inflation correctly, despite our best efforts. So, what can be done?

Ideally, you’d like to track a representative cross-section of the economy to capture all the different inputs that go into setting a price (domestic and commercial rents, transportation and energy costs, commodity prices, salaries and living and employment costs, and so on)—and to do so better than the CPI basket does: The CPI basket is flawed because its constitution changes over time. Conversely, if you were to track the same product, you would be able to create a better measure of how prices change (especially if the price of the product is determined by the costs of its inputs and production). This measure could then be used as a gauge—a stable yardstick—to hold up to the economy, to see how prices differ and change relative to it.

With such considerations in mind, The Economist magazine has created something it calls The Big Mac Index, using the price of the McDonald’s Big Mac as a gauge to hold up to the economy. (Seen through this lens, a Big Mac should always be worth the same, and if the price of the Big Mac changes, it’s because the costs of its production have changed.) There’s something beautifully simple about this approach (even if it’s of course going to be flawed in other ways).

The Big Mac as a gauge of purchasing power: How much wealth can a burger buy you?

Because of my interest in inflation and my feeling that CPI inflation doesn’t capture it very well, I have been looking for a good gauge. Since The Economist’s Big Mac Index is such a gauge, I figured ‘why not’, downloaded a spreadsheet with Big Mac prices in the US since 2000 from GitHub, and started to play with the numbers. Right out of the gate it was obvious that the price of a Big Mac has gone up a lot: While a Big Mac cost $2.51 in 2000, it was $5.67 in early 2020! In other words, the price of the Big Mac has more than doubled in the last 20 years. Put another way, this means that while one dollar would have bought you 0.4 Big Macs in 2000, that same dollar will only buy you 0.18 Big Macs today. Such is the loss of purchasing power because of inflation.

The number of Big Macs you get per dollar has been decreasing steadily over the past 20 years. I got the Big Mac data from GitHub.

Plotting this loss of purchasing power (number of Big Macs you get per dollar) offers an intriguing observation: The number of Big Macs that you get for a dollar follows a power-law-like relationship with a high degree of correlation (r² = 0.99). In other words, the price of a Big Mac can be predicted ahead of time. Therefore, if this correlation holds, the number of dollars needed to get you a Big Mac seems to double every 20 years or so, so that the Big Mac that cost $5 in 2016 will cost $10 in 2035, and $20 in 2055. At the same time, the Big Mac itself shouldn’t change (a Big Mac is a Big Mac is a Big Mac). Instead, something is happening to the dollar—and happening faster than what is captured by the CPI. (While the price of a Big Mac correlates with US urban CPI, CPI inflation has been lower than the price-increase of the Big Mac: US urban CPI was 172 in 2000 and 257 in 2020; an increase of only 1.5×.) The Big Mac might therefore afford us a slightly different perspective on inflation.

The declining number of Big Macs you get per dollar follows a power-law function with high correlation.
The predictable relationship of the number of dollars needed to buy you a Big Mac over time allows us to extrapolate how much a Big Mac will cost in the future. The number of dollars per Big Mac seems to double every 20 years or so.
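
These back-of-the-envelope numbers are easy to reproduce. Below is a minimal sketch, using only the two price points quoted above and treating the ‘doubles every 20 years or so’ observation as a constant-doubling-time trend (an assumption on my part; the exact fitted curve may differ):

```python
import math

# The two Big Mac price points quoted in the text: $2.51 in 2000, $5.67 in 2020.
price_2000, price_2020 = 2.51, 5.67

# Purchasing power in burger units: how many Big Macs one dollar buys.
bm_per_dollar_2000 = 1 / price_2000   # ~0.40
bm_per_dollar_2020 = 1 / price_2020   # ~0.18

# Implied doubling time if the dollar price grows at a constant rate.
growth = price_2020 / price_2000                     # ~2.26x over 20 years
doubling_time = 20 * math.log(2) / math.log(growth)  # ~17 years

def extrapolate(price, start_year, target_year, doubling=20):
    """Project a Big Mac price forward, doubling every `doubling` years."""
    return price * 2 ** ((target_year - start_year) / doubling)

print(round(extrapolate(5.00, 2016, 2036), 2))  # → 10.0
```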

(Now, Big Burger might have a spreadsheet in their HQ looking something like this, which they use for pricing their burgers. Alternatively, the rising price can be a measure of how inflation is eroding the purchasing power of the dollar. The truth is probably somewhere in-between. However, let’s assume that the Big Mac can tell us something about inflation, for now.)

If we use the Big Mac as a gauge (assuming that the value of a Big Mac remains stable over time), we can start expressing the cost of other assets in units of Big Macs. All of a sudden, we can now see how the prices of other assets are changing not in dollars (which lose purchasing power over time), but in Big Macs (whose value we’re assuming will remain stable over time). Now, if we do this, we start making more interesting observations. For example, the number of Big Macs you need to buy the average US home (~70,000) has remained steady since the Great Financial Crisis. (The housing bubble can also be clearly seen as an increase in the number of Big Macs needed to buy the average US home; up to 100,000 Big Macs at its peak.) This suggests that if you were paid in Big Macs, the housing market would be as welcoming in 2020 as it was in 2009, as the price hasn’t changed. Now, that’s interesting: This doesn’t rhyme well with our impression of escalating housing costs.

The number of Big Macs needed to buy the average US home has remained stable since the Great Financial Crisis. The housing bubble itself is clearly visible as an increase in the number of Big Macs needed to buy the average US house. I got the housing data from FRED at the St. Louis Fed.
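
Re-pricing an asset in burgers is a one-line conversion. The home and burger prices below are illustrative round numbers of my own choosing, not the exact series behind the chart:

```python
def in_big_macs(dollar_price, big_mac_price):
    """Express a dollar price in Big Mac units."""
    return dollar_price / big_mac_price

# Illustrative (assumed) average US home and Big Mac prices:
home_2009 = in_big_macs(216_000, 3.10)   # roughly 70,000 Big Macs
home_2020 = in_big_macs(383_000, 5.67)   # roughly 68,000 Big Macs
```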

Another thing that feels like it’s been rising in value is the stock market. However, if we plot the number of Big Macs needed to buy one unit of the S&P500 (which I like to use as a US stock market proxy), we’ll see that the number of Big Macs per S&P500 unit has remained relatively stable over time at ~400 Big Macs…! (The dot-com bubble, the dot-com burst, and the Great Financial Crisis are clearly visible.) The correlation between how many Big Macs you get per dollar and how many S&P500 units you get for the same dollar isn’t super-strong (but still meaningful) at r² ≈ 0.7. This suggests that there is a (simple, absolute) relationship between the dollar-cost of a unit of the S&P500 and the dollar-cost of a Big Mac, so that if one goes up, the other will go up too. In other words, it’s possible that some of the increase in the dollar-price of a Big Mac is related to the price-increase of the stock market.

The number of Big Macs needed to buy one unit of the S&P500 has remained relatively constant over time at around 400 Big Macs per unit. The fallout from the dot-com bust and the Great Financial crisis are clearly visible as deviation from the long-term trend.
The number of units of a Big Mac or the S&P500 that a dollar gets you has declined steadily over time. The number of stock-market units you get for your dollar has been more volatile than the number of Big Macs that you get, but the overall rate of decline has been reasonably similar.
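
For completeness, the r² figures in the text are just squared Pearson correlations; a self-contained sketch (the two series below are made-up stand-ins, not the actual data):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Made-up illustrative series: units of each asset one dollar buys over time.
big_macs_per_dollar = [0.40, 0.35, 0.31, 0.27, 0.24, 0.21, 0.18]
sp500_units_per_dollar = [7.0e-4, 8.0e-4, 7.9e-4, 9.0e-4, 4.9e-4, 4.8e-4, 3.1e-4]

r_squared = pearson_r(big_macs_per_dollar, sp500_units_per_dollar) ** 2
```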

Two types of inflation and the burger as a lifebuoy

I’ve been wondering for some time now if the US economy isn’t seeing asset-price inflation (as opposed to consumer-price inflation) and if that’s not why CPI inflation seems to be so out of sync with people’s actual experience of life in the US economy. According to this interpretation, inflation isn’t happening in the cost of goods and services tracked by the CPI basket as much as it’s happening in pure assets (like stocks and real estate). 

The drivers of this kind of asset inflation would be many and complex. Many people have been crying foul over the Fed, for example, for having kept interest rates artificially low. This might have incentivised more and more money to go into the stock market rather than the economy proper (for reasons that will form the topic of another blog post). When more and more money is chasing the same assets (stock-market units, say), the price of those assets will understandably go up. If more money is chasing fewer assets in the economy per se, it’s likely that we will see CPI inflation instead. (The 1970s oil crisis and resultant inflation would be a case in point.) Conversely, if more money is chasing fewer assets in the more abstract, asset-based economy (like real estate or the stock market), it is less likely that we will see CPI inflation. Instead, asset-price inflation should result.

As the purchasing power of the currency declines, the value of the goods and assets themselves will however not change (assuming a stable supply and no technological innovation). Thus, a loaf of bread should always be worth the same, even as the price of the loaf might change over time. The same is true of assets (assuming stable supply and production costs), even as their role as investments complicates this relationship somewhat. (C.f. gold.)

In an economy seeing CPI inflation, the people with the lowest incomes will be the hardest hit as they need to spend more and more of their income on the same amount of goods. As the purchasing power of the currency declines, the value of the goods and assets in the economy however doesn’t change. (More monetary units will just be needed to secure the same amount of goods.) This sort of inflation will burn from the bottom up: As the cost of goods increases, so will the price of assets, and incomes will be degraded unless they keep up with inflation. This type of inflation will affect people equally in relative terms, even as people with higher incomes will have more of a buffer and feel the absolute effects of the inflation less than those with lower incomes.

In an economy seeing asset-price inflation, people with different asset-price exposures will however see their purchasing power develop differently: As the price of assets (but not consumable goods) is increasing, the people with the highest incomes will benefit as the asset-price inflation will cause their relative incomes to increase (as more of their wealth is invested in assets and therefore tied to their value). Therefore, as asset-prices go up, so will the wealth of people whose income is tied to the assets themselves. People without asset-dependent incomes will not see their income increase, and, as a result, even as their absolute income remains the same, their relative income will go down.

This is interesting, because this is pretty similar to what is currently happening in the US economy, where people with low incomes are seeing their cost of living going up even as their incomes have stayed the same. Conversely, people with higher incomes (of which more is invested in assets and thus linked to asset values) have seen their income (and wealth) increase. As a result, their cost of living has remained more stable over time. Another way to see this would be to think of people with high incomes and wealth invested in assets as having a lifebuoy tied around their waist. As asset-prices increase because of asset-price inflation, the lifebuoy (the asset-dependent investment) will see them float on top of the asset-price inflation. (From their perspective, not much has changed.) For people without investments in assets (and thus no asset-dependent lifebuoys), the asset-price inflation will leave them in the same place as they started—but relatively worse off: They have no lifebuoy to keep them afloat. Altogether, this means that asset-price inflation will (in relative terms) make the rich richer and the poor poorer, which fits very well with what we’ve been seeing in the US economy.

Graph showing cumulative income growth for US earners. The steep increase of the top-income group is stark in comparison to the lower-income groups. In the curve for the top-income earners, do note the peaks around 2000 and the mid-2000s, corresponding to similar rallies in the stock market and house-prices. High-income earners are more likely to have incomes that are tied to the performance of assets like stocks and real estate, making their incomes more dependent on asset-price inflation (and more immune to its economic effects). I took the graph from this blog post from the Economic Policy Institute [link].
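
The lifebuoy effect is easy to put in numbers. A toy model (all figures assumed for illustration), comparing a wage earner with an earner whose income is fully tied to asset prices, over a decade of 5 %-a-year asset-price inflation:

```python
# Toy model of asset-price inflation and relative incomes (assumed figures).
asset_inflation = 0.05   # 5 % a year in asset prices; consumer prices flat
years = 10

wage_income = 50_000     # no asset exposure: flat in dollar terms
asset_income = 50_000    # fully tied to asset prices

asset_income_later = asset_income * (1 + asset_inflation) ** years

# The wage earner's income relative to the asset-linked earner's:
relative_income = wage_income / asset_income_later
print(round(relative_income, 2))  # → 0.61: same absolute income, lower relative income
```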

Why, then, has the price of assets been increasing? Here, I actually think that the people who criticise the Fed have a point: By lowering interest rates artificially (below the level that they would be at in a free-market system), the Fed has incentivised money to move from savings (in, for example, bank accounts and bonds) into the stock market (which inflates the price of market assets). In addition, economic stimulus in the form of bond purchases and QE has seen the amount of money in the asset-connected economy go up. Growing pension assets might have a role to play here, too (partially by exacerbating the impact of the previous examples). If this is the case, the rich getting richer might be an unintended effect of past decades’ stimulus to encourage continued economic growth. Ironically, this stimulus might, therefore, have had exactly the opposite effect.

So, what does all of this have to do with Big Macs? Not much. The Big Mac is just another way of expressing relationships between different goods and assets in the economy. You can reach many of these same conclusions by relying simply on the CPI, but I find the Big Mac to be a more intuitive tool. It also seems to capture more than the CPI does (even if it probably misses other things instead). As such, it makes a useful complement to CPI-based inflation metrics. However, expressed in this way, there are two obvious conclusions to be made: First, you want your income to be tied to assets (so your income tracks asset-price inflation) or to something that rises alongside them (like the Big Mac), so you don’t lose purchasing power over time. (This loss of purchasing power might however be the price of the liquidity afforded by using a currency that is a derivative of an asset rather than the asset itself.) Second, the way to reduce income inequality in the US will be to equalise the asset-base that people’s incomes are pegged to, so that asset-price inflation won’t cause the high-income earners to float away from the low-income earners. How to do so constructively (while avoiding ironic side-effects) is the really hard but interesting question, and it’s one I don’t have a good answer to right now. (For example, having everyone own their own home turned out to not be such a great idea, as the housing bubble and the Great Financial Crisis showed.)

Tech stocks as a quasi-currency

Following the initial COVID-19-induced market unrest, tech-heavy financial markets (e.g. the US) have re-traced their initial losses and today, the S&P500 is less than 8 % from its peak. While this could be interpreted as tentative signs of a recovery in financial markets, alarm has been sounded over the growing deviation between this perceived financial recovery and the lack of any meaningful progress in the economy per se. Comments along the lines that ‘the stock market is growing increasingly divorced from economic reality’ are common.

S&P500 during the year to-date, via Google.

On the face of it, these concerns make sense. The US is currently in an economic recession, and the government’s poor handling of the virus means that the country’s economic outlook looks arguably worse today than it did during the initial market turmoil. However, if we look beyond these face-level impressions, we begin to see that the story is, in fact, more complicated. Indeed, if we zoom in on the S&P500, we find that much of the post-COVID-19 recovery has been driven by gains in tech stocks. In other words, it’s only a handful of stocks that have driven the recent market recovery. If we look at a more tech-heavy index like the NASDAQ100 (up over 6 % from the pre-virus peak), we can see this more clearly still.

NASDAQ100 during the year to-date, via Google.

This makes for a strange financial recovery. So, what’s going on?

Many explanations have been offered. One of the most common goes along the lines that investors have somehow lost their minds and that we’re seeing the start of another Fed-fuelled tech-stock bubble. In the long term, this might (or might not) be true. In the short term, this is however puzzling, since stocks should (in an ideal universe) be valued on the basis of the value of their discounted cash flows into perpetuity. At current valuations, this means that many of the highly-valued tech stocks would need to see some rather punchy growth numbers going forward to justify their prices. Now, of course, it is possible that investors have been moving their investment horizons forward; valuing companies on 2030+ cash flows and extending these longer into perpetuity to reach these higher terminal values. However, I think this interpretation is (short-term) incorrect.
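
The valuation logic referenced here is the standard growing-perpetuity (Gordon growth) formula; a minimal sketch with assumed numbers to show how punchy the implied growth has to be:

```python
def dcf_value(cash_flow, discount_rate, growth_rate):
    """Present value of a cash flow growing forever; requires discount > growth."""
    return cash_flow / (discount_rate - growth_rate)

# Assumed numbers: at an 8 % discount rate, a stock paying $5/share must grow
# ~6.5 % a year into perpetuity to justify a price above $300.
print(round(dcf_value(5.0, 0.08, 0.065), 2))  # → 333.33
```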

First, I must confess that I, over time, have grown increasingly uncomfortable with the idea that valuations ‘mean’ anything. I think a strong focus on valuation is very firmly rooted in the value-investing tradition personified by Benjamin Graham and Warren Buffett; a strategy that worked well in the inefficient markets of the mid-1900s, when it was—in many cases—sufficient to read a company’s filings and to crunch some simple numbers to see whether a company’s stock was undervalued relative to the intrinsic value of the business itself. Over time, this style of investing (real-time valuation) has transmuted into growth investing, where more and more of the business value is expected to be generated in the future, with investment made more on the basis of expected future value than the business’ value today.

Over the time that this transmutation has taken place, we have however also seen two additional developments that have changed the nature of markets:

First, the early successes of successful investors (Warren Buffett included) have given rise to the idea that financial markets offer the economic equivalent of a ‘free lunch’: Put some money into the market and see it grow over time (c.f. passive investing). This attitude has seen more and more investment funnelled into markets (at the expense of other investment venues with longer-term investment horizons, like manufacturing innovation and scientific excellence) to the point where, today, a large proportion of the social welfare system (e.g. pensions, insurance, savings) has been invested in ‘going-up’ assets like stocks or real estate. With more and more money competing for (increasingly) much-needed gains, the competition for gains has intensified. (This leaves fewer and fewer stocks unturned, washing away the sure gains of value investing and pushing the balance in favour of growth investing and, increasingly, riskier assets like private equity.)

Second, at the same time, and because of their increasingly important role as a source of income, financial markets have (ironically) become less free: The more of an economy’s social welfare is invested in markets, the more dangerous any volatility in these markets can become. (We saw the impact of this all too clearly during the Great Financial Crisis and the resultant fallout.) As a result, there is an increasing need for economies to manage financial markets; to engineer away any volatility to safeguard continued growth. (We have seen this with the increasingly desperate support packages that have been made available to prop up the markets to avoid any virus-associated fallout.) On the one hand, this is the inevitable price that we must pay for the free-lunch mentality: If the social welfare of a country is invested in—and therefore dependent on—continued future gains in financial markets, it becomes an economic imperative to avoid any market corrections (which would be costly). On the other hand, this is (regrettably, but predictably) introducing even more volatility into the markets—for the very reason that they are no longer ‘free’.

Free-market systems are naturally self-correcting because they adhere to the same rules that govern other evolutionary systems (like the competition among genes and their extended phenotypes in a natural environment), where different strategies compete for the spoils as part of an ever-escalating arms race. This type of evolutionary competition favours the exploitation of untapped resources, and as one niche becomes over-populated, another one will bloom in its stead. This dynamic is more stable because it self-corrects, as any emergent instabilities are exploited and deflated as part of the overall process. This ‘blind’ process will not stop even after interference by a ‘seeing’ actor. Instead, any management of the system will introduce new rules to the process, shifting the dynamic in the direction of new optima. These optima—if the process is managed well—can be more preferable than what came before. Often, they are however counter-productive: Because the system is so complicated, any interference is likely to push it in the direction of a less-favourable optimum. This is true of both organisms and stock markets.

Altogether, this is a very long-winded way of saying that valuations are relative: What is considered over- and under-valued will depend on the system and the rules that govern it, and what optima arise as a result of these factors. Without a good benchmark by which to estimate ‘value’, any distinction between under- and over-valued becomes meaningless. Instead, we are reduced to relative terms like ‘more’ or ‘less’ expensive, but unless we compare like with like, this is often a futile exercise. In terms of company valuations, this means that we cannot compare different companies (different business models will necessitate different valuations) between markets (different economic systems will produce different optima) or over time (for the same reason). The valuation of a company in one market today will be influenced by very different factors to the valuation of a company in another market at another time.

You might recognise these problems as being similar to inflationary problems, and you’d be right: The study of inflation is much-plagued by the fact that we cannot compare the value of a currency across economies or between time-periods. And without such a pan-market-pan-time-gauge, we cannot compare like with like, and the entire exercise fades into the realm of model-driven interpretation (as opposed to rational analysis). This applies to both currencies and companies.

What makes currencies relevant to the discussion at hand is that a currency acts as a store of value. In the olden days, the coins themselves had value (being struck from metals like gold and silver). Over time, currencies evolved into derivatives of the store of value; being minted from lesser metals, backed by centralised collections of silver and gold. Today, most currencies are backed by the issuer itself; their value being derived from the issuer’s promise that they will honour the value of the issue. As such, the value of a currency is implicitly the value of the issuer’s word and their ability to make good on their promise. This makes currencies hard to value in absolute terms (how do you value the strength of somebody’s word?). Instead, we’re stuck valuing currencies in relative terms, by comparing to other currencies.

Currencies are however not the only store of value in an economy (even if they are often the most liquid store of value). Rather, anything of value can also be a store of it. As a result, economies will see the rise of something that’s called a ‘quasi-currency’; an alternative currency that shows some of the characteristics of a currency (e.g. acting as a store of value, even if its liquidity might be lower). These quasi-currencies can take the shape of everything from real estate to derivatives to precious metals or oil and stocks.

The dynamic starts to become interesting once we accept the existence of quasi-currencies: Depending on their popularity, these different quasi-currencies will see different levels of ‘inflation’ (an increase in the money-equivalent-value of the quasi-currency, as a result of a mismatch between supply and demand). These varying levels of inflation accordingly make the quasi-currencies variably good stores of value, since, ideally, you want to store your wealth in a form that grows it (rather than destroys it). This means that the best store of value will fluctuate according to prevailing market conditions and their relative popularity (whatever is commonly agreed to be a good store of value will become a good store of value as part of a self-fulfilling prophecy). In this interpretation, a rise in the relative value of an asset is the result of an inflationary process in a specific form of quasi-currency.

When consumer-price inflation is high, the currency of the economy is devalued: The currency has stopped being a good store of value, and ‘value’ is migrating into other quasi-currencies. (In hyper-inflationary economies, a quasi-currency is anything that is not the currency per se, like the broken clocks used for barter during the worst days of hyper-inflation in the dying days of the Weimar Republic). In the inflationary economy of the 1980s, the store of value—the quasi-currency—seemed to be gold (the asset that saw the most impressive dollar-gains). In the late 1980s, real estate might have become a store of value (leading to the property boom and bust), and after this fixed income saw a brief period in the limelight (because of attractive dollar-yields), culminating with the fixed-income crash of 1994. (There are circumstantial suggestions that money-flows from fixed income to stocks in 1994 fuelled the first stock-market acceleration that culminated with the dot-com bubble and subsequent crash.)

After the Great Financial Crisis, US stocks seem to have become a de facto quasi-currency, with market gains accelerating in 2013. Today, more and more of this quasi-currency status seems to have migrated to the big tech stocks, and I would argue that the market uncertainties surrounding the coronavirus have exacerbated this process because the tech stocks represent some of the world’s most successful companies, which makes them an objectively good store of value. According to this interpretation, the valuations of this select group of high-performing, super-successful companies are not made with the intent of earning a meaningful return; they are made with the intent of keeping the money safe.

As it happens, money is gregarious, and where money goes, more money will go (as part of the self-fulfilling prophecy I mentioned above). This makes the flow of money (the liquid medium by which one quasi-currency and store of value is transformed into another) into a quasi-currency a self-perpetuating process, and beyond a certain threshold of money-flows, the flows will accelerate. Accelerating flows make the quasi-currency seem increasingly attractive as a store of value (i.e. a good investment, in real-money terms), and this will attract more money, in turn. Ultimately, we are left with a situation where a lot of money is stored in a small class of assets (whose value, in real-money terms, has increased). This could be a bubble if the crowding of money becomes acute enough (as people pile in, not wanting to be ‘left out’, up to the point where the slightest uncertainty about the quality of the store is enough to break the spell), or just a temporary blip if the crowding was spread across a larger set of assets/quasi-currencies.

In short, I think the current spurt in tech valuations is due to this process: Because of the uncertainty about what’s going on in the economy, there is a lot of uncertainty about what will happen to the markets themselves. As a result, money has flowed from more-questionable assets (whatever their objective quality) to (what is perceived as) higher-quality assets. Today, this just happens to be high-performing US tech stocks. Therefore, the ballooning valuations of this basket are not made with the intent of generating a return (that’s just a happy side-effect); instead, they are made with the intent of safe-keeping. Money might want to be free, but it also wants to be safe.

Shareholder activism and centralised decision-making

Shareholder activism puzzles me, since it goes against free-market principles by allowing people who are far from a business (and therefore less well-informed) to influence the running of it. In the long term, I think this will do little to improve the corporate governance and shareholder returns of individual companies. Instead, I think that shareholder activism is a symptom of the fact that most investment managers are not good at what they do. Some introspection on behalf of the industry might be in order. 

Now, I am only a free-markets advocate to the extent that my default is to revert to ‘leaving well enough alone’ and to agitate for a more long-term and ecosystem-centric view of markets and the regulations that govern them. Because of the non-linear dynamics at play in your typical market, any changes or new regulations are liable to have unpredictable and long-term effects. As a result, intervening in dynamic systems will require significant thought to predict and correct for any unforeseen consequences that might result. Even then, the changes that ultimately do result are often far from the intended effects. My opposition to intervention is therefore not absolute, but I reserve my approval for policies that have considered the long-term and long-range impacts and take these into account.

In brief, my main issue with shareholder activism is that it takes power away from the people who are the closest to a business, who—one supposes—have the most relevant and accurate information to hand. This increases the distance between decisions and decision-makers, which dilutes the efficacy of the decision-making and introduces the potential for unintended, off-target effects. Indeed, a lot of this boils down to the perennial argument between centralised and decentralised decision-making. 

My preference—which informs the argument below—is to be in favour of decentralised decision-making, since this maximises the autonomy of individual agents in the system. Such dynamics also apply across systems, and across industries we see an increasing push for technologies operating ‘at the edge’, whether this be in the form of an ‘Internet of Things’ or local processing on decentralised devices to reduce lag time (the comparison between Android and iOS voice-to-text springs to mind, see below). The rationale for such initiatives is that the most robust systems are also often the most dynamic and flexible, and this is enabled by decentralisation. (Octopodes are an example of this in the natural world, where each arm is capable of making decisions independently of the brain. But I digress.)

https://twitter.com/jamescham/status/1265512829806927873

*          *          *

Shareholder activism has a long history, but regulatory changes have seen increasing interest in shareholder activism over the last 20 years. Previously, shareholder activism often took the form of divestment. As a result, well-run companies would see their popularity among investors grow, resulting in win-win scenarios as share prices went up because of higher investor demand. Conversely, less-well-run companies would see reduced investor demand and lower market-capitalisations. In other words, these dynamics (where investors vote with their feet to support well-run companies over their less-well-run brethren) would exert a selective pressure on companies; selecting for well-run companies and penalising the less-well-run. The quality of the average company would therefore be expected to increase over time, as well-run companies persist, and poorly-run companies fade away into irrelevance due to an inability to raise capital.

Shareholder activism however turns this dynamic on its head. Aggressive shareholder activism often sees the activist take a meaningful holding in a company. The targeted companies have often suffered poor operating performance, which the activist seeks to improve over the short term to pump the share price enough to justify exiting the position. The long-term health of the company (poorly run or not) often suffers as a result. Less aggressive shareholder activism is however more common, and it is often practised by mutual fund managers in the form of ‘engagement’. Under this form of activism, fund managers engage with company management teams to advise them on matters like capital allocation and corporate governance and the like. Not only does this allow the fund managers to feel as if they’re ‘doing something’, but it also allows them to justify their fees (and existence) in the face of the rise of passive investing. Furthermore, the assumption (on behalf of the fund managers) is that they know better than their corporate kin.

By gently encouraging (or forcing) poorly-run companies to improve, activism however only artificially increases the quality of a company. External change will always be incremental and therefore less effective than the change that comes from within. Shareholder activism would therefore be expected to reduce the overall quality of companies in the long run, by allowing non-competitive companies to persist and retain access to capital markets. In this way, activism is almost like a type of corporate social welfare, ultimately leaving poorly-run companies increasingly dependent on their activist investor base in order to thrive. This is not sustainable in the long run as these dynamics could easily erode managerial accountability. (Hypothetically, management teams could blame their poor performance on poor advice given to them by activist investors.)

In addition, activism is also an easy ‘out’ for the fund managers themselves: By forcing companies to take actions designed to increase shareholder returns, investors are effectively pushing their responsibilities onto the corporate management teams. In an activist ecosystem, it is no longer the responsibility of the investors to pick good companies to invest in. Why bother, if you can ‘engage’ with the duds to smooth over your own investment mistakes (and look good in the eyes of your own investors in the process)? From this perspective, rather than allowing investors to ‘earn their fees’ by adding shareholder activism to their toolbox, activism is in fact another argument in favour of passive investing as activist shareholders cannot leave well enough alone. 

Arguments in favour of fund managers enabling grassroots shareholder activism—by the individual investors of mutual funds—are also misplaced: Can we really expect people who are not experts on the companies they’re (directly or indirectly) invested in to know what is the best approach for a company to take when it comes to issues such as corporate governance in order to maximise the long-term benefits for all parties involved?

(I don’t want to unnecessarily invoke politics, but it makes for a good parallel, and one need only look to the failures of democracy in societies where the deme is not well-versed in the complications of public policy to see how easily it can go awry even when implemented with the best of intentions. The best-case scenario often leaves the regulator playing whack-a-mole with new, unintended problems arising, when the implemented policies do not even address the underlying problems at hand. The UK’s decision to leave the European Union and the rise of Donald Trump in the US are examples of short-term, activist solutions with long-range consequences that fail to address the complicated, long-term, underlying problems.)

One of the immediate impacts of shareholder ‘engagement’ on behalf of fund managers is an increasing emphasis on ‘diversity’ on corporate boards. Unfortunately, because of the vague definitions of ‘diversity’, such policies often revert back to specifying superficial proxies of diversity like sex, gender, and ethnicity, which does little to increase more meaningful measures of diversity (the exact nature of which could very well vary from company to company and industry to industry). In addition, shareholder policies around corporate governance often include provisions to limit the tenure of board members, again with the intent of maximising superficial measures of diversity. Objectively talented CEOs like Mark Leonard at Constellation Software have lamented such policies in the past.

For example, in his 2017 letter to shareholders, Leonard commented on how hard it can be to find talented directors to staff a company’s board: They need to be people with relevant experience, who are also good at what they do, and who are willing to commit both time and effort to their new responsibility as directors over a long period of time. Leonard then proceeded to comment on how silly it is of shareholders to make the already-hard task of finding competent and qualified directors even harder for companies: Talented directors are hard enough to find already, without the need for further complications in the form of artificial demands in the name of ‘diversity’. This is just one example of how well-meaning shareholder activist-engagement can have a negative impact on the long-term performance of otherwise well-run companies.

Whenever you introduce new selective pressures on a complex, dynamic, evolutionary system, you need to consider the long-term run-on effects of the selection process, and only implement those policies that have the maximum potential of optimising for the actual goal that you’re trying to optimise for. Financial markets are no different, and shareholder activism is likely to have long-term negative impacts that are yet to be felt. The most robust systems are often the ones that see maximum autonomy and minimum interference, and where interventions have been carefully chosen with the intent of optimising for a specific measure in the long run. 

My concern regarding shareholder activism is that the practice will not optimise for what it claims to optimise for (improved corporate governance and shareholder returns). Instead, shareholder activism is a short-term solution to a different problem: The fact that financial markets are growing increasingly competitive and that many investors are struggling to stay relevant. Perhaps increasing investor activism directed at fund managers is in order, but that’s still not addressing the underlying problems at hand: That we’ve organised the long-term welfare of many people around the idea that financial markets are wells of prosperity rather than the zero-sum games that they actually are. As a result, we see the impact of problems like many investment managers not being good at what they do (everyone can never be above average). Instead, we should make it easier for under-performing managers to fade into irrelevance (just like their poorly performing corporate counterparts) so something better can grow in their place.

Finding patterns in markets

As time has gone on, I have grown increasingly fascinated with the science of finance, even if I have yet to digest much of the academic literature on the topic. Instead, I started by dipping my toes into more easily digestible accounts of people who have been successful applying scientific principles to finance.

The reason why I have chosen to go down this route is that I have long been of the view that studying the opinions, actions, and scientific directions of talented, successful, and intelligent people will maximise one’s chances of learning something interesting (since you’re essentially piggy-backing on the insights and experience already offered and learned by such individuals). To put it another way, picking up where successful people have left off will allow you to stand on the shoulders of giants more easily and successfully than any other approach.

To this effect, I have been growing my library with books on and about people with interesting and/or successful ideas on and about finance. Most recently, I read The Man Who Solved The Market, Gregory Zuckerman’s book on Jim Simons and his colleagues at Renaissance Technologies. 

Zuckerman’s book isn’t the best I’ve read: While it offered up interesting tidbits on investing and quantitative finance, it didn’t go into these in any depth. As a result, the book does not offer much insight; it’s more a journalistic account of Renaissance Technologies’ origins and rise to fame, and the personal stories of the people who helped make this happen. In part, I suspect this is because of the secretive nature of the hedge fund that means that not much is known about its insights and methods. In addition, I have also come across suggestions elsewhere that run along the lines of our public understanding of the science of finance suffers from the fact that any meaningful insights will be more profitable if not shared widely. I think there’s some truth to that. 

Yet, Zuckerman’s book does contain some interesting comments and perspectives (even if these would fit on less than a two-page spread) that pretty much boil down to the observation that financial markets are complicated systems and that their movements and undulations are the product of many, many disparate actions and forces. Of these, the company ‘fundamentals’ (e.g. annual reports, conference call transcripts, company accounts) that fundamental investors focus on represent just one small part of a much bigger whole. Instead, a holistic view of companies (and other actors) as being part of an interconnected web of other commercial entities across economies, time, societies, and geographies offers a much richer view of the complicated world in all its glory.

This is also a very non-linear perspective. 

Importantly, a non-linear perspective of markets allows an investor to rise above the myopia of many other market players. If executed well—through the identification and application of meaningful, pan-economic signals—this approach will afford a market player a differentiated and superior perspective that will allow them to beat the market—even for long periods of time.

To wit, financial markets are zero-sum systems, much like a game of poker. For the most part, no new material products are being made and no value is being created (this privilege belongs to people like farmers, engineers, and other growers/creators of material things). As a result, the same amount of money will leave a market as went into it. Effectively, this means that financial markets form a game of wits where the player(s) with the most successful strategies will win. Their winnings will also be bankrolled by the losses of other, less fortunate, players. 

A lot of success under these types of conditions comes down to skill (and just a little bit of luck). Indeed, biological metaphors can be apt here, with the zero-sum game of the financial market following a similar playbook to the creative destruction that governs the fate of individual organisms in an ecosystem, where the most fit individuals will enjoy the greatest probability of parenting the next generation. So, too, is the fate of players in financial markets, where successful strategies will see the highest probability of their gains compounding over the long term. Indeed, similar dynamics apply to many other complex and iterative systems. (Company cultures spring to mind.)

As a result, zero-sum games are inherently unfair, and unless there are no winners whatsoever, the dynamic will only produce winners at the expense of losers. In a zero-sum game, it will be impossible for everyone to perform above average. Zero-sum games in nonlinear systems are however even more unfair than this, because stellar performances will be quite rare. In such systems, there will only ever be a few, differentiated players raking in most of the gains at the expense of much-larger numbers of less-successful ones.
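These dynamics can be made concrete with a toy simulation (all players, edges, and stakes below are hypothetical): three players repeatedly bet against one another in a zero-sum game, the pot is conserved, and a small edge in win probability is enough for gains to concentrate with the more skilled player at the expense of the rest.

```python
import random

random.seed(0)

# Hypothetical players: each starts with 100, differing only in win probability.
wealth = {"skilled": 100.0, "average": 100.0, "unlucky": 100.0}
edge = {"skilled": 0.55, "average": 0.50, "unlucky": 0.45}

for _ in range(2000):
    a, b = random.sample(list(wealth), 2)        # two players face off
    stake = 0.01 * min(wealth[a], wealth[b])     # bet 1% of the smaller stack
    p_a = edge[a] / (edge[a] + edge[b])          # relative skill decides the odds
    winner, loser = (a, b) if random.random() < p_a else (b, a)
    wealth[winner] += stake                      # zero-sum: the stake just moves
    wealth[loser] -= stake

print({k: round(v, 1) for k, v in wealth.items()})
print("total:", round(sum(wealth.values()), 1))  # always 300: nothing is created
```

Run it with different seeds and the total never budges; only the distribution does, with the winnings of the skilled bankrolled by the losses of the less fortunate.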

Science will thrive in any complex system where skill (defined as sustained above-average performance) is the main determinant of success. This is because science is a system for sussing out cause and effect and for pulling out one of the underlying threads from a complicated tapestry to understand its contribution to the whole.

As such, science (in the form of maths) is the strategy adopted by successful quantitative finance shops like Renaissance Technologies, and it is what has allowed these funds to see the success that they have. In the words of a yet-to-be-successful Simons, “If we have enough data, I know that we can make predictions.”

Science is a method for understanding the underlying dynamics of a system (however complex), and such understanding constitutes skill when the game is played. (A relatively good understanding of poker will see you perform better—be more skilled—in a game played with a sample of people. The better your understanding of the game—and how it is played by other people—the higher your skill and the better your performance regardless of the skill of the other players.) Financial markets are not much different: The better your relative understanding of the inner workings of the markets—and what drives gains and losses—the more skilled you are and the better your performance will be.

Apart from realisations like this (that a better understanding of markets will translate into skill and outperformance, which is obvious in retrospect), another point made by Renaissance staffers in Zuckerman’s book was that the markets are human.

Benjamin Graham, one of the fathers of value-investing, said that the market is a “voting machine in the short run, but a weighing machine in the long run”. Whatever your feelings about value-investing in general, this point of Graham’s is still true. Markets are made up of people, and as people, they are host to a bewildering array of more or less false beliefs. What Graham alluded to in the quote above is that these follies will stack up over the short run. (When these follies take the form of exuberance, we see bubbles forming, making the luckiest fools rich.) In the long run, however, reality will win out and it is only the most skilful investors who come out on top as their long-held beliefs are shown to be true.

As a result, markets can never be fully efficient: As the air goes out of one foolish belief, there are a hundred more to take its place. Instead, markets are grossly inefficient; being the result of the more-or-less coordinated actions of thousands of individual actors, each one imperfect and subject to their own follies, biases, and beliefs. Yet, even as every folly is unique in its own way, human nature is not. There are only so many ways in which a human being will respond to a given stimulus. As a result, a market will reflect the average collective belief of each player therein; each price-level is the total sum of buys and sells (and the underlying beliefs that motivated those actions) at any given time. 

Quantitative investing takes the view that while it is impossible to understand the market (the system is just too complex), you could potentially understand the psychology of the players therein (even if you’ll never understand their individual motivations or beliefs). The assumption then becomes that human behaviour with regard to markets is predictable, even as the markets themselves are not. Therefore, if you can understand how people have reacted to an event in the past, there is a possibility that you can benefit from people reacting similarly in the future. There’s something very beautiful about this approach, and by using math and removing the emotions, follies, and beliefs from the equation, you can rise above all such human concerns to instead go fishing for the reliable signals that they create in aggregate.

Price data is full of such signals, being the collective sum of thousands of strands of information. If you learn how to decode the sum of all the strands, you therefore won’t need to understand the underlying strands themselves (in the form of annual reports or investor presentations or news items), because all of that information (or, more accurately, the collective interpretation thereof) has been baked into the price. If you can identify reliable-enough signals in the data, you therefore won’t need to forecast cashflows or interest rates, because all that data is already there. This is not all that much different from trying to understand the underlying order in the natural world by dissecting fluid dynamics, cell biology, or star formation. 

As a result, quantitative investing makes use of large datasets, allowing the investor (and their computer models) to slice markets up into segments and to figure out how any given set of price-movements have correlated with market gains and losses in the past. The assumption is that if these correlations exist (and at an acceptable level of statistical significance), money can be made by betting on the pattern holding true also in the future. Of course, the details of quantitative investing are much more complicated than that, and Zuckerman writes that Renaissance Technologies uses massive data sets spanning everything from finance and economics to society and culture. Yet, if you were to distil quantitative investing down to one key insight, that would be it: That markets are human and human behaviour is quite predictable.
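As a minimal sketch of that key insight (and emphatically not Renaissance’s actual methods, which remain secret), the simplest possible version of such a test might look like the following: generate a return series with a weak, built-in momentum effect, then check whether yesterday’s return correlates with today’s.

```python
import random

random.seed(42)

# Synthetic daily returns with a weak momentum effect baked in (AR(1), phi = 0.1).
returns = [random.gauss(0, 0.01)]
for _ in range(4999):
    returns.append(0.1 * returns[-1] + random.gauss(0, 0.01))

# Signal: yesterday's return. Outcome: today's return.
signal, outcome = returns[:-1], returns[1:]

# Pearson correlation between signal and outcome.
n = len(signal)
mean_s, mean_o = sum(signal) / n, sum(outcome) / n
cov = sum((s - mean_s) * (o - mean_o) for s, o in zip(signal, outcome)) / n
var_s = sum((s - mean_s) ** 2 for s in signal) / n
var_o = sum((o - mean_o) ** 2 for o in outcome) / n
corr = cov / (var_s * var_o) ** 0.5
print(f"Lag-1 autocorrelation of returns: {corr:.3f}")
```

On real price data any such effect would be far weaker and drowned in noise; finding it at an acceptable level of statistical significance is the hard part.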

Now, we can argue how ‘scientific’ this insight is, but I think it identifies a fundamental force to human-made complex systems: The human. The resultant systems are only complex because they emerge from the collective actions of thousands—if not millions—of individuals, each governed by reasons best known to themselves. Naturally, much of the same applies to other complex systems, where individual components can be understood with a fair degree of accuracy, while their collective actions and interactions cannot.

For those who doubt that human behaviour is predictable and that it produces tell-tale signs in market pricing, I offer up the following exhibit: The ‘anatomy of a crash’ in the S&P500. 


The anatomy of a crash in the S&P500. The blue line shows the price-performance of the S&P500 during the Great Financial Crisis (on the underlying chart, with values shown on the axes). The ‘COVID-19 Crisis’ is overlaid in cyan (without axes, because the time-scale and scale of the price-performance are different and I’m not enough of a data-wiz to get it to match up when I’m just trying to make a point).

The anatomy of a crash is market-dependent, with the resultant price-movements depending on factors unique to each market. Yet, these individual differences aside, the resultant, collective reaction is quite predictable—as you can see in the chart, above. 

Importantly, in the chart, you can see how the COVID-19 Crisis follows the anatomy of a crash in the S&P500 quite well, the only difference being the exact scale of the price-movements and the duration of these. Indeed, while the Great Financial Crisis played out over the better part of a year (and then some, as shown on the graph), the COVID-19 Crisis played out over a much shorter time-scale (the cyan line above shows the S&P500’s behaviour from the beginning of January to the second-last week of May). Notably, this could also help explain why the market seems to be up despite the economic backdrop: According to this chart, the COVID-19 crisis was an S&P500 crash in fast-forward; playing out over a matter of weeks rather than months (or even years). 
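The normalisation behind such an overlay is straightforward: rescale each series to a percentage of its pre-crash peak, and map each episode onto a common 0–1 time axis so that crashes of different duration can be compared. A minimal sketch, using invented price series rather than actual S&P500 data:

```python
import numpy as np

# Two hypothetical index series covering a crash: a slow one (months)
# and a fast one (weeks). Real data would come from S&P500 closes.
slow = np.array([100, 98, 90, 75, 60, 55, 62, 70])   # GFC-style pace
fast = np.array([100, 92, 70, 58, 68])               # COVID-style pace

def normalise(prices):
    """Rescale to % of the pre-crash peak and to a 0-1 'crash time' axis,
    so crashes of different depth and duration can be overlaid."""
    p = np.asarray(prices, dtype=float)
    level = 100 * p / p.max()             # % of peak
    time = np.linspace(0, 1, len(p))      # fraction of the episode
    return time, level

t_slow, l_slow = normalise(slow)
t_fast, l_fast = normalise(fast)

# Interpolate the fast crash onto the slow crash's time axis so the two
# shapes can be compared point-by-point (difference in percentage points).
l_fast_on_slow = np.interp(t_slow, t_fast, l_fast)
print(np.round(l_fast_on_slow - l_slow, 1))
```

This is essentially what the overlay in the chart does by eye: it discards the absolute scales and keeps only the shape of the drawdown and recovery.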

Of course (and I shouldn’t have to add this), while the COVID-19 Crisis seems to follow the path of the Great Financial Crisis quite well, this does not mean that the pattern will hold in the future. Complex, nonlinear systems are beautiful like that: they always have the power to surprise you, even when you think you know what’s going on. But at least that keeps us busy. There will always be more to understand. 

Debt, destabilisation, and market crises

I started university in the aftermath of the Great Financial Crisis, which I was wholly oblivious to, because this was long before economics and finance appeared on my interest-radar. Given the uncertainty around what’s going on in today’s financial markets (and especially regarding what’s likely to happen next), I thought this would be a good time to make up for previous failings and learn more about past market crises.

One of the things that has struck me about market- and business-cycles is how similar their anatomy is: the embers of growth spring from the ashes of what came before, and irrational exuberance dominates the narrative at the peak (with references to how ‘it’s different this time’ becoming more and more common, as if that were a protective spell potent enough to ward off what’s coming).

Another similarity between then and now is the structural mis-estimation of risk that seems endemic in the lead-up to debt crises. As I was reading about the Great Financial Crisis, I was struck by the similarities between the narrative around the mis-estimation of risk in subprime mortgage lending (then) and the conversations around leveraged corporate loans (today). This intrigued me because mis-estimating the level of risk could lead to abnormally high levels of debt defaults, with a negative impact on high-yielding debt instruments in times of crisis.

The bulk of this blog post aims to provide a bit of background, but, to summarise: During the Great Financial Crisis, debt instruments like collateralised debt obligations (“CDOs”) were implicated in the financial instability that followed the weakening of the real-estate market. Altogether, the structural weaknesses that condensed out of the system in the form of CDOs were (partially) the result of banks hoping to make money by selling loans and investors ‘hunting for yield’ in a low-interest-rate environment, further exacerbated by lax regulation that had caused lending standards to drop. Eventually, these structural weaknesses exposed the financial system to unaccounted-for vulnerabilities when the economy started slowing.

CDOs have since declined in popularity as part of the fallout from the Financial Crisis. However, many of the underlying drivers that caused the weaknesses to appear in the system have not been rectified, and the last 10 years have instead seen increases in the popularity of a similar instrument, the collateralised loan obligation (“CLO”).

As I was learning more about CLOs, I grew a little bit concerned about what impact they might have on the financial system if growing numbers of companies start defaulting on their debts because of the economic fallout from COVID-19. This post is an attempt to outline some of my concerns. Do note, however, that I am not an expert on CLOs or other similarly-structured financial instruments. As a result, important nuance might be lacking and there might be unintended errors of omission. I apologise for these in advance.

What is a CLO?

First, it might be helpful to understand a bit better what a CLO actually is: CLOs are a type of CDO (a type of securitised loan) that pools together multiple loans made out to businesses (rather than individuals), with the aim of the pooling being to increase the available supply of lenders. Like CDOs more generally, this ‘securitisation’ of loans helps lower the cost of debt for the borrower, but it also makes it easier for the lenders to sell the loans to investors (and thus pass along the risk). However, while CDOs were growing increasingly complex in the lead-up to the Great Financial Crisis, today’s CLOs are less complex, and they don’t typically include credit-default swaps or re-securitisations. They also include more diversified collateral. This means that while CDOs and CLOs are similar in theory, they can be quite different in practice. See the table, below, for a side-by-side comparison of CDOs and CLOs.

A side-by-side comparison of CDOs and CLOs from the Bank for International Settlements’ (“BIS”) September 2019 Quarterly Review [link].

A characteristic of CLOs is that they are composed of ‘leveraged loans’, that is, loans made out to indebted companies. (CDOs, on the other hand, are typically composed of mortgages.) Because of the inherent risk of leverage, loans of this kind are normally not considered ‘investment grade’. However, because of the pooled structure of a CLO (typically involving the pooling of 150 – 300 different loans), the constituent loans can collectively be considered more diversified than the individual loans themselves (more companies, exposed to different sectors). This is thought to lower the overall risk of the resultant security (the CLO), as long as the risks have been properly accounted for. In addition, while each individual loan comes with a set interest yield for the investor, the loans comprising the CLO are collectively divided into ‘tranches’. This ‘tranching’ allows investors to choose what level of risk they are willing to accept for a particular interest yield.

To help visualise this, the tranches in a CDO/CLO can be thought of as the floors of a building, comprising lower, middle, and higher levels/tranches. The interest payments flow from the higher tranches down to the lower tranches, which makes the higher tranches the most likely to receive their payments, and thus the least risky. Conversely, the lower tranches are the last to receive interest payments, which makes them the most risky.

The lower tranches in a CDO/CLO are the riskiest because if a borrower should be unable to repay the loan, their default will be ‘captured’ by the lower tranches to insulate the higher tranches, which would not be affected by the default. An investor in this lower part of the CDO/CLO structure is compensated for taking on this higher level of risk because of the higher interest yield that is offered by that tranche.
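The waterfall described above can be sketched in a few lines. All figures here (pool size, tranche fractions, recovery rate) are invented for illustration and not taken from any actual CLO:

```python
import numpy as np

# A hypothetical pool of 200 leveraged loans of $1m each, split into three
# tranches (fractions of the pool, senior first). Figures are illustrative.
n_loans, loan_size, recovery = 200, 1.0, 0.70
tranches = {"senior (AAA)": 0.65, "mezzanine": 0.25, "equity": 0.10}

def tranche_losses(default_rate):
    """Allocate pool losses bottom-up: the equity tranche absorbs losses
    first, insulating the higher tranches. Returns the fraction of each
    tranche wiped out."""
    pool = n_loans * loan_size
    loss = default_rate * pool * (1 - recovery)  # defaults net of recoveries
    out = {}
    for name, frac in reversed(list(tranches.items())):  # equity first
        size = frac * pool
        hit = min(loss, size)
        out[name] = hit / size
        loss -= hit
    return out

print(tranche_losses(0.05))  # a 'normal' assumed default rate
print(tranche_losses(0.40))  # an outlier, GFC-style default rate
```

Note how, under these assumptions, a moderate default rate only dents the equity tranche, while an outlier default rate wipes out the equity tranche and eats into the mezzanine: the senior tranche's apparent safety depends entirely on how realistic the assumed default and recovery rates are.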

Rating agencies like Standard and Poor’s and Moody’s rate the tranches in a CDO/CLO according to the perceived level of risk, taking into account the diversified nature of the loans and their aggregate sensitivity. The higher tranches in a CDO/CLO will be rated more highly than the lower tranches, with the ratings spanning the spectrum from investment-grade for the higher levels all the way down to high-yield (junk) debt for the lower tranches.

Over the past 10 years, investors (banks, insurance companies, and asset managers) have been incentivised to take on more risk as they hunt for yield in the current low-interest-rate environment, and some of this demand has been satisfied by CLOs. (Some CLOs can yield higher returns than either high-yield bonds or private equity.) As a result, CLOs have exploded in popularity since 2012, and in December 2019 the amount of outstanding $US-denominated leveraged loan debt was estimated at around $1.2 trillion (similar in size to the 2007 subprime mortgage market), of which CLOs are estimated to account for 50 %. Global estimates further peg the total amount of leveraged loans currently outstanding at closer to $2.2 trillion.

Much like with the CDOs before them, the growing demand for CLOs has resulted in declining debt-covenant standards, and the last couple of years have seen a higher proportion of ‘covenant-lite’ loan issues and more lending to heavily indebted companies. The infographic below, again from BIS, helps give an idea of how CLOs have grown in popularity and evolved since the Financial Crisis:

How CLOs could be potentially destabilising

One of the fundamental assumptions underlying the perceived risk of a CLO is that the economic environment will not deviate much from ‘normal’. Indeed, many financial models are (still) based on the assumption that events will follow a normal distribution, ignoring the growing body of work showing that the distribution of many natural (and economic and social) phenomena is better described using fatter-tailed, power-law distributions. This is partly because of the difficulties inherent in modelling such scenarios, given their non-linear nature.

In orthodox financial theory (which operates under the assumption of normality), unlikely events are extremely unlikely (often misunderstood as something like Taleb’s ‘black swans’). Conversely, under the assumption of a fat-tailed power-law-like distribution, unlikely events are much more common, and perhaps best accounted for as outliers. (To understand this, consider how the concept of outliers applies to the stock market, where extremely well-performing companies like Amazon or Constellation Software are not black swans but outliers that occur with a much-higher frequency than the normal distribution would suggest.)
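To put numbers on how different the two assumptions are, here is a small sketch comparing the tail probabilities of a normal distribution with a simple power-law tail. The exponent and the calibration point are arbitrary choices for illustration, not fitted to any market data:

```python
import math

def normal_tail(k):
    """One-sided probability of a move beyond k standard deviations
    under a normal distribution."""
    return 0.5 * math.erfc(k / math.sqrt(2))

# A power-law tail P(X > k) = c * k^(-alpha), calibrated so that both
# distributions agree at 2 sigma (alpha = 3 chosen for illustration).
alpha = 3.0
c = normal_tail(2) * 2**alpha

for k in (2, 4, 6, 10):
    print(f"{k:2d} sigma: normal {normal_tail(k):.2e}  power-law {c * k**-alpha:.2e}")
```

The two agree at 2 sigma by construction, but they diverge spectacularly further out: under the normal assumption a 10-sigma event is effectively impossible, while under the power law it remains merely rare. That gap is precisely the difference between ‘black swan’ and ‘outlier’.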

To return to loans: the flawed assumptions around the shape of the distribution (normal vs fat-tailed) and around the distribution and nature of the risk were the downfall of the CDO industry in the lead-up to the Great Financial Crisis. This is because the models used to assess the risk of the loans (mortgages) did not take the much-higher-than-assumed probability of mass defaults into account. Indeed, most models assumed that no more than 5 % of borrowers would default at any given time, a level the CDOs could have tolerated. As a result, the financial system was under-prepared for dealing with the fallout of an outlier event (where peak default rates reached 40 %). The complexity of the CDO structure further exacerbated the impact.

Reading about CLOs, I am intrigued to find that some aspects of the CDO story hold true also of CLOs today. While CDOs have become more heavily regulated since the Financial Crisis, most of the new regulations have focussed on specific lending standards rather than the underlying assumptions regarding the distribution of risk. As a result, CLO marketing materials are free to focus on the ‘diversified nature of the loans’. Indeed, as one promotional flyer (from AXA [link]; emphasis mine) put it:

Of course, investors could lose a substantial portion of their investment in the case where multiple loans of the underlying pool default at a rate significantly exceeding levels anticipated at CLO inception. In the worst-case scenario, i.e. the event of default of the CLO, the holders of the AAA tranche can decide to liquidate the loan pool to repay note holders in order of seniority. While possible, the scenario where a CLO structure has been unwound due to an event of default, with no cash flows left to allocate to the equity tranche holders, has never occurred in the more than 20-year history of CLO issuance, even in the worst years of 2008-2009.

The overall message of the CLO industry has been that companies across industries are extremely unlikely to default on their debt en masse, with justifications being that ‘nothing of this sort has ever happened before’. Indeed, CLOs emerged from the Financial Crisis relatively unscathed, and now, risk assessments are (still) being based on (incomplete) historical data and poorly-fitted models (including the assumption that default rates are unlikely to top 5 % and that recovery rates will be around 70 %, when they could be half that in a worst-case scenario).

These assumptions have been put to the test since SARS-CoV-2 hit, in the form of a ‘perfect storm’ of cross-sector and cross-industry economic impact, putting unprecedented pressure on leveraged companies.

As it happens, leveraged loans are collectively sensitive to similar economic conditions, and when these conditions start to deteriorate, leveraged loans will start to be downgraded (signalling an increased perceived risk of default) and some of the loans will start to actually default. Loan downgrades will also have a direct impact on the leveraged companies that benefitted from the loans, and, for example, the Financial Times reports that some 20 % of CLOs (representing $600 billion of leveraged loans) have already been downgraded, while Moody’s has placed 20 % of their rated CLOs (40 % of which had been rated ‘AAA’ previously!) on a watchlist for potential downgrades.

Growing defaults will also make the leveraged loans increasingly illiquid, which will be exacerbated as some investors are forced to sell because of their investment criteria. This could indirectly affect investor returns, and many could suffer losses.

CLO downgrades will further reduce the availability of loans for struggling companies (because investors have gotten spooked), and the impact on borrowers and lenders alike could potentially exacerbate the economic slowdown regardless of the size of potential stimulus packages. For example, the need for banks and insurance companies to liquidate assets to raise capital to satisfy capital requirements could lead to fire-sales, while the growing proportion of people nearing retirement age will be hit by declining asset values as their pension pots are impaired. Altogether, this could hurt the people and companies who are the most in need of help.

Fortunately, more than half of $US-denominated leveraged loans are not due until 2024 and 2025 (with only ~$150 billion due in 2021), but shrinking investor appetite for risky debt could have effects earlier than that. For example, MSCI estimates that if the current economic conditions persist, leveraged loan defaults could reach “unprecedented levels”, with the rating agency Fitch estimating that leveraged loan defaults could reach as high as 10 – 15 %. (This is in excess of the 5 % risk assumption that is built into the CLOs.)
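The gap between the modelled and worst-case assumptions can be illustrated with back-of-the-envelope arithmetic, using only the default and recovery rates quoted above (the scenario labels are mine, and expected pool loss is simplified to default rate times loss-given-default):

```python
# Expected pool loss = default rate x (1 - recovery rate).
# Rates taken from the figures quoted in the text; everything else invented.
scenarios = {
    "modelled":   {"default_rate": 0.05, "recovery": 0.70},
    "Fitch high": {"default_rate": 0.15, "recovery": 0.70},
    "worst case": {"default_rate": 0.15, "recovery": 0.35},  # recovery halved
}

for name, s in scenarios.items():
    loss = s["default_rate"] * (1 - s["recovery"])
    print(f"{name:>10}: expected loss of ~{100 * loss:.1f}% of the pool")
```

Even this crude arithmetic shows the worst case producing losses several times larger than the modelled scenario, which is exactly the kind of gap that lower tranches cannot absorb.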

The pace of the CLO-rating downgrades could exacerbate issues further, by increasing the proportion of high-yield tranches in CLO investor portfolios. Again, much like during the Great Financial Crisis, this shows that the real, underlying, ‘fat-tail’ risk of the securities has not been well-understood.

A chart from MSCI [link] showing the 30-day rolling average correlation between implied default rates for different sectors in the immediate aftermath of the initial COVID-19 shock.

So, why should we care?

I think developments in the CLO space will be important to monitor for people hoping to better understand the potential economic fallout from COVID-19, since CLO downgrades are correlated with economic performance as companies are cut off from access to funding. One reason for my concern is the comparatively high yield of the riskiest CLOs, despite this still being a relatively early point for understanding the full economic impact of COVID-19. For example, CCC-rated yields today are equivalent to those seen during the more acute stages of the Financial Crisis in 2008/2009 (a full two years after the first signs of trouble had emerged, when the fallout was becoming critical). 

Focussing on that part of history, the similarities between what was happening to the market in 2007 and what’s happening today are quite intriguing. To wit, the first signs of something being amiss in the lead-up to the Financial Crisis occurred in autumn 2006, as banks started to lose money on CDOs. February 2007 saw HSBC’s announcement that it had taken a surprising hit on subprime mortgage investments. After this, the market stabilised, and the US Federal Reserve said in March that the impact was likely to have been “contained”. In the US, credit quality also improved in March/April because of tax returns. Meanwhile, the stock market seemed relatively unaffected—until more bad news arrived in June 2007. This was when the market reacted, and the fallout lasted for the next 18 months. 

I am intrigued by the similarities between the reaction of the 2007 and 2020 stock markets and the timeline of the developments. For example, while the market saw a strong impact from the growing spread of SARS-CoV-2 in February and March, it has since stabilised. Many Q1 numbers also did not take the economic impact of COVID-19 into account, and the Q2 numbers might therefore have a larger impact. (For example, on the most recent earnings call of one of the companies most heavily exposed to CLO deterioration, Oxford Lane Capital, management noted that “We know the economy is going to have a horrible second quarter”.) Low underlying consumer demand will also exacerbate the current situation, since easing lockdowns will do little to reinvigorate suffering economies unless people feel safe enough to return to normal. This will require more effective policies around the economic reboot than we have seen to date, and leveraged loan defaults are expected to increase from current levels through to the second half of 2020. 

In addition, analysts following the CLO market are expecting continued downgrades as more and more CLOs fail their stability checks (i.e. the underlying debt pools are seeing higher-than-expected defaults), and some suspect that this is only the first act of something that will take longer to play out. (Indeed, Oxford Lane Capital, again, said in their most-recent earnings call that they are “concerned about a likely increase in defaults”.) In contrast to the last crisis, where banks were worst hit by the CDO fallout (causing a structural financial crisis), the organisations most exposed to a potential CLO fallout today are the leveraged companies themselves and their investors (making this more of a corporate debt crisis).

Of course, government stimulus packages might be able to stave off some of the financial fallout from growing default rates, but they will also be far from perfect, since they won’t be addressing the underlying cause of the fallout: The virus itself and our primitive understanding of risk and the financial markets themselves.