AI Will Fix Everything

In 2022:

A Google engineer claimed that an AI he was conversing with was sentient.

Tesla revealed Optimus.

ChatGPT came out.

Lots and lots of other AI investment and research and progress and hype happened.

Does this mean our investing woes are over? If AI can drive our cars and do our cooking and know what music we’ll like, surely it can invest our money for us, too?

But before liquidating your assets and handing them over to the bots, it might be worth exploring what they are and how they work. 

“AI” simply refers to the processing and synthesising of information by machines.

That sounds simple – so why all the fuss? Well, when people refer to AI and the impending AI revolution/dystopia, they are really referring to the advanced shit – to machine learning. These machine learning models use fancy statistical algorithms to “learn” via trial and error.

How? They have a cost function, set by humans. They have input data – and something to be inferred from this data. They edit their internal state in such a way as to minimise their error given this input data and their cost function.

How do they calculate the cost? How do they know when they have made a mistake? Because the input data contains answers. This is how these models are trained. They learn/train from this input data and predict the correct response to new input data that doesn’t have answers. That’s all they are.
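
If you'd rather see that loop than read about it, here's a minimal sketch (toy data and a two-parameter model, purely illustrative): a cost function set by a human, input data that contains the answers, and an internal state edited to minimise the cost.

```python
import numpy as np

# Toy supervised learning: fit y = w*x + b by gradient descent.
# All data here is made up for illustration.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = 3.0 * x + 0.5 + rng.normal(0, 0.1, 200)   # input data WITH answers

w, b = 0.0, 0.0   # the model's internal state
lr = 0.1          # learning rate, set by humans, like the cost function

for step in range(500):
    pred = w * x + b
    error = pred - y
    cost = np.mean(error ** 2)           # the cost function: mean squared error
    w -= lr * np.mean(2 * error * x)     # edit internal state to reduce cost
    b -= lr * np.mean(2 * error)

print(f"learned w={w:.2f}, b={b:.2f}, final cost={cost:.4f}")
print("prediction for new input 0.5:", round(w * 0.5 + b, 2))
```

Swap in millions of parameters and a fancier cost function and you have, conceptually, the models everyone is fussing about.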

But wait, why am I trying to explain this when we have AI:

[Image: ChatGPT's answer to how it works. Source: https://chat.openai.com/chat]

Technically that's a problem

AI is rooted in statistics and probability and, like most of statistics, it works best when the data is distributed in a certain kind of way; it likes independent, identically distributed (iid) observations to learn from.

It works very well for problems such as classifying an image as a dog or a cat. A few pixels are never going to dominate a sample. An anomalous patch of brown in the centre of the image is irrelevant. Rules are learned from the image in its entirety; outliers don't matter.

[Image: Cat vs dog image classification. This is easy for AI to do. Source: https://gsurma.medium.com/image-classifier-cats-vs-dogs-with-convolutional-neural-networks-cnns-and-google-colabs-4e9af21ae7a8]

But that’s different in the investing environment.

Return is driven by outliers. By extreme events. By fat tails. These are much harder to predict because they are so infrequent.
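
To see what fat tails do, here's a quick simulation (illustrative distributions, not fitted to any real market) comparing how much of the total movement the biggest 1% of days account for under thin and fat tails:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000  # simulated daily returns

# Thin tails: normal. Fat tails: Student-t with 2.5 degrees of freedom.
normal_r = rng.normal(0, 0.01, n)
fat_r = rng.standard_t(2.5, n) * 0.01

def top_share(r, k=100):
    """Fraction of total absolute movement from the k biggest days (k=100 is 1%)."""
    a = np.sort(np.abs(r))[::-1]
    return a[:k].sum() / a.sum()

print(f"normal:     top 1% of days = {top_share(normal_r):.0%} of all movement")
print(f"fat-tailed: top 1% of days = {top_share(fat_r):.0%} of all movement")
```

Under the fat-tailed distribution a handful of extreme days dominate; miss those and you've missed most of the action.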

That unpredictability is partly down to complexity. It is simply not possible to feed models all the data in the universe, which is what would be required to predict market movements 10+ years in advance. There are too many interdependent variables.

Another reason is memory: the return of the FTSE 100 each year isn't the result of a fresh, independent trial. It's affected by all the previous "experiments", in ways we don't understand. This is very hard to incorporate into models.

Note that this doesn’t matter in the short run. So in the short run, AI can be very useful. There’s a reason why quant funds now seem to own most of the market. Does this mean that in the future, when compute and storage and energy are cheap, everyone will just run their own model and trade short-term?

No. Because remember – to generate alpha you must have an advantage. Those who have more data, those who have better data, those who have more compute and storage, those with better models, etc. will win in this type of environment. SPOILER ALERT: that won’t be retail investors.

Humans suck too

But hold on, aren’t markets too complex for humans as well?

Models struggle because they are backwards-looking. By design. Their output is essentially the most-likely outcome given the inputs, based on historical data.
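
Here's a stylised example of what "backwards-looking by design" costs you (the regimes and numbers below are invented purely to show the failure mode):

```python
import numpy as np

rng = np.random.default_rng(2)

# A backwards-looking "model": the historical mean return.
history = rng.normal(0.08, 0.15, 50)   # 50 years of ~8% mean returns
future = rng.normal(-0.02, 0.25, 10)   # regime change: new mean, new volatility

prediction = history.mean()             # most-likely outcome given the inputs
errors = future - prediction

print(f"model predicts {prediction:.1%} every year")
print(f"actual future mean: {future.mean():.1%}")
print(f"mean absolute error: {np.abs(errors).mean():.1%}")
```

The model faithfully outputs the most likely outcome given its history; it just has no way of knowing that the history no longer applies.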

This is sort of how humans work, too. I guess the question is whether these models will ever acquire enough data to develop the same kind of judgement as humans.

You may think that's a long way off, but it's not. Think about the average retail investor: robo-advisers probably make better decisions than they do already. And they don't even leverage AI.

Because they don’t have to. Because 1) personal finance and to a large extent retail investing are genericisable and 2) most people don’t know the basics. Hence basic knowledge can be applied to most people with relative success.

Terra Machina

What happens when the machines take over?

What happens when everyone’s investment decisions are dictated by AI?

Will everyone just invest in the same things?

Yes and no.

Yes because people will likely use the same models trained on basically the same data. Everyone is trying to solve a similar optimisation problem – max CAGR.

No because preferences might be slightly different. People have different capital and risk tolerance and faith in the model and a bunch of other differences, too. No because not everyone will have access to the same data or models or compute.
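
To make the "same optimisation problem" point concrete, here's a minimal sketch of the kind of thing everyone's InvestingAI would roughly be solving: maximise expected log growth (which compounds to CAGR) over portfolio weights. The three assets and their returns are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical asset returns; a real model would estimate these from data.
rng = np.random.default_rng(3)
returns = rng.normal([0.05, 0.08, 0.11], [0.05, 0.12, 0.20], size=(1000, 3))

def neg_log_growth(w):
    # Minimising the negative expected log growth maximises growth.
    return -np.mean(np.log1p(returns @ w))

result = minimize(
    neg_log_growth,
    x0=np.array([1/3, 1/3, 1/3]),
    bounds=[(0, 1)] * 3,                                       # long-only
    constraints={"type": "eq", "fun": lambda w: w.sum() - 1},  # fully invested
)
print("weights:", result.x.round(2))
```

Different capital, risk tolerance, and data change the inputs, but the shape of the problem is the same, which is why the outputs converge.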

As with any area that AI automates, human investing skills will atrophy.

This is a problem. I certainly wouldn't want my InvestingAI making all my investing decisions. I certainly wouldn't want zero knowledge of what it's doing.

But don't we already do this? Most people have no clue that Google Maps is using AI, let alone what it's doing or why. But no one cares.

Because the stakes are lower. If I get sent to slightly the wrong place, or it takes longer than expected, or Maps crashes, I'll be mildly miffed, but it's not the end of the world. But if my InvestingAI invests in asset X for reasons I don't understand, and asset X proceeds to fall 75% so that I'm down bad when I liquidate, I'll be furious.

The nearer you are to life-or-death scenarios, the more careful you have to be about human skill atrophy. I still want pilots to be fully trained, even though autopilot does all the work.

Retail investing isn’t quite life and death, but it’s close.

My non-AI prediction

And another thing: AI can't tell that you're nervous about your Mum's health and so want to save some money for private health care rather than invest it. Or that you see a long-term trend emerging. Or that an investment seems too good to be true.

This is the stuff AI can’t do:

  1. Make long-term predictions. There are too many variables and too much complexity. Note how we can't predict the motion of billiard balls after 9 collisions (see the sketch after this list). Now try to predict the price of the FTSE 100 20 years from now.
  2. Take emotions into account. Not effectively, anyway. You could add "how you're feeling", heart rate, endorphin levels, etc. as inputs into the model, but sorry, that's not the same thing.
  3. Use intuition and judgement. It may make non-obvious decisions in a black-box sort of way that looks like intuition. But it's not.
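
On point 1, here's the billiard-ball problem in miniature: the logistic map, a classic chaotic system (a stand-in for markets, not a model of them). Two starting points that differ by one part in ten billion end up nowhere near each other:

```python
# Chaotic systems amplify tiny measurement errors exponentially.
x, x_err = 0.4, 0.4 + 1e-10   # two states differing by a dust-speck
for step in range(1, 61):
    x = 3.9 * x * (1 - x)
    x_err = 3.9 * x_err * (1 - x_err)
    if step % 15 == 0:
        print(f"step {step:2d}: x={x:.6f}  x_err={x_err:.6f}  gap={abs(x - x_err):.2e}")
```

If a measurement error of 1e-10 wrecks a one-line equation within 60 steps, 20-year market forecasts don't stand much of a chance.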

But then, is AI actually better because it can't do these things? There's certainly a good argument for that. Is an AI going to make a better decision than big John, who has just had a few pints, is in a good mood, and has a "feeling" oil is going to go down, so places an order for one 3x leveraged WTI Crude Oil short, please? Yeah.

As with most applications, the best approach will be in the intersection between humans and AI. The bots will do the heavy lifting. The research. They will run the numbers and give suggestions. But humans will still have to review these suggestions and make a judgement. We’ll still have the responsibility to get that right.

[Image: Even AI knows. Source: https://chat.openai.com/chat]

What do you think?
