The human brain is a machine, a very special piece of kit.
It looks at other phenomena and makes a call as to whether there is sentience for it to interact with. Actually, sorry, I am probably wrong here: a sentient mind probably realises that intelligence is a spectrum that runs from zero to sentience and beyond. This is my position when it comes to Theory of Mind, and it allows me to temper my very human hubris about what can or cannot be a vessel for human-level minds. This is what prevents us from becoming too solipsistic.
If you need evidence that solipsism is a bad strategy, look no further than the markets. Wave after wave of pioneers have attempted to pre-empt or even tame the markets with innovations going back to prehistory: barter systems, currency, exchanges, HFT and, right now, AI. All they have achieved is to make the system more efficient and to let the market dominate in a new way. Each time, the novelty or machinery that was hyped up as the final frontier (hence the solipsism) was simply assimilated. Those who realised that the market would adapt lived to play another day. Those who refused to see the market as intelligent are not here anymore.
Say you are the next big thing, the super quant or machine-learning messiah. You build a system that predicts the market better than all your peers. You have plugged in every input you could think of: the micro and macro market-structure features and the exotic alternative data. You have implemented the execution side correctly. You put the system into action and it starts interacting with the real beast, the wider market. Big profits are made initially, your system adapts to minute changes, and so you think you will live forever.
What happens next? The market becomes you, and you are the one everyone wants to beat. Some new challenger will be motivated and resourceful enough, and you will be beaten; this is not an 'if' but a 'when'. The other important thing to bear in mind is that once you become the market, the random market participant will evolve to stand an even chance of beating you. All of a sudden, the superintelligence of your AI is back on par with the human mind.
This happens because your AI does not really have skin in the game; the market is a derivative of real-life entrepreneurship, politics and human whims (because we still have control). As you must have heard, 2018 was a bad year for quant firms, and if you are wondering how this can still happen, the answer is that there is always over-confidence in the extent of understanding and control that one really has.
As long as machine learners rely on probability instead of trying to nail down causation, they will be at the mercy of the next black swan. The hubris and hype around novel systems must be tempered by a good dose of realism. Keep it real: I tend to favour reinforcement learning systems that generate clear-cut rules with few significant features, even though the raw inputs can be much higher-dimensional.
Why? Why not have activation functions with a long trail of features and non-trivial coefficients? Because when the market actually makes up its mind, the move falls upon far fewer factors. Trying to map out every eventuality probabilistically may give you a good sense of academic fulfilment, but the false sense of security it builds into your system is not going to be worth it in the end.
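To make the preference for sparse, clear-cut rules concrete, here is a hypothetical toy in Python. It is mine, not the author's system, and it is deliberately a supervised stand-in rather than reinforcement learning: twenty features are recorded, but the outcome is driven by only two of them, and even a crude single-feature rule ranking picks those two out.

```python
import random

random.seed(0)

# Hypothetical toy, not a trading system: 20 features are observed,
# but the "market move" is generated by a clear-cut rule on only two.
N_FEATURES = 20

def sample():
    x = [random.gauss(0, 1) for _ in range(N_FEATURES)]
    # True generating rule: only features 3 and 7 matter.
    y = 1 if (x[3] > 0 and x[7] > 0.5) else 0
    return x, y

data = [sample() for _ in range(5000)]

def rule_accuracy(i, t):
    # Accuracy of the single clear-cut rule "predict 1 iff x[i] > t".
    return sum((x[i] > t) == (y == 1) for x, y in data) / len(data)

# Rank features by how well a simple threshold rule on each one predicts
# the outcome; uninformative features score near chance level (~0.5).
scores = sorted(((rule_accuracy(i, 0.0), i) for i in range(N_FEATURES)),
                reverse=True)
top = [i for _, i in scores[:3]]
print(top)  # the two informative features lead the ranking
```

The point of the sketch is not the ranking method, which is crude on purpose, but the shape of the result: a handful of rules on a couple of features captures most of the signal, while the other eighteen inputs contribute only noise for a model to over-fit.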
The market is not a perfect-information game; the search space is much larger than that of Chess or Go. It is not simply about pattern recognition or about 'dreaming' Markov-chain scenarios of what is realistic. Existing market participants can game-theory your system into errors of over-confidence. A system with empathy for these concepts stands a better chance, but as it stands a human trader will still be able to outperform machines.
Obviously, this won't last forever: Theory of Mind is being researched by the AI community, and this will bring a new appreciation of reality to machine-learning frameworks. To some extent, adversarial learning systems already rely implicitly on ToM, in the sense that they accept that there are adversaries to be interacted with. The relationship is currently enforced by the maker rather than spontaneously discovered by the learning unit, but the leap to that level of autonomy is not unthinkable.
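A minimal sketch of the adversarial point, assuming nothing beyond the Python standard library and a game of my own choosing (matching pennies, not anything from the article): two deterministic learners each best-respond to the opponent's observed history, a classic fictitious-play setup. Neither can hold an edge for long; the adapting adversary drags the win rate back towards 50/50, much as the market does to any predictor that becomes the thing to beat.

```python
# Matching pennies: A wins when the actions match, B wins when they differ.
# Each player keeps counts of the opponent's past actions and best-responds
# to that empirical history (fictitious play), with the relationship between
# adversaries enforced by the maker, as in the text.
counts_a = [1, 1]  # B's actions as seen by A: [heads, tails]
counts_b = [1, 1]  # A's actions as seen by B

def best_response_matcher(counts):
    # A expects B's more frequent action and plays it, hoping to match.
    return 0 if counts[0] >= counts[1] else 1

def best_response_mismatcher(counts):
    # B expects A's more frequent action and plays the opposite.
    return 1 if counts[0] >= counts[1] else 0

a_wins = 0
T = 10_000
for _ in range(T):
    a = best_response_matcher(counts_a)
    b = best_response_mismatcher(counts_b)
    if a == b:
        a_wins += 1
    counts_a[b] += 1  # each side updates its model of the adversary
    counts_b[a] += 1

print(a_wins / T)  # hovers near 0.5: no durable edge against an adapter
```

Against a *fixed* opponent either learner would exploit it ruthlessly; it is only because both sides keep adapting that the advantage washes out, which is the whole argument in miniature.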
In short, if a machine wants to sustain out-performance in the market, it must have 'empathy' with the market, and that entails ToM. For machines to break away decisively from human leadership in the market, they would have to influence real life (remember, the market derives from real life) in such a way that humans lose control and hence lose empathy with the market. When that happens, we will have moved on from dabbling in the market one way or another anyway.
Written by Hans Balgobin
Author’s note: The cover picture is my rendering of Plato’s Allegory of the Cave with AI taking the place of the prisoners and the market taking the place of the shadow makers.