AI-driven trading algorithms have their champions, but QMA Asset Management CEO Andrew Dyson is not one of them. Dyson says that while QMA is exploring the use of such AI trading software, he doubts that usable trading signals can be deduced from these algorithms. Dyson does add that there is always a place for data analysis, but it cannot replace human quants.

Faye Kilburn interviewed AI skeptic Andrew Dyson for Risk.net.

“We are testing artificial intelligence in a number of scenarios, but we’re very skeptical of its value. There may be places where you can improve your trading process that we are actively considering. But as a decision-making tool we’re not using it and not by accident. We remain very wary of AI. It is a wonderful marketing concept. But, I think it’s easy to get carried away and overstate its value at the investment level,” Dyson says.

AI is being used to optimise everything from golf clubs and swings to the flavour of beer. But while it is reasonable to believe that repeating something like a golf swing, for example, will produce the same outcome, it is contested whether finance has similar repeatable patterns or laws of nature that machines can learn.

Dyson is one of several voices in the quant world to question whether the patterns of financial markets can be learned at all, given their inherently volatile nature.

“I can at least intuitively believe that you could design a better golf club using AI, but I can’t test and learn in that same independent way in finance. I can mine the historic data, but the future is completely different,” Dyson says. “We can test and learn within a particular single cycle, but we can’t have any confidence that the next time, it will be repeated.”

[…]

Underlying Dyson’s skepticism about AI is the firm’s core philosophy that all signals must have some understandable rationale, whether economic or behavioural. One of the biggest dangers with AI is that models become impossible to follow. At the same time, all statistical models are notoriously vulnerable to finding patterns in the data that are meaningless – a problem known as overfitting. AI, Dyson says, makes matters worse.

“Overfitting is fundamentally a problem of having too many variables and deriving a spurious correlation, particularly if those underlying variables are connected to one another. The problem with AI is you are taking in many more variables and you’ve actually got no way of knowing whether they are even connected to one another. So you’re creating the conditions where you could see more overfitting and it becomes harder to diagnose as a result,” Dyson says.
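Dyson’s point about spurious correlation is easy to demonstrate. Below is a minimal Python sketch (illustrative only – not QMA’s code or any real trading system): screen many candidate signals that are pure noise against a noise “returns” series, and the best in-sample correlation looks meaningful, yet vanishes on held-out data.

```python
import numpy as np

rng = np.random.default_rng(0)

n_obs, n_features = 500, 1000   # daily observations vs. many candidate signals
returns = rng.standard_normal(n_obs)                # pure-noise target
signals = rng.standard_normal((n_obs, n_features))  # pure-noise predictors

train, test = slice(0, 250), slice(250, 500)

# In-sample: correlate every candidate signal with returns and keep the "best".
corrs = np.array([np.corrcoef(signals[train, j], returns[train])[0, 1]
                  for j in range(n_features)])
best = int(np.argmax(np.abs(corrs)))
print(f"best in-sample |corr|: {abs(corrs[best]):.3f}")  # typically ~0.2

# Out of sample, the selected signal's apparent edge evaporates.
oos = np.corrcoef(signals[test, best], returns[test])[0, 1]
print(f"same signal, holdout:  {oos:.3f}")               # typically ~0.0
```

With 1,000 independent noise signals and 250 in-sample observations, the strongest in-sample correlation typically lands around 0.2 – enough to look like a tradable signal – yet it collapses toward zero on the holdout. The more variables a model ingests, the more such false winners it finds, which is exactly the diagnosis problem Dyson describes.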