Chaos Winks at Predictive AI

*I’m a geek, I love AI, I love generative AI (it helps me create these posts), and I love to hate Predictive AI (which this post is about). I’ve created model after model, ensembles, you name it; they work, until they don’t.
**There are a few techie bits in this: random forests and gradient boosters are types of predictive model.
***I used ChatGPT to help me write this.

When the Model Works… Until It Doesn’t

For about a week, everything works brilliantly. The ensemble of random forests, gradient boosters, and whatever else I’ve thrown together sings in harmony. Predictions hit, money rolls in, and for a moment, it feels like I’ve cracked the code. Then, almost as if the universe has a sense of humour, it starts slipping away. The patterns that seemed so obedient begin to drift, and profits fade. Rebuild the models, they roar back to life, then fade again. It’s the same story every time — as if trying to pin the future down just makes it wriggle more. Perhaps the world doesn’t like being predicted.
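For the curious, the kind of ensemble described above can be sketched in a few lines. This is a minimal illustration using scikit-learn on invented data, not the actual system: the features, target, and split sizes are all made up.

```python
# A toy version of the ensemble idea: a random forest and a gradient
# booster averaged together. All data here is invented for illustration.
import numpy as np
from sklearn.ensemble import (RandomForestRegressor,
                              GradientBoostingRegressor, VotingRegressor)

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 6))   # pretend race features (form, going, weight, ...)
y = X @ rng.normal(size=6) + rng.normal(scale=0.5, size=500)  # pretend outcome metric

ensemble = VotingRegressor([
    ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
    ("gb", GradientBoostingRegressor(random_state=0)),
])
ensemble.fit(X[:400], y[:400])
preds = ensemble.predict(X[400:])   # predictions on the held-out 100 rows
```

The averaging is what makes it "sing in harmony" for a while: each model's quirks partially cancel out, right up until the underlying patterns drift.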

The Mirage of Retrospective Accuracy

Then there are days like January 1st, 2025 — days that make you think predictive AI might just be magic. The system singled out four horses with odds of 10/1 or greater: Good Deal at 20/1 at Fairyhouse, Precious Metal at 16/1 at Cheltenham, Be Fierce at 10/1 at Tramore, and John Storey at 18/1 at Fairyhouse. Taken together, a £1 bet on these selections would have netted a staggering £74,900.80. A happy new year indeed.
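For anyone unfamiliar with how a figure like that compounds, here is the basic win-accumulator arithmetic: each fractional price f returns f + 1 per unit staked, and the legs multiply. This is a sketch at the listed board prices; the exact figure quoted in the post will differ where settlement prices differed.

```python
# Win-accumulator arithmetic at the listed fractional odds.
# Each leg multiplies the running stake by (odds + 1).
from math import prod

fractional_odds = [20, 16, 10, 18]   # Good Deal, Precious Metal, Be Fierce, John Storey
stake = 1.0                          # £1

payout = stake * prod(f + 1 for f in fractional_odds)
print(f"£{payout:,.2f}")             # compounded return if all four win
```

Four legs at big prices compound brutally fast, which is exactly why the figure looks like magic in hindsight and why so few such slips ever land.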

It wasn’t a fluke in the data, either. I had checked for leakage, overfitting, everything a careful modeller should. On paper, it all looked flawless. And my reaction? Disbelief, excitement — the kind that makes you imagine finally updating the kitchen. For a brief, shining moment, the model felt like THE golden ticket.
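One of the checks mentioned above — guarding against leakage — boils down to validating chronologically, so no future race ever informs a prediction about its own past. A sketch of that discipline, using scikit-learn's TimeSeriesSplit on invented data:

```python
# Leakage-aware validation: every test fold is strictly later in time
# than its training fold. Data here is invented for illustration.
import numpy as np
from sklearn.model_selection import TimeSeriesSplit
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))   # races in date order
y = rng.normal(size=300)

scores = []
for train_idx, test_idx in TimeSeriesSplit(n_splits=5).split(X):
    model = GradientBoostingRegressor(random_state=0).fit(X[train_idx], y[train_idx])
    scores.append(model.score(X[test_idx], y[test_idx]))  # R^2 on strictly later races
```

A plain shuffled cross-validation would quietly let tomorrow's form lines train today's model, which is one of the classic ways a backtest ends up looking flawless on paper.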

But herein lies the catch: backtests and historical data can be seductively convincing. They present a neat, retrospective story, where patterns appear stable and outcomes predictable. In reality, though, these patterns are fragile. The system’s apparent brilliance is often just a reflection of what was — not necessarily what will be. The universe, it seems, enjoys a little chaos.

New Data Isn’t Just “More Data”

Here’s a curious pattern I’ve noticed. The models almost always falter on new data — except, strangely, when I step away. Run the predictions, place the bets in the morning, and then deliberately stay away from the computer until late evening. Don’t watch, don’t check, don’t obsess. When I do that, more often than not, the results surprise me — small wins, bigger wins, occasionally a streak that feels almost miraculous. Yesterday was exactly that: a £60 return from just £7 of bets — a trixie, a double, and two singles.
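For readers outside betting jargon: a trixie on three selections is every possible double plus the treble, four lines in all. Assuming £1 unit stakes (an assumption on my part), a trixie plus one double plus two singles comes to exactly the £7 staked:

```python
# Bet-structure arithmetic for the £7 slip. A trixie on three selections
# is all the doubles plus the treble: 4 lines. Unit stakes assumed £1.
from itertools import combinations

selections = ["A", "B", "C"]   # placeholder names, not the actual horses
trixie_lines = list(combinations(selections, 2)) + list(combinations(selections, 3))

total_stake = len(trixie_lines) * 1 + 1 + 2   # trixie + a double + two singles
print(len(trixie_lines), total_stake)
```

The appeal of a trixie is that any two winners from the three still pay something, which is how small stakes can turn into pleasant surprises like that £60.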

It’s horribly unscientific, but it feels like the universe enjoys a little tease: the less I watch and expect, the more it seems willing to reward. The patterns the models suggest don’t change — but my interaction with them, or lack thereof, seems to matter. Perhaps it’s coincidence, or perhaps stepping back removes a kind of self-imposed tension that lets chance play out more generously.

The takeaway? New data is not just “more of the same.” It’s a living, shifting beast. And sometimes, the wisest thing a modeller can do is step aside and let it unfold without interference.

Prediction as Participation

One of the stranger lessons in predictive AI is that sometimes, the act of predicting feels like participating in the very system you’re trying to forecast. Place a bet, and suddenly you’re part of the market you were attempting to analyse. Post a prediction, and the world shifts subtly in response. Patterns that seemed stable in historical data start to bend, adapt, or outright vanish.

It’s not magic, nor is it conspiracy — it’s just feedback. Every market, every competitive system, every human-driven scenario is, at least in part, reactive. People, algorithms, and odds all respond to information. In horse racing, odds adjust with bets; in finance, prices react to forecasts; even social behaviour changes when predictions circulate. The very attempt to anticipate the future alters it.
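The "odds adjust with bets" point can be made concrete with a toy parimutuel-style pool, where the price on a runner is just the total pool divided by the money on that runner. This is a deliberately crude illustration, not a model of how bookmakers actually price:

```python
# Toy feedback loop: in a parimutuel-style pool, betting on a horse
# shortens its own price. Figures invented; track take ignored.
pool = {"our_pick": 100.0, "field": 900.0}   # £ staked so far

def decimal_odds(pool, runner):
    return sum(pool.values()) / pool[runner]

before = decimal_odds(pool, "our_pick")   # price before we bet
pool["our_pick"] += 50.0                  # we pile in
after = decimal_odds(pool, "our_pick")    # price after: shorter
print(before, after)
```

Even in this cartoon version, the act of betting moves the very number the model was forecasting, which is the feedback loop in miniature.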

This is why those moments of model brilliance feel so fleeting. One week you’re celebrating £74,900.80 in theoretical winnings. The next, a fresh wave of data arrives, and the patterns you thought were stable dissolve like sugar in water. Prediction is not merely observation; it’s participation. The model doesn’t just watch the universe — it nudges it, however slightly, each time it makes a call. And in systems that are already complex, those nudges ripple unpredictably.

The Observer Effect — Metaphor or Warning?

Some days, the chaos of prediction feels almost… quantum. I can’t help but picture the universe as a mischievous particle, watching me, winking, and rearranging itself the moment I try to pin it down. In physics, the observer effect describes how measuring a system can change its state. In horse racing, finance, or any reactive system, it sometimes feels eerily similar.

Place a bet, check the odds, tweak the model — and the outcomes dance just out of reach. Build a prediction system, let it run in isolation, and the world seems to cooperate. Expose it, interact with it, and suddenly, unpredictability spikes. It’s almost as if the universe whispers: “You may have numbers, but I have whimsy.”

Of course, this isn’t literal quantum mechanics at work. No wavefunctions are collapsing at Fairyhouse or Cheltenham. But the metaphor lands: the more you observe and expect, the more the system feels chaotic. Models encounter new data, human behaviour adapts, and the feedback loops proliferate. In that sense, predictive AI’s struggles can feel like a macro-level echo of the quantum world’s teasing unpredictability — a reminder that some things may resist being neatly boxed, no matter how clever our algorithms.

The end?

If there’s a takeaway for anyone dabbling in predictive AI, it’s this: tread carefully, humbly, and probabilistically. Models can reveal patterns, highlight opportunities, and give an edge — but they are never infallible. Historical brilliance does not guarantee future success, and new data almost always behaves differently.

Expect volatility, embrace the unexpected, and recognise that feedback loops will always be part of the game. Small wins, careful bets, and iterative learning often outperform chasing grand, foolproof predictions. In short: plan for chaos, respect your models, but never confuse them with the universe itself.


Predictive AI can mesmerise with dazzling historical wins, yet falters when faced with new data. Horse-racing experiments reveal that models succeed briefly, then unpredictably fail. In reactive systems, observation and expectation subtly alter outcomes. The lesson: respect the models, plan for chaos, and never mistake algorithms for control over the world.
