Thinking About AI

What It Can Do, How It Does It, and Whether We Can Afford It

Lately, I’ve been spending a lot of time thinking about AI. Not just the “wow” moments when it does something uncanny, but the deeper questions: how it works, why it works the way it does, and what it’s going to cost us, in money, energy, and opportunity, to keep pushing it forward.

The Cost Paradox

According to NVIDIA, the cost of AI tokens, the basic units of computation for large language models, has dropped roughly tenfold.

That sounds like a victory for efficiency. But here’s the catch: better reasoning, the kind that requires an AI to “think longer” and explore more possibilities, can consume a hundred times more tokens.

So, while the unit price is falling, the total bill for high-quality AI is still climbing.

It’s a bit like cheaper petrol for your car, but now you’re driving a vehicle that consumes fuel at ten times the rate because you want to go further, faster, and in more complex directions.
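The arithmetic behind this paradox is worth making concrete. A back-of-the-envelope sketch, where the 10× price drop and 100× token consumption come from the paragraphs above, but the baseline price and token counts are purely illustrative assumptions:

```python
# Illustrative token economics: unit price falls 10x, but "thinking
# longer" reasoning consumes 100x more tokens per answer.
# Baseline figures below are hypothetical, chosen only for readability.

old_price_per_token = 10e-6                       # assume $10 per million tokens
new_price_per_token = old_price_per_token / 10    # units now ~10x cheaper

old_tokens_per_answer = 1_000      # short, single-pass answer
new_tokens_per_answer = 100_000    # long-reasoning answer: 100x the tokens

old_cost = old_price_per_token * old_tokens_per_answer
new_cost = new_price_per_token * new_tokens_per_answer

print(f"old cost per answer: ${old_cost:.4f}")            # $0.0100
print(f"new cost per answer: ${new_cost:.4f}")            # $0.1000
print(f"total bill changed by {new_cost / old_cost:.0f}x")  # 10x
```

Even with a tenfold cheaper unit price, the bill per high-quality answer rises tenfold — exactly the "falling unit price, climbing total bill" effect described above.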

Is AI Waiting for Its Fast Fourier Transform (FFT) Moment?

In computing history, there are rare moments when a problem’s complexity collapses thanks to a brilliant algorithmic leap.

The Fast Fourier Transform (FFT) is the classic example: turning the Discrete Fourier Transform from an O(N²) grind into an O(N log N) breeze.
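The scale of that collapse is easy to quantify. A quick sketch of the asymptotic operation counts (constants ignored; the point is the gap, not the exact figures):

```python
import math

# Rough operation counts for transforming N points:
# a direct DFT is O(N^2), the FFT is O(N log N). Constants ignored.
N = 1_000_000

dft_ops = N ** 2             # ~1e12 operations
fft_ops = N * math.log2(N)   # ~2e7 operations

print(f"direct DFT: ~{dft_ops:.2e} ops")
print(f"FFT:        ~{fft_ops:.2e} ops")
print(f"speedup:    ~{dft_ops / fft_ops:,.0f}x")
```

For a million points, that is roughly a 50,000× reduction in work — nearly five orders of magnitude, which is precisely the kind of leap the article is asking whether AI will ever get.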

AI hasn’t had that moment yet. Our recent leaps haven’t come from radically smarter algorithms, but from brute force: bigger GPUs, reduced precision arithmetic, faster interconnects, and vast memory pools.

When software optimisations do arrive, they tend to be modest: a 2× gain here, a 1.5× gain there, not the order-of-magnitude breakthroughs that change the game overnight.

If it comes, an FFT moment in AI would mean achieving the same (or better) capability with orders of magnitude less compute, memory, and energy.

Such a leap would substantially collapse the cost curve, democratise access to frontier AI, and shift the competitive advantage from “who has the most GPUs” to “who has the smartest algorithms.”

The Hype, the Reality, and the Inevitable

AI is here to stay. It’s too transformative across science, business, and creativity to vanish into the graveyard of overhyped tech. But are we at the peak of the hype cycle? Possibly.

The real barrier to adoption may not be capability, but cost. Without that elusive FFT moment, the economics may bite harder than the technology’s limits.

Where the Wins Will Come First

In the near term, AI’s biggest scientific contributions may not be in headline-grabbing breakthroughs, but in the quieter work of “filling in the gaps.”

AI can trawl through mountains of data, cross-reference obscure research, and spot patterns or incremental opportunities that humans overlook.

Not because we’re incapable, but because we get bored, distracted, or prioritise the “big wins.” AI doesn’t get bored (probably), and it can keep track of details at a scale no human mind can match.

The Industry’s Next Moves

OpenAI’s decision to launch an applications division feels like a deliberate echo of DeepMind’s pre-Google era, a time when targeted, high-impact projects like AlphaFold redefined what AI could achieve in specific domains.

If history repeats, we should see a wave of applied AI breakthroughs that are less about general intelligence and more about solving stubborn, high-value problems.

The Infrastructure Squeeze

But here’s the uncomfortable truth: the pace of AI progress may be throttled not by researchers’ imagination, but by the physical world.

We’re already hitting limits in GPU supply, network capacity, and, most critically, electrical power.

The hunger for energy is so intense that Microsoft has reportedly signed a long-term power deal to bring the Three Mile Island nuclear plant back online, despite the associated PR baggage.

When the bottleneck is measured in megawatts, not model parameters, the conversation shifts from “Can we build it?” to “Can we power it?”

Energy: The Final Gatekeeper

If AI is to become truly pervasive and affordable, energy must become cheaper and more abundant. Small Modular Reactors could help. Fusion power, if it arrives in time (if ever), could be the ultimate cheat code.

Until then, the economics of AI will be tethered to the economics of energy. And that means the future of intelligence may depend as much on engineers in hard hats (and politicians) as on researchers in lab coats.

Food for Thought

AI today feels like a paradox: inevitable yet fragile, astonishing yet constrained.

We’re riding a wave powered by hardware and scale, waiting for the algorithmic breakthrough that could change the economics overnight.

Until that moment comes, the question isn’t just what AI can do, it’s whether we can afford to let it do it.

Dairsie Latimer
Technology Fellow
Red Oak Consulting
