artificialintelligenceee

AI Just Cracked a 99-Year Problem

Artificial intelligence moves fast. Most of the time, progress feels incremental—slightly better models, slightly faster results. But every once in a while, something happens that makes even experts pause and ask, “How did that just happen?”

That moment has arrived.

In the past few years, AI systems have solved scientific problems that humans struggled with for decades—and in some cases, nearly a century. These aren’t problems of data entry or pattern matching. They live deep inside pure mathematics, theoretical physics, and molecular biology, where intuition, proofs, and human experience traditionally dominate.

What’s different now is not speed. It’s scale of reasoning. AI is navigating problem spaces so large that no human mind—or traditional computer—could realistically explore them.

Let’s start with the breakthrough that shocked mathematicians.

**1. The Math Problem AI Finally Cracked**

At the center of this story is the Andrews–Curtis conjecture, introduced in 1965. In simple terms, it asks whether every balanced presentation of the trivial group (an algebraic description with equally many generators and defining relations) can always be reduced to the standard form using a limited set of allowed transformations.

A useful way to picture it is as an abstract Rubik’s Cube—not made of colors, but of algebraic expressions. You’re allowed only specific moves, and the question is whether every scrambled configuration can eventually be returned to a standard form.
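To make the "allowed moves" concrete, here is a toy Python sketch of the three elementary Andrews–Curtis transformations (concatenate two relators, invert a relator, conjugate a relator by a generator). This illustrates the move set only; it is not the Caltech system, and the integer encoding is an invention for the example.

```python
# Toy sketch of Andrews–Curtis-style moves on a balanced presentation.
# A relator is a word over the generators, encoded as a tuple of nonzero
# ints: +k means generator x_k, -k means its inverse.

def free_reduce(word):
    """Cancel adjacent inverse pairs like x x^-1."""
    out = []
    for g in word:
        if out and out[-1] == -g:
            out.pop()
        else:
            out.append(g)
    return tuple(out)

def concat(relators, i, j):
    """Move 1: replace relator i with (relator i · relator j)."""
    r = list(relators)
    r[i] = free_reduce(r[i] + r[j])
    return tuple(r)

def invert(relators, i):
    """Move 2: replace relator i with its inverse."""
    r = list(relators)
    r[i] = tuple(-g for g in reversed(r[i]))
    return tuple(r)

def conjugate(relators, i, g):
    """Move 3: replace relator i with g · r_i · g^-1."""
    r = list(relators)
    r[i] = free_reduce((g,) + r[i] + (-g,))
    return tuple(r)

# Start from the standard form <x1, x2 | x1, x2> and scramble it a little:
state = ((1,), (2,))
state = conjugate(state, 0, 2)   # r1 -> x2 x1 x2^-1
state = concat(state, 0, 1)      # r1 -> x2 x1 x2^-1 x2, which reduces to x2 x1
print(state)                     # ((2, 1), (2,))
```

The conjecture asks whether every scramble reachable this way can be undone with the same three moves — and the hard cases are the ones where no one could find the path back.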

For decades, mathematicians found examples that resisted simplification. These became known as potential counterexamples. Some of the hardest ones sat unsolved for 25, 30, even 40 years. No one could prove whether they were truly unsimplifiable—or whether humans just weren’t searching the right paths.

The problem wasn’t intelligence. It was scale.

**2. Why Humans Were Stuck for Decades**

The number of possible transformation sequences in the Andrews–Curtis conjecture grows explosively. Some solutions require thousands or even millions of steps. The search space is so vast that brute-force approaches are impossible—there are more potential paths than atoms on Earth.

Human intuition collapses almost immediately in this environment. Even traditional computers fail, because checking every option is computationally infeasible.
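A quick back-of-the-envelope calculation shows why. The branching factor of 12 below is a made-up stand-in, not a figure from the research, but the conclusion barely depends on it:

```python
# Toy illustration of combinatorial explosion: with a (hypothetical)
# branching factor of 12 legal moves per step, count how long a move
# sequence must be before the number of possible sequences exceeds the
# commonly quoted estimate of ~1.33e50 atoms on Earth.
branching_factor = 12        # hypothetical number of legal moves per step
atoms_on_earth = 1.33e50     # order-of-magnitude estimate

depth = 1
while branching_factor ** depth < atoms_on_earth:
    depth += 1
print(depth)  # 47 -- sequences of ~47 moves already outnumber atoms on Earth
```

And the hardest known cases need sequences thousands of times longer than that, which is why exhaustive search was never an option.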

For decades, these problems simply sat there, untouched—not because they were impossible, but because they were unreachable.

That changed this year.

**3. How AI Approached the Problem Differently**

A research team at Caltech built a reinforcement learning system designed specifically to operate in overwhelming mathematical spaces.

Instead of randomly searching, the AI learned patterns. It started with simple cases and gradually built something resembling mathematical instinct. Over time, it discovered long chains of transformations that worked reliably.

The researchers call these “super moves.” Each one is actually a bundle of smaller steps that the AI learned to reuse efficiently.

The system trained like a student leveling up:

- Easy problems first
- Gradually harder examples
- Then deep exploration of rare paths humans never found
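The leveling-up recipe can be sketched with generic tabular Q-learning on a stand-in task. To be clear: this is an illustrative toy, not the Caltech system, and the "count down to zero" task is invented for the example — but the curriculum structure (train on easy instances, then reuse what was learned on harder ones) is the same idea.

```python
import random

# Toy curriculum sketch: the stand-in task is "reduce a number to 0" with
# moves that subtract 1, 2, or 3. Q-learning trains on small starting
# numbers first, then progressively larger ones.
random.seed(0)
MOVES = [1, 2, 3]
Q = {}                                    # Q[(state, move)] value table

def q(s, m):
    return Q.get((s, m), 0.0)

def train(max_start, episodes=2000, alpha=0.5, gamma=0.95, eps=0.3):
    for _ in range(episodes):
        s = random.randint(1, max_start)
        while s > 0:
            legal = [m for m in MOVES if m <= s]
            if random.random() < eps:     # explore
                m = random.choice(legal)
            else:                         # exploit current estimates
                m = max(legal, key=lambda a: q(s, a))
            s2 = s - m
            best_next = 0.0 if s2 == 0 else max(q(s2, a) for a in MOVES if a <= s2)
            # Every step costs -1, so shorter solutions score higher.
            Q[(s, m)] = q(s, m) + alpha * (-1 + gamma * best_next - q(s, m))
            s = s2

for level in [3, 10, 30]:                 # curriculum: easy problems first
    train(level)

# The learned greedy policy takes the biggest legal step (fewest moves to 0).
policy = {s: max([m for m in MOVES if m <= s], key=lambda m: q(s, m))
          for s in range(1, 31)}
print(policy[6], policy[30])
```

The real system works on vastly larger state spaces and additionally compresses useful action sequences into reusable "super moves", which this toy does not model.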

It wasn’t trying to solve the entire conjecture. It focused on the most stubborn corner cases—the ones that had resisted human effort for decades.

And that’s where the breakthrough happened.

**4. The Breakthrough That Changed Everything**

The AI successfully solved entire families of potential counterexamples—the same ones mathematicians had been stuck on for 25 to 40 years. It reduced them back to the standard form, proving they were not counterexamples after all.

The full Andrews–Curtis conjecture remains unsolved. But a massive portion of its hardest open cases is now settled.

This marks something historic:
A machine independently discovered deep, multi-thousand-step reasoning paths in abstract mathematics—without human guidance.

And this wasn’t a one-off.

**5. Physics and Biology Are Seeing the Same Pattern**

**Century-Old Problems in Physics**

In physics, the Euler and Navier–Stokes equations have governed our understanding of fluid motion for over a century. They describe airflow over wings, ocean currents, smoke, turbulence—almost everything involving fluids.

One major unresolved question is whether these equations can produce a finite-time blowup, where quantities such as fluid velocity become infinite. The problem is so important that it is one of the Clay Mathematics Institute's seven Millennium Prize Problems, each carrying a $1 million award.

Recently, Google DeepMind used physics-informed AI models trained directly on the equations themselves. These models don’t guess—they obey physical laws at every step.

The AI discovered new families of singularities that humans had never identified, including structures that depend on surprisingly simple parameters. Some of these discoveries held up under rigorous computer-assisted proofs.
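The "obey physical laws at every step" idea can be sketched as a physics-informed loss term: a candidate solution is penalized by how badly it violates the governing equation. Below is a minimal toy using the 1-D heat equation and finite differences — the real models are neural networks trained with automatic differentiation on far harder equations, so everything here is illustrative:

```python
import numpy as np

def pde_residual(u, dx, dt):
    """For u[t, x] on a grid, estimate the heat-equation residual u_t - u_xx
    at interior points with finite differences. A physics-informed loss would
    penalize the size of this residual during training."""
    u_t  = (u[1:, 1:-1] - u[:-1, 1:-1]) / dt
    u_xx = (u[:-1, 2:] - 2 * u[:-1, 1:-1] + u[:-1, :-2]) / dx**2
    return u_t - u_xx

x = np.linspace(0, np.pi, 101)
t = np.linspace(0, 0.1, 101)
dx, dt = x[1] - x[0], t[1] - t[0]

# Exact solution u = exp(-t) sin(x) satisfies u_t = u_xx: tiny residual.
good = np.exp(-t)[:, None] * np.sin(x)[None, :]
# An arbitrary field does not: large residual, hence a large physics loss.
bad = np.sin(t)[:, None] * np.cos(3 * x)[None, :]

print(np.abs(pde_residual(good, dx, dt)).max())  # small: discretization error only
print(np.abs(pde_residual(bad, dx, dt)).max())   # large: the equation is violated
```

Training against a penalty like this is what keeps the model's output consistent with the physics instead of merely resembling data it has seen.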

That doesn’t solve Navier–Stokes yet—but it reshapes the map around the problem.

**Biology and the AlphaFold Revolution**

Biology tells the same story.

For decades, predicting how proteins fold into 3D shapes was considered a holy grail problem. The shape determines what a protein does, but finding it experimentally can take months or years.

In 2020, AlphaFold changed everything. It achieved near-experimental accuracy in protein structure prediction and shocked the scientific community.

Since then:

- Structures for 200+ million proteins have been predicted
- AlphaFold 3 now models full molecular complexes
- Drug discovery, genetics, and enzyme design have accelerated dramatically

AlphaFold didn’t solve biology—but it removed a massive experimental bottleneck that held entire fields back.

**Why AI Is Suddenly Solving Century-Old Problems**

Across math, physics, and biology, a clear pattern emerges.

AI isn’t just faster at calculation. It’s better at exploring spaces too vast for human intuition.

- Reinforcement learning builds long chains of reasoning
- Physics-informed models respect real-world laws
- High-dimensional neural systems navigate complexity without collapsing

These century-old problems weren’t unsolvable. They were unsearchable—until now.

This doesn’t make human scientists obsolete. It expands what science can reach. The future isn’t AI replacing researchers—it’s humans and machines exploring territories that were previously locked away.
