Artificial intelligence began as humanity’s most ambitious intellectual leap; yet it is rapidly turning into the most expensive rerun in financial history. Today, the algorithms born in research labs are not designed to pursue discovery — they are optimized for capital’s rhythm. When the 2017 paper Attention Is All You Need was published, there was still a sense of scientific curiosity beating at the heart of AI. A few researchers were trying to decode the mathematics of language. Eight years later, that curiosity has been repackaged into a financial instrument on Wall Street. Billions of dollars in venture capital have transformed the transformer architecture from a scientific breakthrough into a market derivative. The result: the models scaled up, but intelligence itself scaled down.
Llion Jones’s declaration — “I’m sick of transformers” — is not just personal disillusionment; it is a scream from within the system itself. The current trajectory of artificial intelligence has become less about science and more about financial optimization. In the first half of 2025, global AI investments reached $116 billion, but the vast majority of that money continues to feed the same architectures, the same model families, the same benchmarks. The share of funding going to truly new ideas is down 40% from 2019. What we call the “AI revolution” is starting to look more like an “AI loop”: the same idea, reproduced endlessly with bigger budgets.
This financial loop is not only reshaping the direction of technology — it is redefining the very nature of science. Research agendas are no longer set by scientists, but by investor relations teams. Startups now pitch total addressable market and return-on-investment charts rather than scientific novelty. Machine learning terminology — token, parameter, inference — has entered the language of finance, converted into market metrics. Science, once it begins speaking the dialect of money, eventually forgets its own vocabulary. That is exactly what’s happening today in AI research: discovery has been replaced by investor confidence.
MIT’s State of AI in Business 2025 report defines this trend as “AI washing.” It refers to projects that claim to be “AI-powered” without containing any real AI infrastructure. In the first half of 2025, investor presentations mentioning “AI” increased by 63%, yet only 22% of those projects contained actual machine learning components. This is one of the largest perception manipulations in the history of the digital economy: AI is no longer a technology, but a marketing label. And the financial world trades that label like a stock symbol.
This also means the tech industry has begun to consume its own future. A system obsessed with infinite growth ultimately destroys its own efficiency. Large Language Models are technically magnificent but economically unsustainable. Training a single frontier LLM cost a few million dollars in 2020; by 2025, that figure exceeds $1.2 billion. Their energy usage rivals the annual electricity consumption of a mid-sized country. And yet, profit margins remain near zero. Productivity gains don’t appear in corporate balance sheets. Investment is growing, but output is shrinking. Technology accelerates while profitability stagnates — a structural paradox of the AI economy.
The core reason lies in how finance governs technology. Wall Street logic is built on minimizing risk and maximizing return. But science is born out of risk. The novelty of an idea is proportional to its chance of failure. Finance cannot tolerate failure. Consequently, innovation is only pursued when it is guaranteed. Safe bets, predictable outcomes, and low-volatility returns have become the strategic objectives of AI research. That is precisely why radical approaches like Spiking Neural Networks receive no funding. Architectures like RWKV, which merge RNN-style memory with transformer-level performance, are pushed out of labs because they fail the “marketability test.” The greatest barrier to real innovation today is not technical — it is the comfort zone of capital.
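To make the RWKV claim concrete: architectures in this family replace attention over the full history with a small recurrent state that is updated once per token, so the cost per token stays constant instead of growing with context length. The sketch below is a toy illustration of that idea, not the actual RWKV implementation; the function name, shapes, and the scalar decay are assumptions made for clarity.

```python
import numpy as np

def toy_wkv(keys, values, decay):
    """Toy recurrent token mixer in the spirit of RWKV's WKV update.

    Each step keeps O(d) running state instead of attending over all
    previous tokens, which is the core of the efficiency argument:
    per-token cost is constant, not proportional to sequence length.
    """
    num = np.zeros_like(values[0])  # running weighted sum of values
    den = np.zeros_like(keys[0])    # running sum of weights
    outputs = []
    for k, v in zip(keys, values):
        w = np.exp(k)                       # positive weight from the key
        num = np.exp(-decay) * num + w * v  # decay old context, add new token
        den = np.exp(-decay) * den + w
        outputs.append(num / den)           # normalized mixture of past values
    return np.array(outputs)

T, d = 5, 4
rng = np.random.default_rng(0)
keys = rng.standard_normal((T, d))
values = rng.standard_normal((T, d))
out = toy_wkv(keys, values, decay=0.5)
print(out.shape)  # (5, 4)
```

Note that at the first step the output is exactly the first value vector, since the state contains only one token; every later output is an exponentially decayed average of everything seen so far.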
At this point, the issue is no longer merely technological; it is geopolitical and ideological. The global AI race is fundamentally a confrontation between two models of power:
the financialized innovation of American capitalism, and the state-planned AI model of China.
In the U.S., investor pressure dictates scientific direction; in China, state objectives do. One seeks to maximize profit, the other control. Europe struggles to define a third way, anchored in ethics and regulation — but it is falling behind.
Meanwhile, countries like Turkey remain both technologically and financially dependent: GPU infrastructure, model licenses, and core frameworks are Western-controlled. This makes true “AI sovereignty” nearly impossible to achieve.
And yet paradoxically, dependency itself might be the seed of opportunity. For countries like Turkey, the real chance does not lie in joining the transformer scaling race, but in developing alternative architectures and ethical models.
Neuromorphic computing, explainable AI, low-energy algorithms, and data independence — these are the neglected frontiers that will actually shape the future. The next disruption in AI will not come from size, but from efficiency. As energy crises intensify and carbon metrics tighten, “small but meaningful” models will replace “large but wasteful” ones.
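The efficiency case for neuromorphic and spiking approaches rests on event-driven computation: a neuron does work only when it fires, rather than on every input. The snippet below is a minimal leaky integrate-and-fire sketch for illustration only; the function name and parameter values are assumptions, not any framework’s API.

```python
def lif_spikes(inputs, tau=0.9, threshold=1.0):
    """Toy leaky integrate-and-fire (LIF) neuron.

    The membrane potential leaks over time and emits a spike only
    when input accumulates past a threshold, so downstream work
    happens sparsely, on events, rather than on every timestep.
    """
    v = 0.0
    spikes = []
    for x in inputs:
        v = tau * v + x      # leaky integration of input current
        if v >= threshold:   # fire when potential crosses threshold
            spikes.append(1)
            v = 0.0          # reset membrane potential after a spike
        else:
            spikes.append(0)
    return spikes

print(lif_spikes([0.5, 0.5, 0.5, 0.0, 2.0]))  # [0, 0, 1, 0, 1]
```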
Economically, the current state of AI resembles the run-up to the 2008 financial crisis. Back then, banks hid risk behind the illusion of stability — “too big to fail.” Today, tech giants are selling the same myth under a new name: “too smart to fail.” But the underlying logic is the same — an unfounded belief that infinite growth can persist in a finite world. The AI bubble, like mortgage derivatives, is valued not for its actual productivity but for its expectation of returns. This will not just trigger a financial correction; it could provoke an existential one. When humanity’s mechanism for knowledge creation is subordinated to capital’s compulsion for growth, knowledge ceases to have meaning — it becomes a commodity.
Two possible futures emerge from this crossroads.
In the first scenario, financial centers complete their conquest of innovation, turning AI into an infrastructure utility. AI becomes the domain of cloud providers, energy corporations, and data monopolies — a service industry like electricity or water, except privately owned.
In the second scenario, an open-source, decentralized scientific ecosystem rises. Architectures like RWKV or SNN evolve through community-driven, non-financial support. Research becomes a public act again. Economically, this model is weaker — but epistemologically, it is stronger. It restores science to its human purpose.
So which future are we heading toward?
Current data suggests the first. Amazon, Microsoft, and Google already own the physical backbone of AI through their global data centers. They also control the research grants, the hardware supply chains, and the energy infrastructure. This is an unprecedented concentration of power in human history. In the Industrial Revolution, whoever controlled the means of production controlled the economy; in the AI Revolution, whoever controls the data centers controls knowledge itself. This is not just economic hegemony — it is epistemic dominance.
Yet in the long term, finance will collide with its own limits. Because true innovation is not about funding discovery, but enabling it. Capital can accelerate science, but the moment it tries to steer it, science begins to die. That is precisely what we are witnessing now: AI expanding at a fatal velocity, while its meaning evaporates. Llion Jones’s phrase “it’s no longer fun” is not technological fatigue — it is existential exhaustion. When a system forgets why it exists, everything it produces becomes meaningless.
The future of AI is no longer a technical question; it is an ethical one.
True intelligence is measured not by accuracy, but by intent. And capital cannot own intent — financial intelligence is always rational, but never humane.
The salvation of artificial intelligence will not come from capital, but from curiosity.
Perhaps one day, somewhere in a small lab, a researcher working without a funding application will write the sentence that defines the next era:
“Attention was never all we needed.”
Sources
Stanford HAI — AI Index Report 2025
MIT Sloan — The State of AI in Business 2025
FTI Consulting — AI Investment Landscape 2025
Financial Times — Wall Street’s AI Bubble and Investor Psychology (2025)
Llion Jones — TED AI Conference Keynote (2025, Lisbon)
OECD Digital Economy Paper No. 354 (2024) — Funding Concentration in AI Research
McKinsey Global Institute — The Economics of AI Scale (2024)
European Commission — AI Act Regulatory Impact Report (2025)