If AI was put to the test in 2025, then 2026 will be the year the field is forced to answer with tangible results.
Massive amounts of money are pouring into AI.
In the first half of 2025, the world witnessed excitement and a strong wave of investment pouring into the field of artificial intelligence. OpenAI raised $40 billion in a new funding round, valuing the company at $300 billion, marking the largest funding round ever for a start-up.
Safe Superintelligence (an AI developer focused on the safety of advanced systems) and Thinking Machines Lab (an AI research and development company) each raised around $2 billion – without launching a single product. Even small, newly founded startups are drawing enthusiastic investor support, raising capital at levels previously reserved for tech giants.
Those massive investments have led to equally large expenditures. Meta spent nearly $15 billion on Scale AI, largely to bring its CEO Alexandr Wang on board, and millions more to poach researchers from rival AI labs. At the same time, the biggest names in AI have committed approximately $1.3 trillion to future infrastructure.
In 2025, the world witnessed excitement and a strong wave of investment pouring into artificial intelligence. (Photo: Getty Images)
Large-scale joint ventures and projects are also being launched. Stargate, a data center joint venture between SoftBank, OpenAI, and Oracle, plans to invest up to $500 billion to build AI infrastructure in the US.
The surge in investment and contracts shows that companies are focusing all their resources on AI infrastructure, from processing chips and cloud computing to data centers.
However, whether all that spending will pay off remains an open question. Power grid congestion, escalating construction and energy costs, and pushback from communities and lawmakers, including Senator Bernie Sanders' call to limit data center expansion, have already slowed projects in some areas.
In recent months, market sentiment has shifted. Exuberant optimism about AI and soaring valuations now sit alongside concerns about an AI bubble, user safety, and whether technological progress can be sustained at its current pace.
The shift in the AI race
As the gap between successive generations of AI models narrows, investors are no longer focusing solely on a model's raw capabilities, but on what is built around it. The core question now is: who can turn AI into a product that users will actually use, pay for, and integrate into their daily workflows?
This shift is playing out in various ways as companies experiment to find what works and what users will tolerate. AI search startup Perplexity, for example, considered tracking users' web browsing behavior to sell hyper-personalized advertising. Meanwhile, OpenAI reportedly weighed fees of up to $20,000 per month for specialized AI services, a sign that companies are aggressively testing what prices the market will bear.
But above all, the competition is shifting to a “distribution war.” Perplexity is trying to hold its position by launching its own Comet browser with built-in agent capabilities and by paying $400 million to power search inside Snapchat, essentially “buying” access to existing user channels.
OpenAI is pursuing a parallel strategy, expanding ChatGPT from a chatbot into a platform. The company has launched the Atlas browser, user-oriented features like Pulse, and is reaching out to businesses and developers by allowing direct application deployment within ChatGPT.
Meanwhile, Google is leveraging its long-standing incumbency. In the consumer segment, Gemini is integrated directly into products like Google Calendar. In the enterprise segment, Google is expanding its ecosystem through MCP connectors, making its platform harder to replace.
Testing trust and safety
In 2025, AI companies faced unprecedented scrutiny. More than 50 copyright lawsuits were filed. A few disputes have been resolved, such as Anthropic's $1.5 billion settlement with a group of authors, but most remain open. The focus of the debate is shifting from opposing the use of copyrighted data for AI training to demands for compensation and value sharing.
Meanwhile, reports of "AI chatbot-induced psychosis," where chatbots are alleged to influence users' psychology and contribute to dangerous situations, have fueled debate about safety and responsibility.
Even more noteworthy, these warnings about AI aren't coming only from tech skeptics but also from tech CEOs. Sam Altman, CEO of OpenAI, has publicly acknowledged concerns about users becoming emotionally dependent on ChatGPT.
If 2025 marked the beginning of AI's "maturation" and the year the technology was put to the test, 2026 will be the year the industry is forced to provide answers. The euphoria is cooling, and AI companies must now prove their business models and demonstrate real economic value.
What happens in 2026 will therefore be either a well-earned vindication or a sweeping market "shakeout."