OpenAI Moves Up the Enterprise Stack
OpenAI's latest partnership with Thrive Holdings marks an important shift in how foundation model developers work with traditional industries. Rather than offering APIs from a distance, OpenAI is embedding its research talent directly inside a private-equity platform that acquires legacy service firms in sectors like accounting, IT outsourcing, and back-office operations. The structure is unusual: instead of cash, OpenAI contributes a dedicated R&D unit in exchange for equity. This incentivizes both sides to modernize operational workflows with tailored language-model systems rather than generic, one-size-fits-all products.
Thrive, which has raised more than a billion dollars for this transformation strategy, plans to acquire companies that still rely on manual and fragmented processes. The joint team will use reinforcement learning guided by domain experts - auditors, IT technicians, compliance staff - to create verticalized AI agents capable of navigating highly specific enterprise contexts. This "co-building" approach moves far beyond conventional model licensing. OpenAI effectively gains a seat inside industry operations, collecting real-world feedback that materially influences future model design.
Crucially, the partnership is not exclusive. Thrive maintains the option to integrate other foundation models, including open-source systems, wherever they outperform OpenAI models on cost or domain-specific accuracy. The openness underscores a new pragmatism in corporate AI: the best model is simply the one that integrates well, runs cheaply, and delivers measurable improvements to workflow efficiency.
The U.S. Enterprise AI Landscape: From Experiments to Infrastructure
American enterprises have moved from experimentation to widespread deployment. Surveys conducted across 2023–2025 show a rapid shift: more than two-thirds of large organizations now use generative models in production systems, and adoption spans every major vertical. Banks use LLMs to review research reports and assist investment advisors. Hospitals deploy generative models for drafting patient communications, radiology summaries, and insurance documentation. Law firms feed long case files into summarization engines for faster first-pass analysis.
Consumer-facing industries have moved even faster. Travel and hospitality platforms use conversational agents to resolve support queries. Retailers rely on LLM-powered summarizers to extract themes from large pools of customer reviews. Amazon's model-driven review digests are a prominent example - hundreds or thousands of shopper comments distilled into a few phrases that accelerate purchasing decisions. Marketplace sellers, especially small merchants without marketing teams, now write product descriptions using generative tooling that integrates directly into Amazon's listing workflow. Even home devices benefit: Alexa's conversational overhaul is powered by generative systems that interpret intent with greater nuance.
But scaling these systems remains difficult. Only a small fraction of "pilot" initiatives translate into organization-wide deployment. The barriers are familiar: unclear ownership, fragmented data pipelines, compliance reviews, and insufficient compute infrastructure. Yet companies that overcome these hurdles report strong ROI - often above 3× on productivity measures - and continue expanding their budgets for model usage. By 2025, more than one-third of U.S. enterprises were spending over $250,000 annually on LLM inference and fine-tuning workloads. This level of spend reflects not hype, but recognition that automation and augmentation have become foundational to digital strategy.
A striking pattern has emerged: most enterprises run more than one model. Multi-model architectures allow teams to assign different models to different workloads - e.g., a fine-tuned open-source model for classification tasks and a commercial model for synthetic data generation. This diversity also mitigates vendor lock-in and allows cost-performance optimization on a continuous basis.
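To make the pattern concrete, here is a minimal sketch of a multi-model dispatch layer. The model names and the call_model() helper are illustrative placeholders, not any specific vendor's API; real wiring would depend on the SDKs and serving stacks a team uses.

# Minimal multi-model routing sketch. Model names and call_model() are
# hypothetical placeholders, not a specific vendor's API.

MODEL_ROUTES = {
    "classification": "local/qwen-7b-finetuned",   # self-hosted, cheap per call
    "synthetic_data": "commercial/flagship",       # paid API, highest quality
    "summarization":  "local/small-open-model",    # self-hosted, good enough
}

def call_model(model: str, prompt: str) -> str:
    # Stub so the sketch runs end to end; in practice this would dispatch
    # to vLLM, an internal gateway, or a vendor SDK based on the prefix.
    return f"[{model}] response to: {prompt[:40]}"

def route_request(task_type: str, prompt: str) -> str:
    """Send each workload class to the model that wins on cost/quality for it."""
    model = MODEL_ROUTES.get(task_type, "commercial/flagship")
    return call_model(model, prompt)

print(route_request("classification", "Categorize this invoice line item..."))

Because routing is just a lookup, swapping a vendor model for a cheaper self-hosted one is a one-line change - which is exactly what keeps lock-in low.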
Why Chinese LLMs Are Entering the U.S. Enterprise Stack
Perhaps the most unexpected development in 2025 is the growing U.S. adoption of Chinese open-source models - especially Alibaba's Qwen family, Baidu's ERNIE models, and systems from rapidly advancing labs such as Zhipu and MiniMax. Just a year ago, most American firms defaulted to U.S. providers. But an industry-wide shift is underway, driven by a simple, forceful combination: performance parity, open weights, and extremely low cost.
China's AI labs have aggressively open-sourced their model families with permissive licenses comparable to Apache 2.0. These releases include smaller variants (4B–32B) optimized for efficiency as well as larger models capable of general reasoning and multilingual tasks. Because the weights are openly available, enterprises can deploy these models on their own infrastructure, fine-tune them on proprietary data, and eliminate recurring per-token API costs.
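As an illustration, here is a minimal self-hosting sketch using Hugging Face transformers. It assumes a GPU machine; the specific checkpoint is just one example of a smaller open-weights Qwen variant, and any open model would slot in the same way.

# Minimal self-hosting sketch with Hugging Face transformers (assumes a
# GPU box; the checkpoint is an example open-weights model, not a recommendation).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-7B-Instruct",   # open weights, downloaded once
    device_map="auto",                  # place weights on available GPUs
)

prompt = "Classify this support ticket as billing, technical, or other: 'Refund not received.'"
result = generator(prompt, max_new_tokens=32)
print(result[0]["generated_text"])

Once the weights are local, inference cost is driven by hardware utilization rather than per-token pricing, and the same checkpoint can be fine-tuned on proprietary data without that data ever leaving company infrastructure.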
The economic implications are enormous. For companies operating workloads with high query volume - customer support, classification pipelines, internal search - switching from proprietary APIs to self-hosted open-weights models can reduce costs by an order of magnitude. Many firms report that these models achieve 80–90% of the quality of top-tier commercial models for routine tasks, which is more than sufficient for many enterprise automations.
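A back-of-envelope calculation shows why. Every figure below is an assumption chosen for illustration, not a quoted vendor or cloud price.

# Hypothetical cost comparison for a high-volume workload; all numbers
# are illustrative assumptions, not real prices.
queries_per_day = 1_000_000
tokens_per_query = 1_500                           # prompt + completion

api_price_per_mtok = 5.00                          # assumed blended $ per 1M tokens
api_cost = queries_per_day * tokens_per_query / 1e6 * api_price_per_mtok

gpu_price_per_hour = 2.50                          # assumed on-demand GPU rate
selfhost_cost = 8 * 24 * gpu_price_per_hour        # 8 GPUs running 24/7

print(f"API:       ${api_cost:,.0f}/day")          # $7,500/day
print(f"Self-host: ${selfhost_cost:,.0f}/day")     # $480/day, roughly 15x cheaper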
Airbnb: A High-Profile Example of East–West Model Mixing
The turning point for industry perception came when Airbnb disclosed that its AI concierge agent - responsible for handling a meaningful share of guest and host inquiries - relies heavily on Alibaba's Qwen. Rather than depending exclusively on U.S. closed-source models, Airbnb built its system atop a mosaic of thirteen models from U.S. and Chinese labs. Yet Qwen handles a significant share of runtime traffic because it delivers strong reasoning for customer-support tasks at much lower cost.
This model-mixing strategy allowed Airbnb to automate roughly 15% of global support requests and reduce resolution times from hours to seconds. Cost efficiency was a major factor: serving Qwen at scale is significantly cheaper than running top-tier commercial models. Speed is another benefit - smaller Qwen variants respond more quickly under high concurrency, which matters when millions of users contact support during peak travel seasons.
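One plausible way to implement this kind of mixing is a tiered router: try the cheap, fast model first and escalate only when it is not confident. The sketch below illustrates the general pattern, not Airbnb's actual system; the model names, threshold, and confidence heuristic are all assumptions.

# Tiered routing sketch: cheap self-hosted model first, premium model as
# fallback. Names, threshold, and stubs are illustrative assumptions.
CHEAP_MODEL = "qwen-14b-selfhosted"     # fast and inexpensive at high concurrency
PREMIUM_MODEL = "commercial-flagship"   # reserved for the hard long tail

def generate_with_confidence(model: str, prompt: str) -> tuple[str, float]:
    # Stub: real systems might derive confidence from token log-probs
    # or a separate verifier model.
    return f"[{model}] draft answer to: {prompt[:30]}", 0.9

def generate(model: str, prompt: str) -> str:
    return f"[{model}] answer"

def answer_ticket(ticket: str) -> str:
    draft, confidence = generate_with_confidence(CHEAP_MODEL, ticket)
    if confidence >= 0.8:                   # most routine tickets stop here
        return draft
    return generate(PREMIUM_MODEL, ticket)  # escalate ambiguous cases

print(answer_ticket("My host canceled but I was still charged."))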
The endorsement sent shockwaves through the enterprise AI community. Airbnb's CEO publicly praised Qwen's balance of quality and affordability, signaling to other U.S. firms that Chinese models are not only viable but strategically advantageous.
Startups and Investors Follow the Cost-Performance Curve
Airbnb is not the only U.S. operator adopting this approach. Several venture-backed startups have migrated their inference pipelines to Chinese models. Investors themselves have become early adopters: some firms publicly state that they moved mission-critical internal workloads away from major U.S. providers to Chinese models such as Kimi, citing better throughput and lower latency.
The downstream ecosystem is also evolving. New developer tools now support fine-tuning on Chinese model families out of the box. Some tools explicitly feature Qwen variants as first-class options due to high developer demand. The shift highlights a broader market truth: when open-source models achieve near-parity with closed-source alternatives, developers prioritize customizability, cost, and self-hosting.
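In practice, "first-class support" often means parameter-efficient fine-tuning works out of the box. The sketch below shows a LoRA setup using Hugging Face peft against a Qwen checkpoint; the model ID and hyperparameters are illustrative, and the actual training loop (run with any standard trainer) is omitted.

# LoRA fine-tuning setup sketch using Hugging Face peft + transformers.
# Checkpoint and hyperparameters are illustrative, not a recommended recipe.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B-Instruct")
lora = LoraConfig(
    r=16,                                 # adapter rank
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)        # only the adapter weights will train
model.print_trainable_parameters()        # typically well under 1% of total parameters

Because only the small adapter matrices are trained, teams can specialize an open-weights model on proprietary data with a fraction of the compute that full fine-tuning would require.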
This has strategic implications for the global AI race. Chinese labs are leveraging open-source strategies to expand global footprint, making their models fixtures in U.S. engineering stacks regardless of geopolitical headwinds. Meanwhile, Western enterprises are adopting a more cosmopolitan approach to procurement: if a model is fast, cheap, and sufficiently good, it earns a place in the toolchain.
What This Means for Enterprise AI in 2025 and Beyond
Two parallel movements are reshaping the enterprise AI landscape:
Vertical Integration by Foundation Model Companies
OpenAI's partnership with Thrive represents a new playbook - embedding research teams inside traditional companies to build domain-specialized agents. The approach ensures tighter alignment between model innovation and enterprise workflows.
Globalization of the Model Layer
Chinese open-source models have become credible options for U.S. companies. Their cost advantages, open weights, and reliable performance enable enterprises to build highly customized and economically sustainable AI systems.
Together, these developments signal a world where enterprise AI is no longer defined by a single dominant provider or a single dominant architecture. Instead, corporate adoption is becoming pluralistic, domain-specific, and cost-optimized. U.S. firms are mixing commercial APIs with self-hosted models, combining Western and Chinese architectures, and integrating foundation models directly into the core of business operations.
If 2023–2024 were years of rapid experimentation, 2025 marks the year enterprise AI becomes mature, diversified, and globally competitive. The winners will be organizations that take a pragmatic approach - balancing performance with cost, customization with risk control, and internal expertise with external partnerships. AI is no longer an add-on; it is becoming a structural component of corporate infrastructure, built from a mosaic of models that span continents.
