BEDROCK Podcast E16 - From AI Cloud Talk to Big Models
- BedRock

- Sep 27, 2025
- 10 min read
Tracy
Hello everyone, welcome to the BEDROCK podcast. This is Episode 16, a collaboration with financial blogger Trader韭. Today’s topic focuses on AI. Thank you for tuning in.
Trader韭
Let’s get started. It’s really great that TC and Tracy are taking the time to record during their business trip in Kazakhstan. You’re here to look into local industries and companies, right?
TC
Yes. We’re looking at some local companies and internet-related opportunities. Many emerging markets draw lessons from the success paths of China and the U.S., so we are here to learn and conduct research.
Trader韭
From an investment perspective, you’re looking both at China’s development stage 5–10 years ago and the opportunities of the AI era in the next 5–10 years. In essence, it’s always about looking forward.
TC
Exactly. It’s just that different regions are at different development stages, so there are opportunities for “catch-up” and “benchmarking,” but the essence is still forward-looking.
Trader韭
Great to talk with TC and Tracy about AI. Let’s start with the hottest topic: the “Neo Cloud.” For example, Oracle (ORCL) surged last week on expectations of a big deal with OpenAI; Nebius (NBIS) is expected to collaborate with Microsoft; and there’s CoreWeave. Could you first explain what “Neo Cloud” really is? How is it different from traditional CSPs (cloud service providers)?
TC
Tracy, would you like to start?
Tracy
Sure. At the most basic level, it’s all about data centers, which provide cloud services on top. In the past, traditional cloud was more CPU-based, serving enterprises/SMBs/individuals with full-stack services. Since these customers often had limited technical capabilities, they required more “concierge-style” support. That’s why traditional cloud could command higher margins through IaaS/PaaS/SaaS add-ons.
The rise of “Neo Cloud” is mainly due to a shift in customer structure: large model companies like OpenAI and Anthropic are now the big clients. They are highly technical, don’t need much added-value on the upper layers, and instead focus on the efficiency and cost of the underlying compute. Hence, they prefer “bare metal” — directly renting GPU compute.
So, “Neo Cloud” is built on GPU-centric architecture, offering model companies more basic, closer-to-the-metal capabilities. In the short term, they win by providing scale and basic capacity, though in the future they may move upward to provide more value-added services.
Tracy
To summarize: Traditional cloud is CPU-driven, with relatively weaker customers, allowing cloud providers to capture high value-add. Today’s AI data centers are GPU-driven, with much stronger customers (model companies), so the services cloud providers can offer are more “basic.”
Trader韭
That sounds a bit like being a “contractor”: more on the supply side, doing the heavy lifting.
TC
I’d add: traditional cloud is more like a platform company, selling compute, storage, and various PaaS/SaaS full-stack services, with pricing power in their hands. The global leaders (Azure/AWS/Google Cloud) cover a wide range, from databases to security.
By contrast, “Neo Cloud” is more like a supplier: leasing compute to model companies or big tech firms, providing bare metal. You can think of it as outsourced supply and financing partners — almost like “off-balance-sheet” entities.
This type of business has lower short-term margins, fierce competition, and weaker pricing power. If these assets were held directly on the balance sheets of the hyperscalers, they would be extremely heavy, so companies like Microsoft prefer to pass these orders on through leasing or subcontracting. This is somewhat similar to how Foxconn works for Apple.
TC
The two businesses are very different. Neo Cloud hopes to ride the AI wave, scale quickly to get a seat at the table, and then gradually “upsell” software and services. But in the short term, the share of true high-margin business is low — that’s the price of getting a seat at the table.
Trader韭
So, either you do the heavy lifting and pre-fund for model companies, or you take subcontracting orders from traditional CSPs.
TC
Exactly. Traditional CSPs are in a dilemma: if they take on massive contracts from companies like OpenAI, their prices are often much higher than Neo Cloud (because of cost structure and architecture), and since they need to maintain high operating margins (>40%), they cannot sacrifice their overall profitability for lots of low-margin deals. The more realistic option is to sign the master contract themselves and subcontract to partners. Microsoft’s cooperation with Nebius can be understood this way. Oracle’s plans show that much of the AI cloud demand will flow to external partners. AWS, Google, and Azure are doing something similar. Moreover, they prefer their own custom chips (like Google’s TPU) for long-term control of ecosystem and cost.
However, many model companies still prefer NVIDIA’s ecosystem — the reason is simple: CUDA allows models to be trained and tuned faster. This creates two tensions: (1) cost/architecture, and (2) long-term ecosystem strategies. As a result, many orders flow to Neo Cloud.
Tracy
To add: whether the orders come from CSPs or directly from model companies, the ultimate demand is from the model companies.
Trader韭
So if Neo Cloud is fully GPU+CUDA, they’re basically “working for NVIDIA.” Meanwhile, traditional CSPs don’t want to be stuck with only low-value business, but they also can’t abandon the market. So they take orders now and transition gradually to custom ASICs. Is that the logic?
TC
Yes. You can think of traditional cloud as “branded OEMs” — customers pay extra for the brand and full-stack service. Neo Cloud, on the other hand, is like “white-box manufacturers” — customers (model companies) are highly technical and don’t need the concierge-style services. Instead, they need supply and financing partners to efficiently take heavy assets off-balance-sheet. This makes the model companies’ business models lighter.
Trader韭
So their strategy is to first scale up and secure big clients, then gradually raise value-add. Traditional CSPs, because of their customer structure and strategy, cannot realistically move downward into the “white-box” position.
Tracy
Another analogy: Microsoft’s customers don’t need to know which specific chips are used; what matters is service delivery. But for large model clients, the choice of GPU and network architecture directly affects training and inference efficiency and cost. NVIDIA still holds an advantage over ASICs (the consensus puts it roughly one generation ahead), and that speed directly translates into revenue growth. Model companies cannot accept half a year or more of extra debugging just to switch to ASICs.
Traditional CSPs prefer that customers not dictate hardware choices, so they can preserve more system-engineering space and profit. But model companies tend to specify hardware and network — creating a natural tension.
Trader韭
So the industry chain becomes clearer: compute chips → cloud → models → applications/agents. Each layer wants to commoditize the others and be the “chain master.” Right now, the model layer seems like the leading candidate.
TC
Exactly. NVIDIA wants its ecosystem to be central, so it supports many Neo Cloud players, helping model companies run faster on CUDA. Model companies want you to adopt their APIs and toolchains. Microsoft has its own agenda. Different positions mean different actions, and orders flow across these competing systems.
SaaS and software companies need to deeply integrate with enterprise data and solve engineering challenges. If the models are just external black boxes with little close service, it’s hard to make the integration deep. That’s why many SaaS AI monetizations underperformed expectations this year, while model companies’ ARR grew faster. Lightweight integrations like coding agents and chatbots are iterating quickly, with strong user perception and direct monetization.
Trader韭
Let’s wrap up with a quick comment on Alibaba in China. How do traditional cloud and AI cloud compare?
TC
We haven’t done detailed work, but the challenge for Chinese SaaS is that organizational and management models have long relied on people, so the value of software substitution is realized more slowly. AI will help, but it won’t be an instant leap. Domestic model ecosystems (like Qwen) have room locally but remain behind the U.S. Players are concentrated (ByteDance, Tencent, Alibaba, etc.), and building “full-stack + strong engineering delivery” will still take time and heavy capital.
Tracy
My instinct: the narrative in China will ultimately resemble the U.S., but monetization paths will differ. Since labor is cheaper in China, user subscriptions take longer to become a self-sustaining loop. Alternative monetization paths (ads, finance, etc.) may be more relevant. How these paths play out in the AI era remains to be seen.
TC
Another issue is “who will pull ahead.” If clear winners emerge and competition eases, the opportunity space will be better. Heavy capital requirements mean leaders will likely come from big tech.
Tracy
As for compute investments, China may also explore “off-balance-sheet” structures, but currently most are still managed hosting/contract-building + leasing, at much smaller scale than overseas.
Trader韭
We’ve covered upstream (cloud and compute). Should we move down to models and agents?
Tracy
Sure. Models and applications will, in turn, shape the upstream.
Trader韭
At the start of the year, some people thought models were “homogenizing,” but now OpenAI has pulled ahead again. What changed?
TC
Models aren’t “one test and done.” They involve hallucination control, stability, service capabilities, engineering accumulation, feedback loops, multi-model orchestration, and many details. As user scale and load grow, stability and operations become huge barriers. After GPT-5, “automatic model selection/orchestration” allows serving problems of varying complexity at better cost and performance.
Consumers (ToC) don’t constantly compare multiple models — they stick to the most reliable one long-term. Just like in search, being slightly better plus smoother service was enough to capture overwhelming share. Surpassing the leader in ToC is extremely difficult. ToB depends more on integration depth and engineering delivery.
Tracy
The widening gap is also clear in user scale: some players are falling behind (e.g., Meta’s chase is uncertain), while OpenAI has hundreds of millions of weekly active users. Scale advantages are reinforcing. Most costs are fixed (compute/engineering), while marginal revenue growth far outpaces marginal compute spend, further locking in advantages.
Trader韭
So from products like ChatGPT, first-mover advantage and user stickiness are strong. Even between free and paid tiers, the effect is obvious.
Tracy
Yes, unless there is a paradigm shift (like robots). Otherwise, it’s hard to shake an established habit and ecosystem.
TC
This is a game of “intelligence and efficiency,” not just cost. Like companies paying 10–20% more for top talent — you don’t know which 10–20% will make the critical difference. In productivity-related contexts, users are willing to pay for better models.
Trader韭
OpenAI’s ARR this year is around $13 billion, mostly from consumer subscriptions, right?
Tracy
Yes, mainly subscriptions. API is just over 20%, direct ToB connections are a small share. In the future, with agent products and tiered subscriptions, ARPU will increase, but consumer subscriptions remain the main driver.
Trader韭
And Anthropic?
Tracy
Anthropic has grown rapidly this year, especially in coding (Claude Code). Its revenue growth has been remarkable, and it could approach $10 billion ARR by year-end. It’s stronger in professional ToB scenarios, leveraging academic/vertical expertise (especially programming). But compared with OpenAI’s “comprehensive service + ToC scale,” Anthropic’s stickiness is weaker: programmers will keep comparing and switching. Migration costs are lower than ToC.
TC
But the market is big enough. In the Bay Area, many engineering teams treat coding agents as “assistant teams,” and senior engineers report efficiency gains of at least 20%. The key is shortening iteration cycles with human-machine collaboration. Agent-to-agent ecosystems will drive more API calls and compute demand, enabling things previously impossible or too time-consuming.
Trader韭
So basic coding jobs are threatened, but API and agent usage looks more cost-effective.
TC
Not only in the U.S. — many countries and companies will face similar structural changes.
Trader韭
If we rank by ARR, Cursor and similar tools are hot, though they may not count as “model companies.”
Tracy
Correct. Google’s AI revenues are mostly indirect via search/ads, with direct monetization small, but it still belongs in the first tier.
TC
You can divide the battlefield into ToC / ToB / To Physical World. ToC and ToB are already commercializing quickly. Physical world (world models, robotics) will take more time but offer bigger space. Tesla and Google are investing. Whoever cracks real-world problems will define the next stage.
Trader韭
Robotaxis still face “last mile” issues, while humanoid/task-specific robots could start with single roles and expand. That path seems feasible.
TC
Yes. Start with high-value, repetitive jobs, and over 10–20 years gradually move toward general-purpose.
Trader韭
So we’ve covered model leaders: OpenAI, Google, xAI (Grok). Cursor is more of a tool/app.
TC
Between the U.S. and China, the core players are just a handful. If possible, it’s rational to diversify across the leaders.
Trader韭
From your perspective as global growth investors, how do you position?
TC
Two approaches: (1) selectively invest in unlisted quality assets (like opportunities in compute/model ecosystems); and (2) in public markets, top-down selection of companies that can truly deliver earnings and cash flow. Our framework spans compute → cloud → software → models, and we allocate selectively, dynamically adjusting.
Tracy
Not much to add, that’s basically it.
Trader韭
So overall: models (incl. agents) and “cloud+chips.” You’ve also reminded us not to be too absolute — traditional giants (like Microsoft, ServiceNow) still have opportunities in ToB customer relationships and data governance. The key is iteration speed and product strength.
Tracy
The long-term AI market is in the trillions. Even if models capture 50–70% of the share, what remains is still in the trillions — plenty of room for good companies. The question is “who can scale.”
TC
I agree. As the industry scales, many once-niche areas (like data center networking, optical interconnects, liquid cooling) will become “big enough.” Don’t just watch the obvious “chain masters.” Secondary and tertiary tracks may also yield major companies.
Trader韭
Like riding a rocket — get on first, then pick your seat. Finally, what about companies that benefit from AI but aren’t labeled as “AI stocks”?
TC
For example, Meta is using AI to strengthen its business model — not a “pure AI stock,” but still a clear beneficiary. In our portfolio, broadly AI-related or AI-benefiting stocks are around 50%, but we don’t only bet on “explicit AI labels.”
Trader韭
Back to compute: odds and certainty are very different from last year, right?
Tracy
Yes. Based on long-term demand, compute still has several times growth potential, but the structure is shifting: GPU vs ASIC allocation, rising share of networking/interconnects, liquid cooling advances, and changing training/inference cluster shapes. It requires more detailed tracking.
Trader韭
Quick takes on Meta and Apple?
Tracy
Both are similar — they have native advantages and are clearly AI-empowered. For Meta, AI increases user time spent and ad conversion (up 3–5 percentage points), supporting mid-term growth. But the concern is that if they lag in “models and comprehensive services” long-term, it will be hard to secure a seat at the trillion-dollar AI table — they become more like an “option.” Apple’s situation is similar or even weaker, unless they take a more aggressive stance on their model gaps and strategy.
Overall, this cycle isn’t about “internet connectivity,” but about “AI replacing labor and boosting efficiency.” The space is huge, the participants many, and the pace fast. There are both opportunities and risks. Our strategy is to use systematic methodology to find globally competitive, execution-capable companies, dynamically adjust, and proceed prudently.
Trader韭
This discussion is purely for methodological exchange and does not constitute investment advice. If you want to know more about our views on the four core model companies, you can contact our assistant (qualified investor status required to discuss private funds). That’s it for today — thank you TC and Tracy for sharing.
TC
Thanks, Tracy.
Trader韭
Haha.
Tracy
Thank you.
Trader韭
See you next time.



