
BEDROCK Podcast E19 - Talking about the Application of AI in Investment

  • Writer: BedRock
  • Oct 21
  • 28 min read

 

TC

Hello everyone, welcome to the Bedrock podcast.

Tracy

Hello, everyone. This is the 19th episode of the Bedrock podcast. Thanks for tuning in. Today we have invited Bill, who comes from the Bay Area and has a background in both AI and Crypto. He may be the AI entrepreneur who understands investment best, which makes him a perfect fit for today’s topic. We will be discussing the application of AI in investment with him.

Bill, why don’t you first introduce yourself to everyone?

Bill

Hello everyone, my name is Bill Sun Qingyun. From 2014 to 2019, I pursued a PhD in mathematics at Stanford. I was also one of the earliest AI researchers at Google Brain, where in 2016 I participated in the development of the Transformer. In addition, I have worked in investment for many years: I served as a portfolio manager at Millennium, managing large-scale equity strategies with quantitative methods, and I also worked at Citadel and Point72, trading stocks and futures using machine learning methods.

Currently, I am building an AI startup called “Gen Alpha,” and we recently launched a new product, “AIUSD.AI,” which you are welcome to try. It is a stablecoin balance-treasury product: you deposit USDC, USDT, or USD One (multiple chains supported), and we provide a 20% yield, with AI acting as the wealth manager and execution trader, handling multi-chain trading, cross-chain operations, and more.

When I was in Silicon Valley, I met TC. Together, we explored how to use AI to automate fundamental investment and enable AI to reach the analytical level of excellent buy-side funds. We tried some technical approaches and exchanged concepts, which we found very interesting. Today we can expand the discussion from this topic.

TC is one of the best fundamental investors I have ever met. He can both deeply understand fundamentals and have clear insights into future technology trends.

TC, why don’t you introduce your fundamental analysis method, especially for those “high-flyer stocks” that are difficult to value with discounted cash flow? How do you predict future cash flows? If there were a tireless, 24/7 AI analyst that could multiply into thousands of versions, what kind of benefit would it bring you?

TC

Today let’s have a casual chat. We can freely discuss AI, Crypto, and investment methods.

Bill just raised a very difficult question. Our understanding of AI- and quant-assisted investment is this: AI handles well any problem that can be easily back-tested and validated, such as short-term models and predicting human behavior (Bill’s specialty), because such patterns repeat easily, and many quant teams are already working on them.

But long-term, judgment-based predictions (in real estate, for example) are very difficult, because such events have no historical precedent.

For example, predicting how AI will change the world—this type of event itself has no precedent, lacks sufficient data for validation or signal capture, and inevitably involves a lot of subjective judgment. Since it involves subjective judgment and the event has not yet occurred, making a prior judgment is almost impossible.

AI’s help lies in its ability to assist in automating our existing research and stock-picking models. In the past, many models depended on manual work, such as retrieving information and analyzing key points. Now, we can use AI tools to some extent to optimize or improve efficiency. This is the step we can currently imagine.

But after multiple exchanges with Bill, we both believe that achieving full automation is essentially impossible.

Tracy

Let me add something. Actually, it’s also very difficult for people to make prior judgments. For example, when the iPhone was first released, people could hardly predict how much impact it would have on life and at what scale. Now that AI has emerged, it is equally difficult to quantitatively predict the future.

But we are still using a set of methods to invest in related fields. Can AI mimic our way of investing?

TC

When people make judgments, some things are indeed hard to predict, but some are related to first principles of sociology, psychology, or long-term global development.

For example, businesses with network effects often have low marginal costs and self-reinforcing adoption; when people make decisions, they rely on brands to reduce cognitive load; economies of scale push marginal costs below those of new entrants. These factors have continuity, and as long as there is continuity, people have a basis for analysis.

In the long term, if we can teach this methodology to AI and let it learn how to judge which factors have continuity, then it will allow AI to get as close as possible to the thinking of analysts or investors and better assist in decision-making. Of course, it may still be wrong, but at least AI’s way of thinking will be closer to that of investors.

Tracy

What you mentioned is one aspect—that is, using first principles to make directional judgments. In investment, there is also another layer: initial judgments may be wrong, but during the process, there are many opportunities for calibration and correction. Can AI participate in this adjustment process?

TC

AI can participate in more than just calibration. Once new data comes in, it can adjust its judgments.

Another thing to add: we are not trying to teach AI a “correct” methodology, because that is hard to define and the world changes too much. Rather, we want it to better assist you, since you already rely on these methodologies to make your own judgments.

For example, another investor may be entirely momentum-based. He can turn his methodology into a strategy and let AI automate execution—that’s already very useful. Or a value investor can pass his undervaluation methodology to AI, and let it execute continuously.

In fact, many fund companies already have fixed strategies. In the past, they relied on dozens of analysts to pick stocks according to these strategies. In the long term, AI can take over many of these steps.

Bill

Here it gets interesting. This touches on two concepts I’ve been thinking about for a long time but haven’t fully automated yet.

First, every fund or founder actually has a commonly recognized investment process. New members need to be trained to fit this process, to ensure everyone collaborates with the same decision-making logic. We’ve previously discussed that by personalizing a fund’s style, you can automate part of the analyst’s work.

For example, for each company, ask: Does it have economies of scale? A brand effect? Network effects? How strong are they? How do you measure them? How do you extrapolate changes over the next five years? These questions can be designed into a relatively fixed “junior analyst” agent template (a rough sketch follows).
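
A minimal sketch of what such a template might look like in Python. The checklist paraphrases the questions above; the run_junior_analyst function, the ask callable, and the stubbed model call are hypothetical stand-ins for whatever LLM client a fund actually uses, not anything the speakers built.

```python
# Sketch of a fixed "junior analyst" agent template. The checklist
# paraphrases the moat questions discussed above; ask() is a
# hypothetical stand-in for any chat-style LLM client.
from typing import Callable

MOAT_CHECKLIST = [
    "Does {company} have economies of scale? How would you measure them?",
    "Does {company} have a brand effect? How strong is it, and why?",
    "Does {company} have network effects? How strong are they, and why?",
    "How might each of these factors evolve over the next five years?",
]

def run_junior_analyst(company: str, ask: Callable[[str], str]) -> dict:
    """Apply the fund's fixed question template to one company."""
    report = {}
    for template in MOAT_CHECKLIST:
        question = template.format(company=company)
        report[question] = ask(question)  # one model call per item
    return report

if __name__ == "__main__":
    # Stubbed model call so the sketch runs without an API key.
    answers = run_junior_analyst("ExampleCo", lambda p: f"[model answer to: {p}]")
    for question, answer in answers.items():
        print(question, "->", answer)
```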

TC

Human analysis already follows patterns. Current technologies like RAG are not good enough: I still cannot fully teach one my own retrieval model—it takes very precise prompting to make it execute step by step. In the long run, if it could help me search for information from a rougher template, it would be closer to being an analyst.

Bill

Exactly, like after watching the boss demonstrate a few times, it realizes: oh, the boss always uses these few moves. Each time, based on those moves, it adapts the questions to the specific company and pushes the work forward by itself.

TC

This situation is actually very common. Right now, the methodologies behind chatbots are very limited—you can only achieve things through very refined Q&A. But if in the future every company (not just investment firms, but any business decision-making body) could embed its own methodology, making AI a true assistant, the room for growth would be huge.

Tracy

Do you think this can be achieved soon, or will it take longer?

Bill

I think I already know how to achieve it—it just takes time. Personalizing it down to your fund’s methods doesn’t seem that difficult. The current GPT-5 Pro and Claude 4.5 Sonnet are already capable of automating many tasks. What’s missing is equipping these large models with proper tool plug-ins to let them actually do what they intend to do.

For example, if it wants to access private data or call an expert, current tools don’t support that. But at least it can search existing expert call records or connect to Bloomberg-like data plugins. With those tools, implementation becomes relatively easy.

Tracy

Indeed, AI tools are improving continuously. For example, we now already have AI that can attend meetings, record, and summarize key points.

TC

From an intelligence level, this doesn’t demand much. It’s mainly an engineering and packaging issue. Currently, on the enterprise side, there’s no service that can fully encapsulate enterprise accounts and private data sources into a model usable by the enterprise. This lack of higher-level encapsulation means each analyst is still manually applying large language models. But from an intelligence perspective, it’s entirely feasible—it’s just an engineering problem.

Tracy

So, Bill, this is the first aspect you mentioned. Is there another?

Bill

Another interesting point is that future judgments are not only about retrieving information but also about people’s understanding of sociology and history. Using Bayesian methods to describe it: you have a prior distribution about how the world might unfold, and then you update it with sparse observations (like financial reports or conference call data). Do you approach fundamental investment with a Bayesian perspective?

TC

The underlying methodology is exactly the same. Our main pattern recognition is to look for forces of continuity in the world—like network effects, scale effects, brands. These forces can be very strong, giving 70–80% predictive power about the future, but they can still be wrong.

New data points constantly come in, and you keep revising your judgments about the future. All long-term investment is predicting from point A (now) to point B (the future). Point B may be many times larger than A. But no matter how strong the argument, a guess from A to B is still just a guess.

Tracy

Once new data points come in, the revision of your judgment about the future should take the form of a probability distribution.

TC

It’s always a probability distribution. Expecting the model to guess point B correctly is mathematically impossible, since B is only knowable in hindsight. But you can teach it your methodology and let it become a capable assistant, because that’s also how you yourself judge things.
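
To make the Bayesian framing concrete, here is a toy sketch. The scenario names, priors, and likelihoods are all invented for illustration; in practice they would come from the investor’s own methodology and the actual new data point.

```python
# Toy Bayesian update over three hand-picked scenarios for a company.
# All numbers are invented; the point is the mechanics, not the values.

priors = {"dominates": 0.2, "muddles through": 0.5, "loses": 0.3}

# P(observation | scenario) for one new data point,
# e.g. an unexpectedly strong quarterly report.
likelihood = {"dominates": 0.8, "muddles through": 0.4, "loses": 0.1}

def bayes_update(priors, likelihood):
    """Multiply prior by likelihood, then renormalize."""
    unnormalized = {s: priors[s] * likelihood[s] for s in priors}
    z = sum(unnormalized.values())
    return {s: p / z for s, p in unnormalized.items()}

posterior = bayes_update(priors, likelihood)
print(posterior)
# ≈ {'dominates': 0.41, 'muddles through': 0.51, 'loses': 0.08}
```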

Bill

Perhaps your imagination about the future surpasses AI’s, but AI can tirelessly help you branch out scenarios.

TC

Imagination isn’t necessarily superior to AI. For events that haven’t happened, you may have guesses, but AI can also generate guesses based on history or summaries of articles—unless no one has ever proposed that guess before. It can also output a probability distribution, and you can revise it. If you’re experienced, you might guess more accurately at the start, and you can collaborate midway.

Another point: humans often have information that’s not online, especially before robots appeared. For example, in supply chains, several companies competing for orders—that’s all private information that machines can’t access. In such cases, human judgment may have an edge because people live in human society. Apart from that, AI is also just guessing probability distributions and constantly revising—logically, there’s no difference.

Tracy

My rough feeling is that humans can demonstrate some imagination, including the recognition of linear versus non-linear development, exponential growth, and historical case experience, thereby making explosive predictions. If AI can also possess these, would it produce similar results?

TC

But in certain industries, such as AI semiconductors, information about architecture changes and cluster scales is relatively limited and may not be available online, requiring human input.

Bill

It needs to be filled in.

TC

This creates a collaborative relationship. For example, if you input your own knowledge or expert conversation records into AI, it will do much better.

Bill

Right now, AI already does a good job at summarizing these contents.

TC

But there is still a lot of data it doesn’t know and doesn’t know how to find—here humans still help.

Bill

What’s most interesting is that once humans find the data, AI can, based on a certain hypothesis at that node, extrapolate two or three scenario analyses—just like a good analyst would give upside, downside, and base case scenarios, along with corresponding probabilities and odds. Large models today haven’t been finely tuned for this, but the intelligence level seems sufficient.

TC

The intelligence level is completely sufficient. But investment is all about predicting the future. Very few discretionary funds have standardized models; even internally, their pattern-recognition models often change. If your own methodology isn’t fixed, you can’t teach it to a machine or to a person. And there is no set of globally “best” funds, some fund A, B, or C, from which you could distill an unchanging, clear strategy. You can only start from your own fund’s methodology and build a model to replace part of the analysts’ work.

Bill

I used to think so too, but not anymore. The whole discretionary fund industry is very smart and adaptable to regime changes. But if you exhaustively consider individual stock investing, maybe there are only a few hundred effective investment processes—like quant factors. Perhaps you could just select all of them, run all of them, and then add a regime indicator—it might work.

Humans have good intuition and adapt to regimes, choosing only the best ones. But machines can exhaust all schools of thought (like Buffett, Soros, etc.), abstract their methods, and run them all. For humans, that’s too many, requiring integration; for machines, it’s just enumeration. Then weight them, like quant, and maybe that path works.

Tracy

That’s a very interesting perspective. Another point: the model’s basic abilities are sufficient, but it has blind spots and lacks personalized methods. If humans could fill in those blind spots and feed stage-specific needs into the AI, could it produce outputs at each point in time that are closer to a human’s?

TC

Here’s a problem: if you feed too many methodologies, some methods won’t be effective in certain periods. For example, Buffett hasn’t outperformed the market in the past 20 years. If you feed in too much, wouldn’t the results just become the same as the index? After all the effort, you still can’t beat the market.

Bill

After exhausting all methodologies, most patterns’ weights would ultimately be zero. Humans add weights first, selecting only one or two with the largest weights. Machines, when doing quant, might include many factors as different features, run linear regressions, then assign weights and enforce sparsity. Maybe subjective future guesses could also work in some feature space.
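
A minimal sketch of that enumerate-then-sparsify idea, assuming scikit-learn is available. Each feature column stands for one methodology’s signal; the synthetic data and the choice of an L1 (Lasso) penalty are illustrative, not a strategy anyone on the podcast runs.

```python
# Sketch of "enumerate every methodology, let sparsity zero most out":
# each feature column is one strategy's signal, the target is forward
# return, and an L1 penalty drives most weights to exactly zero.
# The data is synthetic, purely for illustration.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_days, n_strategies = 500, 50
signals = rng.standard_normal((n_days, n_strategies))

# Pretend only strategies 3 and 17 actually carry information.
forward_return = (0.5 * signals[:, 3] - 0.3 * signals[:, 17]
                  + 0.1 * rng.standard_normal(n_days))

model = Lasso(alpha=0.05).fit(signals, forward_return)
kept = {int(i): round(float(model.coef_[i]), 3)
        for i in np.flatnonzero(model.coef_)}
print("non-zero strategy weights:", kept)  # most of the 50 are dropped
```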

TC

But this question is extremely hard. Various ETFs already more or less simulate this model. The hardest part is the lack of a mechanism for adjusting weights. For example, some forces (like brand effect) have continuity—today’s strong company is very likely to still be strong tomorrow. But if a methodology hasn’t worked for many years, should you reduce its weight or reverse it? Either way, there’s no solid logical foundation. If it were possible, quant would have already done it.

Bill

The core lies in whether you can use the common sense of large models or humans, not just data. The problem with data fitting is that it’s like “driving by looking in the rearview mirror.” For fundamentals, where there are few data points, it doesn’t work well.

TC

I can’t say conclusively how the weight fitting should be done. What I am confident about is that training AI on patterns to take over humans’ pattern-recognition work is very reasonable; it can be done. Whether, after full replacement, it can beat the market is unknown, because it’s still about predicting the future. It makes things very efficient, and that itself is valuable. Once efficiency improves, how much better than the market can you get? Maybe somewhat better than those without tools, but how much better is hard to say.

Bill

It greatly enhances humans. Even with infinite funds—like Red Bull owning an entire Bridgewater team—communication costs among people are still high.

TC

Exactly. At big funds, we also faced this problem: even if dozens of analysts produced a lot of output, I couldn’t digest and absorb it all. If such a system existed, as long as the methodology was set, I could 100% accept its output. That’s already very feasible. In the traditional model, communication relied on person-to-person, with bottlenecks everywhere. This is very meaningful: as long as the methodology is effective, AI can amplify your own ability, from 1 to 5 or 10. But if a person’s methodology and ability are only 0.5, amplification still won’t beat the market.

Bill

I completely agree. The limitation in investment depends more on the taste of the person asking questions and their ability to use AI effectively.

TC

Before, strong people (say, a 2 relative to the market) could only reach 2 because of limited energy, while those at 0.8 still couldn’t beat the market. With AI, perhaps a 2 becomes a 10 by multiplying five times, while a 0.8 multiplied five times is still only a 0.8, because it never cleared the threshold.

Bill

Yes. If your methodology is wrong, amplification just produces “diligent errors.”

TC

Exactly. It’s like generating many analysts out of thin air, but without information loss.

Bill

Yes. Analysts and executors are the directions I feel most confident are already feasible. Recently, I’ve been thinking about whether AI can serve as a multi-scenario simulator—meaning when future events have multiple possibilities, can the model imagine each situation and causal chain better than humans? For untrained people, that’s very hard. But excellent investors have this skill. I wonder if large models can independently do it.

For example, something that happened just today: the U.S. restricted the supply of certain chips to China. As soon as the news came out, everyone had to react. In the past, analysts might spend a whole day researching, or just make a gut call. Now, can AI, within 30 minutes, analyze the several possible paths? How would each affect Micron itself, its competitors (SK Hynix, Samsung), and each business line? Combining facts with imagination about the future—that’s the kind of ability that defines the boundary of subjective investing for models.

TC

Some understandings are indeed very hard. Classic capital market theories like the Efficient Market Hypothesis only state that once information is released, everyone knows. But how deeply it impacts depends on each person’s depth of understanding.

For example, when Google first launched its search engine (say 20 years ago), the stock price might have only risen 20% that day. Everyone just received the information. But the true value might have been 100x, realized over 10–20 years. The efficient market only produces the initial jump, but whether that jump is fully sufficient is entirely different.

Tracy

That’s a much higher standard.

TC

Extremely high—human judgment can’t even reach it.

Tracy

Should we lower the standard? Could AI at least match the human level of scenario analysis and interpretation of new information?

Bill

That’s why I just asked: can machines map out both the most bullish and the most bearish paths, at least offering multi-modal predictions? Quant trading often makes single-mode predictions (a single Gaussian distribution), but excellent investors often make multi-modal predictions: one event, multiple cases. For example, with the emergence of search engines: can a business model be found? If not, and you burn money to buy traffic, what happens? If yes, but it’s a weak model (like charging $20/month), what happens? If yes but not monopolistic, what happens? If yes and monopolistic (capturing 70% of the global profit from traffic), then what is it worth? Describing multiple possibilities this way is great. Today’s large models, under correct guidance, should have this ability—but their world models may not be strong enough, so you have to inject your own world model.

TC

Some topics remain very difficult. For example, Netflix’s business model looks simple (subscription-based), but in reality, it’s complex: can it succeed moving from the U.S. to other countries? How does it compete with traditional media and new streaming players? There are many variables, relying on personal judgment. Even excellent investors might get it wrong. Take Netflix as an example—it wasn’t obvious at all; otherwise countless people would have already made money.

Tracy

But I agree with Bill. At least in terms of scenario-analysis thinking, AI performs quite well. I often ask it these kinds of questions. At least the thinking aligns with mine and isn’t missing anything major—it just needs deeper guidance. But what it lacks is basic cognition. For example, in Micron’s case, humans already did some research and judgments.

To assess the impact, you need to know the future market size, China’s share, the original industry structure, and the effect of new variables. For AI to do that, it has to be aware of all those judgments first.

TC

I understand what Bill means. Taking Netflix as an example—if AI can generate countless scenario assumptions, like: what if Netflix dominates and all other streaming services fail? That’s one scenario. Then it frequently tracks events and data, narrowing it down to one of 20 projections. Humans have limited mental capacity and may only form 3 scenarios. If AI can project 20, it can converge faster.

Bill

You can dynamically dive deeper, break down new problems, and research each sub-problem. Suppose it can access information, expert call records, Bloomberg—even if it can’t send analysts to investigate, looking at others’ research can provide decent tracking. Especially if you buy quality expert-call libraries and Bloomberg/Refinitiv data, and keep asking the right questions and breaking them down. Suppose you have 20 scenarios, ask 5 questions for each—getting timelines, market shares, etc.—then revise the probabilities of each scenario. That seems feasible.
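
A rough sketch of that tracking loop: hold many scenarios, re-weight them as each new answer arrives, prune the negligible branches. The scenario names, probabilities, and evidence numbers below are all invented for illustration.

```python
# Sketch of the scenario-tracking loop described above: keep many
# projections alive, re-weight them as evidence arrives, prune the
# negligible branches. All names and numbers are illustrative.
from dataclasses import dataclass, field

@dataclass
class Scenario:
    name: str
    probability: float
    open_questions: list = field(default_factory=list)

def reweight(scenarios, evidence_likelihood):
    """Multiply each scenario by P(evidence | scenario), renormalize."""
    for s in scenarios:
        s.probability *= evidence_likelihood[s.name]
    z = sum(s.probability for s in scenarios)
    for s in scenarios:
        s.probability /= z
    return scenarios

def prune(scenarios, floor=0.1):
    """Drop branches a human would quietly stop tracking."""
    kept = [s for s in scenarios if s.probability >= floor]
    z = sum(s.probability for s in kept)
    for s in kept:
        s.probability /= z
    return kept

tree = [
    Scenario("monopoly", 0.10, ["pricing power?", "churn?"]),
    Scenario("oligopoly", 0.60, ["share split?", "content costs?"]),
    Scenario("commoditized", 0.30, ["depth of price war?"]),
]
# One new data point, e.g. a strong earnings print, favors the first two.
tree = prune(reweight(tree, {"monopoly": 0.7, "oligopoly": 0.5,
                             "commoditized": 0.1}))
for s in tree:
    print(s.name, round(s.probability, 3))  # monopoly 0.189, oligopoly 0.811
```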

TC

Absolutely. If such a product could be made, it would definitely work. No matter what investment methodology you use, essentially you want to narrow the projection down to fewer chains. Because of limited capacity, humans can only hold onto two or three projection modes and track within those. But in the future, there might be dozens of projection modes.

Bill

In fact, humans’ two or three projections are already subdivided—they just simplify them in their minds and modify dynamically. Machines could map the whole tree of subdivisions from the beginning, constantly filtering and pruning. Humans rely on common sense for dynamic decisions, like the value function in Go.

TC

From today’s discussion, the reasonable approach seems to be: either input your personal methodology into AI, or internally build AI to replace or amplify analysts/your own ability—that’s definitely feasible. Or, let AI infinitely expand projection modes (since humans are limited) and continuously track and research.

Bill

The second approach requires the first one to be well executed first. If the first isn’t done well, the second won’t work.

TC

In the first approach, if a company only has one or two projection models, just teach it to get those right.

Bill

Exactly. The second approach is very open and risks not converging. To make the second converge, you need to deeply understand your decision-making framework, applying something like Richard Sutton’s “bitter lesson” that computation wins—use the machine’s massive computing power and unfolding ability, but ultimately still need converged conclusions.

TC

The second approach is also very interesting: it may not precisely guess future points, but it can approach facts more efficiently than humans.

Bill

At least it can break things down more finely than humans. If left to AI…

TC

Not just finer. Humans also have psychological and emotional issues: once we become convinced of a conclusion, it’s hard to let go. Machines can react faster, while humans may stubbornly stick to a wrong conclusion for longer.

Bill

Yes. Just today, I discussed with friends: Robinhood versus Coinbase’s potential user numbers. Suppose Coinbase converts more crypto-native users and future entrants from the crypto side, while Robinhood currently serves U.S. stock and options traders. Comparing the existing user bases and future growth is an interesting problem—everyone has different views.

TC

That’s another topic, but it could be expanded. Anyway…

Bill

But take it as an example of our earlier expansion.

TC

Still, I feel human judgment is inherently low-capacity. No matter how diligent, daily habits make me lazy, shortcut-seeking, unwilling to change. Machines have no limit on capacity.

Bill

In theory, machines would expand and carefully do 20 deep studies in every area…

TC

And they have no bias of clinging to their own view. Humans are limited by capacity, and the older they get, the less they want to change, the less capacity they have.

Bill

Exactly. The older you get, the more you rely on intuition, past experience, and habits from past success.

TC

And the harder it is to change—the energy cost doesn’t allow it. Regarding Robinhood and Coinbase, we have some views: crypto’s share of the world is still small. If a platform can cheaply retain mainstream users, then expanding from there into crypto natives is relatively easier.

Bill

That’s one conclusion. But if AI did this research, how would it analyze and approach the conclusion? It might be different. We make fast conclusions based on strong assumptions, but those assumptions are actually unknown—they need to be categorized and expanded.

TC

But let me add: today’s AIs on the market also have energy-saving mechanisms. They don’t like to change; they may rigidly stick to something. OpenAI and the others also need energy efficiency—if they served all of humanity without limits, the energy costs would be impossible. So their judgments may also be flawed. Energy-saving mechanisms aren’t unique to humans.

Bill

Which means when using GPT-5 Pro, you need to manually scaffold its chain-of-thought, layered reasoning chains, and decision trees—so that at each node, you ask GPT-5 Pro for a meaningful conclusion. If its ability isn’t enough, you need to build enough scaffolding for it.

TC

But that’s not easy either. It might secretly cut off somewhere, and you have to check, which consumes energy. Hallucinations—often caused by energy-saving—also waste effort.

Bill

The classic case is when it just blurts out a number from its model knowledge rather than actually checking the number. If you don’t notice, that’s a typical hallucination. Sometimes it makes assumptions that look reasonable, which humans also do, but the assumption is wrong, so the result is wrong.

TC

One flaw of AI today is that it acts too much like humans—but it has to. Otherwise, efficiency and energy costs wouldn’t work at all.

Tracy

Conversely, the reason it’s like humans is precisely because of compute and energy bottlenecks. It had to evolve System 1 and System 2 logics.

Bill

I don’t think that’s the main reason. The main reason is: to build intelligence, you need data. And the only large-scale data available is the internet and human behavior data—so it first had to imitate humans.

TC

I think Tracy has a point. OpenAI and others also layer their compute. For different problems, they tend to use simpler compute, activating a System 1 mode.

Tracy

When I was studying the storage sector, I learned that inference has prefill and decode stages, with a KV cache to avoid unnecessary repeated computation.

Bill

That’s a lower-level concept than what we’re discussing.

Tracy

Then please ignore it. My understanding was that it’s also a form of saving, not recalculating everything.

TC

Of course there are savings, like defining what counts as a key vector and ignoring the rest. But actively ignoring might miss a lot—especially when things undergo major changes.

Bill

Let’s pull back a bit: do you think your thinking is more Bayesian than large models’? Large models are flat—they can only be junior analysts. But as decision-makers, you think more Bayesianly, updating priors with sparse observations under small data. Models can’t do that.

TC

Current large models have many problems if used directly. First, much data isn’t online—like expert networks or private conversations—so the models simply don’t know it. Second, existing AI tools only give you conclusions.

For example, in our company’s internal decision-making, we have proprietary models, where we know the data sources and logic chains. But AI products, for ease of use, make logic chains opaque. That’s why they can’t fully replace my work.

Bill

Shouldn’t it be that you, as the boss, direct it: “Construct a logic chain using my analytical method. Here’s a new stock—mimic me. When constructing the logic chain, consider these five factors.” Let it build for you. Wouldn’t that work?

TC

In practice, general ideas or analyses work fine—we often use it. But many company models have dozens of assumptions, from topline forecasts to margin assumptions to bottom line. I can’t handhold it through dozens of prompts—that’s no different from me doing it myself.

Bill

But isn’t that what you want, to make it repeat the process on new stocks?

Tracy

Logically yes, ability-wise yes too. The problem is the interaction interface doesn’t support such handholding.

Bill

So you don’t have the time?

TC

Because it’s all chatbots. If it were fully embedded in Excel models or our platform, whenever I make an assumption, I could just input my prompts, and next time, on a new company, it would replicate the whole thing. That would be fine. But chat interfaces can’t do that.

Tracy

Right—the level is very limited, and the form too.

TC

With chatbots, at most you can tell it a few prompts or points to consider, which is far from enough. The real world is highly complex—many more thoughts need to be included.

Bill

Maybe you could build a multi-layer structure on top of your Excel model: for certain numbers, add prompts describing the source (assumption-based or researched). That might yield a better system.
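
As a sketch of that idea: each line item carries a note saying whether it is a house assumption or a researched number, and only the researched cells are handed to a model to refresh. The cell names, notes, and ask callable are hypothetical, not part of any existing product.

```python
# Sketch of a prompt-annotated model: every line item records whether
# it is a house assumption or a researched number, and only researched
# cells are handed to a model to refresh. Names and notes are invented.
from typing import Callable

model_cells = {
    "FY26_revenue_growth": {
        "value": 0.18,
        "kind": "assumption",
        "note": "House view; revisit after each earnings call.",
    },
    "FY25_dram_asp": {
        "value": None,
        "kind": "researched",
        "note": "Pull the latest DRAM average selling price from vendor notes.",
    },
}

def refresh(cells: dict, ask: Callable[[str], str]) -> None:
    """Let the model propose values only for 'researched' cells."""
    for cell in cells.values():
        if cell["kind"] == "researched":
            cell["proposed"] = ask(cell["note"])  # a human still approves

refresh(model_cells, lambda note: f"[model lookup for: {note}]")
print(model_cells["FY25_dram_asp"]["proposed"])
```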

TC

Or think of our workflow as having 100 steps—from revenue and profit forecasting to competitiveness evaluation. If 100 forecasts can be embedded into the workflow, that’s hugely meaningful. Many enterprises need this, not just investors.

Existing workflows were designed before the AI era, like in marketing, and they’re very complex. The best case would be to become AI-native, building from scratch—but that’s only a small fraction. To penetrate 99% of workflows, AI must adapt to them, not the other way around.

Bill

Right. The scarcest resource is the person who deeply understands things. To amplify that person, today you still need to hire AI application engineers, who act as product managers, iterating over two months to build a product. That’s costly and hard to scale.

Good products may be built this way, but for something like investment, where every individual is unique, it’s hard to mass-produce meaningful products. Perhaps even the AI engineers themselves need to be pure AI. In the future, maybe one person will both know how to prompt and build tools, and also think like an investor using an AI junior analyst.

Tracy

Sounds like it will take a while.

TC

Quite a while. If AI can first handle tasks clearly defined by humans, that’s already good. Investment is a higher dimension. Even in writing code, the advanced level is really a product-manager role, where allocating resources and adapting to needs are the hard parts. Right now, AI can only easily handle tasks with clear targets.

Bill

General-purpose is hard. But what you said: if you could tell AI in an Excel model where the numbers come from, how to think, and let it fetch and fill them—that’s basically building a plugin on top of Excel. And models could help write that plugin. That’s not complex. Bloomberg has already made Excel plugins, with engineers coding them.

TC

We use those a lot. They’re very rigid.

Bill

Yes. You treat those as tools. Let AI use tools better and flexibly. And if you want to make assumptions, maybe…

TC

You’re right: make AI into plugins. Current tools only solve rigid tasks (like scraping data). AI could turn that into flexible, proactive number-finding.

Bill

Even a toolbox with 20 common plugins would be good. One as a chatbot; in Excel, click a number, tell it to find and fill. Knowing 20 such plugins might cover daily needs. We seem to have defined a good product—but it’s hard to build.

TC

Very hard to build.

Tracy

It seems scalable, but difficult—because…

TC

Even the world’s largest funds probably haven’t figured out how to do it.

Bill

Yes. We can imagine the product, but execution is hard.

TC

Like Millennium—they rely on people’s self-performance, and management just integrates it. They haven’t built machines to replace everyone—because flexibility is hard to replace. In rigid cases like coding or ad-buying, it’s much easier.

Bill

Yes. Today’s discussion has been very interesting. Along this path, you could make fundamental investing more detailed. I’ve always thought about how to improve fundamental work. It’s always been a complex system, with many parts to break down. But on the other hand, fundamental investing is more suitable for textbook-step execution than subjective methods like reading market sentiment, making it good for building products or at least systems.

Tracy

I’m curious: among methodologies like sentiment, fundamentals, and momentum, which one do you think is best for productization and easiest for AI?

Bill

Definitely not fundamentals. It’s…

TC

Not fundamentals—it has to be quant, because there’s so much data.

Tracy

And quant has already done it.

TC

Yes, it’s extremely competitive. Whatever data you have, others have too. Bill, you know this.

Bill

When we did quant, our fundamental inputs were very shallow—just a few thousand datasets. And the models’ reasoning over each dataset was shallow, barely more than linear models. Plus, we had to assume that models trained on past data would still work in the future. That’s a strong assumption, and not always correct.

Tracy

So if we rule out quant, which is already well-solved, what remains are more human-like market sentiment and more fundamental stuff. Is there anything else?

Bill

For example, sudden crashes caused by market positioning structure and liquidity structure rather than fundamentals. This leans more toward quant, but because it involves complex multi-agent systems, it’s not easy. AI might be able to do some reasoning here—I haven’t thought through how far it could go.

For example, last weekend Trump tweeted about starting a trade war. Then the crypto market had a big liquidity crash. That’s systemic risk accumulation. How does it unravel step by step? That’s like counterfactual reasoning—understanding the system’s shape. Humans can be trained for this, but quant methods today can’t do it well. You need logical thinking and deep knowledge. I think large models could replace that.
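
As a toy illustration of that step-by-step unraveling (all positions, prices, and the impact coefficient below are invented): an exogenous shock pushes price down, forced liquidations add sell pressure, and that pressure can trigger the next tier of liquidations.

```python
# Toy liquidation-cascade model: a shock pushes price down, forced
# liquidations add sell pressure, which can trigger further liquidations.
# All numbers (positions, impact coefficient) are invented.
leverage_liq_prices = [95, 90, 88, 80, 75]   # per-position liquidation prices
position_size = 10.0                          # notional per position
impact = 0.3                                  # price drop per unit forced selling

price = 100.0
price -= 11.0  # initial exogenous shock (e.g. a tariff headline)
triggered = set()

while True:
    newly = [p for p in leverage_liq_prices
             if p >= price and p not in triggered]
    if not newly:
        break
    triggered.update(newly)
    price -= impact * position_size * len(newly)  # impact of forced selling
    print(f"liquidated at {sorted(newly)}, price now {price:.1f}")
```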

TC

I agree it’s replaceable. Like economic analysis frameworks—you can train it, since there are patterns. But not directly with a chatbot, because it’s easily influenced by messy online commentary. Those comments aren’t necessarily correct. They give you an average result, but what you want isn’t the average—you want insights, and sometimes very contrarian ones.

Tracy

Let me ask from another angle: one is systems collapsing or moving when fragility accumulates; the other is systems developing with fundamental continuity. Between the two, it seems fundamental continuity is easier to implement, since the mainstream case is continuity.

TC

Even continuity can break frameworks. For example, if you asked a large model early on to predict Nvidia rising 100x, it couldn’t do it. Other events are linked to real-world behaviors—for example, Soros attacking the pound. No matter how many prompts you give, it can’t output that conclusion.

Bill

The pound attack is very similar to what I said about crypto’s liquidity shock. It’s systemic thinking: attacking from one angle causes a chain reaction collapse. But before attacking, you must simulate and predict at what level the system breaks.

Tracy

I think both are systems. Fragile systems accumulate issues and break; fundamental systems are robust, resisting disturbances and maintaining state.

TC

Like the pound attack—asking “If I attack, will it break?” Human logical analysis works there. But many key inputs (like how fragile confidence actually is) may not be visible. That’s the real key to success, not the methodology itself. If the data were presented plainly on a sheet, many people could reach the same conclusion.

Bill

Yes. Plugging in the numbers is the hard part.

TC

Like central bank numbers—you don’t fully understand them. To make large-scale systemic attacks, I believe Soros and others relied on many off-the-record judgments, like personal connections.

Bill

Yes. This may become the last fortress of investing as models improve and people invent new uses. Many numbers are outside the digital world’s reach. How to solve that? I don’t have a good answer.

TC

It can’t be solved. Like Nvidia and AMD’s AI chips—you can only do scenario analysis from public speeches and customer feedback. But the best investors must have deep technical understanding (like whether MI300 works or not). That cognitive dimension is higher than public data, and that’s how they make more certain money.

Bill

That’s more like weighting: finding people you trust who understand it better, relying on their judgment. For machines, that’s a simpler solution.

TC

But those people don’t always exist. A person may be very strong in one matter, and totally wrong in the next.

Bill

Yes. Both machines and humans have blind spots. When you do expert calls, you’re essentially asking: “Who should I talk to for a high-confidence answer to this?” Machines might be able to do that too—not independently, but like a good fund manager, knowing what it doesn’t know and who might know, assigning them high weight. AI could do that.

TC

But humans are different: if I think someone is the expert, I directly call or meet them. Machines can’t do that.

Bill

That’s why I think…

TC

In the end, the best people still need to feed data back into the machine. It’s like a top fund manager meeting with a central bank governor—when he comes back, he still has to feed the insights to his analysts, and the analysts then work out the logic and conclusions. It feels similar.

Bill

Yes, the interaction with AI is similar. This may be feasible. For example, on Polymarket, you can bet money—maybe the AI’s way is to use money on such platforms to pose the best questions you want answered…

TC

Maybe there are other ways. Otherwise, in the scenario I described, the big boss already figured things out—he wouldn’t then come back and tell junior analysts to do the logic. That doesn’t happen, because they themselves have very strong logical analysis and patterns.

Bill

Right. But machines don’t have the same hierarchy as humans, where the “higher-level” person is always stronger. So it might evolve differently.

TC

Okay. Bill, do you have any other topics you’d like to discuss?

Bill

No, that’s it. Today’s direction was great—we focused on AI methodologies in fundamental investing.

TC

Although we didn’t reach a definitive conclusion, some directions are clearly feasible. The first is becoming a personal assistant, or integrating AI into 99% of organizations’ workflows—both very useful. The second is building a “Doctor Strange”-like agent that can simulate millions of scenarios and track them as they converge—if that could be done, it would be amazing.

Bill

The second one is what I’ve been thinking about a lot lately. It might be the next viable path.

TC

But the first one is more straightforward.

Bill

In my view, it’s already feasible—just needs engineering power.

TC

But that engineering might take ten years—it’s extremely complex. Still, the first approach shouldn’t be underestimated. Its ceiling is building personal agents for everyone. Imagine: first for organizations, then for small groups, then for individuals. The second one is creating an infinite intelligence—a superintelligence.

Bill

The second could be called reinforcement learning—breaking human limits, like Monte Carlo Tree Search in Go.

TC

Yes. But the first is more direct from an engineering standpoint, and it would benefit 99% of people. Over the next 10 years, this may be the main focus. The second will develop too, but it may not surpass, because human society—though each individual is weak—collectively is strong, with survival-of-the-fittest mechanisms, making the whole highly efficient.

Bill

I feel differently. When we did some research, at first it seemed easier to collect human data. But we often found the cost of collecting expert data was too high, so we had to rely on compute and environments—abstracting humans away and brute-forcing, like AlphaZero. At some point, when the cost of collecting expert data is higher than running more GPUs, the second method—if paired with a good feedback loop—might progress faster, like AlphaZero overtaking AlphaGo.

Last question: what contrarian predictions do you have for AI in investment over the next two years? Meaning, things most people wouldn’t agree with, but you believe.

TC

If AI were falling short of expectations, disagreements would be large and conclusions clearer. But it has now gone so far that it’s no longer a question of “agree or not,” only of degree.

A few points are not yet fully recognized. We still think software-engineering applications have huge room to grow. All of this reasoning infrastructure ultimately needs to deliver ROI, and that ROI requires monetization somewhere. Current monetization, like chatbots and coding, has already created massive value, but much more can be done, and this isn’t yet fully reflected in capital markets. The value may not come from today’s listed companies but from future players building agents for enterprises and the like. Either way, the value will be huge.

Second, between training and inference there will be many changes. For example, to integrate enterprise and personal data, it won’t just be chatbots; it will need to search across vast dimensions of the world to reach conclusions.

If in the future it has to work with enterprises or individuals, it will need to understand their knowledge bases and make more complex recommendations—how they recognize patterns, how to cooperate. That’s a big shift. Chatbots are inherently generalized; coding tasks are similar. For large-scale growth, it needs to personalize to every person, because the world is inherently diverse. That’s the only way to fully unlock the space. What we’ve developed so far is just the tip of the iceberg—the most generic solutions.

Tracy

My feeling (not sure if it’s consensus):

First, AI becoming an investor’s right-hand assistant should land soon. Once it does, the industry landscape might be profoundly changed. The best will have their abilities multiplied tenfold, while weaker players will find it harder to survive—further changing the competitive structure.

It’s already happening: supply has always exceeded demand; the shortage is of good investors. If AI becomes the right-hand assistant, this imbalance might intensify.

Second, in the past, fundamental investing has been less attractive as a product compared to ETFs and quant, which have more product-market appeal. But with AI empowerment—raising efficiency and extending the reach of excellent managers—it might narrow the gap in attractiveness with standardized and quant products.

Bill

So it reduces the “art” factor, making it…

Tracy

Yes—more scalable, larger, more like quant, more expandable, and with higher efficiency in decision-making.

TC

The so-called art or intuition is essentially pattern recognition—it’s just hard to articulate. If someone is very successful and always right, there must be some patterns. These patterns can be extracted by other means or at a deeper level, and taught to machines or teams.

Tracy

The bottleneck of fundamental investing has always been: no matter how skilled I am, my personal capacity is limited, and it’s hard to replicate my abilities to others. If AI can amplify personal ability, it’s already a huge help in scaling.

TC

To take it further: pure art is also strong patterns, and actually the easiest for AI to learn. Look at Picasso’s paintings—the trajectories can be followed. They’re not impossible to grasp. Even geniuses replicate themselves to some degree. They can’t reinvent from scratch every time, nor can they avoid replication entirely.

Bill

That’s interesting. Investing seems more complex, with event models in human brains that haven’t yet been abstracted like Picasso’s brushstrokes.

Tracy

Yes, a bit more complex.

TC

But my point remains—it can be abstracted. Sometimes, because it can’t be explained clearly, people call it “art.” If it truly can’t be defined, it might just be luck, with nothing to abstract. But if someone is always right, then something can definitely be abstracted.

Tracy

I think understanding the world model in a great investor’s mind is harder, but what can be done more quickly is amplifying their ability, improving efficiency, and reducing decision-making costs. That alone can already change things.

Bill

That’s more actionable and clear. With Picasso, you can backtrack his thinking through his works. But with investors, just by observing their trades and actions, it’s unclear if that’s enough to get their “canvas.” I’ve never figured this out in investing. Maybe you’d need hours with the decision-maker, tracing each decision path, major judgment, and portfolio move.

TC

And some of it is pure luck, while some is fully repeatable.

Bill

Yes. Maybe the key habit is: when doing discretionary investing, we should write memos or investment diaries for AI and ourselves—daily or weekly—to help organize the canvas. I tried this before, but got too busy to continue. If I forced myself to write very structured weekly review notes that AI could understand—what trades I made, why, what I’m watching, what I considered but didn’t trade—this might be the way for AI to capture the canvas. If you externalize your thinking, then with your notes and your actions, AI can begin to mimic your canvas.
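
A sketch of what such a structured, AI-readable weekly note might look like; the field names and example entries are invented for illustration.

```python
# Sketch of a structured weekly review note an AI could parse later.
# Field names and the example entries are invented for illustration.
from dataclasses import dataclass, asdict
import json

@dataclass
class WeeklyMemo:
    week: str
    trades: list                  # what I did
    reasons: list                 # why, in my own words
    watching: list                # tracked but not traded
    considered_but_passed: list   # ideas rejected, and why

memo = WeeklyMemo(
    week="2025-W42",
    trades=[{"ticker": "NFLX", "action": "add", "size": "1%"}],
    reasons=["Subscriber growth supports the 'oligopoly winner' case."],
    watching=["HBM supply commentary from memory makers"],
    considered_but_passed=["Short legacy media: crowded, poor odds."],
)
print(json.dumps(asdict(memo), indent=2))  # feedable to an AI assistant
```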

Tracy

That makes sense.

Well, today we had a very deep discussion about how AI may impact investing. It’s been a pleasure talking with Bill, who is perhaps the person in the investment world who knows AI best, and the person in the AI world who knows investing best. It’s an honor to have you here today; we hope this was helpful for everyone.

TC

Alright. Thanks, Bill. Thanks, everyone.

Tracy

Thank you.

Okay then, thanks. We’ll end here. Bye-bye.

TC

Bye-bye.

 

