Most people think expert networks sell access to experts.
They’re not entirely wrong, but it’s a surface-level understanding.
If we break it down to first principles, what they really sell is information asymmetry.
At their core, expert networks give customers access to insights they can’t easily get elsewhere, as quickly as possible. This isn't just about finding a person; it's about extracting the specific, non-obvious insights that person possesses before that knowledge becomes obsolete or widely known.
But the future of this industry isn't about slightly more efficient calls or bigger libraries of transcripts. From my work with some of the largest expert networks and ongoing conversations with executives across the industry, the real strategic debate appears to be coalescing around what step-change comes next.
The dialogue is often dominated by the immediate, tangible question: "Where's the ROI?" This reveals a deeper tension between optimizing for today and truly innovating for tomorrow.
For me, the next game to be won will likely be predicated on synthetic experts: AI systems trained on vast, proprietary datasets of real-world expertise, capable of delivering or augmenting personalized, on-demand insights without the traditional bottlenecks of human supply.
Asymmetry
Information asymmetry is the engine of the entire expert network industry.
Buyers—whether they're private equity investors conducting due diligence or management consultants mapping a new market for a client—are fundamentally seeking an edge. This edge comes from knowledge that is absent from public sources like financial statements or market reports.
The knowledge that buyers are looking for isn't raw data; it's insights informed by experience and earned credibility on a specific company or trend at a precise moment.
As an example, an investor doesn't just want to know a company's revenue; they want to know from a former sales director why customer churn in the Midwest division spiked last quarter.
This kind of asymmetry has a short half-life.
While difficult to quantify without direct, sector-specific data, we can infer its fleeting nature. In dynamic sectors, insights are highly perishable; a take on a supply chain disruption is valuable today but could be obsolete tomorrow.
The core task of the network, then, is to deliver that fleeting insight with speed and minimal friction. Traditionally, networks have done this by brokering calls—a form of on-demand, specialized recruiting that tends to be far more manual than most clients realize.
As the market matures—with forecasts suggesting it could surpass $12 billion by 2033—the pressure to scale this process without friction is immense.
And that figure likely doesn't factor in how AI could realistically inflate the TAM by enabling entirely new use cases or how reduced operational costs might lower the barrier to entry, potentially increasing the number of specialized players in the space.
Act I
The first act of the industry, pioneered by firms like Gerson Lehrman Group (GLG) in the late 1990s, was built on direct, one-on-one calls.
The job of an associate in this era was a game of pure hustle: receive a client brief, scour internal databases (and later, LinkedIn), and hit the phones to recruit a relevant expert for a single conversation.
Public metrics on the model's scalability limits are scarce, but anecdotal reports describe it as a process of manual outreach that scaled profitably (but linearly) with demand.
This hands-on approach worked because it directly addressed asymmetry, but it was inefficient.
Inefficient or not, it proved that people would pay for asymmetry as a service.
Clients willingly paid a premium—often over $1,300 per call—for a single hour of targeted conversation.
This was Act I: personalized scarcity.
Act II
The transition to the industry’s second act arrived when firms realized the value of a call didn't disappear when the phone was hung up. They began to codify these conversations into searchable, proprietary databases, effectively detaching the expertise from the live human who delivered it.
Practically, this meant turning ephemeral talks into enduring assets.
This turned insight into a scalable product with zero marginal cost of replication. An analyst could now "talk" to ten experts before breakfast by simply reading their past conversations.
Buyers saw the value in access and accepted the trade-off. They exchanged the personalization of a live call for the aggregation and workflow advantages of a library.
But this bet wasn’t perfect. For hyper-specific queries, relevance gaps and transcript staleness persist, and a static library can’t field the bespoke follow-up questions a live call would, which breeds client dissatisfaction.
Though not mutually exclusive with live calls, this business model changed the unit economics of information asymmetry. Critically, it reduced per-insight costs, making expert networks accessible to smaller firms and answering the cost inefficiency of the old model.
Incumbents like GLG and AlphaSights, after some initial skepticism, have since accelerated their investments in transcript offerings, recognizing the compounding value of these data assets.
Act II was a critical proof point: it confirmed that the market for insights could thrive independently of real-time human interaction.
Tegus was the winner; Stream and Third Bridge (Forum) further validated the model.
Building on Act I, it showed that asymmetry could be productized, setting the stage for further abstraction.
The Gradient
The advent of Generative AI has sparked a wave of conversations about optimization. The immediate, tangible ROI comes from addressing short-term operational pains: using AI to source experts faster, automate client brief interpretation, and streamline compliance. These are OpEx reductions that drop straight to the bottom line or create a competitive edge in speed.
But this article isn't about short-term ROI. It's about long-term defensibility.
Focusing only on these incremental gains is what I call "climbing the gradient." It’s a necessary phase of productivity uplift, but it mistakes optimization for transformation.
From my conversations with expert network executives, the focus often remains on using GenAI to improve existing business models, not to invent new ones. The core tension is balancing short-term uplift with long-term differentiation. This is the classic Innovator's Dilemma: a fear of cannibalizing current market share to build for a future that is still full of unknown unknowns.
Gains from faster sourcing or bigger libraries are real, but they commoditize quickly. AI-powered summaries, for instance, are rapidly becoming table stakes—driven not by competitive differentiation but by changing consumer expectations shaped by AI tools in other parts of their lives.
If AI makes the basics table stakes, then true, lasting advantage—a real moat—must come from innovating on top of it.
The game isn't about being 10% faster at scheduling a call.
It’s about asking whether a call needs to happen at all.
Act III
Enter Act III.
I believe what we’re about to see blends the personalization of Act I with the scalability of Act II: productized, personalized insights.
This model delivers the tailored, interactive experience of a one-on-one call with the scalability and efficiency of a digital product. It's the ability to ask a new, specific question and get a reasoned answer on-demand, powered by the aggregated knowledge of thousands of prior expert conversations.
My prediction is that the next generation of dominant expert networks will be built upon synthetic experts.
It is critical, however, to differentiate a synthetic expert from a simple RAG search interface. A synthetic expert is better understood as the underlying infrastructure that can power new form factors—whether through an API for AI agents, an audio response, or even an AI avatar. It is the foundation upon which new knowledge products can be built and consumed.
These are AI personas that could embody deep, domain-specific knowledge and reasoning, available 24/7. They could simulate experts and/or act as proxies, drawing from aggregated data to respond dynamically.
I expect this will manifest first as augmentation, then as a premium upsell, and eventually as a viable alternative for certain use cases.
But how do you build one?
It starts with transforming raw conversation transcripts into structured, machine-readable data. A raw transcript is just a block of text, but through processes like speaker identification, entity recognition, and structuring dialogue into Q&A pairs, it becomes a high-fidelity dataset. This is the foundational building block.
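To make that structuring step concrete, here is a minimal, hypothetical sketch in Python. It assumes a transcript is already available as a list of speaker-attributed utterances; the class names, fields, and pairing logic are illustrative, not any network's actual schema, and entity tagging is left as a placeholder.

```python
from dataclasses import dataclass, field

@dataclass
class Utterance:
    speaker: str        # e.g. "analyst" or "expert"
    text: str
    timestamp: float    # seconds from the start of the call

@dataclass
class QAPair:
    question: str
    answer: str
    expert_id: str
    entities: list[str] = field(default_factory=list)  # entity tagging left as a placeholder

def transcript_to_qa_pairs(utterances, expert_id):
    """Pair each analyst question with the expert reply that follows it."""
    pairs = []
    pending_question = None
    for utt in utterances:
        if utt.speaker == "analyst":
            # Treat every analyst turn as a (possibly compound) question.
            pending_question = utt.text
        elif utt.speaker == "expert" and pending_question:
            pairs.append(QAPair(question=pending_question, answer=utt.text, expert_id=expert_id))
            pending_question = None
    return pairs
```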
A hybrid model—using fine-tuning to establish the core mental models and RAG for dynamic, real-time updates—could create synthetic experts that effectively learn every single day.
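As a rough illustration of the retrieval half of that hybrid, the sketch below builds on the QAPair structure above: a naive keyword-overlap retriever stands in for an embedding index, and the prompt assembly forces whatever model sits downstream (fine-tuned or not) to answer only from cited excerpts. All names are hypothetical.

```python
def retrieve(query, corpus, k=3):
    """Naive keyword-overlap retrieval; a production system would use an embedding index."""
    q_tokens = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda p: len(q_tokens & set((p.question + " " + p.answer).lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query, evidence):
    """Assemble a prompt that forces the model to answer only from cited excerpts."""
    context = "\n".join(
        f"[{i}] (expert {p.expert_id}) Q: {p.question}\nA: {p.answer}"
        for i, p in enumerate(evidence)
    )
    return (
        "Answer using only the excerpts below, and cite them by index.\n"
        f"{context}\n\nQuestion: {query}"
    )
```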
The Caveat
Let's be clear about the role of a synthetic expert. Its purpose isn't to merely "create new knowledge" from thin air. Its true power lies in moving beyond simple synthesis to model the reasoning patterns of an expert.
It’s about understanding how a specialist thinks—their mental models, the way they weigh trade-offs, and the frameworks they use to connect disparate facts.
When asked about a specific company, the synthetic expert wouldn't invent a story. It would synthesize the verified experiences of all relevant experts and deliver an originally reasoned answer grounded in those collective mental models. It's an expert on the data and the reasoning within it, not a replacement for a single human's memory.
This, of course, introduces a real risk of fabrication and hallucination.
And if AI models fail to significantly reduce these inaccuracies, the industry could remain stuck in incrementalism, reliant on human validation. Counter-arguments from skeptics often point to AI overhype cycles and these persistent flaws.
However, the foundational models are improving daily.
More importantly, I believe market dynamics, rather than legal mandates, will solve for trust. Users will naturally gravitate toward platforms that can ground every claim with a direct citation back to a source transcript, creating a competitive advantage for those who build this in.
A clean, high-fidelity dataset becomes the ultimate backstop for accuracy.
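One way to make that backstop operational is to treat citations as a hard requirement rather than a nicety. A minimal sketch, assuming answers carry explicit source identifiers (the types and checks below are illustrative only):

```python
from dataclasses import dataclass

@dataclass
class GroundedAnswer:
    text: str
    citations: list[str]  # identifiers of the source transcripts backing the claim

def is_fully_grounded(answer, known_sources):
    """Reject answers that cite nothing, or that cite sources outside the library."""
    return bool(answer.citations) and all(c in known_sources for c in answer.citations)
```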
The vision of a truly synthetic expert is a bet on the future trajectory of AI, not its current state.
The accelerating pace of releases—from GPT-4 to Gemini to Grok—shows that the time between frontier models is now measured in months, not years. This rapid progress makes it more probable that capabilities like reasoning and reduced hallucination will improve in the shorter term.
We're betting on the destination. As OpenAI's CPO Kevin Weil has noted, the AIs we have today are "the worst you'll ever use for the rest of your life."
Bottlenecks
The expert network industry is, in my view, fundamentally supply-constrained.
While demand grows, the pool of top-tier, recently-tenured experts is finite. This supply-demand imbalance is the bottleneck. "Expert fatigue" is a real phenomenon; the same specialists get bombarded with generic messages, leading to burnout and unreliable response rates.
Human relationships, despite what marketing collateral might claim, are likely not the moat. Experts are primarily driven by incentives like reliable pay, not deep-seated loyalty. This dynamic pushes the industry further toward commoditization.
Staleness compounds the problem. Because insights have short half-lives, static libraries quickly fall behind and cannot handle the bespoke follow-up queries that are critical for deep diligence.
Synthesis
Synthetic experts could resolve these bottlenecks by innovating on the supply side itself.
This doesn’t mean replacing humans outright, at least not at first.
I think augmentation will come first. For example, an AI could moderate a human-to-human call in real-time, ensuring key topics are covered. These interactions create richer data loops—the moderated data is then used to train better synthetic models, creating a powerful flywheel.
Over time, an AI moderator can improve faster, be more consistent, and have a lower error rate than a human, benefiting from the collective intelligence of all previous calls.
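A first, deliberately simple version of that moderator might just track which topics from the client brief haven't come up yet; a production system would need semantic matching rather than the substring check sketched here. This reuses the Utterance type from the earlier example, and the function name is hypothetical.

```python
def uncovered_topics(brief_topics, utterances):
    """Return the client-brief topics that have not yet been discussed on the call."""
    discussed = " ".join(u.text.lower() for u in utterances)
    return [topic for topic in brief_topics if topic.lower() not in discussed]

# Mid-call, the moderator (human or AI) can be nudged:
# remaining = uncovered_topics(["churn drivers", "pricing power"], call_so_far)
```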
This aligns with Jeff Bezos’ philosophy to build on what will not change. In expert networks, the unchanging demands are for faster, cheaper, and more complete insights.
Synthetic systems are, arguably, simply a way to productize this at scale, turning human supply bottlenecks into data-driven flywheels.
Data
In Act III, I believe differentiation will come from proprietary datasets, not expert headcounts.
This is what I see as the most important takeaway for expert networks today.
If you believe synthetic experts are the future, then every transcript becomes a priceless proprietary asset. Many networks currently overlook the long game of building a cumulative, compounding asset.
The winning strategy is to invest in cumulative capture.
The transcript libraries of Act II are thus invaluable prerequisites for participating, let alone competing, in Act III. But their value is contingent on their quality: inaccurate or polluted datasets will produce poor-quality results downstream.
Importantly, this differentiation is only possible if the data is meticulously cleaned, structured, and annotated. The success of companies like Scale AI underscores this truth: high-fidelity, human-verified data is the raw material for reliable AI.
The winners will likely be defined by the coverage, completeness, and credibility of their datasets.
A deep, niche dataset on semiconductor manufacturing becomes infinitely more valuable than a broad one on "technology" because it allows an AI to build a detailed model of how these specific experts reason—how they weigh trade-offs between fab capacity and node innovation, or how they analyze geopolitical supply chain risks.
That reasoning pattern is the real alpha.
As this shift occurs, business models will also likely evolve—from selling calls to selling API access, per-query fees, or full-fledged synthetic expert subscriptions.
The customers of the future aren’t just humans; they’re AI agents.
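If the buyer is an agent rather than a person, the product surface is an endpoint, not a call. Purely as a hypothetical shape (the pricing, names, and wiring here are invented for illustration and build on the sketches above), a per-query interface might look like this:

```python
from dataclasses import dataclass

@dataclass
class QueryReceipt:
    answer: "GroundedAnswer"   # grounded answer type from the earlier sketch
    tokens_billed: int
    price_usd: float

PER_QUERY_RATE_USD = 0.50  # hypothetical flat per-query fee

def answer_query(query, corpus):
    """Retrieve evidence, build a grounded prompt, and meter the request."""
    evidence = retrieve(query, corpus)               # from the earlier sketch
    prompt = build_grounded_prompt(query, evidence)  # from the earlier sketch
    # Placeholder: a real system would send `prompt` to the fine-tuned synthetic-expert model.
    answer = GroundedAnswer(text="<model response>", citations=[p.expert_id for p in evidence])
    return QueryReceipt(answer=answer, tokens_billed=len(prompt.split()), price_usd=PER_QUERY_RATE_USD)
```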
Future
The demand for information asymmetry will endure. What will change is how it is delivered. The endgame is not a better expert network; it’s expertise as a utility—a programmatic, consumable resource available on demand.
This shift won't be uniform. Regions with high demand but historically constrained expert supply may leapfrog the West, adopting synthetic models aggressively not just to compete, but to redefine market access.
This transformation also reframes the role of human capital. The value shifts from brokering access to curating knowledge. The most critical employees will no longer be the ones with the best Rolodex, but those who can manage the human-in-the-loop processes that ensure data fidelity—the ultimate backstop for a synthetic expert's credibility.
The networks that view their datasets as simple archives—the exhaust of their current business—are liquidating their only durable asset for the future. In contrast, those who treat every transcript as a building block for cumulative intelligence are positioning themselves for an age of unlimited expert supply.