Primary Thoughts


Eos: identity security for the agentic era

Today, we’re excited to announce our investment in Eos, and AppViewX’s acquisition of Eos. That this has all happened in just the last nine or so months is staggering. But the story started with deep conviction in a market, and some serendipity.

When we met Archit Lohokare and Kash Ivaturi in the summer of 2025, it felt meant to be. For months, we had been thinking about the area of agent identity and access, so much so that we were considering incubating a business in this space. 

In our view, most AI security startups at that point were focused on the obvious, low-hanging-fruit issues of using ChatGPT and Claude: making sure sensitive data did not end up being fed into an LLM, or preventing a customer-facing chatbot from going off the rails and making a bad business decision. These problems seemed like just the beginning – a wave of enterprise-grade problems was still to come. Chief among them were the issues of governance, access, and privilege. What does an AI agent access, and when? How should companies treat agents within their identity and governance frameworks? Who has permission to use which agents, and in what contexts? None of these problems were getting enough attention, even though they represent the critical questions underpinning identity in the age of AI, a massive market there for the taking.

With this idea, and these open questions, we met Archit and Kash as they were beginning to form what would become Eos. The connection and alignment were instantaneous. Archit and Kash, having led identity and access management and governance products at CyberArk and Idaptive, had seen firsthand the disruption heading this market’s way because of agentic systems. Agents change the way we need to architect identity and governance programs within the enterprise. The traditional incumbents are not prepared for that disruption. This was the opportunity that led to Eos, and in the summer of 2025, we seeded the business and they were off to the races.

Today does not mark the end of that story, but a very exciting step change and acceleration of it. We are excited that Eos is being acquired by AppViewX, a leader in the Machine Identity Security, Certificate Lifecycle Management (CLM), PKI, and Post-Quantum Cryptography (PQC) space. Archit and Kash will assume the CEO and CTO roles at the combined company, and AppViewX will integrate Eos into its product offering, repositioning around a next-generation AI Agent and Machine Identity Security platform.

We could not be more excited about this milestone. AppViewX gives Archit and Kash the platform and distribution resources to go after the Eos vision, and to do so at a scale and speed they could not have reached otherwise. AppViewX’s Machine Identity Security and CLM products should be a natural component of the Eos offering and make the technology even stronger. In AppViewX, we saw a partner that could help get Eos to market faster, and with more impact, in a space that is getting hotter by the second.

We could not be happier for Archit and Kash, two unbelievably seasoned professionals and operators in the identity security space. There are no better builders in this market than them, and the combined Eos-AppViewX entity will be a force to be reckoned with in this next wave of Identity Security companies built for the AI and Post-Quantum era. We’re excited to be along for the ride.

A special thanks goes to Doug Steinberg, an Operator-in-Residence with us who spearheaded much of the research and diligence that led to our interest and conviction in this market. Without his hard work, the Eos story would have been very different, and it almost definitely would not have included us at Primary.

Congratulations to Archit, Kash, and team Eos!

Primary’s 2026 investment thesis for GTM Tech

Go-to-market is a space our team knows intimately: many of us were previously buyers and users of these solutions, and Primary spends thousands of hours each year working with our portfolio companies on their sales, marketing, and customer success rhythms. When we published our last GTM thesis, AI was just beginning to shake things up; today, that shakeup has accelerated dramatically. We have gained clarity on what truly matters for building a category-defining GTM company and are excited to hear your feedback.


GTM startups must fundamentally transform the company P&L

Our thesis: AI has finally made it possible to transform the GTM P&L, and the market is demanding companies act accordingly. Today’s best-in-class GTM solutions embrace these four market realities:

  1. Gross margin pressure requires companies to radically reduce operating expenses
    The best companies are investing heavily in compute to stay competitive and eroding gross margin in the process. They simply must get more efficient with their operating expenses, but GTM financial metrics such as CAC payback and net magic number have been stagnant for years.
  2. AI-native companies are prompting an evolution to leaner org design
    Growing revenue has historically been expensive and wildly inefficient due to the number of people required: human BDRs can only handle so many touches per week even with the newest tools, companies are forced to hire another middle manager for every X new AEs, and so forth. AI-native companies are achieving staggering productivity per employee, and are setting new expectations for what’s possible. Every company must reconsider the human intensity of their GTM motion.

  3. Function-specific software silos are collapsing
    It’s finally possible to break down the barriers across functional software categories (e.g. marketing software vs. sales software vs. customer success software).
  4. Companies will collapse and integrate traditional GTM roles
    We are on the brink of a new “full-funnel GTM” role, removing the traditional functional specialization between Marketing - BDR - Sales - SC - CS - AM. It’s conceivable to have one person – backed by powerful agents – cover pipeline generation, sales, and ongoing customer management. 

We are excited to back founders building AI-native solutions to enable this exciting new world.


Introducing PRIME


We are looking for founders or aspiring founders building GTM businesses with a direct-line impact on the P&L.


We've codified this as having demonstrable impact on PRIME:

  • Productivity (Revenue per Employee)
  • Retention (NRR or GRR)
  • Investment Efficiency (Net Magic Number / S&M Efficiency)
  • Momentum (Top-Line Growth)
  • Expense Reduction (Headcount and Cost Eliminations)

This is our non-negotiable filter for GTM investing. Companies that explain ROI through opportunity cost, marginal efficiency gains, or indirect metrics don't fit our mold.

Consider a category such as digital deal rooms: they may accelerate cycle times and marginally improve win rates, but multiple steps of math are required to translate that to the P&L. Marginal gains are certainly valuable, but we are committed to backing the companies that seek to deliver order-of-magnitude impact.
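For readers who want the arithmetic behind the PRIME filter, the metrics above can be sketched in code. These use standard SaaS definitions as we understand them (assumed formulas for illustration, not definitions from this post):

```python
def revenue_per_employee(arr: float, employees: int) -> float:
    """Productivity: ARR divided by total headcount."""
    return arr / employees

def net_magic_number(net_new_arr_quarter: float, sm_spend_prior_quarter: float) -> float:
    """Investment Efficiency: annualized net-new ARR generated per dollar
    of prior-quarter sales & marketing spend."""
    return (net_new_arr_quarter * 4) / sm_spend_prior_quarter

def nrr(starting_arr: float, expansion: float, contraction: float, churn: float) -> float:
    """Retention: net revenue retention over a period, as a multiple of 1.0."""
    return (starting_arr + expansion - contraction - churn) / starting_arr

# Example: $2M of net-new ARR in a quarter on $4M of prior-quarter S&M spend
print(net_magic_number(2_000_000, 4_000_000))  # 2.0
```

A GTM product with direct P&L impact should move one of these numbers without any intermediate translation steps.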


Founders: we thrive with serial entrepreneurs who are obsessed with their customers

We've learned that second-time founders are where we add the most value. Our strongest fit tends to be with serial entrepreneurs who value what we bring beyond a logo on the cap table: our network connectivity and operating experience. Exceptional talent is exceptional talent and we’re excited to meet any high-slope founder who shares our category conviction. But founders such as Amanda Kahlow at 1mind, Ganesh Ramakrishna at Lyric, Richard Harris at Black Crow AI and Jon Sherry at Alium exemplify the type of builders we're excited to partner with: they have unfinished business. 

The founders we're drawn to are aggressive about their own GTM and deeply customer-obsessed. They understand you cannot build enduring enterprise value without enduring customer value, and they're constantly challenging their own assumptions based on what they learn in the market. They obsess over the risk of the gross retention apocalypse. At the same time, they're Challenger sellers—teaching the market where it should be headed. Importantly though, the founding team must marry this customer obsession with world-class technical acumen. 

The path to full-stack GTM 

We love hearing a founder’s articulation for how they will bring this new GTM operating system to life. We’ve thought extensively about wedge vs. compound startup tradeoffs and are excited to share our own thoughts: 

Wedge Strategy

The wedge approach starts narrow but has a strategic vision to become a platform. Wedge products must still meet PRIME criteria and demonstrate clear P&L impact, because that is what earns the right to eventual share-of-wallet expansion. But beyond financial impact, wedge products need two additional qualities:

  • Obvious right to win – whether it’s a “Zero CAC” founder with a rolodex of prospects who already trust them, a team of 10x engineers shipping products at unprecedented speed or a former operator with an earned secret, the companies we back must have a clear right to win the lane of their wedge.
  • Own the context – the best wedges embed into the bloodstream of workflows and data, earning the right to capture net-new signals or refine existing data into decision-grade context. If a wedge cannot maintain and resolve context, it will struggle with durability regardless of the initial P&L impact. Playing the context angle correctly also makes launching subsequent SKUs more seamless and more defensible.

1mind is a great example of a successful, context-rich wedge approach. When we led the Seed round, we were investing in Amanda’s vision to build an end-to-end AI brain for all GTM functions from first prospect engagement through ongoing support, but we recognized they needed to start with a wedge. 1mind carefully selected an inbound AI BDR wedge that facilitated thousands of customer conversations, enabling impressive context. The data ingestion needed to train their AI “Superhumans” combined with that conversational context positioned them to expand into adjacent workflows with an unfair advantage. 1mind’s right to win was Amanda’s track record: she had previously built a category-defining unicorn with 6sense, and had an incredible rolodex of prospects ready to take her at her word.

1mind also exemplifies PRIME: it makes sellers more efficient by having AI handle upfront conversations so reps can manage bigger deal loads (Productivity), increases efficiency through reduced hiring needs, thereby reducing CAC and increasing net magic number (Investment Efficiency), and drives obvious expense reduction because teams can hire fewer people (Expense Reduction). 

"Wider" / Compound Strategy

The compound approach goes full-stack from the onset. The core question: why are customers trusting you for this risky, transformative move? Because these plays command higher switching costs, we believe founder domain fluency is essential. 

We recognize compound businesses take longer to build, particularly in the 0-1 GTM phase. During that extended building cycle, we're looking for a best-in-class approach to design and development partners as well as evidence of off-the-charts engineering and roadmap velocity. We love it when we hear comments along the lines of, “I can’t even keep up with our engineering team because we’re shipping so fast” (a recent verbatim from Amanda at 1mind!).

If you’re transforming the GTM P&L, get in touch

The GTM category is at an inflection point. AI has created both tremendous opportunity and tremendous noise. The winners will be the companies that don't just bolt on AI capabilities to existing workflows, but fundamentally reimagine how companies acquire, retain, and grow their customer base.

We're committed to finding category-defining plays early and supporting them relentlessly. If you're building something that transforms the GTM P&L, we want to hear from you.

Investing in the reinvention of cybersecurity

We are at a crucible moment in cybersecurity, one in which AI is changing everything. Just last week, Anthropic caught a near-successful cyber espionage attack that was 80–90% executed by AI agents. These agents ran many parallelized attacks at thousands of requests per second, the kind of speed and sophistication a human could only dream of. The world is getting scarier, and CISOs wake up every day confronting this reality.

At the same time, AI is a new technology platform that presents internal risks. Top-down mandates from C-suites and boards mean that AI experimentation is ubiquitous, creating new threat vectors. Additionally, new hardware and configurations in the data center mean that the physical infrastructure to protect is different too, necessitating new solutions.

This change is scary, but also exciting. For too long, cyber has been bogged down in the SPM craze—“posture management” and dashboards. Of course, visibility is necessary, but the dashboard sprawl has become comical. A CISO’s main problem is no longer blindspots, but rather an excess of data. Alert after alert, and security teams seemingly get further from actual solutions. In the words of a CISO I know well, many tools are just “more software for software.”

The evolution of the market in this way makes sense. Some visibility-focused startups have become category-defining, generational companies. In pursuit of the next Wiz, VCs have over-funded visibility. Despite some successes, most of these companies have contributed to the most fragmented, sprawling, and redundant stack in the history of cybersecurity.

AI has the potential to break this pattern by automating workflows, reducing costs, and finding better signals in the noise. We see this happening across all security categories, from identity to data to endpoint, and have made investments accordingly. The time to build in cyber is now—to give security leaders less, not more; to think of cyber from a blank slate with AI at the center; to use the data the SPMs of the last decade have generated to deliver unique value.

And New York is a great place to build. Although Primary is much bigger than we once were and we now invest globally, the firm began with a bet on New York. A bet that New York is the greatest city in the world, with the highest density of customers and talent, and that founders should be here. This is true for cyber. New York is home to the largest Israeli community outside of Israel, where a disproportionate number of cyber companies are born. And, as the center of the financial world, New York is home to some of the biggest, most sophisticated cyber customers in the world.

We are seeing the gravity around New York strengthen, as companies like Wiz, Cyera, and Axonius have all relocated their headquarters here. Today, there are thousands of security operators in this city, and that number is growing. Undoubtedly, many of these operators will start great companies of their own. We believe we are on the verge of an explosion of the New York cyber ecosystem.

Today, I am humbled that my promotion to Partner is being announced to the outside world. Primary is a deeply special organization for me. I first met the team when I was 25, before going to business school, having spent my prior years working on a startup in NYC. As a born and bred New Yorker who knew how hard the earliest days of startup building were, Primary stood out. “Startups are hard; founders deserve better” has been and is always at the core of what we do, and our ability to execute on that truth and deliver value for founders has grown exponentially since I joined the firm.

I am especially excited about leading the cyber practice for Primary, at a time when security is so existential and great upstarts so needed, and in a place that is poised to become a hotbed for cyber activity. We are working on a lot of interesting things here, and have big dreams. We do not invest in many companies—I will make maybe two investments a year—but the companies we do invest in will be big swings, and we will try to help them in a differentiated way unmatched by any seed firm on the planet.

The challenge of building a cyber seed investing practice is real, but we believe our strategy and positioning is unique. If you are reading this, and intrigued to learn more about our approach, please get in touch. Today I am kicking off a search for an associate to help me with these efforts. This person will be my partner-in-crime, and together, we will define the strategy and investment decisions of the cyber practice at Primary.

The reinvention of cyber in an AI age starts with startups, and amazing customers, advisers, and investors who help them on their journey. If you want to be a small part of this transformation, let’s talk.


The Biological Computing Company raises $25M Seed

The Biological Computing Company (TBC) is ready to tell the world what they’re building. Founders Alex Ksendzovsky, CEO, and Jon Pomeraniec, COO, have led the company to use living neural networks to develop and optimize AI architectures in a commercial setting. This investment pushed the boundaries of our compute thesis for three reasons: one, Alex and Jon live in that founder sweet spot of brilliant, on a mission, and wildly different; two, their tangible results are the stuff of sci-fi but very real; and three, it’s exactly what we’re looking for: the belief that radical breakthroughs in compute are needed to meet AI’s demand for efficiency, performance, and scalability, and that outlier founders with perspectives born of unique life experiences will build that future.

To understand TBC from first principles, we start with the fact that the brain is vastly more energy efficient than the best silicon today. It operates on roughly 20 watts of power to deliver an exaflop of computational power; NVIDIA’s Blackwell would need roughly 120,000 watts to deliver the same throughput.
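Those power figures imply a concrete efficiency gap. A quick back-of-envelope, assuming (as the comparison above does) equal ~1 exaFLOP throughput for both systems:

```python
# Power figures from the paragraph above; equal throughput assumed.
BRAIN_WATTS = 20.0
BLACKWELL_WATTS = 120_000.0
OPS_PER_SECOND = 1e18  # ~1 exaFLOP, assumed identical for both systems

# Power ratio at equal throughput: how many times more power silicon draws
power_ratio = BLACKWELL_WATTS / BRAIN_WATTS

# Energy per operation for the brain, in joules (power / ops per second)
brain_joules_per_op = BRAIN_WATTS / OPS_PER_SECOND

print(power_ratio)  # 6000.0
print(brain_joules_per_op)
```

At equal throughput, the power ratio and the energy-per-operation ratio are the same number, which is why watts at fixed FLOPS is a fair way to compare the two substrates.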

For decades, neuroscientists have tried to translate the brain's functionality into digital systems. What if, instead of translating the brain, we let neurons compute on their own? If we can harness the computing power nature intrinsically provides us, we can unlock capabilities that silicon alone cannot. 

These facts have been driving Alex and Jon for two decades. In the early 2000s, Alex was studying philosophy of mind in college and saw a visiting professor demo a robot powered by a neuron. He immediately intuited the potential to extract the immense power of the brain for computation. Since then, Alex and cofounder Jon – both neurosurgeons from UMD and Penn – have had a decades-long partnership through medical school, surgery roles, and research positions. After making rapid progress in their research around the brain’s computing capabilities in 2021, as covered by Fortune’s Term Sheet, they started The Biological Computing Company to focus on building computers for the future – out of neurons. This is not neuromorphic or “brain-inspired” chip design. This is harnessing the intrinsic and evolutionary power of the brain for compute. 

The Biological Computing Company is building an organic computing platform that connects real living neurons with modern AI, making frontier models more stable, scalable, and dramatically more efficient. TBC harnesses this intelligence by connecting neurons to electrodes; the resulting networks demonstrate superior efficiency to silicon for specific computational tasks. With its first product, TBC is showing a 23x retained improvement in video model efficiency: their team encodes real-world data into living neurons, then decodes the neural activity into richer representations that have been mapped to state-of-the-art AI.

TBC is also using biological compute to extract principles from living neural networks, informing the development of novel AI architectures. In doing so, the company is not only augmenting today’s models, but shaping how future models will be built. This marks the first true commercialization of biological computing – and a critical step toward a world where biological and digital compute operate together. 

We’re proud to have led The Biological Computing Company’s $25M Seed round, joined by Builders VC, Refactor Capital, Wonder Ventures, E1 Ventures, Proximity and Tusk Ventures. This is entirely new technology and we don’t know exactly how the market around it will develop. We believe in founders shaping the arc of history by doing something that people previously thought to be impossible. We look forward to supporting them in their journey. If you are interested in building at this frontier, TBC is hiring.


Etched’s Series A to revolutionize AI hardware with purpose-built LLM chips

"Move fast. Speed is one of your main advantages over large competitors."

This quote from Sam Altman encouraged us to venture into the daunting battlefield that is the semiconductor industry and lead Etched’s seed round about 15 months ago.


And it is speed of execution that gives us confidence in the future of superintelligence that Etched will enable. Indeed, we believe they will set a world record for fastest time to tape-out for such a complex chip. This is fitting for a company building a chip that will be orders of magnitude faster than NVIDIA’s latest GPU when running inference on transformer models.

Today, Etched is announcing a $120 million Series A to bring the same vision they pitched over a year ago to reality.

Hardware, not software, is the biggest bottleneck to truly magical AI experiences: artistic masterpieces like the next Titanic or Beethoven’s 9th produced by AI; agents that perform tasks on the web at the speed of thought, planning and booking honeymoons and preparing memos. What is impossible today will be possible tomorrow, but only with better hardware. To understand the Etched thesis, we encourage you to check out a post from the team at Etched discussing their bet. 

The Etched team epitomizes the greatness of Silicon Valley, right down to the 'silicon.'

The CEO, Gavin Uberti, is brilliant, mature beyond his years, visionary, committed to the craft of being a founder, obsessed with the details, and insanely passionate. We are believers in him, and think that what Sam Altman is to software in AI, Gavin will be to hardware. You can listen to his podcast on Invest Like the Best with Patrick O’Shaughnessy, whose firm Positive Sum is co-leading this round with us, here. Gavin is joined by his cofounders, Robert Wachen and Chris Zhu, both equally remarkable in their own rights. This year, they became one of the first teams to collectively receive the Thiel Fellowship.

The founders are joined by a powerhouse roster of semiconductor professionals who are all driven to break the rules while leveraging their expertise. Mark Ross, the CTO, was previously the CTO at Cypress Semiconductor, which eventually sold in 2019 for almost $10 billion. Ajat Hukkoo, the VP of Hardware Engineering, was at Intel for nine years and Broadcom for 14 before joining Etched. Saptadeep Pal, the Chief Architect, cofounded Auradine. The bench at Etched goes deep, and every single person on the team is driven by an ethos of speed.

Most silicon teams are composed of people from the same networks and companies. Etched is the opposite of that: people with different skills and perspectives who want to be part of something special and change the world. It’s a team that embodies the creativity and optimism that makes startups exciting—a group of engineers, coming together with intense, uncanny ambition to build something that industry insiders believe is impossible. This is the only way that radical progress ever happens. This is how we get to superintelligence.

Today, more than anything, we’re proud of the team at Etched and humbled to be a part of their journey. 

We’re also excited to be collaborating with such an excellent group of investors, including not just Positive Sum, but also Hummingbird, Two Sigma, Skybox Data Centers, Fundomo, and Oceans as well as angels like Peter Thiel, Thomas Dohmke, Amjad Masad, Jason Warner, and Kyle Vogt.

Interested in joining this stellar team? Check out the more than a dozen open roles here and help build the future of superintelligence.


Why we doubled down on Inspiren

Senior living communities are at the heart of one of the most important demographic shifts of our time. An aging population, rising care costs, and persistent staffing shortages are forcing operators to do more with less—without compromising safety, dignity, or quality of life for residents. Yet most of the tools available today are point solutions: basic fall detectors, clunky emergency call buttons, or fragmented EHR systems that fail to keep pace with day-to-day realities.

Inspiren is changing that.

Their AI-powered hardware and software integrates real-time motion awareness, staff coordination, emergency response, and resident engagement into one cohesive ecosystem. From preventing falls to accelerating emergency response to improving care planning, Inspiren is redefining how senior living communities operate—while delivering measurable ROI to operators.

From Fall Prevention to Full-Stack Care Coordination

When we led Inspiren’s Seed round, the company had already built a best-in-class fall detection device. But CEO Alex Hejnosz and founder Michael Wang were thinking much bigger. They envisioned a full “intelligent ecosystem” for senior living, one that combined multiple hardware devices with a single software brain to give operators unprecedented visibility and control.

That vision is now a reality. In the past year, Inspiren has expanded from its core device to a comprehensive hardware and software product suite:

  • AUGi for in-room activity sensing and fall detection
  • Sense for high-risk bathroom coverage
  • Staff Beacon for staff location and workflow optimization
  • Help Button for residents to request help without wearing a pendant
  • Inspiren HQ and Inspiren Mobile App, an AI software suite to give operators and staff real-time insights into the overall health of their communities

This unified approach does more than replace outdated systems—it collapses the need for multiple vendors into one platform, eliminating integration headaches and increasing staff efficiency. Dedicated clinicians partner closely with communities, ensuring clinical insights are communicated and applied in ways that maximize care quality, outcomes, and staff support. Most importantly, it keeps residents safer.

Why Now

The timing for Inspiren couldn’t be better. Over 10,000 people turn 65 every day in the United States. The senior living market is fragmented, underserved, and under increasing operational pressure. Regulatory scrutiny around resident safety is increasing, while financial risk from early move-outs and under-documented care is pushing operators to modernize.

At the same time, Inspiren’s combination of AI-powered sensing and integrated software is hitting an inflection point in cost and capability:

  • Affordable hardware means mass deployment is now possible.
  • AI-powered embedded workflows make adoption easy and ROI immediate.
  • Regulatory and operational tailwinds are pushing toward data-driven, documented care.

In a market where speed of deployment is critical, Inspiren’s ability to get devices “on walls” faster than anyone else is a decisive advantage.

Why We Backed Inspiren—Again

We’re doubling down in Inspiren’s $100M Series B led by Insight Partners because we believe they are building the category-defining operating system for senior living. Our conviction comes down to four key beliefs:

  1. The market is ripe for a platform that replaces fragmented point solutions with an integrated, AI-driven ecosystem.
  2. Inspiren has proven product-market fit with industry-leading retention, rapid ACV expansion, and clear ROI for operators.
  3. The economics work at scale, with a recurring software model, short payback periods, and a massive market opportunity.
  4. This team can execute, having consistently delivered growth and product velocity ahead of plan.

As the aging population grows and care demands intensify, senior living communities will need more than incremental tools. They’ll need a system that makes care safer, faster, and more efficient. We believe Inspiren is poised to be the market leader in this space.

We’re proud to continue our partnership with Alex, Mike, and the entire Inspiren team. The future of senior living is intelligent, connected, and compassionate — and Inspiren is leading that future.


AI-powered design review catching errors pre-construction

Bad design is one of the most persistent, overlooked, and expensive problems in construction. Change orders caused by coordination issues, missed specs, or incompatible designs aren’t just costly—they’re preventable.

In the U.S., hundreds of billions of dollars are spent on construction every year. Projects are chronically delayed and over budget, and with interest rates where they are, new development has become even more difficult in many markets. The majority of change orders could be caught in the pre-construction phase, but today’s review process is slow, expensive, and imperfect.

Large developers often outsource drawing reviews to third-party firms, paying hundreds of thousands of dollars for reports that take 6–20 weeks to deliver—and still contain errors. These delays and oversights ripple through every stakeholder, from architects to subcontractors, costing valuable time and eroding margins.

LightTable solves this. By applying AI-powered coordination and peer review, they can eliminate up to two-thirds of change orders, improving project IRRs by 3–4 points. For real estate owners, this means delivering projects faster, on budget, and with fewer costly surprises. For architects and subcontractors, it means less time revisiting old projects to fix preventable mistakes.

The Right Wedge: Peer Review

We backed LightTable in 2024 based on a simple premise: peer review today is manual, expensive, and error-prone—and AI can do it better. Starting with coordination issues across disciplines creates a powerful wedge to expand into broader design optimization and collaboration.

Three things gave us conviction:

  1. Acute pain: Developers feel the cost and time impact most directly.
  2. Adoption leverage: They can push better tooling across architects and engineers.
  3. Workflow ownership: Controlling the design review process opens the door to downstream automation, from value engineering to inventory-aware specs.

Since funding in November, LightTable has signed product development and design partnerships or pilots with five of the top ten developers in the U.S., including Hines, The Related Group, Greystar, Mill Creek, and Alliance. These pilots are already driving real-world feedback and product iteration, setting the stage for enterprise contracts with annual values that could reach seven figures.

Built With—and For—the Industry’s Best

LightTable’s go-to-market strategy is rooted in deep collaboration. Through these design partnerships, the team is running real projects and building in lockstep with enterprise users.

Paul Zeckser, Dan Becker, and Ben Waters didn’t just set out to build better design tools—they’re rethinking how buildings get designed from the ground up.

Paul, a product-first leader with sharp commercial instincts, spent over a decade shaping HomeAdvisor before leading product at Sealed. He’s known for turning big product visions into real market wins.

Dan, a seasoned AI and simulation engineer, founded a startup that was acquired by DataRobot and helped enterprise teams adopt AI. Frustrated by how slowly legacy players like Autodesk move, he decided to build LightTable: an AI-native company already shipping faster than most incumbents can prototype.

Ben, a trained architect who has worked at SOM and Gensler, has lived the pain of coordinating large drawing sets, only to see RFIs and Change Orders pile up. He was convinced AI could transform this archaic practice of human-only review and deliver massive ROI for builders.

Together, they’re the team with the perfect set of backgrounds to build for speed—and for a smarter, more intuitive future in design.

Why We’re Excited

LightTable is starting where the pain is most acute—pre-construction peer review—and building a platform that could become the coordination layer for the entire industry. The combination of early traction with top developers, a pragmatic and valuable product, and a sharp, fast-moving team makes this exactly the kind of founder-led business we want to partner with.

We’re proud to be backing Paul, Dan, Ben, and the team as they redefine how better buildings get built, long before anyone breaks ground.

The gas turbine bottleneck reshaping energy infrastructure

Electricity costs are rising fast. One midstream energy CFO in Texas told us his company’s rates have climbed from $0.35/MWh in 2021 to nearly $0.70/MWh today. “This is getting ridiculous,” he said. “We used to say the price at the pump decides elections. Going forward, it’s going to be the price at the meter.”

Similarly, an executive responsible for the data center buildout at Microsoft asks, “If every producible watt is already committed for the next five years, where can I get more watts?”

That question has become the defining obsession of this moment. AI companies, hyperscalers, and industrial manufacturers are all competing for the same scarce resource: firm, dispatchable electricity. Yet turbines, the single most important piece of equipment required to make that power, are suddenly the scarcest of all.

There are effectively only three relevant manufacturers that can produce large-scale gas turbines: GE Vernova ($162B market cap), Siemens Energy ($91B), and Mitsubishi Heavy Industries ($84B). These machines sit at the center of the modern power plant. They are the hardware that converts fuel into motion and motion into electrons. Today, they are booked solid for years. GE, Siemens, and Mitsubishi are each reporting order backlogs that stretch roughly five years, with many delivery slots already sold into the next decade.

Rumors are circulating in Washington about just how acute this has become. A contact close to the Department of Energy’s new venture fund (a kind of “In-Q-Tel for energy”) told us the administration is even considering whether to pressure allies with outstanding GE turbine orders to release them back to U.S. buyers. It sounds extreme, but that’s the level of urgency building around this bottleneck.

What a Gas Turbine Actually Is

A gas turbine is, at its core, an air compressor, a combustion chamber, and a spinning shaft. Air is compressed, mixed with fuel (usually natural gas) and ignited. The expanding gases turn the turbine blades, which drive a generator to produce electricity. In a simple-cycle setup, that’s the whole process. In a combined-cycle plant, the hot exhaust is captured to make steam and power a second turbine, pushing overall efficiency and reducing emissions.
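The combined-cycle gain described above follows from simple energy accounting. As a rough sketch (the efficiency values below are illustrative assumptions, not figures from this piece): if the gas turbine converts a fraction `eta_gt` of the fuel’s energy and the steam cycle recovers a fraction `eta_st` of the leftover exhaust heat, overall efficiency is `eta_gt + (1 - eta_gt) * eta_st`:

```python
def combined_cycle_efficiency(eta_gt: float, eta_st: float) -> float:
    """Overall efficiency when a bottoming steam cycle recovers exhaust heat.

    eta_gt: simple-cycle (gas turbine) efficiency
    eta_st: steam-cycle efficiency applied to the remaining exhaust heat
    """
    return eta_gt + (1.0 - eta_gt) * eta_st

# Illustrative values: ~40% simple cycle, ~35% bottoming steam cycle
print(f"{combined_cycle_efficiency(0.40, 0.35):.0%}")  # prints "61%"
```

Under those assumed numbers, capturing the exhaust lifts a ~40% simple-cycle plant to roughly 60% overall, which is why nearly all new baseload gas plants are built as combined cycle.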

These are enormous machines. A single heavy-duty turbine can weigh more than 400 tons, stretch nearly 50 feet long, and generate anywhere from 100 to 400 megawatts, enough to power a mid-sized city. Smaller aeroderivative models, adapted from jet engines, are used for distributed or peaking power. Each large turbine costs roughly $50-100M depending on its class and configuration. It is elegant, brutally engineered machinery, and the workhorse of global baseload power.

How We Got Here

The roots of the shortage go back two decades. In the early 2000s, gas plants were being built everywhere. Then the 2008 crash hit, followed by another glut in the mid-2010s. Orders evaporated. OEMs shut factories, laid off skilled labor, and merged their supplier bases. The “painful overbuild” became a corporate cautionary tale as the turbine market crashed in 2018.

By the late 2010s, renewables dominated investor attention. Gas turbine divisions were treated as cash cows, not growth engines. Capacity stayed flat even as demand for electricity quietly began to climb again. Then came AI.

Starting in 2023, data-center power requirements exploded. Hyperscalers that once drew tens of megawatts suddenly needed hundreds. Natural gas looked like the fastest, most practical bridge to new capacity, but the world only had three major manufacturers left, and none had invested to scale. With the fresh memory of 2018 and some uncertainty about how much energy the AI buildout would actually require, they weren’t rushing to ramp capacity again.

Siemens Energy now reports a €136 billion backlog, the largest in its history. GE Vernova has roughly 55 gigawatts of gas orders in its queue and plans to expand from about fifty to eighty heavy-duty units a year by 2026. Mitsubishi Power says it will double production, but is already sold out into 2028. Even if all three deliver on their expansion plans, total output might rise only twenty to twenty-five percent—nowhere near enough to meet demand.

Anatomy of a Shortage

Every layer of the supply chain is fragile. The high-temperature blades and vanes inside the turbine are cast from exotic nickel alloys by a few companies such as Howmet Aerospace ($76B market cap) and Precision Castparts (acquired by Berkshire Hathaway for $37B). The massive forged rotors that hold everything together are produced by a handful of plants worldwide, including Japan Steel Works. The heat-recovery steam generators that make combined-cycle plants efficient are backlogged. Transformers, switchgear, and control systems are just as scarce.

The result is a perfect storm of concentration and pricing power. One energy CFO told us that total project capex has doubled in just fifteen months, from roughly $1,000/kW to more than $2,000/kW. “It’s cartel pricing,” he said. “There’s nowhere else to go.” Reuters now cites installed combined cycle gas turbine costs moving from ~$1,000/kW to $2,000–$2,500/kW on recent projects, driven by turbine scarcity, HRSG/transformer backlogs, and EPC/labor constraints. Some developers are scouring Alberta and the Gulf Coast for refurbished turbines from the 1990s because new ones are unavailable. Others, including major industrial conglomerates, are buying units on spec simply to guarantee they have equipment for future plants. Koch Industries is rumored to be doing the same.
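To make those per-kW figures concrete, here is the capex arithmetic for a hypothetical plant (the 400 MW size echoes the heavy-duty turbine output described earlier; the $/kW range is the one cited for recent projects):

```python
plant_mw = 400               # hypothetical combined-cycle plant size
plant_kw = plant_mw * 1_000  # 400,000 kW of capacity

# Installed cost per kW, per the range cited for recent projects
for cost_per_kw in (1_000, 2_000, 2_500):
    total_capex = plant_kw * cost_per_kw
    print(f"${cost_per_kw:,}/kW -> ${total_capex / 1e9:.1f}B total capex")
```

At these prices, the same 400 MW plant jumps from roughly $0.4B to $0.8–1.0B of capex before a single electron flows.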

Implications for the AI Buildout

Natural gas was supposed to be the fast path. Today, its build timelines look a lot like those of emerging nuclear small-modular-reactor projects. It’s no surprise that Microsoft, Google, and others are signing enormous power-purchase agreements with nuclear and geothermal developers such as Oklo, Kairos, TerraPower, and Fervo.

A chief strategy officer at one hyperscaler told us he’s signing these LOIs freely, while assuming most will never reach operation. Even among those that do, he expects only a fraction of the current terms to survive to commissioning. But that hardly matters. For hyperscalers, the act of signing unlocks investment flows, de-risks sites, and signals seriousness to regulators. It’s a rational strategy in an irrationally tight market: spray capital at every potential source of electrons, then back the few that deliver.

Our Perspective

The turbine shortage is not just an isolated equipment problem. It exposes the fragility of the entire industrial stack: the transformers, EPC labor, HRSGs, and switchgear that all depend on long, brittle supply chains. Turbines just happen to be where the pressure shows first.

For startups, there are enormous opportunities across this landscape. Predictive-maintenance networks for turbine fleets. Software that helps EPCs and utilities coordinate procurement. Financing models that let developers pre-buy critical equipment. Materials or manufacturing breakthroughs that reduce lead times. Even the idea of a fuel-agnostic turbine that runs on gas today and integrates with a nuclear reactor tomorrow feels less like science fiction than a commercial inevitability.

Still, solving this likely won’t be as simple as building another “Anduril for turbines” (though we’d entertain it). Heavy industrial capacity takes years, patient capital, and government partnership to stand up. The payoff is profound. Whoever helps unjam these bottlenecks will sit at the center of the next trillion-dollar buildout and power the world’s compute.

We’ll continue exploring these supply-chain chokepoints in future issues, from turbines to transformers to the EPC networks that knit them together. As always, we’re early in our journey into energy and learning in public. If you’re building, financing, or operating anywhere in this bottleneck, whether you make turbines, HRSGs, or just want to power the next wave of AI, we’d love to hear from you.

Lyric

Why We Invested in Lyric—Again

Supply chains weren’t built for the world we live in now.

Pandemics, geopolitical risk, climate shocks, volatile demand, and an AI-fueled productivity race have put operators under more pressure than ever. But while everything around them is changing, the software they rely on hasn’t kept up.

Legacy supply chain tools like SAP, Blue Yonder, and Coupa were built for static environments. They offer rigid templates, brittle implementations, and expensive services arms that struggle to adapt to real-world variability. Even Gen 2 “smart” platforms like Palantir lean heavily on forward-deployed engineers to tailor models to each customer, resulting in long timelines, high cost of ownership, and limited reach.

In a world where operational agility is the difference between growth and chaos, the software that powers our supply chains can’t be static. It has to be composable, context-aware, user-extendable and fast.

In 2023, we led the seed round in Lyric because we believed the services-heavy playbook for enterprise software was breaking down, especially in supply chain. We saw a world where bespoke AI tooling was finally colliding with business reality: long-tail complexity, fragmented data, and a crippling reliance on forward-deployed engineers. Lyric offered a new path: a composable platform that could absorb complexity without defaulting to custom work. We were honored that a sophisticated repeat founder like Ganesh Ramakrishna opted to partner with us on what we do best: seed-specific operational partnership.

Today, we’re proud to double down on Lyric in its $44 million Series B alongside Insight Partners, VMG Catalyst, Permanent Capital, and other insiders.

Why? Because Lyric isn’t just delivering on the original thesis—it’s blowing past it.

The Product Problem Nobody Solved

Ganesh Ramakrishna knows this world inside and out.

Before Lyric, he built Opex Analytics, a services business that delivered custom data applications to Fortune 500 supply chains. But every win came at a cost: each solution was a one-off. The team shipped bespoke code for every engagement.

That’s the problem with most enterprise AI today. It generates project-level IP useful for one customer, once. Not product-level IP that scales across use cases and companies.

Ganesh started Lyric to change that.

Lyric is a composable, AI-powered platform that turns operational complexity into decision intelligence. The platform layers domain-aware data infrastructure, a rich library of optimization and ML models, a no-code sequence builder, and intuitive UIs, all wrapped in agentic AI tools that accelerate development and deployment.

It doesn’t just solve problems. It productizes the solutions.

From Six Months to Six Hours

What does this mean in practice?

At a major beverage company, Lyric started with just two use cases: out-of-stock forensics and production planning. These weren’t problems the customer’s existing stack (Palantir, Blue Yonder, Oracle) could solve. But Lyric could. The first few apps delivered $8 million in value. Soon the customer had built 20+ apps on their own with no consultants involved. Today, Lyric powers more than $40 million in ongoing ROI inside that enterprise.

At another global CPG, Lyric replaced a $2 million, six-month failed implementation with a working solution in four weeks. That app saved $1 million per quarter and led to a wave of expansion across planning, logistics, and risk analytics.

The POC Factory deserves special mention. Lyric can go from raw data to a full-featured production-grade application in as little as six hours. Customers aren’t pitched with decks or mockups. Instead, they’re handed a live app tailored to their business. That experience doesn’t just accelerate sales. It sets a new bar for customer expectation.

It’s no surprise that Lyric’s go-to-market motion, which until recently was entirely founder-led, has scaled with remarkable efficiency and unprecedented ACV expansion.

The Platform Is the Product

Lyric’s true innovation is about speed and scale. Most enterprise AI platforms still operate like bespoke consultancies. They promise flexibility, but that flexibility is serviced, not shipped. Lyric’s composable architecture flips that model. It allows customers to build once and scale infinitely. Applications are modular, models are extensible, and integrations are already in place across the dominant enterprise data and planning systems.

What we’re seeing now is the platform unfolding across use cases far beyond what it was originally built for. Customers who began with core supply chain planning and network design are now solving for fleet and workforce optimization, predictive maintenance, sustainability forecasting, warehouse layout, and more. These weren’t on the roadmap a year ago, but because Lyric’s platform is composable by design, these apps aren’t exceptions. They’re natural extensions.

This is how platforms grow. Not by narrowly optimizing a single workflow, but by adapting to the evolving needs of the enterprise. Lyric isn’t just flexible software. It’s infrastructure for operational intelligence.

What’s Next

Lyric now has the team, product, and momentum to scale into the broader vision it was always meant to fulfill. With seasoned GTM leadership in seat, the company is ready to go from founder-led selling to full GTM scale. The POC Factory remains a superpower, dramatically shortening the distance between interest and impact, and will continue to be a cornerstone of the buyer experience.

Under the hood, Lyric is pushing aggressively into agentic AI. New copilots are being trained to help customers test scenarios, build workflows, and deploy models even faster. As the product expands, so does the community: more customers are building their own apps, repurposing existing ones, and contributing to a growing library of reusable solutions.

Most importantly, Lyric’s market is expanding. What began as a bet on supply chain planning is now becoming a horizontal platform for any operationally complex, data-rich decision in the enterprise. In a world where cost structures are shifting, AI tooling is democratizing, and operational resilience is a board-level priority, we believe Lyric has the chance to define a new category.

Lyric isn’t just rewriting how enterprise software is built. It’s redefining how real-world decisions get made. We’re proud to be along for the ride—again.

Maybern

Maybern replaces spreadsheets running billion-dollar portfolios

More and more money continues to flow into the private markets. Today, private funds manage over $13T and continue to grow rapidly—averaging 20% annually since 2018. This explosive growth reflects a fundamental shift in how institutional and private capital is being deployed, with private market capital becoming an increasingly critical pillar of our global financial systems.

As funds grow, so does complexity. The bespoke nature of how deals are structured—both for individual investments as well as agreements with limited partners—means it historically has taken an army of accountants and Excel spreadsheets to manage. Fund CFOs are faced with two not-so-great options:

  1. Try to pay top dollar to grow a large team in-house in the midst of a massive accounting talent shortage and bear all of the associated costs.
  2. Pay a third-party fund administrator like SS&C ($18 billion market cap) or SEI ($9 billion market cap) 6 or 7 figures annually (and then run a shadow system to double-check everything in-house).

The challenges facing fund CFOs today are more acute than ever before. When we spoke to 20+ fund CFOs ranging from $1B first time fund managers to the most sophisticated finance teams with hundreds of billions under management, we heard the same pain points over and over again. Dissatisfaction with fund admins. Poor data quality and access. Lack of visibility into how things are calculated. Multi-day long turnaround times on simple questions. Frequent mistakes. High staff turnover.

All this against a backdrop where pressure is mounting on both sides. On one hand, their Managing Partners are pushing them to be more strategic—i.e., to evolve beyond running the back office and deliver insights on portfolio construction and management, cost optimization, and fundraising strategy, more quickly than ever. On the other side, LPs and internal IR teams are asking for faster, more in-depth, and more frequent reporting—leading to a constant time-suck of information retrieval and bespoke analyses.


This shift from operational CFO to strategic CFO is one we’ve seen play out in the corporate CFO tooling world, where software has emerged to streamline and automate the day-to-day operational work of finance teams, freeing up time for strategic decision making. Many large winners have emerged from this trend, including Anaplan, Blackline, Workiva, and OneStream, just to name a few. Private fund accounting, however, isn’t governed by standardized GAAP principles. Instead, funds are governed by their limited partnership agreements (LPAs), which not only can take many shapes and forms but are also oftentimes accompanied by negotiated side letters with preferential terms for certain investors. This complexity meant that previous attempts to build software solutions for this problem were either limited to much simpler use cases (e.g. small venture funds) or required heavy customization with multi-year implementation timelines.

Enter Maybern

Maybern is revolutionizing private funds with the first true operating system for fund finance. Purpose-built in partnership with the most complex funds managing billions in AUM, Maybern's core product centralizes and automates essential operations, saving finance teams hours of headache. More importantly however, the software becomes the single source of truth for all fund flows regardless of the complexity of the underlying legal structure. The core engine, designed for maximum flexibility, adapts to each fund's unique structures and workflows, handling even the most complex edge cases.

What makes Maybern particularly special is the decision to start with real estate private equity—arguably the most complex segment of the market. By solving for the most challenging use cases first, they’ve built an incredibly robust and flexible platform that can handle virtually any fund structure or requirement without writing a single line of custom code. They’ve quickly leveraged this advantage to win the hearts and minds of CFOs across traditional private equity, growth equity, and private credit.

There is no team that is better suited to tackle this problem. The Maybern cofounders Ross Mechanic and Ashwin Raghu initially met at Cadre, a tech-enabled private equity platform for individual investors. There, they built the internal system that allowed Cadre to multiply AUM without needing to scale headcount. Upon our first meeting with Ross, it was clear that he was deeply obsessed with this problem and determined to change this market. Although an engineer by training, he personally read hundreds of LPAs and set out to learn everything there was to know about fund finance, eventually realizing that it was possible to build software that could address all the edge cases and complexities—it just wouldn’t be easy. Since then, Ross and Ashwin have assembled a team of world-class engineers and fund finance specialists to tackle a problem many thought was unsolvable.

When we spoke to early customers, it became clear that Maybern would fundamentally transform how funds are run. Fund CFOs are very rarely effusive, so hearing effusive praise for the solution’s immediate transformation of internal operations, and for the deep expertise and credibility of the team, told us we were onto something. With an internal buyer in house, it only made sense for us to put Maybern in front of our CFO, Mike Witowski, who previously had spent time at one of the largest fund admin providers. In the span of one meeting, Ross and Ashwin managed to transform his wary skepticism into astonished enthusiasm. It was at that moment that we knew this was something special.

As private markets get bigger and more complex, Maybern is at the center, building a true enterprise-grade central nervous system for fund operations. Soon, the most sophisticated private funds will run on Maybern. We’re thrilled to be leading the Series A and to be partnering with the team on this journey.

Tabs

The AI-native revenue platform for modern finance

Revenue collection is the biggest opportunity in the CFO stack, and Tabs is poised to own it. CEO and Co-Founder Ali Hussain is fierce in all the right ways: urgent and thoughtful, yet a true servant leader. Alongside co-founders Deepak Bapat and Rebecca Schwartz, and an extraordinary team of passionate builders, Tabs is creating one of the most promising vertical-AI companies in New York City.


Today, we’re excited to announce our continued support of Tabs, the AI-native revenue platform for modern finance teams, as a part of its $55 million Series B led by Lightspeed Venture Partners. Tabs is automating the revenue engine for the AI-Native CFO, powering fast growing companies like Cursor, Statsig, and Cortex.

Accounts receivable, the all-important task of collecting revenue, is breaking free from the ERP. There are incredible decacorns associated with every layer of the CFO stack—Ramp for spend, Rippling for payroll, Bill.com for AP—but no company owns AR. The ERP is unbundling as enterprises are modernizing, with 50% of CIOs expecting to upgrade their ERP in the next two years. The ERP historically dominated the CFO stack, but lengthy implementations, expensive pricing, brittle architecture, and antiquated product experiences created an opening to build best-in-breed solutions for each category.

We believe revenue is the biggest prize in this unbundling. Revenue collection sees the biggest flow of money. It commands the most time from experienced, senior members of your team. Given the complexity and criticality of revenue, the market opportunity for Tabs swells to over $1 trillion when accounting for the senior labor spend required to manage revenue and billing operations at companies of all sizes.


Tabs transforms AR from a manual, ops-heavy burden into an automated revenue engine. By making contracts—not CRMs or spreadsheets—the system of record, Tabs enables CFOs to say “yes” to growth, experimentation, and new monetization models without getting trapped in their legacy ERP.


The Finance Stack Wasn’t Built for the AI Era


Accounts receivable has historically been a burden finance teams haven’t been able to engineer themselves away from. Legacy approaches to AR force companies to map CRM and billing data through tools like NetSuite or Stripe, creating endless edge cases. Contractual deviations—discounts, custom usage, multi-year contracts—become a painful manual exercise, ballooning the size of finance ops teams. It could take three days to reconcile contracts in an Excel doc with what’s in your CRM, manually cross-reference materials, feed that data back into your ERP, then automate a billing flow. The painful result is delayed collection of revenue.


AI has created an incredible opportunity to transform AR, improving accuracy on contract parsing, reducing time spent on reconciliation, and making the platform flexible to new pricing models. AI is the unlock that enables Tabs to convert a contract into cash. By treating the contract—the same artifact auditors already rely on—as the source of truth, Tabs eliminates manual reconciliation, using AI to ingest unstructured contracts and automatically turn them into revenue.


Pricing innovation makes this shift even more urgent. AI-native companies are experimenting with consumption billing, usage-based pricing, and hybrid models at a pace legacy systems can’t support. Even if you solved the data mapping problem, introducing a new pricing model renders it useless in moments. Traditionally, this made the CFO the brake on sales creativity, prioritizing revenue clarity over pricing innovation. Tabs changes that: finance becomes a team of “yes,” with revenue systems that adapt as quickly as sales. For CFOs navigating the AI era, this flexibility is crucial to keeping up, as every company knows its pricing will evolve, and AR must keep pace.


The result with Tabs is faster closes, leaner teams, and higher confidence for both finance and auditors.


Contract-to-cash and beyond


Tabs is built for this moment. Its wedge is contract-to-cash automation: ingesting messy contracts, extracting commercial terms, generating invoices, chasing payments, and recognizing revenue with minimal human involvement. Tabs’ ingestion engine transforms static PDFs into bill-ready logic, while Slack-native workflows and API-first integrations embed directly into how finance teams already operate. Tabs isn’t just digitizing invoicing—it’s reimagining the entire revenue cycle.

Billing and revenue often require senior team members tackling mundane tasks. Their work also exists across many solutions, referencing the contracts, CRM, ERP, payments processor, and other systems to recognize revenue. This cross-functional workflow that requires domain expertise makes it perfect for agentic automation, freeing up teams to do higher value work. Tabs makes it possible to automate the edge cases and revenue rules that used to require armies of finance professionals.


CEO Ali and the Tabs team believe the future of finance looks like this: lean senior teams powered by swarms of AI agents managing revenue at scale. From your first dollar to your billionth, the platform automates billing, collections, revenue recognition, audit, and reporting – with an auditable trail across the CFO stack.

That vision unlocks more than AR. With the launch of Tabs Agent, its Slack-native AR agent, Tabs is taking the first step toward building a true agentic revenue platform.


Why we invested

Since incubating Tabs, Ali and his team have blown past every target, quarter after quarter, while shipping products at remarkable speed. With $55 million in fresh funding led by Lightspeed, Tabs is positioned to lead the shift away from ERP-bound finance and toward an AI-native, best-in-breed revenue platform.

At Primary, we believe Tabs will be one of the defining companies of the AI era—a generational compounder that fundamentally reshapes the economics of finance. That’s why we’re proud to deepen our partnership in this $55 million Series B. Tabs isn’t simply automating AR—it’s creating the revenue operating system for the AI era, unlocking a trillion-dollar labor market and turning CFOs from bottlenecks into accelerators. We’re proud to back Ali, Deepak, Rebecca, and the entire Tabs team again for this next phase of their product.


Tabs was incubated at Primary Labs. If you’re interested in learning more about incubations, becoming an Operator-in-Residence, and building with us, please reach out to labs@primary.vc.

Scaling for the AI era: Fund V

Primary has, from our beginnings, never looked like a traditional VC firm. My co-founder Ben and I always liked it that way, and as we’ve scaled, we’ve leaned ever-harder into those differences. Today, I’m thrilled to share that we have taken the next step in scaling our firm and in breaking the norms of our industry as we announce our fifth set of funds. At $625M, this represents the largest standalone seed fund in the market and a ringing endorsement by our LPs of our ambition to build the biggest, most impactful, and best-performing seed firm on the planet.

Today, Primary leads pre-seed and seed rounds across tech sectors, backing exceptional founders from San Francisco to Tel Aviv. We practice the craft of seed investing at institutional scale. On our way to 80 full-time employees, we are led by a team of eight exceptional, sector-focused investing partners, each running their own strategies and partnering with the very best founders in their sectors: Financial Services, Healthcare, Vertical AI, Infrastructure/Compute, Cybersecurity, Consumer, GTM Tech, and the Industrial Evolution. Our partners bring deep insights, extensive networks, and curious, prepared minds to their craft.

Super-charging our investing work is an Impact team, led by a C-suite of seasoned operators, that delivers operational support unequaled at any stage of venture. And tightly aligned with both teams is Primary Labs, our incubation engine, which in recent years has launched many of the most exciting companies in our portfolio. In aggregate, Primary is a team with more scale, more muscle, and more ability to drive genuine results in support of founders than any in the market.

At 10x the size of our first fund, we are beyond what Ben and I imagined when we began working on Primary. We were never remotely interested in building a typical venture capital firm. We joined forces because we wanted to build something that changed the way the industry operated, putting the founder at the center. Our most important firm value has always been “Wear the Founder’s Shoes.” Guided by that, we listened to founders and invested in the capabilities they most valued across talent acquisition, revenue generation, and capital raising.

As our early investments began to bear fruit – 9 of the 25 investments in our first fund reached unicorn status and another two achieved cash exits north of $500M – our conviction grew. The unique way we made operating resources available to founders was making a difference. With conviction came ambition to push further, for we knew that our model would get even better with scale. As we’ve scaled, we’ve resisted the temptation that unfortunately defines our industry: using new management fee resources not to execute better for founders, but to further line the pockets of the very investors whose incentives now drive them away from seed and towards larger/later deals. Instead, we’ve grown our team and capabilities in line with the growth of our funds. 

This year our team will make hundreds of hires, drive tens of millions of dollars of revenue pipeline, and support dozens of new financings for our companies. As we all race to find and earn the right to work with the founders who are driving the AI revolution, Primary is showing up with more capability and more expertise than ever. And at a time when many firms are struggling to raise capital, our existing LPs have doubled down on our strategy and several amazing, mission-driven institutions have joined us as LPs for the first time.

We are eleven years into an amazing journey at Primary. Every year we relearn the original lesson of venture: success is all about the founders with whom you partner. Our journey has been defined by a remarkable collection of founders and the successes they’ve delivered for our LPs. Our mission has been to support them with everything we’ve got. We feel truly blessed that the unconventional strategic bets we made early on have resonated with founders and paid off in earning us their partnership. 

We sit today with a unique market position in the midst of what is by a vast margin the most exciting and important moment in the history of information technology. And we’re not stopping building. We never will. We’re rapidly reinventing everything about how we operate for an agentic future, building new products, new technologies, and new capabilities internally, all in service of continuing to offer something even better in support of world-class founders (more coming here soon).

We are eternally grateful to the LPs who have entrusted us with their capital and given us the right to continue to add our own dent to the universe in service of founders. Fund V is bigger than ever, and we fully expect it to be better, as well. We can’t wait to meet the founders defining a new generation of innovation and magic in our world.

Etched

Our compute thesis: Etched and beyond

In January 2023, semiconductors were not yet all the rage. NVIDIA was worth $360B. Memory was a sleepy commodity. Hyperscaler capex totaled $130B+, relatively flat from the year prior. Approximately 100k H100-equivalent compute systems were deployed globally, and few had ever heard the term “H100.”


The ChatGPT moment had just passed, but the gears of the semiconductor supercycle were only beginning to churn.

Today, NVIDIA is worth over $4.5T. Multi-trillion-parameter models demand staggering memory capacity, driving sky-high DRAM prices and scarce supply. Hyperscaler capex in 2025 surpassed $400B and is expected to grow to $600B in 2026. As of Q3 2025, more than 3 million H100-equivalent systems are training models and running inference. Executives now brag about their secured GPU shipments as they compete in an economy increasingly organized around AI.

Back to early 2023.

Our belief was that this was about to change. AI demand was set to explode, and the world did not yet have sufficient compute to meet the need. Scaling laws suggested that no amount of computation would ever truly be enough to deliver the magical experiences AI could unlock. A demand tsunami was coming.

We intuited that a window of opportunity was opening for entrants to capitalize on the moment. In February, I shared this view with the Wall Street Journal: “There’s new openings for attack and opportunity for those players because the types of chips that are going to most efficiently run these algorithms are different from a lot of what’s already out there.”

That same day, I met Gavin Uberti, the CEO and cofounder of Etched.

Gavin had a pitch-perfect understanding of what was about to happen: transformer dominance would expand; the GPU bottleneck would intensify; inference would become the all-important battleground; and the power of purpose-built hardware would become clear.

We led the Seed round in Etched in March 2023 and have since worked with Gavin and the team as they (as they like to say) build the hardware for superintelligence. This experience combined with studying the broader compute supercycle helped crystallize our conviction. 

Our Compute Thesis

Our compute thesis is simple. Alpha exists at seed for compute investing. Every major technology shift requires a rebuild of its underlying infrastructure. As compute drives intelligence, demand for computation is growing exponentially, forcing a massive capex buildout across silicon, power, networking, and cooling. Fundamental bottlenecks are emerging. Incumbents are being pushed beyond incremental improvement. New opportunities are opening for founders taking first principles approaches to rearchitect how computation is produced and scaled. Yet the majority of seed investors shy away from compute deals.

We’d noticed that fewer VCs spend time in the compute market, but we wanted to quantify it. We analyzed the past decade of M&A across software infrastructure (defined as DevOps, dev tools, data, etc.) as compared to compute (the semiconductor value chain). Since 2015, software infra saw 8,500 M&A transactions, while compute saw 1,300. Despite having just 15% of the deal count, compute generated $705B of value creation, compared to $587B in software infrastructure. It’s the power law on top of the power law – fewer opportunities but far bigger outcomes.
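As a quick back-of-envelope check, the per-deal gap implied by those figures is stark. A minimal sketch, using only the numbers cited in this section:

```python
# Back-of-envelope arithmetic behind the "power law on top of the power law"
# claim, using the M&A figures cited above (2015-present).
software_deals, software_value = 8_500, 587e9   # software infra M&A count, total value
compute_deals, compute_value = 1_300, 705e9     # compute / semiconductor M&A

deal_count_ratio = compute_deals / software_deals   # ~15% of the deal count
avg_software = software_value / software_deals      # average outcome per deal
avg_compute = compute_value / compute_deals

print(f"compute deal count: {deal_count_ratio:.0%} of software infra")
print(f"avg software infra outcome: ${avg_software / 1e6:.0f}M")
print(f"avg compute outcome: ${avg_compute / 1e6:.0f}M")
```

That works out to roughly $540M of value per compute transaction versus roughly $69M per software infrastructure transaction – about an 8x gap in average outcome size.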

We believe our thesis will only grow in strength thanks to the token exponential. Consumer adoption is accelerating as models improve and new use cases emerge. AI is moving from occasional interaction to daily workflow. Swarms of agents are running continuously in parallel. Enterprise infrastructure is migrating from CPU-based machine learning to GPU-based LLMs. Beyond knowledge work, AI will expand into science, robotics, security, entertainment, and products we have not yet imagined.

The simplest evidence is personal. AI is not just pervasive in my life; my usage is accelerating rapidly. My wife and I just had our second baby; ChatGPT was around for the first, but now Chat is like a doctor/therapist in our pocket. At Primary, Gaby and I are in deep debate with Claude around our compute thesis. The firm is restructuring our workflows around AI tools in real time. Our token usage today is easily a hundred, if not a thousand, times greater than a year ago at this time.

And underneath it all sits the same constraint: compute. So it should come as no surprise that we think Jensen is directionally correct when he suggests a $3 to $4 trillion buildout may arrive this decade alone.

The Bear Case

Of course, not everyone is a Jensen bull. Michael Burry, the seer of the housing bubble, has $1.1 billion in short bets against NVIDIA and Palantir – and he's not alone. The bear case: infrastructure spend won't yield equivalent value. Anthropic and OpenAI will make over $60B in combined revenue, set against over $600B of hyperscaler capex. Google spent $24B in capex in Q3 2025, only to make $15B in its cloud business. Enterprise revenue is at risk – OpenAI, Anthropic, Google, Microsoft, and Palantir all want the entire enterprise wallet. Even early winners like ServiceNow are trading back at 2023 prices as the market sorts winners from losers.

Naysayers are pointing to signs of "bubble behavior": circular revenue deals where NVIDIA invests into OpenAI which buys NVIDIA hardware; depreciation concerns as chip cycles accelerate; unsustainable spending with OpenAI committing to nearly $1T in compute while burning $9B a year; and questionable valuations across asset classes, with sketchy pricing practices and stressed debt markets.

The behavior – irrational exuberance, circular revenue, fantastical projections – all just screams bubble. You can hear the echoes of the dot-com bubble when people speak of tokens as proof of value. Or the telecom overbuild. Or the 2008 financial crisis with the funky debt.

The linchpin of the bull case is simple: AI needs to deliver value. We believe it will, and then some.

In Defense of the Bull Case

The historical differences that defend Jensen's bull case matter:

  • 2001: Overbuilt fiber sat at 5% utilization. Today, every GPU yearns to run at capacity. Even a 1% increase in utilization can be a billion dollar revenue opportunity.
  • 2008: Money flowed into unpayable consumer debt with no cash generation. Today, hyperscalers with $500B+ in combined annual free cash flow are investing in productive infrastructure. Meta, Microsoft, Google, Amazon, and SpaceX are good debtors.
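To make the utilization point concrete, here is an illustrative sketch. The fleet size comes from the figures earlier in this piece; the hourly rental rate is an assumed, hypothetical number, not one from the text:

```python
# Illustrative, hypothetical arithmetic for "even a 1% increase in utilization
# can be a billion dollar revenue opportunity."
fleet = 3_000_000            # H100-equivalent systems (figure cited above)
hourly_rate = 4.0            # assumed $/GPU-hour rental price -- hypothetical
hours_per_year = 24 * 365

extra_revenue = fleet * hourly_rate * hours_per_year * 0.01  # +1% utilization
print(f"+1% utilization = ${extra_revenue / 1e9:.2f}B per year")
```

At these assumed prices, a single point of utilization across the deployed fleet is on the order of a billion dollars of annual revenue, which is why every GPU yearns to run at capacity.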

OpenAI went from $2B, to $6B, to $20B in revenue over the last three years. Anthropic went from $1B to $9B in 2025 and is on track for $30B next year. This is with only 10-20% of global consumers using these products, no ad monetization, and barely a crack at the enterprise opportunity.

Circular revenue becomes problematic when no value is created. NVIDIA invests in OpenAI and CoreWeave, then sells them GPUs with usage guarantees. But this reflects genuine scarcity and real demand – NVIDIA captures value at multiple points in a genuinely expanding market, rather than recycling dollars in a closed loop.

And perhaps most importantly, geopolitical competition removes the option to slow down. There's consensus we cannot cede AI leadership to China. That requires more compute.

An Unprecedented Opportunity for Value Creation

Maybe there should be a rule in investing that you're not allowed to say, “this time things are different.” But ... the all-important word in that sentence is “time.” We are undoubtedly in bubble territory. I got into tech in 2010 and, for 12 years, I heard talk of bubbles. And then the bubble burst. The value creation in that time was astronomical, and it will be de minimis compared to what is taking place now. Said another way, we believe years of demand, endless innovation, and cash-rich companies financing the build all point to the fact that things are different this time, for now.

We believe the next phase of compute will require more than iteration. It will require step functions. In some cases, discontinuous jumps in architecture, efficiency, and system design that incumbents are not economically or culturally positioned to lead. In moments like this, the opportunities emerge not because incumbents are asleep, but because they are rationally bound to the status quo. In moments like this, the bottlenecks move faster than the incumbents.

The Founders Building the Future

Ultimately, this thesis is brought to life by founders ambitious enough to build where the stack is hardest. The ones we are most excited to back combine deep technical realism with a willingness to challenge what the system has accepted for decades.

A new generation is showing up. Semiconductor and computer architecture programs are surging, and young builders want to rebuild the physical foundations of the modern economy. They bring brains, audacity, and an AI-native intuition for where the world is heading.

The teams that win, though, will pair that first principles ambition with seasoned builders who know how to ship silicon, scale systems, and execute in the real world.

Since 2023, we have taken a high conviction approach to investing behind this view. We have backed the teams at Etched, Atero (acquired by Crusoe), Haiqu, and several more in stealth across memory, alternative computing, and other layers of the stack.

Compute is the limiting factor in a world of accelerating intelligence. The token exponential shows no signs of slowing, and the resulting pressure continues to expose structural cracks that incumbents alone cannot repair. History suggests that moments like this create rare openings for new entrants.

If you are building in compute, now is the moment. We want to meet you early, and partner with you as you shape what comes next.

Ollie

A journey from incubation to industry leadership

Last week, Ollie announced that it is being acquired by Agrolimen, a large Spain-based multinational consumer goods company. Ollie redefined dog nutrition with high-quality, human-grade, refrigerated food tailored to each pet’s needs. Anticipating the rise of pet humanization and the fresh pet food category, the team built a trusted brand centered on clean ingredients, balanced cooking processes, and visible health outcomes—from improved digestion to shinier coats. Ollie’s personalized, minimally processed meals and its deep connection with consumers helped it become a standout leader in the $66 billion U.S. pet food market. Ollie was not only an innovator in the pet food industry but also for us at Primary, as it was the very first business we incubated.

The original ideas for some of the most well-known startups came from surprising places. Twitter was born out of another startup called Odeo, when a then-employee named Jack Dorsey had an idea for a status-update/SMS service. Stewart Butterfield was building a multiplayer game called Glitch that flopped, but the internal communication tool his team built didn’t – it became Slack. Even my first company, Community Connect, which ran some of the first social networking sites, including BlackPlanet.com, was inspired by a conversation I had as an investment banking analyst working with the founder of a company we were trying to take public. That founder had a thesis that online communities, which then lived primarily in closed online services such as AOL, would eventually migrate to the Web. The comment pre-dated Facebook by almost a decade and sparked the start of my company. After I sold Community Connect, I realized that I would actually be better at generating startup ideas as an investor than I had been as a founder. When I was a founder, I wasn’t studying business models all day; I was laser-focused on my own business. Ideas often spawn from pattern recognition, and being an investor gave me the luxury of studying and seeing so many business patterns. The story of Ollie is an example of that pattern recognition at work.

When we first started Primary in 2015, two hot startups in New York City were Blue Apron and Plated, delivering meal kits so people could be better home cooks. Both companies started in 2012 and were growing quickly, which inspired a number of similar companies with successful exits, such as Freshly (acquired by Nestlé for $900 million) and Home Chef (acquired by Kroger for $700 million). The key insight from this wave of meal kit delivery companies was that people wanted a more convenient way to cook for themselves and their families. That insight reminded me of when I first moved back to New York after college.

In the late 90s, I was living in a one-bedroom apartment in midtown Manhattan with the dog I had brought back with me from college. A little old Italian woman who lived in the building used to hang out in the lobby. She would sit there every morning with her miniature Doberman on her lap and loved to chat with residents as they came in and out of the building. One morning, as I came in from taking my dog for a walk, she asked me in her Italian accent, “Do you love your baby?” I responded, “Yes, I do.” She then asked, “How old is she?” I told her she was three years old, and then asked how old her dog was. “My baby is 18 years old,” she replied. Surprised, I asked, “18 human years??” She said, “Yes! My secret is that I cook for my dog!” She went into detail about how she cooked her dog a balanced human meal of fresh meat and vegetables, every meal. I listened, but in the back of my mind I thought she was just a kooky woman and didn’t think much of it beyond being amused.

Years later, my dog passed after going through a number of health issues. I really did love my dog, and I tell people that losing a pet is devastating – you truly feel like what people now call a pet parent. One day, remembering my dog got me thinking of my old neighbor and whether there was any real logic to her cooking for her dog. I went down the rabbit hole: first a number of Google searches, then a number of books about the pet food industry. I was shocked! What I learned was that most pet food is absolutely terrible – made from unhealthy ingredients and way over-processed. My old neighbor was right!

As an investor, you are looking for solutions that are not incrementally better but at least 10x better in product and experience than what is out there. A pet food brand made of human-grade ingredients, gently cooked, well balanced for your furry loved ones, and shipped to your doorstep instead of requiring you to cook yourself met that 10x criterion. I wrote a memo for what was then called “Project Milo” and presented it at a team meeting at Primary in 2015 – the firm’s first year, with our newly minted $60 million seed fund. The team, especially my co-founder and Partner, Brad, saw my conviction in the idea and without reservation said we should incubate it.

I told this story this past year at an Ollie all-hands offsite in Nashville, with over 150 Ollie employees in the audience. A woman who worked on the customer experience team came up to me afterward and thanked me for starting the company; her dog had had so many health problems until she started feeding her Ollie, which inspired her to work at the company. I told her she shouldn’t thank me – I need to be thanking her and the rest of the folks in the room. I may have been the spark that started the Ollie journey, but it was really the hard work and dedication of everyone in that room, and in Ollie’s past, that built Ollie into a $250 million business that has served millions of pet parents. It has been an honor to have been that spark and to have served on Primary’s behalf as an investor and board member, but the real heroes of the Ollie journey are Nick Stafford, the CEO, and the Ollie leadership team. They have delivered on the mission of redefining dog nutrition and have done it with care and excellence. The journey was far from easy: they had to innovate every part of the business, from manufacturing, supply chain, and logistics to the digital experience. And they did it through incredibly uncertain times, including challenging fundraises and the pandemic. The team always believed in the mission, and the customers responded. Thank you, Nick and team. Your work and dedication are so appreciated. And congratulations to Agrolimen on acquiring a terrific business. I look forward to you carrying the Ollie brand forward.

1mind

The $30M bet on AI-powered go-to-market execution

Today’s “modern” go-to-market motion often feels anything but modern because it is fundamentally broken for both sellers and buyers. Commercial teams struggle to navigate growth and efficiency trade-offs, and buyers invariably feel those trade-off tensions in underwhelming customer experiences. While AI promises transformation, to date it has largely delivered only incrementality. Buyers and sellers are drowning in point solution experiments that don’t materially move the needle. Nobody wins.

Growth commands a premium over profitability even in the tightest market conditions, so SaaS companies are always charging hard at top-line ARR growth, even if that means sacrificing quality and efficiency along the way. Everyone loves to talk about the eye-popping growth of the AI darlings, but we live in a tale of two cities: in the non-AI-native world, where top-line ARR growth has been compressed, companies have been forced to pay more attention to their bottom lines, meaning all eyes are on efficiency. Metrics such as “APE” (ARR/FTE) are the talk of the town.

More often than not, getting more efficient means cutting costs. But the reality for sales-led businesses is that 50-80% of their sales and marketing burn is headcount, so when they push to make teams more efficient, it’s usually the buyer who suffers: slower response times, reduced access to answers until they can prove they are fully “qualified,” and so forth. Buyers are consistently underwhelmed and frustrated when evaluating new solutions because the process is so disjointed for them: schedule a meeting with a BDR only to get minimal information while they evaluate you, wait an entire day for an account executive to sync with a solutions consultant on technical questions before circling back, etc. As the old sales adage goes, “time kills all deals,” and many deals are lost before a qualified lead is ever even logged in the system purely on account of the bumpy buyer experience.

Even high-growth or mature sales teams with plenty of resources haven’t delivered much better for their buyers. The larger commercial teams become, the wider the distribution in team performance and quality, thus weakening the efficiency of the overall funnel and rate-limiting growth (while still disappointing the end customer).

The product-led growth (“PLG”) movement proved that it is possible to operate in a leaner capacity while still delighting customers, but “product-led” was the operative part of that equation. We believe that AI-led growth will democratize this potential to every shape, flavor, and size of company in a way that benefits buyers and sellers alike. This is why we’re excited to introduce 1mind and announce their $40M in total funding following their recent $30M Series A led by Battery Ventures. We were privileged to lead 1mind’s $10M seed round in the spring of 2024, and we are excited to double down with our support as 1mind enters its next chapter of growth.

Today, go-to-market (“GTM”) executives are inundated with pitches for AI point solutions and co-pilots. Siloed tools lack the efficacy of a horizontal solution (i.e. a sales-facing AI would be much more helpful to a prospect if it was also trained on customer support questions, internal knowledge bases, etc.) and broader, more horizontal LLMs don’t offer the level of accuracy CROs require. And at the end of the day, humans are inherently rate-limited in how fast they can read, digest and act on information provided by co-pilots. 1mind deploys human-like AI to GTM teams to augment and ultimately replace full-time employees. As Jacco van der Kooij wrote in a recent Winning by Design research paper, “the best use of AI is not to improve people's performance through co-pilots but to create disproportionate improvement by using AI to optimize the process entirely and replace the seller.” GTM teams are hungry for a consolidated approach purpose-built for them.

We aren’t talking about agents or AI wrappers. 1mind’s superhumans have faces, voices, personalities, knowledge, motivations and a GTM brain. They have their own AI-optimized virtual desktops that allow them to do anything a human can do on their computer, including joining impromptu video calls to give presentations or demos. 1mind promises the power of a salesforce that never sleeps, learns at lightning speed, and evolves continuously.

Amanda Kahlow, 1mind’s founder and CEO, was previously founder and CEO of 6sense, the pioneer of B2B account-based marketing and intent data. After taking some time off following that incredible journey, Amanda sprang back into action when she identified another category-creating opportunity to transform how B2B companies grow, through what she’s coined AI-led growth, or AiLG. As Amanda regularly says, 6sense was built to find buyers and 1mind is built to close them. Sachin Bhat, Amanda’s right hand and 1mind’s CTO, is an experienced enterprise technologist and former founder who most recently spent five years scaling complex global engineering and data teams at Rippling.

We believe that Amanda’s reputation and reach make her uniquely suited to usher GTM teams into the next (AI-powered) era of growth and that the formidable commercial-technical duo of her and Sachin will build the definitive GTM platform and accelerate AI adoption across all flavors of commercial and customer teams.

Indeed, sales is just the beachhead: 1mind’s vision is to be all-knowing for an organization, with targeted use cases that run the gamut from top of funnel qualification all the way through to ongoing technical support. 1mind will become the brain, face, and voice of commercial and customer organizations. And as 1mind executes its horizontal, end-to-end play, we believe they will be well-positioned to box out other categories of software, including training and enablement software, internal knowledge management tools, demo automation software and more.

No one tells a company’s story better than its customers, and 1mind’s reviews are off the charts. To date, the team has successfully partnered with incredible brands such as HubSpot, Nutanix, Owner, Seismic, New Relic, and Boston Dynamics; 30% of customers expand to new use cases within 90 days of going live, and several of 1mind’s customers have joined the cap table.

AI-led growth is the future of go-to-market. There will always be things humans will do better than AI, but there are many things AI will consistently do better than humans, and we’re excited to live in a world where that is celebrated and encouraged. As Amanda says, “if you are worried about your AI hallucinating, have you asked yourself whether your sellers hallucinate?”

It's exhilarating to imagine SaaS companies no longer needing to think about account prioritization or ticket deflection because AI does the heavy lifting for them, never mind the resulting implications for buyer satisfaction. If you’re a revenue leader committed to true transformation in your go-to-market organization, we encourage you to engage with 1mind’s superhuman (Mindy) today.

Rethinking EHRs in an AI-first World

Since 2020, >$30Bn has flowed into verticalized EHRs, largely through PE transactions. Think athenahealth ($17B), Inovalon ($7.3B), ModMed ($5.3B), WellSky ($3B), NextGen ($1.8B), Nextech ($1.4B), Therapy Brands ($1.3B), and Experity ($1.3B). These deals rarely show up on VC radars, but they signal two important things. First, across most specialties, clinicians are still running core workflows on systems built more than a decade ago. Second, the largest software outcomes in healthcare over the past decade are generally not venture-backed!

At Primary, we’ve been thinking a lot about what the future of application software looks like and how AI will reshape SaaS more broadly. There’s already plenty of debate about systems of record becoming systems of action. This piece is not here to debate that future. Instead, we want to imagine a future where existing systems of record are paired with, or in some cases replaced by, true systems of action. In healthcare, EHRs are the most natural place to start. We will be posting a handful of pieces on this theme in the coming months that go deeper on the market, the product, and the potential business model shifts enabled by AI.

To set the stage — taking on specialty EHRs is not for the faint of heart. The process of ripping out and replacing any system of record is brutal. Most providers genuinely dislike their EHRs, but they live inside them every day. The change management alone is enormous, and the perceived risk often outweighs the promise of incremental improvement. The standard venture playbook has been to build a point solution on top of an existing system of record, earn trust, and then slowly eat away at the core platform before (hopefully) making the swap. But, watching how the “AI scribe wars of 2025” are unfolding makes us skeptical that this will work at scale. Incumbents are willing to copy anything that starts to matter. See Epic, Athena, and many more examples.

Epic is the most egregious example of how quickly momentum can shift. For years, the Abridge-Epic partnership was the engine behind Abridge’s breakout growth. Epic’s Workshop program effectively created a sanctioned, first-of-its-kind path for Abridge to build deep integrations, co-sell alongside Epic, and access early customers inside the Epic network.

Earlier this year, Epic shut down their Workshop program right before announcing its own competing scribe product. The message was unmistakable. And now, we’re seeing the same dynamic emerge across other multi-product EHR platforms. As incumbents roll out their own native AI features, they not only close off third-party distribution pathways, but also undercut startups on price. Even if Abridge continues to ship a meaningfully better product, incumbents will always have structural advantages in distribution and pricing power. To be clear, we’re rooting for Abridge and the entire startup scribe category. At the same time, it’s hard to look at the market dynamics and not believe that a disproportionate share of enterprise value will ultimately accrue to the platforms that already sit at the center of clinical workflows.

This raises a harder question: what does it actually take to build durable value in this market? One possible answer is not to sit on top of the EHR at all, but to become the system that coordinates and executes work end-to-end. In practice, that does not mean waving away the complexity of being an EHR. Any credible “core operating system” must either natively include or tightly integrate the fundamental components of an EHR. The difference is not whether you store records, but whether the system is designed primarily to document work or to actively move it forward. Seat and documentation-based pricing in today’s EHRs reinforces incremental change, while AI-native systems will likely require pricing tied to outcomes or productivity.

Another possible answer is to actually build an EHR. Building a modern EHR today is easier now than at any point in the last decade. Development cycles are faster, infrastructure is cheaper, and interoperability standards and data export requirements have improved in certain parts of the market. More importantly, legacy systems are reaching the limits of how far they can be stretched. Many have layered decades of workflows, pricing models, and incentives that make meaningful re-architecture extremely difficult.

This is why we’re interested in exploring the question, “Are AI-native and agentic EHRs a good place to invest?” From what we’ve seen in the market, it’s very difficult for existing systems to contort themselves far enough to truly redefine how work gets done. At best, incumbents can deliver incremental improvements or integrated point solutions (e.g., a scribe). At worst, they simply bolt on decades-old workflow solutions without meaningfully shifting productivity.

AI now makes something else possible: systems of action that interpret context, decide what should happen next, execute tasks, and close loops on behalf of users. Perhaps you pair that with a business model that undercuts incumbents on price and taps into new revenue streams, and you can imagine a product that is ten times better at a tenth of the cost. As one physician put it, “Imagine if I could just focus on my patients and everything else was handled?” For clinicians and patients, that is not a marginal improvement. It is a step change.

So what are the right market dynamics?

As we consider this market, we are not here to argue that the venture opportunity lies in unseating Epic or Cerner. The hospital market is structurally protected by switching costs and the sunk cost fallacy. Epic, in particular, also benefits from being private and unconstrained by quarterly earnings pressure, allowing it to reinvest aggressively for the long term. The more compelling opportunity sits in ambulatory care, where the market remains fragmented and shaped by a long tail of legacy EHRs that have not reinvested as aggressively as Epic.

Across the 38 medical specialties and 89 subspecialties, most non-hospital-owned clinicians still use vertical EHRs built more than a decade ago. The UX and UI are often akin to digital paper and hide deep architectural limits. Practice management, RCM, patient engagement, imaging, intake, analytics, and documentation sit under one brand, but rarely function as a single system.

If you ever want an unfiltered readout of how providers feel about their EHR, spend an evening on r/medicine or any specialty subreddit. You will find endless threads written by clinicians complaining about clunky interfaces, “note bloat,” billing workflows, and general “why does this thing hate me?!” energy. There is no shortage of pain.

But, finding real opportunity for systems of action means looking past generic frustration and into the structure of each specialty. So far, four dynamics stand out.

  1. Provider independence. A specialty must have enough non-hospital-owned practices to create a real market for software adoption. In many fields, consolidation into health systems removes purchasing autonomy entirely. Specialties like dermatology, ophthalmology, GI, pediatrics, PT/OT, urgent care, and behavioral health still have meaningful populations of independent or PE-backed groups that make their own tech decisions. These are also the environments where a new operational layer can be evaluated, purchased, and deployed without multi-year IDN procurement cycles.
  2. Competitive dynamics. Every specialty has multiple EHRs built for it, but not all markets are created equal. For example, mental health has over 35 different verticalized EHRs, while specialties like dermatology are dominated by a single solution (ModMed, with ~85% market penetration).
  3. Workflow complexity. Systems of action shine where messy, cross-system work dominates. Imaging coordination, pathology loops, protocol-driven care, prior auth, referrals, lab results, patient messaging, scheduling changes, and multi-party communication all fit this pattern. As we consider expanding the scope of an EHR, the ability to take on this work can meaningfully expand TAM.
  4. Administrative burden. In many practices, the true bottleneck is staff capacity, not clinician capacity. Intake, eligibility, benefits checks, documentation prep, coding, edits, denials, and follow up consume enormous time. Vertical EHRs may provide screens, but not relief. Similar TAM expansion dynamics are also at play here.


If a true venture-scale opportunity exists, it likely sits at the intersection of these factors, paired with a credible strategy for reducing switching costs. That may come from better data portability, improved migration tooling, purpose-built adapters, or even direct economic incentives to offset the pain of change. AI may not eliminate switching costs, but it can meaningfully compress them. In our view, an AI-native EHR is most compelling in a specialty that has a large base of independent providers, a more fragmented incumbent EHR landscape, and a higher degree of workflow and administrative complexity.


What we’re still figuring out


We’re far from having all the answers. A few key questions we’re still working through, which we will explore in subsequent pieces:

  • How do you reduce the switching cost for providers as much as possible? AI could be the key to “why now” here on the technical side, but is that really enough?
  • What are the dynamics of an EHR product that is truly 10x better than an incumbent?
  • Do you need to lean into business model innovation to make this feasible for a provider and investable for VC? E.g., providing the EHR software for free and monetizing on the back end via pharma services, RCM, or owning other workflows
  • What are the specialty “ontologies” that are most automatable, and what will require a human in the loop?

If you are building in this world, we would love to chat. We deeply believe the next generation of vertical EHR companies will not look like the last one. Be on the lookout for additional pieces that go deeper into AI-driven business models and product.

Additionally, a huge thank you to Brendan Keeler, Nikhil Krishnan, and JP Patil for their thoughts on this piece!

Valerie Health

Why we invested in Valerie Health

When we first met Pete Shalek and Nitin Joshi, it was immediately clear that they were not setting out to build another point solution. They were starting from first principles and asking what it would take to rebuild the operational backbone of independent healthcare practices. That ambition resonated with us. Independent practices are the foundation of American healthcare, yet they are increasingly burdened by administrative complexity that has reached crisis levels.

Across the U.S., providers now spend more than one trillion dollars each year on administrative costs. That is close to one quarter of total healthcare spending. Much of this cost is a direct result of outdated workflows, manual processes, and fragmented software systems that were never designed to communicate with one another. The administrative load is no longer just an efficiency problem, but rather a structural force that drives burnout, reduces access, and erodes the viability of independent practice ownership.

The AI Front Office for Healthcare

Valerie Health is building a full-stack, AI-native front office for independent practices. Their approach is simple to describe, yet incredibly difficult to execute. The company becomes an extension of the practice and takes responsibility for the innumerable workflows that make a practice run. Think: intake, referral management, scheduling, messages, paperwork, and countless small but essential tasks that collectively consume the majority of staff time.

Valerie’s agents handle this work end to end. They integrate with existing systems inside the practice, coordinate across channels, and manage workflows that currently require a human to shepherd across multiple tools. By doing so, Valerie delivers something the industry has long promised but never achieved. It creates immediate, measurable lift for practices without the friction of a lengthy implementation or major workflow change.


Their flagship products focus on intake, referrals, and scheduling for specialty care. These processes are among the largest administrative pain points for both staff and patients. Valerie’s ability to streamline them is already unlocking meaningfully better patient experiences and dramatically lighter workloads for front office teams. As the platform expands into more specialties, the implications become even more profound. A modern operating layer for independent practices does more than just save time. It enables smoother care coordination, greater patient access, and a more sustainable economic model for the physicians who anchor our healthcare system.

Why This Team

Pete and Nitin bring a combination of healthcare depth, technical excellence, and operational rigor that is rarely found in one founding team. Pete previously founded Joyable, a mental health startup, sold it, and then served as Chief Product Officer at AbleTo and at Stellar Health, one of Primary’s portfolio companies. His understanding of provider workflows runs deep and is grounded in years of building software that directly impacts care delivery. When he left Stellar, we immediately told him we wanted in on whatever came next.

Nitin co-founded Uber Health, scaling it from zero to more than $100 million in ARR. He is both a technical founder and a systems thinker who knows how to build products that work at scale in highly regulated environments. Before Uber Health, he spent time as an engineering manager at Stripe where he honed his craft in building resilient, high throughput systems.

Together, they combine product intuition, healthcare expertise, and engineering leadership. It’s hard to imagine a better pairing for a company that must deeply understand provider operations while also building a highly sophisticated automation engine.

Why Now

Running an independent practice has never been more challenging. Reimbursements are tightening, wages are rising, competition from scaled platforms is increasing, and administrative requirements continue to grow. Yet, AI has reached a point where it can reliably handle structured and unstructured workflows at production quality.

This creates a rare moment where technology can change the trajectory of an entire class of providers. Valerie is not retrofitting AI into an older product. They are building an AI-native company that is purpose-built for the workflows that define practice operations. Valerie sits squarely in one of the strongest tailwinds in healthcare. Providers need operational leverage and cannot hire their way out of the problem. AI can finally deliver that leverage.

After co-leading Valerie’s Seed with General Catalyst, we are so excited to now double down in Valerie’s Series A and welcome Redpoint to the team.

We believe Valerie has the team, technology, and timing to define this category and we're thrilled to be their partners.

Bobyard

Capturing revenue contractors leave on the table

Bobyard uses computer vision and AI to turn messy construction drawings into fast, accurate takeoffs and estimates—freeing contractors to bid on more work with more confidence and less overhead. Primary is proud to be the lead investor in Bobyard’s seed round and excited to continue supporting founder Michael Ding and the team as they announce their $35 million Series A, led by 8VC.

The bottleneck between drawings and bids

Every construction project starts with a plan set. Before anyone can break ground, someone has to turn those drawings into quantities of material, labor, and equipment, then into a scope of work and a bid.

For mid-sized and large general contractors, that work falls to large teams of full-time estimators both on and offshore. For smaller firms, it often lands on owners and project managers who are already buried in work and hate the idea of counting items page by page with archaic software.

Takeoff work is slow, error-prone, and constrained by people, not demand. The construction industry is already short on estimators, and that gap is widening. Even as employment growth stalls, thousands of estimator roles need to be filled each year just to keep up. Contractors feel that constraint every time they want to bid a new job and realize their estimating team is tapped out.

Legacy takeoff tools help a bit, but they still expect humans to “teach” the software where everything is on the drawing: tracing rooms, placing shapes, counting symbols by hand. They facilitate the workflow; they do not do the work.

Bobyard flips that around.

Why now: computer vision that actually does the takeoff

Bobyard’s core product is an AI-native takeoff engine that has helped contractors increase the number of bids they produce by over 5X. With Bobyard, contractors upload a PDF, select the legend, and run analysis. Under the hood, dozens of computer vision models—trained specifically on construction drawings—spin up in parallel to:

  • Detect symbols and count them
  • Measure linear footage
  • Segment and measure areas
  • Parse text and notes

Instead of forcing estimators to drag polygons over every planter bed or conference room, Bobyard’s proprietary models recognize each distinct region, size it, and calculate quantities automatically. First-pass accuracy is already in the mid- to high-ninety-percent range for the initial trades, with more than ninety percent of the work fully automated.
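To illustrate the fan-out pattern described above (a simplified sketch of the idea, not Bobyard’s actual code; the function names and values here are entirely our own invention), independent analysis tasks can run concurrently over the same drawing and merge into one takeoff:

```python
# Illustrative only: each stub stands in for a trained model. The real
# pipeline's models, outputs, and orchestration are not public.
from concurrent.futures import ThreadPoolExecutor

def count_symbols(page):   # e.g. doors, fixtures, outlets
    return {"symbols": 42}

def measure_linear(page):  # e.g. wall linear footage
    return {"linear_ft": 380.0}

def segment_areas(page):   # e.g. room / flooring square footage
    return {"area_sqft": 1250.0}

def parse_notes(page):     # e.g. spec text and callouts
    return {"notes": ["GWB both sides", "paint P-1"]}

def run_takeoff(page):
    """Run every analysis task in parallel and merge their results."""
    tasks = [count_symbols, measure_linear, segment_areas, parse_notes]
    takeoff = {}
    with ThreadPoolExecutor() as pool:
        for result in pool.map(lambda task: task(page), tasks):
            takeoff.update(result)
    return takeoff

takeoff = run_takeoff(page="plan_sheet_A1.pdf")
```

Because each task is independent, the wall-clock time of a takeoff is bounded by the slowest single model rather than the sum of all of them—one plausible reason a parallel architecture matters at this scale.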

On top of that, Bobyard handles scope-of-work generation and estimation: material and labor costs, templates, markups, and exports that plug into existing workflows. It is a complete workflow from drawing to bid.

The impact for customers is straightforward: takeoffs that used to take dozens of hours can be turned around in a fraction of the time, with better consistency and fewer misses. That lets each estimator support more bids, say yes to more opportunities, and grow revenue without adding headcount.

Meeting Michael Ding: from math olympiad to front office painkiller

We met Bobyard founder and CEO Michael Ding through Pear VC’s accelerator. Mar Hershenson knew about our thesis and deep connectivity across the built world and flagged him as one of the strongest founders to come through their program.

Michael grew up in the Bay Area, started coding at six, competed in math olympiads, and graduated valedictorian from one of the most competitive high schools in the region. At Stanford, he focused on computer science and AI, but his real education came from the field: in three months, he spoke with about 250 general contractors to understand how they actually estimate work.

He did not do this from behind a desk. He cold emailed. He called. He stood at Home Depot at 7 AM and talked to contractors one by one. Aside from proving the level of obsession we look for in founders, those conversations pointed to the same pain time and time again: takeoffs and estimates were the bottleneck.

Only after validating demand did he start building. In roughly a month, with occasional help from a roommate, Michael reproduced and improved on the capabilities of well-funded AI takeoff competitors. From there, he began training models trade by trade—starting with walls, paint, flooring, windows, doors, and hinges, and pushing into more complex categories over time.

What stood out to us:

  • He is extremely structured in how he learns and decides what to build.
  • He combines deep technical ability with real hustle on sales and customer discovery.
  • He moves fast, but is careful about accuracy and reliability—exactly what you want in software that controls millions of dollars of bids.

A big, underestimated market hiding in the “front office”

On the surface, takeoffs and estimating might look like back-office work, but in reality they are the key unlock for revenue.

In the U.S. alone, construction companies spend billions each year on estimator labor. Many large GCs also outsource a significant portion of estimation to lower-cost markets. Smaller contractors sacrifice nights and weekends to get bids out the door. That is all “hidden spend” on the same underlying task: turning drawings into numbers you can sign.

Long term, we believe Bobyard can become the operating system that connects drawings, takeoffs, bids, and, eventually, more of the construction value chain. The team is quickly expanding into all major construction trades, and takeoff is the foundational data layer that unlocks every other piece of preconstruction.

We are excited to support Michael from this early stage as he and the team build toward that vision—and as Bobyard becomes the default way contractors turn plans into projects.

Rewiring Cost Containment in Employer Healthcare

Employer healthcare spend is at an all-time high—up ~50% since 2017—despite tens of billions invested in digital health. Over the past decade, large venture-backed companies have been built in navigation, MSK, metabolic, and behavioral health. More than $60 billion has flowed into the category since 2015. Hinge Health’s IPO this year, alongside scaled players like Maven Clinic, Included Health, and Omada, shows how much capital and talent have poured into attacking high-cost categories. Yet, even with public market liquidity for some of these companies, costs for employers continue to rise, and member engagement remains low.

We believe AI represents the biggest opportunity we’ve seen in over a decade to break this cycle. The rise of AI-native infrastructure changes the cost equation, the engagement equation, and the speed at which new models can scale. For the first time, it’s possible to unify fragmented point solutions, personalize the member experience at scale, and deliver true navigation and cost containment tools at a price point that works for the mid-market—not just the Fortune 100. GenAI and LLMs reduce service costs, enable real-time data orchestration, and power incentives tailored to the individual. The result: a new system architecture that engages people earlier and shifts behavior in ways that were cost-prohibitive just a few years ago.

Our belief is that a large part of the issue has been structural—the explosion of point solutions has created siloed member experiences. Companies focus on engaging patients in their specific swim lanes and do little to consider a patient’s holistic journey or shift their overall health trajectory. Meanwhile, “navigation” layers often lack the data integration and consumer orientation to deliver durable results.

What we’re hearing in the market

Over the past few months we’ve spoken to over 100 benefits leaders, brokers, operators, and actuaries, and a few consistent themes keep coming up:

The status quo isn’t working

The average employer has 6-10 point solutions with no unified layer to orchestrate them. Employees are confused. Adoption is low. True ROI is rarely there.

Costs are rising fastest, especially in the mid-market

Premiums continue to rise, with many employers reporting 10–20% YoY increases. In some regions, the cost of family coverage is projected to reach ~$50,000/year by 2030—exceeding average wages in many industries. At the same time, spend is highly concentrated: a relatively small set of high-need members can drive tens of millions in annual costs, leaving employers exposed to volatility they can’t predict or plan for. This financial pressure is increasingly pulling CFOs into benefits decisions and amplifying the demand for ROI that is clear, defensible, and immediate.

Broker buy-in is the key to employer adoption

Brokers control access to most employer accounts, and they won’t champion solutions unless the ROI for their clients is clear, defensible, and easy to explain. The offerings that break through will be the ones that deliver measurable savings and engagement without adding operational drag.

How AI changes this

From studying the largest first-generation employer-market companies, three constraints stand out: engagement is the holy grail, human service teams drive high costs, and data is siloed. GenAI tackles all three:

Engagement

AI-native navigation personalizes outreach across chat, text, and voice, sustaining member interaction over time

Human cost

Conversational AI resolves most routine questions and routes members to the right care, freeing staff for complex cases

Data silos

Real-time data layering unifies claims, labs, and SDoH, enabling early steerage and targeted incentives

These shifts make it possible for a $2-3 PEPM product to match the impact of $15+ PEPM service-heavy platforms, unlocking mid-market and SMB segments that were previously out of reach.

Rewriting the Cost Containment Playbook

If V1 was defined by fragmented point solutions, V2 will be defined by solving navigation. Navigation is the key unlock for cost containment—and for the first time, it’s possible to do so at scale and at a cost point that works beyond the F100.

The goal isn’t to invent another point solution, but rather to create a system architecture that connects the existing ones and intervenes earlier in the patient journey. The essential elements are:

  • A centralized AI interface as the member’s front door to benefits and care decisions
  • Steerage to high-value providers and contracted bundles
  • Configurable plan design levers like waived co-pays or financial nudges to guide behavior in real-time

The individual tools aren’t new, but the infrastructure and cost structure that supports them is. AI should reduce the marginal cost of navigation to close to zero, allowing for scalable personalization and earlier intervention where traditional models struggled to show savings.

Open Questions We’re Exploring

  • What does it take for AI to be trusted as a front door for high-stakes navigation?
  • For a mid-sized employer, what is the minimum operational, technical, and organizational setup required to actually get a positive ROI from AI-driven navigation / cost-containment?
  • Can this replace today’s bloated benefits stack or does it end up becoming just another layer?

Why We’re Spending Time Here

This feels less like a new trend and more like a second chance to get cost containment right – especially for the part of the market that’s been neglected and underserved. Lower delivery costs, flexible plan design, and smarter back- and front-end infrastructure offer a path beyond fragmented point solutions toward something integrated and effective.

We’re continuing to explore and would welcome conversations with others thinking about this space!

Reach out to sam@primary.vc or hannah@primary.vc

The data moats that unlock billion-dollar fintech outcomes

As investors, our job isn’t simply to assess whether businesses are working today, but to prosecute whether they can create durable, long-term value (ideally as large and standalone companies!). We often debate whether a business has a meaningful moat, which we define as a structural – and ideally compounding – advantage one company enjoys relative to its peers.

While discussions around moats have always been a central feature of our internal debates, they’ve become the topic as we’ve watched (a) AI businesses scale at historically unprecedented rates and (b) virtually every venture-investable category become saturated the moment it’s deemed obvious to investors. We’ve also observed that most of our conversations – both internally and externally – end with a comment around data as key to long-term defensibility.

We’d like to add more fidelity to the idea of data as a moat. As investors who study financial services companies for fun, we believe an elegant way to do so is by examining how financial data and network businesses (our favorite subset within financial services) become self-reinforcing, monopolistic market juggernauts – largely by orienting their businesses around a unique and compounding data asset. While revenue ramps are steeper and competition is fiercer in 2025, our view is that the market forces that govern which companies ultimately endure remain the same.

Approach

We started by asking: if we study the origins of financial data businesses, can we (a) reverse engineer their paths to juggernaut-dom and (b) reconcile those paths with venture capital as a funding source tuned for hypergrowth (vs. slow-and-steady growth over years or decades)?

After speaking to founders and builders of some of the most consequential financial data businesses, our hypothesis is that the next great, venture-backed data and network business won’t start off looking like a pure-play data business. Building a valuable data asset simply takes too much time.

Instead, we find that the successful venture-backed companies invert the sequencing: they begin as valuable software businesses in their own right, and only later earn the right to “flip” into data business territory once they’ve amassed a critical, valuable data asset. In short, data businesses tend to work best when they represent the end state – the Next Act of Next Acts.

Data Assets & Businesses: The Holy Grail

All of the largest businesses in financial technology have proprietary data assets at their core. Each of these companies operates in an effective monopoly (or oligopoly) in their respective categories.

Exhibit 1: Incumbent Financial & Data Network Businesses

But if these are such valuable businesses, why are there so few historically venture-backed companies in this comp set? There are many potential explanations, but we asked ourselves: are data businesses structurally incompatible with the startup game?

The Pragmatic Challenge of Data Businesses

The issue with data businesses is that they either:

  1. Were built so deliberately (think consortium models) by industry insiders that they hit critical mass on market participants from day one; OR
  2. Became behemoth data businesses by (near) complete accident

On Consortium Models

The card networks are the best example. Mastercard began as the Interbank Card Association — originally a group of regional banks reacting to Bank of America’s refusal to provide Marine Midland Bank with a BankAmericard regional license. The formation of Visa — fka the International Bankcard Company (IBANCO)—was a “counter”-consortium in reaction to the looming threat of Mastercard.


While some venture-backed startups have looked to launch “consortiums” of sorts, the ownership structure required to truly align participant incentives in this model is fundamentally at odds with how startup cap tables look.

On ‘Happy Accidents’

Accidental behemoths are perhaps even more challenging, as their existence implies that building and funding a data business from the get-go requires founders and investors to bet on something smaller and less exciting before any real economic value accrues. Some examples:

  • Native Distribution + Corporate Balance Sheet. MSCI—a $43 billion market indices business that powers $15 trillion of AUM—started as a free set of stock indices published initially by Capital International (CI) in the ’60s and later by Morgan Stanley (MS) in the ’80s... hence the name MSCI. It took 10+ years of close-to-free distribution for MSCI to monetize its indices at scale.
  • Deep-Pocketed Individual. Bloomberg—a business last valued at $23 billion when it bought out Merrill Lynch’s remaining stake in 2008—started as a bond calculator designed with Merrill. When Michael Bloomberg left Salomon Brothers with a $10 million buyout, he rolled $4 million of that to start Bloomberg in the ’80s.
  • Long-Enough Time Horizon. Moody’s—an $86 billion credit ratings business—started as a financial publishing (i.e. books) business, whereby John Moody would assign ratings to railroad businesses. It took 70+ years for Moody’s to chart a path to becoming the credit ratings juggernaut it is today. In fact, the debt capital markets that they serve today weren’t a thing until the high-inflationary era of the late ’70s.

This approach works within the “comfort” of a large corporate balance sheet, a deep-pocketed individual, or a long-enough time horizon, none of which aligns with a 12-18 month fundraising cadence and a 10-year fund cycle.

Charting a Path to a Venture-Compatible Data Business

Although the historic paths of building juggernaut data businesses fundamentally do not lend themselves to startups, the wrong conclusion to draw would be to avoid them entirely.

What if there’s a way to be highly intentional about building and investing in the next great data business?

There are a handful of venture-backed “hero companies” that we believe have earned the right to scale and monetize a data asset:

Exhibit 2: Venture-Backed “Hero Companies”

Early observations from studying these businesses lead us to a strong set of beliefs. Namely, to build a great, venture-backed financial data asset business one has to start with a strong standalone software wedge. Then, with enough market capture and scale, a player can earn the right to 10x TAM through the monetization of a critical data asset.

There are some critical flywheels and strategies that can help accelerate this journey:

As the cost of developing software trends lower thanks to AI, better product and distribution is at best a medium-term differentiator. Building a core proprietary data asset around the software can become a tenet of true long-term defensibility.


Building an Eventual Venture-Backed Data Business

Distilling what we’ve learned, our current operating framework for building a data asset necessitates:

  1. Scale and Patience. Folks grossly underestimate the critical mass at which a business can start to monetize a data asset, and the time it takes to reach that readiness. Only now, 12+ years down the road, do established players (e.g. Plaid) have the scale and penetration to truly take this path.
  2. A Good Enough—Ideally Great—Standalone First Act. The above implies that getting to the end state is virtually impossible without a great core business. The core must be a scalable, high-margin business from the start capable of getting to a $1B outcome. The data asset play is a force multiplier that unlocks the $10B+ outcome but the strong core business must come first.
  3. An Unconflicted Value Proposition. The data asset cannot be in conflict with the core customer or value delivered as this introduces existential risk to the business. We believe this is why even if a business shouldn’t explicitly operate as a data business from day 1, there needs to be a sense of intentionality in foundation building ASAP.

So What?

Amassing a critical data asset is a wonderful way to build and sustain a moat – so much so that we believe this style of company-building should extend far beyond financial services! We’re excited to spend time with founders who not only (a) can solve a pressing customer pain point with software today, but also (b) have the end-state foresight to use that software to capture data that can generate tremendous downstream value over time.

Circuit & Chisel

Payment rails built for AI agents

Agents are fundamentally breaking the business model of the web. Internet traffic is undergoing a massive shift – away from human visitors to scrapers, bots, and agents. In some cases, bots are visiting a webpage 60,000 times for each visitor they refer. It’s clear that ads that rely on human attention are no longer going to cut it.

As AI agents continue to get smarter and more sophisticated, we’re rapidly entering an era defined by a Cambrian explosion of AI agents across verticals: agents that discover services, make decisions, and coordinate with other agent-based services.

If only they had a way to pay and get paid.

Enter Circuit & Chisel

Circuit & Chisel is building the payments infrastructure that will power the emerging AI agent economy. Its protocol, ATXP, enables instant, nested, delegated, and extremely low-cost micropayments between AI agents — something that today’s traditional payment rails are structurally unable to support.

While there are plenty of digital payment options available today, they suffer from a Goldilocks problem: none of them is quite right for what the emerging agentic ecosystem needs. Digital card rails? Great for large-ticket transactions, too expensive for small ones. Subscriptions? They don’t align with the cost structure of delivering agentic services. We’ve yet to have the “app store” moment of the agentic era. ATXP is that missing link.

ATXP makes it easy to embed functionality into any Model Context Protocol (MCP) server that enables agents to:

  • Discover and pay for tools and services independently
  • Manage complex, multi-party transactions
  • Delegate payment decision making up the chain when necessary
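To make the delegation idea in the last bullet concrete, here is a minimal Python sketch. Everything in it — the `AgentWallet` class, its methods, and the spending limits — is our own hypothetical illustration, not the actual ATXP SDK or its API:

```python
# Hypothetical toy model, NOT the ATXP SDK: an agent settles micro-charges
# within its own limit and delegates larger ones up the chain.
class AgentWallet:
    """A toy wallet with a per-call spending limit and an optional parent."""

    def __init__(self, limit_usd: float, parent: "AgentWallet | None" = None):
        self.limit_usd = limit_usd
        self.parent = parent
        self.spent_usd = 0.0

    def pay(self, amount_usd: float, memo: str) -> str:
        # Escalate when the charge exceeds this agent's authority.
        if amount_usd > self.limit_usd:
            if self.parent is None:
                raise PermissionError(f"cannot approve ${amount_usd:.2f} for {memo}")
            return self.parent.pay(amount_usd, memo)
        self.spent_usd += amount_usd
        return f"paid ${amount_usd:.4f} for {memo}"


# A sub-agent settles tiny tool-call fees itself but escalates larger charges.
orchestrator = AgentWallet(limit_usd=5.00)
sub_agent = AgentWallet(limit_usd=0.01, parent=orchestrator)

sub_agent.pay(0.002, "web-search tool call")      # settled locally
sub_agent.pay(1.50, "report-generation service")  # delegated upward
```

The point of the sketch is the shape of the problem: card rails have no native notion of a sub-agent with bounded spending authority nested inside a parent, which is exactly the structure agentic workloads demand.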

But don’t take our word for it – check out their docs and start building here.

Two major tailwinds on a collision course

Many of these themes have been explored on and off at different points in the history of the internet. Today, however, we’re seeing the convergence of two massive tailwinds: 1) the rise of the agentic economy and 2) stablecoins going mainstream.

With $240 billion in circulation and regulatory clarity now arriving via the GENIUS Act in the U.S. and MiCA in the EU, stablecoins have emerged as a modern alternative to card networks. Adoption by traditional financial institutions is off to a blistering pace, lighting up new nodes on a global network quickly. As the ecosystem matures and costs of transacting have dropped, stablecoins have become the ideal medium for high-velocity, small ticket payments that don’t make sense to run over traditional card or ACH rails.

This ecosystem is developing at a breakneck pace with many existing and emerging players all vying to be the underlying medium of transacting. ATXP is designed to support multiple payment methods and blockchains out of the gate, abstracting away the messiness of figuring out the right pathway to transacting into a simple SDK.


Meet the founders: Stripe Crypto’s Skunkworks breakout

The ability to deeply understand and expertly traverse three domains — AI developer tooling, traditional payments, and stablecoin infrastructure — is exceedingly rare. The team, led by Louis Amira and David Noël-Romas, brings a combination of traditional fintech experience and unique insights into this nascent space.

Louis and David worked together at Stripe spearheading crypto and AI efforts through the volatilities of the last crypto cycle. Louis, as the company’s first external crypto hire, was tasked with evolving Stripe’s financial infrastructure and spearheaded crypto and AI ecosystem partnerships. David, as Head of Engineering for Stripe Crypto, was the brain behind many of Stripe’s core crypto infrastructure initiatives. From our first meeting, it was exceedingly clear that they had years worth of earned insights and knowledge on exactly how to architect a solution that would bridge the gaps that exist in the landscape today.


Why we backed Circuit & Chisel

We’re backing Circuit & Chisel because we believe in a future where AI agents transact autonomously on behalf of people, companies, and each other. Louis and David have seen the gaps in traditional payments up close. They know how to earn trust in regulated environments, scale developer ecosystems, and move quickly in crypto-native communities. They’re not just shipping product — they’re imagining and shaping the future of commerce and payments.

We’re thrilled to be partnering with them on this journey alongside our friends at ParaFi. If the future of the internet runs on AI agents, Circuit & Chisel will be the proverbial wallet in their pockets.

Atero

Why we invested in Atero—and why Crusoe agreed

We are thrilled to announce that our portfolio company Atero was acquired by Crusoe. We led the seed round in Atero just over a year ago, and the team has achieved remarkable things in the time since – culminating with this exciting news. We never had the chance to celebrate their emergence from stealth until now, and this is quite the way to make an entrance!

In the Spring of 2024, we found ourselves down a rabbit hole of AI workload orchestration. The general idea was: AI workloads are becoming so important, but they are run quite inefficiently. Existing orchestration software, namely Kubernetes and Slurm, was not built for these workloads at all. As a result, orchestration is an inefficient part of the AI stack, and given the cost of these workloads, a place for potentially massive cost savings with better stability, speed, and overall performance. We were so excited about this idea that we wrote two blog posts about it in March and May of 2024.

A reader of our newsletter re-introduced us to Alon Yariv after reading these two editions. We had previously met Alon when he was building his last startup, and we knew he was a special talent. His new idea, inspired by learnings from his last startup, aimed to make GPUs much more efficient by orchestrating AI workloads in a better way, with a specific focus on memory optimization. Given the rabbit hole we were down at the same time, this immediately caught our attention.

Alon would go on to build an incredible team of technologists, including Omer Landau, Ben Chess, and Eyal Salomon. In a short period of time, they did what no one else has done for memory optimization on GPUs – the output and speed of this team’s R&D efforts were nothing short of amazing.

In Crusoe, the Atero team finds a perfect home. Atero will help optimize Crusoe’s inference platform and continue to provide Crusoe with a completely vertical solution for serving AI. We are thrilled for the team on the acquisition, and excited for all they will accomplish in the future. Most importantly, we want to extend a massive thanks to Alon, Omer, and the team for letting us play a small part in this awesome journey.

And last but not least, one person in particular deserves a special shoutout: Roy Katznelson. Roy introduced us to Alon originally and then re-introduced us to Alon when he began building Atero. Thank you, Roy. This journey would not have been possible without you.


Congrats to all involved, most importantly team Atero. We cannot wait to see what comes next!

Cell and gene therapy's infrastructure build-out is starting

For more than a century, modern medicine has been defined by how effectively we manage disease – by controlling symptoms, slowing progression, and extending longevity. Cell and gene therapy (CGT) breaks that paradigm: treatments designed not to manage illness, but to eliminate its biological cause entirely.

This is a profound inversion of how healthcare has long operated. Instead of drugs designed to manage or control symptoms over time, we’re seeing one-time interventions that can correct the underlying cause of disease, sometimes permanently. These therapies replace chemistry with biology, turning our cells and genes themselves into medicine.

To bring this to life: that shift can mean an end to years of pain crises, transfusions, or cycles of chemotherapy – the quiet, relentless toll of living with a disease. It means children with sickle cell waking up without pain for the first time in years, or a cancer patient finally hearing “no evidence of disease” after years of treatment.

But for all the excitement, CGT remains brutally hard to deliver. The science is advancing faster than the infrastructure, reimbursement models, and operational systems needed to support it. If a single treatment can eliminate the need for lifelong management, every surrounding system (manufacturing, payment, delivery, regulation) must be rebuilt to keep up.

At Primary, we’ve been spending increasing time here, understanding the systems that must emerge to make this moment scalable and sustainable. Every new therapy approved by the FDA brings not just a clinical breakthrough, but also a logistical, operational, and financial challenge to make that breakthrough deliverable in the real world. The opportunity ahead lies in solving these system problems.


The Market at a Glance

Today, CGT spend is roughly $6 billion globally (~5% of specialty drug spend), led by autologous CAR-T and gene replacement therapies for hematologic cancers and rare diseases. The global pipeline, however, tells a very different story: more than 2,000 therapies are now in development, spanning oncology, rare disease, neurology, and autoimmune disorders. In the first half of 2025 alone, nearly 500 new cell therapy assets were initiated globally.

The category’s growth trajectory mirrors that of biologics in the early 2000s, but with higher complexity and steeper operational demands. Most forecasts project a $120-190 billion market by 2033, growing at a CAGR of 18-25%.

And perhaps most importantly, these therapies are starting to move earlier in the care pathway.

CAR-T and bispecifics, once reserved for relapsed or refractory patients, are now being tested, and in some cases approved, as first- or second-line options. Gene therapies that once targeted ultra-rare diseases are expanding into more common conditions like sickle cell disease and muscular dystrophy. Over the next decade, we’re going to watch as CGT shifts from the last resort to the standard of care for dozens of conditions.

Why It’s So Hard – and Where We’re Focused

The promise is extraordinary, but the current delivery model can’t meet the demand. Today, treatment is predominantly confined to academic medical centers (AMCs) because they have the physical infrastructure, trained staff, and 24/7 coverage these therapies require. We heard from multiple AMCs including MSK, Yale New Haven, and Boston Children’s that they are already capacity-constrained. As indications expand and move earlier in the pathway, an AMC-only model will only continue to bottleneck access. Scaling CGT safely will require shifting appropriate volumes into qualified community oncology and community hospital settings – with the right technology, protocols, and financial rails.

1. Technology & Data Infrastructure

It’s still incredibly hard to make and move these therapies. As a pharma leader we spoke with put it, every patient dose “travels through a chain of collection, modification, release, and reinfusion – often across continents – and a single break collapses the process.” Community sites described using little beyond their EMR and spreadsheets to track that process, with no unified system connecting apheresis, manufacturing, and infusion.

We’re interested in: platforms that unify manufacturing, logistics, and clinical workflows into a single chain-of-identity record; automated QA and exception management; interoperable data layers connecting manufacturing, clinical, and outcomes data; and AI-driven coordination tools that compress timelines and prevent errors.

2. Access & Delivery Infrastructure

Even when the product is ready, most hospitals aren’t. We heard from community oncology leaders that standing up a program can take 12+ months and a seven-figure investment for cold storage, apheresis space, trained staff, and inpatient coverage when necessary. A director of cellular therapy operations at a regional cancer center noted that even with rising demand, staffing is the primary constraint: “You can’t just hire nurses; they have to be cell-therapy trained and FACT-ready.” Without better training pipelines and remote-monitoring support, they can’t expand capacity safely. A pharma leader echoed that “patients want to stay with their community oncologist, but most clinics don’t have the infrastructure or FACT accreditation to receive or monitor these therapies safely.”

We’re interested in: accreditation-as-a-service models that simplify compliance; scalable workforce training and credentialing platforms; patient monitoring technology that allows qualified hospitals and health systems to safely identify and manage CRS and ICANS in real time; and infrastructure that facilitates partnerships between community oncology centers and nearby hospitals for inpatient support and coverage.

3. Financial & Reimbursement Infrastructure

The economics are compelling but still operationally fragile. Community oncology leaders emphasized that CGT is one of the most attractive areas for expansion – it allows practices to retain patients, capture meaningful drug margin, and position themselves as preferred sites for next-generation therapies. Yet the rails to support that growth are incomplete. Payer approvals remain slow and inconsistent, revenue recognition is delayed, and the indirect costs of staffing, accreditation, and inpatient coverage can erode profitability on a per-case basis.

We’re interested in: automated platforms that operationalize outcomes-based contracts and risk-sharing; annuity or milestone-based payment models that align cash flows with realized benefit; and pooled-risk structures that make these therapies accessible without catastrophic exposure for smaller payers or employers.

Why We’re Spending Time Here

As CGT moves from experimental to standard of care, the center of gravity is shifting. Value will no longer accrue only to the scientists discovering new molecules. It will move to the companies that make these therapies manufacturable, reimbursable, and deliverable at scale.

The next generation of category-defining healthcare companies will be built at the intersection of biology, infrastructure, and operations. If you’re working at that intersection and helping biology meet system design, we’d love to connect.

Reach out at sam@primary.vc or hannah@primary.vc.

AI agents need systems of work, not record

Context

A year ago, Chris Paik controversially wrote about The End of Software, arguing that because it’s easier than ever to build software, there would be a Cambrian explosion of new tools.

We’re not quite there yet, but any early-stage investor will tell you that it sure feels that way. Categories where a couple of years ago you’d see 2-3 players now easily have 10-15 new entrants. Play the tape forward from here, and markets shrink, prices compress, and spaces commoditize.

So the question remains – is there enduring venture scale value left in the application layer?

Where Value Was Created in the SaaS Era

As investors, we all know how hard it is to go up against an incumbent – especially if it requires a “rip and replace” of existing systems. No matter how crappy the customer service or how dated the UI/UX looks, inertia is a powerful force. Better products lose every day.

While provocative, Chris’ argument misses the mark on how challenging it is (and has always been) to build true enterprise-grade software. It requires:

  • Performance at scale - the ability to handle truly large workloads across a large user base with little to no downtime
  • Compliance & security - robust access management, detailed audit trails, adherence to data security standards / best practices
  • Change management & ecosystem lock-in - integrations into other business systems, data lock-in (especially true for systems of record), and an ecosystem of trained (oftentimes officially certified) employees, consultants, and integrators that raise switching costs
  • Customization & configurability - long implementation cycles that tune a baseline product to the exact needs of the customer

Even through the 2010s and the “Era of SaaS,” many traditional enterprise heavyweights remained largely “undisrupted” despite many attempts by challengers who built 5x or even 10x better products. Looking back at the winners that emerged in this era, there are a couple of core themes:



These tailwinds have created trillions of dollars in enterprise value, but they are largely mature. As a result, in the last couple of years (pre-AI), new entrants were increasingly forced to attack narrower customer segments or smaller niches to compete.

The AI Era Changes the Paradigm

AI changes this. The stage is now set for a new generation of truly enterprise-grade, AI-native software winners to emerge. These companies have the opportunity to completely reimagine workflows and categories from first principles and finally unseat the old guard.

Here’s how:

  • GenAI erodes traditional software moats: GenAI loosens many of the traditional characteristics of SaaS stickiness. With AI, it’s easier to extract critical data, quickly customize settings, and integrate into an existing ecosystem of tools. Spaces that traditionally saw 6-12 month implementation periods requiring a massive IT lift to get a new system live will see those timelines shrink.
  • “Systems of work” will eat “systems of record”: If the atomic unit of the SaaS era was the record, the atomic unit of the AI era will be the work. AI agents have the ability to handle messy unstructured data and navigate complex decision making. Capturing that “process” of how and why a decision is made contains 10x more information than what decision was made. In this world, “end-to-end process” data becomes more valuable than “final decision” data. Owning the rails of the ‘sausage making’ becomes more valuable than the completed ‘sausage’.
  • AI allows for building 100x better outcomes: many SaaS 2.0 challengers were prettier, faster, and cheaper, but at the core, fundamentally the same. Incremental ROI led to indifference. New enterprise AI solutions can unlock jaw dropping experiences and value that is orders of magnitude stronger than their predecessors.
  • “Selling work” taps into a net new budget pool: No longer tethered to “seats”, this category of enterprise AI companies can directly replace human capital and tap into budget earmarked for people – whether that’s internal headcount or outsourced services. It also opens up downmarket segments by expanding attainable ACVs, reaching segments that were previously “not worth it” for system-of-record companies to sell into given the cost to serve and lower contract sizes.



Winners in AI-native applications won’t replace just one budget line; they’ll replace:

  • Spend on existing software tools (not sufficient on its own)
  • Outsourced spend (e.g. BPOs, consultants, third-party providers)
  • Internal white-collar headcount (the holy grail)


Source: “SaaS Isn’t Dead (Yet) and AI Could Make it Bigger”

Why Startups Win

Objection 1: So why is this not a giant race to the bottom?

The answer lies in the capture of reasoning logic: how decisions are made, not just what decisions are made.

The last generation of solutions focused on outcomes (e.g. issue refund or not). Now, with more and more work being taken on by agents, the process by which decisions are made is infinitely more valuable (e.g. see request >> analyze customer inquiry data >> cross check against internal policy >> apply human judgment >> make decision).

Put plainly, capturing how and why decisions are made, natively in the workflow, is the key. End-to-end feedback loops help agentic products continue to make better decisions in the context of the customer they serve. This customer-level fine-tuning of preferences and decision making, when done right, creates stickiness and lock-in.

This is really just another form of data capture. Historically, logs and actions were captured for audit and explainability purposes, but now how a decision is made is becoming just as important as what decision is made. This means that many categories are a land grab to capture reasoning logic and lock up customers quickly.
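As a toy illustration of what capturing the “how” could look like in practice, here is a hedged sketch in Python. The refund scenario, field names, and reasoning steps are hypothetical, not any particular product’s schema:

```python
# Toy sketch of logging how a decision is made, not just what was decided.
# The refund scenario, field names, and steps are all hypothetical.
from datetime import datetime, timezone

def decide_refund(request, policy, trace):
    """Make a toy refund decision, appending each reasoning step to `trace`."""
    def log(step, detail):
        trace.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "step": step,
            "detail": detail,
        })

    log("see_request", {"order_id": request["order_id"]})
    within_window = request["days_since_purchase"] <= policy["refund_window_days"]
    log("cross_check_policy", {"within_window": within_window})
    decision = "refund" if within_window else "deny"
    log("decide", {"decision": decision})
    return decision

trace = []
decision = decide_refund(
    {"order_id": "A123", "days_since_purchase": 10},
    {"refund_window_days": 30},
    trace,
)
# The outcome alone is a single label; the trace preserves the reasoning path.
print(decision)                          # refund
print([step["step"] for step in trace])  # ['see_request', 'cross_check_policy', 'decide']
```

The “final decision” is one field; the trace is the per-customer reasoning data that can be fed back to tune future decisions, which is where the stickiness described above comes from.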

Objection 2: But don’t the incumbents have an inherent distribution advantage? Aren’t they just going to win?

Another common criticism of AI upstarts: why won’t Figma just be the AI Figma?

In some categories this will be true: scaled winners of SaaS 2.0 that are nimble and capable enough to deliver an AI-native experience have an upper hand.

This is not true across all categories. AI-native startups can win in a few scenarios:

  • Going upstream into the “system of work”: many legacy “system of record” solutions enter at the end of the work being done. These solutions capture the result but not the process to get there.
  • Building guardrails for non-deterministic processes: LLMs are inherently non-deterministic. Harnessing their power while also developing the appropriate guardrails for accuracy and consistency is a new skill set that many traditional modes of software development are not well equipped to handle.
  • The good ol’ innovator’s dilemma: traditional software monetization is tethered to seats and the expansion of those seats. Agentic efficiency oftentimes runs counter to the core cash cow, creating organizational conflict and difficulty prioritizing.

There are parallels to be seen in the shift of software from on-prem to cloud (h/t to our friends at Sequoia).

Twenty years ago the on-prem software companies scoffed at the idea of SaaS. “What’s the big deal? We can run our own servers and deliver this stuff over the internet too!” Sure, conceptually it was simple. But what followed was a wholesale reinvention of the business. EPD went from waterfalls and PRDs to agile development and AB testing. GTM went from top-down enterprise sales and steak dinners to bottoms-up PLG and product analytics. Business models went from high ASPs and maintenance streams to high NDRs and usage-based pricing. Very few on-prem companies made the transition.

The ways “AI-Native” applications are designed, implemented, and tested will all have to change. For many incumbents, the cultural and structural leaps will be insurmountable.

Four ways the pie grows

There are four buckets of opportunity that we see in the application layer:

  1. Democratizing access downmarket: By selling work vs. tooling, AI agents make services and functions that previously were inaccessible to smaller organizations affordable and easy to access for the mid-market and SMBs. For example, phone answering/customer inquiry, scheduling, customer support, inventory management, etc.
  2. Vertical-specific use cases: Industries that require deep specialized knowledge have historically been unpenetrated by software solutions and have instead relied on expensive, highly specialized labor. Previously, selling SaaS tools into these segments was considered too niche and the markets too small. Agentic workflows that replace high-cost white-collar work and marry that work with deep vertical and functional knowledge can make niche markets venture-scale.
  3. Productizing the “messy middle”: Workstreams that previously lived across systems and modalities (emails, spreadsheets, voice, etc) can now be codified into agentic workflows that the last generation of RPA tooling was not flexible enough to unlock. Because agents can tie together structured and unstructured data sources they’re able to finally productize the “work” that historically was manual.
  4. Going head-on with incumbents at enterprise scale (bundled): Each functional department is being reimagined with an AI-native solution. While we’ve seen strong early market traction in customer-facing functions (e.g. sales, marketing, support), there remains a lot of low-hanging fruit in traditional “cost centers” that historically did not receive the budget, IT resources, or attention needed to 10x those functions.

Why “Enterprise”?

This window of opportunity is largely specific to enterprise-grade applications because enterprises still need functional, best-of-breed solutions. In the enterprise, enduring value will come from the intersection of:

  • Lowering of traditional enterprise software moats
  • 10x improvement in value and ROI calculations
  • Relatively high implementation costs (with the “FDE” quickly becoming the hottest job category)
  • Continued need for enterprise grade permissioning / access management / compliance features

While there are many SMB and mid-market pain points that are very much worth tackling, the value creation drivers in those segments differ significantly from enterprise. More on that soon.


What are we looking for?

  • Low NPS incumbents: Either a) software markets where there is a large, universally-hated incumbent and the pain of rip & replace historically fended off new entrants OR b) services markets where spend (and varying quality) largely outpaces value delivered
  • Enterprise-grade core product: while this may require longer upfront build times, having a truly enterprise-grade product will be a necessary criterion for becoming a contender for market winner. While we will not rule out companies that start in the mid-market to get a product to market, we need to see an end vision (and the product chops) to eventually serve the enterprise
  • Capture of end-to-end workflows: Most importantly, the product must be set up to capture and learn from full cycle feedback, strengthening the “agentic reasoning” architecture over time. Solutions that do not capture “outcomes” will be non-starters.
  • Teams with early enterprise credibility: the ability to get an enterprise customer to take a leap of faith on an early market entrant and product is nearly a moat in itself (e.g. Bret Taylor at Sierra, Amanda Kahlow at 1mind). We’re looking for teams that occupy the intersection of being 1) AI-native and 2) Day 1 credible with enterprise buyers. Given the long lead and deployment times, unfair advantages are needed to get the first points on the board.

We are at a juncture where every piece of software is being reinvented for the agentic era. The journey will not be easy: build times will be long, and revenue growth will be lumpy. But the long-term path to value creation is clear. Here are specific areas where we’re actively looking to invest:

  • “Offices of G&A”: Every horizontal middle/back office function (finance, IT, HR/payroll, benefits, legal, audit/risk, compliance, investor relations, etc) can be totally reinvented. Is there room for an AI-Native Workiva / Auditboard or next-gen Jira?
  • Future of work 2.0: Collaboration and productivity software that is designed for a hybrid human + AI agent workforce. As the balance shifts over time, new tooling will be needed for coordination of activities whether that is agent <> agent or agent <> human, particularly when it comes to cross-functional work.
  • Vertical “systems of work”: Are there more industries with a defined universe of buyers that have been retrofitting horizontal solutions to date and are overdue for their own “Veeva”? We’re actively looking for solutions across insurance, energy, government, and more that are creating the “systems of work” that add intelligence to core systems of record.

If you’re building the next-generation of enterprise AI, I’d love to hear from you.

Plural

AI for DevOps at Scale

Kubernetes is the 10,000-pound gorilla of infrastructure. It is ubiquitous—over 60% of enterprises today manage 10 or more Kubernetes clusters—and the underlying backbone of so much of how enterprises run their applications. And yet, despite consistent efforts to make Kubernetes manageable, in 2025 it remains an unsolved problem. It is an esoteric system that has become a mess to operate.

This is the case because the enterprise Kubernetes environment is more complex than ever: sprawling with massive scale, sometimes hundreds of clusters that require upgrades, dependency management, and a control plane to manage how everything is tied together. Enterprises often have dozens of highly skilled engineers just tasked with keeping Kubernetes operable.

Moreover, the hyperscaler clouds have never been properly incentivized to create the optimal control plane and experience around Kubernetes, so their products are subpar. They are by definition tied to their own cloud, with a lack of the product vision required to build something that properly solves all the intricate issues that pop up when managing a Kubernetes fleet.

Enter Plural. CEO Sam Weaver and CTO Michael Guarino have been working on removing the headache from Kubernetes upgrades and dependency management for years. They have the intense product understanding, Kubernetes expertise, and now, AI to supercharge the product and 10x the experience.

Plural delivers AI automation for DevOps at scale. Plural reads a scaled Kubernetes environment, gives prescriptive insights and recommendations around when and how to manage upgrades, and sits alongside a DevOps professional as they address issues.

Today we’re celebrating Plural’s $6 million seed round, which we led with participation from Company Ventures and Capital One Ventures, bringing total funding to $12 million. Capital One Ventures has an incredible track record of backing industry-defining companies while also being an early customer of some of the world’s most iconic infrastructure businesses, like Snowflake.

Plural has from the beginning been focused on enterprise functionality and scale. The company’s first customers are some of the biggest financial services and cybersecurity businesses in the world, which entrust a startup with the all-important job of managing a high-stakes Kubernetes environment.

Some of our favorite infrastructure investors often say that it takes time for infra solutions to take hold—because they are essential building blocks of a company’s foundation, the trust and time required to adopt is high. However, when an infra solution clicks, it is magic. You become an essential part of your customers’ operations and success. You become indispensable.

We are seeing this happen with Plural, and it’s fun to watch. Now is the time to truly build DevOps automation at scale—the pain is severe and the tech is ready. Plural is the solution to this longstanding and frustratingly unshakable problem.

Bedrock Ocean

The future of ocean intelligence

One of the next decade’s most important infrastructure markets isn’t in the cloud or on land, but on the ocean floor.

Today, we’re excited to share that Bedrock Ocean has raised $25 million to accelerate its mission of mapping Earth’s final frontier: the ocean floor. We have been investors in Bedrock since co-leading its seed round, and we are proud to participate again alongside a stellar group of new partners: co-leads Costanoa Ventures, Harmony Partners, and Katapult, joined by Autopilot and Mana Ventures, plus longtime believers like our friends and existing co-investors at Northzone, R7, Eniac, and Quiet Capital.

This funding comes on the heels of a breakout year for Bedrock, with commercial traction across multiple market sectors in the U.S. and field deployments that customers describe as “the best data we’ve ever seen”—often in conditions where traditional survey vessels would refuse to operate.

A new foundation beneath the sea

Bedrock is building the most comprehensive, scalable, and environmentally responsible ocean floor dataset ever assembled. Its vertically integrated approach combines custom-built autonomous underwater vehicles (AUVs) with a cloud-native data platform called Mosaic™, enabling survey turnaround in a matter of weeks instead of a year.

To understand how transformative this is, it helps to look at the status quo. Traditional subsea surveys require hundred-million-dollar ships, cost approximately $100,000 per day to operate, and carry five-year-plus backlogs. They burn tens of tons of CO₂ daily and blast sonar from the surface, causing ecological damage. Bedrock, by contrast, can deploy battery-powered AUVs in hours, complete surveys in days, and upload data directly to the cloud—at a fraction of the cost and with minimal environmental impact.

In short: it makes deep sea data fast, cheap, and green. And as with anything that gets faster and cheaper, this is poised to open up massive new market opportunities on the ocean floor.

Why now? The infrastructure moment has arrived

The subsea opportunity is no longer theoretical. It’s an $8 billion market just in survey spend—and that’s before factoring in new categories like subsea data centers, carbon capture, and national security.

NATO is forming task forces around subsea infrastructure security. There are also use cases around energy, telecom, environmental monitoring, deep sea mining, defense, and more.

As one example, the U.S. has over 450 gigawatts of offshore wind in its pipeline through 2050. Offshore wind projects alone require 20+ surveys per development cycle, across bidding, planning, construction, and operations. Several of the largest wind developers in the world are already engaging Bedrock for next-generation, marine-mammal-safe survey capabilities.

A team with the right pedigree and persistence

We first met founder/CEO Charlie Chiau in 2020. What stood out was not just his technical depth—Charlie started building subsea tech nearly 20 years ago at DeepFlight and then worked at SpaceX—but his obsession with building for scale.

Bedrock’s team of thirty-five includes top talent from Google, SpaceX, NASA, Cruise, and L3Harris, and it has already achieved in four years what most defense contractors need decades (and hundreds of millions of dollars) to do:

  • A full-stack AUV fleet, now in Gen 2
  • A cloud platform that makes ocean data accessible and interoperable
  • Hardware built in-house in four to six weeks
  • Compliance with IHO’s most stringent global accuracy standards

Even prior to this round, the team had already reached unit economics that would turn any hardware CFO’s head. Now, with additional capital, Bedrock will:

  • Expand data acquisition across U.S. and global coastlines
  • Scale manufacturing capacity to meet surging survey demand
  • Advance full autonomy in the field—from deployment to delivery
  • Enable over-the-horizon fleet operations with minimal human input

Data is the real wedge

Perhaps most exciting is Bedrock’s multi-client data model. Unlike legacy survey firms that deliver hard drives once and move on, Bedrock can make the same dataset available to multiple buyers—turning each mission into a flywheel of recurring revenue. It’s not just a better way to survey; it’s a fundamentally better business.

Mosaic is poised to become the foundational ocean intelligence platform for developers, governments, researchers, and regulators alike. The ability to license high-resolution ocean floor data—collected sustainably, processed in real time, and accessed in the cloud—will unlock use cases we can’t yet predict, just as satellite imagery did on land.

Where we go from here

What started as a next-generation survey company is now becoming a data infrastructure platform for the ocean. That’s why we invested.

We believe the Bedrock team is building a new category at the intersection of robotics, data, climate, and infrastructure—one that will underpin critical decisions across energy, defense, and the environment for decades to come.

We’re proud to back Charlie, Brandon, and the entire Bedrock team as it builds a living digital twin of the ocean—an enduring foundation of intelligence, resilience, and prosperity for generations to come.

ExploreTech

Rebuilding mining’s risk model with AI

The next decade will be defined by our ability to secure the critical minerals that power the modern economy. Electrification, national security, supply chain resilience, and the compute to power the AI revolution all hinge on a reliable, domestic pipeline of metals like copper, nickel, and rare earth elements. And yet, mineral exploration—the first step in that supply chain—is still painfully slow, inefficient, and risk-laden.

Today, discovering a new mine can take over a decade and hundreds of millions of dollars. Even then, the odds of success are brutally low: fewer than 1 in 1,000 greenfield exploration projects ever become productive mines. Despite the enormous capital flowing into energy transition, the mineral discovery process has barely changed in 50 years.

That’s why we invested in ExploreTech.

ExploreTech is building the first exploration platform designed for speed, precision, and repeatability—using AI not to find “treasure maps,” but to fundamentally change how the subsurface is understood and de-risked.

From Venture to Veins: A Shared Risk Curve

Mining, in many ways, mirrors the venture capital lifecycle. Exploration-stage projects are like pre-seed startups—high risk, uncertain potential, and heavily reliant on talented founders and conviction-based capital. As drill data accumulates and the resource potential is validated, these projects raise more capital and trade at higher valuations, eventually graduating into development-stage assets (akin to growth rounds) and then into producing mines (the IPO equivalent).

But unlike startups, where early-stage investing has evolved into a mature asset class, early-stage mining investment remains a cottage industry. Exploration companies are often microcaps listed on the TSXV, have been largely capital-destructive over the last decade, and are financed in piecemeal rounds with minimal technological edge. There is no Y Combinator of mining—and no Sequoia.

What exists on the other end of the mining lifecycle, however, shows what’s possible when risk is well understood. Royalty and streaming companies like Franco-Nevada and Wheaton Precious Metals have built some of the most capital-efficient businesses in the world. Think of these as the IPO investors of mining. Franco-Nevada, with a market cap of over $32 billion, has just over 50 employees. Wheaton, valued at around $38 billion, operates with a similarly lean team. These companies don’t operate mines—they finance them. In exchange for upfront capital to help construct or expand a mine, they receive royalties (a percentage of the mine’s revenue) or streams (the right to buy a portion of the mined metal at a fixed, discounted price). This model gives them commodity arbitrage upside without capex intensity or operational exposure. They are also providing capital to projects that are over 10 years old with already discovered and quantified deposits, which means their underwriting is based on a very well understood portion of the geologic subsurface and is far less risky than an early stage exploration project. Their success underscores a powerful idea: when the subsurface is well-understood, the capital markets work. The challenge is bringing that kind of clarity to the start of the process.

The challenge—and opportunity—is to enable the same underwriting rigor at exploration, the front end of the risk curve.

AI as the Risk Engine

This is where ExploreTech comes in.

They offer a radical improvement in how early-stage fieldwork is conducted and interpreted. The team’s AI models—trained on geophysics, geology, and historical outcomes—guide field teams toward the most statistically promising places to drill, based on real-time data collection and a proprietary “geologic Monte Carlo model.”

This doesn’t produce a magic answer. But it does reduce the number of holes needed, the capital spent, and the time wasted to discover and quantify a deposit. Instead of drilling fifty holes over ten years to find out if a resource exists, ExploreTech aims to get there in five holes and two years.

The results so far are compelling, to say the least. ExploreTech has made seven drill recommendations to date—all seven have hit mineralization at the predicted location and depth. In one recent case, it predicted copper would be found 600–650 feet below the surface and 1,200 meters away from any previous drilling. Giant Mining, their partner, found it exactly where the model said it would be.

This kind of accuracy can change the fundamental risk-return profile of early-stage mining. It’s the equivalent of knowing, with high probability, which startups will graduate from seed to Series A—and which should never have raised at all.


Why Now

The timing for a company like ExploreTech couldn’t be better:

  • Copper demand is expected to double by 2045 if net-zero targets are to be met (IEA).
  • National security concerns around rare earths and battery metals are pushing governments to re-shore supply chains.
  • Billions of public dollars are being deployed to catalyze domestic mineral production.
  • GPUs and cloud compute have made subsurface modeling at scale newly possible.

In other words: we need to discover more minerals, faster—and we finally have the tools to do so.


Why We Backed ExploreTech

We invested in ExploreTech not just because it has promising technology and a great team (though it does—Stanford Mineral-X alums with field experience at Rio Tinto, Freeport, and Glencore). We invested because we believe in their vision to remake exploration as a rigorous, data-driven discipline.

If successful, the ExploreTech team won’t just improve discovery timelines—they’ll make early-stage mining investable in a way it hasn’t been before. Their approach could open the door for a new class of institutional capital to back the assets that unlock the energy transition.

We’re proud to be early partners to Tyler, Alex, and the ExploreTech team. This is about more than finding the next mine. It’s about rebuilding the discovery engine itself.

Opportunities in supply chain tech—a challenge that could define post-pandemic innovation

This is the first installment of our Supply Chain Newsletter—stay up to date with new issues by subscribing here.

The team here at Primary strongly believes that the global supply chain challenges we’ve endured over the last three years present an incredible opportunity that will support multiple category-defining businesses—each of the opportunity sets outlined below represents tens of billions of dollars of market potential.

Here, we’ve outlined the key areas we’re thinking about, and that we’ll explore in more depth for subscribers of this newsletter. Expect further thinking and perspectives including case studies, market maps, interviews with thought leaders, and more. Let us know if you have ideas, suggestions, or questions!

We have seen no shortage of founders pursuing ambitious solutions to these problems and we are eager to meet more. Please send any founders our way, and remember, there is no such thing as too early for us!

Why supply chain tech is primed to define meaningful innovation in the coming years

If the Global Financial Crisis helped to kick off a decade-long boom in fintech, we think of the pandemic and its immediate aftermath as the Global Supply Chain Crisis. As we all saw, the shock that Covid-19 brought to global supply chains was truly unprecedented, shutting down factories and constraining supply, sending the costs of global shipping skyrocketing, and in many markets driving unprecedented new levels of demand. Maersk, the world's largest shipping company, tripled gross margins and 5x’d EBITDA in a single year. On the other hand, large companies went under, employees got laid off, and supply chain leaders had no answers. And then Russia invaded Ukraine in February of 2022, sending shock waves through global supply chains once again.

These multiple body blows showed how brittle our supply chains had become. Just-in-time was a brilliant idea until it wasn’t, right? The reality is that with global supply chains stretched as tightly as they’ve been by a relentless drive for efficiency, dangerous shocks appear likely to be as much a part of modern global commerce as hurricanes are to Florida and wildfires to the American West. Companies of all sizes need to optimize their risk exposure and responses to these inevitable shocks.

To build more durable supply chains, companies first need to diversify their supplier and transportation provider networks

This may seem simple and obvious, but doing so at scale comes with transaction costs that have historically made supply chain diversification infeasible. There’s good reason these supply chains were overly optimized previously, after all.

Imagine you are the Chief Supply Chain Officer of Walmart, with an already enormous network of suppliers and logistics companies to make and move your products. Materially increasing the diversification of your network would likely mean working with a larger number of lower quality partners and SMBs, who are less sophisticated and don’t meet all of your procurement requirements. Working with companies like these will result in manual work, lower quality service, and an overall increase in cost.

Squaring this circle – finding ways to make supply chains more diversified and durable without adding mountains of operational complexity and additional risk – presents a massive new challenge. We believe it’s a challenge that only technology can answer. This push for diversification and durability is the central opportunity for supply chain tech right now, and we are very excited about it, because we believe it’s the only path forward.

Challenges

After seeing what we saw in the pandemic, we’re betting on a world where supply chain diversification is a top priority for any organization that moves goods at scale. There are lots of reasons why it’s difficult to increase the breadth of a supply chain network, but we bucket them into four problem statements:

  1. Operational complexity - As the number of vendors in a company’s supply chain increases (and the average size and sophistication of those vendors likely goes down), the number of manual tasks and opportunities for errors will increase, too.
  2. Interoperability - As companies adopt more software and start working with more vendors in their supply chain, they will need to build more integrations within their stack and with their vendors.
  3. Vendor discovery - The push for vendor diversity and redundancy creates a discovery challenge / opportunity. Actually finding and understanding the quality of net new supply chain partners to work with is extremely opaque today, relying heavily on word of mouth referrals.
  4. Financial infrastructure - Transactions between supply chain partners are still largely manual and prone to errors, fraud (truckers handling piles of cash—what could go wrong?), and other risks. As supply chains become larger and more complex, these risks will only grow. Enough is enough.

Solution Sets

The above are intentionally very broad ways of categorizing today’s supply chain challenges.  We are using them to refine the lens we look through to find the next generation of category defining businesses in this space. We believe that each challenge area calls for solution sets that include multiple potential venture scale outcomes.

  1. Operational complexity: An opportunity for more SaaS and AI. A large number of tasks related to procurement, sales, operations, accounting, and data science are still largely manual in global supply chain organizations. This problem grows as companies expand their supply chain networks. At the same time, ERP and Transportation Management System (TMS) utilization has been widespread for decades, but only recently has it started shifting to the cloud. We think that these systems have been great for keeping a digital record of supply chain data, but have only skimmed the surface of what can be done to automate workflows related to supply chain management. We have seen a number of companies enter the market that sit on top of or plug into the TMS/ERP to help companies automate tasks associated with supply chain optimization. Within our own portfolio, there’s an exciting new company we’re yet to announce that gives enterprise data science teams a decision intelligence platform, allowing organizations to improve the quality and speed of everyday supply chain decisions with AI. Parade is another product we admire that helps logistics organizations automate workflows related to the management of transportation vendors in their network.
  2. Interoperability: The need for better data infrastructure. Network expansion, cloud product adoption and IoT proliferation are all driving the need for a data infrastructure overhaul in the supply chain tech ecosystem. As enterprises expand their supplier and transportation vendor networks, they will need to build more and more integrations to operate efficiently with their new partners. Today this task is handled by service providers who build costly peer-to-peer EDI integrations. We see companies like Orderful and Chain.io improving the speed and cost of building these connections, ultimately making it easier for companies to expand their supply chain networks. As the section above highlighted, there will also be a wave of new, cloud-native products that come to market that will need to leverage data from legacy ERPs. Today, these new products struggle to build these integrations, which creates a massive obstacle in their go-to-market timelines. Universal APIs for the WMS, TMS and more will provide the interoperability they need to bring value to their customers faster.
  3. Vendor discovery: Building the databases to stand up B2B marketplaces. Grounded in our belief that large enterprises will continue to expand their supply chain networks, we also believe they will have to find new ways to discover and assess new vendors. Marketplaces have gained very little ground in this space, and there are only a few reliable databases of suppliers that buyers can trust. We’ve seen products offering quality control/assurance software to help enterprises manage their supply chain vendors. Some of these products also plan to leverage their quality control data and customer traction to build out the supplier/vendor databases needed to stand up a marketplace (we don’t believe you can launch a marketplace from scratch). One great example of this is FactoredQuality, which connects brands to thousands of quality control inspectors and helps them run, monitor and analyze all of their supplier testing and compliance checks. In doing so, FactoredQuality also has a vast amount of data about the quality and performance of their customers’ suppliers and can leverage that to stand up a supplier marketplace for brands looking to outsource manufacturing. Whether or not this is the winning chess move, we don’t know, but we have strong conviction that someone will crack this code and one or more robust vendor marketplaces will emerge.
  4. Financial infrastructure: Payments, lending, and insurance. Historically, the way money moves through supply chains – from shippers to forwarders to carriers and back again, with stop-offs at ports, chassis pools and rail yards – is as antiquated and ripe for fraud and errors as it gets. There is an already rolling wave of technology making freight payments easier and safer for customers and vendors alike. They also frequently offer solutions that make supply chain operations more efficient. Freight invoice auditing – the act of reviewing all invoices, bills of lading, and receipts associated with a shipment – has historically been an incredibly error-prone manual task undertaken by large accounts payable teams or outsourced firms. Today, companies like Loop are automating the freight invoice audit process and connecting it to a payments platform to streamline the entire AP process. There are also companies that have succeeded by building payments products that actually accelerate the movement of physical assets through the supply chain by eliminating costly and wildly inefficient waiting for payments to clear at various stops in the chain. Relay Payments, for example, automates lumper payments – fees for the labor required to load and unload cargo – in the warehouse, allowing trucks to save time coming in and out of delivery sites while simultaneously eliminating an important source of fraud. Lastly, we’ve become increasingly interested in supply chain insurance. Commercial auto insurance has become a loss leader for most insurance carriers, yet all trucking companies are required to have it. We’ve seen multiple compelling approaches to leveraging IoT and telematics data to improve insurance underwriting processes. LogRock is building a tech-enabled insurance company that offers carriers free compliance software in exchange for their telematics data. 
Meanwhile, Terminal is building a universal telematics API – something of a Plaid for telematics – and selling telematics data to commercial auto insurers, with the promise of improving underwriting and bending the curve on loss rates.

Why I went from being a founder to backing them

"Welcome to the dark side." I’ve heard this refrain countless times since I became a VC. I get it. There’s something suspect about VCs. We get this position of power, but the entrepreneurs we back are more resilient, ambitious, intelligent, charismatic, and competent than we are.

When I told my father, who was a CEO and sat across the table from VCs for decades, that I was becoming a VC, he said, "Why would you want to do that?" I’d like to explain.

Before diving in, it's worth noting that my experiences with VCs as a founder were overwhelmingly positive. In 2009, I cofounded my first company (HowAboutWe, acquired in 2014) when the phrase "founder friendly" was trending and VCs were repositioning themselves as service providers. In 2018, when my second company (SelfMade, still trucking) was in the midst of an emotionally taxing crisis, my investors never lost sight of the human element.

Don’t get me wrong. I’ve had VCs obnoxiously ghost me, condescendingly reject me, and arrogantly opine on complex problems. But that’s been the exception. As an angel investor and advisor in many scaling companies, I’ve seen the same thing. Bad behavior is rare and true partnership is normal.

So I did not become a VC to do it better. I became a VC because I love being there for founders and I know how much they benefit from working closely with someone who themselves has been there.


Being there is about being present, responding in the moment to what matters most, connecting on a human level, speaking your truth, listening deeply, and caring, fundamentally, about the entrepreneur and their journey.

If you’ve been there you understand what it takes to build a company. It’s insane.

Prior to being a founder, the five most transformative experiences of my life were: 1) high school sports, 2) theatre, 3) teaching, 4) silent meditation retreats, and 5) political organizing. These experiences forced me to rapidly learn new skills, take huge risks, be vulnerable, and transcend perceived limitations. They prepared me to create a company.

  1. Sports: grueling competition, pushing yourself to the limits, entering flow, and bonding with a group of people on a shared mission
  2. Theatre: creative risk taking to create something people love
  3. Teaching: leading students to do more than they thought they were capable of by convincing them of the merits of the task at hand and providing the scaffolding to help them succeed
  4. Silent meditation: being at ease and experiencing joy amidst all that is arising
  5. Political organizing: campaigning for a cause greater than yourself

Company creation is all of this. And then some. You’re at it for many many years, all while under immense pressure to perform and deliver for your team, your investors, and your customers. During ten years and two startups, I learned how to navigate these intensities while staying healthy, true to myself, and obsessively focused on building.

To share this and be there for founders is a joy, and it’s why I became a VC.

What does this look like in practice? It starts with a shared understanding of the basics: cash balance, burn, state of the product and how customers feel, team morale, current blockers, etc. Equally important are strategy questions about the product roadmap, the market, the competition, and fundraising. Eventually, if you’re truly being there, you encounter the complex psychological dynamics founders face such as how to show up, manage anxiety, overcome self-sabotage, and constantly up their game. Delusions and doubts need to be discarded. Confidence and clarity need to win the day. To be supportive at this level takes time and trust. It’s earned through late night phone calls and early morning walks to strategize around hard conversations, company-wide communications, and do-or-die questions for the business.

When a founder takes off and starts operating at a new level, it’s exceptionally rewarding. Being there for founders is a joy when you get it right. It’s why I became a VC and what energizes me in each interaction I’m fortunate enough to have with anyone endeavoring to build a company.

Caspian

Turning customs complexity Into cash

At Primary, we’re always looking for overlooked workflows that quietly control billions in value—and companies poised to rewire them.

That’s why we’re excited to announce that Primary has led the $5.4 million seed round for Caspian, a company bridging two worlds we care deeply about: tools to help finance teams make better decisions and optimization of global operations. Caspian is modernizing the customs refund process—starting with duty drawback—and in doing so, building the foundation for a much larger platform.

Led by Justin Sherlock and Matt Ebeweber, the Caspian team brings deep customs experience and rare product execution. They helped build Flexport’s most profitable internal tools, and now they’re applying that playbook to an even broader opportunity: unlocking cash, data, and control for companies buried in customs complexity.

The missing layer in finance and logistics

Every year, U.S. importers pay over $110 billion in duties. If those goods are exported, they’re often eligible for refunds. But the system to reclaim those funds—called duty drawback—is wildly outdated. Legacy providers are expensive, opaque, and slow. The process is manual, broker-dependent, and time-consuming.

As a result, more than $10 billion goes unclaimed each year. Caspian is fixing this.

Its software connects to ERPs, shipping systems, and customs data sources to:

  • Extract and match documentation
  • File compliant claims with U.S. Customs
  • Track and nudge for future refunds

The result is a seamless flow of savings—with auditable, real-time insight into landed costs, refund status, and tariff exposure.

Why we led the round

What makes Caspian compelling isn’t just the refund—it’s the data. Drawback is the wedge. From there, Caspian can expand into classification optimization, filing analytics, and eventually a comprehensive trade control layer. Over time, we believe this platform can become the Moody’s of trade compliance, using AI to structure and act on customs data across borders.

We led this round because Caspian is solving real pain with real urgency. Early customers like Pakt Bags, UltiMaker, and Sunday Golf are recovering 6-figure refunds in a matter of weeks. The founding team has lived the problem and built for it before. And the macro landscape—from CBP enforcement to global tariffs—is making this issue more urgent every day.

Caspian gives both finance and ops teams a way to stop leaving money on the table—and start turning customs data into an asset.

The Token Exponential

Machines of Loving Grace by Anthropic CEO Dario Amodei defines the attributes of “powerful AI.” Then he explores what such an AI could herald for biology, neuroscience, politics, the economy and human meaning. As the name suggests, he imagines a glorious outcome for the AI Revolution.

We’re now accustomed to hints of powerful AI: a Nobel Prize for protein discovery, 2M+ new materials discovered, pervasive labor automation (most apparent in software engineering), and the latest reasoning models. For Dario, powerful AI will be a step-function beyond this: AI that is smarter than Nobel prize winners in all fields, capable of spending years on a single task, accessible through various modalities, able to collaborate effortlessly with other AIs, and able to take action in the world. While avoiding techno-utopian tropes, he explores concretely how this AI could extend life and improve mental and physical health while solving our most vexing psycho-political-economic problems. The essay reflects his optimism about the potential of intelligence and AI to become very, very smart. He envisions powerful AI as a “country of geniuses in a datacenter.”

I took the essay in during a vacation, when the mind is a bit free to wander, and I started imagining what said datacenters would need to be like. The compute required to reach this land of geniuses felt unimaginable or perhaps unreachable, barring fantastical breakthroughs.  

With this macro backdrop, we looked to the micro: the token. Satya Nadella said during Microsoft’s latest earnings call: “We processed over 100 trillion tokens this quarter, up 5X year-over-year – including a record 50 trillion tokens last month alone.” If Microsoft continues at this pace for five years, it will process nearly 1.25 quintillion tokens in 2030.

If we assume Microsoft is 50% of the market (they host OpenAI’s compute, after all) and growth continues at a 5x clip, we’ll process 2.5 quintillion tokens in 2030. That’s 2.5 trillion million. For the visual learners in the group:


As we were writing this newsletter and getting excited about token exponentials, Google I/O happened. Sundar Pichai had a mic-drop moment when a single slide came to the screen:

Jaw-dropping growth and scale: 480T tokens/month, up 49x! While we knew we were being conservative in our estimates and aggressive in assuming Microsoft was 50% of the market, we had grossly underestimated Google’s rollout of AI across its product suite.

So, let’s make an update. In March, Microsoft processed 50T tokens. Google processed ~400T (480T in April). Let’s say Google plus Microsoft is 50% of the market, processing 450T tokens a month. If we still use a conservative 5x year-over-year growth estimate (a tenth of Google’s growth rate), we’d grow to 168 quintillion tokens processed in 2030. Our earlier estimate was 2.5 quintillion!
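These projections are straightforward compounding. A minimal sketch in Python (with the caveat that matching the essay’s figures requires an assumption on our part: five annual 5x steps for the Microsoft-only estimate, and six for the updated market estimate):

```python
QUINTILLION = 1e18

def project(annual_tokens: float, growth: float, years: int) -> float:
    """Compound an annual token run rate by `growth`x per year."""
    return annual_tokens * growth ** years

# Microsoft alone: 100T tokens/quarter -> ~400T/year, compounding 5x per
# year for five years.
msft_2030 = project(400e12, growth=5, years=5)           # 1.25 quintillion

# First market estimate: Microsoft taken as 50% of the market.
market_2030_v1 = 2 * msft_2030                           # 2.5 quintillion

# Updated estimate: Google (~400T) + Microsoft (50T) = 450T/month, taken
# as 50% of the market -> 900T/month, ~10.8 quadrillion tokens/year.
# Six annual 5x steps reproduce the ~168 quintillion 2030 figure.
market_2030_v2 = project(900e12 * 12, growth=5, years=6)

print(msft_2030 / QUINTILLION, market_2030_v2 / QUINTILLION)
```

The `project` helper is our own illustration, not anyone’s forecasting model—the point is simply how fast a 5x annual clip compounds.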

We’ve entered a phase of astounding and accelerating growth in token usage, and we are wildly underprepared.

Compute will try to keep up, but there’s no way. At 100 tokens/second, a single fully-utilized GPU can process 3.1B tokens a year. While this year’s 800T tokens could be processed with 250K GPUs (and NVIDIA is going to ship 4M+ chips), 168 quintillion tokens in 2030 would require 53B GPUs. Even a 100x improvement in the efficiency of the underlying models or hardware would still require 535M GPUs – more than 100x the number of GPUs NVIDIA will ship in 2025.
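The GPU math above is easy to check. A sketch, assuming 100 tokens/second per GPU running fully utilized all year:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600             # 31,536,000

# At 100 tokens/sec, one fully-utilized GPU processes ~3.15B tokens/year.
tokens_per_gpu_year = 100 * SECONDS_PER_YEAR

gpus_this_year = 800e12 / tokens_per_gpu_year  # ~250K GPUs for 800T tokens
gpus_2030 = 168e18 / tokens_per_gpu_year       # ~53B GPUs for 168 quintillion
gpus_2030_after_100x = gpus_2030 / 100         # ~530M even with a 100x
                                               # efficiency breakthrough

print(f"{gpus_this_year:,.0f}", f"{gpus_2030 / 1e9:.1f}B")
```

Full utilization is, of course, generous—real-world rates are far lower (the essay cites 30-50% later on), which only makes the required fleet larger.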

Breakthroughs are needed.

Why tokens matter:

The token is the atomic unit of the AI era. At the end of the day, models consume and produce tokens. A token is a word, sub-word, or symbol that the model turns into a number for computation and then back from numbers into readable text, images or sounds. Tokens underpin the economics of AI, with model providers charging on a per-token basis and cost/token being a north-star metric in price compression, especially on inference.  

Token throughput per GPU—how many tokens a chip can crunch each second—plus the watts each token consumes, sets the unit economics of inference. Hardware improvements and smarter software have already collapsed the cost of processing 1 million tokens from $180 to $0.75 over 18 months in 2023-24; Sam Altman expects a 10x drop in price every twelve months. Importantly, that’s for the same task; powerful forces are driving token volumes and total costs up, leaving per-token efficiency gains in the dust.
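A quick back-of-envelope check on that collapse, treating $180 and $0.75 per 1M tokens as endpoints of an 18-month window (our own annualization, not a quoted figure):

```python
# $180 -> $0.75 per 1M tokens over 18 months is a 240x total drop.
total_drop = 180 / 0.75

# Annualize: raising to the 12/18 power gives the per-year decline rate.
annualized_drop = total_drop ** (12 / 18)      # ~39x per year

print(f"{total_drop:.0f}x total, ~{annualized_drop:.0f}x per year")
```

Even annualized, that window ran well ahead of Altman’s 10x-per-year expectation—for a fixed task.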

Fueling the token exponential:

There are four key inputs to token growth, and they are compounding and accelerating one another:

  • Human user growth: AI adoption has hit escape velocity and we will see a majority of the human population using AI within five years. Today, ~10% of the world is using OpenAI, but 7.2 billion people have smartphones. By 2030, all new smartphones coming to market will have robust AI functionality.
  • Agent user growth: A single person might orchestrate 100 agents acting on their behalf across their personal and professional lives. Machines/service accounts outnumber human identities 82:1 today, and while we don’t see that ratio yet with agents, we’re not far from that world.  
  • Token load per task: Reasoning models use 20x more tokens and are up to 150x more expensive than non-reasoning models. Prompts themselves are also getting longer – Google’s latest earnings call confirmed their AI Mode queries are twice as long as traditional Search queries.
  • Tasks being done by AI: First came copywriting and then coding, but we’re seeing signs of more complex labor automation across legal, accounting, logistics, and finance. Furthermore, media, robotics, and scientific research are all undergoing a process of AI infiltration. Teenagers are now consulting ChatGPT on life questions; my three-year-old daughter is now accustomed to getting answers to questions from ChatGPT! And Gaby’s dad, seven decades away from Lila, is writing a novel with AI that already runs 65,000 words!

These forces will continue to grow because of model improvements, product breakthroughs, and the natural process of tech adoption. Perhaps, in the near future, all of these tokens will power a “country of geniuses in a datacenter,” if we can get the chips…and the power.

Turning electrons into tokens:

In a literal sense, a GPU, and even an entire data center, carries out the function of converting electricity into tokens. And AI is already taxing the grid. If the trends we outlined above continue, with 53B GPUs required to process 168 quintillion tokens in 2030, we’d need over 37.5K GW to power them (assuming 700 watts per H100). Today, we have ~1,200 GW of power in the US in total, and about 4.4% of electricity is already consumed by data centers. At 37.5K GW, we’d need 100% of a grid that is 30x bigger than today’s infrastructure. Impossible, unbelievable, and in need of radical change.
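The power figures follow directly from the GPU count. A sketch, assuming 700 watts per H100-class GPU and 100 tokens/second each:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600
WATTS_PER_GPU = 700                             # H100-class draw

# GPUs needed to process 168 quintillion tokens in a year.
gpus_2030 = 168e18 / (100 * SECONDS_PER_YEAR)   # ~53B GPUs

power_gw = gpus_2030 * WATTS_PER_GPU / 1e9      # watts -> gigawatts, ~37K GW
us_grid_gw = 1_200                              # today's total US capacity

print(f"~{power_gw:,.0f} GW, ~{power_gw / us_grid_gw:.0f}x today's US grid")
```

Note this counts only the GPUs themselves—cooling, networking, and the rest of the data center would push the real number higher.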

In early 2023, chip shortages crippled the industry and companies were buying GPUs off black markets. The market stabilized through increased supply, better utilization, and improved accessibility, resulting in a steady decrease in price per token. As chip supply caught up, these dynamics exposed the next bottleneck: power.

In response, hyperscalers are purchasing large scale power contracts as existing power infrastructure cannot meet current data center demand. Microsoft is restarting Three Mile Island, Meta is buying geothermal power, and Google is partnering with startups to accelerate nuclear power deployment. Power challenges explain, in part, the success of the neo-cloud. Companies like Coreweave, Nebius, and Crusoe were already in the business of power optimization for bitcoin mining. They’re all now playing the AI token game.

Alleviating the token exponential:

To be clear, there are extremely powerful levers to pull to keep pace with the token exponential:

  • Improved hardware: GPUs are improving (FLOPs/second/GPU improving 1.35x a year), and breakthrough hardware will deliver astounding token throughput.
  • Smaller models: Models with fewer parameters that still deliver desired outputs can deliver more tokens with fewer FLOPs.
  • Model architecture: New designs like Mixture of Experts, which enable less work per token but the same overall model capacity, will drive efficiency.
  • On-device/edge inference: We are moving to a world of local by default and cloud when necessary. This will require both model and hardware improvements, including breakthroughs in model compression, edge-optimized runtimes, and low-power hardware.
  • Improved GPU utilization: Currently, utilization rates sit between 30-50%. Tech that virtualizes and orchestrates GPU resources, such as RDMA-based networking and workload scheduling, can enable multiple jobs/users to share GPUs and reduce idle time.
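A rough sketch of how these levers might compound by 2030: the 1.35x/year FLOPs figure comes from the text, while the utilization and model-efficiency numbers are illustrative assumptions. Even a combined ~16x gain falls far short of the token exponential described above, which is the point.

```python
# Illustrative compounding of the efficiency levers above.
# Only the 1.35x/year FLOPs figure is from the text; the rest are assumptions.
years = 5
flops_growth = 1.35                    # per-GPU FLOPs improvement per year
util_today, util_future = 0.40, 0.70   # assumed utilization gain (30-50% today)
model_efficiency = 2.0                 # assumed: smaller models / MoE halve work per token

hw_gain = flops_growth ** years        # ~4.5x from hardware alone
util_gain = util_future / util_today   # 1.75x from reduced idle time
total_gain = hw_gain * util_gain * model_efficiency

print(f"hardware {hw_gain:.1f}x * utilization {util_gain:.2f}x "
      f"* model {model_efficiency:.1f}x = {total_gain:.0f}x effective capacity")
```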

All of this will help. But 462 quadrillion tokens a day is no joke.  

The token exponential becomes an S-curve… or a bell curve?

Compute capacity will lag behind token demand for the foreseeable future, but we will hit AI saturation, eventually. Token growth drivers will slow: use cases, model advancements, user adoption and usage rates will all plateau. We’re in an era of radical exuberance that feels both rational and irrational, but the frenetic experimentation and investment will all pass. As Carlota Perez explains in her theory of technological revolutions, we’ll eventually reach a steady state, in a new world. At that point, compute will be able to catch up.

While our S-curve envisions token demand climbing for years before compute catches up, Yann LeCun is skeptical. He views today’s transformer models as essentially high-powered guessing machines that cannot, by definition, reason like geniuses; transformer-based intelligence will eventually underwhelm, and we’ll have overbuilt compute. Then perhaps more efficient model breakthroughs will emerge. In that scenario, our S-curve flattens into a bell curve.

While no one knows how this will play out, booms and busts are likely. But even quantum leaps in model architectures will require many years (the Transformer came in 2017) to work their way into daily usage. With that runway, the existing backlog of transformer-based workloads will keep today’s GPUs busy, while the industry figures out fresh hardware.

Any way you cut the deck, we are at the beginning of the AI era, and we need specialized compute solutions.

Opportunities for startups

When we invested in Etched just two years ago, we made a bet that the inference market would grow dramatically faster than expected, making specialized hardware not just economically attractive but essential for AI at scale. While ASICs like Sohu are fundamental to solving the demand problem today, we need more.

To get to a world where powerful AI can build a “country of geniuses in a data center”, we’ll need radical innovations across the entire compute value chain. When people have access to a team of PhDs working for them, digital companions, personalized media, robots, etc., and AI adoption is pervasive, the compute needs will be astronomical. This will require more chips, more manufacturing capacity, more energy, more talent, and a ruthless focus on building for the flood that’s coming.

Investing in this space at seed is complex: capital intensive, massive incumbents, unpredictable supply chains, and many moving targets around compute need driven by model architectures and specific use cases. Most VCs stay away. Indeed, US venture funding into the semiconductor industry dropped from 8% in 1995 to 4% in 2005 and sub-1% in 2015. This is starting to change, and we predict it will change radically over the next decade to meet the moment.

We’re looking for opportunities aligned with our thesis that radical breakthroughs are needed:

  • New materials: There will be a generation of materials beyond silicon; we’re particularly interested in gallium nitride for power, alternative materials for substrates, and photonics for memory.

  • New architectures: New model architectures, like the distillation techniques that accelerated DeepSeek, and particularly architectures best suited for multimodal and physical AI models, are a huge opportunity.
  • Infrastructure to reduce the strain on compute: There will be more improvements in interconnection, power accessibility, data center reliability, utilization, and the operation of data centers.
  • Compute at the edge: Enabling technology that moves workloads away from GPUs and closer to where data is being collected at the edge will reduce the need for reliance on hyperscaler compute.
  • Orchestration solutions: As compute becomes more heterogeneous, and quality of compute varies dramatically depending on how workloads are managed, there will be emergent solutions in workload orchestration.
  • Radically novel solutions that don’t fit neatly into any of these buckets: We’re thinking about harnessing energy from the ocean for underwater data centers and putting GPUs in space. Quantum. And technologies that look crazy today will not in a few years.

Compute will look radically different in a decade if we are to realize the potential of powerful AI, and we’re looking for founders bold enough to take on this daunting challenge. We’re also looking for network nodes who are passionate about this space.

Many thanks to Phil Brown (Meta), Ben Chess (ex-OpenAI), Rob Wachen (Etched), Max Hjelm (CoreWeave) and many others who helped shape our perspectives as thought partners for this article.

Vertical AI isn't vertical software with a chatbot

Pros and pitfalls we’ve noticed in the emerging category

“Vertical AI” is one of the buzziest categories in early stage investing. That’s for good reason, with early breakouts including Abridge and EvenUp raising a collective $570 million in 2024. These companies share a similar premise: Do work rather than facilitate work, therefore capturing services spend (13% of GDP), rather than software spend (1% of GDP).

While it’s easy to view Vertical AI companies as the natural evolution of Vertical SaaS in the age of LLMs, we believe they are fundamentally different from their predecessors. Specifically, Vertical SaaS companies were SaaS companies; Vertical AI companies come in a variety of forms, some of which are not pure SaaS. As such, founders launching a Vertical AI startup need to carefully evaluate which business model is optimal for their market, because the business model informs both the key risks and the core advantages.

What are the categories of vertical AI companies?

Vertical AI companies have a slew of options when it comes to choosing a business model. That said, nearly all of these businesses are doing one of two things: they either enable a service provider or are a service provider. There are several business models within each category.


Each of these categories has distinct advantages and disadvantages. In this article, we’ll give an overview of how we evaluate each.

Enabling Service Providers

“Do it for me” co-pilot

The AI co-pilot is the most popular business model we see early-stage Vertical AI companies pursue today. These companies have all the advantages of traditional software businesses: high-margin, recurring, (ideally) sticky revenue, with seemingly easier sales cycles than their vertical software predecessors.

Co-pilots are especially attractive for companies that serve SMB and SME customers, as the breakouts offer significantly improved ROI compared to traditional vertical software. These companies are growing quickly by automating work rather than providing incremental productivity gains via workflow tooling, oftentimes with incredibly fast or immediate time to value. We’ve written extensively about this theme in our SMB Tech installment of Change Order.

On the other hand, co-pilot business models come with two major challenges:

1. User adoption: Co-pilots that require a significant customer behavior change will likely struggle to drive user adoption. We’ve seen the most hesitancy around two themes:



2. Product expansion beyond the wedge:
We believe that there will be a massive number of successful wedge products that plug into the system of record and quickly grow to $50M+ ARR. At the same time, we have questions about how many of these companies will go on to become enduring, generational businesses.


Reason to win:
The ones that do will have won because the founder identified the second act early in the company’s life cycle (ideally months or even weeks after launching the wedge product) and delivered a magical experience for end users from day one. The profit pool or labor spend you target with your second act should dictate your pricing strategy and engineering resource allocation, which in turn determine how quickly you can attack that second act.


Key Risk:
The fundamental risk of meeting customers where they already work is that the system of record company can complicate or even turn off existing integrations. They can also launch their own version of your tool. If the end goal is to replace the core workflows of the system of record, vertical AI founders must be extremely clever and move fast in launching new products.


AI Native Service Provider to Other Service Providers


Companies like EvenUp have averted a core challenge of the co-pilot model: end-user adoption. They’ve done this by becoming an outsourced service provider and selling work to the services firm, rather than selling software. These companies typically go after the labor pool of spend rather than software budgets. EvenUp, for example, prices as a percentage of a law firm’s existing paralegal staffing costs. Consequently, they are able to drive larger ACVs from day one, enabling them to grow revenue quickly.


Unlike many co-pilot solutions, service providers often go after enterprise customers first. This is because enterprise customers own a disproportionate percentage of industry data, which is critical for building an early data moat and offering a 10x better service as you go down market.


While these companies can grow fast, they also have two core challenges:


1. Pricing pressure from new entrants:
Skeptics question whether AI-native service providers like EvenUp have real defensibility within their wedge product. While locking up enterprise customers and leveraging their data to build a better product gives these companies a strong first-mover advantage, we don’t believe that is a sufficient driver of long-term defensibility.


The enduring service providers will have to find ways to build defensibility in their second act. If they don’t, they are likely to be undercut on price by new entrants and ultimately suffer from thin margins driven by downward pricing pressure.



2. Operational complexity:
Leveraging AI to build a services business does not negate the fact that it is still a tech-enabled service and may face many of the same challenges that services businesses have always faced. One obvious operational pitfall is lumpy, project-based revenue tied to market cycles. This creates two core issues: the first is staffing utilization, which, when suboptimal, can pull down gross margins; the second is building a recurring, predictable revenue stream that will be valued at a higher multiple. To date, we’ve found that the best way to drive more predictable revenue is by charging a minimum spend upfront or using a credits-based system.


Ultimately, these companies will need to be very thoughtful in how they manage their “human in the loop” workforce and set up the right data infrastructure to enable their workforce to train the AI models over time.


Reason to win:
AI-native service providers selling to other service providers should be able to capture more gross profit dollars per customer than their purely software peers. They should also find acquiring customers a bit easier than software tools do, as the change management burden they impose is likely much lower than pure software, especially if their service provider customers already have a budget for outsourcing the specific activity.


AI-Native Marketplace


An often-overlooked business model in the Vertical AI landscape is the AI-native marketplace. Once a hot business model in the early 2010s, marketplaces have yielded few true breakouts in recent years; most of the large markets had already been won by the time online customer acquisition costs started to rise. We’re now entering an era where marketplaces can thrive again, as AI fundamentally shifts the cost structure in categories that were previously deemed too challenging.


This business model is relatively uncommon, though we’ve seen a couple take off and grow very quickly in the legal sector, including Manifest and Marble. Both of these companies are going after micro-SMB and solopreneur law firms that do not have the bandwidth or marketing muscle to spend significant time on customer acquisition.


These businesses leverage AI for customer acquisition, as well as customer success on the demand side and automation of work on the supply side.


By delivering a fully vetted customer (not a lead) and automating much of the low hanging fruit involved in servicing that customer through AI, these companies are able to command high take rates on each “job” they deliver and provide end consumers with a “better, faster, cheaper” option. They’re able to do so because AI enables two important efficiency gains:

  1. Each of the marketplace’s sales reps and CX agents can handle 10x the number of customers they could before
  2. Each service provider on the other side (i.e., the law firm) can take on more clients as traditionally manual tasks are automated away with AI


Another key driver of very strong AI-native marketplaces is favorable cash conversion cycle dynamics. Markets in which the marketplace collects revenue on day 1 but does not pay out the service provider until day 30-90 create a float that the company can reinvest into growth. Consequently, they are able to grow very quickly.
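A toy illustration of that float dynamic, with entirely hypothetical numbers (the GMV, take rate, and payout lag here are assumptions, not figures from any company mentioned):

```python
# Toy illustration of the cash-float dynamic described above.
# All numbers are hypothetical assumptions for the sketch.
monthly_gmv = 1_000_000      # gross marketplace volume per month ($)
take_rate = 0.30             # marketplace's cut of each job
payout_lag_days = 60         # provider paid ~60 days after the customer pays

# The marketplace temporarily holds the provider's share of roughly two
# months of volume, which it can reinvest into growth in the meantime.
months_of_float = payout_lag_days / 30
float_held = monthly_gmv * (1 - take_rate) * months_of_float

print(f"Float available to reinvest: ${float_held:,.0f}")  # $1,400,000
```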


In addition, marketplaces have an innate network effect built into their business model. They can also start building strong consumer brands through a high-quality customer service experience and transparent pricing. While these companies can be harder to scale as founders need to acquire both supply and demand, they’re inherently more defensible.



With that said, they come with their own set of challenges:


1. Price compression from new entrants:
Similar to vertically integrated service providers, as these AI-native marketplaces start to truly take off, new players will enter the market. While marketplaces benefit from strong network effects, we believe that new entrants might drive up customer acquisition costs and put downward pressure on pricing. The 50%+ take rates we see today are likely not sustainable long term.


2. Bumpy, unpredictable revenue:
Depending on the type of service, transactional revenue is typically not as predictable or valuable in the public markets as SaaS revenue. These companies will need to scale to much larger sizes to achieve similar valuations to their SaaS counterparts. The good news is that many of these markets are absolutely massive and represent strong wedges into more recurring revenue streams based on the building of trust with their end consumers (e.g. financial advisors making recommendations for annuities, insurance, etc.). It’s critical for founders to build in marketplaces where there is still a venture scale TAM opportunity even with pricing pressure.


Being a Service Provider


Vertically Integrated Services Firm


Unlike AI Native Service Providers, Vertically Integrated Services Firms are not selling discrete work to an existing service provider. In contrast, they are disrupting that service provider by offering an often cheaper, faster, and/or better experience to the end customer. They are able to do this by leveraging AI to automate away some traditionally manual, time consuming, and expensive processes that are done by humans today.


On the surface, AI Native Service Providers and Vertically Integrated Services Firms share many of the same advantages and disadvantages. However, there are critical tradeoffs between being a services firm and enabling one.


Advantages of being a services firm no matter how you do it:

  1. Owning the profit pool: By being the services firm, businesses own the entire profit pool. You have more control over the core drivers of profitability and can experiment with different tooling to drive costs down over time.
  2. Capture and create truly complex and proprietary data: Vertically integrated services providers typically develop nuanced understanding of complex processes by having access to significantly more data than either type of Vertical AI business. With proprietary data comes key insights that typically lead to a higher level of automation and a more delightful customer experience over time, all of which are likely to drive true first mover advantages.
  3. Brand power: By offering a better service directly to end customers, these companies can build durable and lasting brands that users associate with a high trust experience and a quality product.
  4. Rethinking the end customer experience and internal operations from the ground up: Launching a new services firm means you get to reimagine firm operations and the customer experience. If a services firm charges customers on an hourly basis today, they’re not incentivized in most markets to leverage AI to become more efficient as it’ll impact their topline. When building a new firm, you could easily rewrite the rules and charge on a project basis. Additionally, legacy firms are stuck with old clunky software tools that they’ll likely struggle to replace and sub-optimal processes for their end customer. By launching a firm from scratch, you can fully reimagine operations and pass a lot of that value onto the end customer.


There are two core disadvantages:

  1. Operational complexity: Being a service provider is fundamentally more operationally complex than enabling an existing firm, especially at the onset. It takes an incredible amount of high-quality hiring, process development, and industry expertise. Consequently, the capital requirements are often very high.
  2. Exit multiples: Services firms have historically traded at lower multiples than software companies. This may impact a startup’s valuation in both the private and public markets, as well as deter some downstream investors from getting involved.


Ultimately, founders need to weigh operational complexity with the possibility for larger profit pools and a stronger moat when choosing between becoming a services firm and enabling a services firm.



AI Enabled Roll Up


In order to avoid the cold start problem, some founders may consider purchasing an existing services firm and layering on AI tools in order to deliver a better, faster, and cheaper service. These companies are essentially Vertically Integrated Service Providers, but their core advantage is that they skirt some of the operational complexity involved in starting a services business from scratch. They do not have to worry about hiring every employee and setting up all-new systems, and they immediately have access to a wealth of data. Most critically, they start with a book of business and from a place of trust with existing customers, which is critical in relationship-driven industries like accounting.


They have two core disadvantages compared to Vertically Integrated Service Providers that are starting from scratch:

  1. They need to master M&A and product simultaneously: While this may seem like an issue that can be solved with the right talent, M&A and product are two very important yet very different core competencies. It’s rare for a founding team to have both. Consequently, many of these businesses may end up looking more like private equity roll-ups than tech-enabled services firms. On the other hand, the team may build great products but acquire bad assets. Either way presents a challenging path ahead.
  2. Change management: Acquiring an existing business means inheriting existing talent, culture, and ways of working. Large companies often hire slews of outside consultants to enact change management. While the process is usually simpler at smaller organizations, it is still likely to be challenging and time consuming, especially when it comes to digitizing traditional tech laggard industries.


AI Franchise


A third flavor of “being a services firm” is becoming an AI franchise. These founders recognize the operational complexity of building a services firm and, instead of trying to scale on their own, leverage the power of their tooling and brand to drive growth. The core advantage of this approach is scalability and capital efficiency: it’s easier to offload some of the growth and management to entrepreneurs who have expertise in their own markets. Additionally, it’s incredibly powerful in markets where the franchisee, rather than the franchisor startup, takes on the initial capex investments (e.g., robotics or other equipment). In many instances, AI franchises will have higher ACVs than pure SaaS companies selling to the same target customer base, assuming their “franchise fee” is in line with today’s franchise royalty rates.


However, that same advantage comes with one core disadvantage:

  1. Quality control: It will be harder for entrepreneurs pursuing this approach to ensure that their brand power does not get eroded over time by sloppy operators. Having strong compliance mechanisms in place and a thorough onboarding process is critical for the success of AI franchises.


While these business models have their own unique challenges, we believe there will be multiple generational businesses built in each of these categories.


Founders who find the right business model market fit and nail their second act early in their company’s journey are poised to win. If you’re ideating or building in Vertical AI, please let us know. We’d love to chat with you!


Vertical AI business model overview


Haiqu

Building quantum's software layer before the hardware arrives

Richard Givhan, CEO and co-founder of Haiqu, is a founder who makes you question industry assumptions. Most VCs dismiss quantum, while semi-informed VCs offer predictable quasi-technical explanations as to why we’re “decades away”. But builders like Richard use first-principles thinking and technical depth to plot a course that will upend convention. “What PyTorch did for AI, Haiqu will do for quantum,” he said, pointing to results in which engineers deployed today’s powerful algorithms on quantum computers with Haiqu. It’s the kind of progress the industry has been waiting for.

And it's picking up momentum. Last year marked a breakout year for quantum investments – over $11.6B was invested across public and private quantum companies, compared to only $2.4B the year prior and only $1B the year before that! From venture, quantum saw $4.5B in the last three years vs. $4B total in the 15 years prior.

The market is starting to believe. Quantum computing presents extraordinary opportunities that could change the trajectory of human civilization. Our world is, in part, governed by quantum dynamics that only quantum computers can reveal. This could unlock hard-to-fathom breakthroughs in areas like energy production and drug discovery. More practically, quantum computing could transform finance and insurance and upend encryption as we know it.

Despite investment and boundless potential, we have not yet reached “quantum advantage” – when a quantum computer can solve a problem faster and more efficiently than a classical computer, though there is debate about whether this must occur on a practical problem. When quantum advantage is reached, and quantum computers are performing economically valuable, practical problems better than classical computers, there will be a deluge of attention on quantum, dwarfing the past two years.

So how do we get to quantum advantage? The current state of deploying applications on quantum computers is challenging: expensive, hard-to-access machines that require experienced teams to write bespoke, non-standardized algorithms. Quantum teams need the right software and frameworks to deploy algorithms on today’s hardware, not a decade from now. Haiqu provides circuit compression and error shielding software within easy-to-use frameworks that let teams focus on solving problems instead of wrestling with hardware. With Haiqu, teams can run applications at greater scale and with dramatically fewer computational steps on today’s early, error-prone quantum hardware.

Richard has long been captivated by quantum computing’s potential to tackle the world’s toughest problems. After growing up in Cyprus and studying applied physics at Stanford, he joined Creative Destruction Lab, where he teamed up with co-founder Mykola Maksymenko, a former quantum researcher at the Max Planck Society and the Weizmann Institute, to start Haiqu.

We are thrilled to announce Haiqu’s $11M seed round. We were fortunate to lead the round, with participation from Qudit Investments, Alumni Ventures, Collaborative Fund, Silicon Roundabout Ventures, Angel One Fund, and returning investors Toyota Ventures and Mac Venture Capital. We’re proud to back Richard, Mykola, and the entire Haiqu team in their bold endeavor to enable their customers to reach quantum advantage across every vertical.

Teleskope

Data security rebuilt for companies deploying AI

Data security is a burning problem in the AI age

There is a simple formula for seed investors: big market + strong tailwinds + great founder = good investment. You don’t know how these factors will play out before an investment, but sometimes they all turn out better than you had predicted. This is the case with Teleskope. We have seen no founder grow as fast as Lizzy, and the market she is in could not have better tailwinds. Lizzy started Teleskope to bring to market the data security solution she built while at Airbnb. We all believed that data would become more valuable over time, but then came the latest AI wave and our assumptions exploded. We’re thrilled to share that Lizzy and team have raised a $25M Series A to go tackle what is sure to be one of the most essential categories for the enterprise in the age of AI.

For a long time, data security has been a problem for security teams. Data lives everywhere, spanning SaaS applications, on-prem systems, and cloud resources. This means that data is hard to wrangle, and as a result, organizations risk sensitive data ending up in the hands of the wrong people.

This is not a new problem, but AI has accelerated the severity of it by 100x. Companies today see their data as a core asset more than ever before. They want to build differentiated products leveraging their data, make their enterprise tools more usable by enabling AI search, and fine-tune models for AI use cases using domain- and company-specific data. As companies rush to adopt AI, tools and agents like Microsoft Copilot can now surface sensitive information in seconds—far faster than any human search ever could or any attacker could extract manually—making existing risks of data exposure and breaches much easier to realize. Many companies and security teams recognize this risk and are holding back on AI adoption until they can clean up and secure their data.

This is where Teleskope comes in. Teleskope has built the category-leading agentic data security platform that goes beyond visibility and alerts to help teams actually fix data security issues. It enables CISOs and engineers to act on real risks through agents that learn the company’s policies and context, and through a natural-language workflow builder powered by small language models (SLMs) that drives remediation in milliseconds.

Imagine an internal rule that simply says “whenever PII is shared in Slack, delete it,” enforced in real time. This is what Teleskope enables, across all types of data and applications. This kind of work would typically be impossible for engineers to do on their own, or take hours with existing tools, sifting through alerts and performing manual remediations. Teleskope has proven to be a game changer, providing a jaw-dropping customer experience to its users.
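To make the shape of such a rule concrete, here is a minimal sketch of the detect-and-remediate loop it implies. This is not Teleskope’s implementation; the patterns, the message dict, and the remediation hook are illustrative stand-ins.

```python
import re

# Hypothetical sketch of a "whenever PII is shared in Slack, delete it" rule.
# NOT Teleskope's actual system -- just a toy detect-and-remediate loop.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def find_pii(text: str) -> list[str]:
    """Return the PII categories detected in a message."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

def remediate(message: dict) -> bool:
    """Flag the message for deletion if it contains PII; return True if remediated."""
    hits = find_pii(message["text"])
    if hits:
        # In production this would call the chat platform's delete API and
        # log an audit event; here we just mark the message.
        message["deleted"] = True
        message["pii_types"] = hits
        return True
    return False

msg = {"text": "Customer SSN is 123-45-6789"}
remediate(msg)   # msg is now flagged for deletion with pii_types=["ssn"]
```

A production system would of course pair detection with policy context and audit logging rather than a bare regex pass; the sketch only shows why real-time enforcement beats hours of manual alert triage.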

Teleskope was founded by Lizzy Nammour, who experienced the problem firsthand at Airbnb. She was tasked with getting Airbnb’s internal data GDPR-compliant and IPO-ready, and could not find anything on the market that would help find and remediate the problems. She founded Teleskope out of her own frustrations, and has done a miraculous job scaling the business quickly, delighting customers, and building a stellar team.

Why We Invested

Teleskope is off to the races, with customers like Ramp, Notion and Alloy. One of the most amazing testaments to Teleskope’s impact is the way its customers talk about the product and team. The customer love is palpable, and in our careers we have only come across such enthusiasm from users a handful of times. We’re excited for this fresh round of capital to enable the business to continue accelerating.

The progress of the company so far is exciting, but we are most looking forward to seeing where the business goes. In an AI age, so much manual, wasteful work will be automated. In security, we believe many automation opportunities exist, and data security is a great example. Data security engineers spend much of their time toiling away on manual scripts to solve data vulnerabilities and prevent exploits. In the future, this process needs to be augmented with AI, and Teleskope is positioned to be the agentic data security engineer.

We are thrilled to be collaborating with our friends at M13 and Lerer, along with many fantastic security leaders who are personally investing in this round. Congratulations to Lizzy and team Teleskope!

Incumbents can't follow you into AI-native

In the realm of business strategy, few concepts hold as much transformative potential as counter-positioning. Originally articulated by Hamilton Helmer in the book 7 Powers, counter-positioning refers to the strategic maneuver of introducing a new, innovative business model or product that incumbents cannot or will not emulate. This often occurs because doing so would undermine their existing business model.

After incubating two businesses this year, one with the former CEO of Angi and the other with a handful of the largest real estate developers in the world, our goal in this post is to share our playbook for finding your next counter-positioning opportunity in a vertical market.

This post is the first in a multi-part series diving into our learnings in Vertical AI, drawn from over 1K pitch meetings, customer meetings, and founder and executive interviews across more than a dozen end markets.

What is Counter Positioning?

Hamilton Helmer, in his seminal work "7 Powers," defines counter-positioning as a strategic advantage where a new competitor adopts a different business model that an incumbent cannot replicate without suffering significant losses. This forces the incumbent either to adopt the new model and cannibalize their own revenue or to continue as is and lose market share. Examples of counter-positioning include:

  1. Netflix vs. Blockbuster: Netflix’s subscription model and online streaming service disrupted Blockbuster's rental business, which relied heavily on late fees and physical stores.
  2. Amazon vs. Traditional Bookstores: Amazon’s online bookstore with vast selection and home delivery counter positioned against the traditional bookstore model.
  3. Dandy vs. Traditional Dental Labs: Dandy’s free intraoral scanner program was an attractive value proposition for dentists, but it would’ve led to dramatically higher expenses for dental labs, which had never provided scanners before and rarely invested in customer acquisition or tech.


How You Can Leverage AI for Counter-Positioning


AI technology presents a unique and unprecedented opportunity for counter-positioning across various industries. Here’s how you can find your next business idea and leverage AI to establish a strategic advantage:


Step 1: Identify the right market


Find services that have been historically priced hourly and are reliant on heavy human labor


AI excels at automating repetitive and labor-intensive tasks.


Interview key buyers at large companies in your markets of interest. Ask them which third-party service providers they use that bill hourly, how much they spend on them, and how the service is performed.


By targeting industries where services are billed by the hour and require significant human involvement, you can introduce AI solutions that offer superior efficiency and cost savings. This enables you to dramatically undercut on price while having a better margin profile.


It’s easiest to look into end markets where companies frequently use BPOs for things like claims processing, scheduling, or report writing.


OR


Find markets where companies outsource data processing or other projects on a fixed fee/project basis


Incumbent service providers who are used to charging $30K+ per project or $10 per job are not strongly incentivized in the short term to drop their prices, but they are incentivized to leverage AI to improve their gross margins.

In markets like this you can either:


Enable the services firms OR Be the Services Firm

Enabling the services firm (aka selling to them) approaches include:
  • Delivering work to the Services Firm, which enables them to scale without adding additional operational complexity for a fraction of the cost - e.g. EvenUp
  • Co-Pilots that equip the services firm with tools, which unlocks a massive increase in employee efficiency and helps the business grow its revenue per employee - e.g. DataSnipper, Harvey
  • OR Enable net new service providers to counter-position - By selling directly to service providers that are just getting started, you can enable them to counter-position. This works best in markets where a high number of net new businesses are being started. Companies like Captions have taken off in the consumer world, but we’re also hearing that offshore companies providing video editing services are thriving by leveraging tools like it.

Be the services firm can look like:
  • Start a vertically integrated services firm - Sell the AI-plus-human service directly to the business and counter-position aggressively on price (50%+ cheaper) - e.g. Hedral, Pilot, Invisible Tech
  • Start an AI marketplace business - Provide customers/revenue and AI software to the supply side to make them 10X more productive. Require them to use your software when serving the demand side/end clients, which you’ve acquired for them. This works best in markets with a high number of solopreneurs or SMBs who spend far too much time on back-office tasks and are historically bad buyers of software. By driving the supply side new business and leveraging a take-rate model, you’ll be able to make more gross profit per SMB than your SaaS counterparts. - e.g. Manifest and Flare/Marble Law
  • Buy + modernize an existing services firm - Improve the business’s EBITDA margins 2X+ overnight and help drive more demand through sophisticated AI-powered customer acquisition techniques - e.g. Metropolis
  • Franchise a services firm concept - Help people launch their own business in a turnkey way, where they can leverage AI to make operating the business 10X easier while undercutting competitors on price and experience - e.g. Farther

Step 2: Analyze the market structure


Is this a fragmented market with lots of Mom and Pops providing the service who use onshore labor?


Or is it a market with sophisticated, scaled players who efficiently leverage technology and offshore labor to drive the COGS down already?


The more fragmented and less sophisticated the market the better.


Step 3: Analyze the role AI can play


Talk to experts in the market to understand how the service is specifically delivered.


Perform process, task, and communication mining exercises where you map out how the service is actually provided.


Then think through this question:


How can I introduce human-in-the-loop products that are better, faster, and cheaper into a great market?


Combining AI with human oversight via a service or co-pilot can result in products that outshine traditional services on multiple fronts. You want to find jobs to be done that will result in:

  • Better Quality: In markets where accuracy is crucial, AI can deliver superior quality. For instance, image recognition technology powered by AI can outperform human capabilities in terms of speed and precision, proving invaluable in fields like medical imaging and security (e.g. Verkada, AmbientAI).

  • Faster Service Delivery: In industries constrained by human labor supply, such as healthcare and legal services, AI can significantly accelerate processes. Moreover, markets that require extensive licensing and training (e.g., specialized medical practitioners, architecture and engineering) can benefit from AI-driven solutions that bypass these constraints with signoff by a licensed professional (when allowed), providing faster service delivery when the human can play more of a quality assurance role.
  • Cheaper Cost: Identifying and targeting major cost components in traditional business models can yield substantial savings. For instance, AI-driven customer service solutions can reduce the need for large call centers, cutting down on overhead costs while improving response times and customer satisfaction.


Step 4: Figure out if it is a Great Market, not just a good one


A million opportunities exist out there, but you can only choose one. The difference between a good market and a great market will ultimately decide your long-term enterprise value. So what do we think makes a great market?


Better quality leads to 10X+ ROI, preferably driving revenue


Markets where higher quality results will lead to a significant ROI are great. Examples include:

  • AI medical billing leads to higher collection rates
  • AI procurement leads to more price transparency and immediate savings
  • AI call answering increases conversion rate of customers leading to more revenue


Faster Service Delivery leads to better returns or cash flow


In real estate for instance, developers find themselves consistently waiting for service providers to deliver their product. This delay creates a drag on the internal rate of return (IRR), so speeding things up for them will make them look better in the eyes of their investors.


Similarly, many businesses have challenging cash flow dynamics: they have to pay their employees every other week, but do not get paid by the end customer until the project is completed. In instances where you can expedite service delivery, you might be able to radically improve the cash flow of the business and reduce the stress levels of the owner.


High immediate savings that clearly flow right to a tight bottom line


Markets where margins are very tight and you are able to quickly and predictably reduce the cost of a major line item are bound to see a lot of demand.


If companies are running at 15-35% gross margins or 5-15% EBITDA margins, and you can provide a major expense line at 50% of the price, the demand will come.
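The arithmetic behind this is worth making concrete. A minimal sketch, using hypothetical numbers (a $10M-revenue services business with a $2M outsourced expense line), of how halving one major cost item moves a thin EBITDA margin:

```python
def ebitda_margin(revenue, expenses):
    """EBITDA margin as a fraction of revenue."""
    return (revenue - sum(expenses)) / revenue

# Hypothetical services business: $10M revenue, $9M total expenses,
# of which $2M is the line item an AI provider can deliver at half price.
revenue = 10_000_000
other_expenses = 7_000_000
target_line_item = 2_000_000

before = ebitda_margin(revenue, [other_expenses, target_line_item])
after = ebitda_margin(revenue, [other_expenses, target_line_item * 0.5])

print(f"EBITDA margin before: {before:.0%}")  # 10%
print(f"EBITDA margin after:  {after:.0%}")   # 20%
```

With these assumed numbers, cutting one 20%-of-revenue expense line in half doubles EBITDA margin from 10% to 20%, which is why buyers in thin-margin markets don’t need much convincing.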


The Real Winner: Obvious workflows to build into post service delivery


This is the major key! A great AI-powered, tech-enabled service company or co-pilot will watch its margins get compressed over time if it cannot figure out a way to build stickiness.


An ideal scenario: the service is delivered via a proprietary workflow platform that involves multiple stakeholders at the company (or outside it) receiving the service, who are then able to leverage the platform to complete downstream tasks.


Conclusion


AI is poised to create the greatest counter-positioning opportunity of our lifetime.


By understanding and strategically applying AI to disrupt traditional business models, you can achieve a competitive edge that incumbents find difficult to replicate.


Whether it’s through improved quality, faster service, or cost reductions, the potential for leveraging AI with counter-positioning is immense.


If you’re embracing this transformative technology to redefine an industry, we want to talk to you, no matter what market you’re playing in. We believe the opportunities are endless.

What Primary looks for in early-stage Go-To-Market SaaS

At Primary we are always thinking about the future of the Go-to-Market stack. This is partially because we spend so much time—hundreds of hours a year—working with portfolio founders on the sales, marketing, and customer success functions. It’s also a space we know intimately well from our past lives in senior operating roles, where we were both economic buyers and end users of GTM solutions.

Cassie Young was Primary’s first GTM leader. She was previously the CRO at Sailthru and then joined its acquirer Marigold, where she became CCO and managed a $200+ million P&L and a team of 200+ people in the U.S., Europe, Australia, and New Zealand. In mid-2022 Jason Gelman joined us from Compass to further expand our GTM support; in backchannel calls, we heard he was “the best revenue strategy and operations leader in New York” several times over. And rounding out Primary’s GTM pod is Zach Fredericks, a principal on our investing team with rich product management experience on the operating side.

Our familiarity and excitement with the GTM space have left us with strong conviction that there are still many category-defining opportunities yet to be seized, and we believe the recent advances in AI will accelerate—though also complicate—their creation. 90% of sales leaders say that they plan to utilize AI solutions “often” in the next 24 months; Gartner predicts that sales enablement budgets will actually increase 50% by 2027—an increase much higher than we would have expected given overall budget compression.


We are specifically compelled by GTM SaaS businesses poised to offer a combination of integrated, ROI-rich workflows and data advantages, regardless of which comes first (but we believe the best businesses have plans to achieve both). We are also excited by companies focused on the "connective tissue" that exists between multiple GTM teams. Below, we’ll elaborate on these priorities, plus share insights from some of our notable friends and collaborators in the space.

In case this is not abundantly clear, if you are working on something in any of these areas, we want to speak with you.

The state of the sector


The GTM stack—also sometimes referred to as “RevTech”—refers to solutions that service the entire customer and revenue cycle. The space is crowded. Really crowded. And generative AI will only compound that reality. Consider MarTech, which is just one sub-category of GTM tech: that vertical is estimated to have grown 7,000% over the past 12 years (not a typo!), even after plenty of industry consolidation, wind-downs, and the like. And the proliferation of “SalesTech” is quickly starting to feel like MarTech 2.0. Buyers of these technologies are exhausted by the number of options available to them.

And while AI is making the GTM stack even more crowded, GTM executives are still shopping for new solutions, because technology advancements in this category are making it easier and cheaper to build and scale the GTM function. As Mandy Cole, Partner at Stage 2 Capital, tells us, “One of the most significant changes we will see in GTM as a result of AI in the next year is improved efficiency and CAC payback because people will not only be able to do more in the same amount of time, but that improvement could mean less people to produce the same outcomes.” An Insight Partners survey supports this, indicating that 87% of surveyed GTM leaders anticipated that generative AI would increase efficiency by at least 16%. Companies are ready and willing to invest here.

Budget compression in the age of doing more with less

“Since 2015, the GTM tech space has gotten incredibly saturated,” says Max Altschuler, General Partner at GTMFund. At this point, “GTM tech is an area where CFOs are aggressively cutting budgets or just not spending. A space where tech is sold by seats and those seats are being eliminated via layoffs.” With that, “only the best one or two companies in each category will get funded. And they will have to have real proof points in order to even be considered a ‘need to have’ category.” Put another way, successful GTM solutions will address obvious and acute pains in an organization. We love The Challenger Sale as much as the next sales nerd, but GTM products that need to teach organizations why their solutions are relevant simply will have to work much harder to gain adoption and traction.

Feature vs. platform

The age-old “feature vs. platform” conversation is a common one in the GTM category. Today there are many GTM point solutions that are focused on a singular business application, and CFOs and most champions are eager to consolidate their vendor stacks. Moreover, businesses focused on a singular use case are inherently rate-limited on how big they can ultimately become. As such, the GTM category is ripe for both organic and acquisitive consolidation, as evidenced by Gong launching Gong Engage to compete with Outreach and ZoomInfo’s $575MM acquisition of Chorus.

How Gen AI is playing in so far

Generative AI has made things both exciting and challenging. Because the barriers to entry are so much lower with GenAI technology, the winners will be those who can think big. Now is not the time for point solutions.

GTM incumbents have embraced AI technology to launch countless new products. Salesforce recently launched AI Cloud, a set of GPT-powered tools supporting multiple business functions. 6sense, the revenue intelligence platform, launched a generative AI email writing feature. The list of established GTM tech companies launching AI products is seemingly endless.

And yet we’ve also seen waves of new startups and point solutions enter the market addressing GTM pain points. We’ve looked at more “AI-powered BDR” businesses than we can count. The barrier to launching innovative pain-killing solutions with generative AI is very low, but in most cases, incumbents have a clear data advantage. It is difficult to see a path for a startup to win in the GTM category unless it is building something much bigger than a point solution.

In the startup landscape, we’ve been most excited about startups that are either attacking a high-ROI, integrated workflow opportunity or setting themselves up to build a defensible data moat. Elaine Zelby, cofounder of Tofu and former GTM partner at Signalfire, summarized the opportunity with workflows: “In 5-10 years, humans will be solely focused on the creative (branding, messaging/positioning, etc) and the strategic (differentiation, relationship building, channel strategy, etc), but everything else from campaign/workflow creation and execution to measurement, optimization, and expansion will be done by AI.” A Salesforce survey reiterates this conviction: seven in 10 marketers (71%) expect generative AI will help eliminate busy work and allow them to focus more on strategic work.

We regularly meet startups offering workflow innovation, but we’ve found it much more difficult to meet early-stage founders that have a clear path to a data advantage over incumbents. That said, these businesses certainly exist—Clay is a recent fan favorite in the ecosystem—and we have strong conviction that there are many parts of the GTM stack where generative AI could be used to ultimately develop a data moat.

So with all the challenges with GTM software/RevTech, why are we spending time here and which early-stage companies are we paying attention to?

Workflows to data advantages: Klaviyo

Companies like Gong, Outreach, SalesLoft, and more have built valuable workflows for GTM teams that also ultimately unlocked data moats/advantages. One other example we love is Klaviyo. Klaviyo’s early growth was spurred by its email service; they made it dead-simple for Shopify retailers to send targeted, automated emails at scale. The initial value proposition put Klaviyo on a breakout trajectory, growing from just 1,000 customers by the end of 2016 to over 5,000 by the end of 2017 and 12,000 by April 2019. By amassing a massive customer base and powering marketing workflows for all of them, Klaviyo ultimately built a marketing system of record and achieved its long-term vision of building a Customer Data Platform (“CDP”) on the back of Shopify, one that enabled high-ROI workflows beyond email, which in turn allowed Klaviyo to further expand its data moat, launch even more workflow automation, and so on.

Data advantages to workflows: 6sense

In the reverse direction, there is 6sense, which built an early wedge with a B2B buyer intent database that GTM teams would pay for to better understand which prospects were in buying cycles. With this data, 6sense’s customers were able to increase the reach and ROI of their top-of-funnel efforts without building workflow automation tools at the application layer. That said, 6sense did ultimately layer on a host of different workflow products ranging from display advertising for ABM to BDR email tools.

In either direction, data compounds over time

As Daniel Chesly of Work-Bench says, “The value of data is that it compounds over time. By leveraging historical data, insights and actions can be codified into the product and used as a moat. To create a data moat, the tool must own the atomic unit for which that company conducts business. For Gong, it’s sales calls, for Clari, it’s forecasting, for Dialpad it’s customer intelligence, for Zoominfo it’s prospecting information. Given the distribution moats for many growth-stage companies, emerging startups must attack an acute, but underserved pain point as their wedge. For startups, wedges are more important than moats. Wedges help you differentiate while moats help insulate you from competition.” We agree with Daniel’s take, and it’s relevant regardless of whether a company begins with the workflow component or the data itself.

Opportunities in the GTM stack

We like to think of the stack broadly in two sections:

Acquisition tools (aka Sales/Marketing wedge products that help GTM teams contact, nurture, and close new deals)

On the list of enterprise applications for generative AI, automating outbound sales is one of the most obvious, so there has been ample incumbent and startup activity in this space. Multiple companies in the most recent YC batch have launched products in this realm and incumbents have all said that they plan to release generative AI-based features in the near future. One company we admire in this space is Valley, which is building a product that automatically runs conversations between SDRs and prospects. Valley can look at a list of leads, write custom outreach, nurture conversations, and eventually set meetings for AEs. Users can drop a lead list link from LinkedIn Sales Navigator, upload information on their product, and then connect an AE’s calendar. From there, Valley runs the outreach and scheduling process on its own. This product is a wedge to Valley’s broader vision of building a sales and marketing data lake. The company believes that its wedge product will allow it to capture and store customer data in a vector database, identify patterns in that data at scale, and distribute insights on that data to their customers.

Leveraging AI for marketing automation is another common use case, but also a category that is ripe with point solutions. Tofu is an exciting new solution building a broader platform play by using generative AI to eliminate manual content creation for enterprise marketing. But in addition to simply offering a painkiller campaign creation tool, Tofu built its capabilities directly into the Marketo workflow, allowing it to automatically update content across each channel based on which campaigns and iterations perform best. This approach sets Tofu up well to build a proprietary database of how and where certain leads respond to certain content; at scale, it should have a large database of lead behavior, which should make its products more valuable to marketing teams—a great example of the workflow-to-data-advantage approach.

Retention tools (aka Customer Success wedge products that help existing business teams retain and grow their installed base of customers)

For this space, we bet on Lantern in 2022. To the naked eye Lantern may look like a customer success platform (similar to a Gainsight or Catalyst), but the core business is actually a CDP for B2B. Lantern built a novel approach to data ingestion that has allowed for an unparalleled single view of the B2B customer (a major barrier to adoption for the incumbents in the CS category). With this data intact, Lantern can offer a host of its own recipes and plays based on that customer data, and its early wedge is focused on customer success and account management teams (e.g. identifying expansion opportunities). Lantern is an example of the reverse approach of focusing on data to unlock workflow advantages.

Another trend that has us excited in the post-sales world is the power of LLMs focused on customer data. We’ve recently looked at businesses building LLMs to make it easier to action unstructured customer data—customer calls, tickets—for a variety of different use cases.

The more pervasive a software application is in an organization, the better the company is positioned to drive upsell and growth from its installed base of customers. So needless to say, regardless of workflows or data moats, the most powerful GTM tools will aid and abet more than one GTM team in an organization.

Finally, we believe that regardless of the initial direction—workflow or data—there is a big opportunity for GTM superapps: “Rippling for GTM,” if you will. This has been validated by several of the GTM stack incumbents (e.g. Outreach) already layering on new capabilities, and regardless of a startup’s initial wedge, we expect to see much more of this use case aggregation. The winners will be the businesses who build early momentum by picking the most compelling use cases for their wedge products.