AI in Private Equity: Deconstructing the Pitch

In the world of Private Equity, an Artificial Intelligence strategy has become the new mandatory slide in any fundraising deck. LPs now expect to see a plan to leverage AI, and firms have responded with a flurry of announcements, proprietary platform names, and high-profile hires. The message is clear: we are AI-native, data-driven, and positioned for the future.

But for investors and industry observers, the critical task is to look past the marketing gloss and dissect the underlying substance. Not all AI strategies are created equal. Some are transformative, while others are merely defensive plays or, worse, hollow buzzwords. The current frenzy around generative AI, in particular, often obscures the fact that older, more reliable machine learning models may be better suited for many core PE tasks.

This article provides a critical framework to analyze the AI strategies of PE firms, illustrated with examples from the European market, with which I am most familiar. The goal is to move beyond accepting claims at face value and to offer the tools to question the hype, evaluate various approaches, and identify the firms that are building a genuine competitive edge.

Deal Sourcing and Due Diligence: The "AI-Powered Edge"

Firms showcase proprietary platforms that analyze vast datasets to unearth hidden gems before competitors. EQT has pioneered this idea with its “Motherbrain” platform, credited with sourcing 15 investments since 2016, including three unicorns. In the mid-market, Inflexion’s Tearsheet tool uses AI to pre-screen 40% of its opportunities, aiming to focus human effort where it matters most. For due diligence, the promise is speed and depth, with firms like the "AI-native" Ardabelle reporting a 30-40% faster production of investment memos.

  • Signal vs. Noise: An AI that generates thousands of potential leads is not an edge; it's a data-management problem. The real value lies in the quality and conversion rate of those leads. How many deals sourced by AI actually reach the investment committee and close? Deep Research agents* definitely help to cover more ground, but how does this tsunami of information affect the team's focus?
  • Proprietary or Off-the-Shelf? The term "proprietary platform" needs scrutiny. Is the firm building genuinely unique models based on exclusive data, or is it simply white-labeling commercial software and connecting it to public APIs? Any claim that a firm trains an "internal ChatGPT" must be met with extreme skepticism. Even assuming a firm has enough data to meaningfully modify the behavior of a model (see the next bullet point), "training" on internal documents is an odd strategy compared to retrieving relevant data and passing it to a powerful model at inference time. Beyond the technical complexity of training a generative AI model, such an approach is also harder to adapt to changing market trends, because past data is fully baked into the model's weights. Inflexion and Ardian (with its GAIA platform) go down the retrieval route, feeding third-party frontier models with their own historical deal documents, expert call transcripts, data from external subscriptions, internet search results, and so on. PE firms do not face real budget constraints: they can build bespoke search engines around their own indexing strategies and embed documents in massive vector databases, so it would be strange not to. However, if an AI engine relies only on embeddings search, without an old-school full-text search engine, that can be a sign that the AI strategy is more performative than substantive (a minimal sketch of such a hybrid setup follows this list).
  • The Data Moat: AI solutions are only as good as the data they use. Whether they choose to train models or to retrieve information from a knowledge base, firms with a long history of meticulously organized, proprietary data on past deals, market performance, and due diligence findings have a significant advantage in building effective AI tools. But data bloat can also be an issue. Newcomers in the space argue that they are better off starting from scratch, as they don't have to spend years cleaning old databases, re-indexing information, and so on. There is some truth to this, but a firm with no track record of data-intensive workflows will struggle to convince. For prospective LPs, assessing the quality and relevance of a firm's data assets is extremely hard.
  • The Right Tool for the Job: Generative AI is notoriously prone to "hallucinations". Vision language models (VLMs) are particularly unreliable for this reason; the technology is just not good enough (yet) for large-scale production use, so take any claims of using VLMs with a grain of salt. For large language models (LLMs), the situation has improved compared to two years ago, but there will always be some hallucinations* given how these models work. They are also exposed to adversarial attacks such as "prompt injections". For investment firms, this is a real problem: the issuer of a due diligence pack can steer the output of a generative model by inserting text that is invisible to human readers. Additionally, these models cannot be trusted with numerical content, so they are absolutely not the right tools for the rigorous financial modeling that remains at the heart of due diligence. While generative AI can accelerate report writing and data summarization*, using it for quantitative analysis is a high-risk endeavor.
  • Risk Appetite: The mandate of a fund should reflect a delicate balance of risk and return. Different firms have different risk appetites, and even within a firm, different strategies and funds can require completely different approaches to risk. However, even the best generative AI models struggle to take into account the subtle nuances of risk appetite. They are not trained for that, and they often exhibit a positive bias (they are bullish on basically anything). This can be addressed via the prompt, but instructions must be tuned for each mandate. An investment firm that relies too heavily on AI for its investment decisions is an investment firm that overlooks the risk dimension of its decision-making.
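To make the retrieval point concrete, here is a minimal sketch of the hybrid search step described above: candidate documents are scored both by an old-school full-text signal and by a similarity signal, then ranked. Everything in it is illustrative; the document snippets, the scoring weight, and the placeholder embed function stand in for a real index, a real embedding model, and a real vector database.

```python
# Minimal sketch of hybrid retrieval over an internal deal-document index.
# The embed() function is a stand-in for a real embedding model; in practice
# it would call a hosted model and the index would live in a vector database.
import math
import re
from collections import Counter

DOCS = {
    "memo-2021-telco": "Fibre roll-out capex, churn assumptions and EBITDA bridge ...",
    "expert-call-healthcare": "Reimbursement risk discussed with former payer executive ...",
    "dd-pack-saas": "Net revenue retention 118 percent, gross margin 79 percent ...",
}

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z0-9]+", text.lower())

def keyword_score(query: str, doc: str) -> float:
    """Old-school full-text signal: fraction of query terms present in the document."""
    q, d = set(tokenize(query)), set(tokenize(doc))
    return len(q & d) / max(len(q), 1)

def embed(text: str) -> Counter:
    """Placeholder 'embedding': a bag-of-words vector, NOT a real dense embedding."""
    return Counter(tokenize(text))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_search(query: str, alpha: float = 0.5) -> list[tuple[str, float]]:
    """Blend the keyword and similarity scores; alpha is an illustrative weight."""
    qv = embed(query)
    scored = [
        (doc_id, alpha * keyword_score(query, text) + (1 - alpha) * cosine(qv, embed(text)))
        for doc_id, text in DOCS.items()
    ]
    return sorted(scored, key=lambda x: x[1], reverse=True)

if __name__ == "__main__":
    for doc_id, score in hybrid_search("net revenue retention in SaaS deals"):
        print(f"{score:.3f}  {doc_id}")
```

The toy scoring is beside the point; what matters is the architecture: retrieved passages are handed to a frontier model at inference time rather than baked into its weights.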

Portfolio Value Creation: The True Test

In 2024, for example, we achieved generative AI-related incremental revenues of €290 million across our portfolio, and that number will continue to grow as we embed more AI and generative AI into our companies' processes. That's exactly the sort of thing LPs need to hear.

We are also seeing significant productivity gains in customer support, where we have delivered real efficiencies across the portfolio. Costs associated with white-collar work amount to €300 billion across the portfolio. Even a small, 5-10 percent productivity gain through effective AI adoption will generate huge value in the years to come.

Riccardo Basile (Permira) in Private Equity International, September 1st, 2025

PE firms promise that AI will deliver tangible ROI. Most of the time, the strategy involves deploying AI specialists to drive operational improvements within portfolio companies. Permira reports that 30 percent of its funds’ portfolio companies are deploying agents in use cases such as customer support, data extraction and processing, and service delivery. CVC has strategically applied AI across more than 120 portfolio companies to identify and prioritize AI-driven opportunities. Hg Capital is another powerful case study, with its Retina platform and 250 AI projects across its portfolio, touting a 12% productivity increase from a GenAI chatbot pilot in one case and projecting over £1 million in annual benefits.

  • The Attribution Problem: It is rarely clear how AI-driven value is measured and attributed. A headline figure such as €290 million of generative AI-related incremental revenue begs an obvious question: how is that revenue isolated from pricing changes, sales execution, or plain market growth?
  • Scalability vs. One-Hit Wonders: A successful AI implementation at one tech-forward portfolio company does not guarantee a repeatable playbook for a diverse portfolio of mid-market businesses. Is the firm building a scalable value creation model, or is it cherry-picking its best success stories for marketing? Systematic and scalable approaches seem more robust, as with BC Partners' four-pillar AI strategy, which includes mapping AI threats and opportunities sector by sector.
  • Investing for Today or Tomorrow? Much of the current AI technology is not yet mature. Investments made today are effectively "training" for the people and processes that will be needed to implement the next, more powerful generation of AI. A credible strategy should acknowledge this, framing current initiatives as building long-term organizational capabilities, not just chasing immediate, short-term productivity gains that may never materialize within a typical 3-5 year holding period.
  • "AI-Washing" Through Hires: Hiring a "Head of AI" is a popular move. But is this person a true strategic leader with a team and a budget, or a "Digital Evangelist" (as seen at IK Investment Partners) tasked with persuasion more than implementation? IK's approach is not necessarily bad, as long as it is combined with an Operations Team with digital/AI experts supporting portfolio companies (which IK has) to ensure that the expertise is close to the assets. Supporting portfolio companies with in-house experts is also part of Permira's approach. Ardian does this too for its Infrastructure investments.

Investing Directly in AI Companies

The most direct way to play the AI trend is to invest in it. This strategy involves backing companies whose core business is AI. Eurazeo’s Growth Fund IV, with a stated mission "to scale up European AI champions," is a prime example. Here the strategy looks closer to venture capital than to Private Equity. Growth funds have always sat in between, but AI growth carries a fundamentally different risk profile. However, many PE funds that focus on tech or tech-led businesses are likely skewed towards AI these days, given current market trends.

  • Valuation and Hype Cycle Risk: Is the firm buying into the AI sector at the peak of a valuation bubble? The pressure to deploy capital into the hottest trend can lead to undisciplined investment decisions.
  • Technical Due Diligence: How much deep technical expertise is needed? Who truly possesses the expertise required to vet the technology and competitive moat of a cutting-edge AI company? Are firms simply relying on consultants and betting on market momentum? A common issue in the sector is that the innovation/disruption cycle is extremely short. In fact, many of the go-to AI solutions from two years ago, like standalone vector databases, are no longer seen as a panacea, and the relevance of embeddings search is even being questioned. AI models have a life cycle of about six months. For deep AI tech, value lies in people; there are virtually no assets. In that context, making investment decisions is very complex. Investing in AI-adjacent segments, like energy infrastructure that can be repurposed if the bubble bursts, seems a much more sensible approach.

The "low-hanging fruit": Streamlining Internal Operations

Firms are using AI to enhance their own efficiency. An obvious example is the automation of RFP-answering during fundraising: RFPs essentially contain the same questions, ordered differently and with minor variations. Given access to the relevant information, a powerful LLM can deliver a first draft in minutes where several employees might take days; the use case is effectively the same as producing first drafts of investment memos*. Many firms are engaged in such initiatives, though these are not necessarily advertised, as they don't enhance LP returns in a headline-grabbing way. For example, 3i Group developed an internal AI toolbox for document summarization and data extraction, reporting that it "streamlined our operations and boosted efficiency." A sketch of what such an RFP pipeline might look like follows below.
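The sketch is purely illustrative and does not describe any particular firm's tooling: it assumes a hypothetical store of previously approved RFP/DDQ answers and a placeholder call_llm function standing in for whichever model provider is used. The two design choices that matter are that the model is only shown approved excerpts, and that every output is flagged for human review.

```python
# Sketch of an RFP first-draft pipeline: every answer is marked as a draft
# and must be reviewed by a human before it goes anywhere near an LP.
# call_llm is a placeholder to be wired to whichever model provider is used.
from dataclasses import dataclass

@dataclass
class DraftAnswer:
    question: str
    draft: str
    sources: list[str]
    status: str = "NEEDS_HUMAN_REVIEW"

# Hypothetical library of previously approved responses, keyed by topic.
APPROVED_ANSWERS = {
    "esg policy": ("Our ESG policy was last updated in ...", "RFP-2024-Q17"),
    "valuation methodology": ("Portfolio companies are valued quarterly using ...", "DDQ-2023-Q42"),
}

def retrieve_approved_answers(question: str, top_k: int = 2) -> list[tuple[str, str]]:
    """Naive keyword lookup standing in for real retrieval over past RFP/DDQ responses."""
    q = question.lower()
    hits = [(text, ref) for key, (text, ref) in APPROVED_ANSWERS.items()
            if any(word in q for word in key.split())]
    return hits[:top_k]

def call_llm(prompt: str) -> str:
    """Placeholder for a chat-completion call to a frontier model."""
    return "[draft answer generated from the supplied excerpts]"

def draft_rfp_answer(question: str) -> DraftAnswer:
    context = retrieve_approved_answers(question)
    prompt = (
        "Draft an answer to the RFP question below using ONLY the excerpts provided. "
        "Do not invent figures.\n\n"
        f"Question: {question}\n\nExcerpts:\n"
        + "\n".join(f"- ({ref}) {text}" for text, ref in context)
    )
    return DraftAnswer(question, call_llm(prompt), [ref for _, ref in context])

if __name__ == "__main__":
    answer = draft_rfp_answer("Describe your ESG policy and how it is applied in due diligence.")
    print(answer.status, "|", answer.sources, "|", answer.draft)
```

In practice, the naive keyword lookup would be replaced by proper retrieval over the firm's past responses, and the review step would sit with the investor relations and compliance teams.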

  • Implementation Challenges: Implementing firm-wide AI solutions is a lengthy, complex endeavor that is often disruptive. Whatever the buzzwords used by IT teams ("agile development" and the like), the larger the firm, the harder it gets. One problem is the oversimplified view that management teams and consultants have of the tasks to be automated. The devil is always in the details, and business people are so used to handling edge cases without formal processes that they may overlook critical nuances. The result is often a mismatch between expectations and reality, leading to frustration and disengagement: IT says the specifications were not good enough; the business says IT is not good enough. Such conflicts arise when trying to automate the wrong tasks. An additional challenge with AI is the fast pace of technological change mentioned above, which can render solutions obsolete before they are even implemented.
  • Impact on Returns: While operational efficiency is positive, it does not materially impact investment returns. These gains are often marginal and should be viewed as sensible business modernization rather than a transformative investment strategy. So what's in it for investors? At best, they can expect the firm to attract and retain better talent.
  • Data Security and Privacy: Using third-party models via APIs raises questions about data security and privacy, with implications for compliance with data-protection regulations. Training a generative AI model on a firm’s most sensitive internal documents (fundraising data, LP communications, deal pipelines) carries the same risks. Passing third-party documents to a language model integrated into a system with access to the open internet poses enormous risks of its own. A firm promoting an AI strategy must also be able to articulate its data governance and security protocols in detail; the sketch below illustrates the kind of basic controls one would expect.
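As a purely hypothetical illustration of such protocols, the sketch below shows two minimal pre-flight controls applied before any text leaves the firm's perimeter: redacting obvious identifiers, and refusing untrusted third-party documents when the downstream agent can browse or act (the prompt-injection scenario mentioned earlier). The regex pattern, the blocklist, and the policy flags are examples only, not a recommended implementation.

```python
# Illustrative pre-flight checks before text is sent to a third-party model:
# redact obvious identifiers and refuse untrusted documents when the downstream
# agent has internet or tool access. Patterns and policy values are examples only.
import re

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
LP_NAMES = {"Example Pension Fund", "Example Sovereign Wealth Fund"}  # illustrative blocklist

def redact(text: str) -> str:
    text = EMAIL.sub("[REDACTED_EMAIL]", text)
    for name in LP_NAMES:
        text = text.replace(name, "[REDACTED_LP]")
    return text

def prepare_for_external_model(text: str, *, trusted_source: bool, agent_has_tools: bool) -> str:
    if agent_has_tools and not trusted_source:
        # Untrusted third-party documents may contain hidden instructions (prompt injection);
        # do not feed them to an agent that can browse, send emails or execute actions.
        raise PermissionError("untrusted document blocked for tool-enabled agent")
    return redact(text)

if __name__ == "__main__":
    safe = prepare_for_external_model(
        "Contact: ir@example.com regarding Example Pension Fund commitment.",
        trusted_source=True,
        agent_has_tools=False,
    )
    print(safe)
```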

Conclusion: A Framework for Critical Evaluation

An AI strategy is no longer optional in Private Equity. However, the onus is on investors and observers to dissect the narrative. A truly compelling AI strategy is not defined by flashy announcements, but by a coherent, pragmatic, and measurable plan. When evaluating a firm’s pitch, consider the following:

  • Differentiate the AI: Do they distinguish between generative AI for qualitative tasks and predictive AI/ML for quantitative analysis? Is the right tool being used for the right job?
  • Scrutinize "Proprietary" Claims: Is their tech built on a foundation of unique, proprietary data, or is it a thin wrapper over commercial tools?
  • Demand Measurable, Attributable Outcomes: Ask for the methodology behind value creation claims. How is success measured, and how is it isolated from other factors?
  • Assess the Human Capital: Look for deep, integrated teams rather than a single figurehead. How is AI expertise being embedded across the firm and portfolio?
  • Question the Risk Alignment: How does the firm govern its AI tools? How do they ensure outputs are explainable, reliable, and genuinely aligned with the fund's specific risk appetite?
  • Look for Pragmatism over Hype: The most credible firms acknowledge the limitations and long-term nature of AI adoption, presenting it as a journey in building capabilities, not a magic bullet for instant returns.

The firms that will win in this new era are not necessarily the loudest marketers, but those who are quietly and systematically integrating AI into the fabric of their operations, from sourcing deals to transforming businesses, and can prove its value with data, not just words. What's peculiar about Private Equity is not that it involves a lot of money, but that it involves a lot of ego. And egos drive bureaucracy. Intuitively, this seems to contradict the very idea of streamlining operations using AI. But who knows? I continue to follow the sector because I want to see how it plays out.


* Market research for this article leveraged OpenAI's Deep Research tool in ChatGPT (read entire report) and Google's search grounding in AI Studio (no sharing feature in AI Studio). Drafting took three turns of prompting with Google's Gemini 2.5 Pro, after explaining the context, the thesis, the objectives and my own insights, and providing the sources and reports fetched in the previous step. Manual adjustments were necessary; 100% of the paragraphs were changed. Opening all the links and reading the sources is critical, as some details can be hallucinated. For example, the first draft of the article claimed that Inflexion "trained" a model for its Tearsheet tool, which is not accurate. In total, it took about 2-3 hours. Given my personal experience as an investment professional and my extensive use of AI tools since I left the finance world, I consider that, to get a good first draft of any document (which will absolutely need to be extensively reviewed and corrected), 5 minutes of frontier LLM work is equivalent to a day of intern/junior work.
