The prospect of huge returns is encouraging private equity professionals to rush headlong into AI applications. But real challenges remain, and the penalty for mistakes could be substantial.

Here is an excerpt from a report published in JDSupra:

According to a survey conducted by Intertrust, 90% of private equity firms expect AI to have a transformative impact on the industry. AI-backed data analytics are playing a growing role in analysing and identifying deals. QuantCube Technology, for example, provides in-depth data analysis, drawing on customer reviews and social media posts to develop predictive indicators of events, such as economic growth or price changes. There are now companies offering AI-driven technologies that claim to help source PE deals. While this presents a potentially compelling use of AI for investors, it remains to be seen whether these technologies will deliver results.

AI technologies used to speed up legal due diligence can also be applied to commercial and financial diligence and other aspects of the deal process, bringing time and cost efficiencies to investments. Further, AI has portfolio company applications — from the financial sector to consumer and retail, AI is driving back-office efficiencies in HR, IT support, cybersecurity, and data aggregation, resulting in cost savings and quicker decision-making. AI can also improve front-office functions, and is increasingly used to analyse and predict customer trends.

However, the introduction of AI tools widens the scope for unexpected outcomes. Firms and portfolio companies must understand what products actually do in practice — a task that can be difficult when software is unproven or self-learning, as developers may not yet fully understand its capabilities and may be hesitant to offer guarantees. Firms should also consider how their business will contract for this technology: whether by acquisition, licence, joint venture or otherwise, each method carries specific risks. A key question is who owns the models and the resulting algorithms. As systems are made “smarter” through training, firms must understand whether that learning is shared, and what the commercial and legal implications could be.