
Gone in nine seconds… that’s how long it took AI to independently decide to delete a company’s entire database.

PocketOS, a provider of software for car rental businesses, recently had its entire production database deleted by Anthropic’s Claude AI, leaving its customers unable to access their data.

So, what could this major AI failure mean for private equity?

Private equity isn’t backward in coming forward. The industry is known for its willingness to adopt innovative technologies, ways of thinking, and ways of doing business - and at speed. In a world where AI use is proliferating, PE firms need to move fast so as not to get left behind.

PE is embracing AI technology and is encouraging investors to do the same. In some cases, though, adoption is being done hastily - without formal structure - and this is having significant consequences for businesses. While the question of AI’s return on investment is still being worked through, there are early indications that failing to adequately manage the risks associated with the technology could present significant business risk for both PE firms and their investment portfolios.

It’s vital that PE firms classify AI risk

Starting with the theoretical, there’s a useful way of classifying risks into a Taxonomy of AI Risks (adapted from Weidinger, 2021): 

  1. Discrimination and Toxicity 
  2. Privacy and Security 
  3. Misinformation
  4. Malicious Actors & Misuse
  5. Human-Computer Interaction
  6. Socioeconomic and Environmental
  7. AI System Safety, Failures, and Limitations

Various incidents over the last few weeks should have set alarm bells ringing for investors, and encouraged PE firms to change the way they oversee investments - and this type of risk classification is an important starting point for dealmakers.

Case study one: Removal of access

We’ve recently seen one organisation have its corporate subscription cut off, with the loss of access to business-critical tooling triggered by the actions of a single individual within that organisation.

The company in question stated that there were limited ways for it to appeal against the decision of Anthropic, whose tooling was central to its business processes and workflows. Such events are now more likely to grow in frequency, driven not by policy violations but by geopolitics and increasing tension between regional powers. Statements by major political figures continue to highlight the need to enhance national resilience and interests, with technology firms increasingly seen as an extension of the state apparatus.

Lesson 1: AI tooling can be as important to a business as any critical supplier, and growing instability and a greater focus on AI sovereignty are exacerbating this risk. Multi-vendor strategies and dependency-mapping for specific AI tooling must be put in place.
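In practice, dependency-mapping can start very simply: an inventory of which AI tools underpin which processes, and whether a fallback vendor exists. A minimal sketch in Python - all tool and vendor names here are hypothetical, and a real register would live in a GRC system rather than code:

```python
# Illustrative dependency map for AI tooling (all names hypothetical).
# Each entry records the vendor, how critical the tool is, and any fallbacks.
AI_DEPENDENCIES = {
    "code-assistant": {
        "vendor": "VendorA",
        "criticality": "high",    # embedded in business-critical workflows
        "fallback_vendors": [],   # no multi-vendor strategy yet
    },
    "document-summariser": {
        "vendor": "VendorB",
        "criticality": "medium",
        "fallback_vendors": ["VendorC"],
    },
}

def single_points_of_failure(deps: dict) -> list:
    """Return tools that are business-critical but have no fallback vendor."""
    return [
        name for name, info in deps.items()
        if info["criticality"] == "high" and not info["fallback_vendors"]
    ]

print(single_points_of_failure(AI_DEPENDENCIES))  # ['code-assistant']
```

Even a register this crude surfaces the question the first case study raises: if this vendor cut us off tomorrow, what would we do?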

Case study two: Claude-Alt-Delete

This week, the founder of PocketOS warned that an AI coding agent within the organisation had deleted its entire production database. The damage was amplified by misconfigured automated backups, which resulted in all volume-level backups being wiped as well. The result was the deletion of customer data critical to business operations. According to the founder, the whole process took nine seconds after the agent decided to “take its own initiative” to fix a problem it had come up against.

Lesson 2: The careful introduction of AI tooling - and of the permissions granted to it - is often overlooked. Organisations must treat any new introduction of AI tools as a significant change: it should be analysed carefully, and its risks managed by experts.
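One concrete control is least-privilege permissioning: an agent should be able to read freely, but destructive operations should require explicit human sign-off rather than being left to the agent’s “own initiative”. A minimal sketch of such a gate, under the assumption that every agent action passes through a single authorisation check (the action names are illustrative):

```python
# Minimal sketch of a least-privilege gate for an AI agent (hypothetical names).
# Reads are allowed; destructive operations are denied unless a human approves.
READ_ONLY_ACTIONS = {"select", "explain", "describe"}
DESTRUCTIVE_ACTIONS = {"drop", "delete", "truncate", "update"}

def authorise(action: str, human_approved: bool = False) -> bool:
    """Allow reads freely; allow destructive actions only with human sign-off."""
    action = action.lower()
    if action in READ_ONLY_ACTIONS:
        return True
    if action in DESTRUCTIVE_ACTIONS:
        return human_approved
    return False  # deny anything unrecognised by default

print(authorise("select"))                     # True
print(authorise("drop"))                       # False
print(authorise("drop", human_approved=True))  # True
```

The design choice that matters is deny-by-default: anything the policy does not recognise is refused, so a novel action invented by the agent cannot slip through.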

Case study three: Mythos

If you operate in the security world, the release of Mythos won’t have escaped you. The ability of this tool to conduct a series of complex tasks focused on vulnerability identification and exploitation places a huge onus on organisations to manage their external attack surface and - importantly - to patch, patch, patch.

In an environment where organisations are shifting their technical boundaries, identifying and securing external perimeters is a challenging and ongoing task. Regulators are also paying close attention to AI (partly driven by Mythos). Organisations must act now to ensure they’re not caught exposed when this increased regulatory oversight arrives.

Lesson 3: External perimeters must be a focus for PE firms; attack surface management must be treated as a source of leverage; and ongoing cybersecurity operations should be enhanced.
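“Patch, patch, patch” ultimately reduces to comparing what is running on the perimeter against the minimum versions named in vendor advisories. A simple sketch of that comparison - the service names, version numbers, and advisory data are entirely hypothetical, and real tooling would pull both sides from scanners and advisory feeds:

```python
# Illustrative patch-status check for externally exposed services.
# Versions are compared numerically component by component (e.g. 9.1.3 < 9.1.5).
def parse_version(v: str) -> tuple:
    """Turn '9.1.3' into (9, 1, 3) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

def unpatched(inventory: dict, advisories: dict) -> list:
    """Return exposed services running below the minimum patched version."""
    return [
        svc for svc, installed in inventory.items()
        if svc in advisories
        and parse_version(installed) < parse_version(advisories[svc])
    ]

inventory = {"vpn-gateway": "9.1.3", "mail-server": "2.4.7"}
advisories = {"vpn-gateway": "9.1.5"}  # minimum version fixing a known flaw

print(unpatched(inventory, advisories))  # ['vpn-gateway']
```

The point is not the code but the discipline: without a current inventory and a feed of advisories, an organisation cannot even ask whether its perimeter is patched.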

Conclusion: AI introduces significant risk to an asset’s valuation

These case studies illustrate that AI introduces significant risk to an asset’s valuation - whether driven by regulatory implications manifesting as fines and increased oversight, or by other forms of legal action and loss of IP.

As more cases of AI misuse appear in the press, there’ll be a much greater focus on managing the risks for organisations, and PE in particular. Private equity doesn’t operate in a vacuum. Limited partners place expectations upon GPs to address investment risk, and it’s expected that AI cybersecurity will now become a vital part of oversight and due diligence. 

Action must be taken now to manage AI risks and drive improvements across not only PE firms’ in-house processes and governance, but also their portfolios of investments.

To find out more about how to identify, monitor and mitigate cyber risks throughout the investment lifecycle, visit the Thomas Murray website.


Cybersecurity for Private Equity

Cyber attacks are becoming more intelligent than ever, and private equity firms require security partners who understand the complete investment lifecycle and can protect business value. Our experience working with 8 of the 10 largest private equity funds by AUM positions us as a trusted adviser delivering strategic cybersecurity services across portfolio companies and investment stages.
