AI Risks in Software Investing
A major contemporary challenge in private equity: artificial intelligence threatens to disrupt many established software verticals. This risk narrows the investable universe and requires firms to learn and adapt constantly.
First Mentioned
10/16/2025, 5:08:50 AM
Last Updated
10/16/2025, 5:10:35 AM
Research Retrieved
10/16/2025, 5:10:35 AM
Summary
The AI boom, a period of rapid advances in generative AI and related scientific breakthroughs that began in the late 2010s and gained international prominence in the 2020s, presents significant risks for software investing. Orlando Bravo, founder of the private equity firm Thoma Bravo, has identified these "AI Risks in Software Investing" as a major headwind threatening to disrupt the entire SaaS industry. While Thoma Bravo has built its success on a focused strategy of acquiring and scaling software companies, the disruptive potential of AI, including security vulnerabilities in AI-generated code, ethical challenges, and the risk of market manipulation, poses a considerable challenge to the continued success and valuation of software investments.
Referenced in 1 Document
Research Data
Extracted Attributes
Investor Risks
Suboptimal products/services, flawed investment decisions due to inaccurate/unreliable AI responses ('hallucinating'), exposure to undue risks based on made-up content or flawed analysis
Acknowledged by
Orlando Bravo, founder of Thoma Bravo
Associated with
AI boom, generative AI, large language models, AI image generators, scientific advances (e.g., protein folding prediction)
Nature of Risks
Significant headwind for software investing, disruptive potential for the SaaS industry
Affected Industry
Software Investing, SaaS industry
Malicious Use Examples
Market manipulation, spreading disinformation (deep fakes), data poisoning, phishing, identity theft, account takeovers
General Risk Categories
Complexity, opacity, unreliability, bias, conflicts of interest, data insecurity, operational failures, malicious use, security vulnerabilities, ethical and privacy challenges, vendor lock-in
Software Development Risks
Vulnerabilities or malicious code introduction, lax security controls, unauthorized language model deployment, potential for legal/copyright issues (e.g., licensed code entering product)
Timeline
Late 2010s
- The AI boom began. (Source: Wikipedia)
2020s
- The AI boom gained international prominence. (Source: Wikipedia)
Undated (recent)
- Orlando Bravo acknowledged AI risks as significant headwinds for software investing and the SaaS industry during the All-In Podcast. (Source: Related Documents)
2025
- ChatGPT is the 5th most visited website globally, indicating the widespread impact of generative AI. (Source: Wikipedia)
Wikipedia
AI boom
The AI boom is an ongoing period of technological progress in the field of artificial intelligence (AI) that started in the late 2010s before gaining international prominence in the 2020s. Examples include generative AI technologies, such as large language models and AI image generators by companies like OpenAI, as well as scientific advances, such as protein folding prediction led by Google DeepMind. This period is sometimes referred to as an AI spring, to contrast it with previous AI winters. As of 2025, ChatGPT is the 5th most visited website globally behind Google, YouTube, Facebook, and Instagram.
Web Search Results
- [PDF] Opportunities and Risks of Artificial Intelligence in Investment Markets
into a frustrating “doom loop” of repetitive and unhelpful responses. Flawed responses could also harm investors by causing them to make investment decisions that are not in their best interest. Another major risk with customer-facing AI applications is the possibility of the AI “hallucinating” – producing inaccurate or unreliable responses. If investors rely on those inaccurate or unreliable responses, investors may be exposed to undue risks based on made-up content or flawed analysis. [...] engage a market participant that would outsource critical functions to a third-party service provider without effective oversight over that service provider. Securities regulations should espouse that principle. Conclusion While AI has the potential to deliver significant benefits to investors, it also poses significant risks. If complexity, opacity, unreliability, bias, conflicts of interest, or data insecurity infect AI applications, investors could receive suboptimal products and services, [...] cause operational failures of many users of the service provider. Bad actors’ malicious use of AI Bad actors can exploit AI technology in investment markets through various malicious activities, including market manipulation, spreading disinformation using “deep fakes,” and engaging in “data poisoning” to corrupt AI models. Bad actors may also use AI to engage in phishing, identity theft, and account takeovers. Malicious use of AI in investment markets undermines market integrity,
- Understanding AI Risk in Software Development
Much of the AI risk in software development stems from a visibility gap. We often discover that security teams first don’t know where AI is in use, and then find out it’s used in a location that isn’t configured securely. [...] One example: The report reveals that, on average, 17 percent of repositories within organizations have developers using AI tools without proper branch protection or code review processes in place. This toxic combination of AI usage and lax security controls creates an environment ripe for introducing vulnerabilities or malicious code into production systems. [...] In many cases, we found that enterprises are combining the use of these models with other risks, amplifying the vulnerability. For example, we often find developers using AI and generating code on a repository that doesn’t have a code review step. This could, for instance, allow for licensed code to enter the product, exposing the organization to legal or copyright issues.
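The visibility gap described above lends itself to simple automated auditing: flag any repository where AI coding tools are in use but branch protection or mandatory code review is missing. A minimal sketch, using hypothetical hard-coded repository metadata (a real audit would pull these flags from the GitHub or GitLab API):

```python
# Hypothetical repository metadata; in practice this would come from
# the source-control platform's API, not a hard-coded list.
repos = [
    {"name": "billing-service", "ai_tools_used": True,  "branch_protection": True,  "requires_review": True},
    {"name": "ml-experiments",  "ai_tools_used": True,  "branch_protection": False, "requires_review": False},
    {"name": "legacy-portal",   "ai_tools_used": False, "branch_protection": False, "requires_review": False},
]

def flag_risky(repos):
    """Return names of repos combining AI-assisted development with lax controls."""
    return [
        r["name"]
        for r in repos
        if r["ai_tools_used"] and not (r["branch_protection"] and r["requires_review"])
    ]

print(flag_risky(repos))  # ['ml-experiments']
```

Repositories without AI usage are ignored here; the report's point is the *combination* of AI-generated code and missing review, not either factor alone.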
- The Risks and Opportunities of Investing in AI
These companies not only provide the hardware and software backbone for AI development but also benefit from massive data ecosystems and global scale. Their central role in AI has made them cornerstones of performance for AI-focused strategies. ## The Allocation Dilemma in AI Portfolios [...] Our analysis reveals a clear trend: The Magnificent Seven dominate AI and Big Data fund holdings. Nvidia, in particular, appears in nearly 90% of AI and Big Data fund portfolios, thanks to its market leadership in AI chips. All the other names in the Magnificent Seven (Microsoft, Amazon.com, Google, Meta, Apple, and Tesla) also contribute uniquely to the commercialization of AI—from cloud computing to data monetization and advanced robotics. [...] Investors should be mindful that while the AI theme offers tremendous growth potential, it also carries elevated risks. Concentration in a few dominant names, exposure to rapidly evolving technologies, and valuation concerns are all factors to consider when allocating to this space.
- AI Security Risks Uncovered: What You Must Know in 2025
### 2.4 Using an Unauthorized Language Model to Develop Software The deployment of unauthorized language models in software development introduces substantial security risks of artificial intelligence. When developers use unverified or compromised AI models, they risk incorporating vulnerabilities or backdoors into their applications. These security gaps can remain undetected for extended periods, creating potential entry points for cyberattacks. ### 2.5 Ethical and Privacy Challenges [...] ### What are 3 dangers of AI? The three most critical security risks of AI that organizations need to address are: 1. Advanced Cyber Attacks: AI-powered tools can automate and enhance traditional attack methods 2. Privacy Breaches: AI systems may inadvertently expose sensitive data through processing or storage 3. System Manipulation: Adversaries can compromise AI models through targeted attacks and data poisoning ### What is the biggest risk from AI? [...] One of the critical security risks of AI involves the manipulation of AI systems through adversarial attacks and data poisoning. Attackers can subtly alter input data to confuse AI models, causing them to make incorrect decisions. For instance, slight modifications to traffic signs could mislead autonomous vehicles, while corrupted training data might compromise facial recognition systems. These attacks are particularly concerning because they can be difficult to detect until significant damage
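The data-poisoning risk above can be illustrated with a toy, self-contained sketch: a nearest-centroid classifier trained on a small 1-D dataset, where an attacker flips the labels of a few training points and drags the class-0 centroid toward the class-1 cluster. The dataset and model are invented for illustration, not drawn from any cited report.

```python
import statistics

# Toy 1-D dataset: class 0 clusters near 0, class 1 clusters near 10.
clean = [(0.5, 0), (1.0, 0), (1.5, 0), (9.0, 1), (9.5, 1), (10.0, 1)]

def centroids(data):
    """Compute the per-class mean of (value, label) samples."""
    by_label = {}
    for x, y in data:
        by_label.setdefault(y, []).append(x)
    return {y: statistics.mean(xs) for y, xs in by_label.items()}

def predict(x, cents):
    """Assign x to the class with the nearest centroid."""
    return min(cents, key=lambda y: abs(x - cents[y]))

# Poisoning: the attacker relabels two class-1 points as class 0,
# pulling the class-0 centroid toward the class-1 cluster.
poisoned = clean[:3] + [(9.0, 0), (9.5, 0), (10.0, 1)]

print(predict(6.0, centroids(clean)))     # 1 — correct on clean data
print(predict(6.0, centroids(poisoned)))  # 0 — flipped by poisoned labels
```

The same borderline input is classified correctly under the clean training set and incorrectly after only two labels are corrupted, which is why poisoning can be hard to detect until a model is already misbehaving in production.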
- Managing gen AI risks | Deloitte Insights
To provide a clearer understanding of the intersectional nature of these threats and the areas they impact, we can organize these gen AI risks into four distinct categories: risks to the enterprise, which include threats to organizational operations and data; risks to gen AI capabilities, which include the potential for AI systems to malfunction or their vulnerabilities to be misused; risks from adversarial AI, which include threats posed by malicious actors leveraging gen AI; and risks from [...] Security and employee risk across development processes. As gen AI is introduced into development processes, new security risks emerge. A Palo Alto Networks report found that AI-generated code was the top concern for surveyed security and information technology leaders. The Deloitte leaders interviewed for this article expressed concerns that AI might inadvertently amplify existing code-level vulnerabilities such as misconfigured code, increasing the risk of data breaches, malware infections, [...] Limited application flexibility caused by vendor lock-in. Gen AI models and infrastructure are advancing faster than organizations can keep pace, introducing a risk that leaders could be paying for obsolete or duplicative capabilities or partnering with vendors that don’t ensure their products interoperate easily with other future technologies.