Investor money constantly flows into the tech industry, especially into emerging AI companies. Recently, corporate investors have funded top startups in the AI space far more often than they have in other sectors; as of 2021, corporate investors accounted for 16 percent of all investment in the AI industry. Big investment banks have even created investment arms that specialize in these technologies, helping customers invest in companies focused on market infrastructure, information services, security software, mobile technology, big data analysis, payments, and more. Goldman Sachs’s Principal Strategic Investments Group, for example, serves customers worldwide.
According to Goldman’s 2024 investment commentary, the tech sector continues to show resilience despite macroeconomic concerns, driven by improving fundamentals, AI advancements, and easing rate expectations. In particular, as AI opportunities grow, companies under $100 billion in market cap have become increasingly optimistic. AI infrastructure has been a key return driver, and market enthusiasm for AI remains strong despite slower-than-expected adoption. Even so, as businesses sign and negotiate longer contracts to shield themselves from prospective AI risks, there is little certainty about which laws apply to these emerging technologies. Moreover, AI systems are not entirely accurate and remain prone to unpredictable errors, leaving open the question of whether it is possible to build AI models that comply with potential regulation at all.
One solution to potential AI noncompliance discussed in proposed legislation is what’s called “system shutdown”: a method of mitigating the harms that might follow from noncompliant AI being used to engage in illegal activity. California’s SB 1047, for example, floated the idea of requiring all large-scale, “high-risk” AI models to include a “kill switch” that would unilaterally shut down the model if it were found to pose “serious risks to humanity.”
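To make the mechanism concrete, the sketch below shows one way a “kill switch” of the kind SB 1047 contemplated might be wired into a model-serving loop. It is a minimal, hypothetical illustration only: the flag-file location and function names are assumptions invented for this example and are not drawn from the bill’s text.

```python
# Hypothetical sketch of a "kill switch" check in a model-serving loop.
# The flag path and function names are illustrative assumptions, not
# requirements taken from SB 1047 or any other statute.
import os
import time

SHUTDOWN_FLAG_PATH = "/etc/ai-service/full_shutdown"  # assumed flag location

def shutdown_requested() -> bool:
    """Return True once an operator has triggered a full shutdown."""
    return os.path.exists(SHUTDOWN_FLAG_PATH)

def serve_requests(next_request, run_model, send_response):
    """Serve inference requests until a shutdown is requested."""
    while not shutdown_requested():
        request = next_request()      # e.g., pull from a queue (assumed)
        if request is None:
            time.sleep(0.1)           # idle briefly when no work is queued
            continue
        send_response(request, run_model(request))
    # Once the flag appears, the loop exits and the model stops serving.
    print("Full shutdown flag detected; model serving halted.")
```

In practice, the hard questions are less about writing such a check than about who controls the flag, when it must be triggered, and what counts as a qualifying risk.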
Governor Newsom ultimately vetoed SB 1047. As the bill was being debated, Silicon Valley companies (including OpenAI) argued that the controversial bill would stifle innovation. It is unclear whether this push stemmed from genuine doubts about the effectiveness of “system shutdown” solutions or from these companies’ broader fears of business shutdown. Noncompliance with a system shutdown policy could certainly lead to (i) breach of covenants with investors and (ii) prolonged legal battles between investors and founders. For emerging AI companies, however, system shutdowns could also fatally disrupt day-to-day operations and growth. Generally, AI models are developed either (i) with some reliance on third-party platforms such as OpenAI’s API or (ii) entirely in-house. Hypothetically, a shutdown affecting a key provider like OpenAI could cripple the companies that depend on it, while regulatory action could derail companies that ignored compliance from the start. In either situation, the compliance risks are significant.
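The operational exposure described above can also be sketched in code: a hypothetical wrapper that prefers a third-party provider but degrades to an in-house fallback if that provider is shut down or unreachable. The function names (call_third_party_model, call_in_house_model) are placeholders invented for this illustration and do not refer to any real integration.

```python
# Hypothetical illustration of the dependency risk discussed above: an
# application that prefers a third-party model but falls back to an
# in-house one if the provider is shut down or unreachable.
# All names here are placeholders, not real API calls.

class ProviderUnavailableError(Exception):
    """Raised when the third-party provider cannot serve the request."""

def call_third_party_model(prompt: str) -> str:
    # Stand-in for a real hosted-provider API call.
    raise ProviderUnavailableError("provider shut down or unreachable")

def call_in_house_model(prompt: str) -> str:
    # Stand-in for a smaller model the company runs itself.
    return f"[in-house fallback] {prompt}"

def generate(prompt: str) -> str:
    """Try the third-party provider first; fall back if it is unavailable."""
    try:
        return call_third_party_model(prompt)
    except ProviderUnavailableError:
        # A provider-level shutdown degrades the service rather than
        # taking the dependent company offline entirely.
        return call_in_house_model(prompt)

print(generate("Summarize this indemnification clause."))
```

Even with such a fallback, a mandated shutdown of the fallback model itself would leave the company with no technical workaround, which is precisely the risk investors are weighing.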
That said, alternative solutions to potentially risky AI face similar dilemmas. The EU AI Act provides for the “withdrawal of non-compliant AI systems from the market,” which raises investor concerns about whether a company’s approach to the complex issue of AI product compliance could lead to the loss of their investment entirely. The Colorado AI Act creates strict rules and standards for developers and deployers of high-risk AI systems, adding responsibility for AI model developers and giving investors a potentially critical due diligence question to answer. This approach runs into the same barrier as California’s vetoed SB 1047: Can a single regulation fully capture all “high-risk AI systems,” especially when technologies change so quickly and companies keep looking for new ways to innovate in such a lucrative field?
At the 2024 Seoul AI Safety Summit, major players like Microsoft, Amazon, and OpenAI, among others from the U.S., China, Canada, the U.K., France, South Korea, and the UAE, agreed to a set of voluntary commitments on AI safety and ethics. The participants pledged to create safety frameworks for mitigating the challenges associated with their frontier AI models and to address risks that advances in AI could enable, such as cyberattacks, the development of bioweapons, and the misuse of the technology by malicious actors. While these commitments are important as AI becomes more prevalent, open questions remain about the potential ineffectiveness of costly penalties like system shutdowns and whether different regulatory policies might provide more effective enforcement in their place.
With the fast pace of technological growth and the trial-and-error approach to regulation, navigating contractual obligations in commercial and transactional spaces remains a key strategic focus. Given how risky developing AI models can be, and how little certainty investors have about the safety of their investments when they put money into AI startups, legal compliance will play an increasingly crucial role in managing investor expectations. With proposed legislation potentially implementing product withdrawal or shutdown requirements, investors and companies face significant risks and financial burdens in this growing field. Closely monitoring compliance requirements as they arise, adhering to them promptly, and building stronger corporate strategies in collaboration with key stakeholders, including investors, are therefore essential to keeping the business running smoothly.