
Artificial intelligence, once viewed primarily as a growth driver, has rapidly become a material risk factor for large U.S. companies, prompting a surge in disclosures in regulatory filings. According to a recent analysis of S&P 500 companies' annual SEC Form 10-K filings, 72 percent of firms now report at least one AI-related risk, up sharply from 12 percent in 2023.
The increase reflects the deepening integration of AI into business operations and decision-making, as companies rely on the technology for everything from customer service and predictive analytics to automated operations and product development. Alongside its benefits, AI is now widely recognized as a source of potential harm, encompassing reputational, cybersecurity, and regulatory risks.
Reputational threats are the most commonly cited concern. Companies warn that failures in AI systems, biased or erroneous outputs, and missteps in customer-facing applications could damage brand trust. Overpromising results or delivering faulty AI experiences may erode confidence among customers, investors, and the public, leading to long-term consequences for corporate reputation and competitive positioning.
Cybersecurity has emerged as another critical area of risk. Companies report that AI could both increase the complexity of their technology environment and expand the attack surface for cybercriminals. The technology itself may be used maliciously to automate attacks, conduct sophisticated impersonations, or amplify disinformation campaigns, making robust oversight and security measures increasingly essential.
In addition to reputational and cyber concerns, firms are disclosing evolving regulatory and legal exposure related to AI. Companies face uncertainty as governments around the world develop rules governing AI deployment, data privacy, and algorithmic accountability. Potential legal liabilities also extend to intellectual property disputes, including copyright claims and challenges surrounding data used for training AI models. The shifting regulatory landscape adds complexity to corporate governance and risk management, as businesses must prepare for compliance across multiple jurisdictions.
Beyond these primary concerns, companies are beginning to identify additional AI-related risks. These include environmental impacts associated with large AI models, workforce disruptions resulting from automation, and potential liability tied to autonomous and automated decision-making systems. The breadth of these risks highlights that AI is no longer a specialized technology concern but a strategic issue that touches multiple aspects of corporate operations.
While disclosures have increased, analysts note that many remain general in nature. Companies often reference AI-related risks without specifying the measures they are taking to detect, mitigate, or monitor those risks. This lack of specificity can make it challenging for investors and stakeholders to fully assess a firm's preparedness or resilience in the face of potential AI failures.
The heightened attention to AI risks reflects a shift in corporate governance. Boards and executives are now expected to integrate AI into their enterprise risk frameworks with the same rigor applied to finance, operations, and compliance. Investors are likewise gaining a clearer view of how companies perceive AI not merely as a growth opportunity, but as a potential source of operational, reputational, and regulatory harm. As global regulatory frameworks, including the European Union's AI Act, continue to take shape, companies are likely to face more stringent compliance obligations, further emphasizing the importance of effective risk management and transparent disclosure.
For many companies, addressing AI risks will require more than disclosure. Operational safeguards such as bias testing, red teaming, post-deployment monitoring, and careful oversight of vendors and third-party AI providers are becoming critical. Firms that fail to implement such measures may encounter not only regulatory scrutiny but also reputational damage and operational disruptions that could have material financial consequences.
The surge in AI risk reporting represents a turning point for corporate America. What was once a tool for innovation has become a strategic governance issue, with implications for investors, regulators, and the broader public. Companies that can effectively identify, manage, and communicate AI risks will be better positioned to maintain trust, minimize harm, and sustain long-term resilience in an increasingly AI-driven economy.
Originally published on IBTimes