NAIC AI Systems Evaluation Tool Pilot: A Guide for Insurers
From the blog this month: Starting March 2026, the NAIC AI Systems Evaluation Tool Pilot goes live across 12 states, and insurers need to be prepared.
Monitaur CEO Anthony Habayeb breaks down what the pilot actually means for carriers: what regulators are evaluating, where the real complexity lives (hint: Exhibit D and third-party risk), and the three steps insurers should take right now to get ahead of it.
The companies that treat this as a governance moment, not a compliance checkbox, will be in the best position when exam time comes. Read the full breakdown on the Monitaur blog.
In a nascent and fragmented regulatory environment, focusing on the ethical implications of the company’s AI activities is a prudent strategy for board oversight of AI risk. This article discusses board oversight of AI risk through an ethical lens and proposes areas of inquiry for boards to consider in discussions with management.
AI has become the top priority for insurance industry leaders heading into 2026, according to the International Insurance Society's 2026 Global Priorities Report. Artificial intelligence now dominates the strategic agenda for insurers, brokers and other industry stakeholders, reflecting a shift from aspirational planning to urgent operational necessity.
Governor Gavin Newsom issued an executive order to strengthen California’s procurement processes and raise the bar for artificial intelligence companies seeking to do business with the state. The order aims to ensure that vendors meet strong standards and demonstrate responsible policies that prevent misuse of their technology while protecting users’ safety and privacy. As the largest state in the nation and the world’s fourth-largest economy, California is committed to ensuring that the AI solutions it adopts and deploys cannot be misused by bad actors seeking to exploit users’ data, subvert their security, or violate their civil rights.
On March 17, 2026, the Colorado AI Policy Work Group, with strong support from Governor Jared Polis, proposed a new artificial intelligence (AI) legal framework to replace Colorado’s law Concerning Consumer Protections in Interactions with Artificial Intelligence Systems (the Colorado AI Act). As described in greater detail in our June 2024 Legal Update, the Colorado AI Act is currently the most comprehensive and robust AI law in the United States. Similar to the EU AI Act, the Colorado AI Act imposes obligations on developers and deployers of AI systems classified as “high risk” under the law. The law was originally set to go into effect on February 1, 2026, but was amended late last year to postpone the effective date to June 30, 2026.
The AI revolution is here, and boards are being called to provide more oversight, even as the implications for both strategy and risk are still coming into focus. The Flock and Workday cases underscore that the ethical and responsible use of AI is now a board-level governance issue, requiring oversight of both internal and third-party AI — including bias auditing, transparency and human accountability — amid rising litigation and regulatory scrutiny.
Gartner has projected that more than 2,000 legal claims linked to so-called “death by AI” incidents will be brought worldwide by the end of 2026. The prediction reflects a growing awareness of the legal and financial consequences associated with the increasing deployment of artificial intelligence across industries.