"While there is potential for AI to reach consciousness, we're far from that point."
- Dr. Andrew Clark
March 2024 | What's new this month
5 tips for governance in your AI product strategy
Risk teams are often the first to push for a systematic approach to ensuring that the AI models powering their solutions meet applicable standards for safety, ethics, quality, and performance. The first task is navigating the complex, sometimes conflicting rules and principles that govern AI systems. After that comes balancing the needs of model builders and engineers against the growing expectation that non-technical audiences be able to understand AI models.
We spoke with Susan Bow, General Counsel at CAPE Analytics, a fast-growing vendor of AI solutions for insurance and real estate enterprises, about how her team meets and exceeds customer expectations with an AI governance solution from Monitaur.
We're taking a slight detour from modeling best practices to explore questions about AI and consciousness. With special guest Michael Herman, co-founder of Monitaur and TestDriven.io, the team discusses different philosophical perspectives on consciousness and how they apply to AI. They also discuss the potential dangers of AI in its current state and why starting fresh, rather than iterating on existing systems, could make all the difference in building AI with characteristics that resemble consciousness.
Week after week, we express amazement at the progress of AI. At times, it feels as though we’re on the cusp of witnessing something truly revolutionary (singularity, anyone?). But when AI models do something unexpected or bad and the technological buzz wears off, we’re left to confront the real and growing concerns over just how we’re going to work and play in this new AI world.
AI poses unique challenges for governance that must be directly addressed to mitigate risks and take advantage of opportunities. How are organizations developing AI governance frameworks, and what benefits and challenges have they experienced around AI governance?
Ambitious plans to forge a new global governance regime for AI may collide with an unfortunate obstacle: cold reality. The great powers, namely, China, the United States, and the EU, may insist publicly that they want to cooperate on regulating AI, but their actions point toward a future of fragmentation and competition. Divergent legal regimes are emerging that will frustrate any cooperation when it comes to access to semiconductors, the setting of technical standards, and the regulation of data and algorithms. This path doesn’t lead to a coherent, contiguous global space for uniform AI-related rules but to a divided landscape of warring regulatory blocs—a world in which the lofty idea that AI can be harnessed for the common good is dashed on the rocks of geopolitical tensions.
The EU wouldn’t be the first jurisdiction to regulate AI, but the size of the European market means the EU AI Act would be felt globally, shaping how multinational companies manage data and navigate AI. The law would apply to businesses operating in the 27-nation market and to builders of AI systems used in the EU, no matter where those builders are based, including the US. If a company’s product is put on the market in the EU, it would fall within the scope of the AI Act and the company would have to comply, said Evi Fuelle, global policy director at the AI governance platform company Credo AI.
This use of AI goes beyond the usual campaign dishonesty to outright fabrication. You might assume that is illegal, but the federal agency that oversees elections has yet to take regulatory action on so-called deepfakes. The head of the Federal Election Commission, or FEC, said he expects a resolution of the issue "later this year," leaving open the possibility that this misinformation will go unregulated for much of the 2024 election cycle.