"If we can be a business enabler, I think that's a much more exciting space. We all agree that there's great potential for AI to create better efficiency, product offerings, and maybe even better equity within the insurance space."
- Anthony Habayeb, Monitaur CEO
December 2024 | What's new this month
Monitaur Year in Review
It's been a great year here at Monitaur!
The AI Fundamentalists
Model documentation: Beyond model cards and system cards in AI governance
What if the secret to successful AI governance lies in understanding the evolution of model documentation? In this episode, our hosts challenge the common belief that model cards marked the start of documentation in AI.
The vast majority of businesses are using artificial intelligence tools, and most are doing so without any AI governance, a glaring compliance gap that poses extreme risks, according to a new survey by Compliance Week and GAN Integrity.
Nearly two-thirds of organizations have adopted generative AI without establishing proper governance controls, highlighting a significant gap between implementation and oversight, according to new research from Deloitte. While 58% of organizations are currently using generative AI to some degree, the study found that 21% of extensive AI users and 41% of limited AI users have no controls in place. This governance gap persists even as AI adoption expands, though it narrows somewhat with more extensive AI use.

“It is critical that organizations adopting GenAI tools also adopt corresponding guardrails to govern its use — ideally before implementation,” said Casey Kacirek, internal audit managing director at Deloitte & Touche LLP. “Organizations that are already using GenAI without a controls framework should prioritize putting the necessary controls in place to minimize unintended consequences of the technology and protect the integrity of outputs.”
AI has been a huge topic since the release of ChatGPT two years ago. Whatever one thinks about the incoming Trump administration, it’s important to recognize this reality: A big push for looser AI regulation is heading our way. An influential group of Silicon Valley figures put its full weight behind Trump’s presidential campaign, and this cohort expects its agenda to be implemented. Their mantra is to unshackle AI development in the U.S., in order to win the AI arms race with China and deliver extraordinary benefits to society.
Artificial intelligence (AI) makes important decisions that affect our everyday lives. These decisions are implemented by firms and institutions in the name of efficiency. They can help determine who gets into college, who lands a job, who receives medical treatment, and who qualifies for government assistance. As AI takes on these roles, there is a growing risk of unfair decisions, or the perception of unfairness among those affected. For example, in college admissions or hiring, automated decisions can unintentionally favor certain groups or backgrounds, while equally qualified but underrepresented applicants get overlooked. And when governments use AI in benefit systems, it may allocate resources in ways that worsen social inequality, leaving some people with less than they deserve and a sense of unfair treatment.
Artificial intelligence will fundamentally alter societies by transforming the creative process, education, and business, but its impact promises to be uneven across regions, communities, and social classes. The extent and variation of AI's benefits and risks for society can be understood by investigating its potential to enhance cognitive skills (including creativity), its ability to transform living standards, and its disruptive effects on economic equality. Whether AI has a net positive influence on society depends on policymakers, private corporations, nongovernmental institutions, and societal norms steering its responsible, secure, and equitable development and deployment.