Stay up to date: news and updates from the Monitaur team, plus viewpoints from across the web.
AI Governance Newsletter from Monitaur

News | About Us | LinkedIn

"Information theory divergences are not useful for monitoring production model performance."

- The AI Fundamentalists Podcast Team

April 2024 | What's new this month

Information theory and the complexities of AI model monitoring


In a recent episode of The AI Fundamentalists podcast (linked below), we discussed information theory and why, although it is a valuable discipline, its divergences are often the wrong choice for monitoring model and data drift.


In this post, we summarize the goals of information theory, define the differences between metrics and divergences, explain why divergences are the wrong choice for monitoring, and propose better alternatives.

Read the Post
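
To make the binning problem concrete, here is a minimal Python sketch (our illustration, with synthetic data and hypothetical variable names, not code from the post). It estimates the KL divergence between a reference sample and a mildly drifted production window: the same drift yields very different scores depending on an arbitrary bin count, which is one reason a fixed divergence-based alert threshold is hard to defend.

    import numpy as np
    from scipy.stats import entropy  # entropy(p, q) computes KL(p || q) for discrete distributions

    rng = np.random.default_rng(0)
    reference = rng.normal(0.0, 1.0, size=5_000)  # training-time feature sample
    production = rng.normal(0.3, 1.0, size=500)   # mildly drifted live window

    def binned_kl(ref, prod, n_bins):
        """Estimate KL(prod || ref) after histogram binning; smoothing avoids inf on empty bins."""
        edges = np.histogram_bin_edges(np.concatenate([ref, prod]), bins=n_bins)
        q = np.histogram(ref, bins=edges)[0] + 1e-9   # reference counts
        p = np.histogram(prod, bins=edges)[0] + 1e-9  # production counts
        return entropy(p, q)  # scipy normalizes the counts internally

    # The identical drift produces very different "scores" under different
    # (arbitrary) binning choices; finite-sample noise inflates fine-grained bins.
    for n_bins in (5, 20, 200):
        print(f"{n_bins:>3} bins -> KL estimate = {binned_kl(reference, production, n_bins):.4f}")

Without the smoothing term, the estimate becomes infinite whenever the production window puts mass in a bin the reference never populated, so a single outlier can max out the alarm.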

The AI Fundamentalists Podcast

Information theory and the complexities of AI model monitoring

In this episode, we explore information theory, the not-so-obvious shortcomings of its popular metrics for model monitoring, and where non-parametric statistical methods can serve as a better option.

Listen Now
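
For contrast, here is a sketch of the kind of non-parametric alternative the episode points toward. The two-sample Kolmogorov-Smirnov test is one common choice (our illustrative example; the episode discusses the general approach rather than prescribing this exact test): it compares raw samples directly, requires no binning or distributional assumptions, and returns a p-value with a standard decision rule.

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(1)
    reference = rng.normal(0.0, 1.0, size=5_000)  # training-time feature sample
    production = rng.normal(0.3, 1.0, size=500)   # incoming live window

    # Two-sample KS test: max distance between the empirical CDFs of the samples.
    result = ks_2samp(reference, production)
    print(f"KS statistic = {result.statistic:.3f}, p-value = {result.pvalue:.4f}")
    if result.pvalue < 0.01:
        print("Drift alarm: production window differs from reference.")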

AI Governance & Assurance | Ethics & Responsibility

Walking the Walk of AI Ethics in Technology Companies

In this brief, Stanford scholars present one of the first empirical investigations into AI ethics on the ground in private technology companies. In recent years, technology companies have published AI principles, hired social scientists to conduct research and compliance work, and employed engineers to develop technical solutions related to AI ethics and fairness. Despite these new initiatives, many private companies have not yet prioritized the adoption of accountability mechanisms and ethical safeguards in the development of AI. Companies often “talk the talk” of AI ethics but rarely “walk the walk” by adequately resourcing and empowering teams that work on responsible AI.

The ethics of artificial intelligence: A path toward responsible AI

Artificial intelligence has been around for decades. But the scope of the conversation around AI changed dramatically in late 2022, when OpenAI launched ChatGPT, a large language model that, once prompted, can spit out almost-passable prose in a strange semblance of, well, artificial intelligence.

Its existence has amplified a debate among scientists, executives, and regulators around the harms, threats, and benefits of the technology. 

Visit the Blog

Industry Regulation & Legislation

The Brussels Effect: How the EU's Law Affects AI Regulations Globally

The European Union (EU) carves out a unique niche in the global landscape. While it may not possess the raw military might of the US or the economic muscle of China, the EU exerts significant influence through a phenomenon known as the Brussels Effect. This concept highlights the EU’s ability to shape global standards through its own regulatory framework, bypassing traditional methods of international collaboration and global governance. However, as AI assumes critical importance in the global landscape, there are active efforts to contain the influence of the Brussels Effect, originating from Washington, London, and numerous emerging countries.


The Inherent Paradox of AI Regulation

Nary a day goes by when we don't learn about a new regulation of artificial intelligence (AI). This is not surprising, with AI being widely touted as the most powerful new technology of the 21st century; but because there is no agreed-upon definition of AI, and the landscape is constantly growing and changing, many new regulations are steeped in contradiction. There is also an often-overlooked problem here: because we are at such an early stage of consumer large language generative AI (LLGAI), any regulations made now will be based on what little we know today. And with such a rapidly evolving technology, what makes perfect sense in 2024 may be irrelevant, or even counterproductive, by 2029.

Impact & Society

Big AI has already gone rogue. Where is the regulation?

ChatGPT debuted in the autumn of 2022, and it took less than a year for experts to worry that artificial intelligence more broadly could represent an existential threat to humankind. By March 2023, more than 33,000 experts asked Big AI to “pause” further development and testing until that threat could be gauged, prevented, and mitigated. The U.S. Senate held hearings with AI developers in May 2023 and called for strict government regulation. It’s been almost a year now, and it’s time to admit our lawmakers are not up to the challenge. 

Upcoming Events

  • Sunday, April 21 - Tuesday, April 23: IRES Compliance Rocks
  • Monday, April 22 - Tuesday, April 23: Insurance Innovators USA
  • Wednesday, May 15 - Thursday, May 16: The Future of Insurance USA 2024. Join the Monitaur team on the Blue Stage on May 16 for a hands-on workshop on Defining, Managing, and Automating your AI Governance Operations.


Monitaur, Inc.

Monitaur, 19 Plantation Drive, Duxbury, MA

Unsubscribe | Manage preferences