MLOps isn't necessarily model governance, our latest podcast, and is AI the end of everything?
MLA Newsletter

News | About Us | LinkedIn

"Although MLOps monitoring is a crucial part of governance, monitoring is not governance by itself, nor is it valuable in isolation."

- Andrew Clark

October 2023 | What's new this month

MLOps monitoring isn't necessarily model governance

A common area of confusion in data science is how monitoring and governance are related to one another. With the emergence of MLOps as a separate field, and its adoption of DevOps principles for deploying machine learning models, some key principles of model governance have been lost.


There is a tendency to embrace the "move fast and break things" mentality even when developing mission-critical systems that affect end users' lives, skipping modeling best practices, objective reviews, and validation.


Read the latest post from Andrew Clark to learn exactly what MLOps monitoring lacks that is essential for governance.

    Read the Post

    The AI Fundamentalists Podcast

    Episode 8: Model validation: Robustness and resilience

    This is the first in a series of episodes dedicated to model validation. Today, we focus on model robustness and resilience. From complex financial systems to why your gym might be overcrowded at New Year's, these aspects of model validation have directly affected you.

    Listen Now

    AI Governance & Assurance | Ethics & Responsibility

    Why Everyone Has a Role In Creating AI Ethics Standards

    This Forbes article offers an interesting perspective on the role individuals and companies play in steering the course of AI. As AI steadily integrates into our daily lives, ethical dilemmas have come to the forefront. I firmly believe that this duty extends beyond government boundaries; it's a shared responsibility encompassing each and every one of us, especially business leaders working with or on AI technology.

    Visit the Blog

    Industry Regulation & Legislation

    The AI regulatory toolbox: How governments can discover algorithmic harms

    While AI legislation advances, some regulators are experimenting with gathering information about algorithmic systems and their potential societal effects. This experimentation has developed a toolbox of AI regulatory strategies, each with different strengths and weaknesses. These potential interventions include transparency requirements, algorithmic audits, AI sandboxes, leveraging the AI assurance industry, and welcoming whistleblowers.


    Second Draft of the Model Bulletin on the Use of Artificial Intelligence Systems by Insurers (NAIC, October 2023)

    The second draft of the Model Bulletin on the Use of Artificial Intelligence Systems by Insurers is open for public comment. The comment period ends on November 6, 2023. To get the draft, visit the NAIC site and click Exposure Drafts.

    Impact & Society

    How AI will trigger the next 'end of history'

    AI is supposed to change everything. But what if it changes literally everything, right down to the very fabric of government and society? That's the vision presented in a recent series of essays from Samuel Hammond, a Canadian-American economist who has written extensively on AI, technology, and social policy.

    Upcoming Events

    You can always contact us to chat, but let us know if you want to meet in person at these events:

    • Silicon Prairie, Omaha, NE. Oct 23-24
    • Insurtech Ohio Social, Columbus, OH. Oct 24
    • Scaleup:AI, New York, NY. Oct 27
    • ITC, Las Vegas, NV. Oct 31 - Nov 2
    Contact Us
    Monitaur, Inc.

    Monitaur, 19 Plantation Drive, Duxbury, MA
