"Although MLOps monitoring is a crucial part of governance, monitoring is not governance by itself, nor is it valuable in isolation."
- Andrew Clark
October 2023 | What's new this month
MLOps monitoring isn't necessarily model governance
A common area of confusion in data science is how monitoring and governance are related to one another. With the emergence of MLOps as a separate field, and its adoption of DevOps principles for deploying machine learning models, some key principles of model governance have been lost.
There is a tendency to embrace the "move fast and break things" mentality even when developing mission-critical systems that affect end users' lives, without adhering to modeling best practices or subjecting models to objective review and validation.
Read the latest post from Andrew Clark and learn exactly what is missing from MLOps monitoring that is essential for governance.
This is the first in a series of episodes dedicated to model validation. Today, we focus on model robustness and resilience. From complex financial systems to why your gym might be overcrowded at New Year's, you've been directly affected by these aspects of model validation.
This Forbes article provides an interesting perspective on the role individuals and companies have when steering the course of AI. As AI steadily integrates into our daily lives, ethical dilemmas have come to the forefront. I firmly believe that this duty extends beyond government boundaries; it's a shared responsibility encompassing each and every one of us, especially business leaders working with or on AI technology.
While AI legislation advances, some regulators are experimenting with gathering information about algorithmic systems and their potential societal effects. This experimentation has developed a toolbox of AI regulatory strategies, each with different strengths and weaknesses. These potential interventions include transparency requirements, algorithmic audits, AI sandboxes, leveraging the AI assurance industry, and welcoming whistleblowers.
The second draft of the Model Bulletin on the Use of Artificial Intelligence Systems by Insurers has been exposed for public comment. The public comment period ends on November 6, 2023. To read the draft, visit the NAIC site and click Exposure Drafts.
AI is supposed to change everything. But what if it changes… literally everything, right down to the very fabric of government and society? That's the vision presented in a recent series of essays from Samuel Hammond, a Canadian-American economist who has written extensively on AI, technology, and social policy.
Upcoming Events
You can always contact us to chat, but let us know if you want to meet in person at these events: