“To use generative AI foundation models responsibly, commensurate with the risk of the systems, you cannot succumb to the market perception of generative AI being an instant game-changer without knowing what part of your own game you want to change.”
- Andrew Clark, Ph.D.
November 2024 | What's new this month
Register now: Understanding AI Governance, Risk Management, & Compliance (AI GRC) Solutions: Part of the OCEG GRC Technology Series
Join Michael Rasmussen (GRC2020) and Anthony Habayeb (Monitaur) as they explore robust AI GRC solutions that provide governance, monitoring, and compliance management across AI systems. The webinar will outline the benefits of a strong AI GRC solution and demonstrate how each capability addresses your organization’s AI-related governance, risk, and compliance challenges.
This link takes you to the OCEG webinar registration page.
The AI Fundamentalists
New paths in AI: Rethinking LLMs and model risk strategies
Are businesses ready for large language models as a path to AI adoption? In this episode, the hosts reflect on what has and hasn’t changed in the world of LLMs over the past year.
Over the past year, AI has evolved rapidly, moving beyond generative models that create text and images to advanced automation systems that are already transforming healthcare, finance, education and more. AI’s potential to revolutionize everything from disease diagnosis to supply chain optimization is undeniable.
However, as this technology progresses, it also raises significant challenges, particularly around ethics, privacy and governance. Concerns around data privacy, algorithmic bias and transparency are growing as AI becomes more integrated into everyday life.
Successful AI Ethics & Governance at Scale: Bridging the Interpretation Gap
AI ethics and governance has become a noisy space. The OECD tracker counts over 1,800 national-level documents on initiatives, policies, frameworks, and strategies as of September 2024 (and there seem to be consultants and influencers opining on every one). However, as Mittelstadt (2021) puts it with characteristic academic understatement, principles alone cannot guarantee ethical AI. Despite the abundance of high-level guidance, a notable gap remains between policy and real-world implementation. Why is this the case, and how should data science and AI leaders think about it?
The introduction of new artificial intelligence-based technologies has generated front-page headlines and grabbed the attention of consumers and policymakers. While related technologies have been in use in workplaces for many years, new commercially successful products have spurred greater discussion about the impact of artificial intelligence (AI) on our economy and society. Similarly, new research and reporting have highlighted the direct experience of workers who use, or are subject to, these technologies in industries ranging from warehousing and manufacturing to health care and retail services. In response to growing attention to, and concern about, the labor market impacts of AI technologies, policymakers at nearly every level of government have published principles, issued new guidance, and introduced legislation on a range of AI-related issues, including data privacy, employer disclosure practices, and auditing requirements.
Artificial intelligence (AI) is on the brink of reshaping not just industries but the very fabric of our relationships and interactions, moving beyond mere tools into the realm of companionship and emotional support. Renowned historian and author Yuval Noah Harari recently discussed with journalist Andrew Ross Sorkin the potential for AI to redefine how we build connections, emphasizing its growing sophistication at deciphering human emotions.
Intelligent machines and AI interfaces are increasingly embedded in a range of social contexts. In turn, these machines are themselves deeply shaped by the social and cultural milieu of their human creators. Milena Tsvetkova makes the case that social scientists should recognize and engage with the social properties of these new technologies.