One threat detection configuration file update caused a massive collapse of IT infrastructure worldwide, making us realize how fragile our automated, computerized society has become
- Andrew Clark
August 2024 | What's new this month
Preventing AI incidents, Part 1: Why robustness and resilience are critical for AI governance
With the CrowdStrike IT incident of 2024, this generation’s Y2K finally happened. In the run-up to the year 2000, teams anticipated the millennium bug and averted the possibility of worldwide computer failures. CrowdStrike and the systems in its path, however, didn’t have the benefit of foresight or avoidance. Instead, the deployment of a single threat detection configuration file update caused a massive collapse of IT infrastructure worldwide, making us realize how fragile our automated, computerized society has become.
Although the incident was unrelated to AI, it made us think about the parallels to AI systems and about the fact that we’ve yet to have our CrowdStrike-scale AI fiasco. We believe that such an event is a matter of “when”, not “if”. This is the first post in a series, building on our systems engineering discussions, about how to properly build and govern robust, performant, resilient systems. Our goal is to insulate your company from AI-related disasters and to improve your odds of creating successful AI systems.
Join us as we chat with Patrick Hall, Principal Scientist at Hallresearch.ai and Assistant Professor at George Washington University. He shares his insights on the current state of AI, its limitations, and the potential risks associated with it. The conversation also touches on the importance of responsible AI, the role of the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) in its adoption, and the implications of using generative AI in decision-making.
Many efforts to build an AI ethics program miss an important fact: ethics differ from one cultural context to the next. Ideas about right and wrong in one culture may not translate to a fundamentally different context, and even when there is alignment, there may well be important differences in the ethical reasoning at work — cultural norms, religious tradition, etc. — that need to be taken into account. Because AI and related data regulations are rarely uniform across geographies, compliance can be difficult. To address this problem, companies need to develop a contextual global AI ethics model that prioritizes collaboration with local teams and stakeholders and devolves decision-making authority to those local teams. This is particularly necessary if their operations span several geographies.
Artificial intelligence (AI) has improved to the point that machines can now perform tasks once limited to humans. AI can produce art, engage in intelligent conversations, recognize objects, learn from experience, and make autonomous decisions, making it useful for personalized recommendations, social media content creation, healthcare decisions, job candidate screening, self-driving cars, and facial recognition. The relationship between AI and ethics is of growing importance: while the technology is new and exciting, with the potential to benefit businesses and humanity as a whole, it also creates many unique ethical challenges that need to be understood, addressed, and regulated.
The artificial intelligence revolution may be moving a bit fast. Americans aren't comfortable with the government's approach to regulating AI, according to a new survey from data intelligence company Collibra. Nearly every respondent expressed unease over regulatory AI safeguards, or the lack of them. But the same poll showed that nearly 9 in 10 people employed in decision-making roles at a variety of companies place a lot of trust in the way their own business is dealing with AI, including how staff are being trained and upskilled.
MIT professor Daron Acemoglu, co-author of the bestselling book Why Nations Fail and the recently released Power and Progress, discussed the current pitfalls of AI deployment for worker protections and prosperity at the Inequality in the Digital Age conference, hosted by Harvard Business School's Institute for Business in Global Society (BiGS) in collaboration with the Race, Gender, and Equity initiative at HBS. Acemoglu believes that AI has great potential to bring innovation and prosperity, but he argues that rapid deployment in the hands of big technology corporations is likely to overestimate the technology and underestimate the value of workers. He argues that, in the long run, if CEOs see their workers as resources to be further empowered by AI, there is a path that can allow both businesses and workers to flourish.