From the course: Introduction to Auditing AI Systems
What is an AI audit?
- [Narrator] Over the past two decades, you may have heard about several incidents of harmful AI resulting from weak governance and oversight. For example, facial recognition technology has contributed to the wrongful arrest of citizens, a result of skewed historical data, poorly designed algorithms, and a lack of mitigation strategies to avoid these consequences. These incidents have reduced public trust in AI and its potential benefits.

However, the situation is changing as new regulations are implemented globally to address how human data is collected and how AI systems make decisions. Since the adoption of the GDPR, there's been a growing emphasis on responsible AI development. Companies must now collect data and build machine learning systems in ways that are compliant and, ideally, ethical.

As a result, the AI audit has become a popular tool for determining whether an AI system or its components comply with company policy, industry standards, or regulations. An AI audit consists of various public disclosures, internal procedures, code tests, and documentation used to identify disparities in AI system outcomes. An AI audit typically looks for biased outcomes across various kinds of AI systems and focuses on auditing the data, the model, or the overall system outcomes. In addition, some AI audits attempt to identify the root cause of a performance disparity. It's critical that AI auditors and AI developers together outline the goals of an AI audit and what they hope it achieves.

AI audits borrow from practices in financial auditing, which provide governance, accountability, and oversight. Audits may examine the data, evaluate the model, or reevaluate a previously audited system. As AI is adopted more widely in high-risk domains such as healthcare and finance, AI systems are increasingly used to make critical decisions, for example, determining which patients are seen first in an emergency room. In such situations, the risk to human life is very high.
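To make "identifying disparities in AI system outcomes" concrete, here is a minimal sketch of one check an audit might run: comparing a model's positive-outcome (selection) rate across demographic groups. The data, group labels, and the 0.8 ratio threshold below are illustrative assumptions for this example, not part of any specific audit standard.

```python
def selection_rates(outcomes, groups):
    """Return the fraction of positive outcomes (1s) for each group."""
    rates = {}
    for group in set(groups):
        decisions = [o for o, g in zip(outcomes, groups) if g == group]
        rates[group] = sum(decisions) / len(decisions)
    return rates

def disparity_ratio(rates):
    """Ratio of the lowest group selection rate to the highest."""
    return min(rates.values()) / max(rates.values())

# Hypothetical loan-approval decisions (1 = approved) and group labels.
outcomes = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]

rates = selection_rates(outcomes, groups)
ratio = disparity_ratio(rates)

# An auditor might flag the system if the ratio falls below a
# chosen threshold (0.8 is a common illustrative cutoff).
if ratio < 0.8:
    print(f"Disparity flagged: ratio {ratio:.2f}, rates {rates}")
```

A real audit would go further, testing for statistical significance and tracing the disparity back to the data or model, but the core idea is the same: measure outcomes per group and compare.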
Algorithmic audits can be performed internally, meaning people who work for the company that developed the algorithm evaluate it, or externally, meaning a third-party group of auditors analyzes the outcomes of an AI system. Each approach has challenges that we'll discuss later in this course. To conduct an AI audit, practitioners rely on industry standards, such as the NIST AI Risk Management Framework, or other regulatory guidance. There's also a growing movement toward community-based algorithmic auditing, similar to the well-established practice of bug bounties, where external hackers are rewarded for finding vulnerabilities and bugs in software.

While AI auditing is on the rise, there are few governing bodies that audit and monitor the auditors themselves. This course aims to give you an understanding of what an AI audit should look like and an overview of third-party audits. Third-party audits help ensure that auditing organizations don't falsify positive results to benefit themselves. At the same time, auditors should also face consequences when they falsely label AI systems as compliant.

The results of an AI audit are used to identify areas for improvement and to aid in developing harm mitigation plans. AI systems can unintentionally harm people, for example through misidentification or the wrongful denial of financial opportunities. Biased AI can also lead to allocation harms, where automated decisions unfairly extend or withhold resources, and quality-of-service harms, where AI systems don't work as well for some groups as they do for others. Lastly, AI systems can exhibit stereotyping harms. This happens when supervised learning systems use historical data to stereotype people based on past examples, assuming similar patterns whenever they encounter data subjects with similar attributes, such as race and gender. Most external audits happen after a model is already built, so a system may cause harm before it's ever assessed.
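The quality-of-service harms described above can be probed with a simple per-group performance comparison. This is a minimal sketch under assumed, illustrative data; the labels, predictions, and groups are hypothetical, and a real audit would use held-out evaluation data and appropriate statistics.

```python
def accuracy_by_group(y_true, y_pred, groups):
    """Return classification accuracy computed separately per group."""
    acc = {}
    for group in set(groups):
        pairs = [(t, p) for t, p, g in zip(y_true, y_pred, groups) if g == group]
        acc[group] = sum(t == p for t, p in pairs) / len(pairs)
    return acc

# Hypothetical ground-truth labels, model predictions, and group labels.
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 1, 0, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

acc = accuracy_by_group(y_true, y_pred, groups)
gap = max(acc.values()) - min(acc.values())
print(f"Per-group accuracy: {acc}, gap: {gap:.2f}")
```

A large gap between groups is exactly the "works for some, not others" pattern the narrator describes, and it is one of the simplest signals an auditor can surface from system outputs alone.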
Now that we have a basic understanding of the context around why we audit AI systems, let's go over how we use AI audits.