From the course: Responsible AI: Principles and Practical Applications

Risks of AI

- In 2019, Nijeer Parks was accused of shoplifting candy from a hotel gift shop in New Jersey. Police identified Parks by running facial recognition technology on a fake driver's license that the suspect dropped when fleeing. Parks was arrested and held for 10 days. But there was a problem. He was innocent. Bank records confirmed that he was 30 miles away at the time of the crime, making a purchase in a pharmacy.

Police departments and law enforcement agencies are increasingly rolling out AI-enabled technologies, such as facial recognition systems, to increase the efficiency of their operations. Facial recognition systems use machine learning techniques to identify, collect, store, and evaluate facial characteristics so that they can be matched to photos of people in a database. But these systems are often fraught with bias. Leading AI researchers Joy Buolamwini and Timnit Gebru evaluated the accuracy of commercial facial recognition systems. Their research, and research conducted by the National Institute of Standards and Technology, or NIST, found that facial recognition systems are often ill-equipped to handle non-Caucasian faces, producing more false positives for people of color and women.

In this video, we'll explore three of the most prominent risk areas of AI: bias and discrimination, transparency and accountability, and privacy and security.

Let's start with bias and discrimination. We just discussed an example of what that might look like in a facial recognition system, but what is actually causing this to happen? Algorithmic bias describes systematic and repeatable errors in a computer system that create unfair outcomes, which occur when an algorithm produces results that are prejudiced due to erroneous assumptions in the machine learning process. In our earlier example of the risks of false positive identification in facial recognition systems, algorithmic bias led to an innocent person being arrested.

Let's consider another example. From 2014 to 2018, Amazon researchers developed a machine learning algorithm to identify the best candidate for a job. The model was trained on data drawn from resumes submitted to Amazon over a 10-year period. By doing so, the team believed the model would identify relevant skills and features of current and potential employees. But because most of those resumes were submitted by men, the model inadvertently learned that male candidates were preferable to female candidates. Realizing this, Amazon never put the model into use. Amazon's experience is not isolated: LinkedIn, CareerBuilder, and Indeed have all uncovered gender bias in their job recommendation algorithms. We'll explore more examples of algorithmic bias in human resources in chapter four, when we discuss ways to develop responsible AI systems.

Now let's talk about transparency and accountability. Machine learning algorithms can vary widely in their complexity, from simple algorithms based on linear regression to complex algorithms based on deep neural networks. Simple algorithms are usually transparent. In other words, we can readily see what factors the model considered and how it used these factors to reach its predictions. On the other hand, complex algorithms are often opaque. That is, they are so complicated that even those who created them do not fully understand how the model made its predictions. This lack of transparency has led to these models being dubbed black boxes. In other words, it may be impossible to understand what factors the model considered and how it used these factors to reach its predictions.
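The narration keeps this contrast conceptual, but a minimal sketch can make it concrete. The example below is illustrative only and not from the course: it assumes scikit-learn and uses synthetic data to show that a linear model exposes one inspectable weight per input feature, while a neural network spreads its behavior across many weight matrices.

```python
# Minimal sketch (assuming scikit-learn): a transparent linear model
# versus a more opaque neural network trained on the same synthetic data.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

# Synthetic data standing in for any tabular prediction task.
X, y = make_regression(n_samples=500, n_features=4, noise=0.1, random_state=0)

# Transparent model: one weight per input feature, directly inspectable,
# so we can see which factors drove a prediction and by how much.
linear = LinearRegression().fit(X, y)
print("Linear model coefficients:", linear.coef_)

# Opaque model: the learned behavior is spread across stacked weight
# matrices, so no single number explains how a prediction was reached.
mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0).fit(X, y)
print("Hidden-layer weight shapes:", [w.shape for w in mlp.coefs_])
```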
With a lack of transparency come risks to accountability. If practitioners and those affected by an algorithm do not understand how it reached its predictions, it may be difficult to ensure the algorithm is functioning properly. Take, for example, how a self-driving car uses image recognition to identify and avoid objects. Transparency about what data were used and how the algorithm was trained is important for ensuring accountability in the performance of the self-driving car.

Finally, let's consider risks to privacy and security. Many of the AI systems you come into contact with daily rely on personal or sensitive data. For example, the results of a search engine query on your mobile device can take into account numerous personal factors, such as your previous browsing history, your location, and your behavior in other apps. AI systems can also be vulnerable to data-focused attacks, such as adversarial machine learning. Examples include techniques such as data corruption and poisoning, where a nefarious actor inputs malicious data to manipulate an algorithm's predictions (a brief sketch of this idea appears at the end of this transcript). Left unchecked, these attacks can cause serious harm.

Coming back to the self-driving car example, let's imagine that a nefarious actor wants to undermine the effectiveness of a self-driving car. They know that the car has been trained to recognize a stop sign and come to a complete stop. By strategically placing tape across the word "stop" on the sign, they may trick the algorithm into no longer being able to detect that the sign is a stop sign. The car maintains its speed and blows past the stop sign. This is clearly extremely dangerous to the driver and those around them.

While AI systems can pose serious risks, there are also effective strategies to mitigate those risks. We will explore some of these strategies later in this course when we talk about developing responsible AI.
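To make the data poisoning idea above concrete, here is a minimal, hypothetical sketch, not from the course. It assumes scikit-learn and synthetic data, and it shows one simple form of poisoning: an attacker flips a fraction of the training labels, and the model trained on the tainted data typically performs worse on held-out data than the model trained on clean data.

```python
# Minimal sketch (assuming scikit-learn) of label poisoning: flip a
# fraction of training labels and compare held-out accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean labels.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Accuracy with clean training data:", clean.score(X_test, y_test))

# An attacker flips 30% of the training labels (data poisoning).
rng = np.random.default_rng(0)
poisoned = y_train.copy()
flip = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[flip] = 1 - poisoned[flip]

# Model trained on the poisoned labels typically scores lower on the
# same held-out test set.
tainted = LogisticRegression(max_iter=1000).fit(X_train, poisoned)
print("Accuracy with poisoned training data:", tainted.score(X_test, y_test))
```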
