From the course: Responsible AI: Principles and Practical Applications
Next steps and resources
- In this video, I'll summarize key takeaways, actions you can take, and additional resources to help you determine the best strategies for developing or using AI responsibly. In this course, you were introduced to ways that AI is being used in human resources to streamline the hiring process, in social media to aid content moderation, in healthcare to support early detection and effective treatment, and in climate resilience to assess and address sustainability challenges. You were also introduced to ways that AI simultaneously poses risks in all of those domains, such as perpetuating bias and discrimination in HR and healthcare and increasing the energy consumption required to run large AI models. To mitigate these risks, appropriate safeguards must be put in place. If your organization is developing or implementing AI-enabled technologies, it is important to have an AI strategy that includes responsible AI principles and practices. I encourage you to check out AlgorithmWatch's AI Ethics Guidelines Global Inventory for examples of responsible AI principles and practices. Once responsible AI principles are adopted, a robust review and oversight process should be established to operationalize those principles. For example, a risk assessment can support the principle of fairness and non-discrimination by guiding developers and users to identify potential bias and discrimination. Corrective actions can then be taken to address any identified risks before the AI model is deployed. The National Institute of Standards and Technology (NIST) has developed the NIST AI Risk Management Framework, a helpful guide for AI risk identification and mitigation. Additionally, the National AI Advisory Committee, which comprises experts from academia, government, and industry, is developing guidance for the responsible development and use of AI. I encourage you to check out their work. In addition to risk assessments, organizations should implement continuous monitoring processes. 
In other words, AI-enabled technology should be reassessed throughout its life cycle to identify and mitigate risks as they emerge. Numerous companies, including Microsoft, Amazon, and Google, offer continuous monitoring tools and guidance. You play an important role in supporting the responsible development and use of AI within your organization. I hope this course has helped you better understand the power and limitations of AI and has provided you with strategies you can use to support its responsible development and use.