From the course: Using Generative AI Ethically at Work

Make ethical choices when using generative AI

- How can we make ethical choices when it comes to generative AI? Before we answer that question, let's quickly recap our discussion so far. You now know some hard truths about the ethical issues surrounding the design of generative AI, including questionably acquired, possibly biased training data, environmental impacts, and an unethical data supply chain. We've walked through the challenges of using generative AI to create content and the tricky questions that poses. And we've laid out the ways in which generative AI tools are not all the same when it comes to risk, but instead have different legal and technical guardrails. With these foundational elements in mind, let's go back to our discussion about ethical choices and the four factors we may wish to consider. They are beneficence, or upholding well-being; non-maleficence, or doing no harm; justice, or fairness; and finally, autonomy, or respecting others' agency and human rights. Generative AI was made in ways that are ethically questionable, but it's hard for you as the end user to directly address those issues. One choice you could make is to not engage with it. Don't use it at all. That's extreme, but it might be akin to taking a stance on animal rights and deciding that veganism is the right choice for you. There might be a personal cost to taking this approach, but maybe it's one you're willing to accept. Another alternative, assuming you want to move forward and use generative AI despite its ethical design flaws, is to focus on the ways in which you use it and make ethical choices from that standpoint. First, you can disclose when you are using generative AI. This upholds the value of transparency. By disclosing to others that the work in question involved the use of generative AI, you're respecting people's agency to choose whether or not they wish to engage with AI-generated content, and helping them better assess that content knowing it was AI generated. You can be responsible for the data you use in the prompt, ensuring you are not violating copyright or privacy at a minimum. You can also be thoughtful about using other people's informal communications, such as information sent to you in an email, in a generative AI tool. You can make decisions about the degree to which you might use the system. Recall our cupcake analogy, as well as the context for using it. Being responsible also includes verifying the content coming out of the AI system, ensuring it's not biased and that the information is accurate. As the person involved in creating the content, you are accountable for the veracity and quality of that content. These are basic things you can and should do in order to use generative AI responsibly. While you can't single-handedly address unethical design practices, you can raise your consumer voice and ask AI vendors to make better products. You can ask for ethically acquired training data, products where data workers are paid fairly and treated with respect, and AI tools that don't exacerbate climate change or environmental degradation. You can ask your company or organization to lend their voices as customers to these issues. AI vendors are already addressing corporate concerns about model accuracy, data privacy, and cybersecurity. Let it be known that these other issues matter just as much. By working collectively, we can help inform and shape the design choices that go into making generative AI.