From the course: Using Generative AI Ethically at Work

AI ethics: Cybersecurity and AI risks

- Alex is part of the cybersecurity team at Every Company and has recently been tasked with developing employee training on the safe use of Generative AI. New digital tools always add to the attack surface, making the work of cybersecurity more challenging. This new training aims to be preventative and to help Alex enlist colleagues to use the tools in safe and responsible ways. It needs to cover a wide range of AI tools used at Every Company, from generic ChatGPT to the art-generation tools marketing is experimenting with to the custom-developed in-house solution that HR plans to roll out. Each one is a little different, but there are some common themes that Alex can address from a cybersecurity perspective.

Alex is also wondering whether they can use Generative AI to help develop the training materials. They're cautious about how to do this, but why shouldn't the cybersecurity team benefit from the use of these tools? Alex ponders how to do this in a responsible manner. Given what you know about Generative AI risks, what topics would you recommend Alex cover? Here's a hint: think about data. How is it being shared? How might it be at risk? Take a minute to reflect on this question before we move on.

So let's talk about this idea of how Alex can use Generative AI to help write the material for this cybersecurity training. Remember our cupcake analogy? We can think about creating content with an AI tool as a continuum, from not using it at all, to using it as a digital assist, to fully offloading our task. Alex is aware that Generative AI isn't always accurate. They decide that using AI for a first draft or even for ideation isn't all that helpful, because they know more about cybersecurity in the Every Company context. While Alex is a subject matter expert, they know their writing style can be a little boring. They would like to ensure that the training is engaging, so they decide to use Generative AI as an editing tool to help punch up the copy, ensure it's grammatically correct, and make sure it fits with Every Company's communication style.

Alex is very aware of the environmental costs of data processing, which is one of the hard truths about Generative AI, and that means they're choosing to use the tool sparingly. Just like in our driving analogy, they choose to do it when warranted, but select other options that are more environmentally friendly whenever they can. Alex is also advocating for better, more ethically designed Generative AI choices with Every Company executives, making their voice heard on surveys and through other employee feedback tools. Alex hopes this feedback will eventually result in Every Company pushing its vendors to make the fair-trade-coffee version of Generative AI.