From the course: Using Generative AI Ethically at Work
AI ethics: Accountability and strategic use of AI
- Luna isn't sure why everyone is so impressed with generative AI. Not only does she fail to see clear productivity gains from the technology, especially in her work as a junior accountant, but people seem to be glossing over the ways it's actually creating extra work. There's that one sales rep at every company who uses generative AI for everything, including expense reports. The submissions are often incomplete or inaccurate. At first, she tried sending them back, but this guy is a rock star and she wanted to avoid a tense work relationship, so she quietly fixes the errors, keeping the peace while also ensuring accountability for the accuracy of the work.

But it's not just expense reports. Luna has noticed that generative AI tools aren't good with numbers in general, which isn't all that surprising. Hallucinations in generative AI tools are well known. Given the inaccurate, unreliable information coming out of these systems, Luna wonders if generative AI is simply the wrong tool for the job. Accounting needs to be reliable, and it benefits from stable, predictable, deterministic tools, not the unpredictable, probability-based outputs that generative AI produces. Luna is a big fan of more stable automation tools, things like robotic process automation or even good old-fashioned macros. New is not always better.

Despite these misgivings, Luna is feeling pressure to find ways to apply generative AI in her work because her company's executives are enamored with the technology, believing it will revolutionize the business and create all kinds of cost savings. But from her perspective, generative AI isn't all that helpful, so why should she manufacture ways to use it?

Throughout this whole course, there's been an implicit bias. We've assumed that you want to use generative AI, that it delivers benefits in your work, but you just aren't sure how to do it ethically. All of our content and examples have taken this perspective.
But it's also important to acknowledge that there may be certain roles, tasks, or even personal preferences for which generative AI isn't the right choice. Maybe it's not the right tool for every job, or perhaps it's not a fit for the person doing the work. If we go back to the idea of company policies, we can think about giving people not only the rules around generative AI use but also the flexibility to choose other tools that might be a better fit for their role. That choice respects their autonomy and agency.

In addition, Luna is seeing a negative impact from another colleague's use of generative AI, which is creating more work for her. This shifting of work within an organization is something to watch for, and it's challenging because it can lead to interpersonal conflict, often exacerbated by power dynamics in the organization. This relates to our principle of fairness. Management should not assume that the use of generative AI always produces a net productivity gain for the company overall. Organizations can support employees by providing clear feedback mechanisms, in other words, by upholding employee wellbeing while also minimizing negative impacts, which aligns with our principles of beneficence and non-maleficence.

Luna decides to speak with her manager and HR about her concerns, and she's thankful to find support. In fact, the HR manager, Aaliyah, is advocating for a clear organization-wide generative AI policy and says she will take Luna's feedback into that discussion with the executive team. Luna is proud of herself for having the courage to speak up, and she knows that her actions will benefit her colleagues across the company.