From the course: Using Generative AI Ethically at Work

Know your generative AI tool

- In early 2023, many organizations were banning the use of ChatGPT, citing concerns over sensitive data sharing and the risk of hallucinations, or inaccurate outputs, from the system. Yet less than a year later, many of those same companies are embracing generative AI tools. Consider JPMorgan Chase, which had originally banned its staff from using ChatGPT. It has since rolled out a tool called LLM Suite, partially built on ChatGPT, to 60,000 employees. Why the change of direction? It has to do with risk management.

Not all generative AI tools are the same, even when aspects of the underlying technology are the same. Generative AI tools can differ for legal reasons, such as the terms and conditions governing how data will be used, who will control it, or who might be liable in the case of an AI-related incident. They can also differ for technical reasons, such as the addition of various guardrails that make a tool less risky to use, or access to the tool within a more secure environment.

We can think of publicly accessible generative AI tools, the ones anyone can freely access, as the riskiest from a corporate standpoint. Let's run through some of the common risks. These tools are not behind a corporate firewall, which makes their use more of a cybersecurity risk. The legal terms and conditions that govern their use are the same for everyone, and may include clauses that allow data sharing in ways that don't align with a company's objectives to protect its data or intellectual property. And there are no additional technical guardrails to ensure greater accuracy or veracity of information, such as mapping the tool to a knowledge base of verified data relevant to the company's domain.

While publicly accessible generative AI tools don't address these risks, other variations do. For example, Microsoft's Copilot is essentially built in partnership with OpenAI using a version of GPT, the underlying model behind ChatGPT. Yet companies using Copilot can safeguard their data and negotiate terms with Microsoft that make them more comfortable deploying the solution in their organization. In essence, these organizations use contractual means to address the level of risk while also implementing greater cybersecurity measures.

Another common risk, hallucinations, can be addressed either by fine-tuning an AI model, essentially retraining it with additional data, or by mapping the model to a knowledge base, a technique known as retrieval-augmented generation, or RAG, sketched in code below. Both techniques shape the model's inputs in order to get better, more accurate outputs. This is particularly important in certain domains. For example, vendors building legal tech solutions need to know their generative AI models are safe and reliable for lawyers to use. These tools typically start from an existing generative AI model, such as OpenAI's GPT, but the vendors also train the model on additional legal data. While there are always some residual risks, this type of additional guardrail helps reduce inaccuracies and thus mitigate risk to an acceptable level. Companies can apply similar techniques using their own data to produce effectively customized versions of generative AI technology.
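To make the RAG idea concrete, here is a minimal sketch in Python. It is illustrative only: the KNOWLEDGE_BASE entries, the word-overlap score() function, and build_grounded_prompt() are hypothetical stand-ins, and the resulting prompt would be sent to whatever generative AI model an organization has approved.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Everything here is illustrative: the knowledge base, the crude
# word-overlap scoring, and the prompt template are hypothetical
# stand-ins for a real vector search and an approved model API.
from collections import Counter

# A tiny "verified" knowledge base; in practice, vetted company documents.
KNOWLEDGE_BASE = [
    "Employees may use the approved internal AI tool for drafting memos.",
    "Confidential client data must never be pasted into public AI tools.",
    "All AI-generated legal text requires review by a licensed attorney.",
]

def score(query: str, doc: str) -> int:
    """Crude relevance score: number of words the query and document share."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k knowledge-base entries most relevant to the query."""
    ranked = sorted(KNOWLEDGE_BASE, key=lambda doc: score(query, doc), reverse=True)
    return ranked[:k]

def build_grounded_prompt(query: str) -> str:
    """Prepend retrieved, verified context so the model answers from it
    rather than from its training data alone, limiting hallucinations."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return (
        "Answer using ONLY the verified context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )

if __name__ == "__main__":
    # The grounded prompt would be sent to the approved model.
    print(build_grounded_prompt("Can I paste client data into a public AI tool?"))
```

A production system would use embedding-based vector search rather than word overlap, but the grounding step, retrieving verified documents and constraining the model to answer from them, is the same idea.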
While generative AI tools might seem similar on the surface, each tool has different legal terms of use and technical specifications, so it's important to know the details behind a tool before starting to use it. It's also important to note that mitigating the risks of how these tools are used in corporate environments doesn't address the hard truths of generative AI: the environmental, training data, and ethical supply chain issues we've already discussed. To return to our earlier driving analogy, we can implement speed limits and traffic lights, but that doesn't change the problem of pollution from cars.
