From the course: Microsoft Azure AI Essentials: Workloads and Machine Learning on Azure

Azure AI Content Safety demo

From the course: Microsoft Azure AI Essentials: Workloads and Machine Learning on Azure

Azure AI Content Safety demo

- [Instructor] Azure AI Content Safety can be demonstrated using the Content Safety Studio. First, you need an Azure Content Safety resource. In the Azure portal, search for Content Safety. Click Create. Assign the resource group location, name, and pricing tier. Click Review + create. Then click Create. To access the Content Safety Studio, search Azure Content Safety Studio in Bing. Sign in using your Azure credentials, and select the appropriate resource via the cog icon. Let's try the Moderate text content feature. The first sentence passes all four categories. The second sentence, even with misspellings like euthanasia and bleeding, it's blocked due to the Violence category as it describes harm to an animal. You can also adjust threshold levels. A Low level is strict test, while a High level allows more content to pass. Setting the Violence filter to High for the second sentence allows it to pass through. The third sentence demonstrates multiple risk categories. You'll see where the text fails, and the severity level detected. In Protected material detection, the system blocks copyrighted content, like a song. For image content moderation, the first image passes all categories. The second image shows High severity for Self-harm and Low for Violence. The third image is flagged for sexual content. Lastly, let's demo prompt shields. Even when asked to act as DAN or do anything now and generate harmful content, the filter still blocks such instructions.

Contents