Azure Content Safety provides the /contentsafety/image:analyze API for image analysis and moderation purposes. It’s similar to Azure’s text moderation API in a number of ways.
It takes three input parameters in the request body:
image (required): This is the main parameter of the API. You provide the image data that you want to analyze, either as Base64-encoded image content or as a blobUrl pointing to the image.
categories (optional): Similar to the text analysis API, you can use this parameter to pass the list of harm categories for which you want your image to be analyzed. By default, the API tests the image against all the default categories provided by the Azure Content Safety team.
outputType (optional): This refers to the number of severity levels the categories will have in the analysis results. This API only supports FourSeverityLevels. That is, severity values for any category will be 0, 2, 4, or 6.
A sample request body for image analysis can look something like this:
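{
  "image": {
    "content": "<Base64-encoded image data>"
  },
  "categories": ["Hate", "SelfHarm", "Sexual", "Violence"],
  "outputType": "FourSeverityLevels"
}
The content value is a placeholder for your Base64-encoded image bytes; to analyze an image stored in Azure Blob Storage, you'd supply blobUrl instead of content.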
The returned response will contain categoriesAnalysis, which is a list of ImageCategoriesAnalysis JSON objects that include the category and its severity level, as determined by the moderation API.
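For instance, an analysis that flags mild violence and nothing else might return a response like this (the severity values are illustrative):
{
  "categoriesAnalysis": [
    { "category": "Hate", "severity": 0 },
    { "category": "SelfHarm", "severity": 0 },
    { "category": "Sexual", "severity": 0 },
    { "category": "Violence", "severity": 2 }
  ]
}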
Since this module will use the Python SDK provided by the Azure team instead of making raw API calls, let’s quickly cover everything you need to know about the SDK for image moderation.
Understanding Azure AI Content Safety Python Library for Image Moderation
The first step in creating an image moderation system using Azure's Python SDK is to create an instance of ContentSafetyClient — similar to what you did for text moderation.
from azure.core.credentials import AzureKeyCredential
from azure.ai.contentsafety import ContentSafetyClient
# Create an Azure AI Content Safety client
endpoint = "https://<my-custom-subdomain>.cognitiveservices.azure.com/"
credential = AzureKeyCredential("<api_key>")
client = ContentSafetyClient(endpoint, credential)
The above code is the same as the one you used for text moderation. If you want to understand it in detail, you can revisit the Understanding Text Moderation API section.
Going ahead, you can create the request to analyze the image using the following code:
from azure.ai.contentsafety.models import AnalyzeImageOptions, ImageData

# Build request
with open(image_path, "rb") as file:
    request = AnalyzeImageOptions(image=ImageData(content=file.read()))

# Analyze image
response = client.analyze_image(request)
In the code above, you're sending your request to the client using an AnalyzeImageOptions object.
Understanding AnalyzeImageOptions
Similar to AnalyzeTextOptions, the AnalyzeImageOptions object is used to construct the request for image analysis. It has the following properties:
image (required): This will contain the information about the image that needs to be analyzed. It accepts ImageData as the data type. The ImageData object accepts two types of values - content and blob_url. You're allowed to provide only one of these. When providing image data as content, the image should be in Base64-encoded format, the image dimensions should be between 50 x 50 pixels and 7,200 x 7,200 pixels, and the file size shouldn't exceed 4MB.
categories (optional): You can use this property to specify the categories for which you want to analyze your image. If not specified, the moderation API will analyze content for all categories. It accepts a list of ImageCategory. At the time of writing this module, the possible values include - ImageCategory.HATE, ImageCategory.SEXUAL, ImageCategory.VIOLENCE, and ImageCategory.SELF_HARM.
output_type (optional): This refers to the number of severity levels the categories will have in analysis results. At the time of writing this module, it only accepts the FourSeverityLevels value, which is also its default value if not specified.
A sample AnalyzeImageOptions definition can look like this (a minimal sketch; image_path here is assumed to point to a local image file):
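from azure.ai.contentsafety.models import (
    AnalyzeImageOptions,
    ImageCategory,
    ImageData,
)

# Restrict the analysis to the HATE and SEXUAL categories; output_type
# defaults to "FourSeverityLevels", so passing it explicitly is optional.
with open(image_path, "rb") as file:
    request = AnalyzeImageOptions(
        image=ImageData(content=file.read()),
        categories=[ImageCategory.HATE, ImageCategory.SEXUAL],
        output_type="FourSeverityLevels",
    )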
Once the image analysis is finished, you can use the response received from the method client.analyze_image to decide whether to approve the image or block it.
The analyze_image method returns an AnalyzeImageResult. AnalyzeImageResult contains only one property - categories_analysis, which is a list of ImageCategoriesAnalysis. ImageCategoriesAnalysis contains the category analysis response determined by the image analysis API.
You can process the AnalyzeImageResult response in the following way:
from azure.ai.contentsafety.models import ImageCategory
from azure.core.exceptions import HttpResponseError

# 1. Analyze image
try:
    response = client.analyze_image(request)
except HttpResponseError as e:
    print("Analyze image failed.")
    if e.error:
        print(f"Error code: {e.error.code}")
        print(f"Error message: {e.error.message}")
        raise
    print(e)
    raise
# 2. Extract the result for each category
hate_result = next(
    item for item in response.categories_analysis
    if item.category == ImageCategory.HATE
)
self_harm_result = next(
    item for item in response.categories_analysis
    if item.category == ImageCategory.SELF_HARM
)
sexual_result = next(
    item for item in response.categories_analysis
    if item.category == ImageCategory.SEXUAL
)
violence_result = next(
    item for item in response.categories_analysis
    if item.category == ImageCategory.VIOLENCE
)
# 3. Print the harmful categories found in the image content
if hate_result:
print(f"Hate severity: {hate_result.severity}")
if self_harm_result:
print(f"SelfHarm severity: {self_harm_result.severity}")
if sexual_result:
print(f"Sexual severity: {sexual_result.severity}")
if violence_result:
print(f"Violence severity: {violence_result.severity}")
Here's a quick walkthrough of the previous code:
You send the analyze request and store the result in the response variable. If any error happens while performing the analysis, you use a try-except block to handle it.
The next function retrieves the first matching item from the categories_analysis list of the response for each category of interest.
It checks if results for each harmful category were found and prints their severity levels.
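From here, one way to turn these severity scores into an approve-or-block decision is to compare them against a threshold of your choosing. Here's a minimal sketch, assuming a blocking threshold of 2 (the threshold is a policy choice for illustration, not an Azure recommendation):
# Block the image if any category's severity meets or exceeds the
# chosen threshold. The value 2 is an assumed cutoff for illustration.
SEVERITY_THRESHOLD = 2

is_blocked = any(
    (item.severity or 0) >= SEVERITY_THRESHOLD
    for item in response.categories_analysis
)
print("Blocked" if is_blocked else "Approved")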