In this segment, you will explore the Azure AI Content Safety Text Moderation API in detail and learn how to use it in Python with the client SDK. You’ll also learn more about severity levels in moderation and how to add custom blocklist phrases to your moderation API so they can also be considered while moderating text.
Understanding the Azure AI Content Safety Text Moderation API
For text moderation, Azure AI Content Safety provides two types of API:
contentsafety/text:analyze: This is a synchronous API for analyzing potentially harmful text content. At the moment of writing this module, it supports four categories: Hate, Self-Harm, Sexual, and Violence.
contentsafety/text/blocklists/*: These are also a set of synchronous APIs that allow you to create, update, and delete blocklist terms that can be used with the text API. Usually, the default AI classifiers are sufficient for most content safety needs, but if you need to screen some terms specific to your use case, you can make use of these as well.
Moderating text by analyzing it with the Text API takes some input parameters in the request body:
text: This is the primary parameter and consists of the text you want to analyze. A single request can process up to 10K characters. Longer text needs to be split into multiple requests.
blocklistNames: This is an optional parameter. Using this parameter, you can also supply a list of blocklist names that you created.
categories: If you want to analyze your text on specific categories, you can provide those categories as a list here. This is also optional, and if it is not included, a default set of analysis results for the categories will be returned.
haltOnBlocklistHit: This is an optional parameter. When set to true, further analysis of harmful content will be skipped in cases where blocklists are hit; otherwise, the analysis will complete even if the blocklists are hit.
outputType: This is an optional parameter. It allows you to define the granularity of the severity scale. By default, its value will be FourSeverityLevels, that is, output analyses will contain severity in 4 levels - 0,2,4,6. If, instead, the EightSeverityLevels value is provided, the output analyses will contain severity in 8 levels - 0,1,2,3,4,5,6,7.
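The two scales relate in a simple way: the four-level scale is the eight-level scale condensed, with each pair of adjacent levels sharing an even-numbered bucket (0-1 → 0, 2-3 → 2, 4-5 → 4, 6-7 → 6). The helper below is an illustrative sketch of that condensation, not part of any Azure SDK:

```python
def to_four_level(severity: int) -> int:
    """Collapse an eight-level severity score (0-7) onto the
    four-level scale (0, 2, 4, 6) by pairing adjacent levels."""
    if not 0 <= severity <= 7:
        raise ValueError("severity must be between 0 and 7")
    # Integer division pairs 0-1, 2-3, 4-5, 6-7 into even buckets
    return (severity // 2) * 2

print(to_four_level(5))  # 4
```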
A sample request body for text analysis, for your reference, can look like this:
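For instance (the text, category, and blocklist values shown here are illustrative):

```json
{
  "text": "This is the text to analyze.",
  "categories": ["Hate", "Sexual", "SelfHarm", "Violence"],
  "blocklistNames": ["block_list"],
  "haltOnBlocklistHit": false,
  "outputType": "FourSeverityLevels"
}
```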
While the API can be called directly, thankfully, Azure also provides SDKs for various languages (Python, JavaScript, Java, .NET) to simplify integration for you.
Instead of making raw HTTP calls, you’ll use the Azure AI Content Safety client library for Python in this module. You can learn more about the API in Text Operations - Analyze Text and Text Blocklists.
Understanding the Azure AI Content Safety Client Python Library
The first step to using a content safety client is to create an instance of it. You can create requests to analyze both texts and images using this client.
A sample code to create the safety client will look like this:
from azure.core.credentials import AzureKeyCredential
from azure.ai.contentsafety import ContentSafetyClient
# Create an Azure AI Content Safety client
endpoint = "https://<my-custom-subdomain>.cognitiveservices.azure.com/"
credential = AzureKeyCredential("<api_key>")
content_safety_client = ContentSafetyClient(endpoint, credential)
To create ContentSafetyClient, you need two objects:
endpoint: This is the endpoint where the analysis requests will be made.
credential: You provide API keys used for authenticating your request. This is of type AzureKeyCredential.
Once you’ve created the client, you can then use it to create requests to analyze text content:
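Continuing from the client created above, a minimal sketch of such a request might look like this (the sample text and the `request` variable name are illustrative, and a valid endpoint and key are assumed, so this requires a live Azure resource to actually run):

```python
from azure.ai.contentsafety.models import AnalyzeTextOptions

# Build the request describing the text to analyze
request = AnalyzeTextOptions(text="Sample text to analyze.")

# Send the request to the service; returns an AnalyzeTextResult
response = content_safety_client.analyze_text(request)
```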
In the code above, you’re sending your request to the client using an AnalyzeTextOptions object.
Understanding AnalyzeTextOptions
The AnalyzeTextOptions object is used to construct the request for text analysis. It also allows you to customize text analysis requests to suit your specific needs. It has the following properties:
text (required): This holds the words that need to be analyzed. The text size should not exceed 10K characters. In case you have larger text, you must split the text and make separate calls for each chunk.
categories (optional): You can use this property to specify particular categories for which you want to analyze your textual content. If not specified, the moderation API should analyze content for all categories. It accepts a list of TextCategory. At the moment of writing this module, the possible values include - TextCategory.HATE, TextCategory.SEXUAL, TextCategory.VIOLENCE, and TextCategory.SELF_HARM.
blocklist_names (optional): You can provide the names of blocklists you created to block specific words and phrases for the use case. It accepts the blocklist names as a list of strings.
halt_on_blocklist_hit (optional): Similar to the Text API’s haltOnBlocklistHit. When set to true, it halts the further analysis of text in cases where blocklists are hit.
output_type (optional): This allows you to define the granularity of the severity scale. If no value is entered, the default value will be "FourSeverityLevels". It can take the value either as a string or as an object of type AnalyzeTextOutputType. At the moment of writing this module, the possible values of AnalyzeTextOutputType include AnalyzeTextOutputType.EIGHT_SEVERITY_LEVELS and AnalyzeTextOutputType.FOUR_SEVERITY_LEVELS.
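Since text beyond the 10K-character limit must be split into separate calls, a simple chunking helper can prepare the pieces. This is an illustrative sketch, not part of the SDK:

```python
def chunk_text(text: str, limit: int = 10_000) -> list[str]:
    """Split text into consecutive pieces of at most `limit` characters,
    so each piece fits within a single analyze request."""
    return [text[i:i + limit] for i in range(0, len(text), limit)]

print(chunk_text("hello world", limit=5))  # ['hello', ' worl', 'd']
```

Each chunk can then be sent as its own request, and the per-chunk results combined however your platform requires.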
A sample AnalyzeTextOptions definition can look like this:
# Create AnalyzeTextOptions
analyze_text_request = AnalyzeTextOptions(
text="This is the text to analyze.",
categories=[TextCategory.HATE, TextCategory.VIOLENCE],
blocklist_names=["block_list"],
halt_on_blocklist_hit=True,
output_type=AnalyzeTextOutputType.EIGHT_SEVERITY_LEVELS
)
Processing Analyses Response
Once the analysis of the text content is finished, you can use the response received from the method client.analyze_text to decide whether to approve the content or block it.
analyze_text has a return type of AnalyzeTextResult. Since it’s a JSON response converted into an object, you can check for the different properties like this:
# 1. analyze text
try:
    response = client.analyze_text(analyze_text_request)
except HttpResponseError as e:
    print("Analyze text failed.")
    if e.error:
        print(f"Error code: {e.error.code}")
        print(f"Error message: {e.error.message}")
    else:
        print(e)
    raise
# 2. extract result for each category (None if a category wasn't analyzed)
hate_result = next((item for item in response.categories_analysis
                    if item.category == TextCategory.HATE), None)
self_harm_result = next((item for item in response.categories_analysis
                         if item.category == TextCategory.SELF_HARM), None)
sexual_result = next((item for item in response.categories_analysis
                      if item.category == TextCategory.SEXUAL), None)
violence_result = next((item for item in response.categories_analysis
                        if item.category == TextCategory.VIOLENCE), None)
# 3. print the found harmful category in the text content
if hate_result:
    print(f"Hate severity: {hate_result.severity}")
if self_harm_result:
    print(f"SelfHarm severity: {self_harm_result.severity}")
if sexual_result:
    print(f"Sexual severity: {sexual_result.severity}")
if violence_result:
    print(f"Violence severity: {violence_result.severity}")
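Once you have the per-category severities, the approve-or-block decision is up to your platform policy. The helper below is a minimal sketch of such a decision; the function name and the threshold value are hypothetical policy choices, not part of the API:

```python
def should_block(severities: dict[str, int], threshold: int = 2) -> bool:
    """Block the content if any category's severity meets or
    exceeds the platform's chosen threshold."""
    return any(level >= threshold for level in severities.values())

print(should_block({"Hate": 2, "Sexual": 0, "Violence": 0}))  # True
```

Stricter platforms can lower the threshold, or apply different thresholds per category.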
If required, you can further customize the text moderation API results to detect blocklist terms that meet your platform needs. You’ll first need to add the blocklist terms to your moderation resource. Once they are added, you can use the blocklist during moderation by providing the blocklist names in the blocklist_names argument of AnalyzeTextOptions.
To add a blocklist, you’ll have to first create a blocklist client, which is very similar to a content safety client:
from azure.core.credentials import AzureKeyCredential
from azure.ai.contentsafety import BlocklistClient
# Create an Azure AI blocklist client
endpoint = "https://<my-custom-subdomain>.cognitiveservices.azure.com/"
credential = AzureKeyCredential("<api_key>")
client = BlocklistClient(endpoint, credential)
Next, to add the blocklist you can use the following code:
# 1. define blocklist name and description
blocklist_name = "TestBlocklist"
blocklist_description = "Test blocklist management."
# 2. call create_or_update_text_blocklist to create the block list
blocklist = client.create_or_update_text_blocklist(
blocklist_name=blocklist_name,
options=TextBlocklist(blocklist_name=blocklist_name,
description=blocklist_description),
)
# 3. if block list created successfully notify the user using print function
if blocklist:
    print("\nBlocklist created or updated: ")
    print(f"Name: {blocklist.blocklist_name}, Description: {blocklist.description}")
Then, you will also have to add some terms and phrases to your blocklist so that they can be used to flag text content containing objectionable words during moderation. Add the following terms and phrases like this:
# 1. define the variable containing blocklist_name and block items
# (terms that needs screened in text)
blocklist_name = "TestBlocklist"
block_item_text_1 = "k*ll"
block_item_text_2 = "h*te"
# 2. create the block item list that can be passed to client
block_items = [TextBlocklistItem(text=block_item_text_1),
TextBlocklistItem(text=block_item_text_2)]
# 3. add the block item list inside the blocklist_name using the
# function AddOrUpdateTextBlocklistItemsOptions
try:
    result = client.add_or_update_blocklist_items(
        blocklist_name=blocklist_name,
        options=AddOrUpdateTextBlocklistItemsOptions(blocklist_items=block_items)
    )
    # 4. print the response received by the server on successful addition
    for block_item in result.blocklist_items:
        print(
            f"BlockItemId: {block_item.blocklist_item_id}, "
            f"Text: {block_item.text}, "
            f"Description: {block_item.description}"
        )
# 5. Catch exception and notify the user if any error happened during
# adding the block terms
except HttpResponseError as e:
    print("\nAdd block items failed: ")
    if e.error:
        print(f"Error code: {e.error.code}")
        print(f"Error message: {e.error.message}")
    else:
        print(e)
    raise