In this segment, you'll explore the Azure Content Safety Text Moderation API in detail and learn how to use it in Python through the client SDK. You'll also learn more about severity levels in moderation and how to add custom blocklist phrases to your moderation resource so they're also considered while moderating text.
Understanding the Azure Content Safety Text Moderation API
For text moderation, Azure Content Safety provides two types of API:
contentsafety/text:analyze: This is a synchronous API for analyzing potentially harmful text content. At the time of writing this module, it supports four categories: Hate, Self-Harm, Sexual, and Violence.
contentsafety/text/blocklists/*: These are also a set of synchronous APIs that let you create, update, and delete blocklist terms that can be used with the text API. Usually, the default AI classifications are sufficient for most text content safety needs, but if you need to screen for terms specific to your use case, you can make use of them as well.
The text:analyze API accepts the following parameters in its request body (a raw-request sketch follows this list):
text: This is the required parameter and contains the text you want to analyze. A single request can process up to 10k characters. Longer text needs to be split into multiple requests.
blocklistNames: This is an optional parameter. Using this parameter, you can also supply a list of blocklist names that you created.
categories: If you want to analyze your text for specific categories only, you can provide those categories as a list here. This is also optional, and if it isn't included, analysis results for all the default categories will be returned.
haltOnBlocklistHit: This is an optional parameter. When set to true, further analyses of harmful content won't be performed in cases where blocklists are hit; otherwise, the analyses will be completed even if the blocklists are hit.
outputType: This is an optional parameter. It allows you to define the granularity of the severity scale. By default, its value will be FourSeverityLevels, that is, output analyses will contain severity in 4 levels - 0, 2, 4, 6. If, instead, the EightSeverityLevels value is provided, the output analyses will contain severity in 8 levels - 0, 1, 2, 3, 4, 5, 6, 7.
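To make the request shape concrete, here is a minimal sketch of calling text:analyze over raw HTTP with the requests library. The endpoint and key are placeholders, and the api-version value is an assumption, so check the REST reference for the version your resource supports.
import requests

# Placeholders: substitute your own resource endpoint and key
endpoint = "https://<my-custom-subdomain>.cognitiveservices.azure.com"
api_key = "<api_key>"

# The api-version below is an assumption; use the version documented
# for your resource
url = f"{endpoint}/contentsafety/text:analyze?api-version=2023-10-01"

payload = {
    "text": "This is the text to analyze.",  # required
    "categories": ["Hate", "Violence"],      # optional
    "blocklistNames": ["block_list"],        # optional
    "haltOnBlocklistHit": True,              # optional
    "outputType": "EightSeverityLevels",     # optional
}

response = requests.post(
    url,
    headers={"Ocp-Apim-Subscription-Key": api_key},
    json=payload,
)
print(response.json())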
Instead of making raw HTTP calls, you'll use the Azure AI Content Safety client library for Python in this module. You can learn more about the API in Text Operations - Analyze Text and Text Blocklists.
Understanding the Azure AI Content Safety Client Python Library
The first step to using a content safety client is to create an instance of it. You can create requests to analyze both text and images using this client.
Sample code to create the safety client will look like this:
from azure.core.credentials import AzureKeyCredential
from azure.ai.contentsafety import ContentSafetyClient
# Create an Azure AI Content Safety client
endpoint = "https://<my-custom-subdomain>.cognitiveservices.azure.com/"
credential = AzureKeyCredential("<api_key>")
content_safety_client = ContentSafetyClient(endpoint, credential)
To create the ContentSafetyClient, you need two objects: the endpoint URL of your Content Safety resource and an AzureKeyCredential wrapping its API key.
In the code above, you'll be sending your requests to the client using AnalyzeTextOptions objects.
Understanding AnalyzeTextOptions
The AnalyzeTextOptions object is used to construct the request for text analysis. It also allows you to customize text analysis requests to suit your specific needs. It has the following properties:
categories (optional): You can use this property to specify the categories for which you want to analyze your content. If not specified, the moderation API analyzes content for all the categories. It accepts a list of TextCategory values. At the time of writing this module, the possible values include TextCategory.HATE, TextCategory.SEXUAL, TextCategory.VIOLENCE, and TextCategory.SELF_HARM.
blocklist_names (optional): You can provide the names of the blocklists you created to block specific terms and phrases for your use case. It accepts the blocklist names as a list of strings.
halt_on_blocklist_hit (optional): Similar to the Text API's haltOnBlocklistHit. When set to true, it halts the further analyses of text in cases where blocklists are hit.
output_type (optional): This allows you to define the granularity of the severity scale. If no value is provided, the default value will be "FourSeverityLevels". It takes either a string or a value of type AnalyzeTextOutputType. At the time of writing this module, the possible values of AnalyzeTextOutputType include AnalyzeTextOutputType.EIGHT_SEVERITY_LEVELS and AnalyzeTextOutputType.FOUR_SEVERITY_LEVELS.
A sample AnalyzeTextOptions definition will look like this:
from azure.ai.contentsafety.models import (AnalyzeTextOptions, TextCategory,
                                           AnalyzeTextOutputType)

# Create AnalyzeTextOptions
analyze_text_request = AnalyzeTextOptions(
    text="This is the text to analyze.",
    categories=[TextCategory.HATE, TextCategory.VIOLENCE],
    blocklist_names=["block_list"],
    halt_on_blocklist_hit=True,
    output_type=AnalyzeTextOutputType.EIGHT_SEVERITY_LEVELS
)
Processing the Analysis Response
Once the analysis of the text content is finished, you can use the response received from the client.analyze_text method to decide whether to approve the content or block it.
analyze_text has a return type of AnalyzeTextResult. Since it's a JSON response converted into an object, the class has the following properties:
categories_analysis: It holds a value of type list[TextCategoriesAnalysis], where each TextCategoriesAnalysis object contains the analyzed category and the severity detected for it.
blocklists_match: It holds a value of type list[TextBlocklistMatch], where each TextBlocklistMatch object gives you the blocklist_name, blocklist_item_id, and blocklist_item_text of a match (printing these is shown after the analysis code below).
You can simply process the AnalyzeTextResult response in the following way:
# 1. Analyze text
try:
    response = client.analyze_text(analyze_text_request)
except HttpResponseError as e:
    print("Analyze text failed.")
    if e.error:
        print(f"Error code: {e.error.code}")
        print(f"Error message: {e.error.message}")
        raise
    print(e)
    raise
# 2. extract result for each category; default to None if the
# category is missing from the response
hate_result = next((item for item in response.categories_analysis
                    if item.category == TextCategory.HATE), None)
self_harm_result = next((item for item in response.categories_analysis
                         if item.category == TextCategory.SELF_HARM), None)
sexual_result = next((item for item in response.categories_analysis
                      if item.category == TextCategory.SEXUAL), None)
violence_result = next((item for item in response.categories_analysis
                        if item.category == TextCategory.VIOLENCE), None)
# 3. print the found harmful category in the text content
if hate_result:
print(f"Hate severity: {hate_result.severity}")
if self_harm_result:
print(f"SelfHarm severity: {self_harm_result.severity}")
if sexual_result:
print(f"Sexual severity: {sexual_result.severity}")
if violence_result:
print(f"Violence severity: {violence_result.severity}")
If required, you can further customize the text moderation API results to detect blocklist terms that meet your platform needs. You'll first need to add the blocklist terms to your moderation resource. Once they're added, you can use the blocklist for moderation by providing the blocklist names in the blocklist_names argument of AnalyzeTextOptions.
To add a blocklist, you'll first have to create a blocklist client, similar to the content safety client:
from azure.core.credentials import AzureKeyCredential
from azure.core.exceptions import HttpResponseError
from azure.ai.contentsafety import BlocklistClient
from azure.ai.contentsafety.models import (TextBlocklist, TextBlocklistItem,
                                           AddOrUpdateTextBlocklistItemsOptions)

# Create an Azure AI blocklist client
endpoint = "https://<my-custom-subdomain>.cognitiveservices.azure.com/"
credential = AzureKeyCredential("<api_key>")
client = BlocklistClient(endpoint, credential)
Next, to add the blocklist, you can use the following code:
# 1. define blocklist name and description
blocklist_name = "TestBlocklist"
blocklist_description = "Test blocklist management."
# 2. call create_or_update_text_blocklist to create the block list
blocklist = client.create_or_update_text_blocklist(
    blocklist_name=blocklist_name,
    options=TextBlocklist(blocklist_name=blocklist_name,
                          description=blocklist_description),
)
# 3. if block list created successfully notify the user using print function
if blocklist:
    print("\nBlocklist created or updated: ")
    print(f"Name: {blocklist.blocklist_name}, Description: {blocklist.description}")
Then, you'll also have to add the terms and phrases that need to be screened to your blocklist, so that text content can be flagged with appropriate moderation results whenever any of the blocklist terms and phrases are found:
# 1. define the variable containing blocklist_name and block items
# (terms that need to be screened in text)
blocklist_name = "TestBlocklist"
block_item_text_1 = "k*ll"
block_item_text_2 = "h*te"
# 2. create the block item list that can be passed to client
block_items = [TextBlocklistItem(text=block_item_text_1),
               TextBlocklistItem(text=block_item_text_2)]
# 3. add the block item list inside the blocklist_name using the
# function AddOrUpdateTextBlocklistItemsOptions
try:
    result = client.add_or_update_blocklist_items(
        blocklist_name=blocklist_name,
        options=AddOrUpdateTextBlocklistItemsOptions(blocklist_items=block_items)
    )
    # 4. print the response received by the server on successful addition
    for block_item in result.blocklist_items:
        print(f"BlockItemId: {block_item.blocklist_item_id}, "
              f"Text: {block_item.text}, "
              f"Description: {block_item.description}")
# 5. Catch exception and notify the user if any error happened during
# adding the block terms
except HttpResponseError as e:
    print("\nAdd block items failed: ")
    if e.error:
        print(f"Error code: {e.error.code}")
        print(f"Error message: {e.error.message}")
        raise
    print(e)
    raise
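With the terms in place, you can tie everything together: pass the blocklist name through AnalyzeTextOptions and inspect the matches in the result. Here is a minimal sketch, reusing the content_safety_client from earlier and the TestBlocklist created above:
# Analyze text against the blocklist created above
analyze_request = AnalyzeTextOptions(
    text="I h*te you and want to k*ll you.",
    blocklist_names=["TestBlocklist"],
    halt_on_blocklist_hit=True,
)
response = content_safety_client.analyze_text(analyze_request)

# Print the blocklist terms matched in the text, if any
if response.blocklists_match:
    for match in response.blocklists_match:
        print(f"Blocklist: {match.blocklist_name}, Term: {match.blocklist_item_text}")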
You can learn more about adding block terms and other text blocklist management APIs in Manage Text Blocklists.