# 1
import os
# 2
from dotenv import load_dotenv
# 3
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeImageOptions, ImageCategory, ImageData
from azure.core.credentials import AzureKeyCredential
from azure.core.exceptions import HttpResponseError
Run the cell to import these statements, and wait for the code execution to finish.
Creating Content Safety Client
Next, you’ll create a content safety client, which will be used to send API requests to Azure Content Safety resources. Replace the # TODO: Create content safety client comment with the following code:
# 1 Load your Azure Safety API key and endpoint
load_dotenv()
# 2
key = os.environ["CONTENT_SAFETY_KEY"]
endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]
# 3 Create a Content Safety client
client = ContentSafetyClient(endpoint, AzureKeyCredential(key))
In the above code:
You’re using the load_dotenv function to load the content from your .env file into the environment variables of your application.
Make sure to copy the .env file that you created earlier to the project directory. Then, run the cell to create the content safety client.
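In case your .env file isn’t set up yet: it’s a plain-text file of key-value pairs that load_dotenv reads. Here’s a minimal sketch - both values are placeholders, so substitute the key and endpoint from your own Azure resource:

CONTENT_SAFETY_KEY=<your-content-safety-key>
CONTENT_SAFETY_ENDPOINT=https://<your-resource-name>.cognitiveservices.azure.com/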
Creating Moderate Image Function
Next, create the moderate_image function. This will be used to send the image for analysis; the response will then be processed to determine whether the image can be allowed or rejected for posting. Replace # TODO: Implement moderate image function with the following code:
# 1
def moderate_image(image_data):
    # 2 Construct a request
    request = AnalyzeImageOptions(image=ImageData(content=image_data))

    # 3 Analyze image
    try:
        response = client.analyze_image(request)
    except HttpResponseError as e:
        print("Analyze image failed.")
        if e.error:
            print(f"Error code: {e.error.code}")
            print(f"Error message: {e.error.message}")
            raise
        print(e)
        raise

    ## TODO: Process moderation response to determine if the image
    # is approved or rejected

    # 4 If content is appropriate
    return "Post successful"
Here’s what the following code does:
It creates the function moderate_image, which takes image_data as an argument and returns whether the shared image is approved or rejected for posting.
Then, it constructs the request using AnalyzeImageOptions. You’ve provided the base64-encoded image to ImageData, which is passed to AnalyzeImageOptions using the image argument. Also, by default, output_type is set to FourSeverityLevels, so the severity returned for each category can only be 0, 2, 4, or 6.
Finally, if the content is appropriate, you return the string "Post successful".
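If you’d like to try the function on its own, here’s a minimal sketch of preparing image_data from a file on disk - the file name sample_post.jpg is hypothetical:

# A sketch, not part of the lesson code: read a local image as raw
# bytes to use as image_data. ImageData accepts the raw content, and
# the SDK base64-encodes it when the request is serialized.
with open("sample_post.jpg", "rb") as f:
    image_data = f.read()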
Next, it’s time to implement the logic to determine if the image is safe, and whether the request should be approved or rejected because it violates the flagging rules. Replace ## TODO: Process moderation response to determine if the image is approved or rejected with the following code:
# 1 Extract results
categories = {
    ImageCategory.HATE: None,
    ImageCategory.SELF_HARM: None,
    ImageCategory.SEXUAL: None,
    ImageCategory.VIOLENCE: None
}

# 2
for item in response.categories_analysis:
    if item.category in categories:
        categories[item.category] = item

# 3
hate_result = categories[ImageCategory.HATE]
self_harm_result = categories[ImageCategory.SELF_HARM]
sexual_result = categories[ImageCategory.SEXUAL]
violence_result = categories[ImageCategory.VIOLENCE]

# 4 Check for inappropriate content
violations = []
if hate_result and hate_result.severity > 2:
    violations.append("hate speech")
if self_harm_result and self_harm_result.severity > 3:
    violations.append("self-harm references")
if sexual_result and sexual_result.severity > 0:
    violations.append("sexual references")
if violence_result and violence_result.severity > 2:
    violations.append("violent references")

# 5
if violations:
    return (
        f"Your shared image contains {', '.join(violations)} that violate "
        "our community guidelines. Please modify your image to adhere to "
        "community guidelines."
    )
Make sure to fix the indentation of the code by selecting the above code and pressing the Tab key if it isn’t aligned correctly with respect to the moderate_image function.
Here’s the explanation for the code:
You created a dictionary of ImageCategory values, which will be iterated over to extract the category results from the moderation response.
You then extract each category’s result into a separate variable from the category dictionary.
This is the central part of the processing logic. Here, you’re determining if the image that was requested for moderation is found inappropriate for any of the following categories - hate, self-harm, violence, and sexual - based on the category thresholds defined. If the output category level is found to be more than its respective category’s threshold value, then you append the category name to the violations list.
Finally, if any violation exists in the violations list, you inform the user by returning a message mentioning the categories the image violated, and ask them to modify the image so that it adheres to the community guidelines. If no violation is found, the user is notified that the post is successful.
Now, run the cell to ensure the updated function is error-free.
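As an aside, the per-category thresholds (2, 3, 0, and 2) are hard-coded in the checks above. Purely as an illustrative alternative - not the lesson’s code - you could keep them in one dictionary and derive the violations list from it:

# Illustrative alternative: data-driven threshold checks. The labels and
# thresholds mirror the lesson's hard-coded values.
THRESHOLDS = {
    ImageCategory.HATE: (2, "hate speech"),
    ImageCategory.SELF_HARM: (3, "self-harm references"),
    ImageCategory.SEXUAL: (0, "sexual references"),
    ImageCategory.VIOLENCE: (2, "violent references"),
}
violations = [
    label
    for category, (threshold, label) in THRESHOLDS.items()
    if categories[category] and categories[category].severity > threshold
]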
Exploring Image Moderation Function
Here comes the fun part! You can now share images with moderate_image to see whether they’re approved or rejected.
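For example - the file path here is hypothetical, so point it at any image you want to test:

# Hypothetical usage: moderate a local image before posting it.
with open("images/beach_party.jpg", "rb") as f:
    result = moderate_image(f.read())
print(result)

If the image passes every check, you’ll see "Post successful"; otherwise, you’ll get back a message listing the categories it violated.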