Finally, you're creating the moderator_client that will be used to screen and analyze posts, by passing the endpoint and Azure key credential to the ContentSafetyClient object.
If you observe, the editor threw some squiggly lines below the imports you typed. It shows a message that "Import "X" could not be resolved." To fix this, run the following command in the terminal and hit Enter.
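Assuming the unresolved imports are the azure.ai.contentsafety ones used throughout this lesson, the package to install is the Azure AI Content Safety SDK:

pip install azure-ai-contentsafety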
Make sure to replace <your-endpoint> and <your-content-safety-key> with the endpoint and content safety key that Azure displayed to you when you created the resource. Earlier, you replaced the values in your code file with dummy ones. These values will route your requests through to your Azure Content Safety resource.
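For reference, here's a minimal sketch of what that client setup looks like, with placeholder strings standing in for your real values:

from azure.ai.contentsafety import ContentSafetyClient
from azure.core.credentials import AzureKeyCredential

# Placeholder values -- swap in the endpoint and key from your Azure resource
moderator_client = ContentSafetyClient(
    "<your-endpoint>",
    AzureKeyCredential("<your-content-safety-key>")
)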
Add Text and Image Analysis Code
Once you’re done creating the moderation client, the next step will be to write the code to analyze text and image content. Open the starter/business_logic.py file again and replace # TODO: Check for the content safety with the following code:
# 1. Check for the content safety
text_analysis_result = analyze_text(client=moderator_client, text=text)
image_analysis_result = analyze_image(client=moderator_client, image_data=image_data)
# 2
## TODO: Logic to evaluate the content
You're calling the functions analyze_text and analyze_image to analyze text and image content, respectively. These two functions accept the following arguments: a) client - will be used to create the request, b) text or image_data - this is the data that needs to be analyzed.
Finally, you added the TODO comment, where you'll write the actual logic of evaluating content next.
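One assumption worth noting: because analyze_text and analyze_image will live in the separate modules you're about to create, business_logic.py has to import them. If the starter file doesn't already declare these imports, add them at the top:

# Assumed imports -- pull the helpers from the modules created next
from text_analysis import analyze_text
from image_analysis import analyze_image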
To keep the code clean and easy to understand, you'll move the text and image analysis functions into their own files. Create a text_analysis.py file inside the root folder and add the following code:
# 1. Import packages
from azure.core.exceptions import HttpResponseError
from azure.ai.contentsafety.models import AnalyzeTextOptions, TextCategory, AnalyzeTextOutputType

# 2. Function to check if the text is safe for publication
def analyze_text(client, text):
    # 3. Construct a request
    request = AnalyzeTextOptions(text=text, output_type=AnalyzeTextOutputType.EIGHT_SEVERITY_LEVELS)

    # 4. Analyze text
    try:
        response = client.analyze_text(request)
    except HttpResponseError as e:
        print("Analyze text failed.")
        if e.error:
            print(f"Error code: {e.error.code}")
            print(f"Error message: {e.error.message}")
            raise
        print(e)
        raise

    # 5. Extract results
    categories = {
        TextCategory.HATE: None,
        TextCategory.SELF_HARM: None,
        TextCategory.SEXUAL: None,
        TextCategory.VIOLENCE: None
    }
    for item in response.categories_analysis:
        if item.category in categories:
            categories[item.category] = item
    hate_result = categories[TextCategory.HATE]
    self_harm_result = categories[TextCategory.SELF_HARM]
    sexual_result = categories[TextCategory.SEXUAL]
    violence_result = categories[TextCategory.VIOLENCE]

    # 6. Check for inappropriate content
    violations = {}
    if hate_result and hate_result.severity > 2:
        violations["hate speech"] = "yes"
    if self_harm_result and self_harm_result.severity > 4:
        violations["self-harm"] = "yes"
    if sexual_result and sexual_result.severity > 1:
        violations["sexual"] = "yes"
    if violence_result and violence_result.severity > 2:
        violations["violent references"] = "yes"
    return violations
This code might take you a while to understand in a single go, but you'll be using almost the same code in Lesson 5!
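To see how the function fits together, here's a hypothetical snippet (the sample text and the surrounding setup are assumptions) that runs a string through analyze_text and prints any flagged categories:

# Hypothetical usage -- moderator_client is the ContentSafetyClient from earlier
violations = analyze_text(client=moderator_client, text="Sample post text to screen")
if violations:
    print(f"Flagged categories: {', '.join(violations)}")
else:
    print("Text looks safe to publish.")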
Now, it's time to move ahead and create the analyze_image function. Create an image_analysis.py file inside the root folder of the project and add the following code:
# 1. Import the packages
from azure.core.exceptions import HttpResponseError
from azure.ai.contentsafety.models import AnalyzeImageOptions, ImageData, AnalyzeImageOutputType, ImageCategory

# 2. Function to check if the image is safe for publication
def analyze_image(client, image_data):
    # 3. Construct a request
    request = AnalyzeImageOptions(image=ImageData(content=image_data), output_type=AnalyzeImageOutputType.FOUR_SEVERITY_LEVELS)

    # 4. Analyze image
    try:
        response = client.analyze_image(request)
    except HttpResponseError as e:
        print("Analyze image failed.")
        if e.error:
            print(f"Error code: {e.error.code}")
            print(f"Error message: {e.error.message}")
            raise
        print(e)
        raise

    # 5. Extract results
    categories = {
        ImageCategory.HATE: None,
        ImageCategory.SELF_HARM: None,
        ImageCategory.SEXUAL: None,
        ImageCategory.VIOLENCE: None
    }
    for item in response.categories_analysis:
        if item.category in categories:
            categories[item.category] = item
    hate_result = categories[ImageCategory.HATE]
    self_harm_result = categories[ImageCategory.SELF_HARM]
    sexual_result = categories[ImageCategory.SEXUAL]
    violence_result = categories[ImageCategory.VIOLENCE]

    # 6. Check for inappropriate content
    violations = {}
    if hate_result and hate_result.severity > 2:
        violations["hate speech"] = "yes"
    if self_harm_result and self_harm_result.severity > 4:
        violations["self-harm references"] = "yes"
    if sexual_result and sexual_result.severity > 0:
        violations["sexual references"] = "yes"
    if violence_result and violence_result.severity > 2:
        violations["violent references"] = "yes"
    return violations
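The image_data argument is the raw bytes of the image. As a hypothetical example (the file name is made up), you could read a local file in binary mode and pass its contents straight in:

# Hypothetical usage -- read an image from disk as raw bytes
with open("post_image.jpg", "rb") as image_file:
    image_data = image_file.read()

violations = analyze_image(client=moderator_client, image_data=image_data)
print(violations)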
Now, you're ready to integrate everything and finalize your moderation function for the app. Head back to the file starter/business_logic.py and replace ## TODO: Logic to evaluate the content with the following:
# 1. No violations found, so the post is safe to publish
if len(text_analysis_result) == 0 and len(image_analysis_result) == 0:
    return None

# 2. Build a human-readable summary of the violations
status_detail = 'Your post contains references that violate our community guidelines.'
if text_analysis_result:
    status_detail = status_detail + '\n' + f"Violation found in text: {','.join(text_analysis_result)}"
if image_analysis_result:
    status_detail = status_detail + '\n' + f"Violation found in image: {','.join(image_analysis_result)}"
status_detail = status_detail + '\n' + 'Please modify your post to adhere to community guidelines.'

# 3. Return the moderation verdict
return {'status': "violations found", 'details': status_detail}
If the output of either of the preceding analysis results detects harmful content, the rest of the code executes. You've defined a new variable, status_detail, and appended the flagged categories to it in a human-readable manner, so that the user can be informed about them. You also requested that the post be updated to adhere to community guidelines.
Finally, you return the result of the safety check, so that the user can be informed about the violations found in the content and asked to update the post to address the raised concerns.
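To make the shape of that result concrete, here's roughly what the returned dictionary would look like if the text analysis flagged hate speech (an assumed example category):

{
    'status': 'violations found',
    'details': 'Your post contains references that violate our community guidelines.\n'
               'Violation found in text: hate speech\n'
               'Please modify your post to adhere to community guidelines.'
}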