Once you know your performance metrics and how to monitor them, you’ll be in a good position to optimize your AI agent.
Optimizing Prompts
If you aren’t getting the desired output from an LLM, one solution might be to improve your prompt. Study different prompt engineering techniques to guide the LLM toward better answers. Some techniques you should be familiar with are:
Chain-of-Thought: Tell the LLM to break the task down and solve it step by step.
Few-shot: Provide the LLM with several examples of the output you want for a given input.
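Few-shot prompting is easy to see in code. Below is a minimal sketch of a prompt builder; the sentiment-classification task and the example pairs are made up for illustration, so substitute your own domain.

```python
# A minimal few-shot prompt builder. The task description and the
# example pairs are hypothetical -- swap in your own.

def build_few_shot_prompt(task, examples, query):
    """Assemble a prompt from a task description, example pairs, and a query."""
    lines = [task, ""]
    for question, answer in examples:
        lines.append(f"Input: {question}")
        lines.append(f"Output: {answer}")
        lines.append("")
    # End with the real query so the LLM completes the final "Output:".
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Classify the sentiment of each sentence as positive or negative.",
    [("I loved this course!", "positive"),
     ("The demo kept crashing.", "negative")],
    "The streaming example worked on the first try.",
)
print(prompt)
```

The examples anchor the format, so the model is far more likely to answer with a single bare label instead of a paragraph.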
Prompt writing isn't an exact science. You should iteratively refine your prompt to discover what produces the best results. And if you change your LLM, you'll need to re-evaluate your prompts.
Optimizing Efficiency
You can do many things to improve the efficiency of your AI agents in terms of time and resources.
Time
One way to speed up an agent is to perform tasks in parallel rather than sequentially. For example, if two nodes both need to make an LLM call and neither depends on the result of the other, then this is a good candidate for running them in parallel.
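Outside of any framework, the idea looks like the following sketch, which uses Python's asyncio to run two independent calls concurrently. Here `call_llm` is a stub standing in for a real async LLM client call; it only simulates latency.

```python
import asyncio

# Sketch of running two independent LLM calls concurrently.
# call_llm stands in for a real async client call.

async def call_llm(prompt: str) -> str:
    await asyncio.sleep(0.1)  # simulate network latency
    return f"response to: {prompt}"

async def sequential():
    # Total time is roughly the sum of both calls.
    a = await call_llm("summarize the article")
    b = await call_llm("extract keywords")
    return [a, b]

async def parallel():
    # Neither call depends on the other, so run them concurrently;
    # total time is roughly the slowest single call.
    return await asyncio.gather(
        call_llm("summarize the article"),
        call_llm("extract keywords"),
    )

results = asyncio.run(parallel())
print(results)
```

With real network calls, the parallel version finishes in about the time of the slower request instead of the sum of both.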
LangGraph supports both sequential and parallel execution. It just depends on how you build your graph. In the image below, the graph is set up to execute sequentially.
[Image: Sequential graph]
However, the graph in this next example shows nodes B and C running in parallel.
[Image: Parallel graph]
Branching isn't limited to conditional edges. You can also add multiple normal edges that "fan out" from a given node. Then, they can "fan in" to a single node where the values are combined according to a reducer function. You can read more about this in the LangGraph branching documentation.
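The fan-out/fan-in idea can be sketched in plain Python. The branch functions and topic below are hypothetical stand-ins for graph nodes; in LangGraph itself you declare the combining behavior by attaching a reducer to a state key rather than calling one directly.

```python
import operator
from functools import reduce

# Hand-rolled sketch of fan-out / fan-in. Plain functions stand in
# for graph nodes so the idea is runnable on its own.

def fetch_headlines(topic):      # branch 1 (hypothetical node)
    return [f"{topic}: headline"]

def fetch_social_posts(topic):   # branch 2 (hypothetical node)
    return [f"{topic}: post"]

def fan_out(topic, branches):
    """Run every branch on the same input."""
    return [branch(topic) for branch in branches]

def fan_in(partial_results, reducer=operator.add):
    """Combine the branch outputs with a reducer function."""
    return reduce(reducer, partial_results)

parts = fan_out("AI agents", [fetch_headlines, fetch_social_posts])
combined = fan_in(parts)
print(combined)
```

Using `operator.add` as the reducer concatenates the branch lists, which mirrors the common pattern of accumulating results on a shared list in graph state.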
Another trick to get an agent to perform faster is to stream the tokens from the LLM rather than waiting for a request to complete before presenting it to the user. LangGraph supports this with astream_events. Read more about this in the streaming events documentation.
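The consuming side of streaming can be sketched without any LLM at all. In this sketch, an async generator stands in for a streaming API such as astream_events, and each chunk is displayed the moment it arrives; the token list is made up.

```python
import asyncio

# Display each chunk as it arrives instead of waiting for the full
# response. stream_tokens is a stand-in for a real streaming API.

async def stream_tokens(prompt: str):
    for token in ["Opt", "imize", " your", " agent", "!"]:
        await asyncio.sleep(0.01)  # simulate generation delay
        yield token

async def main():
    received = []
    async for token in stream_tokens("say something encouraging"):
        print(token, end="", flush=True)  # show partial output immediately
        received.append(token)
    print()
    return "".join(received)

text = asyncio.run(main())
```

The user starts reading after the first chunk instead of after the whole generation, which makes the perceived latency much lower even though the total time is unchanged.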
Resources
When you think about optimizing resources, consider how you can reduce token usage. Do you need to send the entire message conversation on every request? Probably not. You noticed in the Lesson 3 and 4 demos that when the screenshot image was converted to Base64, it was a massive text string. The tutorial didn't have you send the entire message list to the LLM because you wouldn't want to upload all those tokens on every request. You only needed the screenshot image when you were generating the contextual comments. After you had those, you no longer needed the image.
Even if you do need to retain a record of the chat history, there are tricks to cut down on token usage. For example, when the message list grows beyond a certain length, you can ask the LLM to summarize the chat history. Then, on future requests, you can drop the old messages and just include the summary. You'll find an example of this in the LangGraph documentation.
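Here's a minimal sketch of that summarize-and-drop trick. The threshold, message format, and `summarize_with_llm` stub are all placeholders; a real implementation would make an actual LLM call to produce the summary.

```python
# Cap chat history by replacing older messages with a single summary.

MAX_MESSAGES = 6  # arbitrary threshold for the example

def summarize_with_llm(messages):
    # Stub: real code would send these messages to an LLM and
    # ask it to produce a short summary.
    return f"Summary of {len(messages)} earlier messages."

def compact_history(messages, keep_recent=2):
    """Replace all but the most recent messages with one summary message."""
    if len(messages) <= MAX_MESSAGES:
        return messages
    older, recent = messages[:-keep_recent], messages[-keep_recent:]
    summary = {"role": "system", "content": summarize_with_llm(older)}
    return [summary] + recent

history = [{"role": "user", "content": f"message {i}"} for i in range(8)]
history = compact_history(history)
print(len(history))  # 3: one summary plus the two most recent messages
```

Every future request now carries one short summary plus the recent turns instead of the whole transcript.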
In addition to minimizing the number of tokens you use, you can also experiment with different models. The most powerful models are great for use cases that demand complex reasoning and exploring novel questions. Because of this, you may be able to maintain the quality of your agent while decreasing cost by using a more advanced model for complex reasoning tasks while using a cheaper model for simple tasks.
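A simple router along those lines might look like this sketch. The model names and task categories are placeholders, not real model identifiers; pricing and capability tiers vary by provider.

```python
# Route requests to different models by task difficulty.
# Model names below are hypothetical placeholders.

ADVANCED_MODEL = "big-reasoning-model"  # expensive, strong reasoning
CHEAP_MODEL = "small-fast-model"        # cheap, fine for simple tasks

COMPLEX_TASKS = {"plan", "analyze", "debug"}

def pick_model(task_type: str) -> str:
    """Send complex reasoning tasks to the advanced model, the rest to the cheap one."""
    return ADVANCED_MODEL if task_type in COMPLEX_TASKS else CHEAP_MODEL

print(pick_model("plan"))   # complex task -> advanced model
print(pick_model("greet"))  # simple task -> cheap model
```

Even a crude keyword-based router like this can cut costs noticeably when most of an agent's traffic is simple requests.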
Note: While optimization and efficiency are both important, don't worry if your agent consumes a lot of tokens. As mentioned previously, the cost of LLMs is on a downward trend. Things that are expensive today may be affordable tomorrow. And even if you continuously streamed tokens from an LLM provider, you'd probably still pay less per year than you would for a human.
Optimizing UX
Step back occasionally and ask yourself what would make the entire experience better for the end user. Perhaps you need to re-architect how the application works. Perhaps you need to use a more powerful model or a better text-to-speech engine. Maybe you need to work on decreasing latency. Don’t be afraid to make big changes or even start over from scratch if your current implementation isn’t working.
You also need to accept the limitations of the technology and the current models. LLMs still haven't reached the level of humans, so part of optimizing your agent's workflow might be to add more human-in-the-loop interactions.
This content was released on Nov 12 2024. The official support period is 6 months from this date.