Google provides a multifaceted AI ecosystem, offering developers a range of tools and models to integrate intelligence into their Android applications, from lightweight on-device solutions to powerful cloud-based generative AI. However, finding the right AI/ML solution for your app can be tricky! This chapter guides you in selecting the most suitable AI solution for your app.
To simplify your decision, ask yourself this:
What is the primary goal of the AI feature?
Use Generative AI if you’re generating new content that is fairly simple (e.g., text or an image) or performing simple text processing, such as summarizing, proofreading, or rewriting text.
Use Traditional ML to analyze existing data or input for prediction, or to process real-time streams like video or audio to classify, detect, or understand patterns.
Gemini Models: The Foundation of Intelligent Android Experiences
The Gemini family of models forms the backbone of Google’s AI strategy, offering different sizes and capabilities optimized for various use cases. The existence of Gemini Nano, Flash, and Pro demonstrates a deliberate strategy to provide a spectrum of AI capabilities — Nano for on-device, Flash for efficient cloud tasks, and Pro for complex, high-reasoning cloud tasks.
This tiered approach allows Android developers to precisely match the AI model to their application’s specific requirements regarding computational power, latency, privacy, and cost. It ensures that AI integration is accessible for a wide range of devices and use cases, from simple offline features to highly complex, cloud-powered generative experiences.
Gemini Nano
Optimized for on-device use cases, it enables generative AI experiences without requiring a network connection or sending data to the cloud.
Key Features:
On-Device Execution: Runs directly in Android’s AICore system service, leveraging device hardware for low inference latency and allowing user data to stay on the device.
ML Kit GenAI APIs: Provides a high-level interface for common on-device generative AI tasks such as summarization, proofreading, rewriting, and image description. This simplifies integration for developers.
Google AI Edge SDK: Offers experimental access for developers wanting to test and enhance their apps with on-device AI capabilities, providing a channel for deeper integration.
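To make the streamlined path concrete, here is a minimal Kotlin sketch of on-device summarization through the ML Kit GenAI Summarization API. It is based on the beta API shape; the exact class, builder, and option names are assumptions, so check them against the current ML Kit reference before using this.

```kotlin
// Sketch only: based on the beta ML Kit GenAI Summarization API.
// Class and builder names are assumptions; verify against current ML Kit docs.
import android.content.Context
import com.google.mlkit.genai.summarization.Summarization
import com.google.mlkit.genai.summarization.SummarizationRequest
import com.google.mlkit.genai.summarization.SummarizerOptions

fun summarizeOnDevice(context: Context, articleText: String): String {
    val options = SummarizerOptions.builder(context)
        .setInputType(SummarizerOptions.InputType.ARTICLE)
        .setOutputType(SummarizerOptions.OutputType.ONE_BULLET)
        .setLanguage(SummarizerOptions.Language.ENGLISH)
        .build()
    val summarizer = Summarization.getClient(options)

    // runInference returns a future; the blocking get() is for brevity only.
    // Production code should await the result off the main thread.
    val request = SummarizationRequest.builder(articleText).build()
    return summarizer.runInference(request).get().summary
}
```

Because inference runs on-device via Gemini Nano, no network call is made and the article text never leaves the device.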
Choosing Between On-device vs Cloud-based Approach
When integrating AI/ML features into your Android app, you must decide whether to process data on the device or in the cloud. Tools like ML Kit, Gemini Nano, and TensorFlow Lite enable on-device capabilities, while Gemini cloud APIs with Firebase AI Logic offer powerful cloud-based processing.
Customization: Cloud-based solutions offer greater flexibility and customization options for fine-tuning models.
Cross-platform support: Supporting AI features across platforms, such as iOS, is important. However, some on-device solutions, like Gemini Nano, may not be available on all operating systems.
On-device Generative AI
Gemini Nano is Android’s core on-device large language model, and it runs locally without a network connection. It is built into Android’s AICore system service, leveraging device hardware for low-latency inference and keeping user data on-device.
Use Cloud Generative AI when you need capabilities beyond what on-device models can handle: for example, long document analysis, code generation at scale, or multimodal tasks involving large images or video. Gemini in the cloud can process text, image, audio, and video inputs, as long as you send them over the network.
Do I prefer an easier implementation and a simpler API experience?
If the answer is yes, Firebase AI Logic is a strong candidate. Firebase AI Logic lets Android apps call state-of-the-art generative AI models in the cloud.
Firebase AI Logic offers different models and performance profiles based on what kind of generative task you need. The options are summarized below:
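Whichever model you pick, the call pattern through the Firebase AI Logic Kotlin SDK looks roughly the same. The following is a hedged sketch; the model name, backend choice, and import paths are assumptions based on the public `firebase-ai` SDK, so substitute whatever the current Firebase documentation recommends.

```kotlin
// Sketch only: Firebase AI Logic (firebase-ai) Kotlin SDK usage.
// Model name and backend choice are assumptions; check the Firebase docs.
import com.google.firebase.Firebase
import com.google.firebase.ai.ai
import com.google.firebase.ai.type.GenerativeBackend

suspend fun proofreadInCloud(draft: String): String? {
    // Gemini Flash: a reasonable default when speed and cost matter most.
    val model = Firebase.ai(backend = GenerativeBackend.googleAI())
        .generativeModel("gemini-2.5-flash")

    val response = model.generateContent(
        "Proofread the following text and return the corrected version:\n$draft"
    )
    return response.text
}
```

Swapping Flash for Pro (or Imagen for image work) is a one-line change to the model name, which is exactly why the flowchart later in this chapter treats them as separate choices.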
Another cloud-based solution is Google Cloud Platform, which is suitable if you are willing to manage your own backend integration and need:
A number of third-party tools.
Advanced fine-tuning.
Greater flexibility or control.
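If you go the Google Cloud route, your Android app typically talks to your own backend, which in turn calls Vertex AI. Purely to illustrate the shape of that server-side call, here is a hedged Kotlin sketch of a `generateContent` request over REST. The endpoint format and JSON fields follow Vertex AI’s public pattern, but `PROJECT_ID`, `REGION`, and the model name are placeholders, and authentication is assumed to come from an OAuth2 access token.

```kotlin
// Sketch only: server-side call to Vertex AI's generateContent REST endpoint.
// PROJECT_ID, REGION, and the model name are placeholders; the access token is
// assumed to come from OAuth2 (e.g. Application Default Credentials).
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

fun callVertexGemini(accessToken: String, prompt: String): String {
    val endpoint = "https://REGION-aiplatform.googleapis.com/v1/projects/PROJECT_ID" +
        "/locations/REGION/publishers/google/models/gemini-2.5-pro:generateContent"

    // Minimal request body: a single user turn with one text part.
    val body = """{"contents":[{"role":"user","parts":[{"text":"$prompt"}]}]}"""

    val request = HttpRequest.newBuilder(URI.create(endpoint))
        .header("Authorization", "Bearer $accessToken")
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build()

    // Returns the raw JSON response; a real backend would parse out
    // candidates[0].content.parts[0].text.
    return HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())
        .body()
}
```

Owning this call yourself is what buys the flexibility and control mentioned above, at the cost of managing auth, quotas, and the backend yourself.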
Conclusion
If there’s one thing I hope you take away from this chapter, it’s this: getting started with generative AI on Android isn’t about choosing the best model — it’s about choosing the right model for what you’re trying to build. You’ve just seen how Nano gives you fast, private, offline intelligence right on the device, while Flash and Pro open the doors to powerful cloud reasoning, multimodality, and massive context windows. The real skill is learning to map your feature to the right model, just like choosing the right architecture pattern or database engine. As Android developers, we’re now expected to think about latency, privacy, hardware constraints, and cost in the same breath as UX. That’s new—and exciting!
So as you start experimenting, don’t worry about memorizing every capability of every model. Instead, get comfortable asking the right questions:
What is the user trying to accomplish?
Does this need to work offline?
How complex is the task?
Do I care more about privacy, or more about capability?
| Choice | Flowchart Path | Primary Purpose | Solution |
| --- | --- | --- | --- |
| A | On-device → Streamlined Tasks | Simple, pre-built on-device generative tasks (Summarize, Rewrite, Image Descriptions). | ML Kit (Generative APIs) |
| B | On-device → Custom Access | For custom/open prompting on-device, beyond ML Kit's streamlined tasks. | Gemini Nano |
| C | Cloud → Firebase → Performance/Cost | Cloud generation prioritizing speed and cost-effectiveness for general tasks. | Firebase AI Logic (Gemini Flash) |
| D | Cloud → Firebase → Higher Quality/Capability | Cloud generation for complex reasoning and higher quality output. | Firebase AI Logic (Gemini Pro) |
| E | Cloud → Firebase → Advanced Image Generation | Cloud generation specifically for creating or understanding images. | Firebase AI Logic (Imagen 4) |
| F | Cloud → No Firebase Integration | Cloud generation for maximum flexibility and control outside of the Firebase ecosystem. | Google Cloud |
The flowchart can be your guide to quickly pick the right solution.
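The flowchart’s branches can also be encoded as a small Kotlin function, which makes the decision order explicit. All names here are mine for illustration; they don’t correspond to any SDK.

```kotlin
// Sketch only: the chapter's decision flowchart expressed as Kotlin.
// Enum and parameter names are illustrative, not part of any SDK.
enum class Solution {
    ML_KIT_GENAI,          // A: on-device, streamlined tasks
    GEMINI_NANO,           // B: on-device, custom prompting
    FIREBASE_GEMINI_FLASH, // C: cloud, speed and cost
    FIREBASE_GEMINI_PRO,   // D: cloud, highest quality/reasoning
    FIREBASE_IMAGEN,       // E: cloud, image generation
    GOOGLE_CLOUD           // F: cloud, no Firebase integration
}

fun chooseSolution(
    mustWorkOffline: Boolean,
    streamlinedTask: Boolean,   // summarize, rewrite, image description
    wantsFirebase: Boolean,
    imageGeneration: Boolean,
    needsTopReasoning: Boolean
): Solution = when {
    mustWorkOffline && streamlinedTask -> Solution.ML_KIT_GENAI
    mustWorkOffline -> Solution.GEMINI_NANO
    !wantsFirebase -> Solution.GOOGLE_CLOUD
    imageGeneration -> Solution.FIREBASE_IMAGEN
    needsTopReasoning -> Solution.FIREBASE_GEMINI_PRO
    else -> Solution.FIREBASE_GEMINI_FLASH
}
```

Try tracing the three quiz scenarios that follow through this function before checking the answer key.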
1. The Smart “Note-Taker” App
Scenario: You are building an intelligent note-taking application. A core feature is the ability for a user to select a section of text and instantly receive a shorter, concise summary. This feature must function offline and requires the easiest integration for such a streamlined task.
Your Choice: [Select A, B, C, D, E, or F]
2. The “Artistic Profile” App
Scenario: A popular social media app needs a feature that allows users to input a descriptive prompt (“A traveller playing a flute”) and have a unique, high-quality image generated for their profile picture.
Your Choice: [Select A, B, C, D, E, or F]
3. The “Long-form Editor” App
Scenario: Your professional document editor needs an AI assistant that can analyze a large, complex document (e.g., a 100-page PDF) and answer nuanced questions about its content. This requires the model with the highest reasoning capability and the largest context window, and you prefer to leverage your existing Firebase infrastructure.
Your Choice: [Select A, B, C, D, E, or F]
Answer Key and Explanation
1. The Smart “Note-Taker” App
| Choice | Flowchart Path | Solution | Reasoning |
| --- | --- | --- | --- |
| A | Generative AI → Function Offline (Yes) → Streamlined Tasks (Summarize, Rewrite, Image Descriptions) | ML Kit (Generative APIs) | ML Kit is the easiest integration point for the on-device Gemini Nano model when performing these common, pre-defined tasks. |
2. The “Artistic Profile” App
| Choice | Flowchart Path | Solution | Reasoning |
| --- | --- | --- | --- |
| E | Generative AI → Function Offline (No) → Ease of integration with Firebase (Yes) → Advanced Image Generation or Understanding | Firebase AI Logic (Imagen 4) | The task is specifically image generation, making Imagen 4 via the Firebase AI Logic SDK the correct choice. |
3. The “Long-form Editor” App
| Choice | Flowchart Path | Solution | Reasoning |
| --- | --- | --- | --- |
| D | Generative AI → Function Offline (No) → Ease of integration with Firebase (Yes) → Higher Quality and Capability | Firebase AI Logic (Gemini Pro) | Analyzing large, complex documents requires the highest reasoning and the largest context window, which are the primary strengths of Gemini Pro. |
By understanding this landscape and using the provided flowchart as your compass, you are now equipped to confidently select the optimal Generative AI solution for any feature, ensuring your Android apps are not just functional, but truly intelligent. Happy building!