Now that you have a model, you can integrate it into your iOS app. Find and open the starter project for this lesson; you'll find the materials for this project in the Walkthrough folder. Run the app, and you'll see a basic app that lets you select a photo using the photo picker and then displays it in the view. To help test the app, add the sample image from the starter project to the Photos app in the simulator if you didn't already do so in the previous lesson: open the folder containing the sample image in Finder and drag the sample-image.jpg file onto the iOS simulator.
Now activate the conda environment you used in the last lesson and change to the directory you worked in during that lesson. Start the Python interpreter and enter the following commands one line at a time:
from ultralytics import YOLO
import os

# Download the YOLOv8x Open Images V7 model and export a Core ML
# version with the weights quantized to Int8.
model = YOLO("yolov8x-oiv7")
model.export(format="coreml", nms=True, int8=True)
# Rename the quantized package so the next export doesn't overwrite it.
os.rename("yolov8x-oiv7.mlpackage", "yolov8x-oiv7-int.mlpackage")
# Export the same model again at full precision.
model.export(format="coreml", nms=True)
# Export the nano and medium variants of YOLOv8 for comparison.
model = YOLO("yolov8n-oiv7")
model.export(format="coreml", nms=True)
model = YOLO("yolov8m-oiv7")
model.export(format="coreml", nms=True)
You’ll find these commands in the materials as download-models.py.
You start by downloading the same Ultralytics model from the last lesson and converting it to Core ML with the weights quantized to Int8. You then rename the resulting .mlpackage model file so the next export won't overwrite it, and export the model again at the full Float16 size. Finally, you download and convert two smaller variants of the YOLOv8 model: the nano (n) and medium (m) sizes. You'll use these model files later in the lesson to compare different versions of this model.
Now, in Finder, find the following four model files: yolov8x-oiv7.mlpackage, yolov8x-oiv7-int.mlpackage, yolov8m-oiv7.mlpackage, and yolov8n-oiv7.mlpackage. Drag them into the Models group of the Xcode project. Make sure to set the Action to Copy files to destination and check the ImageDetection target for the copied files. Then click Finish.
Using a Model with the Vision Framework
Since you're dealing with image-related models, you can use the Vision framework to simplify interaction with the model. The Vision framework provides features for performing computer vision tasks in your app and fits nicely with any task where you analyze images or videos. It also abstracts away some basic tasks you'd otherwise need to handle manually. For example, the model expects a 640×640 image, meaning you'd need to resize each image before the model could process it. The Vision framework takes care of that for you.
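To get a sense of what Vision saves you, here's a minimal sketch, using Core Graphics, of the kind of manual resize you'd otherwise write. It's not part of this app's code, and it naively squashes the image to a square rather than letterboxing it the way a production pipeline would:

import CoreGraphics

// Scale a CGImage to the 640x640 input size the model expects.
// Returns nil if the bitmap context can't be created.
func resized(_ image: CGImage, to side: Int = 640) -> CGImage? {
  guard let context = CGContext(
    data: nil,
    width: side,
    height: side,
    bitsPerComponent: 8,
    bytesPerRow: 0,
    space: CGColorSpaceCreateDeviceRGB(),
    bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue
  ) else { return nil }
  context.interpolationQuality = .high
  // Drawing into the square context ignores the source aspect ratio.
  context.draw(image, in: CGRect(x: 0, y: 0, width: side, height: side))
  return context.makeImage()
}

With Vision, none of this is necessary: the framework scales and crops the input for you based on the request's imageCropAndScaleOption.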
To use the framework, open ContentView.swift and add the following import below the existing import at the top of the file:
import Vision
Now add the following method to the view after the cgImage parameter:
func runModel() {
  guard
    // 1
    let cgImage = cgImage,
    // 2
    let model = try? yolov8x_oiv7(configuration: .init()).model,
    // 3
    let detector = try? VNCoreMLModel(for: model) else {
    // 4
    print("Unable to load photo.")
    return
  }
}
If any of these steps fail, the guard/else block takes over, which will print a debugging message and return. In a real app, you'd need to provide more information and assistance to the user.
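As a rough sketch of what that could look like, you might store an error message in view state and show it with an alert. None of the following names exist in the starter project; this is only one way you might surface the failure:

import SwiftUI

// A minimal, self-contained example of surfacing a failure to the user
// instead of only printing to the console.
struct DetectionErrorExample: View {
  @State private var errorMessage: String?

  var body: some View {
    Button("Run Model") {
      // Stand-in for the guard/else failure path in runModel().
      errorMessage = "Unable to load the photo or the model."
    }
    .alert(
      "Detection Failed",
      isPresented: Binding(
        get: { errorMessage != nil },
        set: { if !$0 { errorMessage = nil } }
      )
    ) {
      Button("OK", role: .cancel) {}
    } message: {
      Text(errorMessage ?? "")
    }
  }
}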
Now add the following code to the end of your new function:
// 1
let visionRequest = VNCoreMLRequest(model: detector) { request, error in
  if let error = error {
    print(error.localizedDescription)
    return
  }
  // 2
  if let results = request.results as? [VNRecognizedObjectObservation] {
    // Insert result processing code here
  }
}
This code builds the Vision request to run the model against an image but doesn't execute it. It also defines a closure that will execute when the detection completes. Here's what each step does.
A VNCoreMLRequest creates the actual request for the Vision framework. You pass the VNCoreMLModel that you instantiated in step three as a parameter. The results of the request will be sent back to the closure as a VNRequest named request, with any errors passed through the closure's error parameter. An error will be sent if the model run fails or if the model is incompatible with Vision, such as a model that doesn't accept an image as input. If an error occurs, you print it and return.
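Note that creating the request doesn't run it. For context, a Vision request like this is typically performed by handing it to a VNImageRequestHandler, along these lines; the cgImage and visionRequest names come from the code above, and this sketch anticipates a step the lesson hasn't written yet:

// Performing the request: Vision runs the model against the image and
// then calls the completion closure defined above.
let handler = VNImageRequestHandler(cgImage: cgImage)
do {
  try handler.perform([visionRequest])
} catch {
  print(error.localizedDescription)
}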
If no error occurs, you then attempt to extract the results from the VNRequest as VNRecognizedObjectObservation objects. These objects contain the results of object detection, which is exactly what you're doing in this app with this model. You'll dive into this much more in the next section, where you'll use that information.
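As a preview, here's a sketch of the kind of processing that could replace the // Insert result processing code here comment. The labels and boundingBox properties are part of Vision's API; the rest is illustrative:

// Each observation pairs a bounding box with classification labels,
// sorted from most to least confident.
for observation in results {
  guard let topLabel = observation.labels.first else { continue }
  print("Detected \(topLabel.identifier) " +
    "with confidence \(topLabel.confidence) " +
    "at \(observation.boundingBox)")
}

Keep in mind that boundingBox uses normalized coordinates (0 to 1) with a lower-left origin, so you'd convert it before drawing over an image in SwiftUI.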