If you use Facebook, you have probably noticed that it detects faces in photos with quite good precision, but have you ever wondered what kind of technology could provide the same functionality in your mobile application? Face detection is not new to iOS. In fact, since iOS 5, CoreImage's CIDetector has let us detect faces and some details such as eye and mouth positions.
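For reference, here's what that older approach looks like. This is just an illustrative sketch; the function name and input image are my own placeholders:

```swift
import UIKit
import CoreImage

// A sketch of the pre-Vision approach: CIDetector, available since iOS 5.
func detectFacesWithCIDetector(in image: UIImage) {
    guard let ciImage = CIImage(image: image) else { return }

    // CIDetectorAccuracyHigh trades speed for precision.
    let detector = CIDetector(ofType: CIDetectorTypeFace,
                              context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])

    let faces = detector?.features(in: ciImage) as? [CIFaceFeature] ?? []
    for face in faces {
        print("Face at \(face.bounds)")
        if face.hasLeftEyePosition { print("Left eye: \(face.leftEyePosition)") }
        if face.hasMouthPosition { print("Mouth: \(face.mouthPosition)") }
    }
}
```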

Image from WWDC 2017

Not all faces could be detected, due to occlusions like hair, glasses, hats, or head position, but that level of precision was very good a few years ago. Over the years the technology has evolved and users have become more demanding. Because of this, the native solution from Apple was no longer good enough, so many developers migrated to OpenCV: a better alternative, at the cost of more complex code. With iOS 11, things changed quite a lot thanks to a new native computer vision framework called Vision.

Vision Framework

Before this release, iOS didn't have a good computer vision framework integrated with the system. You could do some things with CoreImage and AVFoundation's capture APIs, but with limitations. Vision enhanced the developer tools for image and video processing with features like the following (a short code sketch follows the list):

  • Face detection with landmarks
  • Object tracking
  • Barcode detection
  • Text detection

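To give a feel for the new API, here's a minimal sketch of a face detection request with landmarks. The function name and dispatch setup are my own illustration, not from the original article:

```swift
import UIKit
import Vision

// A minimal sketch (iOS 11+): run a face landmarks request on a UIImage.
func detectFaceLandmarks(in image: UIImage) {
    guard let cgImage = image.cgImage else { return }

    let request = VNDetectFaceLandmarksRequest { request, _ in
        guard let faces = request.results as? [VNFaceObservation] else { return }
        for face in faces {
            // boundingBox is in normalized coordinates (0...1, origin at bottom-left).
            print("Face at \(face.boundingBox)")
            if let landmarks = face.landmarks {
                print("Left eye: \(landmarks.leftEye?.normalizedPoints ?? [])")
                print("Mouth: \(landmarks.outerLips?.normalizedPoints ?? [])")
            }
        }
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    // Vision requests can be slow; keep them off the main thread.
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}
```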
Yes, we got another face detection framework, but this time with far more capabilities than its predecessors, along with some features we didn't have before. Let's have a look below at how good the new face detection is:

Image from WWDC 2017

Look at how much detail we can capture and the precision of the new framework: it looks much better than before! Maybe you're wondering why you should care if you moved to OpenCV a long time ago and already have all of this. Well, the best part of Vision is its integration with another new framework called CoreML, Apple's machine learning framework. With it you can detect specific objects in your images, depending on the model you provide, and the code is much more convenient to write since it's integrated with the native SDK.
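As a rough idea of how that integration looks in code, here's a hedged sketch. "MyModel" is a placeholder for whatever .mlmodel file you add to your project; Xcode generates a Swift class with that name:

```swift
import CoreML
import Vision

// A sketch: classify an image with a bundled CoreML model via Vision.
// "MyModel" is hypothetical; replace it with your generated model class.
func classifyImage(_ cgImage: CGImage) throws {
    let coreMLModel = try MyModel(configuration: MLModelConfiguration()).model
    let visionModel = try VNCoreMLModel(for: coreMLModel)

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        // Print the best match and its confidence score.
        print("\(top.identifier): \(top.confidence)")
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])
}
```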

This was a small update on some of the key features available with Vision. There is much more to cover, including integrations with ARKit and more details about CoreML, but for the moment you can already start building some cool apps to detect the faces of your friends.

For more help with your apps, just contact us at Starcut. Stay safe and keep coding!