What’s New: Things that caught my eye

I love the idea of ARKit, although I wonder about the practicalities of how people will interact with their iDevice while holding it up to look through it. You build a basic AR experience with SceneKit or SpriteKit, then anchor virtual content to real-world positions based on detected flat surfaces. ARKit can pull ambient light estimates for the captured scene, report the state of the tracking camera, and let you “hit test” against detected surfaces.
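
Here’s a minimal sketch of how those pieces might fit together, assuming a view controller that owns an `ARSCNView` called `sceneView` and has a tap recognizer wired to `handleTap`; the controller itself is illustrative, but the configuration, delegate callback, and hit-test calls are the shipping ARKit API.

```swift
import UIKit
import ARKit

class ARViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!   // assumed to exist in the storyboard

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // Track the device in world space and look for flat surfaces
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal
        sceneView.session.run(configuration)
    }

    // Called when ARKit anchors a newly detected plane
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let plane = anchor as? ARPlaneAnchor else { return }
        print("Detected a plane with extent \(plane.extent)")
        // The ambient light estimate mentioned above lives on the current frame
        if let estimate = sceneView.session.currentFrame?.lightEstimate {
            print("Ambient intensity: \(estimate.ambientIntensity)")
        }
    }

    // Hit-test a screen tap against detected planes
    @objc func handleTap(_ gesture: UITapGestureRecognizer) {
        let point = gesture.location(in: sceneView)
        if let result = sceneView.hitTest(point, types: .existingPlaneUsingExtent).first {
            print("Tapped world transform: \(result.worldTransform)")
        }
    }
}
```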

CoreML looks really, really cool. It runs pre-trained machine learning models on-device, letting you predict outcomes for new data as it arrives. Apple has introductions to obtaining CoreML models, integrating them into your apps, and converting third-party machine learning models to Apple’s format. The framework’s API surface is pretty sparse, so it looks like most of the magic will happen behind the scenes.
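
To give a sense of how little code a prediction takes, here’s a sketch of the generated-class pattern; `SentimentClassifier`, its `text` input, and its `label` output are hypothetical names standing in for whatever Xcode generates from your particular .mlmodel file.

```swift
import CoreML

// Xcode generates a Swift class per imported .mlmodel; the names below
// are placeholders for illustration, not a real Apple-provided model.
let model = SentimentClassifier()

// The generated prediction method's parameters mirror the model's inputs
if let output = try? model.prediction(text: "I love the idea of ARKit") {
    print("Predicted label: \(output.label)")
}
```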

It was going to happen eventually: DeviceCheck gives you access to per-device, per-developer data, letting you track which devices have previously interacted with your servers and software.
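
The on-device half of the dance is tiny. Here’s a minimal sketch, using the real `DCDevice` API, of generating the ephemeral token your server would relay to Apple to query or update the two per-device bits (the server round trip is out of scope here).

```swift
import DeviceCheck

let device = DCDevice.current
if device.isSupported {   // false on the Simulator
    device.generateToken { token, error in
        if let token = token {
            // Forward this, base64-encoded, to your own backend
            print("DeviceCheck token: \(token.base64EncodedString())")
        } else {
            print("Token generation failed: \(String(describing: error))")
        }
    }
}
```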

The new Vision framework offers high-performance image analysis and computer vision tech for face detection, feature detection, and scene classification. Looks a lot like OpenCV-style stuff at first glance.
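
Here’s a minimal sketch of face detection against a `UIImage` (assumed to exist); the request, handler, and observation types are the real Vision API.

```swift
import UIKit
import Vision

func detectFaces(in photo: UIImage) {
    guard let cgImage = photo.cgImage else { return }
    let request = VNDetectFaceRectanglesRequest { request, error in
        let faces = request.results as? [VNFaceObservation] ?? []
        for face in faces {
            // boundingBox uses normalized (0...1) image coordinates
            print("Face at \(face.boundingBox)")
        }
    }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```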

You’ll be able to create app extensions that filter unwanted messages using the new IdentityLookup framework, deciding whether texts from unknown senders should be shown or diverted to the filtered list.
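
A message-filter extension boils down to one callback. Here’s a sketch of its principal class, where the “free prize” check is an illustrative stand-in for real filtering logic (which would usually defer to your server).

```swift
import IdentityLookup

final class MessageFilterExtension: ILMessageFilterExtension, ILMessageFilterQueryHandling {
    func handle(_ queryRequest: ILMessageFilterQueryRequest,
                context: ILMessageFilterExtensionContext,
                completion: @escaping (ILMessageFilterQueryResponse) -> Void) {
        let response = ILMessageFilterQueryResponse()
        let body = queryRequest.messageBody?.lowercased() ?? ""
        // Placeholder heuristic: divert anything mentioning "free prize"
        response.action = body.contains("free prize") ? .filter : .allow
        completion(response)
    }
}
```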

The ColorSync framework doesn’t have much there to play with, but its few constants intrigue me.

The FileProvider (and associated FileProviderUI) frameworks are major components for accessing documents across applications. We’ve seen a bit of this before, but the newly fleshed-out APIs are fascinating.
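
For flavor, here’s a skeletal sketch of an `NSFileProviderExtension` subclass with the core document-access overrides; the method signatures are the real API, but the bodies are placeholders, not a working provider.

```swift
import FileProvider

class MyFileProvider: NSFileProviderExtension {
    // Map a local URL back to your own item identifier (placeholder logic)
    override func persistentIdentifierForItem(at url: URL) -> NSFileProviderItemIdentifier? {
        return NSFileProviderItemIdentifier(rawValue: url.lastPathComponent)
    }

    // Download or materialize the document, then signal readiness
    override func startProvidingItem(at url: URL, completionHandler: @escaping (Error?) -> Void) {
        completionHandler(nil)
    }

    // Clean up the local copy once the system is done with it
    override func stopProvidingItem(at url: URL) {
    }
}
```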

Finally, there’s CoreNFC, which detects and reads NFC tags and their NDEF data. This framework is limited to the iPhone 7 and 7 Plus.
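
Reading a tag is equally compact. Here’s a minimal sketch using the real `NFCNDEFReaderSession` API; sessions must be kicked off from a user action, so `begin()` would typically be tied to a button tap.

```swift
import CoreNFC

class NFCReader: NSObject, NFCNDEFReaderSessionDelegate {
    var session: NFCNDEFReaderSession?

    func begin() {
        session = NFCNDEFReaderSession(delegate: self, queue: nil,
                                       invalidateAfterFirstRead: true)
        session?.begin()
    }

    // Called with the NDEF messages found on a detected tag
    func readerSession(_ session: NFCNDEFReaderSession, didDetectNDEFs messages: [NFCNDEFMessage]) {
        for message in messages {
            for record in message.records {
                print("Payload: \(record.payload)")   // raw NDEF payload bytes
            }
        }
    }

    func readerSession(_ session: NFCNDEFReaderSession, didInvalidateWithError error: Error) {
        print("Session ended: \(error)")
    }
}
```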

Don’t forget to check out the “diff overview” here.
