Archive for the ‘Xcode’ Category

Beta 3 Playground Workarounds

Adding Resources and Sources folders to Playgrounds

Until they’re fixed, you may have to add them by hand.

  1. Right-click/control-click the playground in the Finder.
  2. Choose “Show Package Contents”.
  3. Add new Resources and Sources folders at the top level.
  4. Alternatively, navigate down to individual pages (which are finally in Beta 3!) by showing their package contents, and add the folders there.

Fortunately under Beta 3 you don’t have to manually add pages, as the new page functionality is finally back.

Creating New Playgrounds

They’re no longer listed in the File > New dialog. Instead choose File > New > Playground (Command-Shift-Option-N) or open “Welcome to Xcode” (Command-Shift-1) and click “Get started with a playground”.

Most of the obvious alternatives (like Command-Control-N, which creates new workspaces) are already taken, but if you don’t mind using the menu for workspaces, I think Command-Control-N is a nicer key binding for “New Playground”. If you want to mess with this, open prefs (Command-comma), type “playground” into the search field, and edit the key binding for “New > Playground”.

Don’t forget that “New Playground Page” is Command-Option-N.

The Unexpected Joy of Vector Images in iOS 11

A few years ago, I griped about That Vector Thing: the way Xcode 6 handled PDF vector assets but couldn’t extend them to arbitrary use in UIImage instances.

Enter WWDC 2017. In Session 201 “What’s New in Cocoa Touch”, Apple described Asset Catalogs with PDF-backed vector images. All you have to do is tick the “Preserve Vector Data” checkbox.

After chatting with some colleagues about whether this would actually work as promised, I dragged up a 29×29 vector PDF image.

And I added it as a 1x image in an asset catalog. Notice the Resizing “Preserve Vector Data” checkbox to the right; it is unchecked by default.

I then built a super-simple single-view testbed to test things out.

override func viewDidLoad() {
    super.viewDidLoad()

    // Build an image view that scales its content to fit
    let imageView = UIImageView()
    imageView.translatesAutoresizingMaskIntoConstraints = false
    imageView.contentMode = .scaleAspectFit
    view.addSubview(imageView)

    // Pin the image view to all four edges of the view controller's view
    ["H:|[v]|", "V:|[v]|"].forEach { format in
        NSLayoutConstraint
            .constraints(withVisualFormat: format,
                         options: [], metrics: nil,
                         views: ["v": imageView])
            .forEach { $0.isActive = true }
    }

    // Load the vector-backed asset and let the image view scale it up
    let image = UIImage(named: "Biff")
    imageView.image = image
}

Here’s what that 29×29 image looks like running on an iPhone 7+ in the simulator. The 1x image is being rendered on a 3x destination, at a greatly magnified size. Its vector data ensures the image renders without losing detail or clarity. Compare it to the same asset that does not preserve vector data and my original test from 2014:


Click above to see full size screenshot originals. Below is a comparison shot from the simulator at the largest size.

I expect there are minor performance hits in scaling and rendering the vector image compared to loading a standard PNG or JPEG, but I didn’t get around to measuring the costs.
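
If you wanted to measure that yourself, a quick-and-dirty timing pass might look something like the sketch below. The asset names (“BiffVector” with Preserve Vector Data checked, “BiffBitmap” without) are assumptions for illustration, and this is a rough sketch rather than a proper benchmark.

import UIKit

// Rough timing pass (not a rigorous benchmark); asset names are assumptions
func timeDrawing(_ name: String, size: CGSize, iterations: Int = 100) -> TimeInterval {
    guard let image = UIImage(named: name) else { return 0 }
    let renderer = UIGraphicsImageRenderer(size: size)
    let start = CFAbsoluteTimeGetCurrent()
    for _ in 0 ..< iterations {
        _ = renderer.image { _ in
            image.draw(in: CGRect(origin: .zero, size: size))
        }
    }
    return CFAbsoluteTimeGetCurrent() - start
}

print("vector-backed:", timeDrawing("BiffVector", size: CGSize(width: 300, height: 300)))
print("plain bitmap: ", timeDrawing("BiffBitmap", size: CGSize(width: 300, height: 300)))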

If you like my write-ups, please consider buying a book.

The problem with Swift Playground localization

Starting in Swift Playgrounds 2, you can use localized strings to guide the narration of your interactive lessons. As the screenshot above demonstrates, you can use localizable markup to provide the most appropriate text for titles, introductory text, and feedback.

However, what you can’t do is localize Swift members. Your French and Chinese consumers must tell Byte to moveForward(), not avancer() or 向前移动().

One of the guiding principles of the Swift language is demonstrated in its embrace of Unicode for identifier symbols. This approach accommodates programmers and programming styles from many languages and cultures.

Xcode 9 has introduced major advances in code refactoring. It seems an obvious win to allow that technology to be applied to Swift Playgrounds 2, enabling identifier localization.

That’s because identifiers play such a key role in Swift Playgrounds. Unlike standard development, where there’s no need to create idiomatic APIs like IUContrôleurDeNavigation, the point of Swift Playgrounds is to teach and instruct. It uses a small, limited, controlled API surface, nearly all of it custom and in support of the teaching story.

The anthropomorphized Byte character acts as a stand-in for the learner coder. As such, it should communicate with commands the coder identifies with: turnLeft and moveForward, not incomprehensibleForeignPhrase1 and evenMoreConfusingForeignPhrase2.

I think this is an opportunity waiting to happen, and I can’t imagine it would be all that hard to implement given Swift’s expansive identifier support and the limited API visibility presented in a typical playground book.

What do you think? Is it too much to ask for a localizable.Members.plist?

Simulating a second finger during drag

You can drag and drop in the iOS simulator by clicking and holding an item. The item “pops” and you can then drag it to a destination. Today, an Apple engineer shared a neat way to free up a “second finger” during this process.

Pause and press the Control key. This pins the item mid-drag, letting you use the Mac cursor as another touch. You can then retrieve your drag by grabbing the paused item and concluding your drop.

Apple open sources key file-level transformation Xcode components

Ted Kremenek writes on the swift-dev list:

This afternoon at WWDC we announced a new refactoring feature in Xcode 9 that supports Swift, C, Objective-C, and C++.  We also announced we will be open sourcing the key parts of the engine that support file-level transformations, as well as the compiler pieces for the new index-while-building feature in Xcode.

We will be releasing the sources in stages, likely over the next few weeks:

– For the refactoring support for Swift, there are some cleanups we’d like to do as well as some documentation we’d like to author before we push these sources back.  Argyrios Kyrtzidis and his team from Apple will be handling that effort.

– For the refactoring support for C/C++/Objective-C, these are changes we’d like to work with the LLVM community to upstream to the LLVM project.  These will likely be first staged to the swift-clang repository on GitHub, but that is not their intended final destination.  Duncan Exon Smith and his team from Apple will be handling that effort.

– We’ll also be open sourcing the compiler support for indexing-while-building, which includes changes to both Clang and Swift. Argyrios and his team will be driving that effort. For the Clang changes, they will likely be first staged to swift-clang and then discussed with the LLVM community to upstream them to mainline Clang.

– Finally, we will be open sourcing the remaining pieces of the Swift migrator.  Argyrios and his team will be handling the push back of changes there, and those changes will only be impacting the swift repository.

As usual, we’ll also be pushing back changes to have Swift work with the latest Apple SDKs. We’re expecting that push back to happen early next week. When that happens we will temporarily lock commit access to the repositories. Details about that will be sent out in a later email. Until then, the downloadable toolchains from Swift.org will continue to work with Xcode 8.3.2. After we do the push back, the downloadable toolchains will be moved to be baselined on the Xcode 9.0 betas. This shift is necessary as changes to the overlays depend on the latest SDKs.

Xcode Autocomplete Frustrations

A year after it debuted, Xcode’s enhanced autocomplete features continue to struggle with overly liberal matches:

In this example, several of the matching text results display few commonalities with my search phrase. There’s really no reason that “fatale” should match CFDataGetLength(theData: CFData!).

It shouldn’t be hard to create heuristics that count the number of matched chunks and their distance from each other, building a score that reflects whether the match is chunky (a good thing for keywords) and singular (another good thing for discerning developer intent).

Successful autocompletion promotes good matches and discards inappropriate ones. “upper” should score high on CFStringUppercase and low on CGScreenUpdateOperation and CSSMERR_TP_INVALID_CERTGROUP_POINTER.
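
As a rough illustration of the kind of heuristic I have in mind, here’s a minimal sketch of a chunk-based scorer. The function name, weights, and scoring formula are all invented for this example; this is not how Xcode’s matcher actually works.

func matchScore(query: String, candidate: String) -> Double {
    let q = Array(query.lowercased())
    let c = Array(candidate.lowercased())
    var chunks = 0      // contiguous matched runs
    var matched = 0     // total characters matched
    var qIndex = 0
    var inRun = false
    for ch in c {
        guard qIndex < q.count else { break }
        if ch == q[qIndex] {
            matched += 1; qIndex += 1
            if !inRun { chunks += 1; inRun = true }
        } else {
            inRun = false
        }
    }
    guard qIndex == q.count else { return 0 }   // query not fully matched
    // Fewer chunks (a "chunky" match) and less leftover length score higher
    return 1.0 / (1.0 + Double(chunks) + Double(c.count - matched) / 10.0)
}

matchScore(query: "upper", candidate: "CFStringUppercase")                    // relatively high
matchScore(query: "upper", candidate: "CSSMERR_TP_INVALID_CERTGROUP_POINTER") // much lower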

That’s not the only problem with autocomplete. Image literal completion is another big offender: Xcode often prioritizes images over code APIs. When you start to type “picker”, Xcode should not suggest “picture-of-lovely-cat”. Here are some real-world examples of this issue:

One developer told me that while typing “in” for closures, eighty percent of the time he gets a random autocompleted image literal instead of the keyword he’s shooting for:

Surely this is an obvious place to introduce autocomplete preferences that allow you to exclude literals from the API list. Autocomplete for image literals should act more like colors, offering an Image Literal entry point to an image picker instead of clogging the API namespace:

It would certainly get rid of those inappropriate “in” matches.

Thanks Olivier Halligon, Andrew Campoli, and everyone else who gave me feedback and direction for this post.

Xcode Tricks: API Changes

Here’s a quick trick that helps you review changes between SDK releases, including beta SDKs. In Xcode, select Help > API Changes.

Xcode automatically navigates your web browser to https://developer.apple.com/api-changes/. (You can also visit this link directly.) At the web site, you’ll find a list of current and archival API deltas:

For example, when I clicked on the latest Beta release, here’s the screen that popped up today:

Color-coded overviews at the top left reflect the number of changes (modified, added, deprecated). A pop-up at the right enables you to switch to specific betas and releases.

Select individual modules to view changes in-place, annotated with corresponding color highlights.

As you navigate down to finer details, the change highlights trickle down to deeper and deeper levels.

Select an older version of Xcode from the top-right pop-up to view a current-to-previous comparison. In this case, the type constraint has moved to a trailing where condition.

It’s a fascinating way to present and navigate API changes.

Bluetooth Lessons II: Characteristics

Yesterday, I wrote about the basics involved in setting up a Bluetooth manager and scanning for available peripherals. The sample code left off after finding announced devices.

Last evening, I expanded this functionality to find a Mi wristband and execute a repeating vibration pattern. Today’s code is about four times longer and involves a lot more of the direct interaction with a BLE peripheral.

The changes start with finding the desired device. I looked at its peripheral.name, an optional value associated with a CBPeripheral. This approach is essentially the same as looking at SSIDs for WiFi; there’s not a lot of sophistication involved, and no pairing sequence. Once the device is found, my code instructs the central manager to stop its scan (centralManager.stopScan()) and connect() to the matched peripheral.

At this point it’s really important that you do a few things:

  • Create a strong reference to the target peripheral. I used a property. Without this, the peripheral reference deallocates, as I found to my dismay.
  • Set the peripheral’s delegate, so you can monitor its callback routines once the central manager reports the connection in centralManager(_:didConnect:). You won’t be able to start the communication chain otherwise.

For the next step, request service discovery from the delegate callback. After the peripheral connects, call discoverServices() and listen in peripheral(_:didDiscoverServices:). At this point, the chain of command passes from the manager, which finds you the right peripheral, to the peripheral itself. Both components must establish delegation; both steps appear in the sketch below.
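
Here’s a minimal sketch of that discovery-and-connection chain. The wrapper class and the “MI” name prefix are stand-ins I invented for illustration, not the playground’s actual code:

import CoreBluetooth

class BandFinder: NSObject, CBCentralManagerDelegate, CBPeripheralDelegate {
    var centralManager: CBCentralManager!
    var targetPeripheral: CBPeripheral?              // strong reference, or it deallocates
    var vibrationCharacteristic: CBCharacteristic?   // stored for later writes (see below)

    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        // Covered in yesterday's post: start scanning once Bluetooth is powered on
        if central.state == .poweredOn {
            central.scanForPeripherals(withServices: nil, options: nil)
        }
    }

    func centralManager(_ central: CBCentralManager,
                        didDiscover peripheral: CBPeripheral,
                        advertisementData: [String: Any],
                        rssi RSSI: NSNumber) {
        // Match on the advertised name, much like picking a WiFi SSID
        guard peripheral.name?.hasPrefix("MI") == true else { return }
        targetPeripheral = peripheral    // keep a strong reference
        peripheral.delegate = self       // listen for the peripheral's callbacks
        central.stopScan()
        central.connect(peripheral, options: nil)
    }

    func centralManager(_ central: CBCentralManager,
                        didConnect peripheral: CBPeripheral) {
        // Hand control over to the peripheral: ask for its services
        peripheral.discoverServices(nil)
    }
}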

Today’s sample code uses a brute-force approach to find services, and then, in the services callback, discovers specific service characteristics (discoverCharacteristics(_:for:)) for the device.

In production code, you’d limit these calls: supply a list of only those services and the characteristics (the actual API call points) you’re interested in. My code passes nil, because there aren’t that many calls for the target device and I don’t mind the extra overhead.
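
Continuing the sketch above, the brute-force version of that callback just walks every discovered service and passes nil rather than a filtered list of CBUUIDs:

extension BandFinder {
    func peripheral(_ peripheral: CBPeripheral, didDiscoverServices error: Error?) {
        // In production, supply only the service and characteristic UUIDs you need
        for service in peripheral.services ?? [] {
            peripheral.discoverCharacteristics(nil, for: service)
        }
    }
}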

Like previous steps, characteristics have their own delegate method (peripheral(_:didDiscoverCharacteristicsFor:error:)). Here’s where you can access specific characteristics and set up the actions they support. I decided to use notifications to respond to the discovery of the vibration characteristic (“2A06”, an industry standard).

Notifications are short and sweet, especially for playgrounds. You won’t have to invest in designing protocols or implementing delegates. Just listen for the notification and then start doing whatever you need.

In this case, my open-ended observer (it lives forever, so I don’t bother trying to save and release it) starts a vibration pattern by calling a custom type method, startVibrating(degree:delay:). This method writes request data to the peripheral’s characteristic, producing each vibration pattern on demand. The delay allows the vibration to repeat after an arbitrary number of seconds.
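
Here’s roughly what those two pieces might look like, continuing the earlier sketch. The notification name, the instance method standing in for the original type method, the .withoutResponse write type, and the alert-level payload bytes (0 = off, 1 = mild, 2 = strong for the standard 2A06 Alert Level characteristic) are all assumptions for illustration, not the playground’s exact code:

extension Notification.Name {
    static let foundVibrationCharacteristic = Notification.Name("FoundVibrationCharacteristic")
}

extension BandFinder {
    func peripheral(_ peripheral: CBPeripheral,
                    didDiscoverCharacteristicsFor service: CBService,
                    error: Error?) {
        for characteristic in service.characteristics ?? []
            where characteristic.uuid == CBUUID(string: "2A06") {
            vibrationCharacteristic = characteristic   // save it for later writes
            NotificationCenter.default.post(name: .foundVibrationCharacteristic, object: self)
        }
    }

    // Write an alert level, then schedule the next buzz after `delay` seconds
    func startVibrating(degree: UInt8, delay: TimeInterval) {
        guard let peripheral = targetPeripheral,
              let characteristic = vibrationCharacteristic else { return }
        peripheral.writeValue(Data([degree]), for: characteristic, type: .withoutResponse)
        DispatchQueue.main.asyncAfter(deadline: .now() + delay) {
            self.startVibrating(degree: degree, delay: delay)
        }
    }
}

// The observer just listens for the notification and kicks things off
NotificationCenter.default.addObserver(forName: .foundVibrationCharacteristic,
                                       object: nil, queue: .main) { note in
    (note.object as? BandFinder)?.startVibrating(degree: 1, delay: 60)
}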

Like the peripheral, store your characteristics locally. Although each is identified by a UUID, they aren’t meant to be built on the fly from raw UUID values, or at least not that I could find. Saving each characteristic for later reference appears to be a vital part of the set-up process.

At this point, I’ve pretty much taken this project as far as I need to. I can produce short and long vibration patterns, and I’ve discovered the characteristics for direct vibration control (“FF05”, which appears to take a start and stop value), for testing the device (“FF0D”), and, if I ever go that far, for pairing (“FF0F”).

I’m handing it off to my friend. Hopefully there’s enough functionality that he can perform buzzing-like therapy without further financial outlay.

Apparently, controlled buzzing has utility beyond (unconventional) autism therapy. There are any number of ADHD products built on the principle of re-focusing students back on-task every n minutes.

I suppose you could expand this as well with calendar integration for “take your meds” reminders or “get up and move around” ones, at a much lower price point than, for example, the full Apple Watch.

As for me, I discovered that I hate having anything on my wrists and that buzzing really gets on my nerves.  I’m glad I had the opportunity to play around with this, though.

If you end up building something interesting, please drop me a line and tell me about it.