Archive for November, 2018

Prototyping CoreGraphics in the Playground

No matter how flaky they are, I love using playgrounds to prototype Core Graphics, SpriteKit, and many other see-as-you-go technologies. They’re fantastic for building out specific custom content with a bare minimum of coding investment. You get a lot of win for very little time.

I was helping someone out the other day, explaining the strokeEnd keypath (versus the path keypath), and a quick playground demo showed it off to perfection.
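
The core of that demo is just a shape layer plus a CABasicAnimation keyed to "strokeEnd". Here’s a minimal sketch of the idea (illustrative only, not the actual playground source; the sizes and colors are my own choices):

import UIKit

// Minimal sketch: animating strokeEnd progressively draws the existing path,
// where animating the path keypath would morph one path into another.
let shape = CAShapeLayer()
shape.path = UIBezierPath(ovalIn: CGRect(x: 0, y: 0, width: 200, height: 200)).cgPath
shape.fillColor = nil
shape.strokeColor = UIColor.blue.cgColor
shape.lineWidth = 4

let animation = CABasicAnimation(keyPath: "strokeEnd")
animation.fromValue = 0
animation.toValue = 1
animation.duration = 2
shape.add(animation, forKey: "stroke")
// In the playground, this layer gets added to the centered demo view's layer.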

Admittedly, it helps to have helper code on hand for quick starts. For this one, I have playground-specific setup code that hands me a view controller (called vc) and a centered view, ready to start demoing.
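
That setup boils down to something like the following sketch (the name demoView and the fixed 200-point size are my assumptions; the real support code differs):

import UIKit
import PlaygroundSupport

// Rough sketch of the playground setup: a live view controller (vc)
// with a centered demo view, ready for layer experiments.
let vc = UIViewController()
vc.view.backgroundColor = .white

let demoView = UIView()
demoView.translatesAutoresizingMaskIntoConstraints = false
vc.view.addSubview(demoView)
NSLayoutConstraint.activate([
    demoView.centerXAnchor.constraint(equalTo: vc.view.centerXAnchor),
    demoView.centerYAnchor.constraint(equalTo: vc.view.centerYAnchor),
    demoView.widthAnchor.constraint(equalToConstant: 200),
    demoView.heightAnchor.constraint(equalToConstant: 200),
])

PlaygroundPage.current.liveView = vc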

I also have a couple of pages of code (like the layer(path:) constructor, the animateStroke methods, and the schedule() utility) off-page, in the support module. They’re all highly reusable. It’s a pity in-playground debugging is so dreadful. Without that, playgrounds would be an ideal module-building tool: build and explore (and ideally build tests) in a single place, without having to sit in a fixed workspace that lacks the exploration features. Adding “convert this exploration into a test” would be icing on top.
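
For flavor, here’s roughly the shape of those helpers. This is my approximation, not the actual support module:

import UIKit

// Wraps a path in a stroked shape layer, ready to animate.
func layer(path: UIBezierPath) -> CAShapeLayer {
    let shapeLayer = CAShapeLayer()
    shapeLayer.path = path.cgPath
    shapeLayer.fillColor = nil
    shapeLayer.strokeColor = UIColor.black.cgColor
    shapeLayer.lineWidth = 2
    return shapeLayer
}

// Runs a closure after a delay, handy for sequencing playground demos.
func schedule(after delay: TimeInterval, action: @escaping () -> Void) {
    DispatchQueue.main.asyncAfter(deadline: .now() + delay, execute: action)
}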

I’m disappointed that playground-specific visualizations built for teaching and demos don’t transfer to the debugger for real-world production support. I don’t see any reason why a CGPoint instance should get a pretty graphic representation while a CGAffineTransform, for which I’ve built quite a full presentation, does not.

I can use custom mirroring to produce valuable output for dump, and therefore for printing objects in the debugger, but not for debug Quick Looks. Plus, as far as I can tell, the custom NSObject-only Quick Looks haven’t been updated in years, and there’s no hint of extending them to structs and enums.
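
Here’s the sort of thing I mean, as a simplified sketch (the field-by-field children are my own choice of presentation): a custom mirror makes dump and debugger printing readable but does nothing for the Quick Look popover.

import CoreGraphics

// Simplified sketch: a custom mirror improves dump() and debugger printing,
// but it has no effect on debug Quick Looks.
extension CGAffineTransform: CustomReflectable {
    public var customMirror: Mirror {
        return Mirror(self, children: [
            "a": a, "b": b, "c": c, "d": d, "tx": tx, "ty": ty
        ], displayStyle: .struct)
    }
}

dump(CGAffineTransform(rotationAngle: .pi / 4))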

By the way, what’s the deal with all the API audits? How long are these going to go on? If you thought updating the app delegate was a minor nuisance, you haven’t seen what’s happened to all the constants and Core Graphics APIs. This update is huge and disruptive…

Return to Sender: My first dive into watchOS

Just wrote my first two watchOS apps. The first, intended to provide voice coaching for physical therapy exercises, was a failure. Timing just wasn’t reliable and the errors built up more and more over a session of repeated counts, holds, and “almost there”/“two more”/“last one” prompts.

The second was a simple D&D dice set and much more successful. (Barring, of course, my miserable sense of interface design.)

It consists of an iOS app paired with a watchOS extension, which runs as its own watchOS app, allowing anyone with an arm and a need for d20 to roll at will.

The watchOS APIs were disappointing at first glance. For one thing, watch buttons use target-action but provide no sense of the sender. A single iOS call must be split into seven calls on the watch:

// iOS. I used tags. Sue me.
@IBAction func roll(_ button: UIButton) {
    let value = Int.random(in: 1 ... button.tag)
    let percent = button.tag == 100 ? "%" : ""
    label.text = "\(value)\(percent)"
}

// watchOS
@IBAction func d4()  { label.setText("\(Int.random(in: 1...4))")  }
@IBAction func d6()  { label.setText("\(Int.random(in: 1...6))")  }
@IBAction func d8()  { label.setText("\(Int.random(in: 1...8))")  }
@IBAction func d10() { label.setText("\(Int.random(in: 1...10))") }
@IBAction func d12() { label.setText("\(Int.random(in: 1...12))") }
@IBAction func d20() { label.setText("\(Int.random(in: 1...20))") }
@IBAction func dPercent() { label.setText("\(Int.random(in: 1...100))%") }
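
(If I revisit this, each watch action could at least forward to a shared helper inside the same interface controller, something like this sketch of mine, although one @IBAction per button is still required:)

// Possible consolidation (my sketch, not the shipping code).
private func roll(sides: Int, suffix: String = "") {
    label.setText("\(Int.random(in: 1...sides))\(suffix)")
}

@IBAction func d20() { roll(sides: 20) }
@IBAction func dPercent() { roll(sides: 100, suffix: "%") }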

There’s no Auto Layout so far as I can tell, so you either “fill” things or give them fixed sizes. I couldn’t figure out a way to split my buttons into thirds of the available screen space.

Setting a label’s text is a matter of a function call, not a property assignment. That surprised me. I’m not sure what advantage was gained here beyond violating the principle of least astonishment.

I had to design my app’s interface in the WatchKit App target but implement it in the extension target. This may be perfectly normal and in line with other extensions on iOS, but I haven’t spent much time there, so it raised my eyebrows a bit. You add your app assets in the app and your complications (which I did not build) in the extension.

Right now on my priority list, I’d like to build something similar to the “Breathe” app. I don’t entirely know where to start. I need to use a notification system to launch the app at set intervals, n times a day, preferably during the work day, and then drive some kind of coached interaction with haptic feedback. My immediate goal is an app that has me regularly exercise my sitting back muscles to build up strength.
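
My rough mental model is local notifications for the scheduling plus WKInterfaceDevice haptics for the coaching, along these lines (an untested sketch with made-up titles and intervals, not code from any shipping app):

import UserNotifications
import WatchKit

// Untested sketch: schedule a repeating reminder, then cue reps with haptics.
// (Notification authorization is assumed to have been requested elsewhere.)
func scheduleReminder() {
    let content = UNMutableNotificationContent()
    content.title = "Posture break"
    content.body = "Time for a round of back exercises."

    // Hypothetical two-hour interval; a real app would confine this to work hours.
    let trigger = UNTimeIntervalNotificationTrigger(timeInterval: 2 * 60 * 60, repeats: true)
    let request = UNNotificationRequest(identifier: "posture.reminder", content: content, trigger: trigger)
    UNUserNotificationCenter.current().add(request)
}

func promptNextRep() {
    // A haptic cue for each rep during the coached interaction.
    WKInterfaceDevice.current().play(.notification)
}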

I’m a bit sad about the timing issues I encountered with my first jab at the coaching app. I may approach it again with a new design, as having my wrist count and prompt would be a lot better than doing all the work myself (I always lose my place) or having to keep my phone nearby.

My needs for reps, holds, and rest periods are simple. And while I don’t particularly like the scrolling wheel that seems to take the place of segmented controls, I can live with it.

If you’d like to see my first project (either to be inspired or to offer critiques), I’ve left a copy at GitHub.