Archive for March, 2017

Dear Erica: Creating a conditional assignment operator

Manuel Carlos asks whether it’s possible to create an operator that executes a closure and returns a value based on whether a condition is true. Here’s his initial attempt:
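
(His gist isn’t reproduced here, so the following is a rough reconstruction of the idea, inferred from the usage examples below; the generic function and autoclosure are my guesses rather than his exact code.)

infix operator <-? : AssignmentPrecedence

@discardableResult
func <-? <T>(value: @autoclosure () -> T, condition: Bool) -> T? {
    // Evaluate the left-hand expression only when the condition holds
    return condition ? value() : nil
}

var v: Int? = nil
v = 100 <-? true          // v is now Optional(100)
v = 100 <-? false         // v is now nil
print("hello") <-? false  // nothing prints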

My recommendation? Skip the operator and just use an if-statement:

if true { v = 100 } // vs v = 100 <-? true
if false { v = 100 } // vs v = 100 <-? false
if true { print("hello") } // vs print("hello") <-? true
if false { print("hello") } // vs print("hello") <-? false

Yes, it’s simple, but simple gets the job done without straining or extending the language. There’s little gained in building a more complex logic-driven assignment for this use case. Placing a truth statement to the right of each action inverts the normal way people read code, burns an operator, abuses auto-closure, and unnecessarily complicates what should be a straightforward operation.

That’s not to say this isn’t an interesting design space. Although Swift does not support custom ternary operators at this time, it may do so at some point in the future. Even then, I’d be cautious about replacing if-then control with such complex solutions.

Apple’s recommendations on auto-closure are:

  • Use autoclosure carefully as there’s no caller-side indication that argument evaluation is deferred.
  • The context and function name should make it clear that evaluation is being deferred.
  • Autoclosures are intentionally limited to take empty argument lists.
  • Avoid autoclosures in any circumstance that feels like control flow.
  • Incorporate autoclosures to provide useful semantics that people would expect (for example, a “futures” or “promises” API).
  • Don’t use autoclosures to optimize away closure braces.
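
For instance, deferring an expensive message, much as the standard library’s assert does, is one place autoclosure earns its keep. This logging function is my own illustration, not Apple’s:

func computeDiagnostics() -> String {
    // Stand-in for genuinely expensive work
    return "everything is fine"
}

func log(_ message: @autoclosure () -> String, when enabled: Bool) {
    guard enabled else { return }  // message() is never evaluated when disabled
    print(message())
}

log("Diagnostics: \(computeDiagnostics())", when: false)  // computeDiagnostics never runs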

Want to read more about Swift Style? There’s a book!

Xcode Tricks: API Changes

Here’s a quick trick that helps you review changes between SDK releases, including beta SDKs. In Xcode, select Help > API Changes.

Xcode automatically navigates your web browser to https://developer.apple.com/api-changes/. (You can also visit this link directly.) At the web site, you’ll find a list of current and archival API deltas:

For example, when I clicked on the latest Beta release, here’s the screen that popped up today:

Color-coded overviews reflect the number of changes (modified, added, deprecated) at the top left. A pop-up at the right enables you to switch between specific betas and releases.

Select individual modules to view changes in-place, annotated with corresponding color highlights.

As you navigate down to finer details, the change highlights trickle down to deeper and deeper levels.

Select an older version of Xcode from the top-right pop-up to view a current-to-previous comparison. In this case, the type constraint has moved to a trailing where condition.

It’s a fascinating way to present and navigate API changes.

Announcing the Thud Programming Language

Thud is an uncompiled programming language based on rocks (also known as “sedimentary development”). Thud is natural, organic, easy to read, and easy to write, as it consists entirely of the keyword Thud.

Thud is type safe and thread safe as Thud has no types or threads. You may type the word Thud in any medium regardless of physical state, with an intrinsic guarantee that each atomic Thud cannot and will not metaphysically interact with any other Thud.

Thud has a low learning curve, regardless of language background. Most developers will be able to Thud in as little as thirty easy lessons.

As Thud is uncompiled and will not run on any computer, it has a very low memory footprint, depending on the local geographic features of the implementation site. Individual instances may be thrown at LLVM developers, who will emit ouch, although this is an unexpected benefit and not a result of the non-existent optimizations built into the toolchain.

Thud is static. Any Thud errors (for example, Thump or GastricDistress) can safely be ignored by any Thuddist developers. Type annotations are unneeded, as are Thud debuggers or Thud unit tests. To ensure correctness, one may throw a Thud at a nearby barista, tax auditor, or neighborhood association chairperson, regardless of LLVM background.

Thud is inherently nullable. That which has no Thud, is not Thud. All instances are therefore automatically guaranteed for thuddability, as non-thuddable elements are not Thud, or on every second Thursday, spork.

Thud includes a powerful macro system for metaprogramming, consisting of leaving Thud instances alone, as they aren’t meant to do anything in the first place.

Thuds can exist concurrently, although there is no mechanism for cross-Thud communication, shared memory, or metamorphic rock formations. Although Thud has a “C-Binding Manifesto”, a lack of developer availability means that at the current time, developers must make their own mean-spirited comments towards C-APIs rather than relying on each Thud’s natural disdain.

Thud does not allow dependencies, ensuring that you will never encounter errors when using ThudPods or Delenda Est decentralized repository managers.

Thud is not hosted at GitHub, simplifying the need to respond to trivial pull requests and user-sourced corrections. You can support Thud by sending large envelopes of unmarked cash to my personal address.

Bluetooth Lessons II: Characteristics

Yesterday, I wrote about the basics involved in setting up a Bluetooth manager and scanning for available peripherals. The sample code left off after finding announced devices.

Last evening, I expanded this functionality to find a Mi wristband and execute a repeating vibration pattern. Today’s code is about four times longer and involves a lot more direct interaction with a BLE peripheral.

The changes start with finding a desired device. I looked at its peripheral.name, an optional property of CBPeripheral. This approach is essentially the same as looking at SSIDs for WiFi. There’s not a lot of sophistication involved, and no pairing sequence. Once found, my code instructs the central manager to stop its scan (centralManager.stopScan()) and connect() to the matched peripheral.
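
In sketch form, the matching step looks something like this. The class shell, property names, and the "MI" name string are illustrative rather than copied verbatim from my sample code:

import CoreBluetooth

class BLEHelper: NSObject, CBCentralManagerDelegate, CBPeripheralDelegate {
    var centralManager: CBCentralManager!
    var targetPeripheral: CBPeripheral?        // strong reference (see below)
    var alertCharacteristic: CBCharacteristic? // saved during discovery (see below)

    func centralManagerDidUpdateState(_ central: CBCentralManager) { }

    func centralManager(_ central: CBCentralManager,
                        didDiscover peripheral: CBPeripheral,
                        advertisementData: [String: Any],
                        rssi RSSI: NSNumber) {
        guard peripheral.name == "MI" else { return } // match by advertised name
        targetPeripheral = peripheral  // hold a strong reference
        peripheral.delegate = self     // monitor the peripheral's callbacks
        central.stopScan()
        central.connect(peripheral, options: nil)
    }
}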

At this point it’s really important that you do a few things:

  • Create a strong reference to the target peripheral. I used a property. Without this, the peripheral reference deallocates, as I found to my dismay.
  • Set the peripheral’s delegate so you can monitor its callback routines, specifically centralManager(_:didConnect:). You won’t be able to start the communication chain otherwise.

For the next step, request service discovery from the delegate callback. After the peripheral connects, call discoverServices(), and listen in peripheral(_:didDiscoverServices:). At this point, the chain of command passes from the central manager, which finds you the right peripheral, to the peripheral itself. Both components must establish delegation.

Today’s sample code uses a brute-force approach to find services and then, in the services callback, discovers specific service characteristics (discoverCharacteristics(_:for:)) for the device.

In production code, you’d limit these calls: supply a list of only those services and the characteristics (the actual API call points) you’re interested in. My code passes nil, because there aren’t that many calls for the target device and I don’t mind the extra overhead.
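
Roughly, the discovery chain continues like this (again a sketch extending the helper above; passing nil requests everything):

extension BLEHelper {
    func centralManager(_ central: CBCentralManager, didConnect peripheral: CBPeripheral) {
        peripheral.discoverServices(nil)  // brute force: all services
    }

    func peripheral(_ peripheral: CBPeripheral, didDiscoverServices error: Error?) {
        for service in peripheral.services ?? [] {
            peripheral.discoverCharacteristics(nil, for: service)  // all characteristics
        }
    }
}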

Like the previous steps, characteristics have their own delegate method (peripheral(_:didDiscoverCharacteristicsFor:error:)). Here’s where you can access and initialize specific actions that support those characteristics. I decided to use notifications to respond to the discovery of the vibration characteristic (“2A06”, an industry standard).
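
A sketch of that callback, with a notification name of my own invention:

extension BLEHelper {
    func peripheral(_ peripheral: CBPeripheral,
                    didDiscoverCharacteristicsFor service: CBService,
                    error: Error?) {
        for characteristic in service.characteristics ?? []
            where characteristic.uuid == CBUUID(string: "2A06") {
            alertCharacteristic = characteristic  // store for later writes
            NotificationCenter.default.post(
                name: Notification.Name("VibrationCharacteristicFound"), object: nil)
        }
    }
}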

Notifications are short and sweet, especially for playgrounds. You won’t have to invest in designing protocols or implementing delegates. Just listen for the notification and then start doing whatever you need.

In this case, my open-ended observer (it lives forever, so I don’t bother trying to save and release it) starts a vibration pattern by calling a custom startVibrating(degree:delay:) method. This method writes request data to the peripheral’s characteristic, producing each vibration pattern on demand. The delay allows the vibration to repeat after an arbitrary number of seconds.
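
Here’s a sketch of the observer and the write. The payload follows the standard Alert Level values (0 = off, 1 = mild, 2 = high), helper stands in for a hypothetical BLEHelper instance, and the repeat machinery is simplified to a plain asyncAfter; my actual method differs in its details:

NotificationCenter.default.addObserver(
    forName: Notification.Name("VibrationCharacteristicFound"),
    object: nil, queue: .main) { _ in
    helper.startVibrating(degree: 2, delay: 5.0)  // helper: hypothetical instance
}

extension BLEHelper {
    func startVibrating(degree: UInt8, delay: TimeInterval) {
        guard let peripheral = targetPeripheral,
            let characteristic = alertCharacteristic else { return }
        peripheral.writeValue(Data(bytes: [degree]), for: characteristic, type: .withResponse)
        DispatchQueue.main.asyncAfter(deadline: .now() + delay) {
            self.startVibrating(degree: degree, delay: delay)  // repeat after the delay
        }
    }
}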

Like the peripheral, store your characteristics locally. Although they are characterized by a UUID, they aren’t meant to be built on the fly with raw UUID values — or at least not that I could find. Saving each characteristic for later reference appears to be a vital part of the set-up process.

At this point, I’ve pretty much taken this project as far as I need to. I can produce short and long vibration patterns. I’ve discovered the characteristics for direct vibration control (“FF05”, which appears to take a start and stop value), for testing the device (“FF0D”), and, if I ever go that far, for pairing (“FF0F”).

I’m handing it off to my friend. Hopefully there’s enough functionality that he can perform buzzing-like therapy without further financial outlay.

Apparently, controlled buzzing has utility beyond (unconventional) autism therapy. There are any number of ADHD products built on the principle of re-focusing students back on-task every n minutes.

I suppose you could expand this as well with calendar integration for “take your meds” reminders or “get up and move around” ones, at a much lower price point than, for example, the full Apple Watch.

As for me, I discovered that I hate having anything on my wrists and that buzzing really gets on my nerves.  I’m glad I had the opportunity to play around with this, though.

If you end up building something interesting, please drop me a line and tell me about it.

Bluetooth Lessons I: Manager and Scanning

Last June, Izzy inspired me to do something with Bluetooth and playgrounds, but honestly, I haven’t had the time, and I couldn’t afford a Sphero. Now I’ve wrapped up Swift Style, and attempting to write meaningfully about drawing while the Denver Public School system has, for reasons I cannot begin to comprehend, released my child to my recognizance for two entire weeks seems unlikely. (Another child has half days. Fun.)

To prepare, I purchased one of the cheapest BLE devices I could find, a Mi wristband (under $20 shipped from Amazon), which has a reverse-engineered API that lets you control vibration. A friend of mine just purchased the hugely expensive Buzzies for Autism bands. I’m hoping I can mimic some of that functionality with a playground, a low-rent BLE device, and a full-price child.

Have I mentioned recently how awesome playgrounds are for playing around with and learning about new tech? They really are, especially because you can integrate just one concept at a time, and then test it live before expanding to the next.

I decided to go with Cocoa for my BLE exploration instead of iOS, although the tech is more or less the same on both platforms. When you work in Cocoa, using a macOS playground, the startup speed is phenomenal because you don’t have to work with a simulator.

My first project simply sets up a central manager (CBCentralManager), monitors its state, and lists any devices it finds. I’m pretty happy with this as a first-day result: not many hours to spend on it, playing around, and doing something marginally useful.

The CoreBluetooth documentation is pretty dire. For example, here’s the Swift documentation for CBManagerStatePoweredOn. After SE-0005, the constant is actually .poweredOn, as you see in the following sample code, not CBManagerStatePoweredOn. And there’s no documentation in that documentation.

Nonetheless, I persevered, and my first child-full day produced a basic helper class. You really need to work in NSObject land for this because of all the delegation. So I set up an objc-friendly class, set it as a manager delegate, and implemented the one required callback method, which follows the manager state.
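
Here’s the shape of that helper, reduced to a minimal sketch (the class name is my own):

import CoreBluetooth

class BluetoothWatcher: NSObject, CBCentralManagerDelegate {
    var centralManager: CBCentralManager!

    override init() {
        super.init()
        centralManager = CBCentralManager(delegate: self, queue: nil)
    }

    // The one required callback, which follows the manager's state
    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        print("Central state: \(central.state.rawValue)")
    }
}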

Try sticking the Bluetooth icon in your system menu bar.  (System Preferences > Bluetooth > Show Bluetooth in menu bar.) It’s a lot of fun to toggle it on and off and watch your playground keep tabs on that state.

Next, I added a basic peripheral scan. You need to scan only when the manager achieves poweredOn state.

Apple writes, “Before you call CBCentralManager methods, the state of the central manager object must be powered on, as indicated by the CBCentralManagerStatePoweredOn constant. This state indicates that the central device (your iPhone or iPad, for instance) supports Bluetooth low energy and that Bluetooth is on and available to use.”

That’s why I added the scan to the playground’s “update state” callback. You’ll want to stop scanning when the radio powers off.
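
In sketch form, the callback from before grows into something like this (inside the helper class):

func centralManagerDidUpdateState(_ central: CBCentralManager) {
    if central.state == .poweredOn {
        central.scanForPeripherals(withServices: nil, options: nil)
    } else {
        central.stopScan()  // no scanning without a powered-on radio
    }
}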

Finally, I implemented one more callback, which asynchronously lists discovered peripherals. It picked up nicely on my Apple TV and when I enabled and disabled a hotspot on my iPhone. Great fun.

Here’s the code involved. You can see how very short it is. The struggle wasn’t in the lines of code or their complexity; it was mostly about how badly documented almost everything seems to be.

I’ll post more as time allows.

https://gist.github.com/erica/d249ff13aec353e8a8d72a1f5e77d3f8

MacBook Pros and External Displays

Today, I hooked a newly purchased display to my MBP. (Looks like they’re out of stock right now, but it was $80 for 24″ when I bought it last week.) This isn’t intended to be my display. It’s replacing an old 14″ monitor for a kid. I thought I’d just steal it now and then during the day. It’s extremely lightweight and easy to move between rooms.

What I didn’t expect was how awful the text looked on it. I hooked up the monitor to the MBP using my Apple TV HDMI cable. The text was unreadable. I use similar TV-style monitors for my main system and they display text just fine. However, I’m using normal display ports and cables for my mini. This is the first time I’ve gone HDMI direct.

So off to websearch I went. Sure enough, this is a known, longstanding problem that many people have dealt with before. The MBP sees the TV as a TV, not a monitor, and produces a YUV signal instead of the RGB signal that improves text crispness. Pictures look pretty; text looks bad.

All the searches led to this Ruby script. The script builds a display override file containing a vendor and product ID with 4:4:4 RGB color support. The trick lies in getting macOS to install, read, and use it properly. That’s because you can’t install the file directly to /System/Library/Displays/Contents/Resources/Overrides/ in modern macOS. Instead, you have to disable “rootless”.

I wasn’t really happy about going into recovery mode. Disabling System Integrity Protection feels like overkill for a simple display issue. But it worked. It really only took a few minutes to resolve once I convinced myself it was worth doing. If you have any warnings or cautions about installing custom display overrides, please let me know. It feels like I did something morally wrong, even if it did fix my problem.

My external display went from being unusable to merely imperfect. The text is still a bit blurry, but you can read it without inducing a migraine. It’s not nearly as crisp as a proper DisplayPort connection (which looks fine with this monitor), but I don’t have to buy a new cable, and I don’t plan to use this setup much.

If I were going to use this monitor regularly with the MBP, I’d definitely purchase a proper cable. As it is, I’m happy enough to have found a workable-ish solution. The monitor is quite nice especially in “shop mode”, and has so far worked well with Chromecast, AppleTV, and Wii.

The problem with macOS and Drawing

Trying to write cross-platform code for drawing routines is ridiculously frustrating. It’s never just a matter of creating cross-platform type aliases. UIColor and NSColor are cousins, not brothers. UIBezierPath and NSBezierPath are slightly more distantly related, especially when it comes to adding curves and retrieving an underlying cgPath.

Although a lot of tech debuts on Cocoa, it often gets refined in Cocoa Touch. That refinement doesn’t prioritize cross-platform development, so the “second time, better design” features that leap into iOS land are hard to support with a single code-base. (And yes, I’m looking at you, layout constraints, with your different dictionary types for metrics.)

Starting last year, UIKit drawing really improved. It already had several advantages over Cocoa drawing, but last year’s update pushed the APIs to a new level.

NSImage has its init(size:flipped:drawingHandler:) initializer (introduced in 10.8). In iOS 10, UIImage adopted and refined that closure-based drawing approach with UIGraphicsImageRenderer, UIGraphicsImageRendererFormat, and friends. A renderer can return an image or PNG/JPEG data. Very handy, very convenient, wide-color aware, nifty. And it phases out a lot of home-built solutions that devs had been using to get exactly those results.
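
In practice, the closure-based approach looks something like this minimal sketch (colors and sizes arbitrary):

import UIKit

let size = CGSize(width: 100, height: 100)
let renderer = UIGraphicsImageRenderer(size: size)

// Render to a UIImage...
let image = renderer.image { context in
    UIColor.blue.setFill()
    context.fill(CGRect(origin: .zero, size: size))
}

// ...or directly to PNG data
let pngData = renderer.pngData { context in
    UIColor.red.setFill()
    context.fill(CGRect(origin: .zero, size: size))
}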

The new features are not fully refined. For example, there isn’t a single doc comment across the entire feature suite. Which, you know, sigh. But the architecture is there if not perfect or final, and it’s several steps beyond the corresponding init on macOS.

With this new tech in place, I’m starting to wonder about some older APIs like Cocoa Touch’s drawing stack, which lets you push and pop contexts for immediate drawing. It’s handy for using UIKit calls to draw into custom Core Graphics bitmap contexts. Is that still going to be around and used after WWDC this year? Or next year? Or will it be subsumed into UIGraphicsImageRenderer? It looks ripe for redesign and replacement.
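
That stack works roughly like the following sketch, where UIKit calls draw into a hand-built Core Graphics bitmap. (I’m glossing over the coordinate-system flip between CGContext and UIKit here.)

import UIKit

let side = 64
guard let bitmap = CGContext(
    data: nil, width: side, height: side,
    bitsPerComponent: 8, bytesPerRow: 0,
    space: CGColorSpaceCreateDeviceRGB(),
    bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { fatalError("Unable to create bitmap context") }

UIGraphicsPushContext(bitmap)  // UIKit drawing now targets the bitmap
UIColor.orange.setFill()
UIBezierPath(ovalIn: CGRect(x: 8, y: 8, width: 48, height: 48)).fill()
UIGraphicsPopContext()

let result = bitmap.makeImage()  // CGImage?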

I also don’t think UIGraphicsBegin/EndImageContext is long for the world. It was first introduced in iOS 2. It may be deprecated in iOS 11. The new APIs feel more appropriate to the design and philosophy of iOS, and these calls always stood out as a very odd approach.

But let me get back to the issues of writing “Swift Drawing”. When it comes to the book, the platform dichotomy hits hard. To get a sense of how much effort it would take to write about drawing for both platforms, I worked on my blend mode appendix sample code this weekend.

This is extremely simple code. It just iterates through the standard Core Graphics blend modes to create stock samples. The appendix describes each mode. We’re talking about maybe a page of code. Getting it to Cocoa? Not so easy.
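
In spirit, the sample looks something like this sketch, with a handful of modes rather than the full set and far simpler drawing than the book’s versions:

import UIKit

let modes: [CGBlendMode] = [.normal, .multiply, .screen, .overlay, .darken, .lighten]
let size = CGSize(width: 100, height: 100)
let renderer = UIGraphicsImageRenderer(size: size)

let samples = modes.map { mode in
    renderer.image { context in
        UIColor.blue.setFill()
        context.fill(CGRect(origin: .zero, size: size))
        UIColor.red.setFill()
        context.fill(CGRect(x: 25, y: 25, width: 50, height: 50), blendMode: mode)
    }
}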

I put together a very rough skeleton granting Cocoa the basic abilities of Cocoa Touch. I’m not very happy with this code, and I’m not sure I’m going to keep pushing on it or developing its capabilities, which is why it’s in a gist and not a repo.

I suspect we may see new image APIs — both Cocoa and Cocoa Touch — this WWDC. I don’t want to compete with Apple APIs and I also don’t want to have to write two completely distinct drawing books: one for iOS and one for macOS.

So I’m still struggling because while most basic calls for CoreGraphics are interchangeable across platforms, the Cocoa and Cocoa Touch layers are not. If I go with samples that support both platforms, I have to pick one of the following approaches:

  • create an artificial UIKit for Cocoa (that is a disconnect for macOS readers),
  • create an unrealistic platform-independent backbone to unify Cocoa and Cocoa Touch, which means I can’t talk to Cocoa Touch readers in their native language (that is objectively bad), or
  • stick with Cocoa Touch like I did in the original book (simplest but excludes macOS readers).

After this weekend’s experimenting, I’m leaning towards keeping things mostly iOS-centric. I may mix in a bit of “fake UIKit”.

I know I don’t want to go the route of “Color”/“Font”/“BezierPath”/“LayoutConstraint” cross-platform types that I use in my own development work. It’s fine to use pseudo-types and typealiases internally, but it’s not a way to introduce native tech in a book. Books should teach to the platform with the least distance between the reader and the APIs. No dependencies, few workarounds, minimal obfuscation.

I could build a small set of “fake UIKit” calls for macOS samples. This would allow me to focus discussions on the true cross-platform features like gradients, paths, colors, and bitmaps, without having to write two entire books at once, but I’m not sure it’s worth the effort or the hardship on macOS readers.

The costs look like this:

  • Full macOS support would double my work.
  • Adding compatibility discussions and partial macOS code would add about 60% or more work.
  • With a minimal “fake UIKit” and a few in-book references to that approach, I could probably squeak by with about 25-35% extra work.
  • Based on initial feedback, macOS readers would account for at most about 10% of sales.

Bottom line: macOS is a big old monkey wrench for this particular project. After investing a weekend, I’m not sure I want to go there but I’m interested in hearing your feedback after looking at my samples and prototype gists.

Swift Drawing: Would you buy this book?

Over the years, I have received any number of letters, emails, and tweets asking if I were going to update my iOS Drawing book. I first wrote a version for Addison Wesley/Pearson as a kind of fun side project between major iOS releases and it was warmly received. Readers liked the practical solutions for low-level drawing and fancy effects.

Since then, iOS has evolved and Swift has arrived. With new types like UIGraphicsImageRenderer and type extensions for structs like CGRect, drawing has a lot of updated power tools on hand and ready to deploy. Using C-based APIs with Swift can be a bit tricky, so it might be nice to have some guidance and examples. Plus, playgrounds now make the perfect sample code platform.
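
To give a flavor of those extensions, compare the old C-style calls with the Swift method syntax (a trivial sketch):

import CoreGraphics

let rect = CGRect(x: 0, y: 0, width: 100, height: 50)

// Instead of CGRectInset(rect, 10, 10) and CGRectGetMidX(rect):
let inset = rect.insetBy(dx: 10, dy: 10)
let centerX = rect.midX

// Slicing, Swift-style
let (slice, remainder) = rect.divided(atDistance: 40, from: .minXEdge)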

So would you be a potential purchaser if I wrote this? I’ve updated about 25% already, just to get a feel for the changes (and I think they’re quite cool), and am looking at using Leanpub to roll out the book a bit at a time. I’m hoping there’s enough traction out there in potential reader-land to make this worth building. I’d anticipate finishing just before WWDC.

Thoughts? Interest? Please let me know. Thanks!

Dear Erica: Playground Support Folder

“N” asks: “Hey, is the “shared playground folder” long gone, or does it still exist?”

Still there, still useful.

The big difference for long-time playground users is that it moved into the PlaygroundSupport module from the XCPlayground module. The latter was deprecated in Xcode 7. It’s a tiny module that supports playground-specific features. Its playgroundSharedDataDirectory constant gives you a well-defined sandboxed folder that’s shared between all playgrounds.

This is, by the way, a terrible symbol name (take note!), as it returns a URL. It used to return a string but the name never got updated:

public let playgroundSharedDataDirectory: URL

I often build playground-specific subfolders so my directory doesn’t get all messy.
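
Something like this sketch, with a folder name of your choosing (“MyProject” is hypothetical):

import Foundation
import PlaygroundSupport

let projectFolder = playgroundSharedDataDirectory
    .appendingPathComponent("MyProject")  // hypothetical subfolder name
try? FileManager.default.createDirectory(
    at: projectFolder, withIntermediateDirectories: true)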

Another valuable feature is indefinite execution support (needsIndefiniteExecution) for playground pages that have to perform asynchronous work before completion. You can use this support to build little playground-based utilities instead of writing shell scripts.
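
In its simplest form, that support looks like this sketch (the endpoint URL is a placeholder):

import Foundation
import PlaygroundSupport

PlaygroundPage.current.needsIndefiniteExecution = true

let url = URL(string: "https://example.com/api")!  // placeholder endpoint
URLSession.shared.dataTask(with: url) { data, _, _ in
    print("Received \(data?.count ?? 0) bytes")
    PlaygroundPage.current.finishExecution()  // stop once the async work completes
}.resume()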

I have some pages that work with Imgur, Google search, Wolfram queries, etc. A nice thing about building in playgrounds vs shell is that you can integrate audio and visual elements rather than having to save them to files and open them in helper applications.

If you’re writing API utilities, enable manual execution. Constant reloads can almost immediately deplete, for example, your Gist API query count for the day. Oops.

In Xcode, the shared data folder is available for iOS, macOS, and tvOS playgrounds. The shared data folder is not available on iOS’s Swift Playgrounds. This policy discourages custom local storage and access beyond standard media library locations.

There are some further protocols and types under the PlaygroundSupport umbrella in Swift Playgrounds. These aren’t available for Xcode playgrounds because they’re meant for use in Playground Books.

The extra functionality is part of Playground Book support, which underlies the tech in “Learn to Code”, etc. These additional APIs include items like a key-value data store, message passing between the live view and the primary playground page, and more.

If you want to learn more about Playgrounds, I have a book. It discusses the features you use in Xcode and offers an overview of how to use iOS Playgrounds. I quite deliberately did not include much about Playground Book authoring, as the topics are somewhat orthogonal.

I’ll probably be revising both Playground Secrets and Power Tips and my Swift Documentation Markup after WWDC. There’s also a three-book bundle available with Swift from Two to Three.

Dear Erica: Snake Case

An anonymous reader writes, “Is there ever a good reason to use snake case in Swift code?”

Although I’m tempted to respond with a snarky no_there_isn't, of course you’ll use snake case, and with good reason. The real world offers many legacy and non-native libraries, and coders regularly bring snake case calls and constants into their production code.

When developing your own symbols, the Swift community consensus has centered on UpperCamel for type and protocol names and lowerCamel for members, freestanding functions, and other symbols. This is a non-binding convention but it will make your code look and feel more Swifty.
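
For example, as an illustration of the convention rather than anything the compiler enforces:

// UpperCamel for types and protocols
protocol ImageProviding { }
struct ImageCache: ImageProviding {
    // lowerCamel for members
    let maximumCount = 100
    func flushExpiredImages() { }
}

// Snake case typically shows up only at legacy or C boundaries, e.g.
// a hypothetical imported call like image_cache_flush_all()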