Archive for October, 2016

CGAffineTransform constructors and transformers

Core Graphics transforms provide control over coordinate systems, drawing contexts, and paths by enabling you to apply rotation, scaling, and translation. As I’ve been writing Swift Style, I hope I’m now better able to articulate why I hate the way they’ve been automatically renamed for Swift.

The current Swift constructors and transformers are:

  • init(rotationAngle: CGFloat)
  • init(scaleX: CGFloat, y: CGFloat)
  • init(translationX: CGFloat, y: CGFloat)
  • rotated(by: CGFloat)
  • scaledBy(x: CGFloat, y: CGFloat)
  • translatedBy(x: CGFloat, y: CGFloat)

Here are my issues:

Confusing naming. “Scale” can be a noun or a verb. You can only figure out which one was intended by looking at the translation API. If you have to consult another API to figure out what a call means, the API is badly designed.

In this example, if the two arguments were swapped around and balanced (xScale, yScale), the API would make a lot more sense. I’m not arguing for those terms. I just want to make the point that you should avoid using ambiguous words. This is a classic example of that error.

Missing Types. Why scale and translate by identical meaningless “x” and “y” values? Or rotate by “rotationAngle”? There are better, more exact terms of art and a couple of missing types to support those terms.

Although Core Graphics added CGVector, it is still missing two key types: CGAngle (which stores radians or degrees) and CGScale (which stores (sx, sy) scaling-factor pairs).

If I were in charge, you’d be able to initialize a CGAngle using radians, degrees, and π count (for example, 360 degrees is 2π and 45 degrees is 0.25 π). And you’d be able to pull out properties for each of those items as well from a unified structure. A CGScale would provide a natural way to store scaling factors.
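To make this concrete, here’s a rough sketch of what such a hypothetical CGAngle might look like. Nothing here exists in Core Graphics; the names and design are mine:

```swift
import CoreGraphics

/// Hypothetical type: not part of Core Graphics.
/// Stores one canonical value (radians) and derives the rest.
public struct CGAngle {
    public var radians: CGFloat

    public init(radians: CGFloat) { self.radians = radians }
    public init(degrees: CGFloat) { self.radians = degrees * .pi / 180 }
    public init(piCount: CGFloat) { self.radians = piCount * .pi }

    public var degrees: CGFloat { return radians * 180 / .pi }
    public var piCount: CGFloat { return radians / .pi }
}

// 45 degrees, expressed three equivalent ways
let a = CGAngle(degrees: 45)
let b = CGAngle(piCount: 0.25)
let c = CGAngle(radians: .pi / 4)
```

A unified structure like this would let callers initialize and read angles in whichever unit fits the problem, while the transform APIs store a single canonical value.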

Redundancy. If you can come up with a more redundant term than rotationAngle, I’d like to hear it. rotationAngle uses two nearly identical words to do the work of one: “radians”. It also presumes a single, confusing initialization style while ignoring other common use cases like degrees and multiples of π.

Incorrect Type Use. Although it’s convenient to break out factors into component elements, you’re really translating by a vector or scaling by an (sx, sy) pair. You should be allowed to use these types directly (for example, scale: CGScale) as well as the broken-out member calls (for example, sx:sy:). This is most notable with CGVector. Although the CGVector type was introduced long after affine transforms, the transform APIs were never updated to take advantage of this semantically richer approach.
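For illustration, here’s what vector-aware overloads might look like. These are hypothetical extensions, not part of the Core Graphics API; they simply forward to the existing component-based calls:

```swift
import CoreGraphics

// Hypothetical overloads: not part of the Core Graphics API.
extension CGAffineTransform {
    public init(translationVector vector: CGVector) {
        self.init(translationX: vector.dx, y: vector.dy)
    }

    public func translatedBy(vector: CGVector) -> CGAffineTransform {
        return translatedBy(x: vector.dx, y: vector.dy)
    }
}

// Translating by a vector reads as a single semantic unit
let moved = CGAffineTransform.identity
    .translatedBy(vector: CGVector(dx: 5, dy: 10))
```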

Mix and Match Prepositions. Core Graphics includes rotated(by:), scaledBy(), translatedBy(). So where does that “by” belong? In a coherent API, each “by” should either be in the parentheses or outside.

I vote “out”. Doing so enables you to create call families like these, where the specific argument labels preserve shared abstractions and the “by” isn’t glommed onto the first argument label, throwing off the API’s balance.

public func translatedBy(tx: CGFloat, ty: CGFloat) -> CGAffineTransform
public func translatedBy(vector: CGVector) -> CGAffineTransform

Speaking of unbalanced arguments, scaleX is way longer than y, as is translationX vs y, yet both arguments have equal weight and priority in the calls. Unbalanced labels offer another good indicator of badly designed APIs.


Concatenate got your tongue?

Quick quiz time! Given these two transforms:

let translation = CGAffineTransform(translationX: 5, y: 10)
let rotation = CGAffineTransform(rotationAngle: CGFloat(Double.pi) / 6)

Consider the following assignments.

let a = rotation.concatenating(translation)
let b = translation.concatenating(rotation)
let c = rotation.translatedBy(x: 5, y: 10)
let d = translation.rotated(by: CGFloat(Double.pi) / 6)

Can you tell me which of these outcomes match each other before scrolling down to the answer? (No cheating!) To provide a little buffer between here and there, let me remind you about what a basic affine transform looks like:


For translation, the tx and ty entries specify the offsets for x and y:

┌                       ┐
│  1.000   0.000   0.000│ translation: (5.0, 10.0)
│                       │ scale:       (1.00, 1.00)
│  0.000   1.000   0.000│ rotation:    0.00°
│                       │ rotation:    0.00 π
│  5.000  10.000   1.000│ rotation:    0.00 radians
└                       ┘

When rotating, the abcd slots are filled by cos(θ), sin(θ), -sin(θ), and cos(θ):

┌                       ┐
│  0.866   0.500   0.000│ translation: (0.0, 0.0)
│                       │ scale:       (1.00, 1.00)
│ -0.500   0.866   0.000│ rotation:    30.00°
│                       │ rotation:    0.16 π
│  0.000   0.000   1.000│ rotation:    0.52 radians
└                       ┘

Multiplying translation by rotation (T × R) gives you this:

┌                       ┐
│  0.866   0.500   0.000│ translation: (-0.67, 11.16)
│                       │ scale:       (1.00, 1.00)
│ -0.500   0.866   0.000│ rotation:    30.00°
│                       │ rotation:    0.16 π
│ -0.670  11.160   1.000│ rotation:    0.52 radians
└                       ┘
And multiplying rotation by translation (R × T) gives you this:

┌                       ┐
│  0.866   0.500   0.000│ translation: (5.0, 10.0)
│                       │ scale:       (1.00, 1.00)
│ -0.500   0.866   0.000│ rotation:    30.00°
│                       │ rotation:    0.16 π
│  5.000  10.000   1.000│ rotation:    0.52 radians
└                       ┘
The results are identical except in the (tx, ty) offset slots.

Okay, ready with your answers? If you guessed a/d and b/c, you’re right. As a basic rule of thumb, x.concatenating(y) will match y.performing(x), where the “performing” call is rotated(by:), translatedBy(x:y:), or scaledBy(x:y:).
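You can check that rule of thumb directly in a playground. A small sketch, reusing the quiz’s transforms:

```swift
import CoreGraphics

let translation = CGAffineTransform(translationX: 5, y: 10)
let rotation = CGAffineTransform(rotationAngle: CGFloat(Double.pi) / 6)

// x.concatenating(y) matches y "performing" x:
let viaConcatenation = rotation.concatenating(translation)          // R × T
let viaPerforming = translation.rotated(by: CGFloat(Double.pi) / 6) // also R × T
print(viaConcatenation == viaPerforming)
```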

Concatenating a transform simply multiplies one transform by another: T1 x T2. However, performing a transformation (rotated, translated, scaled) gives you T2 x T1. If you hop into the module declarations, the answers are there to see in the hipster retro documentation:

/* Translate `t' by `(tx, ty)' and return the result:
     t' = [ 1 0 0 1 tx ty ] * t */

@available(iOS 2.0, *)
public func translatedBy(x tx: CGFloat, y ty: CGFloat) -> CGAffineTransform

/* Scale `t' by `(sx, sy)' and return the result:
     t' = [ sx 0 0 sy 0 0 ] * t */

@available(iOS 2.0, *)
public func scaledBy(x sx: CGFloat, y sy: CGFloat) -> CGAffineTransform

/* Rotate `t' by `angle' radians and return the result:
     t' =  [ cos(angle) sin(angle) -sin(angle) cos(angle) 0 0 ] * t */

@available(iOS 2.0, *)
public func rotated(by angle: CGFloat) -> CGAffineTransform


Here are the results, printed from a Swift playground, just to confirm that the behavior is, in fact, exactly as documented:

CGAffineTransform(a: 0.866025403784439, b: 0.5, c: -0.5, d: 0.866025403784439, tx: 5.0, ty: 10.0)
CGAffineTransform(a: 0.866025403784439, b: 0.5, c: -0.5, d: 0.866025403784439, tx: -0.669872981077805, ty: 11.1602540378444)
CGAffineTransform(a: 0.866025403784439, b: 0.5, c: -0.5, d: 0.866025403784439, tx: -0.669872981077805, ty: 11.1602540378444)
CGAffineTransform(a: 0.866025403784439, b: 0.5, c: -0.5, d: 0.866025403784439, tx: 5.0, ty: 10.0)

I will leave my rants about the absurd and inconsistent naming, caps, argument labels, and initializers for another day.

Writing updates and asking “Is Github my new Dropbox?”

I’m testing the waters for the first time, using Github rather than Dropbox to coordinate a private project. I’ve used private repos before for material that wasn’t meant for public consumption, or to stage material that would later be released openly, but this is the first time I’m trying it for material that’s substantially not code.

I’ve been meaning to give this a go ever since Github changed its policy to allow unlimited private repositories. I used to be limited to just five in total and I guarded those slots carefully. Under the new policy, I have repos to burn. Today was the first time that I set one up to use in this way.

It feels odd using Github instead of Dropbox as I’m so used to my Github content being primarily open, and Dropbox requiring explicit permissions. Have you tried using Github this way? And how have your experiences been?

The reason I’m testing out Github is that I’m updating iOS Drawing for Swift. I have a week or so to burn while I’m waiting on editorial feedback and tech review on my Swift Style title from Pragmatic. It will take another 4-6 weeks for Addison Wesley to release iOS Drawing rights back to me but I figured I’d get a head start writing some test chapters and get some early feedback about the project while I had some downtime.

I’ve used Dropbox for years to provide material to beta readers and gather their feedback as well as to coordinate material on multiple machines. In testing out Github, I was inspired by Pragmatic’s workflow.

Pragmatic uses a delightfully retro SVN workflow for version-controlled interactions between editors and authors. (I’ve had to create an SVN/git cheatsheet to remind myself how to SVN all the things.) Pearson/AW, in contrast, uses Basecamp to manage projects. Basecamp offers a lot of great team features including messaging, calendars, email updates, and so forth, and I’ve been quite happy with it.

Book projects tend to be hefty, especially those with lots of illustrations and sample code, but Github has generous file policies: a 1 GB repo limit, warnings at 50 MB per file, and a 100 MB hard file limit. These are far beyond what I’d need.

I’ve recently changed my overall personal workflow, having been inspired by conversations with editors at O’Reilly. O’Reilly has been pioneering modern, flexible content using markup source. I took my lead from them. (I’m personally using CommonMark instead of AsciiDoc and pandoc instead of Atlas, but the ideas are similar.)

Pandoc has been a pure delight. CommonMark offers less nuance and control than Microsoft Word (however ugly Word is, it has the power and all the ugly-but-practical features you need for professional publishing), but pandoc lets me push from manuscript to book in seconds.

I don’t have to use Calibre to build epub, pdf, and mobi output. My code examples are readable and my tables of contents are perfect. I’ve written a bunch of command-line utilities that automate the process of building the ebooks, zipping up archives, and storing copies in a Dropbox beta folder. I still use Dropbox to provide early reader access.

I built Swift Style‘s first draft using this workflow, writing in MacDown, an open source macOS Markdown editor. I like MacDown’s side-by-side display but, to be honest, for material of any size, it has no way to keep the text and output in sync, especially once you introduce illustrations.

If I find some time, I’ll probably try to mess with the source to add this functionality, because once you drop the ability to see your edits as you add them, the utility loses a lot of its charm. But that’s a project for another day.

In the meantime, I’m just getting settled into Github for this project. A lot of my work steps are similar: I start by pulling and wrap up by pushing but now it’s to the repo, and not to Dropbox. Github offers more version control than Dropbox’s undelete functionality and I don’t have the same worries about filling up my quota.

I’m curious: Are you using Github for non-coding projects? And how has that worked out for you? Did the DNS incident a few days ago make you rethink? Or are you committed to this kind of collaborative tool? Let me know. Thanks!

Animating letters into place

I love playgrounds. This is exactly the kind of challenge they were built for. Joe Fabisevich asked: “Does anyone know how (or if) it’s possible to animate every character of a UILabel (or UITextView/some other text holding view if necessary)? I want to replicate this effect, of the word Hello!”


If you want to read about this kind of UIKit fun, my Gourmet Cookbook covers a lot of it. The book is in Objective-C but it’s not hard to translate the tricks over to Swift.

Once you’re in Swift, you can easily put together proofs of concept and tweak parameters from a playground, with instant feedback: from speeding up and slowing down the animation to testing out different font faces and color schemes.
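As a starting point, here’s a minimal sketch of the per-character approach: split the string into one label per character and stagger the animations. The helper name, timings, and offsets are all my own choices, not the original demo’s:

```swift
import UIKit

// Hypothetical helper: one UILabel per character, animated into place
// with a staggered delay. Names and timings are mine, not the demo's.
func animateLetters(_ text: String, into container: UIView, font: UIFont) {
    var x: CGFloat = 0
    for (index, character) in text.characters.enumerated() {
        let label = UILabel()
        label.text = String(character)
        label.font = font
        label.sizeToFit()
        label.frame.origin = CGPoint(x: x, y: 40) // start offset below
        label.alpha = 0
        container.addSubview(label)
        x += label.bounds.width

        UIView.animate(withDuration: 0.4,
                       delay: 0.05 * Double(index), // stagger each letter
                       options: [],
                       animations: {
                           label.alpha = 1
                           label.frame.origin.y = 0
                       },
                       completion: nil)
    }
}
```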

Source Code

And another take, with a little fade in and scale popping:

This one weird trick for drawing dashes

Just a quick tip before the weekend: You get dashes and you get dashes and you get dashes and…


Here’s the code.

let drawSize = CGSize(width: 100, height: 100)
let ovalRect = CGRect(origin: .zero, size: drawSize)
    .insetBy(dx: 5, dy: 5)

let path = UIBezierPath(ovalIn: ovalRect)
path.lineWidth = 5

// Set up dash pattern 
var dashes: [CGFloat] = [12, 2, 2, 2]

// Plug it into the Bezier path
path.setLineDash(&dashes, count: dashes.count, phase: 0)

// And draw
let image = UIGraphicsImageRenderer(size: drawSize).image { context in
    let bounds = context.format.bounds
    UIColor.white.set(); UIRectFill(bounds)
    path.fill(); path.stroke()
}

Your (not very) weird tips:

  • Use an even number of dash values as on-off point patterns. Odd-count patterns don’t look right: the second time through the pattern, the on-off roles flip. Just try it out and you’ll see what I mean.
  • Make the sum of the dashes divide cleanly into diameter * π (in this example, 90π). This prevents weirdness where the start and end meet up.
  • Make sure the dash array is declared with var and not let, so you can pass it using “&” instead of doing some godawful UnsafePointer<CGFloat> thing.
  • Draw the path to see your dashes. Swift playgrounds don’t show borders or dashes in QuickLook previews.

Dashes are always as thick as a path’s lineWidth, and the lineWidth is centered on the path: in this example, 2.5 points lie outside the path and 2.5 points inside, which you can see in the gaps between each dash segment.

Leave enough space so the border won’t clip by applying rect.insetBy. It’s one of the new Swiftier Core Graphics calls, and it’s much easier to read than the old CGRectInset-style calls.
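For comparison, here’s the same inset both ways; the instance method reads left to right:

```swift
import CoreGraphics

let rect = CGRect(x: 0, y: 0, width: 100, height: 100)

// New Swiftier instance method
let inset = rect.insetBy(dx: 5, dy: 5)
// Old style was CGRectInset(rect, 5, 5)

// The origin moves in by (5, 5); each dimension shrinks by 10,
// so inset is (x: 5, y: 5, width: 90, height: 90)
```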


Solving Mathieu’s Phone: The mystery of disappearing gigs

The other day, Mathieu’s 16 GB phone suddenly had no space. Even after rebooting, even after reformatting (and not restoring from backup), all his spare bytes were being sucked into a black hole.

He had no songs, few apps, a modest number of photos, and under a gigabyte of space available, making him unable to compile, load, and test his apps.


Each time he deleted one of his apps, the space would mysteriously fill up within a few minutes, adding to the ever increasing “other” bar in iTunes:


This delete-then-lose-space behavior made me think that iCloud was trying to store files locally on his phone to reduce cloud access. I suggested that he disable iCloud and sync just the bare essentials like contacts, calendars, and notes. (Mathieu has a paid 300 GB iCloud plan.) Sure enough, once he logged out and rebooted, over 7 GB of space was freed up and he was able to use his phone again.

I’m not super-familiar with iCloud so if anyone can further explain how this works, and how to set up the phone to limit it from glomming space, I’d sure appreciate being able to pass that along. Thanks!

Which code style reigned supreme? Unswifty Procedural vs Swifty Functional Results

Yesterday, I asked you to pick a style for some sample code about character-by-character layout.

  • Choice A used a procedural for-in loop
  • Choice B used a map-reduce pair
  • Choice C used a series of maps, followed by a reduce.

The people have spoken, and they spoke primarily in favor of choice A.

Summarizing the majority view, David wrote “For my money, A wins by a mile. It’s just far more readable, maintainable, with far less mental work required.”

Here’s the layout I was working on presenting:


And the code I ended up using is this. It ended up closer to Choice C than Choice A, but I took my guidance from the “make it clearer” crowd. I hope I picked up the “more readable, maintainable, less mental work” theme the voters asked for.

// Convert characters to attributed strings 
// and measure them
let letters = characters.lazy
    .map({ String($0) })
    .map({ NSAttributedString(string: $0, attributes: attributes) })
let letterSizes = { $0.size().width })

// Calculate the full extent
let fullSize = letterSizes.reduce(0 as CGFloat, +)
var consumedSize: CGFloat = 0

// Draw each letter proportionally
for (letter, letterSize) in zip(letters, letterSizes) {
    let halfWidth = letterSize / 2.0
    consumedSize += halfWidth
    defer { consumedSize += halfWidth }
    pushDraw(in: context) {
        // Rotate the context
        let theta = 2 * π * consumedSize / fullSize
        context.cgContext.rotate(by: theta)
        // Translate up to the edge of the radius 
        // and move left by half the letter width.
        context.cgContext.translateBy(x: -halfWidth, y: -r)
        letter.draw(at: .zero)
    }
}

I took Paul C’s advice to heart: “I like each step to be a useful thought that conveys to a human some but not too much new information about what’s going on.” My revised approach breaks the code down into more discrete steps.

Reader feedback led me to consider a few excellent points. Was I going to re-use any of the information? (Yes, I was.) If so, I needed to break down the functional chain. If I went functional, shouldn’t I be using lazy mapping? (Yes, I should.)

The zipped for-loop let me pair letters with their sizes so I could perform the rest of the layout without re-calculating each letter size. One perfectly cromulent approach put forth (that I decided not to go with) was that I could merge the characters into a single string (presumably disabling ligatures and kerning) and calculate the full width once.

If I were only calculating the width, this would work fine. Once I decided to retain and re-use individual sizing, that approach became less attractive, but I thank the several people who suggested it.
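For the record, the single-string alternative would look something like this sketch (Swift 3 attribute names; kerning zeroed so the total matches the per-letter sum):

```swift
import UIKit

// Sketch of the single-string alternative.
// Zeroing kerning keeps the total equal to the sum of the letter widths.
let attributes: [String: Any] = [
    NSFontAttributeName: UIFont.systemFont(ofSize: 24),
    NSKernAttributeName: 0
]

let word = NSAttributedString(string: "Hello", attributes: attributes)
let fullWidth = word.size().width // one measurement instead of one per letter
```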

Finally, a shout-out to Rob N, who suggested Core Text layout for better spacing, which I’ll try out this morning.

Thank you everyone for your feedback and input.


Update: I gave it a try, using TextKit, laying out the text in a line and then using the resulting glyph widths. I had to enable kerning to get a better layout result (my first attempts were identical to yesterday’s). Here’s the updated layout, plus an overlay of the two. I think the “W” looks a lot better.




Holy war: Unswifty Procedural vs Swifty Functional

Background: I’m working on a proof of concept chapter.

Goal: Sample code that’s teaching about proportional spacing in character-by-character layout.

The contenders: I initially went with a basic loop (choice A) rather than reduce() (choices B and C). For an audience that’s learning about graphics rather than Swift, does choice A read significantly more easily? Are there compelling reasons to prefer the functional approaches? And if so, how far would you break things down? Which would you go with, and why? Let the battle commence. Which code reigns supreme?

// Choice A
// Calculate the full extent
var fullSize = 0 as CGFloat
for character in letters {
    let letter = NSAttributedString(string: String(character), attributes: attributes)
    fullSize += letter.size().width
}

// Choice B
// Calculate the full extent
let fullSize = letters
    .map({ NSAttributedString(string: String($0), attributes: attributes) })
    .reduce(0 as CGFloat, { return $0 + $1.size().width })

// Choice C
// Calculate the full extent
let fullSize = letters
    .map({ String($0) })
    .map({ NSAttributedString(string: $0, attributes: attributes) })
    .map({ $0.size().width })
    .reduce(0 as CGFloat, +)


Which code reigns supreme? Results discussed here.

The Joys of iOS 10 UIKit Drawing

I just spent a few days enjoying all the iOS 10 UIGraphics renderer utilities for images and PDFs and wide colors and so forth. It’s lovely. I thought I’d share a post comparing the old world and the new.


Remember this?

// Create a color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
if (colorSpace == NULL) {
    NSLog(@"Error allocating color space");
    return nil;
}

// Create the bitmap context.
CGContextRef context = CGBitmapContextCreate(
    NULL, width, height,
    BITS_PER_COMPONENT, // bits = 8 per component
    width * ARGB_COUNT, // 4 bytes for ARGB
    colorSpace,
    (CGBitmapInfo) kCGImageAlphaPremultipliedFirst);

if (context == NULL) {
    NSLog(@"Error: Context not created!");
    CGColorSpaceRelease(colorSpace);
    return nil;
}

// Push the context.
UIGraphicsPushContext(context);

// Perform drawing here

// Pop the context.
UIGraphicsPopContext();

// Convert to image
CGImageRef imageRef = CGBitmapContextCreateImage(context);
UIImage *image = [UIImage imageWithCGImage:imageRef];

// Clean up
CGImageRelease(imageRef);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);


let image = renderer.image { context in
    let bounds = context.format.bounds
    for amount in stride(from: 1.0 as CGFloat, to: 0.0, by: -0.1) {
        let color = UIColor(hue: amount, saturation: 1.0, 
            brightness: 1.0, alpha: 1.0)
        let rects = bounds.divided(
            atDistance: amount * bounds.size.width, from: .maxXEdge)
        color.set(); UIRectFill(rects.0)
    }
}

public func imageExample(size: CGSize) -> UIImage? {
    let bounds = CGRect(origin: .zero, size: size)
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let (width, height) = (Int(size.width), Int(size.height))
    // Build Core Graphics ARGB context
    guard let context = CGContext(data: nil, width: width, 
        height: height, bitsPerComponent: 8, 
        bytesPerRow: width * 4, space: colorSpace, 
        bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue) 
        else { return nil }
    // Prepare CG Context for UIKit
    UIGraphicsPushContext(context); defer { UIGraphicsPopContext() }
    // Draw to context using UIKit calls
    UIRectFill(bounds)
    let oval = UIBezierPath(ovalIn: bounds); oval.fill()
    // Fetch the image from the context
    guard let imageRef = context.makeImage() else { return nil }
    return UIImage(cgImage: imageRef)
}

extension UIImage {
    public func grayscaled() -> UIImage? {
        guard let cgImage = cgImage else { return nil }
        let colorSpace = CGColorSpaceCreateDeviceGray()
        let (width, height) = (Int(size.width), Int(size.height))
        // Build context: one byte per pixel, no alpha
        guard let context = CGContext(data: nil, width: width, 
            height: height, bitsPerComponent: 8, 
            bytesPerRow: width, space: colorSpace, 
            bitmapInfo: CGImageAlphaInfo.none.rawValue) 
            else { return nil }
        // Draw to context
        let destination = CGRect(origin: .zero, size: size)
        context.draw(cgImage, in: destination)
        // Return the grayscale image
        guard let imageRef = context.makeImage() 
            else { return nil }
        return UIImage(cgImage: imageRef)
    }
}
Okay, I admit the bitmapInfo is still a little ugly, but isn’t the rest of it grand?

  • No more hacky UIGraphicsBeginImageContext()/UIGraphicsEndImageContext() dance, let alone fetching the image from the context. Why wasn’t it like this years ago?
  • I do love my Swift constructors. Creating the CGRect from the size is much cleaner now.
  • If you want Core Graphics, Swift gives you Core Graphics: there are still good reasons to create custom contexts (for example, device gray color spaces) or otherwise work at a low level without having to fire up Accelerate, Core Image, or other power frameworks.
  • You can pair the graphic stack context push with its pop if you do need custom context work. I love defer pairs that prepare for cleanup at the same time you do set-up. (We need to extend Swift reference types to allow paired deinit tasks too!)
  • Swift handles all the memory management. All of it!
  • Swift optionals and errors let you fail so much more gracefully.
  • As you’d probably expect, PDF drawing is just as easy as working with images.
  • The “hoisted” CG utilities (like CGRect’s divided(atDistance:from:) and the context’s draw and makeImage) are lovely too.

I’m seeing the light at the end of the tunnel as Swift Style wraps up. Anyone interested in me revisiting “iOS Drawing” for Swift? Or are there other topics you’d rather I follow up on?