Archive for the ‘Musings’ Category

App Clips: when is an app an app and when should it be a webpage

Apple’s new App Clip technology lets people load transient mini-apps without installing through the App Store. Users don’t have to authenticate or authorize the mini-app. It just downloads and works. Whether scanning a code (think QR code) or detecting an NFC tag, iOS users can download and run these pre-vetted packages, which represent a light, typically transactional, view of a larger app experience. I went through some writeups and videos today and thought I’d share a mental dump of my thoughts.

All App Clips are accessed via URLs and limited to 10MB in size. Their job is to move a user through a quick transaction and then either return control to the user or solicit them to download the full application. So if you’re selling cupcakes, you can “upsell” the experience from a single purchase to a loyalty-program app.
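For a sense of what that URL-driven launch looks like in code, here’s a minimal SwiftUI sketch of a clip reading its invocation URL. The cupcake-style “product ID” routing is purely hypothetical; the NSUserActivity handoff is the mechanism a clip uses to learn which URL launched it:

import SwiftUI

// A minimal sketch of an App Clip reading its invocation URL.
// The "product ID" routing below is hypothetical; the NSUserActivity
// handoff is how a clip learns which URL launched it.
@main
struct CupcakeClip: App {
    @State private var productID: String?

    var body: some Scene {
        WindowGroup {
            Text(productID.map { "Ordering \($0)" } ?? "Welcome")
                .onContinueUserActivity(NSUserActivityTypeBrowsingWeb) { activity in
                    // The scanned code or NFC tag encodes a URL; treat its
                    // last path component as the item to sell.
                    guard let url = activity.webpageURL else { return }
                    productID = url.lastPathComponent
                }
        }
    }
}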

App Clips are designed for transactions. As apps, rather than web pages, they integrate seamlessly with the store ecosystem, letting users purchase goods and services from an instant menu and pay with features like Apple Pay.

You can also use App Clips to decorate a museum or a city with points of interest, or a bus stop with upcoming travel information. You could do the same with the phone’s built-in QR recognition for URLs, landing people on a web page instead, but you’d miss the charm of the Apple “concierge” guiding you through the process.

Honestly, there’s nothing an App Clip can do (maybe other than something like Apple Pay) that a reasonably designed web page cannot, but it’s that charm and an eye towards lowering the transaction barrier that make Clips so compelling. With Clips, end-users can point, pick, and pay with an absolute minimum of effort. If this works as promised, many of the typical web hurdles (I speak as someone who has ordered a lot of MadGreens pick-up food over the last few days) disappear.

A Clip’s data is only transient if it isn’t transferred to a client app. Since the Clip must be developed in tandem with that app, an impulse purchase can be applied directly to loyalty points, putting the user on the path to a redemption reward. The side-by-side development and review, plus the ability to share assets and re-use code, make this a promising area for commercial transaction apps that don’t put a full, traditional app install between the user’s desire and their impatience.

I use the Safeway app regularly and I utterly loathe it. Safeway has three tiers of user prices: general rip-off, loyalty-program, and expensive-but-you-can-live-with-it Just4U. Swiping a card only gets you to the middle tier. To get the Just4U prices, you must spend a half-hour before each visit entering all the coupons you think you might need, or try to scan the teeny-tiny, often folded, ripped, or dirty QR codes on the shelves and then hope that the store’s remarkably bad WiFi and cell service doesn’t throw dozens of error alerts (each of which must be manually dismissed) that keep you from actually being awarded those better prices (and the occasional $5 or $10 off a $100 purchase).

If the App Clip experience shows how to smooth the pathway from desire to acquisition, then Safeway’s approach shows how to tick off customers and actively convince them not to purchase certain items when the coupons don’t work.

However, I’m not only excited about the transactional nature of App Clips. I can easily see how a well-backed municipal or organizational effort could provide “more information”, “deeper information”, “found facts”, and “inspiration” using the same technology. Again, the key lies in reducing turbulence, steering the user, and completing the goal in the shortest period of time.

In terms of designing your own apps for App Clips, remember that simple is always better. The more choices, options, and features you present in your Clip, the more work the user must do. Instead, consider pushing your top sellers as “Quick Buys”. That doesn’t mean you can’t offer deeper choices, but if you do, remember the people queued up behind your customer, impatiently waiting for them to pick their deep-menthe chocolate-macchiato with half-skim/half-soy, two-and-three-quarters squirts of Pumpkin Spice, and low-fat whipped cream.

I’d imagine that the best transactional clips will capitalize not only on desire but also on the flow and customer-to-customer dynamics of the in-store experience.

For App Clips outside the sphere of purchase, keep the same kind of location awareness. A visitor who has just discovered the fascinating history of your city’s belltower probably shouldn’t have to step back into traffic to better look up and view the little dancing people on the clockface.

When I first saw this feature, I wasn’t all that excited. Now that I’ve dived in a little more, I’m much more impressed by the thought, care, and clever delivery mechanism Apple has put together.

What do you think? Too clever for its own good or a tech we’ll be seeing for years to come?

Swift Packages and the need for metadata

Right now, a Swift Package defines the sources and dependencies for successful compilation. The PackageDescription specifies items like the supported Swift version, linker settings, and so forth.

What it does not do is offer metadata. You won’t find an email address for the active project manager, a list of major authors, descriptive tags, an abstract or discussion of the package, a link to documentation, deprecation information, or links to superseding packages upon deprecation.

For me, tags are especially important as they can drive discoverability on aggregators such as SwiftPackageIndex.com and SwiftPackageRegistry.com, just as they do in the various App Stores. However, all the other information I’ve mentioned can be equally valuable. The question is how this information should be stored and how it should travel.

Extending SwiftPM’s PackageDescription is the most obvious way, but the one with the greatest hurdles. Extending a specification means review, bikeshedding, and approval, but it would also produce the most rigorous and widely applicable outcome:

let package = Package(
    name: "now",
    platforms: [
      .macOS(.v10_12)
    ],
    metadata: [
      .tags(["dates", "calendar", "scheduling", "time", "appointments"]),
      .maintainer("erica@ericasadun.com"),
    ],
...

Freeform tags are, as Mattt of the three t’s pointed out to me, a folksonomy: a user-specified list that can be organized or freeform, sensible or not. Anyone familiar with the App Store will recall how tags have both benefited developers and been abused to drive traffic.

Of course, updating the PackageDescription spec is not the only approach. The same information could be packaged into a second file cohosted with Package.swift. Perhaps it could be called Package.metadata (if stored as JSON, for example) or PackageMetadata.swift (if the information is SwiftPM-like, with its own Metadata type and a supporting package to better enable automated validation and consumption).

Plain JSON has many advantages. It requires no secondary code development, as PackageMetadata.swift would. It has an obvious place to live and can be just as easily omitted. The standard for its contents could be community-sourced, a Decodable type could be developed specifically for it, and the JSON could be validated against that standard.
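To make that concrete, here’s a rough sketch of what a cohosted Package.metadata file and its consumer might look like. Every field name here is illustrative, my own invention rather than any SwiftPM standard:

import Foundation

// Hypothetical shape for a cohosted Package.metadata file.
// The field names are illustrative, not an established standard.
struct PackageMetadata: Codable {
    let maintainer: String
    let tags: [String]
    let summary: String?
    let documentationURL: URL?
    let supersededBy: URL?
}

// What the JSON might look like, and how an aggregator could consume it.
let json = """
{
  "maintainer": "erica@ericasadun.com",
  "tags": ["dates", "calendar", "scheduling", "time", "appointments"],
  "summary": "Date utilities for scheduling and appointments.",
  "documentationURL": "https://example.com/docs"
}
"""

// Force-try for sketch brevity; a real consumer would handle decoding errors.
let metadata = try! JSONDecoder().decode(PackageMetadata.self,
                                         from: Data(json.utf8))
print(metadata.tags)  // ["dates", "calendar", "scheduling", "time", "appointments"]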

I am least enthused by PackageMetadata.swift, with its high authoring overhead and its mimicking of Package.swift, but it would certainly fit the existing design and approach and lower the overhead of consumption.

What do you think? How would you design metadata delivery for Swift packages? The one-file-to-rule-them-all expansion of SwiftPM itself, the simplicity of a cohosted Package.metadata file, or something else? Let me know.

Catalina GIFfing: Quick workflow from screen to animated GIFs

All the leaves are green.

And the sky is blue.

I’ve been at my desk.

With screenshot play…

(To be fully truthful, it’s currently raining cats, dogs, kittens, and puppies. But it’s lovely here in the high desert.)

With my newly updated workflow, creating the following GIF took maybe a minute from start to post. The secret? QuickTime Screen Recording (bless you, ⌘-Shift-5) and “gifify”, courtesy of Homebrew. Set record, demo, stop, convert, drop into WordPress:

I love how easy it is to invoke screen recording these days with macOS’s updated capture interface. It’s especially nice how the optional delay time allows me to get into the zone before recording actually starts.

Back to installation: the blocker was getting Homebrew into position to fully support Catalina. I had to run brew update, brew upgrade, and brew doctor a number of times. Not only did I get gifify installed, but ffmpeg is finally back to working and I once again have emacs for all my git needs.

I’ll spare you how bad the emacs transition was other than to say if you have to disable system integrity and mount read-write by hand, you’re probably doing it the wrong way.

With ffmpeg, the problem was the dependent libraries, including ones already installed in macOS (like openssl). Homebrew refused to link:

Warning: Refusing to link macOS provided/shadowed software: openssl@1.1

I wish I had known earlier that the update/upgrade/doctor approach should be applied multiple times, not just once, until everything stops complaining and the doctor says “Your system is ready to brew.” At that point, installs are a breeze. Installing before then, when the configuration seemed irreparably broken, was probably a bad choice.

I spent a bit of time afterwards removing my bandaid symbolic links. It seems to have helped that I ended up granting separate privileges to ruby in Security & Privacy a while ago. I don’t remember why I did, but it’s in there and I vaguely remember going through the process while cursing Cat.

June’s almost here and I wonder if Catalina.successor() will be better or more of the same. It hasn’t been a good Cat year for me.

Black Cyber Friday Monday and the art of the outdated iPhone

A few months ago, I decided to replace my aging iPhone 6+ with a refurbished one- or two-year-old iPhone model. The iPhone XR seemed like a solid choice, and I looked forward to some Black Friday/Cyber Monday deals. I assumed I’d purchase a “renewed” model rather than pay full price from Apple.

Then reality hit.

The “renewed” unlocked 128GB XR went from about $560 a few weeks back to $600 on Black Friday and soared even higher over the weekend. Today, it’s back “down” to $630, although it exceeded that price at its height.

Apparently, secondary markets plus high consumer demand create different pricing outcomes than shiny loss leaders.

This led me to a sea change in my planning and expectations.

An unlocked, brand new, SIM-free 128GB XR retails for $649 from Apple, and today they’re throwing in a $50 gift card. If I hand over my existing iPhone 6+, my price drops another $80 (although I’m tempted to hand it down to a child). Not having to deal with Gazelle or the vagaries of Amazon vendors seems like a happy outcome.

Oddly, this is the last major purchase from my yearly “Pick up on Black Friday/Cyber Monday” list. The rest all went fairly smoothly, especially the $129 Apple Watch 3 for my middle child (Thank you David Ashman!) and the $649 MBA for my eldest.

I find that year by year my purchase requirements are shrinking and changing. I’m spending more on services like iCloud and Backblaze, VPNs and NoIP, and less on hardware. I didn’t bother buying any external backup hard drives this year as I still have a couple in boxes from last year.

For those keeping track, I didn’t replace or upgrade my 2012 Mac mini, although I am seriously considering the dual-drive upgrade several of you recommended. I now run Mojave and Catalina installs on separate machines because I don’t want to lose access to thousands of dollars of software to the 32-bitpocalypse while still needing access to the latest dev tools and OS. It’s a fairly frustrating situation.

So where did I spend my BF/CM money? This year, it was mostly about gifts (carbon fiber PLA filament for the boy, Switch games for the girls), education (the watch and the laptop), and prosaic things-we-simply-needed (replacing a broken space heater).

It was also a good lesson on what not to buy during the rush.

Lightweight To-Do list formatting

I recently ran across the todo.txt format project, which lets you use plain-text action-item lists to create and manage your projects. I love the simplicity of the idea, but a number of issues prevented me from wholeheartedly adopting it.

First, I think it’s really ugly. It’s hard to scan, especially when there are lots of things to do. I know the idea is that this is an intermediate representation, but that representation is visually heavy.

Second, it’s missing notes. Freehand notes are an important part of task management, such as noting down the phone number for the car place when you need to get your snow tires put on.

Third, it’s order dependent, with a left-to-right layout that really doesn’t work for me.

Fourth, I don’t think the (A)-(Z) priority format is particularly readable.

Even though I do like the +Project and @Context annotation, I feel like todo.txt is a few steps short of a much more flexible and readable intermediate form.

So I started brainstorming on how I might tweak this concept to be more flexible. For one thing, I think the example strains to make the project and context “fluent” in a way that they don’t have to be. Yes, you may want to query “what are all the projects I am working on?” and “what contexts do I work in?”, so I can see supporting the differentiation, but I don’t think the two need to be integrated into the to-do text itself.

What follows is a very lightly-considered redesign that I have pulled entirely out of thin air. I did not spend much time on this and I’m sure there are better ways to redesign this to support text-based lightly annotated human-readable to-do lists. I’m throwing this out there as a starting point in case any of you want to carry this discussion further. It’s a design exercise for me, one that allowed me to take some of these thoughts out of my head and throw them into a blog post. So take this for what it is.

Imagine tagging that looks more like this, depending on whether you’re annotating project or context values. Part of me pushes back at the notion of two types of tagging, but I appreciate that you might want to query “show me all my projects” and “show me all my contexts”, and I recognize the value in that:

@(tag) or +(tag)
@(key: value) or +(key: value)
@(priority) or +(priority)

A priority can be as simple as a number, which I think people would process better than “A”, “B”, “C”, etc. As a pattern of [0-9]+, it would also allow easy differentiation in parsing between a tag (arbitrary text) and a priority. Priorities would range from 0 (highest priority) to any arbitrary number (lower priority). I doubt people would use more than two or three priority levels (low, medium, high), if they use them at all.

Because dates are critical for to-do deadlines, you could pull them out with NLP or be more explicit: @date(due: 2019-05-02). The single keyword would let you parse dates with more flexible keys and avoid the explosion of keywords you’d encounter by trying to pre-design specific scenarios like @due(2019-05-02), @checkin(next Tuesday), etc.

Relative dates would need a creation date, which, if you’re using an editor rather than the raw format, could be injected as @date(created: 2019-05-02). The second you do that, though, you’re moving away from the idea of simple, human-readable annotation that tools can still consume.

In my head, a better system would not be single-line. Adding blank lines between items significantly increases vertical space, so Python-inspired indentation can avoid that:

Set up appointment for snow tires
  Call and talk to Judy 505-555-1212 # comments are always great
  She says to contact them at least one thursday before the weekend you want # Not thrilled by the manual line breaks here
  to bring the car in.
  @(home) +(winterizing) +(2) @(before: 2019-12-01)
Put snow scraper in car
  @(car) +(winterizing) @(done) # yay me!
Get flu shots for Bob and Francis
  Already got them for Fred and Mary
  @(family) +(health) +(season) +(Costco) +(1) +(shopping) # It's a shopping run, even if it's not strictly buying something

This is rough, obviously, but I think it’s a bit more readable than the blockiness of the current format and a bit more flexible:

If working with an editor tool, I’d prefer that done items be automatically moved down to the bottom, removed entirely, or added to a separate “done.txt” list.

Also, once you have the power of full tagging and categorization, a well-built tool should let you pull on each of those to view related items in more structured ways. The format is simple, but the way you present tagged information doesn’t have to be.
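As a quick sanity check that the format stays machine-friendly, here’s a rough Swift sketch of parsing the @(…) and +(…) annotations described above. The Annotation cases and helper names are my own invention for this exercise, not part of any existing tool:

import Foundation

// A rough sketch of parsing the @(...) / +(...) annotations above.
// The Annotation cases and names are invented for this design exercise.
enum Annotation {
    case tag(String)
    case keyValue(key: String, value: String)
    case priority(Int)
}

func parseAnnotations(in line: String) -> [Annotation] {
    var results: [Annotation] = []
    // Match either marker followed by a parenthesized body.
    let regex = try! NSRegularExpression(pattern: #"[@+]\(([^)]*)\)"#)
    let wholeLine = NSRange(line.startIndex..., in: line)

    for match in regex.matches(in: line, range: wholeLine) {
        guard let bodyRange = Range(match.range(at: 1), in: line) else { continue }
        let body = line[bodyRange].trimmingCharacters(in: .whitespaces)

        if let priority = Int(body), priority >= 0 {
            results.append(.priority(priority))   // purely numeric: a priority
        } else if let colon = body.firstIndex(of: ":") {
            let key = String(body[..<colon]).trimmingCharacters(in: .whitespaces)
            let value = String(body[body.index(after: colon)...]).trimmingCharacters(in: .whitespaces)
            results.append(.keyValue(key: key, value: value))
        } else {
            results.append(.tag(body))            // everything else: a plain tag
        }
    }
    return results
}

// The metadata line from the snow-tires item above:
let metadataLine = "@(home) +(winterizing) +(2) @(before: 2019-12-01)"
print(parseAnnotations(in: metadataLine))
// roughly: tag("home"), tag("winterizing"), priority(2), keyValue("before", "2019-12-01")

Numeric bodies become priorities, bodies with a colon become key/value pairs, and everything else falls through as a plain tag; dates would simply be another key/value case.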

Anyway, those are my thoughts. I see a project with good bones that misses out on a bunch of human factors. What kind of changes would you make to turn this into a more usable and universal approach? Or would you just say “plain text is not really appropriate here” and use OmniFocus or Things instead? I’m curious as to what you think.


Flipping the switch and the 32-bitpocalypse

I think I’m ready to upgrade my Mac mini to Catalina. I know, I know: “But the 32-bitpocalypse! Are you ready to lose all that investment?” I think I’ve worked through that. Haven’t I?

The last few weeks I’ve been busy. I bought a smallish (0.5 TB) external SSD and backed up a good chunk of my Mac mini to it. Today I’ve been running tests on how it works booting my MBP, not my mini. That’s because my underpowered mini just isn’t strong enough, either in boot speed or in running off the external drive, to make this a reasonable approach.

On the MacBook, however, the SSD responsiveness is pretty fine. Once booted, I tested Office, Photoshop, and a bunch of other 32-bit apps, and while they’re not going to win awards for speed, they run and appear to be stable.

That leaves me with the dilemma. Do I flip the switch? Do I go full Cat on my main work machine? It’s been a reasonable time since release, so what minefields should I expect to encounter? I honestly don’t want to upgrade and then have to start restoring from Carbon Copy Cloner backups out of regret. (My backups run nightly, so they’re there if I need them.)

What do you think? Flip the switch or walk away? I hate being out of step with the latest OS, even if I do have Cat installed on my MBP and am happily using it there. Give me your advice. I’m not ready to walk away from so many apps that I still use many times a week, but I don’t want to freeze my mini in the past. Thanks in advance for your advice and suggestions.

Falling back to an older MBP

I just recently switched from a 2018 15″ MBP back to my 2015 13″ and I thought I’d share some of my reactions to the change. My meandering and unstructured thoughts follow.

Unexpectedly, on returning to the 2015 MBP, I didn’t suddenly go “omgomg the keyboard is amazing again”. The older style was never amazing compared to any good mechanical keyboard. I adapted to the new keyboard just fine, even though my hands never really were big enough for it to be truly comfortable.  Yes, the older keyboard’s keys really are less of a stretch but it wasn’t that much of a hardship either way.

The basic truth for me is that both keyboards are fully usable and that having the dedicated escape key (or not) was never a big deal. The virtual one did the job just fine. That’s something I never expected to admit but it’s true. I may not love MBP keyboards but they work.

I will admit the older dedicated function keys are slightly better, especially given how I’ve rebound them using Keyboard Maestro, but not an order of magnitude more usable. Just…better. Not way better.

The trackpad feels old and quaint in comparison to the newer one. I miss the 2018 trackpad more than I expected, especially given that I’m an admitted trackpad hater. Moving documents has once again become slightly more difficult on the older MBP, so there’s that.

My 13″ screen now feels cramped and tiny, as expected. But I once again appreciate how clean and beautiful the Retina display is when I compare it to my one older purchase, the non-Retina MBA. The 2015 13″ is a great laptop, even if it feels chunkier and less streamlined than the 2018. You can tangibly feel the design differences across the three years, as well as the battery improvements.

On the other hand, moving back from USB-C to all these wonderful ports is delightful. My 2018 was always an octopus, and I had to carry around a bag of hubs and adapters. (I even have one on my keychain, which I probably don’t need anymore.)

Yes, I still use some adapters like my HDMI to Thunderbolt 2, so I can have two monitors running from the 2015 laptop, but that lives on the HDMI cord, not with my laptop.

I get a bunch of extra file space from the built-in SD card reader, with a computer-flush reader adapter so it looks built in. The two standard USB ports are so convenient. I have an entire bag of USB-C gizmos that I used to carry around with the 2018 machine, which I’ve dumped into my USB box-of-everything for now.

On the other hand, I do miss the fingerprint reader. My experiences unlocking with my watch are hit and miss. Sometimes it works great. Sometimes it doesn’t. I can’t really figure out when it will and won’t: it’s not just after restarts or a long time between use. In contrast, the fingerprint reader absolutely rocks on the 2018, even if I had to remove a bunch of items from the touch bar because sometimes I’d turn on Siri or mute my audio when I thought I was still unlocking.

Speaking of the touch bar, I don’t miss it at all. As a touch typist I was always a little stunned to discover that it had relevant information on it that I never looked at. If the touch bar had important stuff, it should have been on the screen and my screen should have been touchable all over. As someone or other said (I forget exactly who — sorry!), it’s a keyboard when it shouldn’t be and a touch screen where it shouldn’t be. I’m paraphrasing from memory.

To every one of you who develops content for the touch bar, bravo. I am glad you are helping the user, especially the user who can see the touch bar and interact with it. I don’t want you to change a thing. Instead, I want Apple to step up and get that material integrated with the main display so I can take part too.

I’m holding off on upgrading a lot of things right now. I’m still using my iPhone 6+, my 2012 Mac mini, and my 2015 MBP. They all work and get the job done, and nothing yet has really given me the motivation to push forward to new hardware. The iPhone 11 is lovely, but I don’t really take many pictures. The new mini isn’t self-serviceable (at least to a klutz like me), and I honestly don’t want to leave Mojave on it and lose my 32-bit apps. The 2015 (running Catalina) is still a really great laptop.

I’m waiting to fall in love again, the way I did with the 5th-gen iPad mini, which swept me off my feet early this year. I adore the mini. Revised and supporting the Pencil, it gave me new ways to use it, better user experiences, and a solid, beautiful form factor, making it a natural upgrade from the 2nd gen.

I want my next hardware purchases to have that same passion instead of incremental utility.

What are you buying and what are you holding onto waiting for the right moment? Let me know.

Things I love about Apple Watch

I bought my Apple Watch to track. As I have learned over the last few months, the watch kind of stinks as a tracker and a fitness tool. It loses track of my workouts or fails to pick them up entirely. My movement in real life and my watch’s rings often fail to align. I may be wiping sweat from my brow as my watch thinks I’m taking a nap or something. And if tracking were the only thing my Watch offered, I’d write it off as an expensive failure.

Much to my surprise, though, I have fallen in love with my watch. The reasons surprise me, and none of them motivated my purchase. Looking back over the past months, I am constantly astonished by the watch’s convenience and comfort (at least once I was able to get a band I could live with, because my first few attempts didn’t go very well…).

Let me step back and speak of my watch’s best points.

For one thing, it lets me answer calls on my wrist. This is not a feature I would ever have wanted or searched for or asked for or desired. And yet, it’s one of the greatest things my watch does.

I bought the basic watch, the one without built-in cellular, so I receive calls only when my watch is near enough to my phone. I find the range is good enough that I can answer my phone from my wrist anywhere within my house. I no longer have to run to grab the phone from its charger. That’s amazingly convenient.

Sure, it’s just weird speaking into my wrist. I’m not Dick Tracy, and the ergonomics aren’t the best. And yet, I can handle a quick convo and get on with my life, and I don’t have to start digging through my backpack to find the phone, which always manages to slip between things or slide into a notebook or otherwise hide itself away.

It’s a feature I didn’t know I wanted and it’s one I use daily. That alone is close to justifying the purchase price.

And, if I need to find the phone to open an app or do something that demands more real estate, I don’t worry about it slipping away in my backpack anymore. Now, I just pull up the watch’s quick utility panel and tap the phone pager. My phone pings and I instantly find wherever it has slipped. Brilliant!

But wait, there’s more.

This may sound absolutely ridiculous, but I no longer have to dig out that same self-hiding phone to look at the time. I can now glance at my wrist and the time is right there. Yes, I am the only person alive who is surprised and astonished that a watch tells you what time it is or that might be a reason for its purchase. This probably explains a lot about me.

When I put the Mickey face on the watch, I can even touch the watch without looking and hear the time. I wish that feature was available on other faces and without the creepy pedo-vibe that Mickey gives. I’ve sort of given up on Mickey because 1. pedo and 2. not enough complications (the add-ons that let you stick mini-app widgets onto the main display, kind of the “Apple Watch Dock” metaphorically) and switched to the Infographic display instead. But I love the “tap to hear the time” feature, even if I don’t use it very much.

Not only do I now know the time just by looking, but my watch also tells me the date: perfect for writing checks, filling out forms, what have you. I don’t have to check my computer’s menu bar or drag out that phone. Again, this is about as deep into Captain Obvious territory as you can get. It’s also a simple pleasure I did not consider when buying this hunk of expensive tech strapped to my arm.

And if that’s not enough, I also know what appointments are coming up next because my calendar events appear in the center of my main screen. These events travel with me no matter where I am and I can review, analyze, and plan even on the go. I didn’t realize how much I checked my schedule until I stopped looking at my calendar on my computer all the time. My watch freed me from that.

My most-used complication (again, that’s a “docked” app widget) is my timer, and I use it constantly throughout my day, whether I’m cooking or working or doing physical therapy. I never knew how much I needed a $400+ timer on my wrist. You can pick from the presets (1 minute, 3, 5, 10, etc.) or customize exactly the timing you need. I wish I could run more than a single timer at once, as I’m often doing several things at the same time. (Making waffles? 2 minutes, one after another. Stew? 90 minutes. Etc.)

I love how I can download audiobooks and music to my wrist and listen to them, completely without a phone, as I work out. Yes, I wish I could use the built-in speaker, but even with Bluetooth-only audio, it’s freeing to step away from my big old 6+ and fashion-forward (yeah, right) fanny pack. At some point I’m going to buy all kinds of tinkly yoga music, but for right now I’m doing my PT stretches at the gym to the strains of “Funny in Farsi”.

I’m really pleased that Siri is along for the ride. I get a lot of the same utility from wristSiri as phoneSiri in terms of quick math calculations, sending a text, making a call, and so forth. I love Siri!

I’m also a big fan of both the breathe and stand apps, items that have had…mixed…reception and utility in the wider watch world. I use the breathe app not just to work on breathing but to exercise my back muscles and sitting strength. The stand app gives me the opportunity to stretch, move, and challenge myself. I’d like to push these even further and have been shopping for a good coaching app that instills physical, mental, and personal habits via the watch. If you have any recs, let me know!

I haven’t installed many 3rd party apps on this, but I’m particularly pleased with the Just Press Record app, which does exactly what the name suggests for noting down quick thoughts when there’s no pen or paper around. On a similar note, I like that you can draw messages in the built-in texting app. It’s not ideal but it’s saved me a few times when I needed to respond to family quickly, even if the response sometimes looks like “Yes you CaN buy IT but nto mre than $10“.

I’d love to hear what 3rd party apps you’ve found that work well for you. I used the Sleep++ tracker (paid with IAP) for a while but it’s not great and I like not having to wear a wristband to bed. I also own the irritatingly meticulous Rules game, which I occasionally play when I’m desperate for something to occupy me. I feel as if I’m missing out on some great apps but I wouldn’t know where to begin.

Speaking of apps, every now and then I need a quick app that I can grab and take with me, and the watch is perfect for that. To be honest, writing for watchOS is annoying (and testing is horrible), but for simple things I can write and throw an app onto my watch for immediate use as I need it and then unload it when I don’t need it anymore. I’ve done this to scrape schedules for quick reference, to store a few important pictures to show and share, and for a few other apps of varying utility.

Again, this is an unexpected pleasure that doesn’t involve dragging the phone along with me. Despite the development process not being ideal, it’s a fully customizable way to give myself the tools I need, when I need them, in the most portable form I could imagine.

Let me finish by talking about my wristband, which I mentioned earlier in this post. When buying the watch, I first went with the sport loop after trying all the bands on at the store. Once at home, I found it uncomfortable when worn for multiple hours at a time. From there, I bought a very cheap Milanese loop knockoff for about $10 on Amazon. It’s very pretty, but it had the same no-flex/no-give issues as the sport loop.

In the end, I purchased a third-party woven elastic band from Tefeca for just under $30, shipped, with tax. I still don’t love having something physical on my wrist, but this seems to be the best compromise I’ve found.

So what do you love about your watch and what advice do you have for me as a new watch owner? Please let me know.

Teaching collections: shifting paradigms and breaking rules

Just because some things look alike and may act alike at some level doesn’t mean they should be taught at the same time, under a unified umbrella of learning. Consider bagels and donuts. They are both toroids. You can layer several instances onto a stick for storage or serving. You can cut them both in half. If you have no taste or sanity, you can place custard — or cream cheese, salmon, onions, and capers — interchangeably between the two sides of either item.

Despite these common API surfaces, their use-cases, edge conditions, and recipes share little overlap. Conformance to ToroidFoodstuff correlates with neither the preparation of the dough, the cooking process, nor the serving and accoutrements associated with either food.

So why do we always have to lump arrays, sets, and dictionaries into a single lesson on collections?

A new language learner has little interest in traversing dictionaries, although it’s possible, or taking a set’s prefix, which is also allowed. Nor are new learners always prepared to take on optionals, the core return value for dictionary lookups, early in the language learning process.

I’ve recently spent some time helping to outline an introductory sequence for Swift learning. I pitched eliminating collections as a single topic unto itself. I want to reject superficial similarity and build language skills by introducing simple, achievable tasks that provide measurable and important wins early in the learning process.

Take arrays. They can store numbers and strings and more. You can grow them, shrink them, slice them. They have a count. They have indexes. Arrays are perfectly matched with iteration in general and for-in loops in particular; the two work hand-in-hand. So why not learn them together?
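That first lesson could be little more than a handful of lines:

// Build a list, inspect it, and walk it: arrays and for-in as one unit.
let scores = [88, 92, 75, 100]

print(scores.count)    // 4
print(scores[0])       // 88

for score in scores {
    print("Score: \(score)")
}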

The answer is generally that arrays belong with collections and for-in loops belong within a larger set of iteration topics. Ask yourself whether new coders actually need while and repeat-while loops in their initial approach to the language. How often in normal Swift coding do you reach for those two for simple tasks in simple code?

I’m not saying while-loops shouldn’t be taught. I’m trying to figure out what sequence of incremental learning provides new Swift developers with the most coherent set of basic tools they need to express themselves and expand their understanding over time.

Every classroom minute spent mastering while is a minute that could be spent expanding and practicing for. Introductory lessons should focus on the core terms and patterns most commonly used in the workplace. Expressive language vocabulary can always be expanded through practice and engagement. Classroom minutes are the scarce resource.

Dictionaries, I argue, should be taught late. Every lookup is tied directly to optionals, a dictionary’s native return type, and optionals are a conceptually heavy topic. The two are a perfect pairing: the dictionary is a natural source of optional output and an opportunity to discuss nil-coalescing and default fallbacks.

From there, you can pull in failable initializers and optional chaining. Dictionaries also lend themselves to advanced concepts like (key, value) tuples in for-in loops, the key-value basics of Codable, and how custom structs relate to key-value coding, not to mention the entire conversation about nil, errors, try?, and more.
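The pairing practically teaches itself, using nothing beyond standard library behavior:

// Dictionary lookups return optionals, making them a natural on-ramp
// for nil-coalescing, conditional binding, and (key, value) iteration.
let stock = ["bagel": 12, "donut": 0]

let bagels = stock["bagel"]          // Optional(12)
let muffins = stock["muffin"] ?? 0   // missing key, nil-coalesced to 0

if let donuts = stock["donut"] {
    print("Donuts on hand: \(donuts)")
}

for (item, count) in stock {
    print("\(item): \(count)")
}

print(bagels as Any, muffins)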

As for sets, well, I love sets and use sets, but are they even appropriate for new learners outside of some sense of “completionism”? Should they be taught only because they’re one of the “big three” collection types? I’d argue that people should learn sets when they are already proficient in core language basics, not in the most introductory setting.

For example, you can tie sets into a lesson on view touches. Just because they’re a collection doesn’t mean the newest students have to learn every collection right away, just as they don’t need to learn NSDictionary, NSArray, AnyObject, and so forth in their first days or weeks of exposure to Swift.

Trying to structure a plan to create a solid foundation of incremental learning is a challenging task for any non-trivial topic. When it comes to Swift and Cocoa/Cocoa Touch with its vast range of potential interests, ask the questions: “What core concepts and patterns best reward the language learner with immediate benefit?”, “What grouping conventions should be tossed overboard to better focus on the skills with highest returns?”, and “What critical paths allow learners to proceed towards measurable skills and performance with the least overhead?”

Justify each topic with an answer that’s not “it’s covered that way in the Swift Programming Language book”, especially when working with new learners versus developers moving into the language with existing programming experience. And even when teaching more experienced students, let the daily realities they’re trying to move towards mold the curriculum of what you choose to teach.

The best learners teach themselves. The best curriculum sets them up to do so.

Musing on API names for trailing and functional closures

When a call returns a value, I almost always use in-line closures. I prefer to surround functional calls with parens. I know my style is not universally popular, as many developers prefer the look of raw braces. It’s not uncommon to see calls like this:

numbers.sorted { $0 < $1 }.forEach {
  print($0)
}

The functional sort is chained to the non-functional loop, both using braces. Making the functional call’s parens mandatory does two things: it differentiates the two kinds of expression, and it restores the clarity of the role label that trailing closures lose. Compare these with the preceding call:

numbers.sorted(by: { $0 < $1 }).forEach {
  print($0)
}

// or even

numbers.sorted(by: <).forEach { print($0) }

In each case, the combination of name and label (here, sorted(by:)) is clearer and more easily read. Many APIs are designed without initial labels for Void-returning closures. They’re intended from the start to act as language-like constructs, so labels would be superfluous at best and would detract from the call at worst:

public func forEach(_ body: (Self.Element) throws -> Void) rethrows

Omitting labels shouldn’t cloud intent even when a closure meant for functional use is passed. I could imagine common API naming rules to help guide labels and functional use (for example, “omit labels for Void-returning closure arguments” or “include role-hint labels for functional closure arguments that process and pass through values”).
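Here’s a rough sketch of what APIs following those two rules might look like. The extension and its names (visitEach, pickingFirst(where:)) are hypothetical, invented only to illustrate the labeling pattern:

// Hypothetical declarations following the imagined rules.
extension Sequence {
    // Void-returning, language-like construct: no label, reads like a block.
    func visitEach(_ body: (Element) -> Void) {
        for element in self { body(element) }
    }

    // Functional: the role-hint label documents what the closure does.
    func pickingFirst(where matches: (Element) -> Bool) -> Element? {
        for element in self where matches(element) { return element }
        return nil
    }
}

[3, 1, 4].visitEach { print($0) }                         // no label to lose
let firstEven = [3, 1, 4].pickingFirst(where: { $0 % 2 == 0 })
print(firstEven as Any)                                   // Optional(4)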

Such rules might help, but there’s one class of closures that doesn’t normally follow suit. Completion handlers, especially those sourced from Cocoa and Cocoa Touch, are almost always labeled completion or some close variant. They’re designed for non-value-returning (and often asynchronous) execution. For example, you might see:

func run(_ action: SKAction, completion block: @escaping () -> Void)

or

func dataTask(with: URL, completionHandler: (Data?, URLResponse?, Error?) -> Void) -> URLSessionDataTask

These items are normally called with trailing closures, so why shouldn’t their API naming follow suit? Wouldn’t it make more sense to use _ completionBlock: or _ completionHandler: than explanatory labels that are most commonly hidden at the call site?
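In other words, something like this hypothetical declaration, where the parameter name still documents intent for readers of the declaration while the call site stays exactly the same:

import Foundation

// Hypothetical API using an unlabeled completion parameter. The name
// still documents the closure's role in the declaration and in docs.
func upload(_ data: Data, _ completionHandler: @escaping (Error?) -> Void) {
    // ... perform the (pretend) work, then report back.
    completionHandler(nil)
}

// The call reads identically to today's labeled versions, because the
// trailing-closure syntax never shows the label anyway.
upload(Data()) { error in
    print(error ?? "done")
}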

What do you think? Let me know.