Archive for the ‘Various Frustrations’ Category

Taking charge of those xips

Apple adopted the digitally-signed xip format for Xcode downloads a few years ago. It’s basically a signed version of zip archives. Most commonly, you download a xip and double-click. Archive Utility will open the file, verify its signature, and expand its contents.

In its default settings, Archive Utility always expands archives into the same folder you downloaded them to. With Xcode, this is a big pain: moving the expanded app, with its thousands and thousands of tiny subfiles and embedded executables, takes forever. Alternatively, moving the xip file itself from one location on your system to another before expanding can be just as painfully slow.

Fortunately, Archive Utility does allow you to specify where to unpack. Launch the application using Spotlight (or find it at /System/Library/CoreServices/Applications/Archive Utility.app) and open its preferences.

Although there’s no “Ask” option for “Save expanded files”, you can select where you want items to be stored using “into” from the pop-up:

Once set, you have to unset it for general use, because the location persists between launches. This is, needless to say, a big pain when you use archives for non-Xcode purposes on a regular basis:

Fortunately, you can unxip more effectively by using the command-line xip utility located in /usr/bin/xip without having to mess with Archive Utility or its preferences:

% xip
Usage: xip [options] --sign <identity> <input-file> [ <input-file> ... ] <output-xip-file>

Usage: xip --expand <input-file>

99.9% of everything you do with xip is that last “Usage” example. Still, as xip doesn’t offer a --help option, if you want to know what those interesting [options] are, you’ll need to read the man page (man xip). I prefer to open man pages in Preview instead of the command line, using this little trick:

man -t xip | open -f -a /System/Applications/Preview.app

Notice two things here:

  • First, the -t flag tells man to use the groff typesetter (no relation) to format the page as PostScript, which Preview renders as a PDF. (Specifically, it uses /usr/bin/groff -Tps -mandoc -c, if that kind of detail intrigues you.)
  • Second, the path for Preview has changed in Catalina to /System/Applications. If you want to do this on Mojave or earlier, adjust the path accordingly.

(Isn’t that a neat way to view man pages?)

While the man page suggests you can sign your own xip archives and provide your own identities, don’t bother. The format is exclusive to Apple: starting with macOS Sierra, only xip archives signed by Apple can be expanded. (See Technote TN2206 for details.)

Since --expand offers no way to specify a destination, hop over to /Applications and expand from there:

% cd /Applications/
% time xip --expand /Volumes/Kiku/xips/Xcode_11.2.1.xip 

Adding the time command at the start of the line lets you know how long the unxip took, which is deeply satisfying to those of a pedantic bent like myself. For those playing along, it was:

xip: expanded items from "/Volumes/Kiku/xips/Xcode_11.2.1.xip"
1109.625u 275.408s 10:58.85 210.2%	0+0k 0+0io 167pf+0w

Update:

Whisky tango foxtrot: Xcode allows ObjC switch unindenting

This happened.

You might think I’m about to go off on some Swift rant (and trust me there is a Swift rant inside me waiting to emerge) but it’s the second checkbox that made my mind explode.

From:

To:

Who thought this was a good idea? I’ve never been a fan of left-aligned case in Swift although I embrace it as the standard. But in Objective-Freaking-C? As a standard Apple-blessed toggle in Xcode? No! Thrice no! The option enabling the choice is bad for Swift and worse for Objective-C.

Why is this option in there, and why is it available for both languages? It would be best to, as Joe Groff put it, “let sloth naturally lead everyone to pick the default” now that the feature has shipped in Xcode. Or better yet, file some bug reports for the broken feature.

Each language default reflects years (and decades) of language style consensus (sketched in code below):

  • Swift: keyword-aligned.
  • Objective-C: “scope”-aligned.
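To make the two conventions concrete, here is the same switch written both ways. I’m using Swift syntax for both, purely to illustrate the indentation; the second layout is the brace-scoped style that Objective-C switch statements have traditionally used, and the names are just for illustration.

enum Direction { case left, right }
let direction = Direction.left

// Keyword-aligned: the long-standing Swift default, with case flush against switch.
switch direction {
case .left:
    print("turn left")
case .right:
    print("turn right")
}

// "Scope"-aligned: case indented one level inside the braces,
// the layout Objective-C code has traditionally used.
switch direction {
    case .left:
        print("turn left")
    case .right:
        print("turn right")
}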

This new choice in preferences is madness.

Talk me down from here, friends.

Flipping the switch and the 32-bitpocalypse

I think I’m ready to upgrade my Mac mini to Catalina. I know, I know: “But the 32-bitpocalypse! Are you ready to lose all that investment?” I think I’ve worked through that. Haven’t I?

The last few weeks I’ve been busy. I bought a smallish (0.5 TB) external SSD and backed up a good chunk of my Mac mini to it. Today I’ve been running tests on how it works booting on my MBP, not my mini. That’s because my underpowered mini just isn’t strong enough, either in boot speed or in running off the external drive, to make this a reasonable approach.

On the MacBook, however, the SSD responsiveness is pretty fine. Once booted, I’ve tested Office, Photoshop, and a bunch of other 32-bit apps and while they’re not going to win awards for speed, they run and appear to be stable.

That leaves me with the dilemma. Do I flip the switch? Do I go full Cat on my main work machine? It’s been a reasonable time since release, so what minefields should I expect to encounter? I honestly don’t want to upgrade and then have to start restoring from Carbon Copy Cloner backups out of regret. (My backups run nightly, so they’re there if I need them.)

What do you think? Pull the switch or walk away? I hate being out of step with the latest OS, even if I do have Cat installed on my MBP and am happily using it there. Give me your advice. I’m not ready to walk away from so many apps that I still use many times a week but I don’t want to freeze my mini in the past. Thanks in advance for your advice and suggestions.

How I got Rust working in Xcode

A while ago, I posted about how I set up Xcode to work with Python. Yesterday, I was taking a class on Rust and decided to use my friendly neighborhood (sp)IDE(rman) coding environment, namely Xcode.

I’m not going to say it was a stunning success but there was enough interest that I thought I’d share the steps so you too could embrace Rust through Xcode.

Install Rust. You start, as one does, by installing Rust. Hop over to https://www.rust-lang.org/tools/install to grab a copy of the tools. They install to ~/.cargo, for whatever reason. I put a symlink into /usr/local/bin.

Create a Project. Create an external build system Xcode project by choosing File > New > Project > Cross-platform > External Build System > Next. Enter a product name (I called mine “Rust” because that’s exactly how creative I am) and set your build tool (in my case, /usr/local/bin/rustc because of the link). Save it somewhere convenient.

Create a source file. Apparently “rs” (rust source?) is the proper extension. I went with “test” as my name. File > New > Empty > test.rs

fn main() {
    println!("hello world");
}

Don’t forget to add some code.

Compile. Edit your scheme. Choose Run > Info > Build Executable > Other and select your compiler. Having rustc linked into /usr/local/bin made it easier to select. Then uncheck “Debug executable” because you’re not debugging the Rust compiler.

At this point you can click Run and you’ll see the compiler’s standard options message, because you haven’t yet specified what it should build.

Back in the scheme editor select Run > Arguments and add the source file and output file. Unfortunately, I could not get this to work with SRCROOT at all, so here it is in all its glory with complete paths.

The Pre-action removes any build product from a previous run:

So here we are. With luck, it compiles. If not, the errors appear in pretty horrible form in the Xcode console, where curses is what we do, not how the console interprets pretty text output.

You can get slightly less horrible feedback by adding the launch argument: --error-format=json

Yeah, it’s wordy but it’s slightly less awful.

Pick a path. Unlike Python, Rust is just a compiler. If you build and then add a step to execute the result, the execution output (unlike the compiler errors) will not normally print to the Xcode console. The challenge is to get that information into some form where you can access it.

At first I went with a little post-action osascript and threw up the output in a separate window:

But I really wanted to make it work with the console. So back I went to AppleScript. Instead of rustc, I changed my build tool to osascript:

I added this instead to my run scheme arguments.

Yep, I’m using osascript to run a shell script that just compiles with rust and then runs it, passing the output through back to Xcode.

I know this is bad. I know I should be ashamed. I hang my head.

But you know what? It works. Stray osascript-crud and all:

I’m not sure how much this makes me a programming outcast but it was kind of fun to figure out how far I could push my beloved enemy Xcode.

WebsearchFodder: My mouse moves but won’t click

Weirdest thing this morning. My mouse stopped working right. I could move the cursor but not click the mouse. So I swapped it out for another mouse. Same problem. So I rebooted. Same problem. I then switched to a wireless mouse and then a Bluetooth one. Same problem across the board.

I won’t make you sit through all the problem solving that went on: the same issue across every device meant this was not a mechanical error, and not tied to, for example, specific wires, or bulging batteries, or whatever. The tl;dr is this: I had taken out a Magic Trackpad a few hours earlier, intending to use it (but never got around to it), left it on a counter, and a child had put something on top of it.

The Magic Trackpad had not only powered on but, because of the weight of the stuff dumped on top of it, was continuously issuing some sort of mouse press. Once I took the weight off, everything back at my computer started working again.

Diagnostically: the cursor moves, any right-button works, any scroll wheel works, but not the left-button. Solution: hunt around for a wireless pointing device that might be interfering. If you have Screen Sharing enabled, you can disable Bluetooth and see if that resolves the problem.

I took the batteries out of the trackpad, and put it away gently.

I’m leaving this blog post in case it ever helps anyone else out with this very weird issue. The advice out there on the web all assumes a mechanical issue, either with a built-in trackpad or a pointing device, or else a system issue. This was such a sideways situation that surely I can’t be the only person it will ever happen to, even though most everyone else will never be affected.

Bad things: Extension Access Control

Swift extends the courtesy of an access-control-annotated extension to its top-level members. I’m going to call this “inheritance”, though I know there’s a better name for it that I just can’t remember.

Consider the following:

// Base type is public
public struct MyStruct {}

// Here, the extension is declared public, so each top level member
// "inherits" that access level.
public extension MyStruct {
  // This is public even if it is not annotated
  static var firstValue: String { return "public" }

  // This is also public but the compiler will warn.
  public static var secondValue: String { return "public but warned" }

  // This class is also public via "inheritance" 
  class PublicSubclass {
    // However, its members must be annotated. This is public
    public static let publicValue = "public"
    // This defaults to internal
    static let internalValue = "internal"
  }
}

In this example, firstValue inherits the public access level from the MyStruct extension. The compiler warns that the explicit public annotation on secondValue is unnecessary. If you treat warnings as errors, that’s a problem.

Each of the static properties is accessible outside the module except for internalValue, because even in a public class declaration, members do not inherit the class’s access level.
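In hypothetical client code importing this module (call it MyModule; the module name is just for illustration), the access checks fall out like this:

import MyModule  // hypothetical module containing MyStruct

let first = MyStruct.firstValue                    // OK: public via the extension's ACL
let second = MyStruct.secondValue                  // OK: explicitly (if redundantly) public
let nested = MyStruct.PublicSubclass.publicValue   // OK: explicitly public

// Does not compile: internalValue is internal, so it is invisible outside the module.
// let hidden = MyStruct.PublicSubclass.internalValue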

Before I start putting some preliminary style guidance out there, I’d like to point out a few more things about this. Here’s a second example:

internal class InternalType {}

extension InternalType {
  public static var value: String { return "value" }
}

Swift compiles this code without error. It is clearly a developer-sourced issue: the intent to make the member public is fundamentally flawed, as it exceeds the type’s access control level. This issue also exists outside of extensions, where the compiler will not warn on too-high levels for direct type members:

internal class AnotherInternalType {
  public var value = "value" // no warning
}

You’d imagine this is a place where the compiler should up its game, no? This is code that is technically functional and compilable, but whose annotations undercut the documenting nature of access control. Shouldn’t the annotation be capped at the type’s level and warned about here?

The compiler will find mismatches between the extension ACL and the type ACL:
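A minimal sketch of such a mismatch, reusing the InternalType declared above (the member name is just for illustration):

// The compiler rejects an extension whose declared access exceeds
// that of the type it extends: an internal class cannot have a
// public extension.
public extension InternalType {   // error
  static var anotherValue: String { return "another value" }
}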

And that’s where the problem comes in, because the guidance I’m working on says: “Do not annotate extensions with access control levels except when working with trivial utilities”. Skipping the extension ACL ensures that you meaningfully and intentionally add access control to each member declared within that extension. Each access level is co-located with the declaration it decorates. This makes your code easier to audit, and the intent behind each access level is immediately apparent.
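In practice, that guidance looks something like this sketch (the member names are just for illustration):

// No ACL on the extension itself; each member states its own level.
extension MyStruct {
  // Explicitly public: part of the module's interface.
  public static var exportedValue: String { return "public" }

  // Unannotated, so internal by default: module-only.
  static var helperValue: String { return "internal" }
}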

What are your thoughts? Can you think of any reasons why extensions should ever be ACL’ed in production code? And is this just a bug/language-enhancement thing, or is there something I’m missing? Thanks in advance for your feedback.

Fleeing Bluehost: It’s crunch time

I have under 30 days to move from Bluehost or I’ll be locked into another year. If you don’t recall, Bluehost is infuriating. It shuts down whenever I have a traffic spike. Its SSL certificates are not automatically renewed, so every 90 days or so things fail.

My email is associated with unifiedlayer, one of the worst spam providers, which means that a lot of my outgoing email never arrives. Every time I need tech support, they try to upsell me to yet another paid service. The fees have increased and increased over time.

While I’d really love to have a statically generated site, I’m not willing to give up comments. I’m sticking with WordPress as the least turbulent solution unless someone has a better idea.

I need email. I need a WordPress site. I’d like to keep a listserv going, but I can probably transfer that to Slack if needed. I can’t really think of any other features that I need at this time.

  • Diogene recommended SiteGround. It offers well reviewed WordPress hosting. This sounds scary though: “For migration just use IMAP for your email and synchronize all mail locally then when you move you host sync back again with IMAP”
  • Dave DeLong says FastMail is a great solution for the mail-only axis. Hank Gay, Christopher Frederick, and Dewey concur. Christopher mentions that I can set up “SPF and DKIM records” to provide more secure ownership, whatever these things are.
  • Despite the general love for FastMail, Michael Weaver says iRedMail is a good alternative as well.
  • Matt mentioned nosupportlinuxhosting.com
  • Will suggests A2Hosting. Chris likes ASPnix.com.
  • John Woolsey pitches GreenGeeks.com.
  • Nate H suggests DreamHost (also recced by Tim as a site for “people who don’t know what they’re doing”, which is pretty much me) and SiteGround.
  • Mark Nichols uses WebFaction, but also supports Digital Ocean.
  • Brian Anderson suggests hostgator.com.
  • Kevin likes the roll-your-own AWS solution: S3 for the web, EC2 for WordPress, WorkMail for mail. Any thoughts on these?
  • Simon Davies agrees on AWS but suggests hosting email with zoho.com.
  • Dan Messing and Mark Bernstein like pair.com.

I’m looking for the simplest migration with the longest shelf life and the least worries. It should remain reasonably affordable as well.

I want to get this done quickly and easily and it scares me to pieces. This is, admittedly, way out of my comfort zone, which explains why I’m still with Bluehost even years after identifying the problems.

Any advice and support will be greatly appreciated.

UX fail: Logging into Apple TV

My son brought a new (which is to say, an old) Apple TV into our lives yesterday. He picked up a 2nd gen unit from Goodwill for under ten bucks in excellent condition. We were delighted.

Even older Apple TVs, a couple of generations before the App Store hit, allow you to share music from the family library, watch shows and movies, project from handheld devices, and use the TV as a wireless extra display.

We immediately began setting it up. The first issue at hand was to log into my Apple ID, so the unit could see my account, purchases, and home share.

I use long passwords. I recommend long passwords. Entering long passwords on a 2nd generation Apple TV using a remote is…well, it’s pretty horrible. Even though I’m a bit proud of my remote skills (for example, press-and-hold to access upper case letters from the lower case screen, or using hold-to-slide for quick letter navigation), it still takes a significant amount of time to enter my password.

And when I had done so, and counted off the dots and confirmed they matched the right number of password characters, I expected that I was done with setup once I clicked the continue button.

Nope, not so quick.

It wasn’t until after Apple TV asked me whether to store my password for purchases (no thank you), and had moved on to yet another screen, that it stated I could not log in at this time. Something something about verification.

I assumed I had entered the password wrong, but I was a bit befuddled that it didn’t tell me that right away. I had already moved a couple of screens forward before it rejected my entry. What was going on?

After a few times through the process, I knuckled down and hit the web to search for “Apple TV verification”. That’s when I discovered that I needed to generate a verification code and add it to my password (one after the other, all in text in a single privacy-protected box) to log into my iTunes account.

This design shocked me. There was exactly no information on the password entry screen suggesting that you need to not only enter your password but also append a six-digit two-factor code to it. There was no information guiding users through the steps to generate that code. There was no support for automatically sending a two-factor request to other registered devices, the way it normally works with my browser. Instead, you must generate a verification code on another device signed into the same Apple ID.

I use two primary Apple IDs: one is for iTunes purchases and is shared with my family. The other is for my development work. All my mobile devices are signed into both, but you can only generate a verification code for your iCloud ID. You cannot generate one for your iTunes ID.

I had to go through the hassle of picking a victim iDevice and logging out of iCloud, including disabling Find My iDevice and deleting all local iCloud data, just so I could log in with my iTunes account and generate my six-digit, time-sensitive verification code. (Settings > iCloud > username > Passwords & Security > Get Verification Code)

It took me quite a bit of time to get a device to that point. Wisely (but really just luckily), I left the device logged in to the iTunes iCloud account. I had not realized I’d need to authenticate in several places on Apple TV: the first time to access my purchased content, the second to enable Home Sharing. Again, without any hints about extended passwords and 2FA.

Fortunately, I targeted an aging iPod touch as my sacrificial victim, which, while running the latest iOS release, is not a heavily used dev system. I have not yet moved it back to my main iCloud account just in case I have to go through this nonsense again.

Once I had my six digits, I had to add them to my password entry. Since timing is critical, I had to type out the password first, fetch the code, and then enter the verification code on my Apple TV, which had gone into screen saver mode due to the delay. I added the digits to the end of the password (none of which are readable, it’s all dots), and hoped that it took.

All of this took place without any textual or visual indication to set user expectations that the password needed extra characters at the end to begin with.

This is probably the worst design for 2FA anyone could have come up with and I’m baffled at how this got past any level of management to be presented in Apple deployment. It feels like the first iteration of a solution offered by a summer intern before anyone with sense got involved.

I’d imagine that the second you enter an Apple ID, the device is fully capable of determining whether 2FA is needed. If so, it should guide the user through obtaining that information. Add some text, show a video, do whatever is needed, but contextualize!

Assuming that people know how to create the code and then append the code to the password is asinine. It’s also bad design. Make the 2FA code a second screen, for heaven’s sake. Lead users through the process. And for all that is good and holy, don’t make the user pass through one or more screens after the failed password before informing them that (1) the password didn’t take and (2) a validation step is needed and should have been done several screens earlier.

In the best of all worlds, just allow the 2FA code to auto generate and notify the way it does with Safari. Manual generation should be the fallback position only if associated devices are not available.

Every week or two, I have to re-enter a code to access Apple’s developer site. My office rings with the various beeps and whistles of 2FA. Every device helpfully shouts out its association with the iCloud account and provides a six digit key for me to use right away.

Having to laboriously set up a device and then manually generate a code is nonsense. Differentiating the main iCloud account and the iTunes account, both of which have been authenticated, is also nonsense. If a device is signed into both, it should produce 2FA codes for both.

The screen that most offended me was the one interspersed between the “enter your password” and “you need a verification code”. Who gave the okay to continue on with the “use this password to authorize purchases” screen before confirming 2FA? It’s just insane.

In the end, a process that should have taken 5 minutes max stretched to nearly 90. If someone comfortable with problem solving and web searches was this put off by the anti-intuitive UX design, imagine how Apple’s core customer base will react.

This is the furthest I’ve ever gotten from “it just works” in Apple’s ecosystem and a user experience that gives me great pause.

ISO-8601, YYYY, yyyy, and why your year may be wrong

The end of the year is rolling around and it’s time to remind everyone that yyyy works the way you think it does and YYYY does not. Here’s a quick example to start:
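Something like this minimal sketch, assuming a US English locale and the Gregorian calendar (the results in the comments can shift with locale and time zone):

import Foundation

let input = DateFormatter()
input.locale = Locale(identifier: "en_US_POSIX")

let output = DateFormatter()
output.locale = Locale(identifier: "en_US_POSIX")
output.dateFormat = "MMM dd, yyyy"

// Calendar year: parses the way you'd expect.
input.dateFormat = "yyyy-MM-dd"
print(output.string(from: input.date(from: "2018-06-15")!))  // Jun 15, 2018

// Week-of-year year: the month and day are effectively ignored, and the
// result lands in late December of the *previous* calendar year.
input.dateFormat = "YYYY-MM-dd"
print(output.string(from: input.date(from: "2018-06-15")!))  // Dec 24, 2017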

Just because you test a format quickly with the current date and get back the result you expect, does not mean you’ve constructed your date format correctly.

Speaking of which, BJ Homer points out that you can just use “y”, as “yyyy” zero-pads to four digits, which usually doesn’t matter but isn’t always what you want. Olivier Halligon adds, further, that not all calendars use four-digit years. “For example the Japanese start a new era every time the emperor changes, resetting to year 1 in that era; we’re currently in year Heisei 30.”

To quote The Dave™: “Nooooooo…. Please use “y”, not “yyyy”. “yyyy” zero-pads years that aren’t four digits, and there are multiple calendars w/ 2 or 3-digits years (Japanese, Chinese, Republic of China). “y” is the natural length of the year: “30” for Japanese cal, “2018” for Gregorian, etc”

What you’re actually seeing with “Dec 24, 2017” is the first day of the last full week of the preceding year. It doesn’t matter what numbers you plug into the month (“MM”) or day (“dd”). The presence of YYYY in the date format without its expected supporting information reduces to “start of year, go back one week, report the first day”. (I’ll explain this more in just a little bit.)

Here are some examples, which you can check from the command line using the cal utility:
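Roughly like the following sketch (again assuming a US English, Gregorian setup; cal 12 2016, cal 12 2017, and cal 12 2018 confirm the week layouts claimed in the comments):

import Foundation

let parser = DateFormatter()
parser.locale = Locale(identifier: "en_US_POSIX")
parser.dateFormat = "YYYY"   // week-of-year year, nothing else

let printer = DateFormatter()
printer.locale = Locale(identifier: "en_US_POSIX")
printer.dateFormat = "EEE MMM dd, yyyy"

for year in ["2017", "2018", "2019"] {
    // Each parse lands on the Sunday that begins the last full week
    // of the *previous* calendar year.
    print(year, "->", printer.string(from: parser.date(from: year)!))
}
// 2017 -> Sun Dec 25, 2016
// 2018 -> Sun Dec 24, 2017
// 2019 -> Sun Dec 23, 2018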

As Apple’s 2014-era date formatting guide points out:

A common mistake is to use YYYY. yyyy specifies the calendar year whereas YYYY specifies the year (of “Week of Year”), used in the ISO year-week calendar. In most cases, yyyy and YYYY yield the same number, however they may be different. Typically you should use the calendar year.

Unicode.org’s Unicode Technical Standard #35, Date Format Patterns goes into a little more depth:

[“Y” is] Year (in “Week of Year” based calendars). This year designation is used in ISO year-week calendar as defined by ISO 8601, but can be used in non-Gregorian based calendar systems where week date processing is desired. May not always be the same value as calendar year.

ISO 8601 uses a 4-digit year (YYYY) for “week of year” calendars from 0000 to 9999. If you’re into trivia, the years before 1583 are technically excluded except by special agreement between sending and receiving parties.

Anyway, if you’re going to use YYYY formats, you’ll want to use additional format elements that support “week of year” date construction. For example, consider the calendar for this upcoming January, which starts on Tuesday the 1st:

    January 2019      
Su Mo Tu We Th Fr Sa  
       1  2  3  4  5  
 6  7  8  9 10 11 12  
13 14 15 16 17 18 19  
20 21 22 23 24 25 26  
27 28 29 30 31

This January 1st can be considered part of either the first week of 2019 or the 53rd week of 2018, as the two years’ weeks overlap in the middle. Using e (the numeric day of the week) and ww (the ordinal week to count from), you can represent both dates correctly using the oddball YYYY formatting token.

Here are examples that use the week-of-year approach counting from both 2018 and 2019:
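A sketch of how that plays out, with the usual US English / Gregorian assumptions (week 1 starts on the Sunday of the week containing January 1st, and e counts Sunday as 1):

import Foundation

let parser = DateFormatter()
parser.locale = Locale(identifier: "en_US_POSIX")
parser.dateFormat = "YYYY-ww-e"   // week-year, week of year, weekday

let printer = DateFormatter()
printer.locale = Locale(identifier: "en_US_POSIX")
printer.dateFormat = "EEE MMM dd, yyyy"

// Tuesday of week 1 of 2019 and Tuesday of week 53 of 2018
// both describe the same day: January 1, 2019.
print(printer.string(from: parser.date(from: "2019-01-3")!))  // Tue Jan 01, 2019
print(printer.string(from: parser.date(from: "2018-53-3")!))  // Tue Jan 01, 2019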

As you can see, when you use YYYY and do not supply an ordinal week or weekday, both default to zero. That gives you the zeroth week (the week before the first week that overlaps the stated year) at its first day, which explains all those otherwise random late-December dates from earlier.

ISO 8601 should be updated in a few months, with a release somewhere around February.

From what I can tell, the first part revises the current standard and the second expands it. The only freely accessible, human-viewable materials I could find were the five-page TOC previews for 8601-1 and 8601-2.

(Hat tip: Thanks, Robin Malhotra)

Prototyping CoreGraphics in the Playground

No matter how flaky, I love using playgrounds to prototype Core Graphics, SpriteKit, and many other see-as-you-go technologies. They’re fantastic for building out specific custom content with a bare minimum of coding investment. You get a lot of win for very little time.

I was helping someone out the other day, explaining the strokeEnd keypath (versus the path keypath), and a quick playground showed it off to perfection.
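The gist of that demo looks something like this standalone sketch (in the real playground, the view comes from my setup code):

import UIKit

// A stand-in for the playground's centered demo view.
let view = UIView(frame: CGRect(x: 0, y: 0, width: 240, height: 240))

let shape = CAShapeLayer()
shape.path = UIBezierPath(ovalIn: view.bounds.insetBy(dx: 20, dy: 20)).cgPath
shape.fillColor = UIColor.clear.cgColor
shape.strokeColor = UIColor.blue.cgColor
shape.lineWidth = 4
view.layer.addSublayer(shape)

// Animating strokeEnd progressively draws the existing path;
// animating the path keypath would morph the geometry instead.
let drawIn = CABasicAnimation(keyPath: "strokeEnd")
drawIn.fromValue = 0
drawIn.toValue = 1
drawIn.duration = 2
shape.strokeEnd = 1
shape.add(drawIn, forKey: "drawIn")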

Admittedly, it helps to have quick helper code on hand for quick starts. In this one, I have playground-specific setup code handing me a view controller (called vc) and a centered view, ready to start demo-ing.

I also have a couple of pages of code (like the layer(path:) constructor, the animateStroke methods, and the schedule() utility) off-page in the support module. They’re all highly reusable. It’s a pity in-playground debugging is so dreadful. It would be an ideal module-building tool if not for that: build and explore (and ideally build tests) in a single place, without having to work in a fixed workspace that lacks the exploration feature. Adding “convert this exploration into a test” would be icing on top.

I’m disappointed that playground-specific visualizations built for teaching and demos don’t transfer to the debugger for real-world production support. I don’t see any reason why a CGPoint instance should get a pretty graphic representation but a CGAffineTransform, for which I have quite a full presentation, does not.

I can use custom mirroring to produce valuable output for dump, and therefore for printing objects in the debugger, but not for debug Quick Looks. Plus, as far as I can tell, the custom NSObject-only Quick Looks haven’t been updated in years, and there’s no hint of extending them to structs and enums.
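For example, a wrapper with a custom mirror along these lines (the names are mine, purely for illustration) gets dump to print a transform’s fields with labels, even though the debugger’s Quick Look stays blank:

import CoreGraphics

// Wrap the transform rather than retroactively conforming the
// CoreGraphics type itself; the custom mirror exposes labeled fields.
struct TransformMirror: CustomReflectable {
    let transform: CGAffineTransform
    var customMirror: Mirror {
        Mirror(self, children: [
            "a": transform.a, "b": transform.b,
            "c": transform.c, "d": transform.d,
            "tx": transform.tx, "ty": transform.ty
        ], displayStyle: .struct)
    }
}

dump(TransformMirror(transform: CGAffineTransform(rotationAngle: .pi / 4)))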

By the way, what’s the deal with all the API audits? How long are these going to go on? If you thought updating the app delegate was a minor nuisance, you haven’t seen what’s happened to all the constants and Core Graphics APIs. This update is huge and disruptive…