Archive for the ‘Misc’ Category

Nostalgia Tuesday: By request, my 2012 Siri Post

Well, if anything does happen on Monday, we can play “How badly did she get it wrong”, right? And to add some icing, here’s a what-if post about Siri controlling your Apple TV and a proof-of-concept Siri-style dictation used in-app.

(There’s a comment on the video that I particularly love: “Scammer watch the mouse across the screen at the end.” I hate to destroy the tinfoil but I was feeding the Apple TV output through EyeTV and recording the output on my Mac. Bless that person’s conspiratorial heart.)

How 3rd Party apps might integrate with Siri

Third-party integration into Siri remains at the top of many of our TUAW wish lists. Imagine being able to say “Play something from Queen on Spotify” or even “I want to hear a local police scanner.” And Siri replying, “OK, you have two apps that have local police scanners. Do you want ScannerPro or Wunder Radio?”

So why doesn’t Siri do that?

Well, first of all, there are no third-party APIs. Second, it’s a challenging problem to implement. And third, it could open Siri to a lot of potential exploitation (think of an app that opens every time you say “Wake me up tomorrow at 7:00 AM” instead of deferring to the built-in timer).

That’s why we sat down and brainstormed how Apple might accomplish all of this safely using technologies already in-use. What follows is our thought experiment of how Apple could add these APIs into the iOS ecosystem and really allow Siri to explode with possibility.

Ace Object Schema. For anyone who thinks I just sneezed while typing that, please let me explain. Ace objects are the assistant command requests used by the underlying iOS frameworks to represent user utterances and their semantic meanings. They offer a context for describing what users have said and what the OS needs to do in response.

The APIs for these are private, but they seem to consist of property dictionaries, similar to the property lists used throughout Apple’s OS X and iOS systems. It wouldn’t be hard to declare support for Ace Object commands in an application’s Info.plist property list, just as developers now specify what kinds of file types they open and what URL schemes they respond to.
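To make the idea concrete, here’s a sketch of what such an Info.plist declaration might look like, modeled on the existing CFBundleURLTypes pattern. Every key here (AceCommandTypes, AceCommandIdentifier, AceCommandPhrases) is invented for illustration; no such keys exist in iOS.

```xml
<!-- Hypothetical Info.plist entry. These keys are made up for this
     thought experiment and do not exist in any iOS SDK. -->
<key>AceCommandTypes</key>
<array>
    <dict>
        <key>AceCommandIdentifier</key>
        <string>com.spotify.client.play-request</string>
        <key>AceCommandPhrases</key>
        <array>
            <string>play</string>
            <string>I want to hear</string>
        </array>
    </dict>
</array>
```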

Practicality. If you think of Siri support as a kind of extended URL scheme with a larger vocabulary and some grammatical elements, developers could tie into standard command structures (with full .strings-file localizations, of course, for international deployment).

Leaving the request style of these commands to Apple would limit the kinds of requests initially rolled out to devs, but it would maintain the highly flexible way Siri users can communicate with the technology.

There’s no reason for devs to have to think of a hundred ways to say “Please play” and “I want to hear”. Let Apple handle that — just as it handled the initial multitasking rollout with a limited task set — and let devs tie onto it, with the understanding that these items will grow over time and that devs could eventually supply specific localized phonemes that are critical to their tasks.

Handling. Each kind of command would be delineated by reverse domain notation, e.g. com.spotify.client.play-request. When matched to a user utterance, iOS could then launch the app and include the Ace dictionary as a standard payload. Developers are already well acquainted with responding to external launches through local and remote notifications, through URL requests, through “Open file in” events, and more. Building onto these would let Siri activations use the same APIs and approaches that devs already handle.
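In code, receiving such a launch might look something like the following Swift sketch. The payload keys, the command identifier, and the entry point are all hypothetical; this simply shows the dictionary-dispatch pattern the thought experiment assumes.

```swift
// Hypothetical sketch: an app receiving an Ace-style command payload
// at launch. Everything here is invented for illustration; there is
// no public Siri payload API.
func handleAceCommand(_ payload: [String: Any]) {
    guard let command = payload["AceCommandIdentifier"] as? String else {
        return // not a Siri-driven launch; fall back to normal startup
    }
    switch command {
    case "com.spotify.client.play-request":
        // Pull the user's request details out of the payload and act on them.
        let query = payload["query"] as? String ?? ""
        print("Begin playback for: \(query)")
    default:
        // Unrecognized command: ignore and launch normally.
        break
    }
}
```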

Security. I’d imagine Apple could treat Siri enhancement requests from apps the same way it currently handles in-app purchases. Developers would submit individual requests for each identified command (again, e.g. com.spotify.client.play-request) along with a description of the feature, the Siri specifications (XML or plist), and so forth. The commands could then be tested directly by review-team members or put through automated compliance checks.

In-App Use. What all of this adds up to is an iterative way to grow third party involvement into the standard Siri voice assistant using current technologies. But that’s not the end of the story. The suggestions you just read through leave a big hole in the Siri/Dictation story: in-app use of the technology.

For that, hopefully Apple will allow more flexible tie-ins to dictation features outside of the standard keyboard, with app-specific parsing of any results. Imagine a button with the Siri microphone that developers could add directly, no keyboard involved.

I presented a simple dictation-only demonstration of those possibilities late last year. To do so, I had to hack my way into the commands that started and stopped dictation. It would be incredibly easy for Apple to expand that kind of interaction option so that spoken in-app commands were not limited to text-field and text-view entry, but could be used in place of touch-driven interaction as well.

 

Sing out your Mac

Use tcsh (just enter /bin/tcsh) and then:

  • repeat 22 echo "da" | say -v "Good News"
  • repeat 26 echo "da" | say -v Cello
  • repeat 22 echo "da" | say -v "Bad News"

and of course

  • say -v Cello droid


Dipping toes into coworking

As George RR Martin would have put it if his kids were in public school: Summer’s coming. (“A Song of Bored and Cranky”) This Summer, I plan to try to use some coworking spaces to get the heck out of the house and away from the kids for a couple of days a week.

So far, I’ve tried out “eat-working” (Starbucks, Panera, Einsteins), which really doesn’t offer the kind of long-term desk situation I’m looking for, and “public-working” at the library, which had iffy chairs and tables and intermittent Internet. I’ve also done (so far) one commercial day trial, which had great desks, horrible chairs, and nice ambience, but a no-talking policy that meant I couldn’t conference, use my phone, or use Siri except by stepping outside. (And absolutely no way to turn up the iTunes volume and shout it out, shout it out loud…)

If you’ve coworked and have any recommendations for what I should be looking for in a good coworking space, please share. I’m not exactly looking for place recommendations (unless you have specific ones to mention) but more to put together a list for myself of what makes a good coworking environment, so I can better evaluate what I’m looking at before committing to any monthly or longer contract. I’d appreciate any advice you have to offer.

I’m looking for something fairly local, inexpensive, with good business-level WiFi, comfortable business-level chairs and desks (I can bring in my own lumbar cushion and footstool if needed), a safe area, clean bathrooms, nearby shops, a microwave, easy in-and-out, and day lockers of some sort so I don’t have to carry absolutely everything in and out with me every time I hit the bathroom or go to lunch. I’d also like to be surrounded more by tech folk than marketing folk, but I recognize that’s not going to be something I can control.

I will say that while I was remarkably productive on my days out, I was productive in all the wrong ways: I zoomed through my correspondence. I’m now set up beautifully with my calendar and with “Things”. I got nothing done on actual development or real writing work. And I did nothing that needed phones, such as making appointments or checking in on projects. I also found it really hard to take breaks, stretch, and just do the “wander and think” thing.

What kind of work opportunities do you reserve for outside your office? And how do you adapt your workflow to small screens (I have the new MBP with me), strange places, and ergonomic limits?

Thanks in advance for your thoughts.

How to curl raw content from gists

In my most recent post about installing Ubuntu/Swift, I glancingly referred in a screenshot to pulling Swift source from a GitHub gist. I thought it was a useful enough tip to pull out into its own post.

Say you have a gist, for example: this one, which is located at gist.github.com/erica/4d31fed94f3668342623. I threw this sample together to help some Linux-ers work with incomplete Foundation and Standard Library implementations on Ubuntu.

If you grab this link’s content from the terminal command-line using curl, you’ll end up with the page’s HTML source, which is pretty much not what you want to compile:

(Screenshot: curling the gist page URL returns the page’s HTML source.)

Instead, replace gist.github.com with gist.githubusercontent.com and append /raw to the url:

https://gist.githubusercontent.com/erica/4d31fed94f3668342623/raw

This adjusted URL bypasses the HTML page contents and accesses the raw text stored within the gist. Just redirect with > into a Swift file and you’re ready to compile.
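The rewrite is mechanical enough to script. Here’s a small helper of my own (not part of curl or GitHub’s tooling) that swaps the host and appends /raw; note it assumes a single-file gist, where the bare /raw suffix works.

```shell
# Rewrite a gist page URL into its raw-content URL:
# swap gist.github.com for gist.githubusercontent.com and append /raw.
# Assumes a single-file gist.
gist_raw_url() {
  echo "$1" | sed \
    -e 's#gist\.github\.com#gist.githubusercontent.com#' \
    -e 's#$#/raw#'
}

gist_raw_url "https://gist.github.com/erica/4d31fed94f3668342623"
# → https://gist.githubusercontent.com/erica/4d31fed94f3668342623/raw
```

From there, `curl -L "$(gist_raw_url https://gist.github.com/erica/4d31fed94f3668342623)" > test.swift` drops you straight at a compilable file.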

(Screenshot: curling the raw URL returns the gist’s Swift source, ready to compile.)

I hope this tip helps minimize the misery of sharing code with Ubuntians.

Run Swift in your Web Browser and on your iPad’s Web Browser

image courtesy of “peshalto”

Today, IBM introduced an online Swift 2.2 sandbox. John Petitto writes, “We love Swift here and thought you would too so we are making our IBM Swift Sandbox available to developers on developerWorks.”

IBM’s entry joins a couple of other sites already available for you to use.

If you don’t mind working with an external keyboard, a lack of functioning control keys and arrow keys, and using Safari-based interpreters, you can also use Swift on your iPad. I assure you the idea is a lot better than the reality.

Amazon Kindle Fire, the $35 “Education” edition

It was hard to miss the $35 Kindle Fire deal on Friday. Deep discounts extended across nearly all the Amazon Kindle product line but I limited myself to purchasing two units, augmenting the 2011-vintage unit we already had on-hand.

Our new tablets arrived yesterday and they are definitely a step up in quality from the original line. They’re faster, the UI is cleaner, and the features are more extensive, with built-in cameras and a microphone. The units are not so different in weight, but they feel better made and more consumer-ready.

The new Fires are also more obviously, in-your-face, a marketing arm of Amazon and less general-purpose tablets. That’s hardly surprising for a $35 (shipped!) purchase, but it’s one that, as a parent, you have to be really cautious with. I quickly enabled parental controls (something I’ve never done on our iPads) and disabled insta-purchases.

One of the two Kindles is replacing a first generation iPad mini, which was lovingly purchased as “gently refurbished” before being dropped from a height of about three feet to its death, approximately five seconds (give or take a week) after its arrival. That mini replaced a 1st gen iPad, which since the mini’s untimely demise, has been back in service — gasping and wheezing and doing its best to keep up. The Kindle is no iPad mini but it has a role in our lives to fill.

Speaking as a parent, having a $35 alternative is a very good thing. I don’t really care that it doesn’t run all the same apps (or even very many of the good apps). It connects to the net, does email and web, and lets my child do most school-related tasks. It is acceptable.

We’ll see how the school transition goes. I suspect teachers will applaud the built-in book reading and condemn the onboard videos. (There’s also a music app but really who wants to spend time setting that up?) At the very least, this new tablet will probably work better and more reliably than the 1st generation iPad that’s currently being hauled to and from school every day. Fingers crossed.

As for the second Kindle Fire, well, that’s going to younger brother, who is currently trying to keep his Chromebook working. The 2012 Samsung Chromebook, although initially appealing, turned out to be one of the worst pieces of hardware we ever bought.

His all-Chromebook school agrees. They’re transitioning next year away from these cheaply made, unreliable pieces of…hardware…probably to iPads if they can get a deal/grant/whatever through the school district.

Every parent was required to purchase Chromebook insurance. We’ve paid twice for replacements, and this doesn’t count the 2012 Chromebook we personally bought out of pocket and liked so much for the first few months until it started to fail and fail and fail and fail.

Compare this to our 2011 Kindle Fire, which, other than a loose charging port, is still working well, and our 2010 iPad, which we’re abandoning only because it weighs about as much as a baby elephant and cannot run new operating systems.

Amazon isn’t pushing the Kindle into the classroom the way Apple makes that connection. It’s a commerce machine, not an instrument of learning and expression. I may have to use side-loading to get classroom-specific Android software onto the boy’s new Kindle Fire. Last night, I got the technique down, just in case.

For $70 total shipped between the two tablets, it’s an experiment I’m happier to make than usual. Wish us luck. I know there will be more roadbumps than if we went the iPad route.

Did the Verge just insult me?

Me, a few weeks ago:

When your brain and fingers are absolutely wired for Emacs editing, it’s a frustrating experience to have to work on the iPad, with all its touching. As a touch-typist, any time I have to move my hands away from the keyboard, it feels like I have failed.

After some searching around App Store, I eventually downloaded a few Emacs-style editors. Of these, em notes (about five bucks) offered the best solution. It links with your Dropbox account and enables you to edit text in an application folder there, ensuring you can load and work on documents and have them available as well in the “real world”, aka anywhere you’re not working on an iPad.

The Verge, today:

Oh and it should go without mentioning, that the keyboard itself is only a solution for geeks and no one else…for a regular person who wants to regularly use an iPad Pro with a keyboard, your only solution is using the UI with your fingers constantly. This is simply true; just as it’s true that only the nerdiest of nerds should learn how Emacs and its keybindings work…

Sniff.

Apple TV: Exploring physical activity

Incorporating, or more accurately trying to incorporate, physical activity into a tvOS app isn’t easy. Apple TV is no Kinect or Wii. At this time, about the best you can do with “native equipment” is wave a Siri remote around in the air or pop it into a pocket and use it as a kind of pedometer when walking on a treadmill or in place.

I spent a bit of time reviewing some of the “greatest hits” of physical TV gaming from walking companions to boxing/cheerleading games (would be much better with a secondary nunchuck) to yoga/balance and so forth. The best of these look at a lot more than a single arm-action point of control.

If you don’t mind going cross-system, of course, you can use motion features already on iOS and project them (directly over AirPlay, or indirectly through Bonjour, etc.) to a tvOS app, but now we’re talking bigger, bulkier, more planning, less impulse use, and less tvOS “app” design.

In theory, you can use a wrist strap of some kind with the Siri remote and rely on arm motion to model physical activity. The chances of flinging the remote remain quite high unless the remote is physically tied down to a forearm, which would require a well-designed physical adaptor.

Conclusion? At this point, Apple is wasting a strong health branding component with its Apple TV product. Between the watch, iOS Motion, and HealthKit, Apple TV should be much more proactive than apps limited to logging meals (still easier to do on an iOS device) and offering coaching advice.

Opportunity wasted, premature entry onto the market, or simply wrong aim/branding?

A stroll through a Swift oddity: argument ordering

This little oddness popped up in IRC and I thought I’d share.

When creating methods and functions, you cannot normally re-order arguments. Try it (see this Swiftstub), and you’ll raise an error like this example’s “argument ‘a’ must precede argument ‘c’”.

But give each argument a default value, and you can re-order as desired (see this Swiftstub). Cool, right?
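A minimal illustration of the two cases, as they behaved in Swift 2-era compilers (treat this as a historical sketch; later Swift releases may well reject the reordered call):

```swift
// Without defaults: reordering arguments is a compile-time error.
func fixed(a: Int, b: Int, c: Int) {}
// fixed(c: 3, a: 1, b: 2)  // error: argument 'a' must precede argument 'c'

// With a default value on every parameter, the compiler of that era
// accepted arguments in any order:
func loose(a: Int = 1, b: Int = 2, c: Int = 3) -> Int {
    return a * 100 + b * 10 + c
}
loose(c: 9, a: 7)  // accepted at the time; b falls back to its default of 2
```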

I’m guessing this is less an intentional feature than an artifact of Swift’s default value implementation. When defaults are available, you’d generally want to offer more permissive inclusion and exclusion.

As things stand, I think it’s kind of nifty. That said, I wouldn’t be surprised if it was either eliminated from the language or the expressibility expanded to all functions and methods with fully-articulated external names.