Last June, Izzy inspired me to do something with Bluetooth and playgrounds, but honestly, I haven’t had the time and I couldn’t afford a Sphero. I’ve wrapped up Swift Style. Attempting to write meaningfully about drawing while the Denver Public School system has, for reasons I cannot begin to comprehend, released my child to my recognizance for two entire weeks seems unlikely. (Another child has half days. Fun.)
To prepare, I purchased one of the cheapest BLE devices I could find, a Mi wristband (under $20 shipped from Amazon), which has a reverse-engineered API that lets you control its vibration. A friend of mine just purchased the hugely expensive Buzzies for Autism bands. I’m hoping I can mimic some of that functionality with a playground, a low-rent BLE device, and a full-price child.
Have I mentioned recently how awesome playgrounds are for playing around with and learning about new tech? They really are, especially because you can integrate just one concept at a time, and then test it live before expanding to the next.
I decided to go with Cocoa for my BLE exploration instead of iOS, although the tech is more or less the same on both platforms. When you work in Cocoa, using a macOS playground, the startup speed is phenomenal because you don’t have to work with a simulator.
My first project simply sets up a central manager (CBCentralManager), monitors its state, and lists any devices it finds. I’m pretty happy with this as a first-day result: not many hours to spend on it, just playing around and doing something marginally useful.
The CoreBluetooth documentation is pretty dire. For example, consider the Swift documentation for CBManagerStatePoweredOn. After SE-0005, the constant is actually .poweredOn, as you’ll see in the sample code that follows, not CBManagerStatePoweredOn. And there’s no actual documentation in that documentation.
Nonetheless, I persevered, and my first child-full day produced a basic helper class. You really need to work in NSObject land for this because of all the delegation. So I set up an objc-friendly class, set it as the manager’s delegate, and implemented the one required callback method, which tracks the manager’s state.
Try sticking the Bluetooth icon in your system menu bar. (System Preferences > Bluetooth > Show Bluetooth in menu bar.) It’s a lot of fun to toggle it on and off and watch your playground keep tabs on that state.
Next, I added a basic peripheral scan. You should start scanning only once the manager reaches the poweredOn state.
Apple writes, “Before you call CBCentralManager methods, the state of the central manager object must be powered on, as indicated by the CBCentralManagerStatePoweredOn constant. This state indicates that the central device (your iPhone or iPad, for instance) supports Bluetooth low energy and that Bluetooth is on and available to use.”
That’s why I added the scan to the playground’s “update state” callback. You’ll also want to stop scanning when Bluetooth powers off.
Finally, I implemented one more callback, which asynchronously lists discovered peripherals. It picked up my Apple TV nicely, and noticed each time I enabled and disabled the hotspot on my iPhone. Great fun.
Here’s the code involved. You can see how very short it is. The struggle wasn’t in the lines of code or their complexity; it was mostly in how badly documented nearly everything seems to be.
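In rough outline, it boils down to a single NSObject subclass that owns the central manager, acts as its delegate, kicks off a scan once the state hits .poweredOn, and logs whatever it discovers. Here’s a minimal sketch of that shape for a macOS playground; the BLEWatcher name and the logging are my own placeholders, while the CoreBluetooth calls are the standard Swift 3 ones.

import Cocoa
import CoreBluetooth
import PlaygroundSupport

// An objc-friendly helper that owns the central manager and acts as its delegate
class BLEWatcher: NSObject, CBCentralManagerDelegate {
    var manager: CBCentralManager!

    override init() {
        super.init()
        manager = CBCentralManager(delegate: self, queue: nil)
    }

    // The one required callback: tracks the manager's state
    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        switch central.state {
        case .poweredOn:
            print("Powered on, starting scan")
            central.scanForPeripherals(withServices: nil, options: nil)
        case .poweredOff:
            print("Powered off, stopping scan")
            central.stopScan()
        default:
            print("State changed: \(central.state)")
        }
    }

    // Asynchronously reports each discovered peripheral
    func centralManager(_ central: CBCentralManager,
                        didDiscover peripheral: CBPeripheral,
                        advertisementData: [String: Any],
                        rssi RSSI: NSNumber) {
        print("Discovered \(peripheral.name ?? "Unnamed"), RSSI \(RSSI)")
    }
}

let watcher = BLEWatcher()

// Keep the playground alive so the asynchronous callbacks can arrive
PlaygroundPage.current.needsIndefiniteExecution = true

Toggling Bluetooth from the menu bar exercises both branches of that switch.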
Today, Nikita Voloboev was trying to wrap his head around how this whole Cocoa/Cocoa Touch API thing worked. The conversation started when he asked, “Is UIKit part of Cocoa?” The docs weren’t really giving him an idea of how it all worked. After a few minutes back and forth, he derived this concept using the tools at mindnode.com:
It was a good first guesstimate even though it doesn’t exactly capture Apple’s API design. I hopped into Preview and sketched out this diagram instead. (And yes, some day I may write a book about all the cool things you can do for free in Preview):
I explained how the API families shared certain frameworks but that the frameworks weren’t uniformly implemented across all the platforms within that family. Cocoa and Cocoa Touch, I tried to explain, were the API families specific to four operating systems: macOS, iOS, watchOS, and tvOS.
I pointed him to this definition, which I pulled up by a web search for define cocoa apple:
Cocoa is Apple’s native object-oriented application programming interface (API) for their operating system macOS. For iOS, tvOS, and watchOS, a similar API exists, named Cocoa Touch, which includes gesture recognition, animation, and a different set of graphical control elements.
He then threw that definition into another mind map, which he uses to keep notes. This was a cool and unexpected way of exploring new knowledge:
I’m fascinated by his learning toolset, which includes both mind mapping and Anki decks (see https://apps.ankiweb.net and this explanation of spaced repetition). I tend to turn to paper and pen, or other familiar tools, to take notes or to share.
What kind of tools do you use to explore and explain new areas of learning?
In my continuing exploration of the Cozmo SDK, I’ve written my first game. “Hide and Seek” is not particularly complex or challenging. I’m still pretty new to both Python and the SDK tools and my wrapper code is still pretty basic.
“Hide and Seek” goes like this. You place a light cube near Cozmo, out of his direct line of sight. He searches until he finds it, picks it up, moves it to a new location, and drops it. He then turns around, with his back to the cube, and says “Let’s play again”.
Although this is a first take, I think my wrapper approach remains pretty simple and readable. I’m trying to emphasize learning programming concepts, using Python, which means that Cozmo access needs to be encapsulated and procedural:
# run, cozmo, run
def actions(cozmoLink):
    '''Specify actions for cozmo to run.'''

    # Fetch robot
    coz = Cozmo.robot(cozmoLink)
    coz.say("Hide and seek!")

    # Look for a cube
    if not coz.findACube():
        coz.say("No cube")
        return

    # Found one!
    coz.say("I found a cube")
    coz.takeCube() # Pick it up
    coz.say("I have the cube")
    coz.drive(time = 3, direction = Direction.forward) # Drive
    coz.dropCube() # Place the cube
    coz.turn(degrees = 180) # Turn around
    coz.say("Let's play again")

Cozmo.startUp(actions)
The most complex concept here is searching for a block (coz.findACube()) and acting on a Boolean return value. I’m not completely in love with how I established this notion. Maybe something more along the lines of “ask to look and find” instead of just “find” would better indicate conditionality.
Under the covers, the Cozmo class now stores both a cube of interest and a list of cubes within view. I’m not sure I’ll stay with this specific design, but this new feature is what lets you omit mentioning the cube instance in the latter half of the game. I probably need to step back and refactor toward a Cozmo “world” class that describes what he sees and better mirrors the world exposed by the direct APIs.
Here’s a video of the “Hide and Seek” gameplay:
My immediate goals are to encapsulate all the asynchronous and exception-handling code into very simple call styles. I want to model the world, the robot, and the interactions in a more human-based way, to support simple programming concepts: procedure, state, condition, iteration, and eventually functions.
Even though I’m directly interested in teaching, at the back of my mind, I want to eventually get to the point where I can introduce some emotion programming, which I think is perfect for Cozmo.
I wouldn’t have to start from scratch. There’s some GPL-licensed work done in the Facemoji project. Facemoji harvests an emotion dataset, classifies the emotions, and then matches incoming video against the data set. Wouldn’t it be great if Cozmo could react to your face beyond recognition, playing off happiness, sadness, and so on?
Anki has been kind enough to let me play with their new Cozmo unit and explore their SDK. Cozmo is a wonderful device, developed by people who understand a lot of core principles about human interaction and engagement.
Cozmo is adorable. When it recognizes your face, it wriggles with happiness. It explores its environment. When it’s bored, it sets up a game to play with you. It can get “upset” and demand attention. It’s one of the most personable and delightful robots I’ve played with.
At its heart is a well-chosen collection of minimal elements. The unit can move around the room with a 4-wheel/2-tread system. It includes an onboard forklift that can rise and fall, an OLED “face” that expresses emotion, and a camera that ties into a computer vision system, which I originally believed was based on PIL, the Python Image Library. (Anki tells me that Cozmo’s vision system “does not use PIL or Python in any way, though the Python SDK interface uses PIL for decoding jpegs, drawing animations, etc.”)
Three lightweight blocks with easily identified markings, which Cozmo can tap, lift, stack, and roll, complete the package.
Between its remarkable cuteness and its vision-based API, it’s a perfect system for introducing kids to programming. I was really excited to jump into the SDK and see how far I could push it.
Here is Anki’s “Hello World” code (more or less, I’ve tweaked it a little) from their first developer tutorial:
import sys
import cozmo

'''
Hello Human
Make Cozmo say 'Hello Human' in this simple
Cozmo SDK example program.
'''

def run(sdk_conn):
    robot = sdk_conn.wait_for_robot()
    robot.say_text("Hello Human").wait_for_completed()
    print("Success")

if __name__ == '__main__':
    cozmo.setup_basic_logging()
    try:
        cozmo.connect(run)
    except cozmo.ConnectionError as err:
        sys.exit("Connection error: %s" % err)
Although simple, this “Hello World” includes quite a lot of implementation details that can scare off young learners. For comparison, here’s the start of Apple’s tutorial on Swift “Learn to Code”:
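The first puzzles hand Byte just a handful of world commands, along these lines (I’m reproducing the flavor from memory, so treat the exact sequence as approximate):

// Byte's entire vocabulary at this point: walk and collect
moveForward()
moveForward()
moveForward()
collectGem()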
There’s such a huge difference here. In Apple’s case, everything that Byte (the main character) does is limited to easy-to-understand, simple calls. The entire implementation is abstracted away, and all that’s left are instructions and very directed calls, which the student can put together, re-order, and explore with immediate feedback.
In Anki’s code, you’re presented with material that’s dealing with set-up, exceptions, asynchronous calls, and more. That is a huge amount of information to put in front of a learner, and to then say “ignore all of this”. Cozmo is underserved by this approach. Real life robots are always going to be a lot more fun to work with than on-screen animations. Cozmo deserved as simple a vocabulary as Byte. That difference set me on the road to create a proof of concept.
In this effort, I’ve tried to develop a more engaging system of interaction that better mirrors the way kids learn. By creating high-level abstractions, I wanted to support the same kind of learning as “Learn to Code”, which begins with procedural calls, then conditional ones, and moves on to iteration, functional abstraction, and so forth.
My yardstick of success has been, “can my son use these building blocks to express goals and master basic procedural and conditional code?” (I haven’t gotten him up to iteration yet.) So far, so good, actually. Here is what my updated “Hello World” looks like for Cozmo, after creating a more structured entry into robot control functionality:
from Cozmo import *

# run, cozmo, run
def actions(cozmoLink):
    '''Specify actions for cozmo to run.'''

    # Fetch robot
    coz = Cozmo.robot(cozmoLink)

    # Say something
    coz.say("Hello Human")

Cozmo.startUp(actions)
Not quite as clean as “Learn to Code” but I think it’s a vast improvement on the original. Calls now go through a central Cozmo class. I’ve chunked together common behavior and I’ve abstracted away most implementation details, which are not of immediate interest to a student learner.
Although I haven’t had the time to really take this as far as I want, my Cozmo system can now talk, drive, turn, and engage (a little) with light cubes. What follows is a slightly more involved example. Cozmo runs several actions in sequence, and then conditionally responds to an interaction:
from Cozmo import *
from Colors import *

# Run, Cozmo, run
def actions(cozmoLink):
    '''Specify actions for cozmo to run.'''

    # Fetch robot
    coz = Cozmo.robot(cozmoLink)

    # Say something
    coz.say("Hello")

    # Drive a little
    coz.drive(time = 3, direction = Direction.forward)

    # Turn
    coz.turn(degrees = 180)

    # Drive a little more
    coz.drive(time = 3, direction = Direction.forward)

    # Light up a cube
    cube = coz.cube(0)
    cube.setColor(colorLime)

    # Tap it!
    coz.say("Tap it")
    if cube.waitForTap():
        coz.say("You tapped it")
    else:
        coz.say("Why no tap?")
    cube.switchOff()

Cozmo.startUp(actions)
And here is a video showing Cozmo executing this code:
If you’d like to explore this a little further:
Here is a video showing the SDK feedback during that execution. You can see how the commands translate to base Cozmo directives.
I’ve left a bit of source code over at GitHub if you have a Cozmo or are just interested in my approach.
As you might expect, creating a usable, student-focused learning system is time consuming and exhausting. Beyond the controlled functionality, what’s missing here is a lesson plan and a list of skills to master, framed as “Let’s learn Python with Cozmo”. What’s here is just a sense of how that functionality might look when directed into more manageable chunks.
Given my time frame, I’ve focused more on “can this device be made student friendly” than producing an actual product. I believe my proof of concept shows that the right kind of engagement can support this kind of learning with this real-world robot.
The thing that has appealed most to me about Cozmo from the start is its rich computer vision capabilities. What I haven’t had a chance to really touch on yet are its high-level features like “search for a cube” and “lift it and place it on another cube”, all of which are provided as building blocks in its existing API, and all of which are terrific touch points for a lesson plan.
I can easily see where I’d want to develop some new games with the robot, like lowering reaction time (it gets really hard under about three quarters of a second to tap that darn cube) and creating cube-to-cube sequences of light. I’d also love to discover whether I can extend detection to some leftovers my son brought home from our library’s 3D printer reject bin.
Cozmo does not offer a voice input SDK. Its only real ways to interact are through its cameras (and vision system) and through taps on its cubes. Even so, there’s a pretty rich basis for crafting new ways to interact.
As for Anki’s built-ins, they’re quite rich. Cozmo can flip cubes, pull wheelies, and interact in a respectably rich range of physical and (via its face screen) emotional ways.
Even if you’re not programming the system, it’s a delightful toy. Add in the SDK though, and there’s a fantastic basis for learning.
I recently reached out to Anki, creators of the Cozmo robot, to ask if I could explore the device from a developer’s perspective. Shipping with a Python SDK, Cozmo offers some surprisingly sophisticated image processing and recognition features, analogous to Apple’s Core Image.
Before jumping into the programming side of things, let me acknowledge that I am primarily an Apple developer. Therefore I must categorically kick off by evaluating package design.
I didn’t get to start unboxing last night. When my unit arrived during a cold snap, the package window was all fogged up. I opened the top, added a desiccant unit, and let the package dry out and warm up overnight.
This morning, I finally could explore. As you’d expect with high end consumer goods (Cozmo retails for $180), the box used satisfyingly thick cardboard, was easy to open, and presented the product nicely while hiding the user manual, power cord, and accessories.
Unexpectedly, my favorite part of the entire boxing system was Cozmo’s perch. Made of quality yellow plastic, this industrial presentation feature is practically a toy in itself.
Flip it over, and instead of expected wire wraps, there’s an ingenious system to release the Cozmo unit. Pull up the yellow spacer and pinch the two white tabs. It all comes apart, allowing you to remove Cozmo, and start him charging.
Then while he charges, you can spend time putting the pieces back together and taking them apart over and over. It’s practically an extra free toy that ships with the robot. Beautiful design and completely unexpected.
The full complement of parts includes the Cozmo robot, a USB charging dock with a separate wall adapter, a trio of play blocks (“Interactive Power Cubes”), and a welcome packet.
The instructions say to place him on an open table in a well lit room, with room to move around. Charging from empty to full is specified at 10-12 minutes, with a rated play time of 1-2 hours. (I wouldn’t be surprised if that number drops once you start putting extra demands on his processors.)
That 10-12 minutes subjectively lasts approximately 3-4 months after you first finish unboxing. Child and I got into a heated debate as to whether we’d name him after Cosimo de’ Medici, founder of the Medici political dynasty, or Cosmog, nebula Pokemon and opener of “Ultra Wormholes”.
I don’t want to spoil the instruction booklet for you so let me just say, the writeup is adorable, clever, and simple. The consumer warnings in particular made me laugh out loud. It’s a great example of technical writing and communication, focusing on simplicity and clarity. It’s really well done.
You must install a separate Cozmo application on an iOS or Android device. A 1.1.0 update just shipped, so make sure you’re running the latest version. It helps if you watch this video before trying to set up.
Open Settings and connect to the Cozmo WiFi network. Lift Cozmo’s front arm to display the password, then use Settings to log in, making sure to type everything with exact upper casing. The password is a random mix of upper case letters and numbers, and the iPhone’s keypad doesn’t preserve casing when you move between number input and text input.
Anki recommends typing the password into your Notes app, and I endorse this suggestion, especially given how many times I tried to get it to connect. That’s because at first I didn’t realize Cloak VPN was trying to secure the Cozmo WiFi connection. It really, really helps if you set it up to trust the Cozmo network. This one detail put a huge crimp in my set-up, causing immeasurable pain and frustration. I finally found this support write-up, which mentions VPN issues at the very bottom.
I haven’t spent much time playing with Cozmo yet, but speaking as an iOS dev, there are quite a few things Anki could do to tighten up their app. There’s great content and some terrific games, but the app reads as “cross platform with compromise”. (It seems to have Unity under the hood.)
To give just a couple of examples, the cursor controls in a custom text field don’t follow iOS standards. When you use cursor arrows during typing, the active cursor position does not update. That’s iOS 101.
In terms of user interface flow, it’s missing iOS’s inherent “deference to the user”. For example, when a connection attempt fails there’s no “try again” button; you have to go back through the help screen over and over again. (Which I did, over and over and over, through an hour or so of set-up until I found that VPN advice.) When you’re using games and other features, the ability to quickly switch tasks is somewhat limited.
The app would benefit from a HIG/GUI once-over for usability but that’s really the only weak spot in the big package that I’ve encountered.
As for Cozmo himself, he’s a delight. As part of set-up, he learned my face and that of my daughter. Who cannot love a robot that wakes up from charging, sees your face, recognizes you, greets you by name and then wiggles with happiness?!? It’s phenomenally adorable.
Of course, my interest lies primarily in the SDK, and that exploration will have to wait for another write-up. For now, a summary to date:
Great starter games to inspire development possibilities.
There’s an app.
Cozmo is right now exploring the floor of my office, making random offhand comments in robotese about what he’s finding and generally having a ball. I think I’ll stop writing for a short while and join him in having fun.
For all I know this already exists and I just wasn’t able to google it up. Assuming it doesn’t, tmdiff allows you to perform a command-line diff of a text file against a Time Machine version.
The list option just lists the dates for the backups in reverse chronological order. Supply a path to diff, e.g.
tmdiff Style600-Control\ Flow.md
It defaults to the “but last” backup, an offset of 1. To use the most recent backup, use 0 instead; larger values move further back in time.
I hope this is handy for someone out there on the opposite side of the Intertube, especially since version control is baked into stuff like TextEdit. Do let me know if you use it.
Update: See also github.com/erica/tmls and github.com/erica/tmcp. The former runs ls, complete with arguments. The latter performs a nondestructive copy with the Time Machine date appended.
I’m busy writing about literal style (prefer literals when you can but add typing) and stumbled across a form of binary exponentiation that I hadn’t used before.
Do you know what these two Swift literals evaluate to?
0x15p1, 0xA.8p2
I’ll give you a hint. The numbers must start with hexadecimal values and may include digits past the (hexadecimal) point. The p exponent means “multiply by 2 raised to this power”, so 0x15p1 means 15 hex multiplied by 2, and 0xA.8p2 means A.8 hex multiplied by 4.
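To see the mechanics without spoiling those two, try a different literal; the sample name below is arbitrary.

// 1.8 hex is 1 + 8/16 = 1.5; p3 multiplies by 2 to the 3rd power, which is 8
let sample = 0x1.8p3   // 1.5 * 8 = 12.0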
Of course you can just type them into a playground but where’s the fun in that?
let dent = 0x15p1
If you still don’t know the answers, I’m pretty sure that a pair of ordinary lab mice might.
If you like good, stupid, subversive humor (and who among us does not?), consider pre-ordering Jim Benton‘s “Man, I Hate Cursive”.
Due out this October, this cartoon collection for “People and Advanced Bears” is silly, witty, and laugh-out-loud fun. It offers a collection of Benton’s more popular strips from Reddit, “shining a light on talking animals, relationships, fart jokes, and death” according to the book’s promo copy.
I liked it a lot. Admittedly, some of the humor leans off-color: it’s the kind of book you gift a friend, a fellow programmer, a geek, but maybe not your mom, unless your mom is a friendly programmer geek, in which case she’ll enjoy the laughs.
You’ll probably like it too, in which case, it’s excellent for leaving around on coffee tables if you’re a little uptight or in bathrooms, where its humor might be more appreciated during those deeply philosophical times when you forget your iPad and don’t subscribe to the Ikea catalog.
At just under a hundred pages, the book ended way too soon for me. “Man, I Hate Cursive” is available for pre-order on Amazon ($11.07 paperback, $9.99) and will be published on October 18, 2016.
NetGalley provided me with a free copy of the book for this review.