Anki has been kind enough to let me play with their new Cozmo unit and explore their SDK. Cozmo is a wonderful device, developed by people who understand a lot of core principles about human interaction and engagement.
Cozmo is adorable. When it recognizes your face, it wriggles with happiness. It explores its environment. When it’s bored, it sets up a game to play with you. It can get “upset” and demand attention. It’s one of the most personable and delightful robots I’ve played with.
At its heart is a well-chosen collection of minimal elements. The unit can move around the room on a 4-wheel/2-tread system. It includes an onboard forklift that can rise and fall, an OLED “face” that expresses emotion, and a camera that ties into a computer vision system, which I believe is based on PIL, the Python Imaging Library. (Anki tells me that Cozmo’s vision system “does not use PIL or Python in any way, though the Python SDK interface uses PIL for decoding jpegs, drawing animations, etc.”)
Three lightweight blocks with easily identified markings, which Cozmo can tap, lift, stack, and roll, complete the package.
Between its remarkable cuteness and its vision-based API, it’s a perfect system for introducing kids to programming. I was really excited to jump into the SDK and see how far I could push it.
Here is Anki’s “Hello World” code (more or less; I’ve tweaked it a little) from their first developer tutorial:
```python
import sys
import cozmo

'''
Hello Human

Make Cozmo say 'Hello Human' in this simple Cozmo SDK example program.
'''

def run(sdk_conn):
    robot = sdk_conn.wait_for_robot()
    robot.say_text("Hello Human").wait_for_completed()
    print("Success")

if __name__ == '__main__':
    cozmo.setup_basic_logging()
    try:
        cozmo.connect(run)
    except cozmo.ConnectionError as err:
        sys.exit("Connection error: %s" % err)
```
Although simple, this “Hello World” includes quite a few implementation details that can scare off young learners. For comparison, here’s the start of Apple’s Swift “Learn to Code” tutorial:
There’s such a huge difference here. In Apple’s case, everything that Byte (the main character) does is limited to easy-to-understand, simple calls. The entire implementation is abstracted away, and all that’s left are instructions and very directed calls, which the student can put together, re-order, and explore with immediate feedback.
In Anki’s code, you’re presented with material that deals with set-up, exceptions, asynchronous calls, and more. That is a huge amount of information to put in front of a learner and then say “ignore all of this”. Cozmo is underserved by this approach. Real-life robots are always going to be a lot more fun to work with than on-screen animations. Cozmo deserves as simple a vocabulary as Byte. That difference set me on the road to create a proof of concept.
In this effort, I’ve tried to develop a more engaging system of interaction that better mirrors the way kids learn. By creating high-level abstractions, I wanted to support the same kind of learning as “Learn to Code”, which begins with procedural calls, moves on to conditional ones, and then builds up to iteration, functional abstraction, and so forth.
My yardstick of success has been, “can my son use these building blocks to express goals and master basic procedural and conditional code?” (I haven’t gotten him up to iteration yet.) So far, so good, actually. Here is what my updated “Hello World” looks like for Cozmo, after creating a more structured entry into robot control functionality:
```python
from Cozmo import *

# run, cozmo, run
def actions(cozmoLink):
    '''Specify actions for cozmo to run.'''

    # Fetch robot
    coz = Cozmo.robot(cozmoLink)

    # Say something
    coz.say("Hello Human")

Cozmo.startUp(actions)
```
Not quite as clean as “Learn to Code”, but I think it’s a vast improvement on the original. Calls now go through a central Cozmo class. I’ve chunked common behavior together and abstracted away most implementation details, which are not of immediate interest to a student learner.
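To give a sense of how that central class is shaped, here is a minimal sketch of the idea. It is not the actual library source (that lives in the GitHub repository linked below), and the proxy class name is mine, but it shows how the wrapper can absorb logging, connection, and error handling while exposing single, blocking calls like say:

```python
import sys
import cozmo as sdk   # the real Anki SDK, hidden behind the wrapper

class Cozmo:
    '''Student-facing entry point: owns setup, connection, and errors.'''

    @staticmethod
    def startUp(actions):
        '''Run an actions(cozmoLink) function, handling the boilerplate.'''
        sdk.setup_basic_logging()
        try:
            sdk.connect(actions)
        except sdk.ConnectionError as err:
            sys.exit("Connection error: %s" % err)

    @staticmethod
    def robot(cozmoLink):
        '''Fetch a connected robot, wrapped in the simplified interface.'''
        return _SimpleRobot(cozmoLink.wait_for_robot())

class _SimpleRobot:
    '''Hypothetical proxy that chunks SDK actions into single blocking calls.'''

    def __init__(self, robot):
        self._robot = robot

    def say(self, text):
        '''Speak a phrase and wait until Cozmo has finished talking.'''
        self._robot.say_text(text).wait_for_completed()
```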
Although I haven’t had the time to really take this as far as I want, my Cozmo system can now talk, drive, turn, and engage (a little) with light cubes. What follows is a slightly more involved example. Cozmo runs several actions in sequence, and then conditionally responds to an interaction:
```python
from Cozmo import *
from Colors import *

# Run, Cozmo, run
def actions(cozmoLink):
    '''Specify actions for cozmo to run.'''

    # Fetch robot
    coz = Cozmo.robot(cozmoLink)

    # Say something
    coz.say("Hello")

    # Drive a little
    coz.drive(time = 3, direction = Direction.forward)

    # Turn
    coz.turn(degrees = 180)

    # Drive a little more
    coz.drive(time = 3, direction = Direction.forward)

    # Light up a cube
    cube = coz.cube(0)
    cube.setColor(colorLime)

    # Tap it!
    coz.say("Tap it")
    if cube.waitForTap():
        coz.say("You tapped it")
    else:
        coz.say("Why no tap?")
    cube.switchOff()

Cozmo.startUp(actions)
```
And here is a video showing Cozmo executing this code:
If you’d like to explore this a little further:
- Here is a video showing the SDK feedback during that execution. You can see how the commands translate to base Cozmo directives.
- I’ve left a bit of source code over at GitHub if you have a Cozmo or are just interested in my approach.
As you might expect, creating a usable student-focused learning system is time consuming and exhausting. Beyond the controlled functionality shown here, what’s still missing is a lesson plan and a list of skills to master, framed as “Let’s learn Python with Cozmo”. What’s here is just a sense of how that functionality might look when organized into more manageable chunks.
Given my time frame, I’ve focused more on “can this device be made student friendly?” than on producing an actual product. I believe my proof of concept shows that the right kind of engagement can support this kind of learning with this real-world robot.
The thing that has appealed to me most about Cozmo from the start is its rich computer vision capabilities. What I haven’t had a chance to really touch on yet are its high-level behaviors like “search for a cube” and “lift it and place it on another cube”, all of which are provided as building blocks in the existing API, and all of which are terrific touch points for a lesson plan.
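To give a flavor of what those building blocks look like in the raw SDK, here is roughly what a find-and-stack sequence involves. I’m quoting the calls from memory of Anki’s own stacking example, so treat the exact names and signatures as approximate and check the SDK documentation:

```python
import sys
import cozmo

def run(sdk_conn):
    '''Search for two light cubes, then stack one on the other,
    using the SDK's own high-level building blocks.'''
    robot = sdk_conn.wait_for_robot()

    # Look around until the vision system has spotted two cubes
    lookaround = robot.start_behavior(cozmo.behavior.BehaviorTypes.LookAroundInPlace)
    cubes = robot.world.wait_until_observe_num_objects(
        num=2, object_type=cozmo.objects.LightCube, timeout=60)
    lookaround.stop()

    if len(cubes) < 2:
        robot.say_text("I can't find two cubes").wait_for_completed()
        return

    # Pick up the first cube and place it on the second
    robot.pickup_object(cubes[0], num_retries=3).wait_for_completed()
    robot.place_on_object(cubes[1], num_retries=3).wait_for_completed()

if __name__ == '__main__':
    cozmo.setup_basic_logging()
    try:
        cozmo.connect(run)
    except cozmo.ConnectionError as err:
        sys.exit("Connection error: %s" % err)
```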
I can easily see where I’d want to develop some new games with the robot, like lowering reaction time (it gets really hard under about three quarters of a second to tap that darn cube) and creating cube-to-cube sequences of light. I’d also love to discover whether I can extend detection to some leftovers my son brought home from our library’s 3D printer reject bin.
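A reaction-time game needs little more than a tap wait with a shrinking time limit. This sketch uses my wrapper’s vocabulary plus a hypothetical timeout parameter on waitForTap that the current proof of concept doesn’t expose yet:

```python
from Cozmo import *
from Colors import *

def actions(cozmoLink):
    '''Tap the lit cube before time runs out; each round gets faster.'''
    coz = Cozmo.robot(cozmoLink)
    cube = coz.cube(0)

    timeLimit = 3.0
    while timeLimit > 0.75:
        cube.setColor(colorLime)
        coz.say("Tap it")
        # Hypothetical: waitForTap with a timeout, returning False on a miss
        if not cube.waitForTap(timeout = timeLimit):
            coz.say("Too slow")
            break
        coz.say("Nice")
        timeLimit = timeLimit * 0.8   # shrink the window each round
    cube.switchOff()

Cozmo.startUp(actions)
```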
Cozmo does not offer a voice input SDK. Its only real ways to interact are through its cameras (and vision system) and through taps on its cubes. Even so, that’s a pretty rich basis for crafting new ways to interact.
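For example, the raw SDK can block on a face observation just as easily as on a cube tap, which is plenty for simple greet-and-respond interactions. As before, the call names here are from memory, so verify them against the documentation:

```python
import asyncio
import sys
import cozmo

def run(sdk_conn):
    '''Greet the first face Cozmo sees within 30 seconds.'''
    robot = sdk_conn.wait_for_robot()
    robot.say_text("Show me a face").wait_for_completed()
    try:
        # Blocks until the vision system reports a face, or the timeout passes
        robot.world.wait_for_observed_face(timeout=30)
        robot.say_text("Hello there").wait_for_completed()
    except asyncio.TimeoutError:
        robot.say_text("I did not see anyone").wait_for_completed()

if __name__ == '__main__':
    cozmo.setup_basic_logging()
    try:
        cozmo.connect(run)
    except cozmo.ConnectionError as err:
        sys.exit("Connection error: %s" % err)
```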
As for Anki’s built-ins, they’re extensive. Cozmo can flip cubes, pull wheelies, and interact in a respectably rich range of physical and (via its face screen) emotional ways.
Even if you’re not programming the system, it’s a delightful toy. Add in the SDK, though, and there’s a fantastic basis for learning.
- Ordering Cozmo: anki.com
- SDK Home: https://developer.anki.com/en-us
- Developer forums: https://forums.anki.com
- SDK Documentation: http://cozmosdk.anki.com/docs/