Archive for the ‘How To’ Category

Carrying user-sourced code forward in Swift Playgrounds for iOS

Had a really neat challenge today, as a Slack-buddy attempted to work with Apple’s exquisitely insufficient Playground Book documentation. His goal was simple: he wanted to be able to incrementally grow and test code from page to page, copying the user’s work as they moved on.

In theory, Swift Playgrounds for iOS enables you to build books where your reader/student incrementally builds code. Each page introduces a new concept, a new tweak, or a new approach. It’s a great way to layer each lesson on a previous take, or to take one lesson and branch it out to multiple endpoints.

You can either build, build, build to one big story or take one core concept and show many different ways to apply it. Either way, you want to be able to bring code — whether from the most recently edited page or from a shared core page — forward, so the reader/student can further engage with it, edit it, and make it fit with each page’s challenge.

Implementing “code forwarding” (I just made up that term) proved trickier than expected. Playground book workflow is often “understood” (that is, you develop a deep sense of what’s required because you’ve worked with it a lot, poked around at Apple’s examples, or reverse-engineered things to see how they work) rather than explained step by step in the official docs. Because of this, he ran into several roadblocks along the way.

  • First among these is that Apple does not provide a Playgrounds Book authoring tool for the Mac. You have to build your books by hand, going through the specs and hoping that each iteration works. Most of the time it does. Sometimes, maddeningly, it does not.
  • Second, you have to transfer the book to the iPad for each test (I use AirDrop™) and guess at what went wrong if it doesn’t work. When testing a series of book-based exercises, you have to either hard-code each “success” sequence (and there’s no way to set an “I am debugging/developing this book” flag) or actually do the coding, which can take a lot of time, especially if you’re debugging page 7 and have to work through the exercises on pages 1–6 for every test of page 7.
  • Finally, if your book is even a little out of spec with what Swift Playgrounds for iOS expects, it’s going to die without much feedback or explanation, leaving you scratching your head, cursing Dev Tools (but we really love you guys, we do), or otherwise venting frustration.

I burned a bunch of hours because I wanted to make this work, and I finally managed to get things working to my satisfaction. I thought I’d take the time to write things up to save you some bother. Here are a few things I learned while deep-diving into today’s experiments.

The Zen of User Code

Code-forwarding allows you to propagate user-sourced or user-edited code from an earlier page in a playground book to a later page. When you code-forward from a “source” to a “destination”, Swift Playgrounds for iOS makes a copy of the earlier code and places it onto the current page.

That code is copied once, and you aren’t given the option to re-copy unless you reset the page. (Every page in Swift Playgrounds offers a reset option in the ellipsis menu, but its discoverability is low.) Apple expects each reader/student to work through exercises linearly, progressing only when each previous problem is solved. This means you don’t get “live” updates by popping back and making new changes to the source page. The destination copies once.

That also means you cannot apply code-forwarding until the source page is marked as complete. By “complete”, I mean that the book’s source code and Swift Playgrounds accept that the reader/student has done sufficient work to move forward and progress to the next page/exercise.

This usually happens by executing a page epilogue. The epilogue tests the state of the page’s data, determines whether the problem was solved (for example, whether the robot reached the end square and execution progressed to a hidden portion of the page containing this test), and then updates a user assessment. Unless a page’s assessment status is “passed” (that is, done), the reader/student is not offered code copying on the following page.

This is built into Swift Playgrounds for iOS and is an underlying assumption about how progressive learning plans operate. It’s a critical pathway for building page-by-page progress and enabling code-forwarding. This is why the following snippet includes its bit of hidden code, which allows a user to pass (that is, receive a passing grade/assessment for the page) without doing any more work than running the current page:

//#-copy-source(id1)
//#-editable-code 
func foo() {
    // ... starter code here
}
//#-end-editable-code
//#-end-copy-source

//#-hidden-code
import PlaygroundSupport
PlaygroundPage.current.assessmentStatus = 
    .pass(message: "Great job!")
//#-end-hidden-code

Building Code-Forwarding

The preceding code incorporates two essential parts of using a copy-source markup area.

  • First is the actual tagged copy-source code. This delimits the code that gets copied forward to one or more other pages. Make sure to mark it editable when you want to present a challenge requiring end-user/reader modification. You can omit the editable tags when you want the next step or branches to start with code you source yourself. It’s an unusual approach, but it’s not illegal to do so.
  • Second is the hidden assessment update. Normally you’d use more sophisticated logic to determine whether a reader/student has met the challenges laid out on the current page before allowing them to .pass or .fail. When you just want to demonstrate core functionality, make it clear in your marked-up write-up that the user must run the code before continuing, and use the approach in this snippet to “pass on first run”. You’ll probably want to update the message to something along the lines of “Great! You’ve seen this work. Move to the next page to start making changes.”

Building The Destination

Crafting a destination page is trickier than laying out a copy-source area: you must update your page’s manifest as well as its content source. The manifest expects properly internationalized strings. That means, at a minimum, a code-receiving page needs a Contents.swift file, a Manifest.plist file, and a PrivateResources folder with at least one localized lproj folder (in my case, en.lproj), which in turn holds the ManifestPlist.strings file.
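Since there’s no authoring tool, it helps to stamp out that skeleton with a short script so you don’t forget a file. Here’s a rough sketch in Python; the page name and the empty stub files are just placeholders, not part of any official format:

import pathlib

# Rough scaffold for a code-receiving page. The name is an example;
# real pages live inside a chapter's Pages folder in the book bundle.
page = pathlib.Path("Page2.playgroundpage")
strings_dir = page / "PrivateResources" / "en.lproj"
strings_dir.mkdir(parents=True, exist_ok=True)

# Stub out the three files the page needs; fill them with the
# destination markup, manifest, and localized strings shown below.
(page / "Contents.swift").touch()
(page / "Manifest.plist").touch()
(strings_dir / "ManifestPlist.strings").touch()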

Here’s what a simple manifest looks like for a copying destination. Keep in mind that each value entry for the CodeCopySetup keys is actually a placeholder for localization.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<key>CodeCopySetup</key>
	<dict>
		<key>CopyCommandButtonTitle</key>
		<string>CopyCommandButtonTitle</string>
		<key>DefaultCommandButtonTitle</key>
		<string>DefaultCommandButtonTitle</string>
		<key>NavigateCommandButtonTitle</key>
		<string>NavigateCommandButtonTitle</string>
		<key>NotReadyToCopyInstructions</key>
		<string>NotReadyToCopyInstructions</string>
		<key>ReadyToCopyInstructions</key>
		<string>ReadyToCopyInstructions</string>
	</dict>
	<key>Description</key>
	<string>Description</string>
	<key>LiveViewEdgeToEdge</key>
	<true/>
	<key>LiveViewMode</key>
	<string>VisibleByDefault</string>
	<key>MaximumSupportedExecutionSpeed</key>
	<string>Fastest</string>
	<key>Name</key>
	<string>Name</string>
	<key>PlaygroundLoggingMode</key>
	<string>Off</string>
	<key>Version</key>
	<string>1.0</string>
</dict>
</plist>

I followed Apple’s example in my ManifestPlist.strings file, so the English expressions aren’t terribly exciting. The Name field used in the manifest is spelled out in addition to the button text:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<key>CopyCommandButtonTitle</key>
	<string>Copy My Code From the Last Page</string>
	<key>DefaultCommandButtonTitle</key>
	<string>Start Coding on This Page</string>
	<key>Name</key>
	<string>Copying Text from the Previous Page</string>
	<key>NavigateCommandButtonTitle</key>
	<string>Return to Previous Page</string>
	<key>NotReadyToCopyInstructions</key>
	<string>Be sure to complete the previous page before you move on to solving the next step.</string>
	<key>ReadyToCopyInstructions</key>
	<string>You can bring over your algorithm from the previous page to continue improving it.</string>
</dict>
</plist>

Here, each possible assessment state and action is given a human-readable form. I was unable to make the system “default” to the items mentioned in the Playground Page Manifest documentation (such as “Copy my code” and “Start with provided code”). I’m sure that if I tried hard enough I could have gotten this working per the docs, but I didn’t have the time to push.

In building the code-destination contents, link each identifier you used in the source (it’s id1 in this example, but it can be any key you want to use) with the page to copy from (Page1). This page name comes from the name of the .playgroundpage bundle hosting the user-edited or user-sourced content.

You must mention the page because you may keep enhancing the same progression of code from page to page while using a single identifier. If you start on page 1 and update on page 2, then when you get to page 3 you want to copy from the updated source on page 2, not page 1. Specifying which source you want to copy helps keep you and the user on track.

//#-editable-code
//#-copy-destination("Page1", id1)

//#-end-copy-destination
//#-end-editable-code

If you’re using a branching storyline (for example, you might explore variations on a sort or showcase different blending modes for merging images), you can place this destination code on each branch page.

More often, you’ll want to progressively modify code through a series of exercises. To carry the code further, add copy-source tags around the destination as in the following code, using the same id1 identifier, and refer to #-copy-destination("Page2", id1) for the next copy on Page 3, and so forth. Read this directive as: this is the copy destination for the code tagged with id1, sourced from Page 2.

Here’s what an edit-and-carry approach looks like for a second page, referring back to ("Page1", id1). In my imagination, this is the first time code has been copied and this markup sets up a user-editable progression that can be carried to the third page and beyond.

//#-copy-source(id1)
//#-editable-code
//#-copy-destination("Page1", id1)

//#-end-copy-destination
//#-end-editable-code
//#-end-copy-source

That’s pretty much all you need: proper tags, proper localized strings, and proper id/page references. If you’d like to try out my playground, you can grab a copy from here, or email me for one if that doesn’t work.

Fixing Mail Plugins for High Sierra

Are all your flagged emails back? Do you want them gone? Mail plugins help make macOS Mail a bit less awful. If your Mail bundles stopped working, it’s not hard to get them back up and running. The secret lies in adding a `Supported10.13PluginCompatibilityUUIDs` entry to each Mail plugin bundle’s internal Info.plist file. You can then move the plugin back from the disabled folder to the main plugin folder.

The overall technique changed in 10.12, which requires a separate, OS-specific entry. You can pull Mail’s current compatibility key by issuing:

defaults read /Applications/Mail.app/Contents/Info.plist PluginCompatibilityUUID

This reads the compatibility UUID and prints it to the terminal. You need this UUID to edit each plugin’s Info.plist file. Starting in 10.12, the key name incorporates the OS version in XX.YY format. For example, here’s what the 10.13 version looks like:

<key>Supported10.13PluginCompatibilityUUIDs</key>
<array>
	<string>CompatibilityKeyHere</string>
</array>
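If you’d rather not hand-edit XML, the same two steps can be scripted with Python’s standard plistlib module. This is just a sketch: the plugin path below is an example, and you may need sudo (or to copy the bundle somewhere writable) to modify it.

import plistlib

# Read Mail's current compatibility UUID (the same value `defaults read` prints).
with open("/Applications/Mail.app/Contents/Info.plist", "rb") as f:
    uuid = plistlib.load(f)["PluginCompatibilityUUID"]

# Example path; point this at your actual plugin bundle.
plugin_info = "/Library/Mail/Bundles/SomePlugin.mailbundle/Contents/Info.plist"

with open(plugin_info, "rb") as f:
    info = plistlib.load(f)

# Add the OS-specific key so 10.13's Mail accepts the bundle.
info.setdefault("Supported10.13PluginCompatibilityUUIDs", []).append(uuid)

with open(plugin_info, "wb") as f:
    plistlib.dump(info, f)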

Once you’ve done this, move the bundle back to the active plugin folder and restart Mail. The bundle should, with luck, work again.

Simulating a second finger during drag

You can drag and drop in the iOS Simulator by clicking and holding an item. The item “pops”, and you can then drag it to a destination. Today, an Apple engineer shared a neat way to free up a “second finger” during this process.

Pause and press the Control key. This pins the item mid-drag, enabling you to use the Mac cursor as another touch. You can then retrieve your drag by grabbing the paused item and concluding your drop.

Grepping for parentheses

Just because I had to do this today and thought I’d share. Either use the -E option, for example,

grep -E "let \(" */*.swift

or use egrep directly:

egrep "let \(" */*.swift

Both egrep and grep -E use extended regular expressions, allowing you to use a simple backslash rather than trying to get your shell to cooperate with hyper-escaping. You can use the same approach with sed as well:

echo "(hi)" | sed -E "s/\(/[/"

I hope this helps someone.

How To: Finding your iTunes Connect Vendor ID number

Apple sent me an email:

Apple was recently notified that your bank account information has been manually corrected by the processing bank for iTunes payments to vendor [my personal vendor id number]. Unfortunately Apple cannot continue processing payments with the current banking details provided in iTunes Connect. To avoid unnecessary payment delays or returns, please log into iTunes Connect and provide the correct account information in Agreements, Tax, and Banking. For instructions on how to update banking, see ‘How do I add or edit a new bank account?’ in iTunes Connect Resources and Help. After the banking changes have been entered in our systems, payments will resume.

Oh shiny.

I pull out my latest statement and off I go to update my details. Here’s my problem. I don’t know which vendor account Apple is referring to. Like many developers, I use multiple iTC accounts. When I log into each one, I cannot find the vendor ID. Argh.

To save you lots of time, do this:

  • Log into any iTC account.
  • Manually navigate to https://reportingitc2.apple.com/reports.html
  • The vendor ID is listed next to your name. You should not have to re-authenticate when hopping from iTC to this page.

This helped me figure out which account the email was referring to, and I was able to update my info.

To update a direct deposit routing number, you need to hop into Agreements, Tax, and Banking. Click the Bank Info > View button. You cannot edit the routing number in your existing Current Bank Account. Instead, you need to Add Bank Account with the same details and the new routing number, then select that as your new account. Once you do, it takes 24–48 hours for the change to go through, and you cannot make further edits during that time.

MacBook Pros and External Displays

Today, I hooked a newly purchased display to my MBP. (Looks like they’re out of stock right now, but it was $80 for 24″ when I bought it last week.) This isn’t intended to be my display. It’s replacing an old 14″ monitor for a kid. I thought I’d just steal it now and then during the day. It’s extremely lightweight and easy to move between rooms.

What I didn’t expect was how awful the text looked on it. I hooked up the monitor to the MBP using my Apple TV HDMI cable. The text was unreadable. I use similar TV-style monitors for my main system and they display text just fine. However, I’m using normal display ports and cables for my mini. This is the first time I’ve gone HDMI direct.

So off to web search I went. Sure enough, this is a known, longstanding problem that many people have dealt with before. The MBP sees the TV as a TV, not a monitor, and produces a YUV signal instead of the RGB signal that keeps text crisp. Pictures look pretty; text looks bad.

All the searches led to this Ruby script. The script builds a display override file containing a vendor and product ID with 4:4:4 RGB color support. The trick lies in getting macOS to install, read, and use it properly, because you can’t install the file directly to /System/Library/Displays/Contents/Resources/Overrides/ in modern macOS. Instead, you have to disable “rootless”.

I wasn’t really happy about going into recovery mode. Disabling System Integrity Protection feels like overkill for a simple display issue. But it worked. It really only took a few minutes to resolve once I convinced myself it was worth doing. If you have any warnings and cautions about installing custom display overrides, please let me know. It feels like I did something morally wrong, even if it did fix my problem.

My external display went from being unusable to merely imperfect. The text is still a bit blurry, but you can read it without inducing a migraine. It’s not nearly as crisp as my usual display-port connections (which look fine with this monitor), but I don’t have to buy a new cable, and I don’t plan to use this setup much.

If I were going to use this monitor regularly with the MBP, I’d definitely purchase a proper cable. As it is, I’m happy enough to have found a workable-ish solution. The monitor is quite nice, especially in “shop mode”, and has so far worked well with Chromecast, Apple TV, and Wii.

Xcode Tricks: Adding Keyboard Shortcuts

One of the great things about Xcode is that you can add custom keyboard shortcuts for just about any command. Two favorites enable me to toggle playground markup on and off and to run a playground on demand.

Adding shortcuts lets me run these common tasks without lifting my hands from the keyboard and selecting items from a menu. It’s a big time saver.

You establish shortcuts in Xcode > Preferences > Key Bindings. Search for the command name using the field at the top right, then double-click in the Key field and type the key (or key chord) you want to use.

Click away from the field after entering your selection. Don’t press Return, or Xcode will interpret that as your requested key entry.

A red exclamation point indicates binding conflicts.

Click the red icon to jump to the Conflicts tab, which lists all keybinding issues.

Red conflicts indicate unresolvable overlaps. Yellow conflicts mean that keyboard shortcuts established in System Preferences may override the built-in bindings.

Activate the edit field by double-clicking. Either replace the key binding or click the small gray circle with a minus at the right to remove the assigned key. Once resolved, conflict items leave the list.

The Customized tab lists all user-adjusted key bindings. This tab enables you to review your changes in one place.

This screenshot shows both the custom F8 binding for “Execute Playground” and the customization for “Step Out” on the debug menu, which normally uses F8. I removed that to resolve the overlap, preferring F8 for playgrounds.

To revert changes, select one or more items you’ve customized and click Delete. If you want to try this out, it’s safe: deleting key binding customizations goes onto Xcode’s undo stack, so you can undo the reversion to recover any customization you may have accidentally removed.

Swift Style vs ProseLint: The Smackdown

ProseLint is great. As I’m writing a book about style and linting, it’s natural to try to lint the book that lints your programming. In using this tool, I’ve encountered some amusing “lint fails” that I thought I’d share.

ProseLint vs Nil Coalescing: “hyperbolic.misc ‘`??`’ is hyperbolic.” Winner: Swift Style.

ProseLint vs discussion of Forced Unwrapping: “leonard.exclamation.30ppm More than 30 ppm of exclamations. Keep them under control.” Winner: ProseLint. Any forced unwrapping, even in a discussion about forced unwrapping, is an obvious fail. Save the kittens, drop the !’s.

ProseLint vs Meaningful Variable Names: “typography.symbols.multiplication_symbol Use the multiplication symbol ×, not the letter x” Winner: Swift Style. As Freud said, sometimes a letter “x” is just a letter “x”. (Or was that Groucho Marx? I forget.)

ProseLint vs “Use American English Spelling” rule: “consistency.spelling Inconsistent spelling of ‘color’ (vs. ‘colour’)” Winner: Swift Style. When writing for a global audience, prefer “color” to “colour”. (See? I did it again. — B. Spears)

Winner? Forget the points. It’s ProseLint. This summary doesn’t include the many great catches it made that I then fixed, like excessive use of “very”, repeated words, and so on. Great tool; check it out.
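If you want to give it a spin on your own drafts, it’s easy to script. Here’s a minimal sketch; the chapter filename is just an example, and it assumes ProseLint is installed (e.g. via pip install proselint):

import subprocess

# Run the proselint command-line tool over a draft chapter and
# print whatever issues it reports.
result = subprocess.run(
    ["proselint", "chapter-01.md"],
    capture_output=True, text=True)
print(result.stdout)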

Apple TV, Home Sharing, and Missing Movies

I rented Hunt for the Wilderpeople last week, while it was the $0.99 featured rental. I’d heard good things about this Kiwi movie (I’m a bit of a kiwiholic) and couldn’t wait to watch it.


So today, with a draft of Swift Style pushed up to Pragmatic, I thought I’d set it up for a nice family watch tonight. I opened the Computers > Rentals section on Apple TV, and my rental was nowhere to be found.

I wasted about 20 minutes googling things like “why doesn’t my rental show up on my Apple TV” and checking my iTunes accounts and home sharing setup, when I suddenly remembered this had happened to me before.

With that spark of inspiration lingering in my mind, I went to iTunes on my computer (where I had rented it) and, sure enough, it was still up in the cloud. I clicked the download button and got it down to my computer.

About 1.41 gigabytes later (and several pause/resumes when the download speed got slow; seriously, at one point the ETA jumped from over 40 minutes to under 3), I returned to Apple TV and hopped into my home-sharing library.


Tada.

So if you’re looking for a lost movie, or you can’t find your rental on Apple TV, and you rented it on your home computer, make sure you’ve downloaded it from the cloud before attempting to play it from the ATV.

Programming Cozmo

Anki has been kind enough to let me play with their new Cozmo unit and explore their SDK. Cozmo is a wonderful device, developed by people who understand a lot of core principles about human interaction and engagement.

Cozmo is adorable. When it recognizes your face, it wriggles with happiness. It explores its environment. When it’s bored, it sets up a game to play with you. It can get “upset” and demand attention. It’s one of the most personable and delightful robots I’ve played with.

At its heart is a well-chosen collection of minimal elements. The unit can move around the room on its 4-wheel/2-tread system. It includes an onboard forklift that can rise and fall, an OLED “face” that expresses emotion, and a camera that ties into a computer vision system, which I believed was based on PIL, the Python Imaging Library. (Anki tells me that Cozmo’s vision system “does not use PIL or Python in any way, though the Python SDK interface uses PIL for decoding jpegs, drawing animations, etc.”)

Three lightweight blocks with easily identified markings, which Cozmo can tap, lift, stack, and roll, complete the package.

Between its remarkable cuteness and its vision-based API, it’s a perfect system for introducing kids to programming. I was really excited to jump into the SDK and see how far I could push it.

Here is Anki’s “Hello World” code (more or less, I’ve tweaked it a little) from their first developer tutorial:

import sys
import cozmo

'''
Hello Human
Make Cozmo say 'Hello Human' in this simple
Cozmo SDK example program.
'''

def run(sdk_conn):
    robot = sdk_conn.wait_for_robot()
    robot.say_text("Hello Human").wait_for_completed()
    print("Success")

if __name__ == '__main__':
    cozmo.setup_basic_logging()    
    try:
        cozmo.connect(run)
    except cozmo.ConnectionError as err:
        sys.exit("Connection error ????: %s" % err)

Although simple, this “Hello World” includes quite a lot of implementation details that can scare off young learners. For comparison, here’s the start of Apple’s tutorial on Swift “Learn to Code”:

[Screenshot: the opening page of Apple’s “Learn to Code” tutorial]

There’s such a huge difference here. In Apple’s case, everything that Byte (the main character) does is limited to easy-to-understand, simple calls. The entire implementation is abstracted away, and all that’s left are instructions and very directed calls, which the student can put together, re-order, and explore with immediate feedback.

In Anki’s code, you’re presented with material dealing with setup, exceptions, asynchronous calls, and more. That is a huge amount of information to put in front of a learner and then say, “ignore all of this”. Cozmo is underserved by this approach. Real-life robots are always going to be a lot more fun to work with than on-screen animations. Cozmo deserves as simple a vocabulary as Byte’s. That difference set me on the road to creating a proof of concept.

In this effort, I’ve tried to develop a more engaging system of interaction that better mirrors the way kids learn. By creating high-level abstractions, I wanted to support the same kind of learning as “Learn to Code”, which begins with procedural calls, then adds conditionals, and moves on to iteration, functional abstraction, and so forth.

My yardstick of success has been, “Can my son use these building blocks to express goals and master basic procedural and conditional code?” (I haven’t gotten him up to iteration yet.) So far, so good, actually. Here is what my updated “Hello World” looks like for Cozmo, after creating a more structured entry into robot control functionality:

from Cozmo import *

# run, cozmo, run
def actions(cozmoLink):
    '''Specify actions for cozmo to run.'''
    
    # Fetch robot
    coz = Cozmo.robot(cozmoLink)

    # Say something
    coz.say("Hello Human")

Cozmo.startUp(actions)

Not quite as clean as “Learn to Code”, but I think it’s a vast improvement on the original. Calls now go through a central Cozmo class. I’ve chunked together common behavior, and I’ve abstracted away most implementation details, which are not of immediate interest to a student learner.
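The actual wrapper lives in the GitHub repo linked below, but to give a sense of the approach, here’s a simplified, illustrative sketch of how such a facade might wrap the SDK calls from the first listing. The details here are mine, for illustration only, not the shipping code:

import sys
import cozmo

class Cozmo:
    '''A thin, student-friendly facade over the Cozmo SDK (sketch only).'''

    def __init__(self, robot):
        self._robot = robot

    @classmethod
    def robot(cls, cozmoLink):
        # Hide the "wait for the robot" handshake from the student.
        return cls(cozmoLink.wait_for_robot())

    def say(self, text):
        # Speak and block until finished, so actions read top to bottom.
        self._robot.say_text(text).wait_for_completed()

    @staticmethod
    def startUp(actions):
        # Hide logging setup, connection, and error handling.
        cozmo.setup_basic_logging()
        try:
            cozmo.connect(actions)
        except cozmo.ConnectionError as err:
            sys.exit("Connection error: %s" % err)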

Although I haven’t had the time to really take this as far as I want, my Cozmo system can now talk, drive, turn, and engage (a little) with light cubes. What follows is a slightly more involved example. Cozmo runs several actions in sequence, and then conditionally responds to an interaction:

from Cozmo import *
from Colors import *

# Run, Cozmo, run
def actions(cozmoLink):
    '''Specify actions for cozmo to run.'''
    
    # Fetch robot
    coz = Cozmo.robot(cozmoLink)

    # Say something
    coz.say("Hello")

    # Drive a little
    coz.drive(time = 3, direction = Direction.forward)
    
    # Turn
    coz.turn(degrees = 180)
    
    # Drive a little more
    coz.drive(time = 3, direction = Direction.forward)

    # Light up a cube
    cube = coz.cube(0)
    cube.setColor(colorLime)
    
    # Tap it!
    coz.say("Tap it")    
    if cube.waitForTap():
        coz.say("You tapped it")
    else:
        coz.say("Why no tap?")
    cube.switchOff()

Cozmo.startUp(actions)

And here is a video showing Cozmo executing this code:

If you’d like to explore this a little further:

  • Here is a video showing the SDK feedback during that execution. You can see how the commands translate to base Cozmo directives.
  • I’ve left a bit of source code over at GitHub if you have a Cozmo or are just interested in my approach.

As you might expect, creating a usable, student-focused learning system is time-consuming and exhausting. Beyond providing controlled functionality, what’s missing here is a lesson plan and a list of skills to master, framed as “Let’s learn Python with Cozmo”. What’s here is just a sense of how that functionality might look when directed into more manageable chunks.

Given my time frame, I’ve focused more on “can this device be made student friendly” than on producing an actual product. I believe my proof of concept shows that the right kind of engagement can support this kind of learning with a real-world robot.

The thing that has appealed to me most about Cozmo from the start is its rich computer vision capabilities. What I haven’t had a chance to really touch on yet are its high-level features like “search for a cube” and “lift it and place it on another cube”, all of which are provided as building blocks in its existing API, and all of which make terrific touch points for a lesson plan.

I can easily see where I’d want to develop some new games with the robot, like lowering reaction time (it gets really hard under about three quarters of a second to tap that darn cube) and creating cube-to-cube sequences of light. I’d also love to discover whether I can extend detection to some leftovers my son brought home from our library’s 3D printer reject bin.

Cozmo does not offer a voice input SDK. Its only real ways to interact are through its cameras (and vision system) and through taps on its cubes. Even so, there’s a pretty rich basis for crafting new ways to interact.

As for Anki’s built-ins, they’re quite rich. Cozmo can flip cubes, pull wheelies, and interact in a respectably rich range of physical and (via its face screen) emotional ways.

Even if you’re not programming the system, it’s a delightful toy. Add in the SDK though, and there’s a fantastic basis for learning.