Today, I hooked a newly purchased display to my MBP. (Looks like they’re out of stock right now, but it was $80 for 24″ when I bought it last week.) This isn’t intended to be my display. It’s replacing an old 14″ monitor for a kid. I thought I’d just steal it now and then during the day. It’s extremely lightweight and easy to move between rooms.
What I didn’t expect was how awful the text looked on it. I hooked up the monitor to the MBP using my Apple TV’s HDMI cable, and the text was unreadable. I use similar TV-style monitors for my main system, and they display text just fine. However, those connect over regular DisplayPort cables to my mini. This is the first time I’ve gone HDMI direct.
So off to websearch I went. Sure enough, this is a known, longstanding problem that many people have dealt with before. The MBP sees the TV as a TV, not a monitor, and produces a YUV signal instead of the RGB signal that keeps text crisp. Pictures look pretty; text looks bad.
All my searches led to this Ruby script. The script builds a display override file containing a vendor and product ID with 4:4:4 RGB color support. The trick lies in getting macOS to install, read, and use it properly, because you can’t write the file directly to /System/Library/Displays/Contents/Resources/Overrides/ in modern macOS. Instead, you have to disable “rootless”.
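The gist of the approach, as I understand it: grab the display’s 128-byte EDID block, clear the bits that advertise YCbCr support so RGB 4:4:4 is the only encoding left, fix the checksum, and drop the result into an override plist keyed by vendor and product ID. Here’s a rough Python sketch of that idea. The byte offset, bit mask, and plist keys reflect my reading about the technique, not the Ruby script itself, so treat them as assumptions:

```python
import plistlib

def force_rgb(edid: bytes) -> bytes:
    '''Return a copy of a 128-byte EDID block that advertises RGB 4:4:4 only.'''
    patched = bytearray(edid)
    # Feature-support byte: bits 3-4 advertise YCbCr 4:4:4 / 4:2:2 support.
    # Clearing them leaves RGB as the only color encoding on offer.
    patched[24] &= ~0b11000
    # The final byte makes the whole block sum to 0 mod 256.
    patched[127] = (0x100 - sum(patched[:127]) % 0x100) % 0x100
    return bytes(patched)

def override_plist(vendor: int, product: int, edid: bytes) -> bytes:
    '''Build the contents of a DisplayProductID-XXXX override file.'''
    return plistlib.dumps({
        "DisplayProductName": "Forced RGB display",
        "DisplayVendorID": vendor,
        "DisplayProductID": product,
        "IODisplayEDID": force_rgb(edid),
    })
```

The resulting file would live under the Overrides folder, in a DisplayVendorID-(vendor) subfolder, as DisplayProductID-(product), which is exactly the write that requires disabling rootless.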
I wasn’t really happy about going into recovery mode. Disabling System Integrity Protection feels like overkill for a simple display issue. But it worked, and it really only took a few minutes to resolve once I convinced myself it was worth doing. If you have any warnings or cautions about installing custom display overrides, please let me know. It feels like I did something morally wrong, even if it did fix my problem.
My external display went from unusable to merely imperfect. The text is still a bit blurry, but you can read it without inducing a migraine. It’s not nearly as crisp as a regular DisplayPort connection (which looks fine with this monitor), but I don’t have to buy a new cable and I don’t plan to use this setup much.
If I were going to use this monitor regularly with the MBP, I’d definitely purchase a proper cable. As it is, I’m happy enough to have found a workable-ish solution. The monitor is quite nice especially in “shop mode”, and has so far worked well with Chromecast, AppleTV, and Wii.
Anki has been kind enough to let me play with their new Cozmo unit and explore their SDK. Cozmo is a wonderful device, developed by people who understand a lot of core principles about human interaction and engagement.
Cozmo is adorable. When it recognizes your face, it wriggles with happiness. It explores its environment. When it’s bored, it sets up a game to play with you. It can get “upset” and demand attention. It’s one of the most personable and delightful robots I’ve played with.
At its heart is a well-chosen collection of minimal elements. The unit can move around the room, with a 4-wheel/2-tread system. It includes an onboard forklift that can rise and fall, an OLED “face” that expresses emotion, and a camera system that ties into a computer vision system, which I believe is based on PIL, the Python Imaging Library. (Anki tells me that Cozmo’s vision system “does not use PIL or Python in any way, though the Python SDK interface uses PIL for decoding jpegs, drawing animations, etc.”)
Three lightweight blocks with easily identified markings, which Cozmo can tap, lift, stack, and roll, complete the Cozmo package.
Between its remarkable cuteness and its vision-based API, it’s a perfect system for introducing kids to programming. I was really excited to jump into the SDK and see how far I could push it.
Here is Anki’s “Hello World” code (more or less, I’ve tweaked it a little) from their first developer tutorial:
'''Make Cozmo say 'Hello Human' in this simple
Cozmo SDK example program.'''

import sys
import cozmo

def run(sdk_conn):
    robot = sdk_conn.wait_for_robot()
    robot.say_text("Hello Human").wait_for_completed()

if __name__ == '__main__':
    try:
        cozmo.connect(run)
    except cozmo.ConnectionError as err:
        sys.exit("Connection error 😬: %s" % err)
Although simple, this “Hello World” includes quite a lot of implementation details that can scare off young learners. For comparison, here’s the start of Apple’s tutorial on Swift “Learn to Code”:
There’s such a huge difference here. In Apple’s case, everything that Byte (the main character) does is limited to easy-to-understand, simple calls. The entire implementation is abstracted away, and all that’s left are instructions and very directed calls, which the student can put together, re-order, and explore with immediate feedback.
In Anki’s code, you’re presented with material that’s dealing with set-up, exceptions, asynchronous calls, and more. That is a huge amount of information to put in front of a learner, and to then say “ignore all of this”. Cozmo is underserved by this approach. Real life robots are always going to be a lot more fun to work with than on-screen animations. Cozmo deserved as simple a vocabulary as Byte. That difference set me on the road to create a proof of concept.
In this effort, I’ve tried to develop a more engaging system of interaction that better mirrors the way kids learn. By creating high-level abstractions, I wanted to support the same kind of learning as “Learn to Code”: begin with procedural calls, then conditional ones, then move on to iteration, functional abstraction, and so forth.
My yardstick of success has been, “can my son use these building blocks to express goals and master basic procedural and conditional code?” (I haven’t gotten him up to iteration yet.) So far, so good, actually. Here is what my updated “Hello World” looks like for Cozmo, after creating a more structured entry into robot control functionality:
from Cozmo import *

# run, cozmo, run
def actions(cozmoLink):
    '''Specify actions for cozmo to run.'''

    # Fetch robot
    coz = Cozmo.robot(cozmoLink)

    # Say something
    coz.say("Hello Human")

Cozmo.startUp(actions)
Not quite as clean as “Learn to Code”, but I think it’s a vast improvement on the original. Calls now go through a central Cozmo class. I’ve chunked together common behavior, and I’ve abstracted away most implementation details, which are not of immediate interest to a student learner.
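The core of the chunking trick is nothing fancy: pair each SDK action with its wait-for-completion, and hide the connection boilerplate behind one entry point. Here’s a minimal sketch of the pattern (my own names, not the actual class from the repo; the underlying SDK calls like wait_for_robot, say_text, wait_for_completed, and cozmo.connect are real):

```python
import sys

class Cozmo:
    '''Central access point: students see one class, not the SDK plumbing.'''

    def __init__(self, robot):
        self._robot = robot

    @classmethod
    def robot(cls, cozmoLink):
        # Block until the SDK connection hands back a robot, then wrap it.
        return cls(cozmoLink.wait_for_robot())

    def say(self, text):
        # Chunk "start the action" and "wait for it to finish" into one call.
        self._robot.say_text(text).wait_for_completed()

    @staticmethod
    def startUp(actions):
        # The connection and error boilerplate lives here, out of sight.
        import cozmo
        try:
            cozmo.connect(actions)
        except cozmo.ConnectionError as err:
            sys.exit("A connection error occurred: %s" % err)
```

With this in place, student code never touches sdk_conn, exceptions, or completion handles; it just calls coz.say(...) and moves on.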
Although I haven’t had the time to really take this as far as I want, my Cozmo system can now talk, drive, turn, and engage (a little) with light cubes. What follows is a slightly more involved example. Cozmo runs several actions in sequence, and then conditionally responds to an interaction:
from Cozmo import *
from Colors import *

# Run, Cozmo, run
def actions(cozmoLink):
    '''Specify actions for cozmo to run.'''

    # Fetch robot
    coz = Cozmo.robot(cozmoLink)

    # Say something
    coz.say("Hello")

    # Drive a little
    coz.drive(time = 3, direction = Direction.forward)
    coz.turn(degrees = 180)

    # Drive a little more
    coz.drive(time = 3, direction = Direction.forward)

    # Light up a cube
    cube = coz.cube(0)
    cube.setColor(colorLime)

    # Tap it!
    if cube.waitForTap(maxTime = 5):
        coz.say("You tapped it")
    else:
        coz.say("Why no tap?")
    cube.switchOff()

Cozmo.startUp(actions)
And here is a video showing Cozmo executing this code:
If you’d like to explore this a little further:
Here is a video showing the SDK feedback during that execution. You can see how the commands translate to base Cozmo directives.
I’ve left a bit of source code over at GitHub if you have a Cozmo or are just interested in my approach.
As you might expect, creating a usable student-focused learning system is time consuming and exhausting. Beyond the controlled functionality, what’s missing here is a lesson plan and a list of skills to master, framed as “Let’s learn Python with Cozmo”. What’s here is just a sense of how that functionality might look when organized into more manageable chunks.
Given my time frame, I’ve focused more on “can this device be made student friendly” than producing an actual product. I believe my proof of concept shows that the right kind of engagement can support this kind of learning with this real-world robot.
What has appealed most to me about Cozmo from the start is its rich computer vision capabilities. What I haven’t had a chance to really touch on yet are its high-level features like “search for a cube” and “lift it and place it on another cube”, all of which are provided as building blocks in its existing API, and all of which are terrific touch points for a lesson plan.
I can easily see where I’d want to develop some new games with the robot, like lowering reaction time (it gets really hard under about three quarters of a second to tap that darn cube) and creating cube-to-cube sequences of light. I’d also love to discover whether I can extend detection to some leftovers my son brought home from our library’s 3D printer reject bin.
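The shrinking-window part of that reaction game is easy to sketch against the same wrapper layer. To be clear, setColor, waitForTap, and switchOff are names from my hypothetical wrapper, not Anki’s SDK, and the three-quarter-second floor just reflects where tapping got hard for us:

```python
def shrinking_windows(start = 5.0, factor = 0.5, floor = 0.75, rounds = 10):
    '''Yield one tap window (in seconds) per round, decaying toward the floor.'''
    window = start
    for _ in range(rounds):
        yield window
        window = max(floor, window * factor)

def play(coz, cube, color, windows):
    '''Light the cube once per round; count taps that beat the window.'''
    score = 0
    for window in windows:
        cube.setColor(color)
        if cube.waitForTap(maxTime = window):
            score += 1
        cube.switchOff()
    coz.say("You scored %d" % score)
    return score
```

The same loop structure would work for cube-to-cube light sequences: swap the single setColor for a pattern across cubes and check taps in order.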
Cozmo does not offer a voice input SDK. Its only real ways to interact are through its cameras (and vision system) and through taps on its cubes. Even so, that’s a pretty rich basis for crafting new ways to interact.
As for Anki’s built-ins, they’re quite rich. Cozmo can flip cubes, pull wheelies, and interact in a respectably rich range of physical and (via its face screen) emotional ways.
Even if you’re not programming the system, it’s a delightful toy. Add in the SDK though, and there’s a fantastic basis for learning.
This is my Jeep stroller. It is probably the best stroller ever. It’s also the best shopping cart ever. It has cupholders. It has not just a pop-up glove compartment at the top but extra cargo bags on the left and right for small items.
The basket underneath is enormous. It carries tons of weight. The wheels are rugged and have been able to navigate through just about any terrain, including going to the local market through Colorado winters.
I can load a couple of 12-packs of sodas and carry about 6–8 other bags of groceries. There’s room above, there’s room below, and there are easy-to-tie-to handles. When I’m at the store, I can stick a basket in it, fill it, surround it, and put coats, gloves, hats, etc. below (or use that area for more groceries). It is, in short, the ultimate shopping machine.
Only one problem. My baby is now a few years away from starting to grow facial hair. Try to use this thing and you get side eye, hairy eyeball, every kind of “are you some kind of sick bag-lady with an old shopping cart stroller” look you can imagine.
There’s some kind of unspoken consensus that after our kids have grown, we must transition to granny carts. I used to own a granny cart when I was in college. It was fantastic. (4-wheel variety because two wheels and tilts are a sucker’s game.) I loved that cart for shopping, laundry, and so forth, but after using my Jeep stroller, there is no way I am going back to the granny cart.
I’m throwing this out there to my brain trust. How do I “de-baby” this cart so I can continue using it to lug massive quantities of various haulage without being a social pariah?
It was bad enough a few years ago when I could answer all the “so where’s your…baby…?” questions with “I’m on the way to pick him up from daycare/school/whatever.” But now? I don’t have that excuse.
There is no baby. There is only cart. It doesn’t have to be “cool”. I just don’t want members of the homeowners association to start calling the police about the crazy lady walking around with an empty baby stroller.
It was hard to miss the $35 Kindle Fire deal on Friday. Deep discounts extended across nearly all the Amazon Kindle product line but I limited myself to purchasing two units, augmenting the 2011-vintage unit we already had on-hand.
Our new tablets arrived yesterday, and they are definitely a step up in quality from the original line. They’re faster, the UI is cleaner, and the features are more extensive, with built-in cameras and a microphone. The units are not so different in weight, but they feel better made and more consumer-ready.
The new Fires are also more obviously, in-your-face, a marketing arm of Amazon and less a general-purpose tablet. That is hardly surprising for a $35 (shipped!) purchase, but it’s something that, as a parent, you have to be really cautious about. I quickly enabled parental controls (something I’ve never done on our iPads) and disabled insta-purchases.
One of the two Kindles is replacing a first generation iPad mini, which was lovingly purchased as “gently refurbished” before being dropped from a height of about three feet to its death, approximately five seconds (give or take a week) after its arrival. That mini replaced a 1st gen iPad, which since the mini’s untimely demise, has been back in service — gasping and wheezing and doing its best to keep up. The Kindle is no iPad mini but it has a role in our lives to fill.
Speaking as a parent, having a $35 alternative is a very good thing. I don’t really care that it doesn’t run all the same apps (or even very many of the good apps). It connects to the net, does email and web, and lets a child do most school-related tasks. It is acceptable.
We’ll see how the school transition goes. I suspect teachers will applaud the built-in book reading and condemn the onboard videos. (There’s also a music app but really who wants to spend time setting that up?) At the very least, this new tablet will probably work better and more reliably than the 1st generation iPad that’s currently being hauled to and from school every day. Fingers crossed.
As for the second Kindle Fire, well, that’s going to younger brother, who is currently trying to keep his Chromebook working. The 2012 Samsung Chromebook, although initially appealing, turned out to be one of the worst pieces of hardware we ever bought.
His all-Chromebook school agrees. They’re transitioning next year away from these cheaply made, unreliable pieces of…hardware…probably to iPads if they can get a deal/grant/whatever through the school district.
Every parent was required to purchase Chromebook insurance. We’ve paid twice for replacements, and this doesn’t count the 2012 Chromebook we personally bought out of pocket and liked so much for the first few months until it started to fail and fail and fail and fail.
Compare this to our 2011 Kindle Fire which other than a loose charging port is still working well and our 2010 iPad, which we’re abandoning only because it weighs about as much as a baby elephant and it cannot run new operating systems.
Amazon isn’t pushing the Kindle into the classroom the way Apple makes that connection. It’s a commerce machine, not an instrument of learning and expression. I may have to use side-loading to get classroom-specific Android software onto the boy’s new Kindle Fire. Last night, I got the technique down, just in case.
For $70 total shipped between the two tablets, it’s an experiment I’m happier to make than usual. Wish us luck. I know there will be more roadbumps than if we went the iPad route.
Middle child and I were at the dollar store earlier this week. It’s fall break and we were feeling antsy and rich, with dollars in our pockets and hours to kill. So we picked up one each of Crayola’s Color Studio HD+ and Light Marker products for a cool buck each (originally priced at $29.99 and co-branded with Griffin).
After returning home and putting these technologies to the test, we quickly figured out why they had been discounted down to a buck each.
They kind of suck.
The Light Marker app (free) uses your iPad’s onboard camera to look at a colored flashlight, letting your little artistic prodigy draw pictures from a foot or two away from the canvas. I’m not joking here. The child waves the flashlight in a dim or dark room, and with luck, manages to “draw” images to the screen.
It’s a terrible user experience and a terrible app.
However, it’s not nearly as bad as the unusable Crayola ColorStudio HD+ with stylus. This “stylus”, believe it or not, works by emitting a high pitched irritating pulsing beep, which the iPad tracks and triangulates to figure out where the “stylus” is on-screen. It also has a hideous color-changing light-show on the side of the “stylus”.
You have to push really hard to get the iPad to recognize the interaction. My daughter is way better at this than I am, and she drew the magnificent inspired art work at the top of the post.
Both apps are shortly going to be trashed.
As for the products, the Light Marker stylus is of moderate use in that it is, in fact, a flashlight, so can be used as a flashlight. The HD “stylus” will be in the trash shortly as it gave me a headache during use and its only good feature seems to be that it…no…never mind. It doesn’t have a good feature.
However, hidden within the packaging of the Light Marker is a damned fine iPad stand that we *loved*. Well worth that $1, this collapsible stand has rubberized footers, a solid build, and can not only be used with the intended iPad, but also pretty much every iDevice and Kindle we could throw at it — regardless of width and case. It folds down to almost nothing but is strong enough to throw into a backpack or purse.
I’m probably going to go back and buy a few more Light Marker packages because this stand is awesome.
To summarize: both products are crap, not worth $1, let alone $30. Given the ubiquity of $1 tablet styluses at the dollar store, it’s not as if they couldn’t have just packaged a decent toddler-appropriate stylus. This is a perfect example of people trying to be too clever and not at all practical when putting together a product.
However, if you have a Dollar Tree near you, head on over and buy some of these stands. I loved ’em.
Update: Compared to my beloved Two-Hands stands, this is nothing to write home about. It’s not nearly as stable, won’t move with the device when you pick it up, and can’t adjust the angle with exact precision, etc. BUT unlike the Two Hands, this cheapy stand can handle a Kindle (the Two Hands ends are too thin), thick cases (same problem), phones, and iPod touches. I’ve been using this all afternoon for plopping testing devices into. If you’re looking for the ultimate iPad stand, stick with the Two Hands. For throwing testing devices onto, this is great.
CUPERTINO, California — July 15, 2015 — Apple® today introduced the best iPod touch® yet and unveiled a new lineup of colors for all iPod® models, including space gray, silver, gold, pink and blue. The ultra-portable iPod touch features a new 8 megapixel iSight® camera for beautiful photos, an improved FaceTime® HD camera for even better selfies, the Apple-designed A8 chip with 10 times faster graphics performance for a more immersive gaming experience, and even better fitness tracking with the M8 motion coprocessor.
We own an iPod touch 5th gen that I picked up refurb for about $150 and it’s really fabulous. One of my favorite iOS purchases ever, and currently my primary development unit for iOS 9.
The new iPods with teen-friendly case colors (think metallic rather than iPhone 5C colored, plus the now-standard gold/silver/black) start from $199 at the Apple Store.
Both the nano and the shuffle also appear to include new finishes.
The newly introduced 12-inch MacBook confuses me. Yes, I want one. Yes, I can see them selling a lot of these to college students and on-the-go professionals; it has a great potential market. Where I’m confused is where this new MacBook fits in the product line. Is it the Air killer?
I was looking at a side-by-side spec-off this morning and noticing that while the display, battery life, and storage are up compared to a 13-inch MBA, the processor speeds are way down even with Turbo Boost. When Apple said it was taking lessons from mobile, it wasn’t kidding. The new MacBook takes the mobile crown from the MBA, complete with most of the touch points the Air had held.
In other words, what is the point of the MacBook Air anymore? Should they rename it the “MacBook Budget Alternative” instead?
The MacBook is more stylish, lighter, cleaner, with a better keyboard and a longer battery life. Its display kills the Air’s display. The processor downgrade is probably even a feature if you consider a lower load on the battery.
In other words, I’m beginning to think that the new MacBook exists to make anyone who buys the 2015 MBA feel very sad that they didn’t have the $300 or $400 to get the cuter model.