Controlling a Mac with Amazon Echo or Google Home

Asking Google Home or Amazon Echo to play any song you like from the Spotify catalog is extremely liberating—but until there is support for the Spotify Connect feature, these devices can only play back music on their own speakers, or on speakers to which they’re directly connected. I have AirPlay speakers all over the house and I would like to be able to tell Alexa or Google Assistant to play a given song in my living room or my kitchen or in my office—or any combination of those locations—but that’s just not possible today. This was the biggest of the complaints I wrote about last week in my comparison of Echo and Home.

Persistence and hackery can overcome almost any tech roadblock, though. Over the weekend I strung together a series of tools that allow me to issue voice commands to Google Home (which I prefer slightly) to select music on Spotify, then switch the playback to the Mac mini that sits at the heart of my home theater setup, and from there pipe music to the various AirPlay speakers in the house.

What’s more interesting is that the basics of this solution could theoretically allow you to control almost anything on your Mac via voice—and it’s incredibly easy to set up. It starts with the invaluable IFTTT service, which supports both Google Assistant and Alexa. You can define your own custom phrases to serve as IFTTT triggers, which can then generate simple text files on a cloud storage service like Dropbox, which can kick off automated routines on your Mac. This is what it looks like to set up the IFTTT component:

IFTTT Configuration

When you set Dropbox to sync the resulting text files to your Mac’s hard drive, the key is to sync them to directories that you’ve configured with macOS’s Folder Actions feature. Basically, when the text files are added to these folders, they act as triggers for Automator actions, which can do tons of stuff, including run AppleScript code—which in turn can do even more stuff. Once Automator is done, it can even clean up after itself by trashing the text file it used as a trigger, if you need.
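For the technically curious, the shape of that Folder Action can be sketched outside of Automator, too. What follows is a hypothetical Python stand-in, not my actual setup; the folder path, the polling and the handler are all assumptions made for illustration:

```python
import time
from pathlib import Path

def handle_trigger(path):
    """Stand-in for the Automator action; a real version might shell out
    to `osascript` here to run a bit of AppleScript."""
    print(f"Trigger fired: {path.name}")

def watch(trigger_dir, poll_seconds=2.0, max_polls=None):
    """Poll a Dropbox-synced folder for new .txt trigger files, run the
    handler on each one, then trash the file, just as Automator can."""
    polls = 0
    while max_polls is None or polls < max_polls:
        for path in sorted(Path(trigger_dir).glob("*.txt")):
            handle_trigger(path)
            path.unlink()  # clean up the trigger file after acting on it
        if max_polls is not None:
            polls += 1
        time.sleep(poll_seconds)
```

The real macOS Folder Actions feature is event-driven rather than polled, but the contract is the same: a file appears, an action runs, the file gets trashed.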

Automator Folder Action

This is the basic approach I use to enable Google Home to play music throughout my house, though there are more steps involved, and some janky workarounds. Using the Home’s built-in voice commands, I search for and play the songs I want, just as you would normally do. Then I speak a custom phrase like “Switch tunes to home theater” (most music playback-related phrases are reserved by Google Home, so you can’t just say “Ok Google, play music on my home theater”) to kick off an IFTTT applet. The applet saves a text file in Dropbox, which syncs to my Mac to a directory with a folder action attached to it. That action runs a bit of AppleScript to open Spotify and, through the brute force of replaying pre-recorded mouse locations and clicks (courtesy of an extremely unsexy app called Mac Auto Mouse Click), it opens the Spotify Connect menu and switches the playback device from Google Home to the Mac mini. Then finally (whew), Automator also tells Airfoil to redirect that music to a pre-defined set of AirPlay speakers. All of a sudden, the music I asked for in the kitchen is playing all over the house.
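Schematically, each spoken phrase becomes a trigger file, and each trigger file’s name selects one step in that chain. A minimal Python dispatcher in the same spirit might look like this; the file names and the AppleScript one-liners are hypothetical stand-ins (Spotify on the Mac does respond to basic AppleScript playback commands, but switching the Connect device is what requires the recorded-clicks hack):

```python
# Hypothetical trigger-file names mapped to the shell commands a Folder
# Action could run via `osascript`; illustrative, not my exact setup.
COMMANDS = {
    "switch-tunes.txt": "osascript -e 'tell application \"Spotify\" to activate'",
    "play-tunes.txt": "osascript -e 'tell application \"Spotify\" to play'",
    "pause-tunes.txt": "osascript -e 'tell application \"Spotify\" to pause'",
}

def command_for(trigger_name):
    """Look up the shell command for a trigger file; None if unrecognized."""
    return COMMANDS.get(trigger_name)
```

In the real pipeline the lookup would end with something like `subprocess.run(command, shell=True)`, but the mapping is the interesting part: one spoken phrase, one file name, one command.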

The setup is hardly elegant, to say the least. But the beauty of it is that it’s incredibly simple to put together, easy enough that I’ve created a series of similar commands to play and pause the music, jump back and forward, and so on. As I’ve said in the past, I’m not a programmer by any means—before this weekend I had never spent more than a few minutes in Automator or writing AppleScript—so the learning curve is shallow.

This kind of rudimentary but highly engaging automation is sure to become more and more central to consumers as voice-powered interfaces gain ground. So it’s all the more concerning that Apple parted ways with its longtime champion of automation products last fall, though perhaps there are other plans afoot to continue evolving the automation of Apple’s devices and software. Looking ahead at a future filled with these kinds of devices, as users we are only going to want the apps and services that we use to be more scriptable, more responsive to integrations. And we’ll want that ability to automate to be simple enough that we can put together the missing flows and actions that we want ourselves.



Four More Years


You can read about it in books as much as you want, but in parenting there are thresholds of knowledge you can only acquire by experiencing them firsthand. One of them is having twins. Ours turn four years old today—four years of craziness I never thought I had it in me to survive. But it’s been worth it, every day. When the boys were first born and my wife and I were basically in shock from the relentless grind of caring for two infants, another parent of twins told us, “At some point the clouds part and you can’t imagine it any other way.” That turned out to be so true. I can’t imagine life without this pair of irrepressible, unbelievable, unstoppable, wonderful little tykes. Happy birthday Lafayette and Thiebaud.



At Home with Echo and Home

Amazon Echo vs. Google Home

I’m sure I’m not the only person who got both an Amazon Echo and a Google Home over the holidays and is putting both of them through their paces. But I’m definitely the only person who got both and who writes on this blog, so here are my thoughts about the two.

My wife gave her parents an Amazon Echo for Christmas and the whole extended family played with it last week when we were visiting. What I saw was that, for both adults and kids encountering this kind of technology for the first time, it’s a blast to learn about, play with and explore the device’s capabilities. So much so that the Echo’s novelty just about eclipses how rough Alexa’s natural language processing still is.

The Echo’s a blast—so much so that its novelty eclipses its still very rough natural language processing.

Alexa is clearly able to understand your commands—and act on them—better than Siri is able to, but it doesn’t feel leagues better. In practice, it’s not uncommon to have to issue commands three, four, five or more times before Alexa understands what you’re trying to say—or until you learn the way Alexa wants you to say it. In fact, I found that the mere act of listening to other people of varying tech savviness negotiate with Alexa to get things done can be a frustrating experience. I bit my tongue several times as family members wrestled with slight variations on phrasing, emphasis and syntax—and then I went ahead and encountered exactly the same problems when uttering my own commands. Nevertheless, none of this dissuaded people from continuing to talk to Alexa. In the pantheon of this past Christmas season’s gifts, it was a hit.

At home we also received an Echo Dot as a present, a product that I think could be a home run. For just US$50, you get everything that the Echo does except for the higher quality speaker (which means Amazon is basically charging you US$130 for the full-fledged version’s speaker, when you think about it). At that price point, I could easily imagine having a Dot in each room of the house, which would make for a really powerful system.

After we got back to Brooklyn, I hooked up the Dot to my Todoist account. Task management has always been my top priority for any kind of A.I.-based assistant; one of my biggest complaints about technology is that it still hasn’t solved the use case in which a task occurs to me and, somewhere between having that thought and pulling out my phone, unlocking it and opening up Todoist, I’ve somehow forgotten entirely what it was I was trying to record. Few things are more frustrating.

Technically, Alexa does solve that problem, but not very elegantly. You can certainly ask it to record items for you, and if it understands what you’re saying (again, phrasing, emphasis and syntax are important, and assigning dates via voice can be a frustrating challenge) it will add them to both its Alexa-based to-do list and to Todoist itself. But it often misinterprets dates, and it’s difficult to edit or correct what it records. The end result is something less than a well-formed task, as it may require editing and/or tagging and filing on a screen. A friend of mine says he uses Alexa-plus-Todoist to merely capture what he said and then massages the data later; to me, that’s just one step up from a voice memo.

By comparison, Google Home and Todoist make for a moderately better pair. For one, Home lets you “talk” directly with Todoist—literally, a different voice takes over (that’s a meaningful change in the user interface that has the effect of making you feel that you’re interacting directly with the service, instead of through a middleman)—where Alexa merely syncs its tasks to the third-party app. Additionally, you can mark items as completed, add labels and move items to different projects. On the whole, this is a step up from Alexa and so I find it much more useful. But what I also discovered was that it’s relatively difficult to navigate a to-do list by voice; I have lots of items on my list at any given moment, and it’s time-consuming to go through them via audio. I also felt gun-shy about editing them or marking them completed, afraid that I would inadvertently mess up my tasks somehow. Maybe more experience with these systems will bring a greater level of confidence. As it stands, I don’t yet feel proficient getting things done by voice.

On the other hand, both systems do relatively well with lower stakes tasks, like playing music. In fact, having a virtually unlimited catalog of music, as you get with Spotify, that can be controlled by voice will probably, for most people, be the single most useful aspect of these devices. In the case of my in-laws, when we added a Spotify account to their Echo and they realized that you could ask for just about anything and the Echo would give it to them, the device seemed to become much more useful.

However, here I still have a complaint. Neither Echo nor Home is able to play music via Spotify Connect. Which is to say, as of today, they’re only capable of playing music through their own speakers (or the ones that they’re directly linked up to). I have a network of AirPlay speakers set up in my house, and what I would like to do is to tell Alexa or Google Assistant to play music on, say, the living room speaker, or in the kitchen and in my office at the same time. That’s currently not possible, though this capability will come to Sonos speakers sometime in 2017. Hopefully Spotify is not deferring a solution to this problem entirely to Sonos, as I’m on record for having no real use for Sonos in my house.

Echo and Home save a click, which is all people really want out of technology innovation.

A few more comparison notes: both devices include prominently placed buttons that allow you to turn off their listening capabilities instantly. As a somewhat privacy-minded tech consumer, this was important to me, but it didn’t take me long to realize that the buttons are essentially useless if you want to use the smart speaker, and the smart speaker is essentially useless if you want to use the button. It seems pretty unlikely that I’ll ever have the wherewithal to use this feature if I keep these devices in my home; it’s just not practical to try to remember to turn off the mic and also remember to turn it on later. The end result is that, like it or not, these devices are always listening. In fact, this always-on user experience is why these machines succeed where Siri doesn’t; they save a click, which is all people really want out of technology innovation.

And finally, design. The Google Home comes in a nicer package, is a much nicer form factor, and is just gorgeous. It’s a very handsome expression of impressive technology. That said, the Amazon Echo, and the Echo Dot, look like gadgets that you would buy at Bed Bath & Beyond, and are packaged exactly that way—and I don’t mean that pejoratively. An Echo looks like it belongs in your home, alongside your KitchenAid and your fancy coffee maker. An Echo is not a miniature totem of technological achievement, it’s an appliance that makes your life easier. That says a lot about the difference between these two.



Messin’ with Workflow

Workflow on iPad

Among other things, the holidays afford time for indulgences masquerading as productivity. For instance, I spent an inordinate amount of time playing around with Workflow, the surprisingly powerful automation app for iOS. It’s an elegant, easy-to-learn scripting platform that makes iOS’s famously buttoned-up ecosystem of native apps feel sufficiently pliable to just about any whim that might occur to you.

A few examples: I built a workflow (the app’s confusingly self-referential terminology for a script) to move screenshots from my iPad to Dropbox and delete them from my camera roll; one to scan my calendar for events on a specific day and share my availability with coworkers; and another to generate PDFs of just about anything and store them in a designated folder. All of these can be triggered with just a tap or two from a Workflow widget in my device’s Notifications Center.

I found Workflow to be particularly effective for blogging from my iPad. This is something I do regularly because I often travel with only a tablet. Before Workflow it required jumping through more hoops than doing the same task on my desktop or laptop. So I created one workflow to resize the canvas of images I post (like the one above) and another to upload them to my WordPress install while also saving a backup to Dropbox, again all with just a couple of taps.

Workflow is an incredibly well-considered scripting platform that’s still somewhat raw.

It’s true that none of my time-saving scripts amount to mind-bending breakthroughs in computing, but that doesn’t change the fact that Workflow can remove considerable friction from productivity routines on iOS. One notable bonus: the kind of complex, powerful workflows that really need the larger screen real estate of an iPad to piece together and test properly can be synced instantly to your iPhone, giving you that same complexity and power in the palm of your hand, on the go.

To be honest, I spent more time constructing these scripts than I’ll probably save in practice, at least measured in real minutes. But these were my first projects in Workflow so they were useful for learning how to use the app. More to the point, it was fun to do, too; Workflow makes experimentation (mostly) easy and straightforward, a huge selling point for non-programmers like myself.

For all its power though, Workflow is actually a curious mix of the polished and the raw. Its creators have delivered an incredibly well-considered scripting platform in terms of under-the-hood thinking and simplifying assumptions that make it a real pleasure to use. On the other hand, the app leaves you somewhat on your own as you explore it. There’s no undo, so an errant swipe could change parameters in your work before you know it, and you have to be conscientious about saving backup copies as your workflows evolve.

The app is also conspicuously missing robust, canonical documentation of its many concepts and capabilities. There are tips built into the interface, but they’re rudimentary, and if you need more depth the developer directs you to a subreddit. (You can also refer to a series of excellent episodes of Federico Viticci and Fraser Speirs’s Canvas podcast, which make for a thorough introduction to Workflow.) While you don’t need an engineering background to learn Workflow, those with even a passing grasp of how scripting works, along with a willingness to Google for answers when they’re stuck, will have a much easier time than true novices.

Ultimately it’s probably most accurate to say that Workflow is just a very young scripting environment, one that is off to a great start but has much growing to do. All other things being equal, I’d much rather have it in its current, somewhat threadbare state, a bit of a challenge but still tremendously useful, than be without it. It’s already an indispensable tool if you’re interested in going iPad-first or even iPad-only.



Tyrus Wong, Disney Artist, Dies at 106

The artist Tyrus Wong, who made major contributions to Walt Disney’s 1942 animated classic “Bambi,” passed away just before the end of the year. He was responsible for the gorgeously distinctive background paintings that made it a breakthrough in animation.

Painting for “Bambi” by Tyrus Wong

Wong’s obituary recounts his epic tale: as a Chinese-born child he was forced to endure withering government screening and trials to immigrate to the States; his father taught him to use a brush with just water because they could not afford ink; he earned a pittance in wages and endured the bigotry of low expectations while making his way in the arts; when he tried his hand at animation, he was assigned the “in between” work that was considered the trade’s lowest and most menial job; through sheer pluck he managed to convince Disney to hire him to create the work that has helped “Bambi” endure for decades, and yet he was still fired during an employee strike in which he did not take part; ultimately it took until the 1990s for him to win recognition for his seminal work. Despite all that, the man lived to be 106 years old before he passed on. Amazing.

Read the full obituary. Also, read about the retrospective exhibition of his work that was mounted last year.



“Rogue One” Cracks Open the Door to the Star Wars Universe

The “Star Wars” franchise is generally classified as science fiction, but for many years now it’s really been in the process of metamorphosing into a genre of its own. It’s hard to look at the last seven films and not notice that as a whole they have become increasingly, almost pathologically self-referential, governed by their own ever more solipsistic rules and conventions, preoccupied with burying the original trilogy further and further in useless proprietary trivia.

This momentum towards meaninglessness is what each new episode must contend with. Last year’s “The Force Awakens,” in its slavish devotion to recreating tropes and devices we’d seen in the franchise before, had the effect of making the vastness of space seem small, hermetic, and starving for possibilities.

In many ways “Rogue One,” the latest installment and the first “standalone” film, doesn’t quite escape these expectations. Throughout its two hours and thirteen minutes, it busies itself with conveying meanings that only the most ardent devotees of Star Wars will ever be able to decipher. It’s peppered with details and characters and allusions to not just its theatrical predecessors, but also to the television shows and novels and video games and toys that we’re all supposed to be buying so that we can enjoy the next movie, television show, novel, video game or toy.

“Rogue One”

Yet “Rogue One” also manages, somehow, to sneak in a real story in the midst of all that fan service. It’s not the most original story, or the most vividly rendered, but it’s a highly entertaining one that achieves a workable truce with the demands of its unique, Disney-owned form. It’s the tale of a rag-tag band of misfits, led by a conflicted protagonist, who attempt to steal a critical MacGuffin that could tip the balance in an exhausting war. Along the way you get distrust and scheming and then reversals of fortune and leaps of faith and heartening epiphanies, and some intensely choreographed shoot ’em ups and explosions too.

You’ve seen all of this before. It echoes many hallmarks of war movies and heist flicks—and that, maybe more than anything, is what makes this movie work so well. Despite all of its sly winking and nodding towards the initiated, this movie is ultimately interested in more than its sandbox, this fictional universe in which people duel with light swords and you can hear explosions in outer space. Alongside the fan service, there are references to “The Longest Day,” “The Dirty Dozen,” “The Asphalt Jungle,” “Le Cercle Rouge,” “Rififi” and scores of other films. By borrowing liberally from these relatively fresh sources of inspiration, director Gareth Edwards returns us to what made “Episode IV” so fascinating: the idea that you could pastiche together dozens of bits of cinematic history and create a wholly immersive and novel world out of them.

All of this may sound like rewarding a triumph over low expectations, and there’s a certain amount of truth to that. Frankly, until now every installment in this series since 1980’s “The Empire Strikes Back” has been terrible. “Rogue One” seems refreshing simply because it was directed and apparently reworked with an eye on making it survive as a movie on its own merits. And because it cracks open the door to its universe just a bit and lets in some new ideas.

For some fans like myself who have always felt that there’s more to explore in this franchise, this is its most salient achievement: “Rogue One” effectively proves the inherent sturdiness of the Star Wars universe. It shows that it’s possible to tell more than just that one same old story about hiding critical data in a droid which makes its way to a reluctant hero who finds the Force, et cetera et cetera. The result is that it makes this far, far away galaxy feel more porous and sprawling, less predictable and much, much more interesting.



When Samantha Bee Met Glenn Beck

Two-thousand and sixteen has been such a bizarre, horrible year. A case in point is Glenn Beck’s startling reconsideration of his previously divisive, hyperbolic tendencies. Here he is chatting with Samantha Bee on her show “Full Frontal,” and the result is confounding on every level, not least because Beck seems to make more sense than Bee. Watching this clip is like watching reality collapse upon itself. If it weren’t for the fact that next year could be even worse, I’d say I can’t wait for this awful year to be over.



Designing an Alien Alphabet for “Arrival”

Alien Alphabet from “Arrival”

This article at Wired offers some interesting insight into the thinking that informed the alien “written language” in Denis Villeneuve’s “Arrival.”

A single logogram can express a simple thought (‘Hi’) or a complex one (‘Hi Louise, I’m an alien but I come in peace’). The difference lies in the complexity of the shape. A logogram’s weight carries meaning, too: A thicker swirl of ink can indicate a sense of urgency; a thinner one suggests a quiet tone. A small hook attached to one symbol makes it a question. The system allows each logogram to express a bundle of ideas without adhering to any traditional rules of syntax or sequence.

Whether this is truly plausible or not, the result is beautiful. Read the full article at Wired.



The Truth About the McDonald’s Coffee Case

I can’t remember how long I’ve been familiar with this incident in its urban legend form, but this video from “Adam Ruins Everything” sheds some light on the actual facts. If you’re not familiar with it, the story goes that a woman bought some coffee from McDonald’s and sued when she spilled it on her lap, prompting people everywhere to bemoan the susceptibility of the American legal system to frivolous lawsuits. The details are more horrible—and sinister—than that.



What I Learned about My iPhone After Switching to the Google Pixel

Google Pixel

Carrying around two phones is one of those things that I’ve always associated only with a certain class of dork, but right now that’s me. Along with my iPhone, I’ve been toting around a Google Pixel phone everywhere I go and, as much as I can, I’ve tried to make it my primary device. Everything that I would normally turn to my iPhone for, I try to turn to my Pixel for.

I did this in part to learn more about Android; even though this isn’t my first Android device (I’ve owned two others and a tablet), it’s the first one that, from the outset, looked like it stood the best chance of legitimately replacing my iPhone. Everything from the build quality to the subtle but meaningful extra attention and care paid to the operating system felt closer to the iPhone than I’ve seen before.

To be sure, it’s a terrific phone. It has a world-class still camera that just about lives up to its hype, and to me the operating system has never felt as united with its hardware as it does in this phone.

As much as I tried though, after living with this device for several weeks I still felt that there were several stumbling blocks to jumping entirely to Android. Whether you consider it lock-in or value-add, Apple’s ecosystem is a powerful argument for sticking with the iPhone.

Everyone talks about iMessage being the most compelling argument for Apple’s ecosystem, and I found that to be absolutely true. I had hoped that Google’s new Allo messaging product would be a worthy contender, but it fell far short. Allo doesn’t match iMessage’s key strength: the ability to abstract your account (the place where you send and receive messages) from the device. By contrast, you can use one iMessage account on multiple devices (I count five devices for my account) and send and receive messages on all of them, but each Allo instance is tied to a single phone number and device, so there’s no device switching, and certainly no receiving Allo messages on my desktop. Learning that was a disappointment and pretty much meant the end of the argument for switching entirely to Android. iMessage is a huge advantage for Apple.

I also discovered something interesting about Google’s much vaunted strength in services: sometimes it’s no better than Apple’s. As an iTunes Match user, I’ve long bemoaned Apple’s inability to make automatic syncing of my music library between devices truly seamless and glitch-free. It’s gotten better over the years, but it’s still prone to oddball errors and quirks which, in the past, always made me wish that Google was powering the service instead.

In reality Google’s vaunted strength in services is sometimes no better than Apple’s.

When I got the Pixel I figured I could use Google Play Music syncing for the same purpose—to get the contents of my music library onto the Pixel. To my surprise, Google does an even poorer job than Apple. Among the problems I encountered: albums show up in multiple parts, tracks are missing, and corrected metadata doesn’t get synced. To be fair, Google Play Music syncing is still mostly usable; it just failed to live up to my expectations for Google’s services prowess.

Another thing that surprised me was the experience of using Android’s lock screen notifications, which in the past I’ve admired and found more powerful than those on iOS. They’ve generally been richer and more capable where for a long time iOS lock screen notifications were fairly limited. That is, until iOS 10 overhauled its notification system. Aesthetically, the new notifications interface actually doesn’t look as good, to me, as it did in iOS 9. But I hadn’t realized until I started using the Pixel how very good its interactions are in general. By and large, iOS 10 notifications are easy to use and understand: if you see something on the lock screen you can take action on it and that clears all the other notifications. If you missed a notification, you can access it again by pulling down from the top of the screen once the phone is unlocked.

Android notifications, on the other hand, are harder to predict. They tend to stick around even after you’ve engaged with them, and worse, they reshuffle all the time, sometimes right before your eyes. It’s relatively difficult to clear them all too, unless you effectively view (or at least scan) all of them. In the end, I found it disappointing that a system I had liked previously had turned into something more complex than I feel is really necessary.

None of which is to say that the Pixel is a bad phone. If you’re predisposed towards Android, or don’t enjoy iOS, the Pixel presents a superb overall experience. But I had hoped that, despite my predilection towards Apple, I would be able to find a viable alternative should I ever want to jump ship. I still hope that Android evolves into that, because I think that makes for a much more interesting market. For now though, even though I’m still carrying around my Pixel, my iPhone remains my main device.