is a blog about design, technology and culture written by Khoi Vinh, and has been more or less continuously published since December 2000 in New York City. Khoi is currently Principal Designer at Adobe, Design Chair at Wildcard and co-founder of Kidpost. Previously, Khoi was co-founder and CEO of Mixel (acquired by Etsy, Inc.), Design Director of The New York Times Online, and co-founder of the design studio Behavior, LLC. He is the author of “Ordering Disorder: Grid Principles for Web Design,” and was named one of Fast Company’s “fifty most influential designers in America.” Khoi lives in Crown Heights, Brooklyn with his wife and three children.
After a seven-month design process conducted in the open, Mozilla this week unveiled its new branding. Tim Miller, who leads the organization’s Creative Team, writes in this announcement blog post:
At the core of this project is the need for Mozilla’s purpose and brand to be better understood by more people. We want to be known as the champions for a healthy Internet. An Internet where we are all free to explore and discover and create and innovate without barriers or limitations. Where power is in the hands of many, not held by few. An Internet where our safety, security and identity are respected…
Our logo with its nod to URL language reinforces that the Internet is at the heart of Mozilla. We are committed to the original intent of the link as the beginning of an unfiltered, unmediated experience into the rich content of the Internet.
The work was done by London’s Johnson Banks, who wrote some thoughts here. The typeface is a bespoke creation by Typotheque for this project; it’s called “Zilla” and it’s intended to be free for everyone to use, but in a cursory search I couldn’t find a way to download it yet.
My first impression was that this is a bit of a groaner—the visual pun struck me as the tech/design equivalent of dad humor (as a dad myself, I should know). But it didn’t take me long to warm up to it. I’m a fan of its utter lack of pretension, and how unabashedly it embraces the organization’s geeky legacy. Overall, thumbs up.
One of the more underappreciated reasons to own an iPad is that it can change your relationship with your “real” computer. The folks at Astropad have been plugging away at this for a few years now. Just last week they upped their game with the release of their new Astropad Studio, which pairs a Mac app that you download directly from astropad.com with an iPad app available through the App Store. It turns your iPad into a Wacom-style graphics tablet, but because you can configure your own buttons, it actually presents an alternative interface to your favorite software. Not only can you create your own custom shortcuts with combinations of Apple Pencil gestures and touch, but you can also configure just the menu items you like, turning off the ones that you don’t find useful. For an app like Photoshop, it basically lets you pare back the experience to just what matters to you. These videos do a good job explaining the product:
Astropad Studio is clearly aimed at professionals, which explains its price—surprisingly hefty for your average iOS app, but entirely reasonable (cheap, even) for pros: US$8 per month or US$65 per year. Yes, it’s only available via subscription (the company still sells their original app for a one-time fee of US$30), but the reality is that that’s the model that works for professional software these days. Learn more at astropad.com or in their blog post announcement.
Asking Google Home or Amazon Echo to play any song you like from the Spotify catalog is extremely liberating—but until there is support for the Spotify Connect feature, these devices can only play back music on their own speakers, or on speakers to which they’re directly connected. I have AirPlay speakers all over the house and I would like to be able to tell Alexa or Google Assistant to play a given song in my living room or my kitchen or in my office—or any combination of those locations—but that’s just not possible today. This was the biggest of the complaints I wrote about last week in my comparison of Echo and Home.
Persistence and hackery can overcome almost any tech roadblock though. Over the weekend I strung together a series of tools that allow me to issue voice commands to Google Home (which I prefer slightly) to select music on Spotify, then switch the playback to the Mac mini that sits at the heart of my home theater setup, and from there pipe music to the various AirPlay speakers in the house.
What’s more interesting is that the basics of this solution could theoretically allow you to control almost anything on your Mac via voice—and it’s incredibly easy to set up. It starts with the invaluable IFTTT service, which supports both Google Assistant and Alexa. You can define your own custom phrases to serve as IFTTT triggers, which can then generate simple text files on a cloud storage service like Dropbox, which can kick off automated routines on your Mac. This is what it looks like to set up the IFTTT component:
When you set Dropbox to sync the resulting text files to your Mac’s hard drive, the key is to sync them into directories you’ve configured with macOS’s Folder Actions feature. Basically, when the text files are added to these folders, they act as triggers for Automator actions, which can do tons of stuff, including run AppleScript code—which in turn can do even more stuff. Once Automator is done, it can even clean up after itself by trashing the text file it used as a trigger, if you need.
This is the basic approach I use to enable Google Home to play music throughout my house, though there are more steps involved, and some janky workarounds. Using the Home’s built-in voice commands I search for and play the songs I want, just as you would normally do. Then I speak a custom phrase like “Switch tunes to home theater” (most music playback-related phrases are reserved by Google Home so you can’t just say “Ok Google, play music on my home theater”) to kick off an IFTTT applet. The applet saves a text file in Dropbox, which syncs to my Mac into a directory with a folder action attached to it. That action runs a bit of AppleScript to open Spotify and, through the brute force of repeating pre-recorded mouse locations and clicks (courtesy of an extremely unsexy app called Mac Auto Mouse Click), it opens the Spotify Connect menu and switches the playback device from Google Home to the Mac mini. Then finally (whew), Automator also tells Airfoil to redirect that music to a pre-defined set of AirPlay speakers. All of a sudden, the music I asked for in the kitchen is playing all over the house.
Granted, the setup is hardly elegant. But the beauty of it is that it’s incredibly simple to put together, easy enough that I’ve created a series of similar commands to play and pause the music, jump back and forward, etc. As I’ve said in the past, I’m not a programmer by any means—before this weekend I had never spent more than a few minutes in Automator or writing AppleScript—so the learning curve is shallow.
This kind of rudimentary but highly engaging automation is sure to become more and more central to consumers as voice-powered interfaces gain ground. So it’s all the more concerning that Apple parted ways with its longtime champion of automation products last fall, though perhaps there are other plans afoot to continue evolving the automation of Apple’s devices and software. Looking ahead at a future filled with these kinds of devices, as users we are only going to want the apps and services that we use to be more scriptable, more responsive to integrations. And we’ll want that ability to automate to be simple enough that we can put together the missing flows and actions that we want ourselves.
You can read about it in books as much as you want, but in parenting there are thresholds of knowledge you can only acquire by experiencing them firsthand. One of them is having twins. Ours turn four years old today—four years of craziness I never thought I had it in me to survive. But it’s been worth it, every day. When the boys were first born and my wife and I were basically in shock from the relentless grind of caring for two infants, another parent of twins told us, “At some point the clouds part and you can’t imagine it any other way.” That turned out to be so true. I can’t imagine life without this pair of irrepressible, unbelievable, unstoppable, wonderful little tykes. Happy birthday Lafayette and Thiebaud.
I’m sure I’m not the only person who got both an Amazon Echo and a Google Home over the holidays and is putting both of them through their paces. But I’m definitely the only person who got both and who writes on this blog, so here are my thoughts about the two.
My wife gave her parents an Amazon Echo for Christmas and the whole extended family played with it last week when we were visiting. What I saw was that, for both adults and kids encountering this kind of technology for the first time, it’s a blast to learn, play and explore the device’s capabilities. So much so that the Echo’s novelty just about eclipses how rough Alexa’s natural language processing still is.
Alexa is clearly able to understand your commands—and act on them—better than Siri is able to, but it doesn’t feel leagues better. In practice, it’s not uncommon to have to issue commands three, four, five or more times before Alexa understands what you’re trying to say—or until you learn the way Alexa wants you to say it. In fact, I found that the mere act of listening to other people of varying tech savviness negotiate with Alexa to get things done can be a frustrating experience. I bit my tongue several times as family members wrestled with slight variations on phrasing, emphasis and syntax—and then I went ahead and encountered exactly the same problems when uttering my own commands. Nevertheless, none of this dissuaded people from continuing to talk to Alexa. In the pantheon of this past Christmas season’s gifts, it was a hit.
At home we also received an Echo Dot as a present, a product which I think could be a home run. For just US$50, you get everything that the Echo does except for the higher quality speaker (which means Amazon is basically charging you US$130 for the full-fledged version’s speaker, when you think about it). At that price point, I could easily imagine having a Dot in each room of the house, which would make for a really powerful system.
After we got back to Brooklyn, I hooked up the Dot to my Todoist account. Task management has always been my top priority for any kind of A.I.-based assistant; one of my biggest complaints about technology is that it still hasn’t solved the use case in which a task occurs to me and, somewhere between having that thought and pulling out my phone, unlocking it and opening up Todoist, I’ve somehow forgotten entirely what it was I was trying to record. Few things are more frustrating.
Technically, Alexa does solve that problem, but not very elegantly. You can certainly ask it to record items for you, and if it understands what you’re saying (again, phrasing, emphasis and syntax are important, and assigning dates via voice can be a frustrating challenge) it will add them to both its Alexa-based to-do list and to Todoist itself. But it often misinterprets dates, and it’s difficult to edit or correct what it records. The end result is something less than a well-formed task, as it may require editing and/or tagging and filing on a screen. A friend of mine says he uses Alexa-plus-Todoist to merely capture what he said and then massages the data later; to me, that’s just one step up from a voice memo.
By comparison Google Home and Todoist make for a moderately better pair. For one, Home lets you “talk” directly with Todoist—literally, a different voice takes over (that’s a meaningful change in the user interface that has the effect of making you feel that you’re interacting directly with the service, instead of through a middleman)—where Alexa merely syncs its tasks to the third-party app. Additionally, you can mark items as completed, add labels and move items to different projects. On the whole, this is a step up from Alexa and so I find it much more useful. But what I also discovered was that it’s relatively difficult to navigate a to-do list by voice; I have lots of items on my list at any given moment, and it’s time-consuming to go through them via audio. I also felt gun shy about editing them or marking them completed, afraid that I would inadvertently mess up my tasks somehow. Maybe more experience with these systems will bring a greater level of confidence. As it stands, I don’t yet feel proficient getting things done by voice.
On the other hand, both systems do relatively well with lower stakes tasks, like playing music. In fact, having a virtually unlimited catalog of music, as you get with Spotify, that can be controlled by voice will probably, for most people, be the single most useful aspect of these devices. In the case of my in-laws, when we added a Spotify account to their Echo and they realized that you could ask for just about anything and the Echo would give it to them, the device seemed to become much more useful.
However, here I still have a complaint. Neither Echo nor Home is able to play music via Spotify Connect. Which is to say, as of today, they’re only capable of playing music through their own speakers (or the ones that they’re directly linked up to). I have a network of AirPlay speakers set up in my house, and what I would like to do is to tell Alexa or Google Assistant to play music on, say, the living room speaker, or in the kitchen and in my office at the same time. That’s currently not possible, though this capability will come to Sonos speakers sometime in 2017. Hopefully Spotify is not deferring a solution to this problem entirely to Sonos, as I’m on record for having no real use for Sonos in my house.
A few more comparison notes: both devices include prominently placed buttons that allow you to turn off their listening capabilities instantly. As a somewhat privacy-minded tech consumer, this was important to me, but it didn’t take me long to realize that the buttons are essentially useless if you want to use the smart speaker, and the smart speaker is essentially useless if you want to use the button. It seems pretty unlikely that I’ll ever have the wherewithal to use this feature if I keep these devices in my home; it’s just not practical to try to remember to turn off the mic and also remember to turn it on later. The end result is that, like it or not, these devices are always listening. In fact, this always-on user experience is why these machines succeed where Siri doesn’t; they save a click, which is all people really want out of technology innovation.
And finally, design. The Google Home comes in a nicer package, has a much nicer form factor, and is just gorgeous. It’s a very handsome expression of impressive technology. The Amazon Echo and the Echo Dot, by contrast, look like gadgets you would buy at Bed Bath & Beyond and are packaged exactly that way—and I don’t mean that pejoratively. An Echo looks like it belongs in your home, alongside your KitchenAid and your fancy coffee maker. An Echo is not a miniature totem of technological achievement; it’s an appliance that makes your life easier. That says a lot about the difference between these two.
Among other things, the holidays afford time for indulgences masquerading as productivity. For instance, I spent an inordinate amount of time playing around with Workflow, the surprisingly powerful automation app for iOS. It’s an elegant, easy to learn scripting platform that makes iOS’s famously buttoned-up ecosystem of native apps feel sufficiently pliable to just about any whim that might occur to you.
A few examples: I built a workflow (the app’s confusingly self-referential terminology for a script) to move screenshots from my iPad to Dropbox and delete them from my camera roll; one to scan my calendar for events on a specific day and share my availability with coworkers; and another to generate PDFs of just about anything and store them in a designated folder. All of these can be triggered with just a tap or two from a Workflow widget in my device’s Notifications Center.
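To give a flavor of the logic inside one of these, here’s a rough sketch in Python of the heart of that calendar workflow: given a day’s events, compute the free gaps that could be shared as availability. This is an illustrative analog, not what Workflow actually generates under the hood; the working hours and the assumption that events arrive sorted and non-overlapping are mine.

```python
from datetime import datetime

def free_slots(events, day_start, day_end):
    """Given sorted, non-overlapping (start, end) datetime pairs for one
    day, return the gaps between them within working hours -- i.e. the
    availability a workflow like this would share with coworkers."""
    slots = []
    cursor = day_start
    for start, end in events:
        if start > cursor:
            slots.append((cursor, start))   # gap before this event
        cursor = max(cursor, end)
    if cursor < day_end:
        slots.append((cursor, day_end))     # free time after last event
    return slots

# Example: two meetings leave three free windows (9-10, 11-1, 2-6).
day = datetime(2017, 1, 9)
busy = [(day.replace(hour=10), day.replace(hour=11)),
        (day.replace(hour=13), day.replace(hour=14))]
print(free_slots(busy, day.replace(hour=9), day.replace(hour=18)))
```

The appeal of Workflow is that it gives you this kind of logic through drag-and-drop actions rather than code, but the underlying shape of the computation is the same.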
I found Workflow to be particularly effective for blogging from my iPad. This is something I do regularly because I often travel only with a tablet. Before Workflow it required jumping through more hoops than doing the same task on my desktop or laptop. So I created one workflow to resize the canvas of images I post (like the one above) and another to upload them to my WordPress install while also saving backups to Dropbox, again all with just a couple of taps.
It’s true that none of my time-saving scripts amount to mind-bending breakthroughs in computing, but that doesn’t change the fact that Workflow can remove considerable friction from productivity routines on iOS. One notable added bonus is that after building the kind of complex and powerful workflows that you really need the larger screen real estate of an iPad to properly piece together and test, that work can be synced instantly to your iPhone, giving you that same complexity and power in the palm of your hand, on the go.
To be honest, I spent more time constructing these scripts than I’ll probably save in practice, at least measured in real minutes. But these were my first projects in Workflow so they were useful for learning how to use the app. More to the point, it was fun to do, too; Workflow makes experimentation (mostly) easy and straightforward, a huge selling point for non-programmers like myself.
For all its power though, Workflow is actually a curious mix of the polished and the raw. Its creators have delivered an incredibly well-considered scripting platform in terms of under-the-hood thinking and simplifying assumptions that make it a real pleasure to use. On the other hand, the app leaves you somewhat on your own as you explore it. There’s no undo, so an errant swipe could change parameters in your work before you know it, and you have to be conscientious about saving backup copies as your workflows evolve.
The app is also conspicuously missing robust, canonical documentation of its many concepts and capabilities. There are tips built into the interface but they’re rudimentary, and if you need more depth the developer directs you to a Subreddit. (You can also refer to a series of episodes of Federico Viticci and Fraser Speirs’s Canvas podcast, which are an excellent introduction to Workflow.) While you don’t need an engineering background to learn Workflow, those with even a passing grasp of how scripting works, along with a willingness to Google for answers when they’re stuck, will have a much easier time than true novices.
Ultimately it’s probably most accurate to say that Workflow is just a very young scripting environment, one that is off to a great start but has much growing to do. All other things being equal, I’d much rather have it in its current, somewhat threadbare state, a bit of a challenge but still tremendously useful, than be without it. It’s already an indispensable tool if you’re interested in going iPad-first or even iPad-only.
The artist Tyrus Wong, who made major contributions to Walt Disney’s 1942 animated classic “Bambi,” passed away just before the end of the year. He was responsible for the gorgeously distinctive background paintings that made it a breakthrough in animation.
Wong’s obituary recounts his epic tale: as a Chinese-born child he was forced to endure withering government screening and trials to immigrate to the States; his father taught him to use a brush with just water because they could not afford ink; he earned a pittance in wages and the bigotry of low expectations making his way in the arts; when he tried his hand at animation, he was assigned the “in between” work that was considered the trade’s lowest and most menial job; through sheer pluck he managed to convince Disney to hire him to create the work that has helped “Bambi” endure for decades, and yet he was still fired during an employee strike in which he did not take part; ultimately it took until the 1990s for him to win recognition for his seminal work. Despite all that the man lived to be 106 years old before he passed on. Amazing.
Read the full obituary at nytimes.com. Also, read about the retrospective exhibition of his work that was mounted last year at observer.com.
The “Star Wars” franchise is generally classified as science fiction, but for many years now it’s really been in the process of metamorphosing into a genre of its own. It’s hard to look at the last seven films and not notice that, as a whole, they have become increasingly, almost pathologically self-referential, governed by their own increasingly solipsistic rules and conventions, preoccupied with burying the original trilogy further and further in useless proprietary trivia.
This momentum towards meaninglessness is what each new episode must contend with. Last year’s “The Force Awakens,” in its slavish devotion to recreating tropes and devices we’d seen in the franchise before, had the effect of making the vastness of space seem small, hermetic, and starving for possibilities.
In many ways “Rogue One,” the latest installment and the first “standalone” film, doesn’t quite escape these expectations. Throughout its two hours and thirteen minutes, it busies itself with conveying meanings that only the most ardent devotees of Star Wars will ever be able to decipher. It’s peppered with details and characters and allusions to not just its theatrical predecessors, but also to the television shows and novels and video games and toys that we’re all supposed to be buying so that we can enjoy the next movie, television show, novel, or video game.
Yet “Rogue One” also manages, somehow, to sneak in a real story in the midst of all that fan service. It’s not the most original story, or the most vividly rendered, but it’s a highly entertaining one that achieves a workable truce with the demands of its unique, Disney-owned form. It’s the tale of a rag-tag band of misfits, led by a conflicted protagonist, who attempt to steal a critical MacGuffin that could tip the balance in an exhausting war. Along the way you get distrust and scheming and then reversals of fortune and leaps of faith and heartening epiphanies, and some intensely choreographed shoot ’em ups and explosions too.
You’ve seen all of this before. It echoes many hallmarks of war movies and heist flicks—and that, maybe more than anything, is what makes this movie work so well. Despite all of its sly winking and nodding towards the initiated, this movie is ultimately interested in more than its sandbox, this fictional universe in which people duel with light swords and you can hear explosions in outer space. Alongside the fan service, there are references to “The Longest Day,” “The Dirty Dozen,” “The Asphalt Jungle,” “Le Cercle Rouge,” “Rififi” and scores of other films. By borrowing liberally from these relatively fresh sources of inspiration, director Gareth Edwards returns us to what made “Episode IV” so fascinating: the idea that you could pastiche together dozens of bits of cinematic history and create a wholly immersive and novel world out of them.
All of this may sound like rewarding a triumph over low expectations, and there’s a certain amount of truth to that. Frankly, until now every installment in this series since 1980’s “The Empire Strikes Back” has been terrible. “Rogue One” seems refreshing simply because it was directed and apparently reworked with an eye on making it survive as a movie on its own merits. And because it cracks open the door to its universe just a bit and lets in some new ideas.
For some fans like myself who have always felt that there’s more to explore in this franchise, this is its most salient achievement: “Rogue One” effectively proves the inherent sturdiness of the Star Wars universe. It shows that it’s possible to tell more than just that one same old story about hiding critical data in a droid which makes its way to a reluctant hero who finds the Force, et cetera et cetera. The result is that it makes this far, far away galaxy feel more porous and sprawling, less predictable and much, much more interesting.
Two-thousand and sixteen has been such a bizarre, horrible year. A case in point is Glenn Beck’s reconsideration of his previously divisive, hyperbolic tendencies. Here he is chatting with Samantha Bee on her show “Full Frontal,” and the result is confounding on every level, not least because Beck seems to make more sense than Bee. Watching this clip is like watching reality collapse upon itself. If it weren’t for the fact that next year could be even worse, I’d say I can’t wait for this awful year to be over.
This article at Wired offers some interesting insight into the thinking that informed the alien “written language” in Denis Villeneuve’s “Arrival.”
A single logogram can express a simple thought (‘Hi’) or a complex one (‘Hi Louise, I’m an alien but I come in peace’). The difference lies in the complexity of the shape. A logogram’s weight carries meaning, too: A thicker swirl of ink can indicate a sense of urgency; a thinner one suggests a quiet tone. A small hook attached to one symbol makes it a question. The system allows each logogram to express a bundle of ideas without adhering to any traditional rules of syntax or sequence.
Whether this is truly plausible or not, the result is beautiful. Read the full article at wired.com.