is a blog about design, technology and culture written by Khoi Vinh, and has been more or less continuously published since December 2000 in New York City. Khoi is currently Principal Designer at Adobe, Design Chair at Wildcard and co-founder of Kidpost. Previously, Khoi was co-founder and CEO of Mixel (acquired by Etsy, Inc.), Design Director of The New York Times Online, and co-founder of the design studio Behavior, LLC. He is the author of “Ordering Disorder: Grid Principles for Web Design,” and was named one of Fast Company’s “fifty most influential designers in America.” Khoi lives in Crown Heights, Brooklyn with his wife and three children. Refer to the advertising and sponsorship page for inquiries.
They say a compelling story comes not from plot but from character. But Marvel Studios confounds this truism. As they demonstrate in the passable “Spider-Man: Homecoming,” which I finally got around to seeing in August, the producers fully grasp the title character, probably better than any previous creative regime that has handled him. In Tom Holland they’ve cast the best of the actors ever to play Spider-Man, one who is a perfect fit with the motivations they’ve written, and one who delivers each line with total, infectious alacrity. And yet, this keen understanding of character fails to inspire a particularly compelling story. Instead what you get is a mostly forgettable joy ride of a plot—the highest tension the film ever achieves happens during a conversation in a car ride, while all of the other action set pieces wilt away almost as you watch them. In fact “Homecoming” is more transactional than it is narrative; the movie moves along from cinematic universe obligation to cinematic universe obligation, tirelessly invoking its tiresome franchise debts. The saddest thing is that there’s virtually no point in complaining about this, as this movie could never have existed outside of the confines of the corporate strategy PowerPoint deck that brought it to life. And its ninety-two percent Rotten Tomatoes rating shows that Marvel’s decade-long campaign to lower our standards as moviegoers has succeeded. They won, we lost.
On the bright side, I did get to see seventeen other films in August, many of which were wonderful. Of particular note was the special theatrical screening of “Three Days of the Condor” at Alamo Drafthouse Cinema in Brooklyn. It was followed by an enormously entertaining Q&A with novelist James Grady, whose book was the basis for the movie. That’s a pretty good example of how a great understanding of character can power a legitimately great movie.
Of the major technological shifts that designers are all constantly being told we need to prepare for, the idea of voice assistants becoming a major method for interacting with computers fascinates me most, in no small part because it seems the most inevitable. Whether or not we’ll soon all be wearing augmented reality glasses or immersing ourselves in virtual reality spaces for hours at a time, it’s easy to imagine a role for a voice-based interface like Alexa, Google Assistant or Siri in these new experiences, not to mention in plain old real life. We are at the very beginning of this phenomenon, though, and the tools for creating applications on these platforms are still primitive.
That’s why I was so impressed when I saw Sayspring for the first time earlier this year. While there are plenty of development tools for voice out there, Sayspring is the first I’ve seen that treats voice user interfaces explicitly as a design problem, which immediately struck me as exactly right. The app allows even those with no prior experience in voice UIs or bots to get a prototype Alexa skill or Google Assistant service up and running within minutes—on an actual Echo or a Home, no less. More than just technically impressive, that fast track quickly makes it clear that crafting the experience of a voice app requires lots of iteration and careful trial and error—in other words, design. Of course, thinking about voice UIs in this design-centric way raises all kinds of questions about how this technology can evolve into a language (no pun intended) that resonates with users. So I interviewed Sayspring founder and CEO Mark Webster about his ambitions for Sayspring and his thoughts on voice assistants in general.
Khoi Vinh: What makes Sayspring different and better than the development kits that Amazon and Google provide?
Mark Webster: As you mention, both Amazon and Google have been focused on releasing code templates and tutorials to help developers quickly build simple apps, like quizzes and fact generators. This has been great to help everyone get their feet wet with these platforms, but it has also resulted in some lame voice apps. While these platforms are new, so are voice interfaces themselves. Product teams don’t have much experience with them. What’s needed to fulfill the promise of this powerful new medium is not a focus on how to build for voice, but on what we’re building, why we’re building it, and who it’s for.
Answering those questions requires making design a part of the process, and doing that requires a set of tools that removes all the technical barriers to working with voice. That’s what Sayspring does. Our collaborative design software lets designers, UX professionals, and product people craft voice-powered user experiences, and actually talk to them in real-time, without needing to code or deploy anything.
Our belief is that great voice experiences start with focusing on the user journey, and that is how the process of designing and prototyping on Sayspring begins. You don’t need to know anything about the underlying technology of voice to use Sayspring. You just plan out the flows of the experience, add some commands and responses, and start talking to your project on any device. You can even share it with others, to do user testing prior to development.
The best companies understand the value of a proper design process across web and mobile. Sayspring is bringing that disciplined approach to voice. Over time, we’ll make it easier for every person in the organization to work with voice apps, from design to development to production. But I strongly believe that if you begin product creation with a weak design approach, everything that comes after it is a waste of time, so that’s why we’re starting there.
The design and prototyping approach kind of upends what everyone expects from a voice UI tool at this stage in the evolution of voice UIs: that they should help you deploy a completed product as well. How important—or unimportant—is that to Sayspring, now and in the near future?
Let’s take a look at where we are in the current evolution of voice UIs. Alexa alone already has over 11,000 Skills, so completed products are being deployed all the time. The process could definitely be easier, but developers are figuring it out. Still, many of these applications are underwhelming and quickly abandoned.
Every disruptive medium has an early period where creators take something from an earlier medium and just move it to the new one. The first television shows were radio shows in front of a camera, the first mobile apps were just smaller versions of websites. It takes some time to understand the nuances and power of a new medium, to create the experiences that fully leverage the possibilities available.
For voice, we’re trying to push the process forward by empowering designers to work in this new medium in a way that’s impossible without Sayspring.
We’re at the dawn of a massive shift in computing, unlike what we’ve seen in the past. Voice promises to deliver interactions closer to how we all communicate as human beings. Applications have to adapt to people now, instead of the other way around. There is no mouse or keyboard or screen to learn how to use. We all know how to speak to one another, and voice applications have to conform to that. This is maybe the biggest design challenge we’ve seen in the digital world.
One huge advantage to working with voice UIs over visual UIs is that they’re ultimately text-based, so the division between design and development is more blurred than it traditionally has been. This means that once you’ve completed the design process, moving into development and deployment is relatively straightforward, and Sayspring will be able to do a lot of the heavy lifting to bring that product to production. Our mission is to become the one place where a team can design, build, and manage their voice applications across multiple voice platforms. So we plan to handle the end-to-end process of creation, but first we all collectively need to focus on designing experiences worth shipping.
Do you believe that voice UIs will continue to be “ultimately text-based,” as you say? I’m thinking about Apple’s CarPlay, Google’s Android Auto and obviously Amazon’s recently announced Echo Look. These seem to suggest that voice and screen can be an effective combination.
I meant that the output of the voice design process—intents, utterances, entities, the actual speech—is ultimately text-based, so from a workflow standpoint, moving from design to development tends to be more seamless.
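To make that notion concrete, here is a rough sketch of the kind of text-based artifact being described: a hypothetical Alexa-style interaction model in the JSON shape the Alexa Skills Kit uses. The invocation name, intent, slot, and sample utterances are all invented for illustration.

```json
{
  "interactionModel": {
    "languageModel": {
      "invocationName": "show finder",
      "intents": [
        {
          "name": "FindShowIntent",
          "slots": [
            { "name": "genre", "type": "SHOW_GENRE" }
          ],
          "samples": [
            "find me a {genre} show",
            "what {genre} shows are playing this weekend",
            "I want to see a {genre} show"
          ]
        }
      ]
    }
  }
}
```

Because the entire interface definition is plain text like this, handing a finished design to a developer is closer to handing off a spec than handing off a mockup.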
Many voice-driven experiences will involve a screen though, in addition to voice simply being a method of input for a traditional GUI-based experience. We’re already working on display support for Sayspring. The visual components for Alexa and Google Home are currently restricted to text and one image, so it’s easy to implement. As they get more involved, Sayspring would integrate with other tools like Photoshop or Sketch, to handle the visual aspect. I can’t see Sayspring ever trying to rebuild the popular products we already have for visual design.
We think a lot about the multi-modality of voice and screens. One thought exercise we often use is to imagine an assistant following you at all times with a laptop they’d use to perform the tasks you ask for. You could have a conversation to get things done, and at certain moments it would make sense for them to turn the laptop around and show you the screen.
So you might tell them you want to see a Broadway show this weekend, and have a dialog about what kind of show you’re interested in, what you’ve already seen, and what’s available. If you decide to buy tickets, they would likely want to show you a seat map of the theater on the screen. You’d tell them what tickets to buy, and they’d finish the transaction. That’s the way ticket purchasing through a VUI would likely happen as well.
I imagine that kind of metaphor or mental model is helpful for imagining what a voice-based user experience can be, since the form is relatively new. How much do you think it’s Sayspring’s job to help your users grok the unique challenges of designing in this medium?
Considering how new this medium is to most people, we feel a huge responsibility to both our users and to the design community as a whole. We’re not only helping people work through the process of voice design, we’re also out there advocating that voice experiences need to be designed in the first place. Too many voice applications are being created by developers with no design input, and most companies working with voice have yet to establish a proper design process. We see ourselves as advocates for design within the broader conversation about voice.
Helping designers work with a new medium also brings with it some product challenges for our team. To give you one example, personality design is a crucial part of the voice design process. Beyond just selecting the words to use, speech can be styled with SSML (Speech Synthesis Markup Language) to add pauses, change the pronunciation of words, and include certain tone or inflection changes.
SSML looks similar to HTML, and we’re considering adding a rich-text editor for SSML to Sayspring. Now, there aren’t many designers out there asking for better SSML tools, and the subject doesn’t come up in user feedback. But do we put it out in the world anyway, with the belief that it will help users create better voice experiences? How opinionated should we be in steering the establishment of voice design practices? Or should we just be responsive to a more organic evolution? It’s a tough question for us to answer, and we think about that balance each time we prioritize our own product roadmap.
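For readers who haven’t seen it, here is a small example of what SSML markup looks like. The wording is invented, but the tags shown (`break`, `emphasis`, `say-as`, `prosody`) are standard SSML:

```xml
<speak>
  Welcome back.
  <break time="400ms"/>
  Your order will arrive
  <emphasis level="moderate">tomorrow</emphasis> at
  <say-as interpret-as="time">5pm</say-as>.
  <prosody rate="slow">Anything else I can help with?</prosody>
</speak>
```

A rich-text editor for this would let a designer shape pacing and emphasis directly, much the way a type tool exposes leading and weight, without ever touching the angle brackets.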
These kinds of considerations really suggest that this is an entirely different kind of design. Do you have a sense yet what makes for a good voice designer? And where is the overlap with what makes for a good visual interface designer?
While it’s going to be a new kind of design for many designers, it should still be based on the design process most of us are familiar with. All design work, including voice, should follow the fundamentals of defining the problem, doing research, brainstorming ideas, designing solutions, collecting feedback, and iterating. Ultimately, the best voice designers will be the people who are thoughtful and process-driven in their visual design work, and who bring that discipline over to voice.
Sayspring wants to be the canvas where designers work with a medium that’s new to them, that is based on the process they already know and is inspired by the tools they’ve used before, but created specifically for voice. We view this new form of voice design as a remix of several disciplines that all have a long history to borrow from. Designing for phone-based interactive voice response (IVR) systems involves crafting a voice-driven user journey. Copywriting and screenwriting focus on word selection, delivery of message, narrative, and personality. And sound design or voice-over work has a lot to say about pacing, inflection, tone of language, and auditory atmosphere.
We’ll also eventually have multiple designers, with specific specialties, all working together on voice projects. Sound design is a great example, actually. Most creators of Alexa Skills or Google Assistant Actions have yet to explore using non-verbal audio as part of the experience. For example, “earcons” are short, distinct sounds that can be used to orient a user within the application, almost like using varying header colors to identify sections of a website. Almost no one is using them. Nearly every Skill lets you know you’ve opened it by saying “Welcome to Skill Name” instead of playing a short, familiar audio clip. That will change over time.
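An earcon of that sort could be as simple as swapping the spoken welcome for an audio tag in the skill’s SSML response; the sound file URL below is purely illustrative:

```xml
<speak>
  <!-- a short, familiar audio clip in place of "Welcome to Skill Name" -->
  <audio src="https://example.com/sounds/open-earcon.mp3"/>
  What would you like to do?
</speak>
```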
I think we’ll soon see the design of a voice application involve an interaction designer to construct it, a copywriter to write the actual speech, and a sound designer to add enhancements, cues, and atmosphere to the experience. And we want all of that to happen in Sayspring.
Do you think that having more diversely skilled teams and richer workflows is what’s needed to help these Alexa Skills and Google Assistant Actions get to the next level? By some estimates, the vast majority of these apps go unused—they’re hard for new users to discover, and even if a user installs one of them, the chances that she’ll be using it again in week two or week three are vanishingly small. Aside from Spotify, I’m not sure there’s another app that has broken through in a serious way.
I think smarter teams gaining a deeper understanding of how to use this new medium, along with platform improvements (like changes to discovery), will drive the shift to voice.
Alexa knowing more about you is important for fixing discovery. For example, music is an ideal use case for voice interfaces, and being able to connect your Spotify account once—and never have to ask for Spotify again—has helped. We’re seeing more and more of this happen. Alexa just released a new video API to connect to cable boxes and streaming services that doesn’t require the specific invocation of the skill or app. Dish already launched this type of skill for Alexa, so now saying “Alexa, go to ESPN” changes the channel on your TV. That will help repeat usage tremendously. But they also need to make sure that skill is well-designed and easy to use.
Many early skills are starting points. Domino’s pizza launched an Alexa Skill, which allows you to reorder a pizza you’ve ordered before. Domino’s CEO Patrick Doyle has said the growing use of that skill has convinced the company to devote more resources towards the voice ordering experience. They’re working on being able to create an order from scratch now. With different sizes, toppings, deals, and sides, that’s a much harder design problem, and requires a more thoughtful design process.
But also, the shift to voice is bigger than just Skills and Actions. Google Analytics just announced voice support, on mobile and desktop, so you can ask “How many visitors did we have last week?” instead of fiddling with an interface. And twenty-five percent of all desktop Cortana queries on Windows 10 are voice. With 100 million active monthly Cortana users, that’s a lot of people already talking to their desktop computer. Designing voice interfaces will soon be a required task for any digital product design team.
I appreciate that a pizza order is more involved than playing music, but I wonder about the limits of voice UIs in terms of dealing with complex tasks of any real scale. In my personal experience, even the cognitive load of playing music is more than I would’ve expected. Unless I have a really, really specific idea of what I want to hear, I have to keep this catalog of music and artists that I like in my head, and that means I tend to play the same few things over and over. Yet when I look at iTunes, for better or worse, I can happen across all kinds of stuff that I wouldn’t be able to recall standing there in my kitchen, talking to Alexa or Google Home. Is there a practical limit to what these voice interfaces can do? Without an accompanying visual UI, that is?
Every medium has different strengths and weaknesses, and designers need to keep pushing the perceived limits of that medium to find solutions that users find valuable. For example, a lot of Alexa users deal with the issue of music discovery you mention, so last week Amazon Music launched support for over 500 activity requests like “pop music for cooking” or “classical music for sleeping” to help address it. I will say though that having spent a chunk of my career in streaming media, discovery is its own beast and definitely wouldn’t be solved just by adding a screen. I mean, the whole reason Pandora exists is to help you find something to listen to, regardless of the interface.
I wouldn’t think of voice interfaces in terms of having a practical limit, but rather back to this idea of its strengths and weaknesses. Everything shouldn’t be crammed into a voice-only experience. Something like browsing Pinterest wouldn’t work without a screen. Getting ideas for redesigning your kitchen requires a visual display. However, sitting on your couch, watching the TV screen and using a voice interface to ask for ideas, make suggestions, and browse through photos of kitchen concepts, glass of wine in hand, sounds delightful.
That does sound kind of delightful—but maybe just because of the glass of wine? All kidding aside, I think what you mean is that voice represents a much more casual, laid back approach to computing, is that right? And that we probably shouldn’t look to voice for major productivity?
It doesn’t feel either/or to me. It’s a more responsive form of computing, which offers entirely new opportunities for user-centric design. If I walk into my house and want to turn on some lights and music with ease, voice is a great interface to deliver on that. If I’m a pharma sales rep who is driving between appointments and wants to update my Salesforce records, instead of having to enter them at the office later, voice serves that situation well too. Either way, it’s going to be fun to get our hands dirty figuring it out. And with seventy percent of the market right now, Alexa is a great place to start designing for voice.
Last question: would you care to wager who will win this race, Alexa, Siri, Google Assistant, Cortana, other?
Overall, I think different platforms win different spaces. Alexa might take the home, Cortana might win the enterprise, Google Assistant or Siri could win the car. But it’s fair to say that if you’re competing directly with Amazon, on anything, you should be worried.
Rick Poynor’s new book “National Theatre Posters: A Design History” is the latest in a long line of covetable design tomes from Unit Editions. In 256 pages, it chronicles more than five decades of design work for an “enormous range” of theatrical productions, serving at once as a survey of design’s relationship with the revered institution and, the publishers claim, “a case study of the way the poster as a medium has evolved in Britain in the last half-century.” The book was designed by Spin and, judging from photos, is just stunning.
The online publication Racked is not where you’d normally expect to find thoughtful consideration of the arcana of design and visual communication. It’s true that, like all Vox Media properties, it’s smartly written, but its focus is not design so much as shopping:
Racked covers shopping from every angle and in various forms, from service stories to reported features to essays to longform. We publish pieces about how and why we buy things, but also use shopping as a frame to tell all sorts of smart and diverse stories, both big and small. At Racked, shopping pertains to clothes, accessories, and beauty, not home or wellness.
And yet, Racked published an article last month that might just be the most insightful piece of design criticism I’ve read in a long time. It’s certainly the most perceptive commentary on design from a publication not explicitly focused on design that I can recall. Written by senior reporter Eliza Brooke, the piece finally asks a simple but little discussed question: “Why Does Every Lifestyle Startup Look the Same?”
Brooke argues that in recent years a preponderance of new consumer brands have all settled on a surprisingly samey aesthetic. She labels it “startup minimalism,” a mixture of visual principles borrowed from mid-century modernism and sans serif typefaces that owe varying levels of debt to Futura. (I would also add that it includes a twee layout and photographic sensibility influenced by Wes Anderson films.)
…this genre of branding has become especially, almost predictably, concentrated among venture-backed lifestyle startups like Outdoor Voices, Bonobos, Frank And Oak, Lyst, AYR, Reformation, Glossier, Allbirds, and Thinx. Some use it for nearly everything on their websites but the logo, and some use it for nearly everything, including the logo.
One of the remarkable features of startup minimalism is its flexibility. It can sell anything.
…The more you see branding like this, the more the individual data points seem to coalesce into a single mass.
More than just simply identifying the trend, Brooke’s article endeavors to understand its history and examine its implications. I found this passage about the implicit promise of startup minimalism particularly observant:
Simple branding also reinforces many startups’ pitches, which go something like this: They’re making great-quality products and selling them straight to you at a low price, because they’ve cut out the retail markup. They offer at-home try-ons and free return shipping, with the label pre-printed and included in your delivery. Not only does pared-down branding mimic the straightforwardness of the customer experience, but, as Critton points out, it holds the brand responsible for the quality of its service. There are no trimmings to disguise a shoddy product or user experience—unless, of course, startup minimalism has become that very trimming.
Brooke’s article is insightful on its own merits, but as an example of design criticism—and make no mistake, this is criticism—published in a non-design forum, it’s remarkable. Asking fundamental questions like “Why do all of these brands look the same, and what does it mean?” demands answers that are so potentially far-reaching, it’s almost an embarrassment that we haven’t seen this discussed much more exhaustively in design circles. Seeing it on Racked also highlights how the language of design can be made relatable to a non-design audience; Brooke’s prose is lucid and convincing and refreshingly light on technical jargon. My only complaint about the article is that there aren’t many more like it.
Few companies seem to be as uniquely devoted to the iPad as Astro HQ. Their AstroPad software turns an iPad into a Wacom-like graphics tablet, complete with support for Apple Pencil. (I wrote about it back in January.)
This week they’re announcing their first hardware product: Luna Display is a compact dongle that plugs into your Mac’s Mini DisplayPort or USB-C port and turns your iPad into a second monitor with refresh speeds to rival a direct wired connection. This instantly gives you more screen real estate either at the office or on the road.
One unexpected benefit of a wireless, touch-enabled secondary display is that you can also take it off of—and away from—your desk. As this video demonstrates, that means you can effectively run macOS on your iPad from, say, your couch. This is a great way to get access to those few Mac-specific features that the iPad doesn’t yet support.
The team is raising money for the initial production run right now via Kickstarter. The campaign has just begun but incredibly it raised over five times its target goal in under an hour.
I mean, really loving him. Loving him so much that you would join dozens of other artists to hand paint over 62,000 frames in a new biopic of his story, together bringing to life one of the most unique rotoscope-style animated films ever made. Just take a look at this stunning trailer to see the effect.
According to the website, “Loving Vincent” is a new movie made in a completely unprecedented way. It started with original, live action footage which was then translated into ninety key “design paintings.” These established the overall aesthetic of what a film would look like had it been entirely painted by Vincent Van Gogh himself. Then using those design paintings as a kind of style guide, ninety-five artists manually painted, on canvas board, the starting frame for each of the movie’s 853 shots.
Aside from sounding epically laborious, that methodology makes perfect sense to me. But the process from there on out truly surprised me. To create each subsequent frame, the artists would then paint directly over that scene’s original starting painting. After that new version was photographed, the artist would then paint the next frame, and so on and so on. The end result is some 62,450 captured animation cels—but only 853 oil paintings of the final frame in each shot. All of the interstitial steps were effectively lost, buried under countless new coats of paint. That’s a shockingly unflinching and fragile way of working, but it does seem to be a fitting tribute to the deeply analog nature of oil on canvas.
The film is due to be released in October. Until then you can explore the process at lovingvincent.com, or read coverage about the project at nytimes.com.
It frustrates me how much senseless waste we all generate on a daily basis simply by going about our business. Plastic bags, paper napkins, plastic cutlery, water bottles… it’s unnecessary and, maybe obviously but perhaps not imperatively enough, it’s also incredibly damaging to the planet. I’m just one person and can only do so much, but nevertheless a while back I decided to start carrying these three essential items in my work bag—my attempt at doing a tiny bit to help reduce all this consumption.
First is this OutSmart titanium spork. As the name implies, it’s a fork and a spoon, but it also has a little serrated edge that lets it do triple duty as a reasonably effective knife. Being made of titanium, it’s super lightweight as well as surprisingly strong. Additionally, it’s TSA-approved for your carry-on bags, and it’s healthier than shoving crappy plastic cutlery in your mouth. Just US$11 from Amazon.
Next up is this amazing Zojirushi stainless steel water canister. Any water bottle will do but this is the best one that I’ve ever owned. It keeps cold drinks cold for days and features a surprisingly effective lock mechanism to help ensure that its contents don’t spill all over your other belongings. This canister alone has reduced my plastic consumption immeasurably simply because it’s allowed me to stop buying bottled water—which is, in my opinion, maybe the most wasteful product of all. At US$28 from Amazon, it costs way more than I would recommend spending on a water bottle, but maybe ask for it at the holidays.
Finally, I also keep a simple, lightweight canvas tote bag rolled up in my bag. There’s nothing special about this bag at all. It’s exactly the same as any of a dozen canvas tote bags you might have gotten at some conference or event or fundraiser or whatever. But having this with me has allowed me to say “no thanks” countless times when cashiers offer to put my purchases in plastic bags. Of all the items I carry, this is the one that I’ve had with me most consistently and for the longest, and I like to think about the reasonably substantial pile of plastic bags that I’ve saved from the landfill as a result. Don’t buy your own—just wait until someone gives one to you for free.
Have a look at this adorable comic book cover featuring the version of The Joker that appeared in this year’s kid-friendly “LEGO Batman Movie.” Sure, he looks a little menacing here, but not much. This is, after all, a little kids’ toy. Plus that movie was super playful and fun, right?
Now take a look at the actual contents of this same comic book.
Something isn’t right here, wouldn’t you say? Inexplicably, the publisher wrapped a kid-friendly image from a PG-rated blockbuster movie around a comic book story of what appears to be truly gruesome horror. If you look carefully at the right-hand side there, I’m pretty sure that it shows an appallingly frightful version of The Joker fingering the rag-like skin of a disembodied face. That’s actually one of the most disturbing things I think I’ve ever seen in a comic book.
This came to my attention, I regret to say, the other day when my wife and I made the stomach-churning mistake of buying this comic book for our four-year-old boy—without examining its contents. We were completely aghast when we finally looked inside.
Surely there’s a disconnect here. This disgusting switcheroo can’t have been the intended result. Someone in some department somewhere messed up. My intention here is not to call them out on their error. I mean, it would have been nice if this hadn’t happened. But in the end this unfortunate episode is on my wife and me, as parents. Caveat emptor. We’ve all been warned countless times not to judge a book by its cover, and this time we simply didn’t heed that advice.
However, I think it’s worth asking, “Who are comic books for, anyway?” Because if this represents the mainstream of comic book storytelling today, it’s not an exaggeration to say that they’re no longer appropriate for kids.
You might think from that remark that I hold comics in low regard but just the opposite is true. I read lots of comics as a kid and I still read them today, though only occasionally. Whatever modest artistry that I bring to my profession I owe at least in part to my lifelong love affair with the medium.
In fact, when I was reading comics they were already getting more “grown-up”; the common refrain at the time was that “Comic books aren’t just for kids anymore.” I read Art Spiegelman’s “Maus” in its first edition. I also bought—and pored over—every issue of both Alan Moore’s “Watchmen” and Frank Miller’s “The Dark Knight Returns” as they were released. Some of these remain among the best things I’ve ever read in any medium. Comics in that era seemed bursting with new possibility. New kinds of stories were being told with a new level of visual ambition. The future seemed limitless.
Now, looking back at that time and seeing what has become of comics since, I can’t help but think that it all went wrong. There are still some wonderful, challenging, grown-up comics being made today, it’s true. But I think it can also be argued that the burst of innovation we saw those many years ago never truly benefitted the mainstream of comics the way many people thought it would. We never really got more of the likes of “Watchmen” and “Dark Knight Returns.” Or, at least, we got much, much more of what I found myself holding in my hands with disbelief this week: tedious soap operas teeming with self-seriousness and tasteless shock value. These comics aren’t for kids and yet they aren’t really for adults either. Instead they’ve become exactly what those who’ve never understood the medium have always accused them of being: an exploitation of that nexus between childhood and adulthood, schlock intended for unrepentant adolescent minds trapped in grown-up bodies. It makes me really sad.
However, for reasons too complicated to explain, for the past week or so I’ve been using an early 2015 model MacBook Pro. There’s no Touch Bar, the screen is not as sophisticated, and it’s both thicker and heavier. But you know what? It feels like an upgrade.
I say this for a bunch of reasons, but maybe the most socially significant of them is the fact that this older model has a keyboard that produces hardly any noise at all as I type on it. By contrast, my newer MacBook Pro’s abrasively loud keyboard has become a major annoyance in my work life. The clamorous, hard-to-ignore clickety-clack of its keys is so disruptive in live meetings and, especially, over conference calls (where the mic seems to home in on the specific frequency of the tapping) that it effectively makes the MacBook Pro with Touch Bar harder to use than other laptops. It doesn’t matter how great a piece of technology is when your usage of it is hindered by the irritation of your colleagues. I’ve been dealing with this all year and I’m tired of it.
The deeper problem with the new model MacBook Pro is, of course, its blithe reliance on USB-C as the only available type of physical port. A lot has been written about this but it bears repeating that it’s a pain in the neck. I’ve had to buy a host of adapters and dongles and now tote them along with me constantly, unnecessarily complicating an aspect of my tech life that, as a rule, should always be trending towards simpler.
Of course, the MacBook Pro with Touch Bar also uses USB-C for its power supply, a technically impressive feat that’s also a nontrivial hindrance. It makes me long for the halcyon days of MagSafe power adapters, which were so profoundly elegant that they still seem essential. MagSafe had become nearly ubiquitous by the time Apple conspicuously omitted it from this model. Between my office and home, I couldn’t even count the number of MagSafe adapters I own or have easy access to. Now I’ve got just one hateful USB-C power adapter and I have to carry it everywhere.
Suffice it to say that my 2015 MacBook Pro has MagSafe, older-style USB ports, and works with all of my devices (even my Google Pixel phone, which is itself USB-C-based). Using it as my primary computer feels like rejoining the world of the living.
Apple has a history of making bold leaps forward that also obsolesce popular technology—usually ports and media formats—and I’ve been on board with just about all of them. To my mind, these dramatic shifts work best when they bring with them demonstrable, near-term benefits to the user. When Apple omitted the floppy drive from its first iMac, it showed that network transfer of files was much more elegant—and faster. When Apple phased out the beloved FireWire port, it nudged users toward the far more widely available world of USB peripherals. When Apple ditched optical drives, it hastened the demise of physical media and spared customers the expense of that cumbersome hardware. And even when Apple retired the old, iPod-style Dock connector, it gave the world the infinitely better designed Lightning cable.
But after living with the MacBook Pro with Touch Bar for months now, I can only conclude that its “bold leap forward” is an ambition that leaves me cold. I just don’t see any immediate, material user benefit to consolidating on USB-C, at least to the premature exclusion of Thunderbolt, HDMI, MagSafe and the older USB standard. Between those four technologies, there are far more devices out there in the wild than there ever were of the older technologies that Apple defiantly obsolesced in the past. This makes life today much more difficult for more people than during any of Apple’s previous technology shifts. It’s true that there are some meaningful benefits to the Touch Bar itself (I’ve barely mentioned it here because I barely use it), but that feature is hardly contingent on the omission of these others. And it’s also true that USB-C is becoming more popular, but that’s a reality for another day. Today’s reality is that the MacBook Pro with Touch Bar could stand to inherit much from its immediate predecessors. My only hope is that Apple realizes this.
These videos were made by food stylist David Ma, whose skills apparently include filming his work in distinctive directorial styles. Each short movie imagines a simplified recipe through the lens of one of today’s most recognizable filmmakers.
Waffles rendered in the over-the-top bombast of Michael Bay…
Ceremonially precious s’mores à la Wes Anderson…
And spaghetti and meatballs ’sploitation, Quentin Tarantino style…