Subtraction.com is a blog about design, technology and culture written by Khoi Vinh, and has been more or less continuously published since December 2000 in New York City. Khoi is currently Principal Designer at Adobe, Design Chair at Wildcard and co-founder of Kidpost. Previously, Khoi was co-founder and CEO of Mixel (acquired by Etsy, Inc.), Design Director of The New York Times Online, and co-founder of the design studio Behavior, LLC. He is the author of “Ordering Disorder: Grid Principles for Web Design,” and was named one of Fast Company’s “fifty most influential designers in America.” Khoi lives in Crown Heights, Brooklyn with his wife and three children.
The three of us talked about the current state of criticism in design, a topic that regular readers know I’ve been interested in for some time. In fact, in a few public appearances recently I’ve been giving a talk on this very subject: about how discourse in our profession is dominated by practitioners—designers writing about design for other designers—and how that limits our ability to have the thoughtful and truly honest analysis that our craft deserves. Molly and Anne expanded brilliantly on my somewhat cursory assessment of the situation; they offered tons of revealing insights into the challenges of conducting critical discourse, the economics of writing about design, the pitfalls of commenting within such a small professional community, a path forward for both designers and writers, and more.
Our conversation lasted about an hour and was live streamed; you can watch an archive of it below. Huge thanks to Molly and Anne for joining me, to the amazing communications team at Adobe for putting the event together, and a special shout out to the folks at Stream Thirteen for their exceptional stage and streaming video production.
In his “Times Insider” column, Times historian David Dunlap runs through the long history of the news organization’s venerable nameplate logotype. He traces its subtle but, to many of its devoted readers, meaningful transformations over the years: key redrawings of its letterforms, the shocking removal of a once grammatically mandatory hyphen, and the summary dismissal of a traditional period at the end of the name which contemporary readers may be surprised to learn survived as late as the year “Sgt. Pepper’s Lonely Hearts Club Band” was released.
Dunlap’s writing for “Times Insider” could be safely categorized as “inside baseball” for enthusiasts of the Gray Lady, but this particular column is actually a great example of how good writing can make design palatable for a general audience. Dunlap’s narrative may seem esoteric—it starts with the birth of blackletter style calligraphy in the 7th Century—but it takes care to explain the historical context of this particular specimen of design in a human, relatable way. It also demystifies the arcane perception of this craft by featuring illuminating quotes from some of the stewards of the Times brand. In short, it was written for normals. I wish there were more writing about design like this in The New York Times—and elsewhere.
Internal design groups often have the challenge of explaining not just the value but the fundamentals of their contributions to the company. This is true even at a design-rich organization like Adobe, where the design teams for two younger businesses—Document Cloud, based on our PDF and e-signature products, and Experience Cloud (née Marketing Cloud), based on our adtech products—often compete for visibility with the longstanding Creative Cloud team.
I was struck by the ingenuity of the latter, though, when they produced this Adobe Design Marketing Cloud Biennial Report. It’s a beautiful, limited-edition, hardbound compendium of the group’s work, processes and staff, written for an audience of internal stakeholders to augment their understanding of, and appreciation for, the work that the design team does.
The report was conceived, designed and produced entirely by the same designers who work on the division’s products—usually on top of fulfilling those full-time responsibilities. As such, it’s an impressive labor of love, but the physical thingness of it deserves commendation too—especially inside a software company, where most of us don’t even know how to work the printers, much less where they’re located. By forging a finely crafted object that has real weight, that can be presented by hand, that reveals a tactile and emotional narrative with every page turn, the design team’s story is amplified in a way that a digital presentation couldn’t have achieved.
Even if I didn’t work at Adobe (and to be clear, I contributed nothing to this project) I would be impressed by this effort. It takes skills that are unique to designers—the ability to tell a story in a powerfully visual, immersive way, and to package it in an unexpectedly delightful experience—and puts them in service of making the case for design itself. If you work on a design team that you believe could be leveled up internally, this is a worthwhile strategy to emulate.
Earlier this year, Italian food conglomerate Ferrero used an algorithm to redesign the labels for its beloved sweetened hazelnut cocoa spread and sugar blast delivery mechanism Nutella. I know what you’re thinking: another example of data-driven “optimization” encroaching on the art of design. Well, in some ways it’s actually worse news: rather than relying on an algorithm to create something soullessly efficient, Ferrero used the technology to generate an almost inconceivable variety of creative expression. The project combined dozens of different patterns and colors to create seven million individually unique Nutella labels for the Italian market, which apparently sold like hotcakes—or at least, um, like a delicious spread that tastes great on hotcakes. The results are actually completely charming. I mean just look at these:
As pervasive as conferences and lectures are, especially in tech, it’s surprising that you don’t see a lot of products aimed at helping people become better public speakers. Logitech hopes to make a dent here with its Spotlight Presentation Remote, a simple handheld device that elevates the humble clicker—which Logitech claims hasn’t really changed much in two decades—to something worthier of today’s hardware and software.
Even at first glance, the Spotlight looks more elegant than any other presentation clicker that I’ve seen. It has just three buttons: a big “next” button that can’t be easily mistaken for the smaller “back” button just below it, and a “pointer” button that highlights and magnifies an on-screen cursor. That last feature requires Logitech’s own software, but they claim it works “on Windows and Mac platforms, and [with] Powerpoint, Keynote, PDF, Google Slide and Prezi.” Logitech’s software is also required for what strikes me as the Spotlight’s smartest feature: a timer that vibrates when the end of your session nears.
Unfortunately for me, the Spotlight’s fatal flaw is that it doesn’t work with iOS. For the past few months I’ve been delivering all of my presentations exclusively with Keynote on iOS. It’s been liberating to be able to leave my laptop at home and plug into the venue’s AV system just with my iPad (I’ve even been doing much of the writing and design of my presentations solely on my iPad, too). It’s been smooth sailing except for one instance when I was using the Keynote app on my iPhone as a remote control for Keynote on my iPad; halfway through my presentation Bluetooth failed on me and I lost the connection. So a more robust solution like Spotlight would be welcome; maybe in a future revision.
You can learn more about Spotlight at logitech.com. If the lack of iOS compatibility is not a deal breaker for you, you can buy one in “slate” or “gold” at amazon.com (affiliate link).
As you can imagine, I was heartbroken to read yesterday that Adam West passed away at age eighty-eight. People who know me from Twitter might recall that I’ve been using a low-res avatar of Adam West as Batman since forever, and in fact for many years I used West’s picture on nearly every social network. There wasn’t a specific day when I formally decided to sport that picture so consistently; I think I just picked it for lack of a better image back in the AOL Instant Messenger days and, keeping to the dictum that “people will remember you better if you always wear the same outfit,” I’ve more or less stuck to it ever since.
The fact that I don’t have a great story for that avatar belies the fact that, maybe more than I knew or cared to admit, Adam West’s portrayal of Batman meant a lot to me. I remember being fascinated by reruns of the 1966-1968 television show as a very young kid, even as the masks and outlandish costumes frightened me. When I was in grade school in suburban Washington, D.C., I remember trying vainly to tweak our TV’s antenna in order to pull in a stronger signal from a Baltimore television station that ran episodes on weekday afternoons. Baltimore was the next market over so I could just barely get a very snowy picture, but it was enough for me just to be able to soak up every episode that I could.
Some Batman fans look back with embarrassment at that television show, but in my mind it made the imaginary world of comics so vivid and exciting. I really loved it. The camp satire was lost on me at the time, but the earnestness of West’s portrayal of that character was perfect for me. He took this idea of a ridiculously attired, overgrown boy scout completely seriously, and even if I did notice him stealing a knowing glance at the camera for the benefit of grown-ups, it never felt dishonest to me. West was just as serious about the comic book conceits as he was about the camp, which is why when you watch those episodes today, they’re still enormously entertaining. I find them delightful, packed with cleverness and, more importantly, lovingly made.
In retrospect, I realize now that “Batman” was also surprisingly graphical; each episode was rife with strikingly bold colors, dramatic camera angles, those unforgettable “WHAM! BOFF! POW!” fight graphics, hand-lettered signage throughout (as warmly documented over at batlabels.tumblr.com, to my endless delight), and a marvelous translation of the comic aesthetic in the opening credits.
There’s also the matter of the opportunistic, quickly produced feature film spun off from the show—essentially a ninety-minute episode. I watched it yesterday with my kids as a kind of remembrance of West on the occasion of his passing. It was the first time I’d seen it in decades after having watched it endlessly as a kid, and when it started I almost wondered if perhaps the movie hadn’t had its title sequence updated, so contemporary was this image:
In fact, the rest of the opening title sequence is terrific in its compositions and typography. You’ll have to indulge me as I share screen captures here, because I think their bold use of spotlighting, key colors and Franklin Gothic are my favorite thing this month.
I don’t want to make too much of the idea that West essentially inspired me to become a designer, even though there’s a direct line to be drawn between what I enjoyed so much about that show and how I would make my career later in life. I wouldn’t even want to overemphasize how much West contributed to my lifelong fascination with the Batman character—maybe the most elastic and adaptable of our contemporary myths. In the end, I think the greatest thing that West did was he gave me—maybe you, too?—something to love as a kid, something entirely engrossing and captivating, and something that turned out to be wonderfully durable when we grew up. I don’t make a habit of watching those old episodes, but once in a while when I see them, I’m so grateful that they were made with so much respect for both children and adults, and at the center of all of that was Adam West’s timeless portrayal of the character. I’ll be forever thankful to him for that, and I wish him a peaceful rest.
To my regret the only movie that I got out to see in theaters last month was the overwrought “The Lost City of Z.” Reviews were largely glowing for James Gray’s old Hollywood-style adventure-in-the-Amazon tale but I found it turgid and, worse, hypocritical; it pays significant lip service to treating indigenous people with respect but devotes practically zero screen time to giving them a voice. You can skip this one.
This was particularly disappointing because it was a pretty slow month for moviewatching. I only saw a dozen films, with a good number of them kids’ fare that I watched with the family. Aside from re-watching a few favorites, none of the other movies I watched really moved me very much. Oh well, there’s always next month.
“Jennifer’s Body”: It had all the elements to be a cult classic, but it really, really wasn’t.
Last fall, Adobe, Apple, Google and Microsoft jointly announced a specification for variable fonts, a potentially dramatic reinvention of the way fonts are digitally constructed and delivered. In his announcement post at Typekit’s blog, Adobe Head of Typography Tim Brown imagines “a single font file gaining an infinite flexibility of weight, width, and other attributes without also gaining file size.”
This will essentially allow a much greater range of typographic options for designers like you and me. For instance, if you need to condense or extend the widths of certain characters, adjust a font’s x-height slightly, or even subtly modify the sharpness or roundness of letterforms, a variable font will allow you to do that while tapping into the built-in rules and logic encoded by the type designer. The end result will be richly expressive typography customized in accordance with the aesthetic intentions of the designer of the typeface itself—no more ill-informed modifications using the brute force of vector editors. This animated GIF by Dutch type designer Erik van Blokland hints at some of the possibilities.
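To make that a little more concrete, here’s a minimal sketch of how a designer might tap those axes in CSS once support arrives; the font name, file name and axis values below are hypothetical, not from any real typeface, though “wght” and “wdth” are registered axis tags in the OpenType specification:

```css
/* Load a single hypothetical variable font file */
@font-face {
  font-family: "Example Variable";
  src: url("example-variable.woff2") format("woff2");
}

h1 {
  font-family: "Example Variable", sans-serif;
  /* Dial in exact positions along the weight and width axes
     the type designer has exposed, rather than distorting
     outlines by brute force in a vector editor */
  font-variation-settings: "wght" 640, "wdth" 85;
}
```

The key point is that those in-between values are interpolated according to the type designer’s own masters, so every setting stays within the design space they intended.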
The specification is just the first step towards building this glorious future, though. I talked to my colleague Dan Rhatigan about what it will take for us to get there. Rhatigan is a Senior Manager for Adobe Type and a veteran of the industry, having worked as a typesetter, graphic designer and typeface designer and teacher, including stints in London and New York as Type Director for Monotype. He is a key figure in this critical early stage for variable fonts, having participated in the development of the specification and also now spending time helping to turn its promise into reality.
Khoi Vinh: My initial understanding of the variable fonts announcement was that it’s multiple master fonts done right. Maybe we can start there—what did the original multiple masters fonts specification get wrong?
Dan Rhatigan: Variable fonts build on the ideas of a couple of older font formats: not just Adobe’s Multiple Masters but also Apple’s TrueType GX.
It’s a shame that Multiple Master fonts never lived up to their potential as commercial products, because the idea of letting users pick their own preferences within a given typeface’s range of options was a great one. In reality, though, it was a clunky way to work. As a user, you got a file that contained all these possibilities, but you had to use Adobe Type Manager (I think I may have also used a QuarkXPress plug-in) to play around and choose your desired mix of options, then spit out a new font file with the results. You couldn’t get a live preview in your documents, and then you had to manage all these files you’d create. I played around with Multiple Masters when I worked in publishing, and it was a hassle to keep track of the custom styles everyone on the team would create.
There’s an elegance to this new variable font model: you use a single file all the time, and rely on your browser or software to generate the desired result out of that file. The trick now, though, is getting all the software we use to implement good controls for making all those possible adjustments.
The great legacy of Multiple Master fonts, though, has been that they changed the way type designers work. Most of us type designers have been designing big families using the organizing ideas of the technology: you draw the key weights or widths or other styles, and allow your font tools to interpolate the styles in between. So at least we have an entire generation that learned to think about designing type along the same principles we need to produce variable fonts.
Is it right to say that there hasn’t been a lot of innovation in type technology since then, or have there been significant changes happening out of the public eye?
OpenType was the last really major development in type technology—twenty years ago! Even web fonts have been more of a revolution in how we use and deliver fonts, rather than in the fonts themselves. This is a big challenge when it comes to the major leaps we’re seeing now, like variable fonts, like SVG color fonts (both considered extensions of the OpenType specification). Fonts are software, after all, and the pace of development was bound to catch up with the increasing sophistication of all our other software. It’s really the demands of the faster, more powerful digital world around us that are requiring our fonts to become smaller and our means of delivering typeface designs to people to become more sophisticated.
In software terms, twenty years is basically an ice age. Are we in for a major disruption with variable fonts? What’s going to change?
We might have a major disruption, depending on how willing other pieces of software are to adapt their typesetting controls. A lot depends on how well the implementation helps people understand the new capabilities of the fonts. Applications might choose to stick with a simple mode that just presents a variable font like it’s a typical font family, and ignore the potential of working with the in-between areas of the design. What I hope, though, is that apps will see that this could be a real opportunity to handle type differently, with better automation of things like copyfitting or size-specific adjustments, or giving users a way to create and manage their custom type styles.
I think the first real disruption will happen with the people tinkering with CSS and variable fonts, who will be the ones to explore the possibilities a little more freely at first. Besides, the compact file sizes will make it much easier to deliver web fonts if they’re variable fonts, so we may finally see a broader range of styles on a page, and much more variety for sites with Chinese, Japanese, or Korean text.
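As a rough illustration of that file-size win, a single variable font file can stand in for what used to be a whole family of separate downloads. This is a speculative sketch—the file name and weight range are placeholders:

```css
/* One variable font file covers the entire weight axis */
@font-face {
  font-family: "Example Variable";
  src: url("example-variable.woff2") format("woff2");
  font-weight: 100 900; /* map the CSS weight range onto the axis */
}

/* Each style below is interpolated from that same single file,
   where previously each weight would have been a separate file */
body   { font-family: "Example Variable", sans-serif; font-weight: 400; }
strong { font-weight: 700; }
h1     { font-weight: 850; }
```

For a page that uses four or five weights, that’s one network request instead of four or five, which is exactly the kind of saving that matters for large CJK fonts.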
First, though, there is still work to do at the infrastructural level: enabling support for variable fonts at the OS level, and then at the level of app-specific rasterizers, and then UI controls…
There’s a lot to unpack there that I want to get back to. But first let’s talk turkey: how will this change the way graphic and UX/UI designers use fonts? Will there be a learning curve?
There are already lots of fonts available, but not many ways to use them. The easiest way to play with variable fonts is to visit axis-praxis.org using WebKit or Chrome Canary in the latest version of macOS. It’s a test site using a couple of variable fonts already lurking in macOS, as well as a number available through various GitHub projects.
Other than building web pages to test the fonts in experimental browser builds, there may not be many ways to work with variable fonts until other pieces are in place toward the end of this year, and then in 2018.
What are those major pieces? Is it principally support from the major operating systems that we’re talking about?
Exactly that. The specification for variable fonts is relatively fresh, so work on the rasterizers—the underlying support for interpreting the fonts—is underway, and then that has to be integrated into OS updates, so that browsers and other apps can draw upon that support.
Okay, let’s suppose that that comes along next year. You said that we might see some applications present variable fonts simply, perhaps not exposing the full richness of what they can do, and maybe others will provide an interface that handles type very differently from what we’re accustomed to today. This sounds to me like we could have a situation where there are a thousand different font selection user experiences, with each app handling it radically differently, emphasizing various aspects of the standard. Is that right? Or is there a possibility that we might get a single, uniform user experience for handling type? And I present these options without judgment—clearly there are pros and cons to both.
However, the likelihood of a variety of new approaches to application UI both excites me and scares me a little. It would be great for applications to rethink how they can help users improve their typography, but I know that people often resist change to controls that they’ve used for ages. I think a delicate balance is needed: an updated experience that leads people to understand the capabilities instead of just dumping new options in front of them. But I think there’s also a lot of potential in accessing some of the capabilities of variable fonts under the hood—automating use of axes (when available) to control optical sizing, or weight and width depending on context in a document, maybe.
That’s an interesting possibility: the idea that all of these new options might actually impose a responsibility on application UI designers to help users improve their typography, not just simply give them access to fonts. Is there anything in the variable fonts spec that can alert a user when a particular variation has gotten too tall, too wide, or just plain ugly? Should there be? I’m only half joking.
Sort of an in-font alarm system? No, but there are built-in controls. Type designers need to specify minimum and maximum values for any design axis in a font, so some reasonable parameters are inherent in the font format. Anyone who uses the font, however, will still need to show some good judgement about what they use, within that “safe space” determined for each typeface. This increases the type designer’s responsibility, though, to make sure that they anticipate and resolve what can happen in the design space where different axes may intersect. That is, if there are axes for weight and width available to the user, then the designer needs to make sure all the possible outcomes that are allowed will work well.
How about the business implications? This is a lot more work for type designers. Are we going to see fonts get more expensive as a result?
Honestly, the business outcomes are still anyone’s guess. Every type designer I’ve talked to about this has had a different theory, but since we’re a ways off from widespread support, it will be a while before foundries test the waters and some patterns emerge.
Fair enough, but if you’ll allow me I’m going to press you a bit on the state of font licensing, which in my opinion serves neither type designers nor type customers particularly well. Most font licenses are restricted to a finite number of devices, which seems antiquated when fonts—like most contemporary software—should really be a kind of service, distributed on a usage basis via the cloud. Now we’ll soon have variable fonts, which sounds like a fundamental sea change in the mechanics of the technology—which in turn suggests a once in a generation opportunity to rethink the way fonts are licensed. Put another way: does it make sense to license next generation fonts with last generation terms?
That model of licensing is just one of many that are used now, though. Web fonts and apps, in particular, shook up a lot of the industry’s assumptions about licensing already, since actual font data is distributed. Subscription services—like Adobe Typekit, for instance—have already been gaining ground. Variable fonts will surely add another wrinkle to licensing models, since they have so much potential capability. So far, I’ve heard plenty of theories and ideas about how they might be treated, but no consensus, since no one has yet seen what the value of the fonts in use will really be.
So a lot is up in the air, which is exciting but also somewhat fraught. Do you believe that the value that variable fonts brings to designers makes their success inevitable? How would you rate the chances?
The big players have plenty of reasons to want variable fonts for their compression and programmability, so one way or another they will at least find their niche. While there may also be lots of potential on the web, I suspect that how apps handle variable fonts for creative work will make or break them as widely used commercial products. It will probably come down to UX decisions that make them vital design tools rather than a specialized technical solution.
In general I’m pretty excited about smart speakers and voice assistants, but I’m not sure what to make of HomePod, Apple’s newly announced entry into this category. It won’t be out until December and relatively little information has been released about it. In particular there’s not a lot of detail on whether it will have a rich ecosystem of apps or third-party integrations, which has thus far been a somewhat useful measuring stick for Amazon Echo and Google Home.
However, I do think that Apple has gotten at least one thing right: the fact that Echo and Home have somewhat oversold the idea of these devices as assistants that can do anything for you. It’s true there are thousands of skills available for Echo, but that number is a red herring, as most of them are useless and/or abandoned. This article at Recode claims that even if a user discovers a skill and enables it, “there’s only a three percent chance, on average, that the person will be an active user by week two.” And if you look carefully at the weekly emails that Amazon and Google send out touting their products’ newly added capabilities (I own both so I get both), you’ll immediately notice a pattern: most of these incremental “skills” boil down to voice-based searches that aren’t exactly earth-shattering (e.g., “Tell me what time ‘Better Call Saul’ is on”).
The truth is that what these devices are best at is playing music from streaming music services. In my experience, this is far and away the most valuable task that they can complete reliably. In fact, you might argue that music is all that Amazon Echo and Google Home are good for.
I’m not sure if Apple would go as far as that, but the apparently extensive engineering that they’ve invested in making HomePod a serious music device shows us that, at the very least, they understand how essential it is to get that one use case right, to hit it out of the park, even. I think that’s smart.
What I’m less sure is smart, however, is HomePod’s relatively rich price point of US$349. Most people compare that to the ~US$150 price point of the fully fledged Echo and Home devices—and of course the even cheaper Echo Dots—and feel that Apple has missed the mark in a pretty bad way. They may be right; I do worry that at that price point most consumers will take a pass on HomePod. But then again we’ve had this conversation almost every time Apple releases a new product line, whether it was the iPod, iPhone, iPad, Apple Watch etc. Apple products are always more expensive, sometimes surprisingly so, and yet they tend to succeed nevertheless.
My belief is that these voice assistants are still at the “Palm Pilot” stage of their evolution. That is, they’re better and more useful than earlier products in the category—you could say Sonos speakers are the StarTAC to Echo’s Palm Pilot. But they’re not that much better, and it will take an iPhone-like breakthrough to really demonstrate their potential.
As of now I can’t be sure if HomePod is that breakthrough or not. What throws me more than anything is that I had anticipated that Apple’s speaker would come with a screen because in my estimation, tying in a visual interface to the voice UX is critical for making these devices truly useful. I was genuinely surprised that HomePod lacked this but then I considered that perhaps Apple is counting on the screens that we already own and that they already dominate. Imagine issuing a Siri command to HomePod and getting a corresponding visual interface on your iPhone, Apple Watch and/or your television (with an Apple TV attached). These devices could instantaneously switch into “Siri mode,” providing truly rich responses to what users input by voice. That sounds incredibly powerful and like a brilliant way of reinforcing the ecosystem that Apple has already built in our homes and lives. It’s the kind of thing that no other company could do, which tends to be a leading indicator of when Apple succeeds. We’ll see, though; I’m looking forward to getting my hands on a HomePod later this year.