You need to be careful where you step at our house because my twin five-year-old boys are crazy about LEGOs and they’re underfoot everywhere. That makes us ideal customers for Stüda, a smartly designed, LEGO-compatible furniture line created by Italian studio Nine. It’s an ingenious way to embrace the chaos that LEGOs introduce to the home environment, and luckily the pieces aren’t bad looking at all.
More information on Stüda furniture at archdaily.com. If you’re a LEGO enthusiast yourself, don’t miss this post from January highlighting some wonderfully designed LEGO letterforms. Also, enjoy this photo of my twins posing with their LEGO minifigure counterparts, which I made from foamcore and construction paper for their birthday party not long ago. All those skills learned in my foundation year of art school finally came in handy.
The extensive and ambiguously titled exhibition “David Bowie Is,” which originated at The Victoria and Albert Museum in London but is now running through 15 July at The Brooklyn Museum in New York, gathers a ton of artifacts from the iconic musician’s many decades as an entertainer into one massive career retrospective. There are four hundred or so objects on display, including countless photographs, original album art, Bowie’s own drawings, sixty original performance costumes, dozens of samples of handwritten notes and lyric sheets, and more.
It all amounts to as complete an immersion into Bowie as you could ask for, but as with most everything revolving around the curiously unchallenged legacy of this singularly talented artist, it’s not particularly penetrating or surprising. If you’re a longtime fan, you’re not likely to discover new sides of Bowie—or even less well-known sides. (Tin Machine, anyone?) Still, for those already familiar with his œuvre it makes for an enjoyable if not particularly edifying afternoon out; for those who are new to his work, it’s probably a pretty good primer.
The exhibition itself is designed thoughtfully and executed with a fair amount of technologically forward-leaning imagination, especially the audio component. Each visitor is issued a pair of over-the-ear headphones (Sennheiser is a prominent sponsor of the show) attached to a Bluetooth receiver that automatically plays audio based on your specific location within the exhibition halls at any given time. Step towards one artifact and you might hear one of Bowie’s many immortal songs; step towards a different one and you might hear an excerpt from his appearance on an old TV show synced with a video projected on the wall. Everything changes automatically; all you need to do is walk and look.
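The location-keyed audio can be approximated with a simple nearest-beacon rule: whichever beacon’s signal is strongest determines what plays in your headphones. This is purely an illustrative sketch, not the exhibition’s actual system; the beacon IDs, signal-strength values, and track names below are all hypothetical.

```python
# Illustrative sketch of proximity-triggered audio selection.
# RSSI values are in dBm; a value closer to zero means a stronger
# signal, i.e., the visitor is standing closer to that beacon.

def nearest_beacon(rssi_readings):
    """Return the ID of the beacon with the strongest signal."""
    return max(rssi_readings, key=rssi_readings.get)

def select_track(rssi_readings, track_for_beacon, current_track=None):
    """Pick the audio track for the closest beacon; keep playing the
    current track if that beacon has no assigned audio."""
    beacon = nearest_beacon(rssi_readings)
    return track_for_beacon.get(beacon, current_track)

# A visitor standing closer to one display case than another:
readings = {"costume-case": -48, "video-wall": -70}
tracks = {"costume-case": "Ziggy Stardust", "video-wall": "TV interview excerpt"}
print(select_track(readings, tracks))  # → Ziggy Stardust
```

The key property the sketch captures is that the switching is entirely implicit: the visitor never issues a command, which is exactly what makes the experience both seamless and, as described below, occasionally frustrating.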
This coordination of exhibits and audio is technically impressive but has some unintended consequences too. If you walk up to an artifact that interests you, you may find it difficult to actually read the curator’s notes while listening to a voice in your head that may be saying something completely different. Your choices are to wait until the audio is done, pull off your headphones and miss out on that content, or fumble with the receiver to find the pause button. One way or another, it’s at least momentarily discombobulating.
More significantly, if you enter the exhibition with companions, within moments you’ll realize that the experience is so tailored to your current proximity as an individual that there’s little sense in the group keeping pace with one another. Two people standing side by side may be listening to two entirely different things, so why stick together? To actually share the experience requires pulling off your own headphones, getting the attention of your companion and urging them to pull off their own, too. It’s an awkward ritual, and it gets annoying for everyone if it’s repeated too many times.
The overall effect struck me as a disappointing hint of what the future might look like, not just for ostentatious tributes to classic rock stars but for life in a technological society too. This kind of automatically playing audio keyed to your location is a decent preview of how augmented reality will function unless it’s done much more thoughtfully: an alienating combination of precision targeting and clumsy relevance. The experience is customized for your data points—your position in space, your implied interest in certain content—but it’s not necessarily in tune with what you might actually want at any given moment.
It’s also surprisingly isolating. At one point I took off my headphones to survey the exhibition space, and what I saw was a room full of people immersed in their own headphones, more or less oblivious to one another’s presence. That’s not to say it was silent; to the exhibition designers’ credit, speakers were piping Bowie songs throughout the space. That offered a patina of human activity in what would otherwise surely have been an eerie quiet, because there was no talking, no discussion of the experience we were all ostensibly sharing. If you go to museums for the interesting discussions they inspire, this might not be for you. That old cliché about feeling alone in a crowd never felt more real.
And it was exacerbated by the museum’s restriction on cell phone service (which supposedly interferes with the headphones’ ability to connect with location beacons via Bluetooth) and an inane prohibition on photography. That latter rule struck me as particularly ironic given how media-aware Bowie’s entire approach to celebrity was, almost from the very start. If ever there were a rock god made for Instagram, it was David Bowie.
It’s not clear to me that the curators of this exhibition intended to leave so little for its visitors to actually do, but it’s worth considering nevertheless that this might be a likely if not inevitable outcome of immersive media. On the one hand it isolates you from your companions in the real world; on the other hand it abets restrictions on your own technology and therefore your own ability to participate in the experience. Visitors to “David Bowie Is” are socially discouraged from sharing their experiences within the exhibition and they’re also officially prohibited from sharing what they see, hear, learn and think about with the world outside of it (at least in the moment). What’s left but just to consume what’s put in front of them, passively? David Bowie himself would’ve hated that.
Even those of us who try to be conscientious about our waste would be hard pressed to answer the question “What happens when we recycle?” This exceptionally informative episode of the podcast 1A from WAMU and NPR looks more closely at recycling as a concept, as a practice, and as an industry.
Host Joshua Johnson finds that while two-thirds of Americans have recycling bins in their homes, just over a third of Americans’ trash actually gets recycled. That’s not just a result of individual action (or lack thereof), though how we each personally think about consumables is important. It also comes down to how producers of waste—companies, manufacturers, and retailers—have come to rely on the application of a recycling symbol on a package to excuse otherwise environmentally detrimental practices. Our addiction to online shopping and having goods shipped to us, for example, now consumes so much cardboard that it essentially cancels out the paper saved by the dwindling consumption of newspapers. And more and more products are being shipped in packaging that is harder to recycle than before. Add to that the shocking (to me) revelation that a lot of recycling advocacy is funded by companies that own landfills and stand to benefit from their use, and it becomes clear that recycling as a proposition is complex and not necessarily a net positive.
It’s not difficult to imagine a role that designers can play here. Of course, designers of consumer packaged goods have the opportunity to positively influence how companies think about the boxes, bottles, and cans they produce. But the whole recycling “ecosystem,” if you will, is so opaque and has been so poorly understood that it seems ripe for a designer to help clarify its intentions (e.g., emphasize reduction and reuse before recycling), shed light on its process, and provide better guidance on how to positively contribute to it. One might even argue that the usefulness of the now ubiquitous recycling logo has come to an end, and a new design system is needed. If you’re interested in these issues at all, I highly recommend listening below. You can also learn more at the1a.org.
Amazing footage of New York City in 1911, from the archives of The Museum of Modern Art. It starts on what appears to be the Staten Island Ferry, docks near the Battery, then goes on a street tour of several neighborhoods in Manhattan. The footage has been altered in two subtle but powerful ways: the normally heightened playback speed of film from this era has been slowed to a more “natural” pace, and a soundtrack of ambient city sounds has been added, subtly timed with the action on screen. The result feels more visceral, more relatable, and the early 20th century seems not so distant from our experience today after all.
Apple’s AirDrop is a huge timesaver for me, especially when I’m reading a web page on my iPhone and want to switch over to another device. I find it’s faster to tap on the share icon and then AirDrop the page to my Mac than it is to open up Safari, click on the tab overview button and then find that page listed amongst my iCloud tabs (though I do use that method regularly too).
I’m a Safari user but sometimes I would rather AirDrop a page to Firefox or Chrome instead. This is typically a multi-step process: first, let AirDrop open it in my default browser, then copy the URL and switch over to my alternative browser, paste the URL and hit enter.
That can all be reduced to a single click with Bumpr, a utility my friend Scott Ostler and I built that lets you choose which browser to open up any given link with, on the fly. I made the video below to show how it works. The window on the left is a view of my iPhone, where I open Twitter and click on a link to a story at The New York Times. Once that page is loaded, I tap on the share icon and AirDrop the page to my MacBook Pro. On the desktop you’ll see the Bumpr menu appear instantly where my cursor was at rest. At that point I just choose Chrome and the link opens in that browser immediately. Handy.
Being able to switch easily between browsers like this is becoming increasingly important. Not only does it allow you to segregate the browsing you do for work from the browsing you do for your personal business, but it also lets you minimize (or at least distribute) the information these browsers are collecting on you. And, given the recent sentiment around breaking Chrome’s potentially damaging, Internet Explorer-like lock on the browser market, Bumpr is a great way to wean yourself off of dependence on Chrome or any other single browser. Get it on the Mac App Store or find out more at getbumpr.com.
Each time I get to see a movie in theaters I try to make it count. In March, I made a calculated bet by going to see a little-noticed but highly praised indie flick called “Thoroughbreds” from playwright and first-time filmmaker Cory Finley. It tells the tale of two teenage girls in wealthy New England, portrayed with a pitch-perfect mix of angst, insouciance and privilege by Olivia Cooke and Anya Taylor-Joy. Together they contemplate a vicious murder and in doing so ensnare a hapless local loser played, with nuanced care and unintended sadness, by Anton Yelchin in his very last role. The movie’s inward-gazing air of dread is so effectively realized, and its camerawork and pacing so confidently executed, that you wouldn’t know Finley had never been behind the camera before. It’s very, very good and is still playing in some theaters, so go see it if you can.
Actually, I did get out to theaters two other times, once for “Game Night,” because there was literally nothing else worthwhile playing and I had a free night. It’s an absurd farce that is maybe as good a definition as any of a low-stakes good time at the movies. I also saw “The Big Bad Fox and Other Tales,” an animated import from France. That was an outing with my kids; we went to see it as part of The New York International Children’s Film Festival, so it feels like it doesn’t really count as a theatrical outing of my choice. However, I won’t deny that it was thoroughly delightful.
I also watched seventeen other movies last month, all on video. Here is the full list:
“Seven Days in May” From a time when movies weren’t embarrassed to read like airport paperback novels.
Last week technologist Dave DeLong wrote a smart piece on his blog called “If iPads Were Meant for Kids” in which he argues that Apple’s signature tablets are less than ideally suited for younger users and their parents. His post is essentially a rundown of improvements that he suggests for the platform, several of which are ingenious. Here are a few of my favorites:
A centralized system for content ratings that can inform third-party apps. Parents would be able to allow only G and PG media, say, and every app on the device could register that setting and adjust its own content settings accordingly.
Timer settings that can restrict kids’ device sessions. Parents could set a device to lock after a certain amount of time and as the limit nears, the iPad could flash warnings to the child before shutting them out. Parents could also restrict usage during certain hours, e.g., during weekdays or evenings.
The ability for parents to install apps on the child’s iPad remotely. This would contrast with the current method in which kids request permission to install an app, the benefit being that it would minimize the child’s time spent within and exposure to the App Store.
Options to disable both incoming and outgoing iMessage and FaceTime communications except with certain approved contacts. There are third-party solutions that accomplish similar goals, but the ubiquity of Apple’s text and video messaging platform makes it so much easier for trusted contacts to communicate with children. A whitelist feature like this would be a huge enhancement.
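The first of these ideas—a single parent-set rating ceiling that every app on the device consults—could be modeled roughly as follows. This is a hypothetical sketch only; Apple provides no such shared API, and all of the names here are invented for illustration.

```python
# Hypothetical model of a centralized content-rating setting that
# third-party apps could consult. The ratings are ordered from most
# to least restrictive; an app checks each piece of media against
# the parent-configured ceiling before showing it.

RATING_ORDER = ["G", "PG", "PG-13", "R"]

def is_allowed(content_rating, parental_max):
    """True if content at `content_rating` falls within the ceiling."""
    return RATING_ORDER.index(content_rating) <= RATING_ORDER.index(parental_max)

# A parent allows media up to PG; every app applies the same setting:
print(is_allowed("G", "PG"))      # → True
print(is_allowed("PG-13", "PG"))  # → False
```

The appeal of the centralized version is exactly what DeLong describes: the parent sets the ceiling once, and every app inherits it rather than each app maintaining its own controls.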
DeLong’s other ideas are also worthwhile. You can read the full post at davedelong.com.
The extent to which one can easily imagine a multitude of enhancements to the iPad for various user groups is indicative of the device’s unique circumstances. On the one hand, Apple sells more iPads each quarter than it does Macs and the business is on an upward trend. On the other hand, it’s clear that lots of different types of users could benefit from more specialized iPad features—not just children and parents.
Apple’s focus last year on professional iPad users and its very recent efforts to make the iPad more appealing to education users demonstrate this. The company’s challenge here is nontrivial: it will need to prioritize among several different kinds of highly valuable users in the short term—but ultimately success may lie in building deep experiences for all of them. It’s actually pretty exciting to think about; good iPad software—apps that strike that special balance between ease of use, portability, and raw power—is for me the true sweet spot for what computing can be.
The format of the Phil Patton Lecture is unusual: it begins with a featured speaker who lectures on a topic at the intersection of design, research and writing, followed by a respondent who offers a counter-argument.
This year’s event was hosted by writer and filmmaker Adam Harrison Levy. The featured lecture was given by Natasha Jen, partner at Pentagram in New York, who talked about “The Designer as Critic.” At the core of her talk was a skepticism about the merits of the “design thinking” approach to problem solving. In other lectures, Jen has stated her belief explicitly: “Design thinking is bullshit.”
I presented a response which might best be titled “In Defense of Design Thinking, Which Is Terrible.” I’ve rewritten my speaker notes in more readable form below. It combines many of the themes in talks I’ve been giving for the past year alongside new ideas about design vernacular, democratization and more. You can also watch video of both lectures at designresearch.sva.edu.
It’s an honor to be on stage here, taking part in the Phil Patton lecture. What a wonderful legacy he left.
Also, it’s such an honor to be on stage with Natasha, of whom I’ve been a fan for a number of years. Thank you, Natasha, for your address. I’m going to try my best to live up to it with some remarks of my own.
What to do with design thinking? Honestly, I can take it or leave it. There are some great things about design thinking, but it also has real downsides.
It can be superficial, it can be misleading, and it can produce bad design. (That last concept, “bad design,” is an idea I’d like to come back to in a moment.) Even so, design thinking is still a useful lesson in how we, as designers, think about the democratization of our craft.
Before I dive too far into design thinking though, I want to talk first about technology, coding and engineering. You can’t talk about design without talking about these things. I won’t be making an apples-to-apples comparison, of course, because I’m going to be talking about engineering very broadly, and design thinking is relatively narrow—it’s just a slice of design, not the entirety. Nevertheless, engineering is useful as a proxy for discussing design being done by non-designers.
How many people here know how to code? Okay, a sizable minority of you.
If you’ve ever tried to learn how to code without being trained in a computer science program at college, you know that there’s no shortage of educational resources out there for you.
For example, you could go to a place like Khan Academy online, where you can learn to program for free, forever. Amazing.
And Apple has a wonderful product called Swift Playgrounds. It’s an app for your iPad but it’s really an environment where kids can learn to program. Apple is of course invested in getting as many people to use their technology products as they can, and in fact they promote their Swift programming language with this page on their site, with the headline “Everyone Can Code.”
“Everyone Can Code” is an interesting idea. The implication is that there will be a lot of good code out there—but also, inevitably, a lot of bad code. In a world where everyone can code, not all code will be good.
It’s worth noting though that engineering as a discipline, as a trade, as a profession is largely unthreatened by the idea of bad code. In fact, you could say that the prevalence of bad code has been a boon to the world of engineering. In spite of all the bad code being written out there, the discipline is thriving.
This is due in part to the fact that engineering has come to be widely distributed. It’s everywhere now, in everything, and that has helped establish a cultural comfort with engineering, with its tools and, importantly, with its vernacular.
We all speak engineering now. Not just words like “gigabytes,” “megahertz” and “RAM.” Those are terms that we’ve adopted in order to better describe technology.
But we’ve also adopted technology terms as a way of describing our own world. Words like “reboot,” “bandwidth,” “offline” and “beta.” We use these words not just to talk about modern life, but also to cement the relationship between technology and our daily life.
Even numbers, which theoretically have no meaning, are influenced by this relationship. When we say “1.0” and “2.0,” we are connoting specific ideas and meanings derived from tech.
In fact, you could say that bad engineering, just like good engineering, has helped turn technology into the most powerful force for change in the 21st century. Engineering has been incredibly democratized and it’s been good for engineers. Today’s engineers are in greater demand than ever.
And yet design—and designers—seem perpetually threatened by democratization. I’ve been a designer for two and a half decades and I’ve seen this again and again.
We’ve talked for years about accreditation, the idea of licensing designers and regulating design. This would require designers to pass the equivalent of a bar exam in order to work professionally, thereby controlling the gates of who gets to practice design.
We bemoan services like 99designs, a marketplace for design services where designers can bid on small projects. What we fear from sites like this is downward pressure on the economics of design.
And with great passion, we’ve fought the custom of spec work, in which designers and design companies do free work in the hopes of winning paid contracts. A good argument can be made against spec work, of course. But sometimes our vehement opposition to it seems to exist out of spite, like rich one-percenters incensed by the idea of social services.
And it’s this tradition that I’m reminded of when I consider the backlash against design thinking. It sounds like more of the same. It sounds territorial—like designers defending our turf.
When I first encountered Natasha’s critique of design thinking, I was immediately struck by its territoriality. I know this was not her intention, but for me the unmistakable implication was…
Only designers can do design.
And also, perhaps, only designers should do design.
To expand on this reaction, allow me to go a bit deeper into my analogy of technology.
Some of you may be familiar with Eric S. Raymond’s book “The Cathedral and the Bazaar.” This is a foundational text in the idea of open source, which is arguably the ultimate expression of democratization in modern technology.
Put somewhat simplistically, Raymond’s book argued that there is a dichotomy of approaches to software development.
There’s the cathedral model, in which software and technology are solely the domain of the developer. If you think about early computing in the 1960s and 1970s, in order to participate in digital technology, you’d usually have to drive to a computer center where huge mainframes were housed, a kind of cathedral of technology administered by a priesthood of computer scientists.
In the bazaar model, by contrast, software is iterated on in public view, and everyone is able to participate. Technology happens everywhere in the bazaar, and this approach is in part why we have supercomputers in our pockets, on our wrists, and available everywhere we go.
Now, when I think about this particular part of Natasha’s quote…
…Claiming that it can be applied by anyone to any problem
…it sounds like that same tension between a cathedral model and a bazaar model. It suggests that design, when it’s practiced out in the wild, is problematic, is superficial, is misleading, leads to bad design. And it also implies that good design is practiced by only “real” designers, under controlled circumstances, addressing only worthy problems.
Now when we listen to arguments like this for the sanctity of the design cathedral—or the fallacy of the design bazaar—it’s important to keep in mind how the business of design has traditionally worked. Put simply, there has long been an economic incentive for designers, especially in studios and agencies, to shroud design itself in secrecy, to obfuscate the particulars of its methods. Maybe even more so, there is an economic incentive to promote designers as “genius inventors,” singular talents who are uniquely able to channel the spirits of “good” work—priests in the cathedral of design.
Designers want design to be an exclusive domain. They want its processes to be mysterious, and often rooted in the idiosyncrasies of mercurial creative directors and savants, because it preserves the perceived value of our craft. Put more plainly: the more difficult design is to practice, the more lucrative it is for practicing designers.
But, as the dichotomy of the cathedral and the bazaar implies, if you have an idea—a force of nature—like technology, it becomes most powerful when it’s democratized, when it gets out there into the world and in the hands of millions of people.
I believe this is true of design, too.
Any embrace of design by non-designers is a good thing, and design thinking qualifies here. The reason for this is that when that happens, it means our language, the vocabulary of design, is broadening to the rest of the world.
This is important because relatively little of the world of design has crept out into the world at large, out beyond our professional circles. Those who aren’t already clued into the parlance of design don’t have the language to talk about what it is that design does and can do. There are few if any design counterparts to words like “reboot,” “bandwidth,” “offline,” and “beta”—those words that I mentioned earlier which have earned their places in the common vernacular. And when you lack language, you also lack the capacity to understand.
Now another question for the audience: how many of you are designers? From the show of hands, it looks like most of you.
And how many of you have been able to successfully explain what you do to your mom?
That question always incites chuckles. We joke inveterately about our mothers’ and fathers’ inability to understand what it is that we do. The humor comes from love, of course, because we imagine they’ll never understand. But when we settle for that circumstance, when we accept it without contest, we’re effectively abdicating our responsibility to explain design to the world at large—not just a responsibility but an opportunity.
By evolutionary design, our mothers and fathers are predisposed to root for us, to try to understand what it is that we do, to champion it. If designers are looking for advocates amongst non-designers, parents should be an easy win. But if we can’t explain it to our mothers and fathers, we’re doing something wrong.
So if we’re not talking to our parents when we’re talking about design, to whom are we talking? To whom is our critical discourse aimed?
The answer is obvious: ourselves. And that’s pretty much it.
Designers are most comfortable defining design for one another, discussing design only with initiated peers who already have the vocabulary to talk about the work. You can see this in almost anything anyone ever publishes about design; the audience is almost always people who are more or less just like us.
Now at this point it’s worth noting that today, March 28, 2018, is the first time that Natasha and I have ever met. In some respects it’s surprising that it took so long because we’re both residents of New York City, we’re both designers, and we have plenty of mutual friends.
Design is a small community. Most of us, I would say, are just one or two degrees of separation apart from one another. And the smallness of that community has profound repercussions on who talks about design.
Most of what gets written about design, most of what gets read, and certainly most of the discussion around design, amounts to designers talking to other designers.
In some respects that’s a good thing. The design community is wonderful about sharing our knowledge with one another, talking about our process, and pushing our craft forward. We’re generally open and welcoming, if only to other designers.
But this has its downsides. And it’s not just that we’re insular and that our language is obscure, either. No, it’s worse: the smallness of our community compromises our discourse.
We all know each other, and if we don’t, we know that we might know one another soon.
We might hire someone we know. We might be hired by someone we know. We might pitch business to someone we know. We might attend a conference and sit next to someone we know in the audience or on a panel. We might even find ourselves on stage at an SVA event with someone we’ve just met, with whom we have many friends in common.
That familiarity, that closeness, can have a chilling effect on what it is that we’re willing to say. It prevents us from talking openly and honestly. It constrains our discourse.
Actually in some ways I’m grateful that Natasha and I didn’t know one another before this evening. It’s easier to talk frankly and to make honest points because we’re not friends. (Though Natasha, I do hope we’re friends after this. If you want to take a wait-and-see attitude though, I understand.)
However, this lack of independence in design discourse is really, really problematic if you think about it. Imagine if other art forms worked this way.
Consider Michael Kimmelman, architecture critic at The New York Times. He nearly won a Pulitzer for his incisive writing which puts architecture in context, gives it meaning and makes it more relevant for countless people.
Now imagine if Kimmelman were a practicing architect. Imagine that he has projects all over the world and is working on a huge high rise in downtown Manhattan at the same time as he files his bylines at The Times. That would undoubtedly and profoundly change the way we think about what he has to say about architecture.
Think about more “populist” arts, too. Gene Siskel and Roger Ebert rose to prominence as film critics in the 1970s. They had a syndicated television show, and for years they would come into our homes every week and talk not just about what was in theaters but also about the ideas behind film. In doing so they turned us all into better informed, more passionate moviegoers.
What if Siskel had been a working film producer too? Or if Ebert had been a screenwriter and director at the same time as he was a critic? Both of them would have been far, far less influential than they were, and we’d all be poorer for it.
That hypothetical scenario happens to be exactly the situation that design finds itself in today—we have a heavily compromised discourse. Just about everything that gets written about design, every robust discussion about design is constrained from being truly honest, truly open, and from truly pushing the profession towards greater relevance to the world at large.
I might even argue that in a more ideal world, Natasha shouldn’t be here tonight giving a lecture in honor of Phil Patton. And I would certainly argue that I shouldn’t be up on stage here either. For all you know, I could be just really craftily trying to sell you on Adobe software.
Who should be here? An independent voice, a critic whose job it is to think about, talk about, and interrogate design. Someone whose income is not derived from the practice of design, whose agenda is clear and unbiased when it comes to examining design’s practices, its implications, and its people.
Unfortunately we don’t have a lot of independent design critics. This is true for a host of reasons. And whether it’s a cause or an effect, the fact of the matter is that design discourse is dominated by designers, and that is a major defect of our industry.
I offer this complaint not just because I long for our craft to be taken more seriously as an art form (which I do). This is about more than recognition.
It’s also about how vitally important it is for designers to contribute at this very moment in history.
If you Google the term “tech backlash” you’ll get no shortage of results that reference this pivotal moment in the relationship between people and the digital world we’ve all been building over the past few decades.
There are headlines about how people are re-evaluating how they interact with Facebook, Google and Twitter. There are widespread concerns about the massive amounts of data that have been compiled about all of us, all with worrisome opaqueness. Even the devices we love, like the powerfully liberating smartphones we all carry around, are now seen as having potentially damaging effects on our mental health. And of course there is alarm at the way that our social media activities have basically been weaponized against us and against democracy.
The world at large and maybe even many of the people in this room think of these purely as tech problems. But upon deeper reflection I bet that many if not all of you would agree that these are just as much design problems as they are tech problems. And you’d likely all agree that designers can—and should—contribute to the solutions for these challenges.
However if you Google “design backlash” the results are dramatically different. You’ll get almost none of those stories about this moment in history. Just a random selection of results that happen to include the words “design” and “backlash” on the same page.
That is as stark an illustration as any of how little the world understands design, how little it values design, how little it thinks of design as a critical factor in changing the world.
We all know that impression is wrong. We’ve been arguing the exact opposite for decades. We’ve been relentlessly advocating for the importance of design.
But the way we’ve been arguing has been counterproductive.
Our insular discourse, the way we’ve jealously protected the language and tools of design, the way we’ve focused so much on the “genius designer”… these behaviors have all worked against our own interests.
They’ve limited us, limited the opportunities we get to contribute to the fullest of our ability. And they’ve limited our capacity to fulfill design’s true potential as the world-changing force that we’ve all been insisting that it is.
So when I consider design thinking, it matters less to me whether it leads to a lot of bad design. What matters to me is whether it helps broaden the language of design, whether it helps expand the community of design, whether it helps build a world that values and understands design better than it does today. If design thinking is making us more relevant to the world at large, leading non-designers to embrace the way designers think, then the net effect strikes me as positive.
So design thinking? Sure.
I’d be happy to have, alongside design thinking, the idea of “design feeling” too.
“Design sleeping”? “Design eating”? Bring it on.
Ultimately debates like this are about a simple question: what do we want design to be?
Do we want it to be as small as it is today, an insular community with an obscure language that largely gets ignored by the world at large? Or do we want it to be as big and influential and as inspiring as we all know it can be?
The perfect note-taking software hasn’t been invented yet, but Agenda, a new contender that runs on the Mac, comes reasonably close. It finally delivers on a notes management feature that I’ve spent what seems like an eternity waiting for: the ability to link a note to a specific event on my calendar.
Like Evernote (which is what I use currently) or Apple’s own Notes app, Agenda lets me simply create new notes in a folder or project view, displayed on the left-hand side. Those notes can be either re-ordered manually or sorted by the dates they were most recently edited.
Unlike those competing apps, though, Agenda also gives me the option of associating any given note with a specific event on my calendar. The screenshot below shows how clicking on the calendar icon lets me find a date, view the events on that date, and then link that event with the current note. Even more powerfully, I can also view my calendar in a right-hand pane, click on an event there, and initiate a new, linked note that automatically copies over the event’s title, attendees and description. Brilliant.
The team behind Agenda markets this as a “unique approach [to] organizing notes into a timeline.” I was delighted to find it (and also delighted by how elegantly the whole app has been designed and constructed). But this ability to link between two databases, one storing notes and the other storing events, seems so basic and obvious to me that I’m shocked it’s not more common.
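The link between the two databases really is as basic as it sounds. As a rough illustration only (the names `Event`, `Note`, and `notes_for_day` are my own invention, not Agenda’s actual data model), here is a minimal sketch of a note carrying an optional reference into an event store, which is all it takes to make notes findable through the calendar:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Event:
    event_id: str   # unique identifier in the calendar database
    title: str
    day: date

@dataclass
class Note:
    text: str
    event_id: Optional[str] = None  # optional link into the calendar database

def notes_for_day(notes, events, day):
    """Find notes via the calendar: events on `day`, then their linked notes."""
    ids = {e.event_id for e in events if e.day == day}
    return [n for n in notes if n.event_id in ids]

# Hypothetical data: one linked note, one free-floating note.
events = [Event("ev-1", "Design review", date(2018, 4, 2))]
notes = [
    Note("Discussed the new icon set", event_id="ev-1"),
    Note("Unlinked scratch note"),
]

print([n.text for n in notes_for_day(notes, events, date(2018, 4, 2))])
# → ['Discussed the new icon set']
```

The point of the sketch is just that a single optional foreign key buys you a calendrical view of your notes without giving up free-floating ones, which is why it’s so surprising that more note-taking apps don’t offer it.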
I personally take notes in exactly this way: I create a new note in Evernote for each new meeting (in fact, I use a somewhat clunky IFTTT applet to do this automatically) because this is exactly how I would go look for it later. That’s not to say I don’t also appreciate the value of being able to create notes that are not pegged to specific events; I’ve got plenty of those, too. I just want some of my notes to be easily findable within a calendrical interface. In my opinion every note-keeping app should work this way.
Alas, even though Agenda gets this right, and even though I’ve been waiting for the feature forever, I have to admit I won’t be able to switch away from Evernote. The truth is that I need to take notes on every platform these days, not just on my desktop. I actively take notes on my iPhone, sometimes on Android, occasionally through Evernote’s web interface, and even occasionally via Google Home and Amazon Echo. Access from everywhere has become table stakes for basic kinds of productivity software like notes and to-do items, which is why Evernote is so invaluable to me, and so hard to leave. Fingers crossed that Agenda comes to other platforms soon or, even better, that this incredibly obvious feature migrates to other note-taking apps too.