I try to post these roundups as soon as I can after each month closes but I’m barely getting this one in before October. And it does seem like a long time ago that I saw Spike Lee’s “BlacKkKlansman” at the theater, but the disappointment still lingers.
It’s not wise to walk into any of this director’s movies expecting perfection, but I’ve come to expect, at least, a kind of thrilling audacity, a go-for-broke sensibility that bends narrative and polemic together in unexpected ways. That’s what Lee’s 2015 film “Chi-Raq” was like: not altogether successful, but still wildly ambitious, and pretty amazing to watch.
By contrast, “BlacKkKlansman” is almost shockingly…conventional. Its crazy premise—an African-American cop fools the Ku Klux Klan into accepting him as one of their own—never reveals itself to be anything crazier than the way it’s described—no new layers are peeled back, no unexpected twists are presented. It’s just kind of boring, actually. And by the end, when Lee hastily motors through various plot resolutions and tries to tie the movie’s historical milieu into present day events, the total effect feels slapdash. A missed opportunity.
Here are all fifteen movies I watched way back in August.
There are literally hundreds of designers working at Adobe in offices all over the world. Back in July we got all of them together for four days of lectures, workshops and discussions from both Adobe staff and guest speakers from outside the company. We called it Design Summit: a huge internal design conference held in San Francisco’s massive Pier 35 event space.
One of the most popular sessions was this talk from Zimbabwean designer Farai Madzima, who joined us for the second day of our conference, flying in from Ottawa where he works as a design leader at Shopify. In a truly not-to-be-missed lecture, Madzima floored the audience with his thoughts and experiences on cultural bias in design. If you only watch one video today, make sure this is it:
Design Summit was, for me, a thrilling reminder of Adobe’s commitment to the craft and the industry: We paused projects all over the company for a week; spent a small fortune flying designers in from every corner of the globe; held a special invitation-only event for the Bay Area design community; and discussed process, creativity, accessibility, diversity, inclusion, technology, ethics, the future of the craft, and more—truly a range and depth of design discussion that few if any companies would ever entertain.
On top of it all, it’s worth noting that the event was also visually stunning, thanks to the stellar work of the Adobe Design Brand Team. They created this beautiful identity system from whole cloth, all inside of just a few months and alongside their regular workload. They then implemented it in just a few days as environmental graphics throughout Pier 35, as smart-looking swag and paraphernalia distributed to every attendee, as video interstitials on stage—and even in the form of a pop-up shop inside the event space. Amazing work; you can see more of it in this Behance project.
Libraries are being disparaged and neglected at precisely the moment when they are most valued and necessary. Why the disconnect? In part it’s because the founding principle of the public library — that all people deserve free, open access to our shared culture and heritage — is out of sync with the market logic that dominates our world. But it’s also because so few influential people understand the expansive role that libraries play in modern communities.
When I started working professionally and earning a paycheck I got in the habit of buying the books I wanted to read and basically stopped going to libraries of any kind altogether. That lasted until just a few years ago when my daughter learned to read and became an avid bookworm. Now it’s unusual if we go more than a few weeks without visiting one or more branches of the Brooklyn Public Library system.
We’re lucky that the library system in Brooklyn is relatively well funded. You can reserve, borrow and renew books via the system’s web site. You can also do the same with e-books using the reasonably good if imperfect Libby app from OverDrive (I’m in the middle of a Lee Child book on my phone right now thanks to this). Pretty 21st century, right?
A Brooklyn Public Library card in and of itself is kind of a remarkable thing too. To begin with, the current design sports an illustration from Maurice Sendak’s immortal “Where the Wild Things Are,” making it easily the most aesthetically sophisticated card in your wallet. That card also lets you access the free Kanopy streaming movie service, which has some great classic as well as newer independent films—basically it’s like Netflix, but you don’t have to pay for it and the content is actually good. And finally the card also gets you free access to thirty-three New York City museums.
All of that is a remarkable deal, but what has struck me most about coming back to public libraries is how so many people get so many different uses out of the buildings themselves. As a public space, they’re unlike any other. As sociologist Eric Klinenberg writes:
Libraries are an example of what I call ‘social infrastructure’: the physical spaces and organizations that shape the way people interact. Libraries don’t just provide free access to books and other cultural materials, they also offer things like companionship for older adults, de facto child care for busy parents, language instruction for immigrants and welcoming public spaces for the poor, the homeless and young people…
I recently spent a year doing ethnographic research in libraries in New York City. Again and again, I was reminded how essential libraries are, not only for a neighborhood’s vitality but also for helping to address all manner of personal problems.
For older people, especially widows, widowers and those who live alone, libraries are places for culture and company, through book clubs, movie nights, sewing circles and classes in art, current events and computing. For many, the library is the main place they interact with people from other generations.
For children and teenagers, libraries help instill an ethic of responsibility, to themselves and to their neighbors, by teaching them what it means to borrow and take care of something public, and to return it so others can have it too. For new parents, grandparents and caretakers who feel overwhelmed when watching an infant or a toddler by themselves, libraries are a godsend.
In many neighborhoods, particularly those where young people aren’t hyper-scheduled in formal after-school programs, libraries are highly popular among adolescents and teenagers who want to spend time with other people their age. One reason is that they’re open, accessible and free. Another is that the library staff members welcome them; in many branches, they even assign areas for teenagers to be with one another.
I’ve seen for myself real-life examples of virtually all of these use cases. It really opened my eyes to how vital a civic institution the libraries in my community are. But I take mild exception to the emphasis that Klinenberg places on a library’s ability to “address all manner of personal problems.” That phrasing gives the impression that a library is a place you go principally to solve some kind of challenge.
While that’s often true, it’s also true that a library is a building that’s uniquely open to any purpose you bring to it. Your business there could be educational, professional, personal or even undecided, and you don’t need to declare it to anyone—you can literally loiter in your local public library with no fear of consequences.
Even more radically, your time at the library comes with absolutely no expectation that you buy anything. Or even that you transact at all. And there’s certainly no implication that your data or your rights are being surrendered in return for the services you partake in.
This rare openness and neutrality imbues libraries with a distinct sense of community, of us, of everyone having come together to fund and build and participate in this collective sharing of knowledge and space. All of that seems exceedingly rare in this increasingly commercial, exposed world of ours. In a way it’s quite amazing that the concept continues to persist at all.
And when we look at it this way, as a startlingly, almost defiantly civilized institution, it seems even more urgent that we make sure it not only survives but thrives. If not for us, then for future generations, who will no doubt one day wonder why we gave up so many of our personal rights and communal pleasures in exchange for digital likes and upturned thumbs. For years I took the existence of libraries for granted and operated under the assumption that they were there for others. Now I realize that they’re there for everybody.
In spite of all the high-minded cinema fare I profess to care for so deeply, the movies I get most excited about are usually popcorn action thrillers. That explains why, if I’m honest, “Mission: Impossible—Fallout” was probably the movie I’d looked forward to most all year. Its predecessor, “Mission: Impossible—Rogue Nation,” pulled off the unthinkable feat of being both the fifth and the most interesting installment in an already excellent franchise. When I learned that its director, Christopher McQuarrie—one of the best filmmakers working today—would return and that “Fallout” would be a direct sequel, I started getting very, very excited.
For my money, this is a series that has only gotten better with each new installment. No one asked me to rank them from least to most favorite, but I will now anyway, as the ordering is actually quite elegant: I, II, III, IV (“Ghost Protocol”), and then V (“Rogue Nation”). In fact, I re-watched them all in July in anticipation of “Fallout” which, as it turns out, completes the progression by being the best one yet. We’re all well aware by now of star Tom Cruise’s almost disturbing obsession with risking his own life for our entertainment, but in this outing he and McQuarrie achieve an almost sublime synthesis of character development and action. What’s communicated through stunts, body blows and explosions here is as meaningful as what’s expressed through dialogue. It’s as close as a large-scale Hollywood action film has ever come to an auteurist psychological drama.
One of the unintended consequences of having banked a string of six highly successful, generally well-reviewed films is that the series has also created an unmistakable snapshot of popular contemporary thinking. Beyond the cracking good action, after watching the complete series it became clear to me that at heart these movies are about the tension between physicality and technology.
This has been true from the start. The very first installment also happened to produce the series’ most lasting image: that of Tom Cruise dangling from wires as he attempts to extract data from a highly secure computer terminal. Since then, a similar act of extraction has figured intrinsically into the plot of all of these films. Over and over again they posit that human movement and physical action are the only reliable way to wrest something of crucial value from the intractable grip of technology, whether what must be breached is a data center at a CIA compound, a nonsensically located server room on a forbiddingly high floor of the Burj Khalifa, or any number of situations made fraught by technology’s uncannily accurate ability to subvert the truth (read: the countless masks that are a hallmark of the franchise).
It’s also no accident that Tom Cruise is commonly referred to as today’s “last movie star.” As a conceptual whole, “Mission: Impossible” tries to make sense of how a classic, cinematic idea of masculinity can overcome technology’s encumbrances. Sure, Cruise’s Ethan Hunt character is always abetted by his technologically incisive colleagues Luther Stickell (Ving Rhames) and Benji Dunn (Simon Pegg), but these are strictly secondary characters—comic relief, even. Ultimately the resolution of the plot falls to the alpha male, Cruise himself. This series is a reflection of society’s struggle to reconcile heroism and hacking.
And that’s all I have to say about “Mission: Impossible” for now—unless you want to subscribe to my newsletter, where I’ll have some more totally unnecessary thoughts for subscribers only. Meanwhile, here is the full list of all fifteen movies I watched last month, only seven of which starred Tom Cruise!
To our own detriment, designers prefer to think about “how” much more than “why.” This was demonstrated in my blog post from earlier this week but here’s another good example—or perhaps it would be more appropriate to call it a bad example. You may or may not find it disturbing.
Last month the widely respected, “evidence-based user experience research, training, and consulting” firm Nielsen Norman Group published a fascinating report on best practices to consider when designing websites for children. Its author, Feifei Liu, summarizes a study in which the firm interviewed kids aged three to twelve to learn how they behaved while performing a series of interactive tasks. Liu writes:
Our research with kids on the web and mobile devices shows that the physical development of motor skills and motor coordination influences children’s ability to interact with devices.
Roughly, children under five have limited motor abilities and require very simple physical interactions on touchscreens. For kids between six and eight years old, their developing motor skills allow them to perform simple interaction gestures on laptops like clicking and simple keyboard usage. Whereas starting around the age of nine years, more advanced interaction techniques become possible. Around the age of eleven years, children become able to use the same range of physical interactions as adult users. (Though obviously, their mental development stage and educational level still dictate simpler overall user interfaces for eleven-year olds than for adults.)
That’s the executive summary, leading off the top of the report. The rest of it digs into those findings, detailing a series of recommendations for designers creating websites for kids. Some of these include: emphasizing swiping, tapping, and dragging on touchscreens; avoiding interactions that require dragging, scrolling, and clicking small objects; and generally accommodating the limited motor-coordination facility of this audience.
Useful stuff. I don’t dispute the findings at all. But it’s disturbing that the report focuses exclusively on usability recommendations, on the executional aspect of creating digital products for kids. There’s not a single line, much less a section, that cares to examine how design impacts the well-being of children.
This seems particularly egregious when one considers current societal discussions about how digital technology impacts younger users. Recent studies point out that mobile device usage among young children has skyrocketed to an average of as much as two hours per day, up from as little as just five minutes a day at the beginning of this decade. Meanwhile, the American Academy of Pediatrics revised their recommendations for device usage amongst children this year to just one hour per day, arguing that “Too much media use can mean that children don’t have enough time during the day to play, study, talk, or sleep.” The non-profit group Common Sense Media found that, contrary to advice from pediatricians, much of this time spent in front of screens is happening just before bedtime, and children in lower-income families are much more likely to spend more time on devices than those from more affluent families. And a lot of attention has been paid to San Diego State University professor of psychology Jean M. Twenge’s studies of the first generation of teenagers to grow up with mobile technology, and the radical and often worrying shifts in behavior that smartphones have engendered in them.
In fairness, none of this is incontrovertible proof that screen usage is harmful to children, but it’s also safe to say that there’s reasonable cause for concern. At the very least, thoughtfulness is warranted in the design of digital products for this audience.
It’s also worth noting that Nielsen Norman Group is famously focused on the narrow subject of how to make digital experiences as usable as possible; their expertise on usability is widely recognized and rightfully acclaimed. The larger question of whether a design solution is in the best interests of its users has always been purposefully beyond their scope. But pretending that there is no link between the usability of an experience and the long-term well-being of its users is, frankly, a specious position at best. Particularly for this target group of users.
Habits are formed around the usability of a product; if an app or website makes it easy to complete a task, users are likely to do it more often. Usability advocates often treat this as an inherently good quality; by and large, every business wants its products to be easier rather than more difficult to use. But as the aforementioned research suggests, it’s become clear that guilelessly encouraging longer, more frequent sessions isn’t necessarily better for kids.
I would contend that it’s really no longer useful—or responsible—to think of the work we as designers do in such narrow terms. You don’t even need much imagination to expand the definition of “usability” in this way. Beyond just the study of practices that make digital products easier to use, it’s reasonable to think of usability as a field that considers what’s in the best interests of the user. Clearly, there are best practices to be learned when it comes to limiting children’s time, signaling danger to parents, discouraging successive sessions over short spans, and even for encouraging physical movement. That all sounds like usability to me.
We’re moving past the stage in the evolution of our craft when we can safely consider its practice to be neutral, to be without inherent virtue or without inherent vice. At some point, making it easier and easier to pull the handle on a slot machine reflects on the intentions of the designer of that experience. If design is going to fulfill the potential we practitioners have routinely claimed for years—that it’s a transformative force that improves people’s lives—we have to own up to how it’s used.
This lengthy, thoughtful screed was inspired in part by an article that I wrote earlier this year for Fast Company called “Design Discourse Is in a State of Arrested Development,” the gist of which was to say that what gets written, read, discussed and lectured with regard to design is, on the whole, very shallow. I argued that that superficiality points to a systemic failure in design: an unwillingness to “ask tough questions,” and an inability to push the craft forward in the interest of both its practitioners and of its audience.
As publishers and key participants in the world of design discourse, UX Collective’s Fabricio Teixeira and Caio Braga admit that they have played a part in perpetuating this environment. They write:
Last year, we published and shared 4,302 articles and links with the community — through Medium, our newsletter, our chatbot, our yearly trends report, Today, Journey, and many other channels.
In an extensive exploration of the subject, Teixeira and Braga examined every link they found on major online design forums and newsletters (e.g., DesignerNews, WebDesignerNews, StackExchange UX, Reddit UserExperience, Sidebar, Product Weekly, UX Curator, and UX Collective itself) for a month. “Every link shared between 12 Feb and 11 Mar 2018,” they say, “was put under the microscope, through the lenses of independence, honesty, breadth, and depth.”
They then plotted each article on a spectrum with “tactical” articles on one side (with templates, kits and tutorials at the extreme) and “strategic” articles on the other (with discussions of ethics, responsibility and impact). While acknowledging the subjective nature of the exercise, the results are nevertheless eye-opening: as seen in the chart below, the vast majority of the links fall on the “don’t make me think” end of the spectrum.
It’s clear that the currency of design discourse is really concerned with the “how” of design, not the “why” of it. As Teixeira and Braga write:
While designers tend to be skeptical of magic formulas—we’re decidedly suspicious of self-help gurus, magic diets, or miraculous career advice—we have a surprisingly high tolerance for formulaic solutions when it comes to design.
That’s a pointed criticism but, from my perspective, it’s also quite accurate. Rather than leaving that conclusion on its own, though, the essay tries to come to grips with, appropriately, why this is. Consistent with the habits of good designers, Teixeira and Braga undertake a bit of “user research” to understand how design content gets consumed, and who actually generates it. They even dig into one of the key paradoxes of an art form that is examined almost solely by its own practitioners: its highest-functioning leaders usually can’t spare the time to write about their own perspectives.
The whole article is full of valuable insights like this but it’s worth reading for another reason alone: it shines a light forward for design discourse by first recognizing its deficiencies and then by modeling a way forward. Read it in full at essays.uxdesign.cc.
Sometimes you need to explain what design is to people who don’t understand it, but need to. This is the situation I found myself in this week: I’ve been collaborating on a project with some incredibly smart people outside of the company who have a passing understanding of what UX/UI design is, but who need to get a better sense of its particulars, of what it is and what it isn’t, of who does it and how it’s done, and how it’s similar to and different from other flavors of design. After my attempts to explain it aloud proved inarticulate, it became obvious that it would be more productive to try to explain it in written form.
Lucky me, I had a bout of insomnia at 4:00a this morning. So I got out of bed and, in roughly an hour, hammered out a kind of primer on UX/UI design, which I’m publishing below. It’s a very unformed, rambly screed that I won’t pretend is at all definitive or even fully accurate. In fact it’s still basically a first draft; I literally typed it out in bullet point form, as shown below, a trick I used in order to absolve myself of the responsibility of writing a fully articulated essay. It proved useful to those colleagues of mine and so I thought it might prove useful to readers here, too. Let me know what you think.
A Primer on UX/UI Design
Virtually any time you use software—an app on your phone or your laptop, a website, a check-in kiosk at the airport—you are actually interacting with an interface created by a designer. In effect, the designer shapes the technology into something understandable, useful and, ideally, delightful to the user.
At the simplest level, the designer does this by laying out, or visually organizing what you see. She decides where the buttons and text go on a screen, what other elements like photos, illustrations and/or graphics belong on that screen, and what happens when the user clicks, taps or otherwise interacts with parts of the screen. This is the interface.
The interface is where UX/UI design most clearly intersects with “traditional” graphic design, because it is in the layout of the interface that the UX/UI designer uses many of the same elements and tools as designers who create books, posters, packaging etc. Specifically, both kinds of designers employ typefaces, graphics, photos and/or illustrations; make deliberate color selections; think extensively about the composition of the elements they are placing on their canvas; and integrate, or even design from scratch, logos and brand systems. There is significant overlap here and many professionals practice both, but UX/UI design and graphic design are not exactly the same.
When the UX/UI designer “decides what happens,” she is determining both the behavior (i.e., whether a button changes color, shape, shifts in place or otherwise responds to the user’s input) and the flow (i.e., what screen the user goes to, or what new parts of the interface are presented to the user).
Taken together, the interface, the behavior and the flow form the user experience. This is a gross simplification, but it’s a reasonable way of understanding that term.
These terms aren’t absolute; one of the most frustrating things about our profession is that there are few fixed terms for our tools, methods and work product.
To perform her duties, a designer almost always has to work closely with engineers and product managers, people who are responsible for building the actual technology for the app, website, etc.
In decades past, how a designer worked with engineers was much more rudimentary, even perfunctory. Oftentimes engineers would effectively determine the majority of the interface, behavior and flow, and would allow the designer only to embellish what had already been established, e.g., changing fonts or colors, making slight modifications to the layout or behavior, and rarely allowing the designer to change the flow. The result, of course, was software that was poorly designed and frustrating to use.
Professionals often refer to this minimal role as “window dressing” or “prettifying”; the implication being that it is a circumscribed version of the full scope of what a designer’s job should be. Oftentimes, this way of working is pejoratively referred to as “visual design” or “graphic design,” and there are some designers who are wholly uninterested in this aspect of the job; they believe that design is really about the behavior and flow of an experience. Of course there are other designers who believe that the visual design is just as important as the behavior and flow of an experience. To put it succinctly, there are many gradations between the two beliefs. Our belief is that every point on the spectrum is valid.
When digital technology was relatively young and its value was novel, “window dressing” was generally acceptable, because users accepted that they had to submit to poorly designed interfaces in order to harness the power offered by technology. This is how we got many of the terrible interfaces that marked the first generation of desktop software.
As digital technology matured and became more capable, and as it at the same time became more widespread and commonplace—first with the web and then later with mobile apps—the bar for UX/UI design was progressively raised. Site by site and app by app, consumers were exposed to more and more good design, and they soon came to expect every digital product to meet a minimum bar of design quality, even if they still couldn’t articulate what good design is.
Today, the commonly understood definition of good design among professionals has generally moved far beyond design as a superficial layer on top of technology. Designers tend to think more holistically about the problems they work on now.
This often means that a designer doesn’t just apply her skills to a solution, but also to defining the right problem. Good design means asking the right questions, questions that are in alignment with both the business goals of the company she is working for and with what the intended users of that company’s app or website want to accomplish when they’re using it.
The act of researching a given problem is commonly understood to be part of the design process now. Research can mean interviewing and/or observing users, examining data on existing usage patterns, interrogating the motives of the company and/or the engineering team and more. It can also mean “testing” a design solution for its usability or acceptability to users. Research is now a common and critical part of good design practice.
Design professionals have also come to embrace the highly iterative nature of designing for digital products. This stands in contrast to traditional graphic design where, because of the fixed nature of the medium, it was relatively difficult (if not impossible) to make changes to a design solution. In digital media, design solutions are easily altered, and as a result they are often thought of as being in perpetual evolution. This is why apps and websites are constantly being redesigned, not just in major, easily identifiable overhauls, but also in countless subtler ways. Designers now embrace the ethos of iteration as part of the design process and are commonly involved in continually refining their work product.
This more expansive definition of design has led modern practitioners to define design as more than just the visual. Every “touchpoint” where a user or customer interacts with a company’s products or services is seen as an opportunity to apply the principles of good design, from the emails they get to the technical support they receive to even the quality of offline, in-person interactions with the brand. The end result is no longer just a “good looking” or “user friendly” interface; the goal is now to create a satisfying if not delightful overall experience for users.
As we move into immersive media like augmented reality and virtual reality, design professionals are continuing to apply this broader view of design. Voice interfaces, for instance, are now a logical extension of UX/UI design even though the longstanding visual elements of interfaces are largely absent.
This trend of design becoming defined more and more expansively will continue. On the one hand, it will mean more opportunity for more designers, but on the other hand it will also mean more and more people will be undertaking design—not just designers. Design is a process, and while designers will always be in the lead with regard to how it is practiced, just as with engineering, that process has become so important to the success of businesses and organizations that it will become necessary for those who aren’t designers to take part in it. Whether they’re marketers, strategists, writers or engineers, it is very likely that within the next decade, design as a process, as a way of thinking, will become a part of many more people’s jobs.
Paul Schrader’s “First Reformed” does not itself fully stare into the abyss of human callousness, but it’s enough that it shows us what happens when someone does that, how it consumes the soul with horror and isolation. This is the same trick Schrader pulled off when he wrote the script for “Taxi Driver” more than four decades ago.
His ability to successfully revisit this cinematic territory is both easy and difficult to believe, as I discovered when I saw “First Reformed” in the theater last month. On the one hand, the character of Rev. Ernst Toller, played by Ethan Hawke, is a logical contemporary update to Robert DeNiro’s Travis Bickle, where the struggle against the decay of modern society is replaced with an attempt to reckon with environmental disaster. On the other hand, it’s hard to believe that this movie was made by a seventy-one-year-old, so vital and alive and surprising is Schrader’s filmmaking here. If he’d made this four years after “Taxi Driver,” that would not have been difficult to believe, but forty-two years? This film is a miracle.
In addition to “First Reformed,” I also saw fifteen other films last month, listed below. One of them was Christopher Nolan’s “unrestored” print of Stanley Kubrick’s “2001: A Space Odyssey,” in 70mm projection. I knew this movie was beautiful and transcendent, but I had no idea of the depth of its beauty or the extent of its transcendence, having never seen it on the big screen before. If you get a chance to experience it, you owe it to yourself to go.
A few weeks ago I was invited to appear as a guest on the second episode of Mule Design’s new podcast, The Voice of Design, with hosts Erika Hall and Larisa Berger. It was a great discussion that sprang in part from my article “In Defense of Design Thinking, Which Is Terrible” back in May, covering the state of the design industry and what we have to do to level up our profession.
You can have a listen to the episode below and you can also subscribe to the podcast on iTunes.
Jason Reitman is one of those journeyman movie directors who can pass as an auteur. Usually, as in the case of “Thank You for Smoking,” “Up in the Air” and “Young Adult,” it’s pretty easy to see past his ambitions to the clumsy conceptions that are the real heart of his moviemaking. But in the case of “Tully,” his charmingly narrow look at the trials of motherhood and post-partum depression, he manages to transcend his usual limits. A little. It’s not a great film but it mostly works—at least in a New-Yorker-short-story-of-the-week kind of way.
The main reason it’s any good is Charlize Theron’s performance, a fully committed deep dive into the chasm between youthful ambition and middle-aged helplessness. I’m usually not a big believer in the maxim that gaining weight equals great acting, but the startling body transformation Theron underwent for this role deserves credit for being more than just a superficial if physically demanding affectation. Rather, it demonstrates Theron’s formidable willingness to find highly specific, uncritical empathy with the characters she plays. It’s getting harder and harder to ignore the fact that she’s one of the best actors working today.
Here is a list of all sixteen films I watched in May.
“Tully” A pretty big improvement over this creative team’s previous outings, even if you can see the central plot twist coming a mile away.
“Molly’s Game” On re-watch, still flawed, but still very good.
“The Incredibles” By a mile, the best super-hero film of this century. So far!
“Maggie’s Plan” More or less a Woody Allen film not made by Woody Allen.