Hypercinema 9/15/22
Falling Toward Something
I'm what I'd call a "casual" special and visual effects nerd. I don't do any of it myself, but I really enjoy reading about and watching how it's done (and whenever I go to the Museum of the Moving Image, I always spend too much time gawking at the miniatures).
So when we were asked to find and present an example of synthetic media that goes beyond a simple deepfake, my mind immediately went to Disney's recent de-agings of Mark Hamill as Luke Skywalker in Star Wars. Why? Because the first example, from Season 2 of The Mandalorian, wasn't really a deepfake, while the more recent one, in The Book of Boba Fett, was. Untangling the differences between the two for myself went a long way toward showing (1) just how much non-AI artistry went into both instances and (2) how easy it is to find incomplete or misleading information about these techniques that obscures that human touch.
Later on in this post I also want to touch on a more recent story from the world of filmmaking: the use of automated synthetic video techniques to remove swears and to match mouth movements to foreign-language dubs.
Where to begin with synthetic Luke? In my research, it leapt out at me that it's easy to go looking for a detailed answer as to how they pulled it off the first time, only to arrive at an explanation that amounts to "it's a deepfake." Broadly speaking, that's incorrect. After some time had passed (I guess to try, however futilely, to keep fans from being spoiled), Disney released a behind-the-scenes look at the production of The Mandalorian showing that the first instance of young Luke was mostly a matter of heavy compositing and pre-deepfake de-aging techniques. A deepfake was, as best I can tell, used only as a reference. If any visual information from it made it into the final composite at all, it's essentially buried under several layers of other VFX passes.
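To make "buried under several layers of passes" a bit more concrete, here's a minimal sketch (my own illustration, not anything from Disney's actual pipeline) of the standard "over" compositing operation: each new pass is alpha-blended on top of the running result, so a layer blended in early, like a deepfake reference, contributes less and less to the final pixel values as passes stack up. The layer names and opacity values here are made up for illustration.

```python
# A minimal sketch of layered alpha compositing (Porter-Duff "over"),
# NOT a reconstruction of Disney's pipeline; layers/alphas are hypothetical.
import numpy as np

def over(fg_rgb, fg_alpha, bg_rgb):
    """Blend a foreground layer over a background.
    fg_rgb/bg_rgb are float RGB arrays in [0, 1]; fg_alpha is the layer's opacity."""
    return fg_rgb * fg_alpha + bg_rgb * (1.0 - fg_alpha)

plate = np.array([0.20, 0.20, 0.20])         # original photography
deepfake_ref = np.array([0.80, 0.60, 0.50])  # reference layer, blended in early
result = over(deepfake_ref, 0.3, plate)

# Each later pass dilutes the earlier layers' direct contribution.
for layer, alpha in [(np.array([0.75, 0.62, 0.55]), 0.8),   # CG face pass
                     (np.array([0.70, 0.65, 0.60]), 0.6),   # skin texture pass
                     (np.array([0.50, 0.50, 0.55]), 0.4)]:  # grade/grain pass
    result = over(layer, alpha, result)

print(result)  # the reference layer's surviving contribution is now tiny
```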
Setting that aside for a moment, much of the planning for those shots had almost nothing to do with synthetic techniques. Mark Hamill was brought in to perform the scenes himself; a body double didn't just serve as a generic stand-in, but was selected for his resemblance and ability to mimic Hamill's performance; and hair, makeup, and costuming all had a hand in recreating (or slightly updating) Luke's appearance from 1983's Return of the Jedi.
On top of all that, and before they had final footage to start altering, the director and VFX specialists planned the staging and framing of the shots so that they were confident their chosen de-aging methods would be feasible. I think the attempt the VFX artists at Corridor Digital made at outdoing Disney with the help of deepfake tech is actually a better illustration of how every other step in the creative process is essential to making synthetic methods achieve a desired look.
Far from automating or eliminating the work that would've gone into this task a few years ago, the introduction of synthetic methods in a film production context can demand a lot more labor and craft. We've gone from a situation where you might cast one person as young Luke to one involving Hamill and multiple lookalike actors.
I think that reality alone goes a long way toward addressing some of the concerns raised about how this tech should be used, but things become muddy and concerning again when you pour buckets of shorthand and marketing speak on top of it. Disney did rely more heavily on deepfake techniques the second time it did these young Luke shots, but that didn't mean the crew hit a button and simply got a better result. Still, a lot of the coverage of the second instance focused on the fact that Disney had hired a deepfake specialist, one many thought did a better job than the company's first attempt. What was actually the work of many artists and specialists trying to one-up what had been achieved previously was flattened into "this guy with deepfake knowledge made a world of difference," an easy-to-digest narrative that does a disservice to everyone involved and overstates how advanced (and, depending on your perspective, worrying) deepfake tech currently is.
This impulse to simplify the story of all the labor and thought that work in concert with synthetic tools to achieve a particular effect ends up being very convenient for folks with something to sell. That brings me to the 2022 movie Fall.
Ahead of its release earlier this summer, this low-budget thriller popped up on my radar because of several stories around the web about how its director turned to deepfake technology to replace dozens of swears and hit a PG-13 rating. Drawing attention to this was no doubt meant to help promote the film, but it also served as promotion for the tech itself: deepfake redubbing methods made by a company called Flawless AI, which Fall's director, conveniently enough, co-founded.
Every story I encountered about Fall's use of AI drew attention to claims that this method was affordable and fast, points that also come up constantly in reporting on the main pitch behind Flawless AI's tech: matching actors' lip movements to foreign-language dubs. A lot of the focus in marketing materials and coverage of these methods is put on the idea that the tech helps "preserve" an actor's performance in another language.
Regardless of whether you prefer the output of these methods to a movie with subtitles, or to dubbing that doesn't entirely match an actor's mouth, there's nothing that synthetically altering a piece of media can do to preserve its original intent or effect. In practice you're always creating something new, an offshoot of whatever was used as the basis in the first place.
I've already gone on too long, but what really concerns me here is that we're already bad (as the Luke saga shows) at getting a clear picture of how these tools are being used, and without that it's very difficult to have discussions about the effect they have on a piece of work or on a medium. –9/22/22