The reason for going from 24 fps to 48 fps was that some filmmakers really hated the strobing effect when the camera pans in 3-D versions of movies. Their solution was to up the frame rate—giving the filmmaker more information to play around with. Honestly, the 24 fps strobing never bothered me, because if you are telling your story right, little nitpicks like that don’t enter the mind of your audience.
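The pan-strobing complaint is, at bottom, arithmetic: at a fixed pan speed, the image jumps a fixed number of degrees between consecutive frames, and doubling the frame rate halves that jump. A minimal sketch of the trade-off (the 30-degrees-per-second pan speed is an illustrative assumption, not a figure from the film):

```python
def degrees_per_frame(pan_speed_deg_per_sec: float, fps: float) -> float:
    """Angular jump the image makes between consecutive frames during a pan."""
    return pan_speed_deg_per_sec / fps

# A hypothetical 30 deg/s camera pan:
print(degrees_per_frame(30, 24))  # 1.25 degrees per frame at 24 fps
print(degrees_per_frame(30, 48))  # 0.625 degrees per frame at 48 fps
```

Halving the between-frame jump is all that the extra "information to play around with" amounts to here: smaller jumps, less visible strobing.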
For reasons unclear even to me, I responded to his gentle correction with A Brief and Inadequate History of Special Effects:
I didn’t want to get too technical in the podcast, but I was hinting at this: 3-D created a problem that didn’t previously exist, and the solution is worse than the original problem. No more strobing, but now the effects are so obviously “special” that we may as well be watching the original Clash of the Titans. An incredible film, don’t get me wrong; it just required a superhuman suspension of disbelief. Which at the time was fine, because “special effects” like George Reeves flashing across the sky were meant to be “special,” outside of the ordinary, and didn’t need to look as if they were of this world or obeyed its laws of physics.
I tend to think George Lucas ruined this fantastical acceptance of the specialness of “special” effects when he married recognizably modernist styles with space stations and star ships—the Millennium Falcon could’ve been a Le Corbusier, the stormtroopers come from the mind of an Italian fascist, and half the scenery consisted of the same brutalist style that litters my campus. Point being, his realist aesthetic made “special” effects look quaint, the people who loved them rubes, and that’s where we’ve been ever since. Realism or naught! Realism or naught! (With a few exceptions, Del Toro notably among them.)
So I could understand why Jackson wanted The Hobbit to accede to the demands of the regnant style, but in doing so he utterly ruined his film. I mentioned in the podcast that the best scene in the film, Bilbo’s encounter with Gollum, looked like exactly what it was: Martin Freeman in front of a green screen talking to a man in a ping-pong-ball-covered suit. (I know that’s not how they do it anymore but you know what I mean.) It looked like Jackson had decided to avoid the uncanny valley by introducing its monstrous child to an actual human being and hoping the audience wouldn’t be able to tell the difference. I’m not going to say it made me want to cry, but I’m not going to deny I teared up a bit at the sheer waste of it all.
Like you, I’m more interested in the story, so if the technological advances can be integrated into it—like the conference tables in Avatar—I’m fine with that because it complements the narrative. But I don’t even think we need 3-D. It took us millions of years to develop the particular sort of stereoscopic vision we have, and our brains react to an “occupied periphery” the same way now as they did before: by flooding our bodies with hormones that make us nervous, tense, excited, afraid, etc. Since our eyes still point forward, you don’t need anything more fancy than an IMAX to occupy our peripheries, and I’m fine with that.
I thought I was talking about special effects and their more cloyingly “special” forebears, but the real sore spot for me here is the blind lionization of a limited definition of “realism.” Don’t misunderstand me: I find relocating fantastic narratives to a world that resembles ours an admirable endeavor. Heath Ledger’s interpretation of the “Joker” outstrips Jack Nicholson’s because we don’t need a vat of quasi-mystical chemical slurry to believe that a child of neglect and poverty might come to resent those he believes kicked him down to choke him out. I’m all for grounding narratives that occur in fictional worlds in ones that mostly obey the rules of ours. I’m on board with Battlestar Galactica and (though I’ll never admit it) I even watch Arrow. But the “reality” of “realism” has to amount to more than a little extra grease smeared on the walls of some backlot “Brooklyn.” Because when “competitive realism” becomes a sport the audience always loses. Embracing filth for love of the slop as an ethos would be one thing, but embracing it as an aesthetic out of devotion to an empty notion of what constitutes “realism” is more than just a thing:
It’s a terrible one.
Consider the most common way to impart unrehearsed immediacy to a scene: the shaky cam, proud descendant of the cameras carried by war reporters who (we imagine) ran alongside the men whose deaths they documented. Because deep in the ancestral soul of every shaky cam is a connection to the atavism whose jittering eye (we imagine) once captured soldiers piling up in Norman shallows. Because the essence of the physical circumstances of war correspondents (we imagine) is transferred not just into the tool they used, the shaky cam, but into scenes whose style bears a family resemblance to ones shot with them. I added “we imagine” to the previous sentences because many people believe in what amounts to a form of idolatry when it comes to the shaky cam shot: God the War Correspondent infuses His essence into the totem of His Shaky Cam in such a way that all evidence of shakiness in film represents an invocation of His Brave Reportage.
Which is the height of insanity considering the ubiquity of shaky cams and shots designed to resemble them. Otherwise we must believe that the Great War Correspondent is present when a teenage girl on a soap opera throws a temper tantrum and slams her bedroom door behind her. Because shaky cams capture plenty of those. Not to mention celebrities. His Journalistic Eminence must love celebrities. Just turn on the television between 5 p.m. and 6 p.m. and witness the celebrity-chasing that passes for “local news” now. Those cams are all a-shaking and there’s not a single noble soldier dying lonely on a foreign shore in sight.
We’ve established that the shaky cam’s war-oriented history is partly responsible for why it’s considered “better” at creating “realistic” representations of the world. We’ve now also taken our first step toward understanding why “realism” is associated with misery: its tools are. Think about it: a shaky cam can almost perfectly approximate the swiveling eye-level perspective of a human head, so if its operator breaks into a run the resulting images are almost perfectly identical to what you would have seen were you the one doing the running. That makes sense. But before we continue I want you to take a look at this picture:
It’s from science. Which one isn’t important because, for now, I just want to compare the movement of the human head relative to the body while walking and running. Humans have evolved to walk chill. Just look at our head bob as we strut around in our hilariously skinny jeans. But look what happens when we hear a car backfire in one of “those” neighborhoods: our stride lengthens and center of gravity lowers, meaning we’re more stable than we were before we nearly shat our tiny pants. Now take a look at our running-head. What happened to our swagger? I’ll be brief: because a bipedal gait is inherently unstable, our head has a tendency to pitch forward when we run; and because evolution looks unkindly upon individuals who flee danger by planting their face where their feet ought to be, we’ve evolved a robust musculoskeletal system that keeps our head up and eyes front when we pick up the pace.*
So our head bops up and down atop an unflexed musculoskeletal apparatus when we walk, but when we break into a run that same apparatus yanks our head back and stabilizes our body in a way that prevents our head from bopping. Meaning that when we run our eye level rarely varies, whereas when we walk it’s constantly moving, which is the opposite of the “realistic” effect provided by the shaky cam. Consider this randomly selected clip:
While the frightened children walk the handheld camera is canted, but the level of framing barely bobs at all. We have no bobbing where a “realism” conforming to human biomechanics would require it. But when the frightened children start running at 1:33, the level of framing bobs up a foot and down another with every step. Meaning we have excessive bobbing where a “realism” conforming to human biomechanics would demand none. Shots from a shaky cam are only more “realistic” if we define “realism” as “an aesthetic commitment to seeing something and representing it as the opposite.” We’re not about to do that. So where do we stand now?
We know that shots from shaky cams look more “realistic” by means of an accident of journalistic history and by virtue of the fact that they represent the world not as it is but as the opposite. Keep in mind that this is the shot whose realist credentials most would consider unimpeachable and you begin to see what an aesthetic commitment to realism entails: the perpetual recreation of the contingent circumstances in which the shaky cam shot became popular and secondary elaborations on those contingent circumstances that borrow the “realist” credentials of the original while doing the opposite of what happens in reality. Considering that this is the strongest case a committed realist could make, you can see the kinds of problems that might arise were a spirit of “competitive realism” to sweep through a generation of filmmakers. How can they be more “realistic” than journalism’s happy accident and the opposite of evolutionary development? What’s more “realistic” than the opposite of perceived reality?
Because not even they can answer questions that make no sense, they’ll change the terms of the debate to features common to the happy accident: the shaky cam will be used in scenes in which battles rage and men are confused. Then they’ll extend the purview of “battles” to include arguments and cast a net wide over all manner of confusions. Now a man will “battle” with his balance and his wife until he runs from an apartment confused because he’s sober and single and in order to imbue the scene with the “realism” it requires it’ll have to be filmed by a trampolining meth addict. My example’s admittedly extreme, but you see my point: if historical accidents and perceptual inaccuracies become the standard for “realism,” a competition can only result in increasingly random exaggerations. (That they’re mistaken for transparent representations of reality only makes the situation more infuriating.)
All of which is only to say that there’s nothing realistic about cinematic “realism,” but there is something more realistic about films that aspire to it than, say, animated features or movies starring Muppets. I’ll grant you that. But once you start talking about crafting something that’s “more realistic” than previous films, you run into a whole host of problems. In literature, when a group of young writers tried to be more-real-than-the-realists, the result was literary naturalism, and the aesthetic of the current crop of directors seems aligned with them—its cities are sullied, its fields despoiled, its masses uneasy—but the aesthetic of the literary naturalists was built on science. Bad science, I admit, but science nonetheless. Literary naturalists had a reason to believe the boot on their neck would be crusted in shit if not covered in blood: their science told them so. Not so for the current realists, for whom the legacy of perceptually incorrect happy accidents suffices. Theirs is an empty aesthetic turned pissing-contest and we’re the ones getting the golden shower.
What does this have to do with Peter Jackson filming The Hobbit in 48 frames-per-second? If, as my friend says, Jackson shot at that frame rate because he wanted to avoid a nearly unnoticeable strobing, he presumably did so because he felt that strobing would break the illusion on which cinema depends. If the audience notices the limitations of the camera, it becomes aware that there’s a camera between it and the world depicted on-screen. Since we don’t see strobe effects of that sort in the real world, we shouldn’t see them on the screen, because they’re unrealistic. His solution was to eliminate the strobing by making sure that illusion was never created in the first place. Because you can’t be ripped from a world you’re not immersed in.
Of course I don’t actually believe that’s what Jackson intended, but I do believe he thought it was his turn to up the competitive realist ante. The long explanation above is meant, in part, to communicate why Jackson would agree to participate in this competition. The conventions are so naturalized it’s almost impossible not to think of them as realistic, so it takes some heavy-duty defamiliarization to recognize them for what they are. Put differently, I think Del Toro would’ve made a Hobbit in a style that recognized that the “special” in “special effects” isn’t something to run away from, because such effects are only slightly less contrived than their “realist” counterparts and can be far more effective when telling a story, say, about wizards and magic rings.
*For more on that, see Dennis Bramble and Daniel Lieberman, whose figure I borrowed above, and much of whose work can be downloaded free of charge from Lieberman’s site.