Speaking at the AGI Conference in San Francisco last week, Ben Goertzel, the artificial intelligence researcher behind the humanoid robot Sophia, introduced his creation’s younger “sister”: Nurse Grace, designed to provide medical care for the elderly. Female-presenting robots endowed with AI – representatives of an archetype that has long preoccupied the science fiction imaginary – have stepped out of our futurist fantasies and into the public sphere. In doing so they have become ambassadors of a collapse of the limits by which we distinguish humanity from technology, self from other, today from tomorrow, and fact from fable. The plot of Alex Garland’s Ex Machina (2014), which investigates the intelligence and independence of a sentient bot named Ava, is not so different from what’s taking place in Goertzel’s lab and others like it.
Ex Machina has received critical acclaim for its realism and its minimal presentation despite its subject matter – another way in which the film melds science fiction with daily reality. Garland’s Annihilation (2018) is not known for its down-to-earth visuals, but the work is nonetheless about dissolution – that of the borders we imagine to distinguish the individual human from the rest of the world. Behind the lens as director of photography for both films – and many others – is ROB HARDY. The English cinematographer is best known and most sought after for his work on films and television programs that fall within the sci-fi genre – perhaps because, as we speculated in 032c Issue #38, his camera seems able to annihilate precisely that: the space between the future and the now.
Interview: Shumon Basar
Reality-lag is one of the hallmarks of life lived today. It’s the gnawing feeling that our understanding of what is real is lagging behind where actual reality has taken us. Often, it’s not until we see something translated into fiction that we apprehend the sheer extent of what’s happening to us. Among those illuminating this predicament is cinematographer Rob Hardy, whose work on films such as Ex Machina (2014) and Annihilation (2018) has shaped some of the most vital visual language of the last decade. These features, like Hardy’s most recent project, the FX television drama Devs (2020), are all screenwritten and directed by novelist-turned-filmmaker Alex Garland. Together, Garland and Hardy – alongside musicians Ben Salisbury and Geoff Barrow – craft ominous worlds, beautiful and terrifying, inhabited by tech megalomaniacs, body doubles, and extrinsic intelligences.
These not-so-alternate realities pose recurring questions about humans in an increasingly posthuman stage of history. They render a sense that the future, to paraphrase William Gibson, is already here – it just looks like today. As cinema attendance is decimated and film productions are put on hold, the moving image is migrating to the new natural habitat of our personal screens: “smart” televisions, laptop computers, mobile phones, VR headsets. Video games have become almost indistinguishable from blockbuster feature films in hyperrealism and scope. Stories are something we swipe through. Hardy’s trajectory – from making music videos for Björk, Placebo, and Skrillex to his creative alliance with Garland – has made him adept at adapting to these mutations of display. As his practice evolves alongside how films are made and consumed, Hardy remains as precise in his words as he is in choosing camera lenses. It’s a reminder of the ineffable power of the moving image, and the technical expertise required to summon an aesthetics nearing the numinous.
The average movie viewer might know the name of a director or two, but they probably won’t know any cinematographers, or what are known as directors of photography (DPs). I remember when I discovered a DP could be auteur-like. I was 15 years old. The DP’s name was Sacha Vierny, who shot several of Peter Greenaway’s best films, and, decades before those, Hiroshima mon amour (1959) and Last Year at Marienbad (1961), both directed by Alain Resnais. When did you realize a DP was an essential piece of the filmmaking puzzle? Was there an epiphany?
Greenaway’s films always had a sumptuous quality to them. I found them visually extraordinary. That and the work of Sergio Leone, in particular For a Few Dollars More, and Andrei Tarkovsky’s Stalker and Mirror. I couldn’t help but be influenced by those films in my early years at film school, but it was much earlier than that when I first became aware of the cinematographer as “auteur” – though I’m extremely hesitant about that word in the context of filmmaking, because the art form is, by its very nature, a collaborative one. My realization that cinematographers exist came when I was around 12 years old and discovered the B-movie horror films of John Carpenter. It was the first time I became aware specifically of the visual consistency of a film – or, in this case, of a series of films. Dean Cundey was the cinematographer, and his collaboration with Carpenter on films such as Assault on Precinct 13, Escape From New York, and his masterpiece The Thing all possessed a very particular use of space and texture that connected with me. As a result, and without even knowing how or why, I started reading the nuances of composition and the spatial geography of any given scene. That said, even with this realization I had no desire to be a cinematographer. I was fully focused at that point on becoming a director.
“EVERY SOUND, WHETHER MUSIC OR FIELD RECORDING, IS PUT INTO A VISUAL CONTEXT, NO MATTER HOW ABSTRACT. EVERY SOUND HARBORS AN IMAGE.”
You studied at Newport Film School and then Sheffield Hallam University, in the north of England. Sheffield is really important in the history of electronic pop music, and you started out making music videos in the 1990s. What kind of introduction to filmmaking was that experience?
Music has always been the primary source of inspiration. I could track my entire creative history in an extraordinarily long playlist, which of course I won’t, but in the early years of my career, when I was combining images with music whilst shooting music videos and short films, I became completely immersed in what was to me a perfect synergy of music and film. One always informs the other. For example, listen to the way in which the recording engineer Steve Albini records the instruments within a room, as opposed to just recording the instruments themselves. That’s the basis of the way in which I try to approach the visual framework of a film. The actors and the space are essentially inextricably linked. In the mid 90s, Sheffield was a creative hub – and it still is – so it was easy to work across disciplines. You had challenging experimental theatre in the form of Forced Entertainment and Third Angel; a thriving underground music scene being cemented by labels such as Warp Records, which later branched into Warp Films; and a lot of experiments with art installation.
And you knew all of these scenes?
In one capacity or another, I worked across all these disciplines, but music was always a key component. For me, sound informs the shape and content of imagery – every sound, whether music or field recording, is put into a visual context, no matter how abstract. Every sound harbors an image. In Sheffield, along with the performance artist John Rowley, I managed to find a loophole in contemporary dance film funding, and as a result, I wrote and directed a trilogy of films that were highly kinetic and very physical in nature, using sound and movement as a springboard.
What were they like?
The first in this series was based on a BBC Sound Effects Library vinyl LP called Sound Effects of Death and Disaster that we found in a charity shop. The scratches on the vinyl itself became signifiers for the images. It was in making these films that I learned some invaluable lessons about the endless possibilities of camera/actor choreography. Most importantly, they taught me how to view the geography of a space as a third character, something that has become pivotal in the way I work. In tandem with this, shooting music videos enabled me to learn how to move fast, light spaces quickly, and be prepared for any eventuality while attempting to retain a degree of visual consistency.
Another “beginning” I’d like to ask you about is when you first started collaborating with the director Alex Garland. He’s had such an intriguing trajectory, first as a 20-something “Gen X” wunderkind novelist in the mid 90s, then as a celebrated screenwriter through the 2010s. Ex Machina was the first film he both wrote and directed. How – and why – did he track you down to join him on this new journey?
After shooting the 2007 film Boy A, I shot a film called Red Riding 1974, which was the first part of a trilogy of features made for Channel 4 based on the Red Riding novels of David Peace. Dark and hallucinatory by nature, this film really felt like it was pushing the boundaries of what was possible to achieve with the twisted nature of that story. It is unlike anything made for TV before or since, an experiment on a grand and ambitious scale. Each film in the trilogy was autonomous from the others – using different styles, locations, crew, directors, and so on. We were encouraged to work with complete creative freedom and control. We shot on 16mm and took a lot of risks. We were in uncharted territory. I was immersed in experimenting with the form. In many respects I see it as one of the best pieces of work I’ve ever shot. In any case, it was Red Riding 1974 that brought me to the attention of Alex Garland. I was shooting in New York when the script for Ex Machina came to me, and suffice to say, I was blown away by the writing. I was struck by its purity, its precision, but most of all how it presented a blank canvas, a space in which to start imagining how it would look and feel. It arrived at exactly the right moment.
What was your first meeting like?
My very first conversation with Alex was a phone call. I really liked his no-bullshit approach to things – not just to filmmaking, but to pretty much everything. It was clear we shared the same aesthetic, and the same desire to make something as good as it can be, without the hindrance of any kind of aesthetic baggage. In that first conversation it became clear that we both knew exactly what Ex Machina should be.
You’ve often spoken about the concision and precision of Alex’s scripts. Do you think his previous practice as a novelist comes through in how he scriptwrites?
That feels more like a question for Alex than it does for me, but I would say that with every script of his that I have read, one always gets a sense of completeness. The story never feels like it’s lacking something – or overstuffed. It’s always perfectly balanced. Everything you need to know narratively is right there on the page. Among other things, Alex is a visual person, so in many respects you could argue that the novels were early forms of film scripts. This would be evident in the pacing and the ease with which the novels conjure a visual experience for the reader, and also in the structure of the narratives themselves. I think it was only ever a matter of time before The Beach was adapted to the screen, irrespective of its success as a novel.
Ex Machina is a kind of retelling of the Frankenstein story, in the age of Silicon Valley. It’s startling for how threadbare the cast is – a couple of humans, a few more robots. It’s also startling for how economical the setting is: essentially a remote lair belonging to an omnipotent tech billionaire. I know you and Alex eschew referencing previous films as examples. What was your initial discussion about how you’d approach things?
The approach Alex and I take at the beginning of each project is always the same: rather than discuss how we intend to approach it, we simply approach it, if that makes sense. Ultimately, we are just sculpting – the script being our lump of clay, which we slowly shape as we talk about the kind of film we want to make. Some ideas that are obviously useful in telling the story will stick, others come and go, but each creative decision will inform the next. In the case of Ex Machina one of our two main strands had to do with pacing, which leads to images that are held on screen for longer than one would generally expect. The other main thread was the idea of creating a “360-degree environment.” Most of our energy went into the design side of things, the idea being that if we could create a tangible, believable environment for the characters to inhabit, then the photography would respond accordingly. This comes back to the idea that for me, rooms and spaces play an intrinsic role in the visual aspect of storytelling.
That reminds me of how Hampton Fancher once said that in the first script he wrote for Blade Runner (1982) everything was set entirely in small rooms, like a sequence of Edward Hopper paintings. Instead, Ridley Scott turned it into something epic. Sci-fi films are often sweeping in scale, so there’s something initially contrarian about Ex Machina’s intimacy. Was there something appealing to you about this subversion for a contemporary sci-fi story? For me, its scale is one reason the film feels so “now.”
There’s something compelling about putting a microscope to intimately performed moments and finding ways to give them scope, to make them feel epic in scale. Body and facial language can be read in so many different ways, and to try and explore that through visual means is far more challenging than simply creating scope for scope’s sake. I honestly can’t speculate as to why Ex Machina connected as well with the audience as it did, other than the fact that it was a great example of pure collaborative filmmaking. By which I mean the process was incredibly democratic: every single person involved in the making was working at the best of their ability and truly believed in the script. This is a rare thing, and it wasn’t because we were lucky; it was because we strove to make it that way. The intimate scale came about because we all felt it was appropriate for telling the story without being encumbered by clichéd sci-fi tropes. It’s a very intimate tale and had to be treated as such. It would have been wrong to do it any other way.
After the huge success of Ex Machina, your next project with Alex Garland was Annihilation, an adaptation of a novel by Jeff VanderMeer. I’m curious if you read Alex’s script first, or the novel first, or – if this is even possible – if you read them in parallel?
I read the script first, and then the novel – although Alex felt it was unnecessary for me to do so, in that we were making the film of the script, not the novel. I wanted to get to the source material because I’m always curious to see how an adaptation can differ from novel to screenplay. In this case, the novel was of course completely different to Alex’s script. He told me he read the novel once and wrote the screenplay without ever going back to it, which I thought was a really interesting way of doing things. For me, the best film adaptations always take just the essence of the story from the novel – the thing that makes a novel great. The film should always strive to exist on its own terms. After all, the two mediums are so different, and the experience of reading a novel and watching a film are so unalike, at least for me.
One of the things I read about the shooting process for Annihilation – something I hadn’t registered as a viewer – is that you changed the combination of cameras and lenses as the story proceeded, and as the protagonists ventured further into unknown realms. Can you say something about this?
Annihilation has a very linear narrative, which for the most part was shot in a linear fashion. I wanted to mirror Lena’s (Natalie Portman) physical and emotional journey in a subtle way, and to apply it to the visual palette. The digital sensors of certain cameras will respond differently to specific lenses, so I knew I could achieve a degree of change by switching lens packages and cameras as the story progressed. Each lens set has a particular characteristic depending on how old the glass is or the way in which it is made. Ultimately, the way I see it is that lenses have individual personalities beyond simply their field of view, and I felt that I could use this aspect to help highlight the twisted, hallucinogenic, spiraling journey that each character takes as the story progresses. It was a great opportunity to experiment with that idea.
Another abiding element of Annihilation’s world is something called the “Shimmer.” In a way, novels have an easier time conjuring fantastical phenomena through suggestion: every reader is their own production designer. But film has to show at least some things. What was the process of conjuring the Shimmer for the film?
We wanted the Shimmer to have a very organic, layered look. Andrew Whitehurst, our VFX supervisor, would design the overall look and texture based on our group discussions. Like the Shimmer itself, the ideas were ever-changing and always quite fluid. For my part, I was focused on ensuring that it felt very much embedded into the fabric of the locations. This was incredibly important, in spite of the fact that the Shimmer was largely an unknown quantity.
Technically, how did you achieve this?
One of the devices I used was to create what I called a “Shimmer Library,” which involved taking all different kinds of very old film lenses of varying degrees of quality, and firing moving color projected light into them in a blacked-out space. We would then use these analogue-produced camera flares in the final stages of post-production by “expanding” certain aspects of them that we liked, and layering them over the images to create a bedrock for the Shimmer. Andrew would then take those organic elements and “detune” them, much as you might the individual notes of an instrument. The overall effect gave the viewer a sensation that the air itself around the characters was imbued with subtle aspects of the Shimmer. At least that was the intention. I also used large light sources on set and location to simulate the sun as refracted through the atmosphere of the Shimmer itself.
There’s a pantheon of 1970s cerebral sci-fi – Solaris (1972), THX 1138 (1971), Silent Running (1972), Stalker (1979), among others – that deals with existential issues. To me these films feel from a bygone age, and one I miss a lot. Are they important to you?
THX 1138 and Stalker are incredibly important to me. I would also arguably place Ridley Scott’s 1982 Blade Runner alongside them. For me, these three represent something otherworldly, which is so unique to their existence. They are as much feelings as they are films. But I don’t really see them as being from a bygone age – rather as timeless narratives with no equal. All of them, incidentally, have an incredible way of using sound as a narrative tool. The sound of these films is very much at the forefront of their aesthetic. THX is the prime example, with sound design by Walter Murch.
“IT WOULD HAVE BEEN WRONG TO DO IT ANY OTHER WAY.”
Your most recent Alex Garland collaboration, also from an original script by Alex, is a TV drama called Devs. At stake this time are quantum computing, omniscience, and multiverses – set in San Francisco and Silicon Valley. Again, I’m curious what your first conversations focused on for this project.
We started discussing the idea of doing a more elaborate narrative in the later stages of shooting Annihilation. Those conversations were generally very informal. We liked the idea of telling a story that had room to breathe, visually, and had character arcs that were particularly complex. We quickly realized that, in terms of length, it required a different format than that of a traditional cinema release. The intention was never to make a “TV series” as such, but instead to find a format that could accommodate a film of the length that we needed for a more expansive story.
Like a movie, set over a number of episodes?
Yes. We see Devs as an expanded narrative using the language of a film. As it turned out, FX was incredibly supportive of that. When Alex sent me the first two parts of the script, I was in the middle of shooting Mission: Impossible – Fallout. I consumed them as I would the first few chapters of a page-turner novel. It was incredible. I pushed him for the rest, but he was still writing, so effectively I was getting the new parts as soon as they were finished – first drafts that practically didn’t change until the point of shooting a year and a half later, which says a lot about Alex’s instinct for writing. Once it was all in, we started talking about how the world of Devs would look in terms of an overall design, for example how the landscape of San Francisco had the same visual language as the interior of the Devs cube. Much like the conceptual conversations we had from the very beginning on Ex Machina, what was, and is, important to us is the need to keep expanding, to keep challenging ourselves and to never repeat. So there was always a real sense of forward motion with the ideas behind Devs.
When I visited San Francisco last year, it seemed to perfectly encapsulate the violent contradictions that define America today. The wealth disparity is so vast that, downtown, an apocalyptic number of people live on the street, many with disabilities or serious mental health conditions. Then, a scenic drive away, you’re in the lush Valley, amongst tech campus utopias designed by Norman Foster and Frank Gehry. How did you approach depicting this complex array of settings? Did you visit any of these iconic tech HQs for research?
I completely agree with the observation about San Francisco and the way it frames America as a whole. There is something profoundly disturbing and almost medieval about the disparity between the enormous wealth and the sheer volume of poverty and homelessness in that city. It’s very sad to see, and feels avoidable. In terms of how we approached the variety of settings, the best way to answer your question is to use the example of the exterior of the Devs campus. It was the cross-over point between the “real world” and the multi-layered world of Devs, which ultimately bled back into the “real” one. I wanted to find images that aligned the Brutalist architecture of the campus buildings with the imposing but benevolent natural architecture of the sequoia redwood trees that fenced it in. Rather than being juxtaposed, those two elements had to work in harmony – these were important compositions that had to bridge a narrative gap. That said, always in the back of our minds was the word “inexorable.” That word became our lone reference for the visual framework of Devs.
In Devs, there are so many different kinds of light, ranging from “natural” to all manners of mechanical – encircling trees, diffused through screens and through the architecture of the actual quantum computer and its enigmatic, isolated building. Can you talk about the way you dealt with light in Devs?
Again, space is everything: the light and the lenses make aesthetic decisions, whereas the camera makes narrative choices, and there are infinite ways to combine them. A good analogy for me would be the way sound behaves. I love the idea that the same instrument can sound completely different depending on how it is recorded in response to the room that it is in when played: it’s partly dependent on how you position the microphone in relation to the instrument, and the instrument’s relationship to the room. It’s the same approach with the way light behaves in a room and how you position the camera in relation to the light. After that, it comes down to a question of taste.
Coming back to scale, filmmakers 100 years ago knew that their audience would only see their final product in the cinema. Then came television and video, which drastically reduced the scale of the image. David Lynch has railed against watching his films on an iPhone; he considers it a kind of sacrilege. Does it matter to you that what you make may end up being seen on a very small device? Is this way of viewing changing the types of cinematographic images being conceived?
I’m a huge Lynch fan, by the way. I’d assume that how your work is viewed does matter, to any filmmaker, but that should never dictate the way in which you make it. We make the work to the best of our abilities, with every ounce of our creative energy, but there are so many ways in which the viewer can engage with the material, and that is their choice. To attempt to control that is not only futile; it is also in some respects very egotistical, born from insecurity. The reality is that cinemas are closing, and the way in which things are viewed is in a process of constant change. The ideal is that a film would work on any size screen, in any viewing context. That said, Annihilation really is a different and wholly fulfilling experience on the big screen, in part due to the incredible way it sounds. I once saw 2001 at Southbank Centre in London with the soundtrack stripped away and a live orchestra put in its place, and it created a whole new experience. The film – a film I’ve seen many, many times – was effectively reborn for me that night. It didn’t diminish the experience I’d had watching it before; it only added another surprise element to how it could be viewed. It’s important to stay focused on telling the story, and if it’s engaging enough, the film will work in any viewing context.
Hardy’s next film, The Man from Toronto, is set to premiere on January 18, 2022.