I bought my first projector in 2017 and almost immediately began considering the different ways that I could conceptually employ the tool. I found myself homing in on the theme of time and how it relates to my subjects. Here are some examples:
I asked subjects who had survived a traumatic event to bring images or videos representing the trauma, which I projected onto them as a way of processing and moving through the painful memories.
I projected childhood images of a subject back onto them, exploring the idea that the person we were as a child is always in us, no matter how old we get.
I tethered the projector to my camera so that every time I took a photo of the subject, the image immediately appeared on the computer screen and was projected back onto them, effectively lighting them with a one-second-older version of themselves.
My latest shoot is a convergence of these projector explorations and a recent fascination with cubism, photo-collage, and artificial intelligence.
Many AI images are impressive at first, in a too-good-to-be-true kind of way. Interiors boast gravity-defying architecture, and scenic terrains depict features never before seen on this planet. Yet there is inevitably an element that seems off when viewing these images. They are too perfect, lacking the tactility and weight that feel believable, a phenomenon often referred to as the uncanny valley. This raises the question: are AI images a success or a failure? Are they successfully creating visions of an ideal human or world according to their programming and an amalgamation of stock imagery, or are they a failure of coding, their authors lacking the awareness to include nuance and imperfection in their vision? It's these details, I'd argue, that tell us a person or a place is real.
Which brings me to this shoot. My process was as follows:
Create macro photos of a range of my subject's body parts
Import the images to my computer
Project the images onto the subject
Take a portrait
I didn’t overthink which parts to photograph, nor how to light them. I knew that the unpredictable nature of the process would dictate the resulting images more effectively than anything I could dream up in my head. After a few setups, I decided to make some in-camera multiple exposures, using the body parts as a base layer. Once again, I trusted that the process would take the series of controls I put in place and spit out something wildly unexpected.
The difference between this approach and AI is that the source data all comes from the same, very real person. There was no outside presence, no machine or creative team weighing in. No retouching. No Photoshop. Just the skin, hair, fingers, and teeth that make us all human.