Photo: courtesy of the artist & Metro Pictures, New York
The Automation of Empathy
Much of the algorithmic output that Paglen includes is, in one way or another, questionable. In one image, the algorithm identifies Steyerl as 59.58 percent likely to be male and 40.42 percent likely to be female. This reading of Steyerl’s face shows how such algorithms produce only probabilities within rigid categories, whereas a great deal of research on gender points to its fluidity and certainly does not establish a definitive link between facial expression and gender. In other images, photographs that would look alike to a human eye are classified as representing completely different emotional states. Steyerl rolling her eyes back into her head, the algorithm tells us, expresses something between “neutral” and “sadness.” But these seemingly incorrect identifications aren’t the point. Instead, Paglen is showing us that computational systems “see” in radically different ways than humans do. Each image in Machine Readable Hito is, effectively, a double representation. One is the visual representation of a face that can be understood and interpreted by human vision. The other is the representation — oblique, indirect, via text — of what a computer sees when it classifies a digital image. This second series of representations is the important one here, imbued as it is with an assumed objectivity, even as that objectivity is belied by the judgments at which the AI programs arrive.