By Stefka Hristova
Introduction
As photography has increasingly been transformed from a camera-based analogue image-making process into a computerised set of procedures, the ontology of the photographic image has been challenged. Portraits in particular have been reconfigured into what Mark B. Hansen has called “digital facial images” and Mitra Azar has subsequently reworked into “algorithmic facial images.”[1] This transition has amplified the role of portraiture as a representational device, as a node in a network of distribution, and as a process. Portraits now function simultaneously as modes of self-expression, as networked data, and as the result of algorithmic logics. This shift in the ways in which portraits circulate in culture speaks to what Grace Kingston and Michael Goddard have described as the essence of the “networked image.”[2] They articulate the emergence of “dual beings with two habitations: one in a conventional organic body, delimiting the space and time, a ‘here and now’; and the other taking the form of a data cloud distributed across multiple networks and housed in who-knows-what and who-knows-where, in server farms and databanks.”[3]
This transfiguration of the image from a visual artefact into a data artefact is particularly evident in the case of smartphone photography. The move from analogue camera-based portraits to networked images made on mobile devices has challenged the ontological status of the photograph. In its initial ontology, photography was seen as a way to record the world visually and truthfully – to write with light. As Daniel Rubinstein and Katrina Sluis write,
An image on the screen of a smartphone or a laptop looks like a photograph not because it has some ontological relationship to the object in the world, but because the algorithmic interventions that ensure that what is registered on the camera’s CCD/CMOS sensor is eventually output as something that a human would understand as a photograph.[4]
The smartphone photograph was validated as an instance of “photography” by the continued use of canonical visual devices. The image generated on our phones looks like a photograph or a portrait, and thus we assume that it is one and that it can represent us in a fashion similar to that delivered by traditional photography. This mimicry obscures the role of smartphone photographs as data-sets used both in surveillance and in algorithmic research about race, gender, age, sexual orientation, political orientation, emotional state, etc. The transplanting of visual conventions between different image-making processes is precisely what Lev Manovich refers to as an “aesthetics of continuity.”[5] The “aesthetics of continuity” patches over the disparate use of digital images as data: data that is relevant to machine vision and machine learning, and relevant to humans only in a secondary capacity.
It is through the continuous use of the conventions of portraiture that smartphone image-making parades as photography-based portraiture, even though its main function as a “network image” is to operate as an “invisible” and “operational” image rather than a “visual” one.[6] In other words, while consumers believe that they are participating in a visual regime of photography-based portraiture, the images that they generate are used in contemporary culture as raw data that trains a wide range of algorithms. The image is created by and for a set of computer commands. It is the “aesthetics of continuity” that obscures the important ways in which smartphone images, posing as self-portraits, have come to fuel algorithmically driven surveillance assemblages. While photography has always been embedded in what Allan Sekula terms structures of representation and repression, in the context of smartphone photography these two trajectories have merged even more profoundly.[7] In this article, I investigate the ways in which smartphone images operate both as self-portraits and as raw data harnessed in facial recognition and surveillance apparatuses. First, I outline a longer historical trajectory in which portraits have been used both as means of representation and as means for anthropometric research and surveillance in the context of policing. Next, I highlight the use of smartphone portraits and selfies in AI-driven biometric research that seeks to articulate biotypes of race, gender, sexual orientation, political preference, etc. Further, I argue that the popularity of the selfie has led to the introduction of pervasive surveillance technologies that use front-facing cameras. These surveillance technologies have become a staple of smartphone technology and now operate in a diverse set of contexts: from border checkpoints to grocery store kiosks and driver-assistance systems in autonomous vehicles. Last, I expose the mimicry through which smartphone data images of people pose as portraits and selfies by highlighting the conventions that obscure their role as surveillance and biometric data. I argue that this masquerade is carried out through the “aesthetics of continuity” of blur and bokeh, which transposes the photographic portraiture convention of shallow depth of field onto the mobile image through the use of algorithms.
Facial Recognition and Surveillance
From its inception, portraiture has acted both as a way of representing identity and as a way of articulating quantified selves.[8] While this idea resonates with the contemporary use of AI and facial recognition, I would like to highlight the ways in which scientists, as well as photographic critics of the time, harnessed it. Joshua Lauer has detailed the ways in which, as early as the 1880s, the portable camera was seen as a surveillance tool. Lauer writes that “the respectable soft surveillance of family albums and honorific photography can be contrasted with the camera’s repressive function as an instrument for detecting, classifying, and controlling social deviance.”[9] Allan Sekula has written extensively about the ways in which photography has been coupled with both portraiture and police surveillance since its beginning. He argued that in the 19th century, photographic portraiture came to “establish and delimit the terrain of the other, to define both the generalized look – the typology – and the contingent instance of deviance and social pathology.”[10] These processes were made possible by the linkage of photography to a “truth-apparatus” as the “camera is integrated into a larger ensemble: a bureaucratic clerical-statistic form of ‘intelligence’.”[11] In other words, photography became meaningful as a form of knowledge only when accompanied by data. As Sekula demonstrates, Alphonse Bertillon’s system of policing as well as Francis Galton’s anthropometric and racial classification systems depended on both photography and data – it is anthropometric data that anchored photography in an archive.[12] Bertillon created the “first effective modern system of criminal identification” by coupling bodily measurements with photography.[13] His system, however, was rooted in racial hierarchies. Bertillon’s contribution to racial anthropology comes from his book Ethnographie moderne: les races sauvages, in which he describes and measures the bodily structure of the “lower races”.[14] In a passage on the cranial measurement of his subjects, he compares the Hottentot head to the Parisian head (1250 vs 1500) in order to conclude that the typical Hottentot has the mental capacity of an “idiot” in Paris.[15] Galton similarly conducted extensive anthropometric studies that included facial measurements and photographic documentation. He argued that by using composite portraiture he would be able to identify a “biologically determined criminal type.”[16] Galton coined the term eugenics to describe the science and idea of breeding “human stock” and was among the first to apply statistical principles to the study of human intelligence. His work was also rooted in deep-seated racism. Galton travelled to South Africa in 1851 – a journey he recounted in his 1853 book Narrative of an Explorer in Tropical South Africa. In this book, he describes the Hottentot people he encountered as having a face that is common among the prisoners in England – a “felon face” as he put it.[17] In both cases, the photograph acted as metadata to the data of the catalogue card. In other words, the collection of data about subjects seen as aberrant was conducted in the realm of the physical – the subject him/herself was subjected to measurement. The photograph performed the important function of making data recognisable to human agents of surveillance and thus legitimising the idea of biotypes.
The idea of the face as a source of visual data was evoked not only by champions of anthropometrics such as Bertillon and Galton, but also by photography critics writing about the status of photography as art more broadly. The latter group is best represented by Lady Elizabeth Eastlake who, in 1857, positioned portraiture as caught between representation and quantification.[18] In the contested case of portraiture, where photography replaced miniature painting, she asks:
What indeed are nine-tenths of those facial maps called photographic portraits, but accurate landmarks and measurements for loving eyes and memories to deck with beauty and animate with expression, in perfect certainty, that the ground-plan is founded upon fact?[19]
These “facial maps” render visible one’s beauty and expression, as well as the “variable stages of insanity.”[20] Eastlake’s remarks echo a number of studies from her era that linked photography to the study of hysteria, and hence to the surveillance of affect. The 19th century neurologist Jean-Martin Charcot studied hysteria by photographing the facial expressions of his medical subjects.[21] The facial maps deployed by Charcot attempted to taxonomize hysteria. The face, indeed, was taken to be a truthful indicator of madness. Sander Gilman’s volume The Face of Madness is a primer on the rise of psychiatric photography and on the work of the English alienist Hugh W. Diamond in particular.[22] In another study from the 1850s, Guillaume-Benjamin-Armand Duchenne (de Boulogne) used photographs to study the expression of emotions on human faces, “which he provoked through electrical stimuli.”[23] In opposition to these views, Charles Darwin conducted similar research in the 1870s, although he concluded that hysteria or insanity cannot be detected from facial expressions, or indeed from photographs at all. The portrait thus became harnessed in anthropometric studies that attempted to justify the superiority of whiteness; in systems and scientific discourses that claimed that both criminality and intelligence are biologically determined by the size and shape of one’s head; and, last but not least, in the claim that hysteria, and human emotions more broadly, can be determined accurately from one’s facial expression. These discourses grounded photography in a knowledge domain driven by data, running counter to the idea of photography as a means of identity expression.
Because portraits isolate faces and people, the practice of using them to train surveillance and authentication systems permeates contemporary algorithmic culture as well. As Joseph Ferenbok aptly points out, “[a]s faces, and the people behind them, are becoming more readable by the surveillance authorities, the technologies and overall socio-technical assemblage supporting the surveillance practices are becoming more sophisticated, complex, and opaque.”[24] In algorithmic technology development, portraiture has been used to assess one’s race, gender, age, sexual orientation, emotional state, and political preference.[25]
Smartphone photography has played an important role in the development of biometric algorithms that aim to establish stable biotypes. Notable here is the Adience dataset, which has been used extensively in training algorithms to detect gender and age based on selfies.[26] Adience is a large dataset that contains images taken with iPhone 5 or later smartphones.[27] It contains 26,580 images found “in the wild,” that is, posted on the Internet. This database has been used by the developers of the Face Image Project, Gil Levi and Tal Hassner, to conduct research on AI-driven age and gender taxonomies.[28]
Smartphone images have also fuelled AI research on human emotion in particular. A contemporary database that uses selfies and portraits in relation to affect technologies is the infamous AffectNet: “a new database of facial expressions in the wild” which contains more than one million facial images collected from the Internet.[29] Numerous contemporary studies have harnessed “loving eyes” as data points useful in recognizing human emotions. Affect recognition technology has become even more pervasive and has thus revived the 19th century conventions that supported the problematic studies of hysteria.[30] More specifically, it has renewed the belief that hysteria, as well as emotions more broadly, can be read through a quantitative analysis of facial features. While in the 19th century the photographic gathering of facial data remained separate from the portrait studio, the two practices have become increasingly conflated in the contemporary algorithmic landscape. Now, images taken by our mobile devices masquerade as photographs, as portraits, as selfies; at the same time, they operate as data-points, as information, as the raw material for AI-driven recognition.
Selfie to Self-Capture
The doubling of photography as a means of identity expression and as a tool for visual data gathering is evident in the case of selfie photography. Having outlined the ways in which portrait photography has, from its beginning, been wedded to discourses of anthropometrics, I want to draw attention to the significant role selfies have played in the emergence of contemporary algorithm-driven biometrics.
Selfies first appeared in the early 2000s, initially as ways to document one’s own presence through the use of mirrors, self-timers, and, later, a front-facing lens. Selfies are part of a longer tradition of self-portraiture.[31] In the context of mobile technologies, selfies became connected to youth cultures and came to represent “self-performances where young people self-confidently participate in representing their own narratives in playful ways.”[32] Selfies were made possible by the use of a front-facing camera on mobile devices. Such cameras became a mainstream smartphone feature in 2010 with the introduction of Apple’s iPhone 4 and at first offered pixelated, low-quality images, since the lens was of secondary quality compared to the rear one.[33] Selfies entered popular discourse in 2013, when the word was officially added to Oxford Dictionaries and defined as “a photo of yourself that you take, typically with a smartphone or webcam, and usually put on social media.”[34] Understood as images taken with a mobile phone or webcam and posted on social media, selfies became a visual signature of urban youth.[35] As The Guardian wrote in 2013, selfies became “the self-portrait of the digital age.”[36] This mode of self-expression has been both condemned as narcissistic and praised as an aspect of geek culture. Further, selfies have been connected to political agency.[37] As Mona Kasra has argued in relation to Aliaa Magda Elmahdy’s self-portraits, selfies can also become “deliberate and personal acts of political expression” for youth that “resituate political knowledge, power, and information distribution.”[38] Claire Hampton’s analysis of the #nomakeupselfie provides yet another example of the ways in which the selfie has been harnessed to challenge hegemonic structures.[39] This context is important because the ubiquity of the selfie increased our comfort with front-facing cameras and articulated a discourse in which images produced through such cameras are seen as intrinsically linked to questions of representation rather than surveillance.
Selfies have also been harnessed as big data for algorithmic research. The Selfie Data Set published by the University of Central Florida’s Center for Research in Computer Vision is a great example.[40] According to the website,
… [the] Selfie dataset contains 46,836 selfie images annotated with 36 different attributes divided into several categories as follows. Gender: is female. Age: baby, child, teenager, youth, middle age, senior. Race: white, black, asian. Face shape: oval, round, heart. Facial gestures: smiling, frowning, mouth open, tongue out, duck face. Hair color: black, blond, brown, red. Hair shape: curly, straight, braid. Accessories: glasses, sunglasses, lipstick, hat, earphone. Misc.: showing cellphone, using mirror, having braces, partial face. Lighting condition: harsh, dim.[41]
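To make concrete what it means for a portrait to be handled as a set of data points, the following minimal Python sketch shows how attribute annotations of this kind are typically treated once loaded: each image is reduced to a row of labels that can be filtered, counted, and aggregated at scale. The records below are invented for illustration and only echo the categories quoted above; they do not reproduce the dataset’s actual file format.

```python
# Hypothetical, minimal representation of attribute-annotated selfies.
# The records are invented; only the attribute names echo the categories above.
from dataclasses import dataclass

@dataclass
class SelfieRecord:
    image_id: str
    is_female: bool
    age_group: str        # e.g. "baby", "child", "teenager", "youth", ...
    race_label: str       # the dataset's own coarse labels: "white", "black", "asian"
    smiling: bool
    duck_face: bool
    wearing_glasses: bool
    lighting: str         # "harsh" or "dim"

records = [
    SelfieRecord("img_0001", True, "youth", "asian", True, False, False, "harsh"),
    SelfieRecord("img_0002", False, "middle age", "white", False, False, True, "dim"),
    SelfieRecord("img_0003", True, "teenager", "black", True, True, False, "dim"),
]

# Once portraits are rows of attributes, they can be sorted, filtered, and
# aggregated like any other data: the "data point" side of the selfie.
smiling_youth = [r.image_id for r in records if r.smiling and r.age_group == "youth"]
print(smiling_youth)  # ['img_0001']
```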
This selfie database is exemplary of the ways in which self-portraits have been harnessed for the purposes of facial recognition. Here the selfies are transformed into data points and circulated in big data structures. In another instance, selfie data sets were created by scraping Instagram accounts for images tagged with the hashtag #selfie.[42] As Kate Crawford and Trevor Paglen have aptly noted, such labelling schemes rest on
unsubstantiated and unstable epistemological and metaphysical assumptions about the nature of images, labels, categorization, and representation [that] hark back to historical approaches where people were visually assessed and classified as a tool of oppression and race science.[43]
In the context of algorithmic surveillance-based culture, selfie images have provided yet another avenue for training facial recognition and surveillance systems and have thus undermined the liberatory potential with which they were once endowed. Recently, the term “selfie” itself has taken on a definition that is more closely related to surveillance. On smartphone devices, facial recognition authentication has become a standard feature. This harnessing of the selfie as a mode of facial recognition is based on a new surveillance-based definition of what a selfie is. In a research article from 2019 titled “DocFace+: ID Document to Selfie Matching,” Yichun Shi and Anil K. Jain argue for the necessity of accurately matching ID documents to “selfie” images. As part of this paper, the authors offer a redefinition of what the term “selfie” means in the context of surveillance-oriented algorithmic culture: “Technically, the word selfie refers to self-captured photos from mobile phones. But here, we define “selfies” as any self-captured live face photos, including those from mobile phones and kiosks.”[44]
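Shi and Jain’s system has its own specialised architecture, but the general pattern of ID-to-selfie matching that such systems rely on can be sketched generically: both images are reduced to numerical face embeddings and their similarity is compared against a threshold. In the Python sketch below, the embed() function is a stand-in (a fixed random projection) used only to make the example runnable; an actual system would use a trained face-recognition network and calibrated thresholds.

```python
# Generic embed-and-compare sketch of ID-document-to-selfie matching.
# embed() is a toy stand-in for a trained face-embedding network.
import numpy as np

rng = np.random.default_rng(0)
PROJECTION = rng.normal(size=(128, 64 * 64))  # stand-in "model" weights

def embed(face_image: np.ndarray) -> np.ndarray:
    """Map a 64x64 grayscale face crop to a 128-d unit vector (toy stand-in)."""
    vec = PROJECTION @ face_image.reshape(-1)
    return vec / np.linalg.norm(vec)

def is_same_person(id_photo: np.ndarray, selfie: np.ndarray, threshold: float = 0.6) -> bool:
    """Accept the pair if the cosine similarity of the two embeddings clears a threshold."""
    similarity = float(embed(id_photo) @ embed(selfie))
    return similarity >= threshold

# Toy usage with random arrays standing in for aligned face crops.
id_photo = rng.random((64, 64))
kiosk_capture = rng.random((64, 64))
print(is_same_person(id_photo, kiosk_capture))
```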
What is new here is that “selfies” no longer require one to physically take the photograph oneself. Selfies are images of “the self” captured by automated surveillance systems. The agency behind consciously taking one’s own photograph is negated by the automation of the process. The “selfie” remains recognisable as such only through what Lev Manovich terms the “aesthetics of continuity,” in which one sees oneself as the image is being recorded. Here the “selfies” are taken by surveillance systems such as Australia’s “SmartGate,” the e-Passport gates in the UK, Automated Passport Control in the US, and the ID card gates in China. This is significant because initially self-portraits and selfies were seen as ways of increasing the subject’s agency with regard to representation. In a selfie, the subject indeed had great control over their representation, as this photographic genre required particular posing and thus a conscious construction of identity. The Guardian playfully outlined the embodied conventions of the selfie:
A doe-eyed stare and mussed-up hair denotes natural beauty, as if you’ve just woken up and can’t help looking like this. Sexiness is suggested by sucked-in cheeks, pouting lips, a nonchalant cock of the head and a hint of bare flesh just below the clavicle. Snap![45]
When selfies are displaced into “self-captured live face photos,” agency shifts away from the self as the subject taking the photograph and toward the photograph emerging by itself.[46] “Self” here refers to the autonomous process of photography: photography operating by itself.
This significant shift in what the self means in regard to selfies and self-captured face photos has been addressed in a subfield of surveillance called “selfie biometrics.” A recent book, Selfie Biometrics: Advances and Challenges, outlines the basic premises and techniques of this burgeoning field.[47] In the introduction to this edited volume, the editors Ajita Rattani, Reza Derakhshani, and Arun Ross make an argument for the increasing viability of the selfie as a valuable data-source for user authentication – in other words, for recognition and surveillance – because of advancements in image resolution and lens aperture size. The lens discussion is important here, since proposed selfie lenses feature a wide aperture of f/1.4 – which, combined with a longer focal length, mimics a portrait lens and allows for the articulation of a sharp face against a blurry background.[48] Here, selfie biometrics is defined as “an authentication mechanism where a user captures images of her biometric traits (such as the face or ocular region) by using the imaging sensors available in the device itself.”[49] The idea of the selfie has thus again shifted away from modes of representation and agency, towards an automated “capture” of biometric traits. Indeed, the selfie functions no longer as a self-portrait, but rather as a data-gathering mechanism – a “selfie capture.” Further, the authors distinguish between three types of selfie biometrics: face; ocular biometrics (imaging and use of characteristic features extracted from the eyes for personal recognition); and fingerphoto, “touchless fingerprint recognition technology, where the back-facing smartphone cameras acquire high-resolution photographs of finger ridge patterns.”[50] These features have commonly been used in both anthropometrics and biometrics and have been seen as staples of identification, policing, and surveillance. What is particularly interesting here is the articulation of so-called “soft” biometrics. In this biometric profile, ethnicity, gender, and age are assessed and recorded. Another chapter in the book explicitly links the rise of selfie soft biometrics to the front-facing camera on mobile devices: “selfie soft biometrics is gaining the most popularity due to the recent advancements in front-facing cameras in smartphones.”[51] It is worth noting that the same chapter details the ways in which convolutional neural networks (CNNs) are able to assess one’s age and gender as well as one’s mood. Selfies, much like most smartphone portraiture, should thus be understood as an extension of the 19th century projects of surveillance and the anthropometric articulation of biotypes. Selfies today fuel algorithmically driven research similar to the work of Galton, Bertillon, and Duchenne.
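As a rough illustration of the kind of model this soft-biometrics literature describes, the sketch below defines a small multi-head convolutional network in PyTorch that maps a face crop to separate predictions for age group, gender, and mood. The layer sizes and label counts are invented for illustration; published systems are far larger and are trained on datasets such as Adience or AffectNet.

```python
# Minimal multi-head CNN sketch for "soft biometric" attributes.
# The architecture and label counts are illustrative, not taken from the cited works.
import torch
import torch.nn as nn

class SoftBiometricNet(nn.Module):
    def __init__(self, n_age_groups=8, n_genders=2, n_moods=7):
        super().__init__()
        self.backbone = nn.Sequential(          # shared convolutional feature extractor
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # One classification head per "soft" attribute.
        self.age_head = nn.Linear(32, n_age_groups)
        self.gender_head = nn.Linear(32, n_genders)
        self.mood_head = nn.Linear(32, n_moods)

    def forward(self, x):
        features = self.backbone(x)
        return self.age_head(features), self.gender_head(features), self.mood_head(features)

# Toy usage: one 128x128 RGB face crop in, three sets of class scores out.
model = SoftBiometricNet()
age_logits, gender_logits, mood_logits = model(torch.randn(1, 3, 128, 128))
print(age_logits.shape, gender_logits.shape, mood_logits.shape)
```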
Indeed, an increase in research on soft biometric data coincided with the release of Apple’s front-facing camera on the iPhone 4 in 2010 and of its portrait mode in 2016.[52] The data collected with Apple’s front-facing and portrait mode cameras helped accelerate facial recognition research on mobile devices. It ultimately resulted in the popularisation of the selfie as an image for facial recognition and in the mainstream acceptance of facial recognition through Apple’s Face ID feature, introduced with the iPhone X in 2017.[53]
This transition of the “selfie” from an instance of “self-portraiture” to “self-capture” harnessed in biometrics speaks precisely to the ways in which smartphone photography has helped usher in the distillation of the photograph from a visual form into a data entity. The discourse of “capture” speaks precisely to the repressive function of photography, this time enacted both symbolically (actions are captured and used to determine one’s social and economic status) and, at times, as the actualized imprisonment of the subject (captures are used to identify and convict criminals). The “here and now” indexicality that François Arago praised when announcing the birth of photography is now parsed out into a set of distributed variables.[54] No longer “here,” no longer “now,” not even “us” for long, these facial maps speak to algorithmic logics and perform for algorithmic visions that separate our images from ourselves in profound ways. This distinction supports Kate Crawford’s claim that whereas anthropometrics and phrenology deployed photography in analysing “human subjects,” AI-driven assessments have further transformed people into “data subjects.”[55]
The Continuity Aesthetic of Blur
From its beginning, photography was seen as a way to capture a slice of real life and, more specifically, to represent the people and places that make up everyday life. The prominent photo historian Geoffrey Batchen called the photograph a “single vertical slice cut through the horizontal passage of time and motion; a passage lived in the past.”[56] In this slice-of-life capture, because of technical limitations, people were photographed in sharp focus while backgrounds receded into a soft blur. This convention of using shallow depth of field in portraiture has remained a staple of photographic portraiture up until today. In its early stages, fixing the image involved exposures of widely varying duration. Niépce’s first photograph required an exposure of about eight hours, while Daguerre managed to reduce exposure times to between three and fifteen minutes. As technology advanced, the subject was captured not “in time” but “on time” – duration was reduced to the instant. In his essay “A Short History of Photography,” Walter Benjamin laments the passing of the earlier photographic portraits, whose subjects lived “into the instant not out of it” – they “grew as it were, into an image.”[57] The long exposures required subjects to sit still in front of the camera in order to emerge in sharper focus. Early portrait studios also used blurry painted dioramas against which the subject appeared comparatively sharp, since sitters asked to stay still for up to a minute would often appear blurry against the perfectly still – thus perfectly in-focus – background.
The prolonged exposures of early photographic portraiture were also necessitated by the constraints of early photographic lenses. As Rudolph Kingslake notes in his extensive book A History of the Photographic Lens, “the first lens to be used on a camera was the achromatic landscape lens of C. Chevalier (1804-1859).”[58] The aperture of this lens was only f/15.[59] A dedicated portrait lens followed in 1840, designed by J. M. Petzval, although even that lens was deemed “not good enough for practical portraiture.”[60] The Petzval lens featured a much wider aperture of f/3.6, became a staple of the photographer’s toolkit, and in the 1890s was supplemented by the introduction of the telephoto lens.[61]
As photographic technology became more advanced, the photographic convention of a blurred background was achieved with macro and telephoto lenses that created a shallow depth of field. In photographic terms, this means that portrait photographers use telephoto lenses of over 70mm and select a wide aperture (a small f-number) in the f/2.0-2.8 range. This convention is often taught in photography manuals. For example, Erik Valind’s Portrait Photography: From Snapshots to Great Shots, one among many such manuals, specifies that “[a] shallow depth of field is often desired because it draws attention to the subject’s face while blurring out the less important features. This selective focus is a great way to create strong portraits by directing the viewer straight to the subject’s eyes.”[62]
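The optics behind this advice can be summarised with a standard first-order approximation for depth of field, a general formula from photographic optics rather than from the manuals cited here. For a subject at distance s much greater than the focal length f, with f-number N and circle of confusion c, the zone of acceptable sharpness is roughly

```latex
% Standard first-order depth-of-field approximation (subject distance s >> focal length f)
\mathrm{DoF} \;\approx\; \frac{2\,N\,c\,s^{2}}{f^{2}}
```

An 85mm lens at f/2.0 focused on a subject two metres away, taking c to be about 0.03mm for a full-frame sensor, thus yields a sharp zone of roughly 2 × 2 × 0.03 × 2000² / 85² ≈ 66mm: a face in focus and everything behind it dissolving into blur. Shortening the focal length or narrowing the aperture widens that zone dramatically, which is why, as discussed below, smartphone lenses cannot produce the effect optically.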
This convention was carried forward and reintroduced as a dominant aesthetic with the rise of digital photography in the late 1990s. Known as “bokeh,” a blurred, orb-filled background became a visual trademark of the digital aesthetic. The effect is typically achieved with a telephoto lens with a wide aperture in the range of f/1.4 to f/1.8.[63] It renders the setting behind the subject as a patchwork of fuzzy orbs.
With the introduction of cell phone photography, the convention of blurring the background when creating photographic portraits came to be delivered through algorithms that isolated the human figure and blurred the perceived “background.” This algorithmic mode of generating blur mimics the physics of the DSLR camera. For example, the 2020 AIM challenge on rendering realistic bokeh used images from a Canon 7D DSLR camera as its reference and attempted to create similar images algorithmically with smartphone cameras.[64] Algorithmic blur was introduced to smartphone cameras in 2014, and by 2016 it had become a common feature of most “portrait modes.” When Google introduced an algorithm for mimicking shallow depth of field in its Camera app in 2014, it termed the effect “Lens Blur.” The lens on a smartphone camera is fairly basic and operates at a level of sophistication similar to that of early photographic lenses: “standard cell phone cameras cannot produce [blur] optically, as their short focal lengths and small apertures capture nearly all-in-focus images.”[65] The software developers found a way to simulate telephoto lens effects: “Lens Blur replaces the need for a large optical system with algorithms that simulate a larger lens and aperture.”[66]
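The general recipe behind such “Lens Blur” and “portrait mode” pipelines (isolate the person, blur everything else, composite the two) can be sketched in a few lines of Python. In the sketch below, the subject mask is simply supplied as an input; in a production system it would come from a machine-learned segmentation model or a depth map, and the actual Google and Apple pipelines involve far more sophisticated depth estimation and rendering than this.

```python
# Sketch of segmentation-based synthetic background blur ("portrait mode" style).
# The subject mask is assumed to be given; real systems derive it from
# machine-learned segmentation and/or a depth map.
import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_portrait_blur(image: np.ndarray, subject_mask: np.ndarray, sigma: float = 8.0) -> np.ndarray:
    """Blur the background of an HxWx3 float image while keeping the masked subject sharp."""
    blurred = gaussian_filter(image, sigma=(sigma, sigma, 0))   # blur spatially, not across channels
    # Soften the mask edge so the subject does not look cut out with scissors.
    soft_mask = gaussian_filter(subject_mask.astype(float), sigma=3.0)[..., None]
    return soft_mask * image + (1.0 - soft_mask) * blurred

# Toy usage: a random "photo" with a rectangular "person" in the centre.
rng = np.random.default_rng(1)
photo = rng.random((240, 320, 3))
mask = np.zeros((240, 320))
mask[60:200, 120:220] = 1.0
print(synthetic_portrait_blur(photo, mask).shape)  # (240, 320, 3)
```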
In 2016, Apple shifted the language around blur and bokeh to make it explicitly part of the photographic portrait aesthetic with its “portrait mode.” As Sam Byford writes,
Apple makes use of this tech to drive its dual-camera phones’ portrait mode. The iPhone’s image signal processor uses machine learning techniques to recognize people with one camera, while the second camera creates a depth map to help isolate the subject and blur the background. The ability to recognize people through machine learning wasn’t new when this feature debuted in 2016, as it’s what photo organization software was already doing. But to manage it in real time at the speed required for a smartphone camera was a breakthrough.[67]
The articulation of blur and bokeh in relation to information processing has been a central problem for AI developers. Researchers have devoted a significant amount of work to isolating subjects from backgrounds and introducing blurring effects that mimic the photographic portraiture convention. This work has shaped both consumer practices (creating more realistic blur for selfies) and surveillance structures (identifying subjects for the purposes of facial recognition). A study on generating realistic bokeh notes specifically why selfies are a good candidate for training the algorithm to recognize human/data subjects. As the argument goes, “[s]uch images typically feature relatively large subject heads … further selfies are mostly captured on a mobile phone, thus they have a large depth of field.”[68] These features make them the perfect candidates for creating an algorithmic effect that is physically impossible given the limitations of the hardware itself. As is evident in convolutional neural network (CNN) research, both bokeh and blur are being deployed as tools that allow for the isolation and recognition of the most significant objects in a picture.
These features come to the forefront again when an image contains multiple objects. As Holly Chiang and colleagues write:
Another instance is if you have a photo of a target person of interest in front of a famous landmark but there are too many tourists in the background, our detector will be able to determine that the person and the landmark are the most significant objects in the picture, and apply photography techniques to such as bokeh or blur to reduce the background noise. Bokeh with focus on multiple objects, in particular, is very difficult to achieve in the real world because cameras can only have one depth of view for focusing. Therefore, if we can identify the important objects’ bounding boxes, we can theoretically focus and blur multiple objects with a bokeh effect that is impossible to do otherwise.[69]
As the authors of the multi-object recognition paper note, “[t]o simulate the bokeh effect we applied a gaussian filter followed by randomly selecting pixels to enlarge into circles, followed by another gaussian layer.”[70] No longer a function of a camera lens, and no longer aimed at a now-absent human vision, the bokeh effect is here created by machine learning algorithms for machine vision.
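Read as a recipe, the procedure the authors describe can be approximated in a short Python sketch: blur the image, enlarge a handful of randomly chosen pixels into coloured discs, then smooth the result again. This is a loose reconstruction of the quoted description rather than the authors’ own code, and the parameters (number of orbs, disc radius, blur strengths) are invented.

```python
# Loose reconstruction of the described bokeh simulation:
# gaussian blur -> enlarge randomly selected pixels into circles -> gaussian blur again.
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_bokeh(image: np.ndarray, n_orbs: int = 40, radius: int = 6, seed: int = 0) -> np.ndarray:
    """Approximate a bokeh-like rendering of an HxWx3 float image in [0, 1]."""
    rng = np.random.default_rng(seed)
    out = gaussian_filter(image, sigma=(4, 4, 0))            # first gaussian pass
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    for _ in range(n_orbs):                                  # enlarge random pixels into circles
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        disc = (ys - cy) ** 2 + (xs - cx) ** 2 <= radius ** 2
        out[disc] = image[cy, cx]                            # paint a disc of that pixel's colour
    return gaussian_filter(out, sigma=(2, 2, 0))             # final softening pass

# Toy usage on a random "background" image.
background = np.random.default_rng(2).random((240, 320, 3))
print(simulate_bokeh(background).shape)  # (240, 320, 3)
```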
Algorithmically produced shallow depth of field (hence a blurry or bokeh background) legitimises the status of the algorithmic image as a photograph and obscures the deployment of the algorithmic image as a tool of surveillance. The algorithmic articulation of bokeh has provided grounds for implementing depth maps that isolate subject from background for the purposes of facial recognition and surveillance. Blur and bokeh, as aesthetics of continuity, have thus been transformed from a visual element used to centre one’s attention on the foreground object or subject to a data device made useful for information processing.[71] In the case of Apple, its facial recognition app “Recognizr” harnesses the ability to separate subject from background in order to automate the recognition of subjects across collections of photographs taken on a mobile device. This app renders the inner workings of surveillance systems as a “fun” consumer feature and obscures the long history of portraiture-based surveillance.[72]
Much like their 19th century counterparts, contemporary AI-driven surveillance mechanisms are laden with racial and gender bias. Joy Buolamwini and Timnit Gebru’s exceptional work on algorithmic inequality offers a prominent demonstration. In their groundbreaking study “Gender Shades,” Buolamwini and Gebru showed that facial recognition software misclassifies darker-skinned females at a far greater rate than it does lighter-skinned males.[73] Ruha Benjamin’s book Race after Technology has further detailed the ways in which AI technology continues to propagate anti-Blackness.[74] Yet AI facial recognition technology is presented as convenient, efficient, and fun. It is fuelled by everyday consumer practices connected to cell phone portraiture and self-portraiture.
Conclusion
Computational photography and digital imaging, harnessed in the service of biometrics and facial recognition, have transformed loving eyes, pouting lips, and sucked-in cheeks into data-points. The processes of translating analogue photographic images into computer data have transformed smartphone photography from a prominent device of self-expression into the ultimate tool for surveillance. Initially articulated as self-portraits, “selfies” became “selfie captures” in the context of selfie biometrics. We learned a new mode of posing: away from making sassy faces and towards the straight and intent look required by Face ID authentication regimes. Portraits became portrait modes in which algorithms were given an opportunity to train themselves at isolating human subjects from perceived backgrounds. In reflecting on the ways in which photographic images produced on our smartphone devices are increasingly created for machine seeing by machine learning algorithms, it has become increasingly important to understand the history of photography and its lasting conventions. These conventions are continuously used in order to legitimise data-driven images as representative of our own image, as honorific. They appeal to the bourgeois aesthetic of photographic portraiture, while at the same time articulating neoliberal surveillance assemblages in which identities are constructed based on the intentionality of algorithms, which decide when an image is taken and how many data points are gathered, rather than that of the subject in front of the lens. Unpacking the photographic conventions, such as the “aesthetics of continuity” of blur and bokeh, that lie behind this new class of computational photography produced with ease on smartphone devices is a crucial component of a newly emerging algorithmic literacy. It is by grappling with the historical roots of photographic portraiture as both a mode of representation and a mode of quantification that we are able to discern the new ways in which photography has been summoned as a veil for our increasingly datafied selves. Understanding the historical trajectory of the quantified self in relation to photography allows us to think critically about the ways in which cell phone photography is used in contemporary surveillance and biometric enterprises. Further, unpacking the visual conventions that disguise cell phone images as portraits when in reality they are raw data for algorithmic calculation helps foster a much-needed critical media literacy.
Notes
[1] Mark B.N. Hansen, “Affect as medium, or the ‘digital-facial-image’,” Journal of Visual Culture, 2(2) (2003): 206-228. https://doi.org/10.1177/14704129030022004. Mitra Azar, “Algorithmic Facial Image: Regimes of Truth and Datafication,” A Peer-Reviewed Journal About APRJA, 7(1) (2018): 27-35. https://doi.org/10.7146/aprja.v7i1.115062.
[2] Grace Kingston and Michael Goddard, “The Aesthetic Paradoxes of Visualizing the Networked Image,” Contemporary Arts and Cultures (2017): 6. https://contemporaryarts.mit.edu/pub/aestheticparadoxes.
[3] Ibid.
[4] Daniel Rubinstein and Katrina Sluis, “The digital image in photographic culture: algorithmic photography and the crisis in representation” In Martin Lister, ed. The photographic image in digital culture (London and New York: Routledge, 2013): 22-40, 28.
[5] Lev Manovich, The Language of New Media. (Cambridge, MA: MIT Press, 2002), 144.
[6] See Trevor Paglen, “Invisible Images: Your Pictures Are Looking at You,” Architectural Design, 89 (2019): 22-27, DOI: https://doi.org/10.1002/ad.2383; Harun Farocki, “Phantom Images,” Public 29 (2004). https://public.journals.yorku.ca/index.php/public/article/view/30354; and Trevor Paglen, “Operational Images,” e-flux 59 (November 2014). https://www.e-flux.com/journal/59/61130/operational-images/.
[7] Allan Sekula, “The Body and the Archive,” October 39 (Winter 1986): 3-64, 6.
[8] Deborah Lupton, The Quantified Self, (Cambridge UK, and Medford, MA: Polity, 2016).
[9] Josh Lauer, “Surveillance History and the History of New Media: An Evidential Paradigm.” New Media & Society 14, no. 4 (June 2012): 566–82, 573. https://doi.org/10.1177/1461444811420986.
[10] Sekula, 7.
[11] Ibid., 16.
[12] Ibid., 18.
[13] Ibid., 18.
[14] Alphonse Bertillon. Ethnographie moderne: les races sauvages (Paris: G. Masson, 1883). https://gallica.bnf.fr/ark:/12148/bpt6k104250m/texteBrut.
[15] Ibid.
[16] Sekula, 19.
[17] Francis Galton, Narrative of an Explorer in Tropical South Africa (London: John Murray, 1853). https://galton.org/books/south-west-africa/galton-1853-travels-in-south-africa-1up-linked.pdf.
[18] Lady Elizabeth Eastlake. “Photography.” In Alan Trachtenberg, ed. Classic Essays on Photography (New Haven, Conn: Leetes Island Books: 1981), 39-69.
[19] Ibid., 65.
[20] Ibid.
[21] Georges Didi-Huberman. Invention of Hysteria: Charcot and the Photographic Iconography of the Salpêtrière (Cambridge, MA: MIT Press, 2004).
[22] Sander L. Gilman, ed. The Face of Madness: Hugh Diamond and the Origin of Psychiatric Photography (Brattleboro, VT: Echo Point Books and Media, 2014).
[23] Thy Phu and Linda M. Steer, “Introduction,” Photography and Culture 2, no. 3, (2019): 235-239, 236, DOI: 10.2752/175145109X12532077132194
[24] Joseph Ferenbok, “Configuring the Face as a Technology of Citizenship: Biometrics, Surveillance and the Facialization of Institutional Identity.” In: Kalantzis-Cope P., Gherab-Martín K. eds, Emerging Digital Spaces in Contemporary Society. (London: Palgrave Macmillan, 2010), 126-127: 127. https://doi.org/10.1057/9780230299047_21.
[25] For an example of race identification, see Alexander Todorov, Christopher Y. Olivola, and others, “Social Attributions from Faces: Determinants, Consequences, Accuracy, and Functional Significance.” Annual Review of Psychology, 66, (January, 2015): 519-545. https://doi.org/10.1146/annurev-psych-113011-143831. For an example of gender identification, see Rajeev Ranjan and Vishal Patel and others, “HyperFace: A Deep Multi-Task Learning Framework for Face Detection, Landmark Localization, Pose Estimation, and Gender Recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 41.1 (January 1, 2017): 121-135. DOI: 10.1109/TPAMI.2017.2781233. For an example of age identification, see Angulu Raphael and Jules R. Tapamo and Adremi O. Adewumi, “Age estimation via face images: a survey.” J Image Video Proc, 42 (2018). https://doi.org/10.1186/s13640-018-0278-6. For an example of sexual orientation identification, see Y. Wang and M. Kosinski, Deep neural networks are more accurate than humans at detecting sexual orientation from facial images. Journal of Personality and Social Psychology, 114.2 (2018), 246-257. https://doi.org/10.1037/pspa0000098. For an example of emotional state identification, see Avita Saxena, Ashish Khanna, and Deepak Gupta, “Emotion Recognition and Detection Methods: A Comprehensive Survey,” Journal of Artificial Intelligence and Systems, 2 (2020), 53-79. https://doi.org/10.33969/AIS.2020.21005. For an example of political preference identification, see Michal Kosinski, “Facial recognition technology can expose political orientation from naturalistic facial images,” Scientific Reports 11.100 (2021). https://doi.org/10.1038/s41598-020-79310-1
[26] Tal Hassner, The OUI-Adience: Face Image Project, https://talhassner.github.io/home/projects/Adience/Adience-data.html
[27] Ibid.
[28] Gil Levi and Tal Hassner, Age and Gender Classification Using Convolutional Neural Networks, IEEE Workshop on Analysis and Modeling of Faces and Gestures (AMFG), at the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Boston, June 2015. https://talhassner.github.io/home/projects/cnn_agegender/CVPR2015_CNN_AgeGenderEstimation.pdf. And Eran Eidinger, Roee Enbar, and Tal Hassner, Age and Gender Estimation of Unfiltered Faces, Transactions on Information Forensics and Security (IEEE-TIFS), Special Issue on Facial Biometrics in the Wild, Volume 9.12, (Dec, 2014): 2170 – 2179. https://talhassner.github.io/home/projects/Adience/Adience/EidingerEnbarHassner_tifs.pdf
[29] AffectNet, http://mohammadmahoor.com/affectnet/.
[30] Christoffer Heckman, “AI can now read emotions – should it?” The Conversation (January 8, 2020). https://theconversation.com/ai-can-now-read-emotions-should-it-128988
[31] Marika Lüders, Lin Prøitz, Terje Rasmussen, “Emerging personal media genres,” New Media & Society 12 (2010): 947–963, 959.
[32] Ibid., 959.
[33] Charles Arthur, “iPhone 4 unveiled by Apple,” The Guardian, (June 7, 2010) https://www.theguardian.com/technology/2010/jun/07/iphone-4-apple-wwdc.
[34] “Selfie,” Oxford Learner’s Dictionaries. https://www.oxfordlearnersdictionaries.com/us/definition/english/selfie.
[35] Nicholas Mirzoeff, How to See the World: An Introduction to Images, from Self-Portraits to Selfies, Maps to Movies, and More (New York: Basic Books, 2016).
[36] Elizabeth Day, “How selfies became a global phenomenon,” The Guardian, (July 13, 2013). https://www.theguardian.com/technology/2013/jul/14/how-selfies-became-a-global-phenomenon.
[37] André Gunthert, “The Consecration of the Selfie: A Cultural History,” in Julia Eckel, Jens Ruchatz, and Sabine Wirth, eds., Exploring the Selfie: Historical, Theoretical, and Analytical Approaches to Digital Self-Photography (Palgrave Macmillan, 2018).
[38] Mona Kasra, “Digital-networked images as personal acts of political expression: New categories for meaning formation,” Media and Communication, 5(4) (2017): 51–64, 51, 53, https://doi.org/10.17645/mac.v5i4.1065.
[39] Claire Hampton, “#nomakeupselfies: The Face of Hashtag Slacktivism,” Networking Knowledge: Journal of the MeCCSA Postgraduate Network 8(6) (2015). https://doi.org/10.31165/nk.2015.86.406 and Paul Frosh, The Poetics of Digital Media (Cambridge, UK, and Medford, MA: Polity, 2019)
[40] Selfie Data Set, https://www.crcv.ucf.edu/data/Selfie/.
[41] Ibid.
[42] “Data Collection and Analysis,” Selfiecity, http://selfiecity.net/#dataset.
[43] Kate Crawford and Trevor Paglen, Excavating AI: The politics of images in machine learning training, https://www.excavating.ai.
[44] Yichun Shi and Anil K. Jain, “DocFace+: ID Document to Selfie Matching,” IEEE Transactions on Biometrics, Behavior, and Identity Science 1.1 (January 2019): 56-67, 56. DOI: 10.1109/TBIOM.2019.2897807.
[45] Day, “How selfies became a global phenomenon.”
[46] Shi and Jain, “DocFace+: ID Document to Selfie Matching,” 56.
[47] Ajita Rattani, Reza Derakhshani, Arun Ross, eds. Selfie Biometrics. Advances in Computer Vision and Pattern Recognition (Springer, Cham, 2019).
[48] Ajita Rattani, Reza Derakhshani, Arun Ross, “Introduction to Selfie Biometrics.” in Rattani A., Derakhshani R., Ross A. eds, Selfie Biometrics. Advances in Computer Vision and Pattern Recognition. (Springer, Cham. 2019). https://doi.org/10.1007/978-3-030-26972-2_1.
[49] Ibid.
[50] Ibid.
[51] Ajita Rattani and Mudit Agrawal. “Soft-Biometric Attributes from Selfie Images,” in Rattani A., Derakhshani R., Ross A. eds, Selfie Biometrics. Advances in Computer Vision and Pattern Recognition. (Springer, Cham. 2019). https://doi.org/10.1007/978-3-030-26972-2_1.
[52] See Attaullah Buriro, Zahid Akhtar, Bruno Crispo and Fillipo Del Frari, “Age, Gender and Operating-Hand Estimation on Smart Mobile Devices,” 2016 International Conference of the Biometrics Special Interest Group (BIOSIG) (Darmstadt, Germany, 2016): 1-5, DOI: 10.1109/BIOSIG.2016.7736910.
[53] Russel Brandom, “The five biggest questions about Apple’s new facial recognition system.” The Verge. (September 12, 2017). https://www.theverge.com/2017/9/12/16298156/apple-iphone-x-face-id-security-privacy-police-unlock.
[54] Dominique Francois Arago. “Report” In Alan Trachtenberg, ed. Classic Essays on Photography (New Haven, Conn: Leetes Island Books: 1981): 15-26.
[55] Jacob Metcalf and Kate Crawford, “Where are human subjects in Big Data research? The emerging ethics divide,” Big Data & Society (January–June 2016), 1-14. https://doi.org/10.1177/2053951716650211
[56] Geoffrey Batchen, Each Wild Idea: writing photography history (Cambridge, MA: MIT Press, 2002), 47.
[57] Walter Benjamin, “Short History of Photography” (1931). Artforum. https://www.artforum.com/print/197702/walter-benjamin-s-short-history-of-photography-36010.
[58] Rudolph Kingslake, A History of the Photographic Lens, (Boston, Academic Press, 1989).
[59] Ibid., 7.
[60] Ibid., 5.
[61] Ibid., 8.
[62] Erik Valind, Portrait Photography: From Snapshots to Great Shots (Peachpit Press, 2014), 25.
[63] “7 Best Camera Lenses for Bokeh Photography,” Adorama.com (May 21, 2020). https://www.adorama.com/alc/5-best-camera-lenses-for-bokeh-photography/.
[64] Andrey Ignatov, et al. “AIM 2020 Challenge for Rendering Realistic Bokeh” ArXiv, (2020), https://arxiv.org/abs/2011.04988.
[65] Neal Wadhwa et al., “Synthetic depth-of-field with a single-camera mobile phone,” ACM Transactions on Graphics 37, no. 4, Article 64 (July 2018). https://dl.acm.org/doi/10.1145/3197517.3201329.
[66] Carlos Hernandez, “Lens Blur in the new Google Camera app” Google AI Blog (April 16, 2014). https://ai.googleblog.com/2014/04/lens-blur-in-new-google-camera-app.html.
[67] Sam Byford, “How AI is Changing Photography,” The Verge (January 31, 2019). https://www.theverge.com/2019/1/31/18203363/ai-artificial-intelligence-photography-google-photos-apple-huawei.
[68] Friedrich, Nadine et al. “Faking it: Simulating background blur in portrait photography using a coarse depth map estimation from a single image.” WSCG 2016: short communications proceedings: The 24th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision 2016 in co-operation with EUROGRAPHICS: University of West Bohemia, Plzen, Czech Republic May 30 – June 3 2016, (2016): 17-23. https://dspace5.zcu.cz/bitstream/11025/29683/1/Friedrich.pdf.
[69] Holly Chiang, Yifan Ge, and Connie Wo, “Multiple Object Recognition with Focusing and Blurring” http://cs231n.stanford.edu/reports/2016/pdfs/259_Report.pdf
[70] Ibid.
[71] Anatoly Nichvoloda, “‘Hierarchical Bokeh’ Theory of Attention,” in Dena Shottenkirk, Manuel Curado, Steven S. Gouveia eds, Perception, Cognition, and Aesthetics (New York and London: Routledge, 2019), 85-105.
[72] Joseph Ferenbok, “Configuring the Face as a Technology of Citizenship: Biometrics, Surveillance and the Facialization of Institutional Identity.” In Kalantzis-Cope P., Gherab-Martín K. eds, Emerging Digital Spaces in Contemporary Society (London: Palgrave Macmillan, 2010), 126-127, 127. https://doi.org/10.1057/9780230299047_21.
[73] Joy Buolamwini and Timnit Gebru, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” Proceedings of Machine Learning Research 81 (2018): 1-15.
[74] Ruha Benjamin, Race After Technology: Abolitionist Tools for the New Jim Code, (Cambridge, UK and Malden, MA: Polity, 2019).
Bibliography
“7 Best Camera Lenses for Bokeh Photography.” Adorama.com, (May 21, 2020). https://www.adorama.com/alc/5-best-camera-lenses-for-bokeh-photography/
AffectNet. http://mohammadmahoor.com/affectnet/
Arago, Dominique Francois. “Report.” In Alan Trachtenberg, ed. Classic Essays on Photography. New Haven, Conn: Leetes Island Books: 1981: 15-26.
Arthur, Charles. “iPhone 4 unveiled by Apple.” The Guardian, (June 7, 2010) https://www.theguardian.com/technology/2010/jun/07/iphone-4-apple-wwdc
Azar, Mitra. “Algorithmic Facial Image: Regimes of Truth and Datafication.” A Peer-Reviewed Journal About APRJA, 7(1) (2018): 27-35. https://doi.org/10.7146/aprja.v7i1.115062
Batchen. Geoffrey. Each Wild Idea: writing photography history. Cambridge, MA: MIT Press, 2002.
Byford, Sam. “How AI is Changing Photography.” The Verge (Jan 31, 2019). https://www.theverge.com/2019/1/31/18203363/ai-artificial-intelligence-photography-google-photos-apple-huawei
Benjamin, Ruha. Race After Technology: Abolitionist Tools for the New Jim Code. Cambridge, UK and Malden, MA: Polity, 2019.
Benjamin, Walter. “Short History of Photography” (1931). Artforum. https://www.artforum.com/print/197702/walter-benjamin-s-short-history-of-photography-36010
Bertillon. Alphonse. Ethnographie moderne: les races sauvages. Paris: G. Masson, 1883. https://gallica.bnf.fr/ark:/12148/bpt6k104250m/texteBrut
Brandom, Russel. “The five biggest questions about Apple’s new facial recognition system.” The Verge. (September 12, 2017). https://www.theverge.com/2017/9/12/16298156/apple-iphone-x-face-id-security-privacy-police-unlock
Buolamwini, Joy and Gebru, Timnit. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” Proceedings of Machine Learning Research 81 (2018): 1-15.
Buriro, Attaullah and Akhtar, Zahid and Crispo, Bruno and Del Frari, Fillipo. “Age, Gender and Operating-Hand Estimation on Smart Mobile Devices.” 2016 International Conference of the Biometrics Special Interest Group (BIOSIG) (Darmstadt, Germany, 2016): 1-5, DOI: 10.1109/BIOSIG.2016.7736910
Chiang, Holly and Ge, Yifan and Wo, Connie. “Multiple Object Recognition with Focusing and Blurring” http://cs231n.stanford.edu/reports/2016/pdfs/259_Report.pdf
Crawford, Kate and Paglen, Trevor. Excavating AI: The politics of images in machine learning training. https://www.excavating.ai
“Data Collection and Analysis.” Selfiecity, http://selfiecity.net/#dataset
Day, Elizabeth. “How selfies became a global phenomenon.” The Guardian, (July 13, 2013). https://www.theguardian.com/technology/2013/jul/14/how-selfies-became-a-global-phenomenon
Didi-Huberman, Georges. Invention of Hysteria: Charcot and the Photographic Iconography of the Salpêtrière. Cambridge, MA: MIT Press, 2004.
Eastlake, Lady Elizabeth. “Photography.” In Alan Trachtenberg, ed. Classic Essays on Photography. New Haven, Conn: Leetes Island Books: 1981: 39-69.
Eidinger, Eran and Enbar, Roee and Hassner, Tal. Age and Gender Estimation of Unfiltered Faces. Transactions on Information Forensics and Security (IEEE-TIFS), Special Issue on Facial Biometrics in the Wild, Volume 9.12, (Dec, 2014): 2170 – 2179.
Farocki, Harun. “Phantom Images.” Public 29 (2004). https://public.journals.yorku.ca/index.php/public/article/view/30354
Ferenbok, Joseph. “Configuring the Face as a Technology of Citizenship: Biometrics, Surveillance and the Facialization of Institutional Identity.” In: Kalantzis-Cope P., Gherab-Martín K. eds, Emerging Digital Spaces in Contemporary Society. London: Palgrave Macmillan, 2010, 126-127. https://doi.org/10.1057/9780230299047_21
Galton, Francis. Narrative of an Explorer in Tropical South Africa. London: John Murray, 1853. https://galton.org/books/south-west-africa/galton-1853-travels-in-south-africa-1up-linked.pdf
Gilman, Sander L. ed. The Face of Madness: Hugh Diamond and the Origin of Psychiatric Photography. Brattleboro, VT: Echo Point Books and Media, 2014.
Gunthert, André. “The Consecration of the Selfie: A Cultural History.” In Julia Eckel, Jens Ruchatz, and Sabine Wirth, eds., Exploring the Selfie: Historical, Theoretical, and Analytical Approaches to Digital Self-Photography. Palgrave Macmillan, 2018.
Hampton, Claire. “#nomakeupselfies: The Face of Hashtag Slacktivism.” Networking Knowledge: Journal of the MeCCSA Postgraduate Network 8(6) (2015). https://doi.org/10.31165/nk.2015.86.406
Hansen, Mark B.N. “Affect as medium, or the ‘digital-facial-image’.” Journal of Visual Culture, 2(2) (2003): 206-228. https://doi.org/10.1177/14704129030022004
Hassner, Tal. The OUI-Adience: Face Image Project. https://talhassner.github.io/home/projects/Adience/Adience-data.html
Heckman, Christoffer. “AI can now read emotions – should it?” The Conversation (January 8, 2020). https://theconversation.com/ai-can-now-read-emotions-should-it-128988
Hernandez, Carlos. “Lens Blur in the new Google Camera app.” Google AI Blog (April 16, 2014). https://ai.googleblog.com/2014/04/lens-blur-in-new-google-camera-app.html
Ignatov, Andrey et al. “AIM 2020 Challenge for Rendering Realistic Bokeh.” ArXiv, (2020), https://arxiv.org/abs/2011.04988
Kasra, Mona. “Digital-networked images as personal acts of political expression: New categories for meaning formation,” Media and Communication, 5.4 (2017): 51–64, https://doi.org/10.17645/mac.v5i4.1065
Kingslake, Rudolph. A History of the Photographic Lens. Boston, Academic Press, 1989.
Kingston, Grace and Goddard, Michael. “The Aesthetic Paradoxes of Visualizing the Networked Image,” Contemporary Arts and Cultures (2017). https://contemporaryarts.mit.edu/pub/aestheticparadoxes
Kosinski, Michal. “Facial recognition technology can expose political orientation from naturalistic facial images.” Scientific Reports 11.100 (2021). https://doi.org/10.1038/s41598-020-79310-1
Lauer, Josh. “Surveillance History and the History of New Media: An Evidential Paradigm.” New Media & Society 14(4) (June 2012): 566–82. https://doi.org/10.1177/1461444811420986
Levi, Gil and Hassner, Tal. Age and Gender Classification Using Convolutional Neural Networks. IEEE Workshop on Analysis and Modeling of Faces and Gestures (AMFG), at the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Boston, June 2015. https://talhassner.github.io/home/projects/cnn_agegender/CVPR2015_CNN_AgeGenderEstimation.pdf
Lüders, Marika and Prøitz, Lin and Rasmussen, Terje. “Emerging personal media genres.” New Media & Society 12 (2010): 947–963.
Lupton, Deborah. The Quantified Self. Cambridge UK, and Medford, MA: Polity, 2016.
Manovich, Lev. The Language of New Media. Cambridge, MA: MIT Press, 2002.
Metcalf, Jacob and Crawford, Kate. “Where are human subjects in Big Data research? The emerging ethics divide,” Big Data & Society (January–June 2016), 1-14. https://doi.org/10.1177/2053951716650211
Mirzoeff, Nicholas. How to See the World: An Introduction to Images, from Self-Portraits to Selfies, Maps to Movies, and More. New York: Basic Books, 2016.
Friedrich, Nadine et al. “Faking it: Simulating background blur in portrait photography using a coarse depth map estimation from a single image.” WSCG 2016: short communications proceedings: The 24th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision 2016 in co-operation with EUROGRAPHICS: University of West Bohemia, Plzen, Czech Republic May 30 – June 3 2016, (2016): 17-23. https://dspace5.zcu.cz/bitstream/11025/29683/1/Friedrich.pdf
Nichvoloda, Anatoly. “‘Hierarchical Bokeh’ Theory of Attention.” In Dena Shottenkirk, Manuel Curado, Steven S. Gouveia eds, Perception, Cognition, and Aesthetics. New York and London: Routledge, 2019: 85-105.
Paglen, Trevor. “Invisible Images: Your Pictures Are Looking at You.” Architectural Design 89 (2019): 22-27. https://doi.org/10.1002/ad.2383
—. “Operational Images.” e-flux 59 (November 2014). https://www.e-flux.com/journal/59/61130/operational-images/
Phu, Thy and Steer, Linda M. “Introduction.” Photography and Culture 2(3) (2019): 235-239, DOI: 10.2752/175145109X12532077132194
Raphael, Angulu and Tapamo, Jules R. and Adewumi, Adremi O. “Age estimation via face images: a survey.” J Image Video Proc, 42 (2018). https://doi.org/10.1186/s13640-018-0278-6
Ranjan, Rajeev and Patel, Vishal and others, “HyperFace: A Deep Multi-Task Learning Framework for Face Detection, Landmark Localization, Pose Estimation, and Gender Recognition.” IEEE Transactions on Pattern Analysis and Machine Intelligence, 41.1 (January 1, 2017): 121-135. DOI: 10.1109/TPAMI.2017.2781233
Rattani, Ajita and Derakhshani, Reza and Ross, Arun eds. Selfie Biometrics. Advances in Computer Vision and Pattern Recognition. Springer, Cham, 2019.
–. “Introduction to Selfie Biometrics.” in Rattani A., Derakhshani R., Ross A. eds, Selfie Biometrics. Advances in Computer Vision and Pattern Recognition. Springer, Cham. 2019. https://doi.org/10.1007/978-3-030-26972-2_1
Rattani, Ajita and Agrawal, Mudit. “Soft-Biometric Attributes from Selfie Images.” in Rattani A., Derakhshani R., Ross A. eds, Selfie Biometrics. Advances in Computer Vision and Pattern Recognition. Springer, Cham. 2019. https://doi.org/10.1007/978-3-030-26972-2_1
Rubinstein, Daniel and Sluis, Katrina. “The digital image in photographic culture: algorithmic photography and the crisis in representation.” In Martin Lister, ed. The photographic image in digital culture. London and New York: Routledge, 2013: 22-40.
Saxena, Avita and Khanna, Ashish and Gupta, Deepak. “Emotion Recognition and Detection Methods: A Comprehensive Survey.” Journal of Artificial Intelligence and Systems, 2 (2020), 53-79. https://doi.org/10.33969/AIS.2020.21005
Sekula, Allan. “The Body and the Archive.” October 39 (Winter 1986): 3-64.
“Selfie.” Oxford Learner’s Dictionaries. https://www.oxfordlearnersdictionaries.com/us/definition/english/selfie
Selfie Data Set, https://www.crcv.ucf.edu/data/Selfie/
Shi, Yichun and Jain, Anil K. “DocFace+: ID Document to Selfie Matching.” IEEE Transactions on Biometrics, Behavior, and Identity Science 1.1 (January 2019): 56-67. DOI: 10.1109/TBIOM.2019.2897807
Todorov, Alexander and Olivola, Christopher Y. and others, “Social Attributions from Faces: Determinants, Consequences, Accuracy, and Functional Significance.” Annual Review of Psychology, 66, (January, 2015): 519-545. https://doi.org/10.1146/annurev-psych-113011-143831
Valind, Erik. Portrait Photography: From Snapshots to Great Shots. Peachpit Press, 2014.
Wadhwa, Neal et al. “Synthetic depth-of-field with a single-camera mobile phone.” ACM Transactions on Graphics 37, no. 4, Article 64 (July 2018). https://dl.acm.org/doi/10.1145/3197517.3201329
Wang, Y. and M. Kosinski, “Deep neural networks are more accurate than humans at detecting sexual orientation from facial images.” Journal of Personality and Social Psychology, 114.2 (2018), 246-257. https://doi.org/10.1037/pspa0000098
Author Biography
Dr. Stefka D. Hristova is an Associate Professor of Digital Media at Michigan Technological University. She holds a PhD in Visual Studies with emphasis on Critical Theory from the University of California, Irvine. Her research analyses digital and algorithmic visual culture. Hristova’s work has been published in journals such as Transnational Subjects Journal, Visual Anthropology, Radical History Review, TripleC, Surveillance and Security, Interstitial, Cultural Studies, Transformations. She was a NEH Summer Scholar for “Material Maps In the Digital Age” seminar in 2019. Hristova is the lead editor for Algorithmic Culture: How Big Data and Artificial Intelligence are Transforming Everyday Life, Lexington Books, 2021.