UNREADABILITY AND BEING READ

Excerpt from the publication “SEEING, NAMING, KNOWING”

Nora N. Khan

How do we make sense of reading images that aren’t even meant to be read by us? If a machine is reading a machine-produced image, what theoretical concepts can we use to describe what is being represented? What critical visual terms can we use to describe the algorithmically generated image? As AI’s evolution moves from supervised to unsupervised learning, the process of naming is becoming less sensible and intentionally less readable to people. It is hard to know what one is looking at, let alone to subject it to loving and rigorous critique. How do we describe a kind of seeing that reads much of the digital evidence of our lives? How do we even critique an eye that can “recall the faces of billions of people,” as Paglen points out? (He was then discussing Facebook’s DeepFace, which in the ancient days of 2014 had an accuracy of “97.35% on the Labeled Faces in the Wild dataset,” meaning it “closely approache[d] human-level performance.”)1

 The range of image datasets that AI can now train on is dizzying: all the world’s plants, cars, faces, dogs, colors. In a famous machine learning training set, where networks once struggled to discern a fox from the field behind it, the same fox can now be separated out and described by its age, weight, and species. The best machine learning systems can tell what time of day it was in the field, describe the fox’s markings, and tell us what other companions are hiding in the field behind it. Neural network papers give a sense of the many painstaking iterations needed to refine a vision system. Each year, the ImageNet Large Scale Visual Recognition Challenge asks competitors to train a neural network to try to identify objects within an image—like separating foxes from a grassy knoll. Each year these competing models classify images into 1,000 different typologies with more precision.2
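
What a single pass of this classification looks like in code is almost mundane. Below is a minimal sketch, assuming the torchvision library, a standard pretrained network, and a local photograph whose filename is purely illustrative:

```python
# Minimal sketch: classify one image into the 1,000 ImageNet categories
# using an off-the-shelf pretrained network. The filename is hypothetical.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet channel statistics
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(pretrained=True)  # weights learned from the ImageNet training set
model.eval()

image = Image.open("field_fox.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)  # shape: [1, 3, 224, 224]

with torch.no_grad():
    logits = model(batch)                # one raw score per category
    probs = torch.softmax(logits, dim=1)

top5 = torch.topk(probs, k=5)
print(top5.indices[0].tolist())  # indices into the fixed 1,000-label vocabulary
print(top5.values[0].tolist())   # the model's confidence in each of its five guesses
```

Everything the system can “say” about the photograph is drawn from that closed list of one thousand names.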

 The rubric for evaluating these images as “successful” has two parts. The first is precision: is the image high resolution and easily readable? Does it “sharply represent” what we see? The other is the accuracy of tagging: naming what is there in terms as direct and clear as possible. The result of all this computational power is a very basic level of clarity: the big man is on a field, the fox is in a field under the sun. The amount of complexity it takes to get here is staggering, and there is something elegant in the process, as scholar Peli Grietzer captures in depth, revealing how we also once learned the field-ness of a field, the triangular-ness of triangular objects, the fox-ness of fox-like creatures.3 The process necessitates that images are boiled down to receptacles of assorted qualities that are isolated and determined to be significant. So vast and global is this effort that the computational production of this named reality appears as a truth.
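
The “accuracy” half of this rubric is usually scored by a blunt count. In the ImageNet challenge, the standard measure is top-5 error: an image counts as correctly named if its human-assigned label appears anywhere among the model’s five best guesses. A sketch of that standard definition (not a formula from this essay):

$$ \text{top-5 error} = \frac{1}{N} \sum_{i=1}^{N} \mathbf{1}\left[\, y_i \notin \{\hat{y}_{i,1}, \dots, \hat{y}_{i,5}\} \,\right] $$

Here $y_i$ is the label a human annotator gave image $i$, and $\hat{y}_{i,1}, \dots, \hat{y}_{i,5}$ are the model’s five highest-scoring categories. Whatever does not exist in the label list cannot be counted at all.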

If anyone can technically train a neural network, who gets to train the ones that organize our lives? Machine learning skips the jerky sorting and matching process that earlier vision recognition systems (from eight to ten years ago) undertook. It is a system that learns as we do, modeled after the structure of animal brains, in which neurons are layered. A machine learning system creates its own algorithms, rewriting them to more accurately identify patterns, as it learns from seeing the environment. It distributes this learning along a network of other machine nodes, each learning and competing.
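
A rough illustration of that layered structure: a network is a stack of simple transformations whose internal weights are adjusted against error, rather than a set of hand-written rules. The sketch below is schematic only; the layer sizes are arbitrary and stand in for no particular system named here.

```python
# Schematic sketch of a layered network: stacked "neurons" whose weights are
# tuned by gradient descent rather than written by hand. Layer sizes are arbitrary.
import torch
from torch import nn

classifier = nn.Sequential(
    nn.Linear(224 * 224 * 3, 512),  # input layer: a flattened RGB image
    nn.ReLU(),
    nn.Linear(512, 128),            # hidden layer
    nn.ReLU(),
    nn.Linear(128, 1000),           # output layer: one score per category
)

optimizer = torch.optim.SGD(classifier.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def training_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One update: compare predictions to the given names, then nudge every weight."""
    optimizer.zero_grad()
    predictions = classifier(images.flatten(start_dim=1))
    loss = loss_fn(predictions, labels)  # how wrong the current naming is
    loss.backward()                      # trace the error back through the layers
    optimizer.step()                     # adjust the weights slightly
    return loss.item()
```

The “learning” is nothing more than this loop repeated millions of times against whatever names the training set already contains.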

 We may look at images with our eyes, but our lives are shaped by a different kind of partial, broken seeing, one that posits accuracy and that is made continuously through relational, active, and emergent algorithms. In much of the popular literature on neural networks, they are posited as dreaming, or as imagining images. But we don’t solely “dream up” images in our mind from some thick, gooey subconscious—and neither do these networks. We actively generate images through our biases, our memories and histories, our styles of narrative, our traumas. And just as training sets also “reveal the historical, geographical, racial, and socio-economic positions of their trainers,” so do neural networks, seeing from the hilltop over the entire known world.4

Artists are tackling the gaps with humor. In Us, Aggregated (2017), artist Mimi Onuoha points out the absurdities of a search engine’s classifications by working backwards.5 She asks, “who has the agency to define who ‘we’ is?” She runs images from her personal family archives through Google’s reverse-image search algorithms, and then frames the resulting photographs according to their labels. In Us, Aggregated 2.0 (2018), she frames the many diverse, intimate photos that have been tagged with the basic label “girl.”6 In Machine Readable Hito (2017), Paglen worked with artist Hito Steyerl to make legible the machine learning processes that mark character, gender, and personality.7 They performed facial analysis on Steyerl’s many facial expressions. In many images where she is frowning, she is labeled as a man; in neutral or confused expressions, she is some percentage of female. The projects show how the standard of a good, right face can reify extant politics of visibility, and suggest what the system sees as the norm for gender, the norm for emotional expression.

 “Should we teach facial recognition technology about race?” reads a recent Wired headline.8 Every few months, a comparable strawman headline agonizes over how tenable a partial model of the world can be. Even in our most advanced technologies, the dumb fantasy of a world without race or difference or weird outliers persists. And the results are dumb and dumber: pictures of stoves with men at them are still labeled as “women.” More and more, the values—or willing blindness—placed into machine-learning technologies exacerbate their shortcomings. Software is trained to categorize at scale “to a high level of accuracy.” Note how that phrase, a high level of accuracy, becomes its own justification, despite the very best algorithms lacking the ability to use common sense, to form abstract concepts, or to refine their interpretation of the world.

There are countless examples of flawed programmatic bias embedded in fallaciously named “neutral” imaging processes. The most infamous might be Google’s 2015 “gorilla” PR disaster, in which photos of African-American employees and friends of Google employees were labeled as gorillas. Google responded by erasing the word “gorilla” entirely from Google Photos’ label library, such that its evolving image-recognition system, integrated increasingly across platforms, would not embarrass the corporation again.9 The underlying issue was simple: the training sets consisted mostly of white faces, because they were built by mostly white engineers.

 We interpret images poorly or well in part because of political or cultural imperatives that are either open or closed. Visual recognition systems reinforce the violence of typing according to the same imperatives. There is a clear technological imperative to ignore through partial seeing, to support a social narrative and a culture war. Every decision to name images becomes a profound ethical issue. While some engineers prefer political agnosticism, and would like their code to be thought of as written in isolation from the outside world, its social impacts are too profound. The eye cannot just dispense its choices and float on.

Machine-learning engineers and designers deploying their vision systems must account for their blind spots instead of gesturing at the machine and offloading responsibility. The claims that “we all bleed red,” that “we’re all members of the human race,” that one can be “blind to race and gender,” should be called what they are: simulations of supremacy, in which everyone loses.

 It’s time to ask whether the feel-good, individualist, techno-libertarian sentiments that allow the eye to shut off to the effects of its own seeing serve us as a culture. We must make a practice of actively naming the flaws embedded in bad seeing. We must take seemingly innocuous computational interpretations of photographs and digital images to be political and ethical acts. There need to be collaborative paths to a machinic naming that restores the dignity and complexity of the imaged and the imagined, with encoded sensitivity to context and historical bias, and an understanding of traditionally bad readings.

In this massive machine symbolic system, we must still try to read intelligently. The great literary critic N. Katherine Hayles calls for us to carefully consider nonvisual aspects along with the visual when examining how networked machines see. Hayles’s penchant for a “medium-specific criticism,” as Wendy Chun interprets it, means that we need to understand how a machine reads in order to critique it.10 We see how technological design flattens our identities even as it gives the illusion of perfect self-expression; we have looked at the strange categorization and typing of ourselves along parameters of affect and trustworthiness. It is not a surprise that technology created through centralized power has watered down a past promise. What we have is a banal, distributed corporate information-collection service running under the banner of intellectual inquiry. Its tendrils gather up our strong and weak desires to freeze us as consumers forever, progressive or not, Nazi or not.

 Paul Christiano of OpenAI, one of the most distinguished thinkers on the future possibilities of artificial intelligence, has written recently that the question of “which AI is a good successor” is one “of highest impact in moral philosophy right now.”11 Christiano does not shy away from what machines see, embracing their foreignness to our desires and needs, and their evolution into cognitive systems we understand less and less.

 Companies will not open their black boxes any time soon, though ethicists, journalists, and activists vigorously advocate for and shape the creation and deployment of AI towards more just and open frameworks, demanding accountability and transparency. Even if the black box stays closed, we do not need to willingly stay blind. We hold the responsibility of understanding the underlying ideology of a system that interprets images, and of fully grasping why it needs to pretend to be objective in order to function as a system.

 The machine-machine seeing described in this essay demands we draw on all the critical faculties of seeing we have developed through history and have at our disposal, while also acknowledging the crucial lacks in our critical visual language.

 On one hand, we must stay alert to automation bias, in which we begin to value information produced by machines over ambiguous human observation. If the world begins to affirm the vision of the simulation, faith in the machine eye overrides all. But we need ambiguous observation, doubts, backtracking, and revision. These are the qualities of careful thinking: not settling on a conclusion without revisiting one’s assumptions.

 I suggest we practice asking the same questions we might in critically evaluating art:

Is what I’m seeing justifiably named this way?

What frame has it been given?

Who decided on this frame?

What reasons do they have to frame it this way?

Is their frame valid, and why?

What assumptions about this subject are they relying upon?

What interest does this naming serve?

This is one step towards intelligent naming. This is where we might best intervene, to shift predominant attitudes and perspectives that shape virtual evidence and generate machine-machine knowledge. For truly nuanced naming of images of people, places, and things, we must practice breaking the loop, to consider and describe the likely frame and ideology being effected. Looking at dozens of personal family photos labeled “girl,” can we articulate everything that is lost in that tag? What happens if we do not give the narrative? Can this break for rhetorical imagination, consideration, and reevaluation be built into the machine learning process? For now, these systems are obsessed, understandably, with the empirical, but once the world is named, how will these systems evolve, as we have had to in the world?

If I see an image of a mugshot of a man of color online, and the tags “arrests,” “larceny,” and “battery,” I should take pause. Am I looking at a government site of arrest-record images? Is the image floating freely in a spam ad, the kind that populates less reputable sites, paired with a CLICK HERE TO SEE CRIME IN YOUR AREA, unmoored from context and narrative? Does the man look like an immigrant, like someone in my own family? Am I looking at an alt-right site filled with rabid xenophobic news about the border caravan and who is supposedly coming to get “us” up in remote, landlocked towns? How am I seeing this image? What thread did I follow to get here? How long do I linger on this image before moving on, and what did that lack of careful looking produce in my mind? What bias of my own was affirmed, and what was instantly dissonant? Was it easy to resist the urge to click, or did it feel hard?

When I have misread a representation—meaning, when I have hastily made a narrative about an image, a person, their presentation—I recognize that a mismatch has occurred between reality and my false virtual evidence. I had instantly decided that specific visual cues mean something certain or likely true about the internal life of a person, about their possibility, though I know how foolish that is in practice—and how painful it is to experience. In the world, we do this constantly, in hurtful and unjust—but ultimately revisable—ways. If I walk into a job interview disheveled, with holes in my clothes, the interviewer might assume both that I didn’t care about the job and that I am in some kind of distress. They may immediately assess me as not employable, no matter how fit I am for the job. I’m not fit for the mental work with holes in my clothes—this is a quick, dashed-off decision that we make an allowance for through a social understanding in which people who want jobs will dress the part.

Can we build machine vision to be critical of itself? Even as we learn to see alongside the machine, and understand its training sets, its classifications, its gestures, there must be more intervention points, in which corrections, adjustments, and refinements accounting for history, for context, for good reading of images, are made. There may be a fusion of the sensitivities and criticality we use for human visual interpretation with the language specific to machine vision. Machine learning can be made fairer, with rigorous checks for statistical parity to determine which groups or races are being classified incorrectly by the algorithmic eye.
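
A check for statistical parity can be as plain as comparing rates across groups. The sketch below assumes a table of model outputs with hypothetical column names; it is one possible audit, not a fix.

```python
# Minimal sketch of a statistical-parity check over model outputs.
# The columns "group", "label", and "prediction" are hypothetical.
import pandas as pd

def parity_report(df: pd.DataFrame) -> pd.DataFrame:
    """Per-group rate of positive classification and of misclassification."""
    return df.groupby("group").apply(
        lambda g: pd.Series({
            "positive_rate": (g["prediction"] == 1).mean(),        # how often the label is assigned
            "error_rate": (g["prediction"] != g["label"]).mean(),  # how often it is assigned wrongly
        })
    )

def parity_gap(report: pd.DataFrame) -> float:
    """Statistical parity difference: 0.0 means every group receives the label
    at the same rate; anything larger is a disparity to interrogate."""
    return report["positive_rate"].max() - report["positive_rate"].min()
```

Such a report can only surface a disparity; deciding what the disparity means, and what to do about it, remains the kind of critical reading argued for here.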

 But Paglen isn’t convinced. “It’s not just as simple as learning a different vocabulary,” he notes. “Formal concepts contain epistemological assumptions, which in turn have ethical consequences. The theoretical concepts we use to analyze visual culture are profoundly misleading when applied to the machinic landscape, producing distortions, vast blind spots, and wild misinterpretations.”12 In response, some suggest that what we need are better-tagged training sets of images, more accurate ones “without bias,” so that we will be seen perfectly, and will then be treated well.

 The drive to enact “algorithmic violence,” as Mimi Onuoha has termed it, is perhaps the most terrifying example of what we’re up against.13 An AI paper from two years ago suggests that we could figure out who is a criminal based on cheekbone height, eye size, and general facial structure. In other words, a criminal could be predicted, determined by a “type” of face—where eye size, nose structure, and other elements in a dataset of convicted criminals are extrapolated to form a model of what a criminal type is—in effect, a self-reinforcing loop in which the biases and limits of the dataset are not accounted for.

It seems a total fallacy that a computer vision algorithm would have no subjective weight or baggage. Even though we understand this claim to be impossible, it remains the most prevalent idea in technological development. A neural network, as magical and strange as it can seem, is always produced by biases, desires, interests, and bad readings, by creators and engineers who throw up their hands, with no regard for society, and say, “I only make the thing!” For a neural network to read an image “objectively,” it would have to not be made by human hands or run on historical data of any kind.

But the desire for a “perfect” dataset in which people are seen perfectly is misguided; when are we ever seen perfectly? Why can’t we demand this machine eye be better than our own occluded, hazy, partial, lazy seeing? Maybe it isn’t perfect seeing, but critical seeing that we need. Critical seeing requires constant negotiation. We negotiate incorrect or imprecise naming through revision of our own beliefs. When we see, we take in the “data-points” of an image: color, form, subject, position. We organize the information into a frame that we can understand.  

 Some of the more doom-and-gloom accounts of modern AI and vision recognition suggest all is lost; that we are victims of addictive neurobiological targeting tools, slavishly trained to obey a high-resolution display. Even as this new visual culture becomes more unwieldy, more insane, the sources of images more impossible to define, the ways they are marked unreachable, we are still supposed to evaluate our own judgments about the truth or reality of an image. In more humanist (and moralistic) veins of theory, seeing is always an ethical act: we have a deep responsibility for understanding how our interpretation of the information before us, physical or digital, produces the world.

Without doubt our cognitive capacity is being outstripped, and precisely for that reason, there is no better moment to reassert the value of critical seeing. We have evolved cognitively to be able to negotiate visual meanings, holding them lightly until we have contemplated and thought through the questions above. It is imperative to do so when looking at any image passed through machines. As this is already incredibly hard to do, we might need more flexible frameworks through which to evaluate the construct of machine vision and its suggestion of value and truth. We have to be more critical visual readers, because we are ultimately the bodies and lives being read. 

Recall how machine learning can be both supervised and unsupervised. Our own perception and meaning-making is similar to “unsupervised deep learning.” We too learn to make patterns out of the “data” of what we see, noting differences and similarities, confluences and comparisons, from one image to the next. In our comparison of images, we create narrative representations, a sense of the world, and a corpus of representations that we carry out into our life. But we are also built to grow in response to resistance, and to the harm we cause. Training sets—which form beliefs—might be put through this same provisional process, in which the choices of tags, simulation parameters, and mechanics across difference are open to revision. A final decision would be made only after a wider group of ethically minded stakeholders, literary scholars, and social scientists, hypothetically, compare and debate interpretations and frames.
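
A toy contrast between the two regimes, on invented data, might look like the sketch below; nothing in it models the systems discussed in this essay.

```python
# Toy contrast: supervised learning reproduces names already given by a trainer;
# unsupervised learning groups by similarity and leaves the naming to someone else.
# All feature values and labels here are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

features = np.random.rand(100, 4)            # stand-in "data points" of images
labels = (features[:, 0] > 0.5).astype(int)  # stand-in human-assigned names

# Supervised: learn to reproduce the given names.
supervised = LogisticRegression().fit(features, labels)

# Unsupervised: group by similarity alone, with no names supplied.
# The clusters acquire meaning only when someone later names them.
unsupervised = KMeans(n_clusters=2, n_init=10).fit(features)
print(unsupervised.labels_[:10])
```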

In Benjamin Hale’s short story “Don’t Worry Baby,” a woman, her child, and the child’s father leave—possibly escape—an anarchist commune in the ’70s.14 The story takes place on the plane ride back to the States. The woman accidentally takes a powerful hallucinogen slipped into a piece of chocolate by the cultish father of her child. He tells her to just ride it out. As she holds their baby in her lap, she begins to feel her perception softly morph and shift.

 What follows is a viscerally awful sequence, as her synapses flood with the drug: the father’s face disintegrates, the forms of other passengers in the claustrophobic, cigarette-smoke filled plane cabin fall away. She hears language as symbols, and sees faces as signs. She feels everything moving inside of her, from the cilia in her gut to how her veins move to help her pass milk into her child. Mid-flight, the child’s eyes reveal themselves as dilated. This is a total loss of control: the mother suffers through a hellish, speechless meltdown as she can no longer read her child’s face. It is locked far away, “in its own mind,” turned completely inward.

The story’s drama arises in part from the implied unraveling of the utopian order of the commune and its worldview, where each person had a sure role, a sure name, and a position within the tightly prescribed bounds of the social order. Plummeting through this psychological horror, the reader feels how tenuous our hold on reality is, how deeply tied it is to facial recognition and cognitive faith, how quickly a sense of safety is lost without it. One screwy, distorted face unpins the fabric. We see how closely allied seeing is to naming and knowing. We get the sense that this unmooring is also an opportunity; a face that is only partly readable can be a challenge for better reading. A better visual reading can expand our sense of possibility. This is of course the power of surreal images, which confound, defamiliarize, and shift the frame of what one assumes is true.

 Settling into partial comfort with unknowing is essential to our survival. We actually need to be able to create partial models of the world; very rarely do we have all of the information about the reality around us. The versioning of programming implies that constant revision and rewrites are essential, as in any language. It’s unclear whether machine learning as it is currently being designed—at the scale it is seeking—even has space for such “unknowing,” for provisional change of the dataset’s vigorous naming. It would seem that removing criticality is necessary for machine vision.

I return here to Detroit, a city that has been consistently abandoned, abused, and defunded. The most vulnerable, hovering right at 35% unemployment, are of course the demographic most affected by the green-light eyes of T.J. Eckleburg over the ruined cityscape. Project Green Light, combined with facial recognition software and license plate reading, means that a person with a suspended license can be arrested while walking into a pharmacy to get cough medicine.

PredPol is a company that sells software built on a predictive policing algorithm, which is itself based on an earthquake prediction algorithm. To predict crime, the software uses the same statistical modeling used to predict earthquakes, a method researchers have called too simple and too deeply flawed to be used. The company’s data scientist compares crime modeling to “self-excitation points” and posits that the forecast is made of “hard data,” objective and fair, allowing police to offload their decisions to police a red-outlined area onto “the machine.”15 The software does not take into account the most deeply unethical issues involved in policing: what the police’s predispositions to the red zone are, how the police already seek to penalize petty crime more in some neighborhoods than in others (“broken windows” policing), how they target and harm people of color more than others. PredPol masks its data input, which consists of flawed and deeply biased arrest records. In using supervised machine learning to send police out to the same areas, the model is, as Caroline Haskins reports, only predicting how an area will be policed, not how crime will occur.16
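
The earthquake model in question is a self-exciting point process, in which every recorded event raises the predicted intensity of near-future events nearby. A textbook form of the conditional intensity (a sketch of the general technique, not PredPol’s proprietary implementation) is:

$$ \lambda(t) \;=\; \mu \;+\; \sum_{t_i < t} \theta\, \omega\, e^{-\omega (t - t_i)} $$

where $\mu$ is a background rate, the sum runs over previously recorded events $t_i$, and $\theta$ and $\omega$ set how strongly and how briefly each event “excites” the forecast. Because the only inputs are prior recorded events, a neighborhood that is already heavily policed generates more recorded events and is, in turn, predicted to need more policing.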

 All this set aside, the police can now cite that the software’s heat map led them to where a crime might occur. The conceit of PredPol is almost beyond comprehension: that we can produce a predictive map of where crime is likely to occur by tracking “human excitation,” or excited movement (defined loosely), along city streets. This heat map, combined with facial recognition software that tries to guess at criminal facial structures, opens up a nightmarish realm of possible abuse, in which police are now shielded by the “lack of bias” of machine learning. This has been widely argued to be an example of technology used to wash away racially oppressive and violent tactics and mass surveillance.17

 Earlier this year, PredPol went a step further. They were funded by the military to “automate the classification of gang-related crimes,” using an old map of gang territory and previous criminal data, which is well known to be highly biased, anti-black, and in favor of the overstepping power of the police.18 The trained neural network “learned” to classify a gang affiliation, and a gang affiliation would add to sentencing time and fines, earning money for, say, the police department or county that decided to use it.19 At the conference presentation, Hau Chan, a junior co-author of the research study, was met with outrage from attendees. He stated “I’m just an engineer” in response to questions about the ethical implications of the research.20

 Most disturbing here is that the one mitigating ethical pause, the human factor—an actual person who would read and evaluate the narrative text that police had to collect about the supposed gang arrest itself—was deemed the most costly factor, and so was eliminated. The neural network, according to Ingrid Burrington and Ali Winston, would instead generate its own description of the crime, without a single human being reading it, to then be turned “into a mathematical vector and incorporated into a final prediction.”21
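
Turning a narrative into “a mathematical vector” can be as blunt as counting words. A sketch with scikit-learn, using invented strings rather than any actual police text:

```python
# Sketch: a free-text narrative becomes "a mathematical vector" as a bag of
# word counts, stripped of order, context, and authorship.
# The example strings are invented, not actual police narratives.
from sklearn.feature_extraction.text import CountVectorizer

narratives = [
    "subject observed near corner with two known associates",
    "report of disturbance, no arrests made",
]

vectorizer = CountVectorizer()
vectors = vectorizer.fit_transform(narratives)  # sparse matrix: documents x vocabulary

print(vectorizer.get_feature_names_out())  # the learned vocabulary
print(vectors.toarray())                   # each narrative reduced to word counts
```

Once reduced this way, the text can be “incorporated into a final prediction” without anyone ever reading it.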

 Not only would this AI-generated description be flawed and completely mismatched; the use of historical crime data means that future crimes could be described as gang involvement, making “algorithms of a false narrative that’s been created for people … the state defining people according to what they believe.”22 They’d then set the system to run without oversight, making a policing process that is already fraught with abuse as authoritarian as possible. Geographic bias encodes racial bias, and without talking to a single human being, a city is remapped and reformed. The god’s eye view comes right around, AI enforcing exactly what its makers want to see in the world.

 This is the likely future of AI seeing us at scale. Let’s look back to the green lights in Detroit. Once these $4,000 surveillance cameras are installed to channel data back to a Real Time Crime Center, the Detroit Police Department notes, it hardly has the manpower to monitor all the cameras all day long. The partial seeing of street surveillance is much the same seeing that some police practice while looking at members of marginalized, high-risk, high-poverty communities. A former chief of litigation at the Department of Justice’s Civil Rights Division has noted that Project Green Light is a “civil liberties nightmare,” in which money is poured out of communities and into these cameras, enforcing a further ‘hands-off’ approach to neighborhoods already desperately underserved, without adequate education, employment, or housing opportunities.23 Nightmare it may be, but the green lights were still installed in food deserts, at the most trafficked areas for staples for miles.

 Racial capitalism, weak machine learning, and algorithmic surveillance intersect to create a world that is not better seen, but less seen, less understood, more violent, and more occluded. In a nation where anti-blackness is and has been the institutional and cultural norm, and is an enormously lucrative position, hoping for the Green Light program to reprogram itself, to offer up a “provisional space” in which surveillance is somehow rethought in its methods and outcomes, seems facile. The system is working for them as is.

 So in place of civic and human investment are machine vision cameras, promising security and peace of mind for owners, creating a self-affirming loop. This might work in some cases, but it is overall more disastrous for the vulnerable, as it opens overpoliced communities to the specter of punishment at any possible moment. A population desperate for services, for good governance, is forced to see this devastating possible surveillance as a net positive over nothing at all.24 A freeze frame of a camera feed in an area with a “predilection to crime” can be pulled, a subject in that frame can be used as evidence, their misdeeds imagined or maybe real (a suspended license, say) but named as a likely crime. The photo is held as a prompt for punishment along an endless scale of time. Determined by the freeze frame, they are given a new fingerprint of who they are, of what kind of person they are likely to be.

 Abuses of machine vision are not hard to imagine. Think of immigration authorities with a camera feed on a wide city street in a southern California city, seeking out a general description of a six-foot-tall individual in jeans in a nighttime crowd. The reading of license plates forms the meat of databases, as the numbers are photographed, read, stored, and then sold to companies. Cameras sit in the foyers of banks, watching our expressions as we look at our bank accounts.

Looking up from the street to the camera, we begin to understand how our “individual realms of personal power,” to use Stewart Brand’s motto in the Whole Earth Catalog, have reflected a very narrow vision of the world back to us.25 Our knowing became channeled through violent, tired logics. But technological design has become so powerful that it can be used to persuade users to desire, and to strongly suggest that they should even want, a world totally made in their image, reflecting those desires.

It’s in the interest of this machine eye to create a plethora of life signatures for us. We become profiles—avatars—rich with recorded experiences, filling a demand to be legible for companies, municipal organizations, and bureaucracies to home in on. There’s no break between the constructed model that sits underneath the world and the reality that is produced.

We might ask: if AI is able to learn language on its own at levels of unprecedented mathematical complexity, then why shouldn’t we have better models of people, with added layers embedded for history and context, and for the drags placed on them, in simulations that account for trauma and oppression? Is it that we just can’t yet imagine a simulation that isn’t from a god’s eye view? Can we imagine that the machine eye might tumble from the top of the hill to the wild below, down to the ground and into it, that it might see beyond the flesh for each individual, unmoored, roving, seeing in every direction at once? What simulation of society would this eye produce, recognizing, seeing, and accounting for what is hard to model?

 If you were to fill out a god’s eye view of society, what bodies do you imagine in it? What do you look like in this simulation? What exactly is the model of your body moving through time? What does this simulation account for, or not account for? What hidden or not-sensible qualities are erased? What are you able to name easily? What are your blind spots? What should the machine eye visualize that you cannot? What is the simulation of America in which a person of color lived a full and healthy life? In which the mentally ill were cared for? In which debt slavery was abolished? In which racialized capitalism was acknowledged as real and accounted for in all aspects of society? What could technology look like if it were not built around efficiency alone, if history and narrative context were not costly aspects to be erased, but in fact essential to a complete simulation? How would our seeing, naming, and knowing change if the practice of technology were framed not so relentlessly as the objective observation of phenomena, but instead as the active creation of an illusion of an empirical, measurable, stable, and separate world?

 Future ideology in technology might abolish the idea of a tabula rasa as a starting point, which has failed us over and over again. We might experiment with a worldview that does not look down at the world from the hill. Instead of starting over, we insist on not being empty models. If we are to be predicted, let us be seen and represented and activated and simulated as difficult, complex, contradictory, opaque, as able to change, as composed of centuries of social movement and production, personal history, and creative, spontaneous, wild self-invention. Let us see back into our machine eye as it sees us, to try to determine whether it even imagines us living on in the future. If not, we must engineer worlds that produce a reality that is bearable, in which we are seen in full.

 

ENDNOTES

 

1. “DeepFace: Closing the Gap to Human-Level Performance in Face Verification.” Facebook AI Research, 2014.

2. “Large Scale Visual Recognition Challenge (ILSVRC).” ImageNet Large Scale Visual Recognition Challenge (ILSVRC), www.image-net.org/challenges/LSVRC/. Currently, Convolutional Neural Network (CNN) models do very well on visual recognition. Researchers check their work against ImageNet, with model iterations (Inception, on to Inception-v3) getting stronger and classification results improving each year. For a fantastic walkthrough of deep learning, see Christopher Olah’s “Conv Nets: A Modular Perspective,” https://colah.github.io/posts/2014-07-Conv-Nets-Modular/, easily one of the most readable primers, or check out https://www.learnopencv.com/deep-learning-based-object-detection-and-instance-segmentation-using-mask-r-cnn-in-opencv-python-c/.

3. For a stunning tour-de-force work by a literary theorist on auto-encoding, cognitive mapping, and the aesthetic complexity of machine learning, please see Ambient Meaning: Mood, Vibe, System, Peli Grietzer’s dissertation, written as a Harvard Comparative Literature student in 2017. The above is inspired by Grietzer’s discussion of children’s mental, geometric compressions: “We might think about a toddler who learns how to geometrically compress worldly things by learning to compress their geometrically idealized illustrations in a picture-book for children. Let m be the number of sunflowers, full moons, oranges, and apples that a toddler would need to contemplate in order to develop the cognitive schema of a circle, and n the number of geometrically idealized children-book illustrations of sunflowers, full moons, oranges, and apples that a toddler would need to contemplate in order to develop this same cognitive schema …” Found at: http://marul.ffst.hr/_bwillems/fymob/ambient.pdf

4. Paglen, Trevor. “Invisible Images (Your Pictures Are Looking at You).” The New Inquiry, Dec. 2016.

5. Mimi Onuoha, http://mimionuoha.com/us-aggregated/.

6. Mimi Onuoha, http://mimionuoha.com/us-aggregated-20.

7. Hu, Caitlin. “The Secret Images That AI Use to Make Sense of Humans.” Quartz, 1 Nov. 2017, qz.com/1103545/macarthur-genius-trevor-paglen-reveals-what-ai-sees-in-the-human-world/.

8. Chen, Sophia. “Should We Teach Facial Recognition Technology About Race?” Wired, Conde Nast, 15 Nov. 2017, www.wired.com/story/should-we-teach-facial-recognition-technology-about-race/.

9. Simonite, Tom. “When It Comes to Gorillas, Google Photos Remains Blind.” Wired, Conde Nast, 2018, www.wired.com/story/when-it-comes-to-gorillas-google-photos-remains-blind/.

10. Chun, Wendy Hui Kyong. Control and Freedom: Power and Paranoia in the Age of Fiber Optics. MIT Press, 2008, p. 17.

11. Christiano, Paul. “When Is Unaligned AI Morally Valuable?” AI Alignment, 3 May 2018, ai-alignment.com/sympathizing-with-ai-e11a4bf5ef6e?gi=f81396e3c39d.

12. Paglen, “Invisible Images.”

13. Onuoha, Mimi, “Notes on Algorithmic Violence,” found at: https://github.com/MimiOnuoha/On-Algorithmic-Violence.

14. Hale, Benjamin. “Don’t Worry Baby.” The Paris Review, 25 Oct. 2016, www.theparisreview.org/fiction/6434/dont-worry-baby-benjamin-hale.

15. Described in detail in: Haskins, Caroline. “Academics Confirm Major Predictive Policing Algorithm Is Fundamentally Flawed.” Motherboard, VICE, 14 Feb. 2019, motherboard.vice.com/en_us/article/xwbag4/academics-confirm-major-predictive-policing-algorithm-is-fundamentally-flawed.

16. Ibid.

17. For a deep, intensive survey of algorithmic policing and the politics of PredPol, please see Jackie Wang’s excellent book, Carceral Capitalism (MIT Press, 2018), a chapter of which is excerpted here: https://www.e-flux.com/journal/87/169043/this-is-a-story-about-nerds-and-cops-predpol-and-algorithmic-policing/

18. Winston, Ali, and Ingrid Burrington. “A Pioneer in Predictive Policing Is Starting a Troubling New Project.” The Verge, 26 Apr. 2018, www.theverge.com/2018/4/26/17285058/predictive-policing-predpol-pentagon-ai-racial-bias.

19. Ibid.

20. Hutson, Matthew. “Artificial Intelligence Could Identify Gang Crimes—and Ignite an Ethical Firestorm.” Science, AAAS, Feb. 2018, www.sciencemag.org/news/2018/02/artificial-intelligence-could-identify-gang-crimes-and-ignite-ethical-firestorm.

21. Winston, Ali, and Ingrid Burrington. “A Pioneer in Predictive Policing Is Starting a Troubling New Project.”

22. Ibid.

23. Jonathan Smith, quoted in: Gross, Allie. “Does Detroit’s Project Green Light Really Make the City Safer?”

24. Ibid.

25. A copy of the Whole Earth Catalog can be found at: http://www.wholeearth.com/issue/1010/article/196/the.purpose.of.the.whole.earth.catalog

 

Nora N. Khan
Nora N. Khan is a writer. She writes criticism on emerging issues within digital visual culture, experimental art and music practices, and the philosophy of emerging technology. She is a professor at RISD in Digital + Media, where she currently teaches MFA graduate students critical theory and artistic research, critical writing for artists and designers, and the history of digital media. She is a longtime editor at Rhizome, based at the New Museum in New York. She is currently editor of Prototype, the book of Google’s Artist and Machine Intelligence Group, forthcoming in spring of 2019. In 2020, she is The Shed’s first guest curator, organizing Manual Override, an exhibition featuring Lynn Hershman Leeson, Sondra Perry, Martine Syms, Morehshin Allahyari, and Simon Fujiwara.
Khan’s writing practice extends to a large range of artistic collaborations, which includes shows, performances, fiction for exhibitions, scripts, and sometimes, librettos. Last year, she collaborated with Sondra Perry, Caitlin Cherry, and American Artist to create A Wild Ass Beyond: ApocalypseRN at Performance Space, New York.
Her most recent work is a short book published by The Brooklyn Rail, titled Seeing, Naming, Knowing. She consistently publishes criticism in places like 4Columns, Art in America, Flash Art, Mousse, California Sunday, Spike Art, The Village Voice, and Rhizome. Last year, she wrote a small book with Steven Warwick, Fear Indexing the X-Files, published by Primary Information in New York. She has contributed essays and fiction to exhibitions held at Serpentine Galleries, Chisenhale Gallery, and the Venice Biennale, within books published by Koenig Press, Sternberg Press and Mousse.
Her writing practice has been supported by many awards over the last decade, including, most recently, a Critical Writing Grant given through the Visual Arts Foundation and the Crossed Purposes Foundation (2018), an Eyebeam Research Residency (2017), and a Thoma Foundation 2016 Arts Writing Award in Digital Art for an emerging arts writer.
General Interests: Understanding the grounding ideology beneath technology; how we manage to express joy and wonder, and maintain our creative energy, within the bounds of increasingly oppressive systems; how to consistently ground analysis of creative work in the social, political, and material realities that make the work possible; the ongoing play between affect, cognitive studies, and emerging technology; how new tech makes us feel, think, and relate to one another in new ways; the hope of digital, networked, and virtual systems that might just allow for a more open, learned, and compassionate world.