
Gender and Artificial Intelligence: a limited imagination or a non-superficial indicator?

The term “artificial intelligence” conjures up science-fiction images of androids and supercomputers, sentient spaceships and robots bent on destroying humanity. In actuality, artificial intelligence (AI) simply refers to a computer that uses complex programming algorithms to imitate human intelligence. Humans created AI in their own image, and this image is a gendered one. Humans have created, and then provided algorithms for, AI to imitate gendered intelligences. The concept behind AI is one of passing, of performing gender in such a way that these machines are perceived as human. Early concepts of AI were gendered, and researchers today persist in gendering it despite AI’s potential to move beyond a gendered system.

“Artificial intelligence” is a broad term that encompasses many different projects, each mimicking human intelligence to a different extent and in a different way. “Strong AI” takes the form of multi-billion-dollar projects that attempt to nail down every aspect of human intelligence. “Weak AI,” which attempts to imitate humans in one specific way, is much more prevalent. Ariane, the online “girlfriend” that SarahLeia introduced in our panel of imaginary characters, is one manifestation of weak AI, as are chatterbot programs that interact with a human through a chat client.
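Weak AI of the chatterbot kind can be surprisingly simple under the hood. The sketch below is a minimal rule-based chatterbot in Python, purely illustrative (the keywords and replies are invented here, not taken from Ariane or any real program): it imitates conversation through keyword matching rather than through anything like understanding.

```python
import random

# Illustrative keyword -> canned-reply rules. A real chatterbot would have
# many more rules, but the principle is the same: match, then recite.
RULES = {
    "hello": ["Hi there!", "Hello!"],
    "how are you": ["I'm doing well, thanks for asking."],
    "your name": ["I'm just a simple chatterbot."],
}
# Fallbacks that deflect when nothing matches, keeping the conversation going.
DEFAULT_REPLIES = ["Tell me more.", "Why do you say that?"]

def respond(message: str) -> str:
    """Return a canned reply for the first matching keyword, else a fallback."""
    text = message.lower()
    for keyword, replies in RULES.items():
        if keyword in text:
            return random.choice(replies)
    return random.choice(DEFAULT_REPLIES)
```

The illusion of intelligence here is shallow, which is precisely the point: the program performs conversational competence without possessing it.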

AI is a series of projects, each with a specific goal. Mimicking gendered intelligence is the primary goal of some AI, such as Ariane. Other AI are programmed to act gendered, even if their whole existence isn’t based on gendering; chatterbots, for example, are sometimes supposed to act like a specific gender. On the strong AI end of the spectrum, the Cyc project is working on recognizing and mimicking different models of gender (Adam 86). Still other AI imitates aspects of human intelligence not related to gender, although the very method of its knowing may still be gendered.

One useful model for measuring the success of AI’s imitation is the Turing Test, first proposed in 1950[1]. The Turing Test is an “imitation game” (Turing 433) in which an AI tries to respond to questions from a human interrogator in such a way that the AI is indistinguishable from a human subject. In essence, the AI passes for a human (Douglass). This “imitation game” originally arose from a gendered consideration, not a technological one. In his paper, Turing first proposes a test in which the questioner is trying to tell the difference between a man and a woman. In essence, the test asks whether a person can pass for the opposite gender (Douglass). Turing then goes on to propose that this same game could be played with an AI and a person. Turing links the male/female divide to the human/machine divide, implying that both divides are a matter of algorithms of passing and performance.
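Turing’s protocol can be sketched loosely in code. Everything below is a stand-in for illustration (the respondents and the interrogator’s heuristic are invented for this sketch, not drawn from Turing’s paper): an interrogator questions two hidden respondents, labeled only A and B, and must guess which is the machine.

```python
def imitation_game(questions, respondent_a, respondent_b):
    """Collect each hidden respondent's answers to the interrogator's
    questions. The interrogator sees only the labels A and B."""
    return {
        "A": [respondent_a(q) for q in questions],
        "B": [respondent_b(q) for q in questions],
    }

def interrogator_guess(transcript):
    """A naive interrogator heuristic (invented here for illustration):
    guess that the respondent with the shorter answers is the machine."""
    length_a = sum(len(answer) for answer in transcript["A"])
    length_b = sum(len(answer) for answer in transcript["B"])
    return "A" if length_a < length_b else "B"

# Example respondents, both stand-ins.
def human_respondent(question):
    return "Hmm, I would have to think about that for a while."

def machine_respondent(question):
    return "Unknown."
```

Here the machine’s terse answers give it away; a machine that imitated the human’s style would reduce the interrogator to guessing, which is exactly the sense in which Turing’s machine “passes.”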

AI is attempting to turn gender into an algorithm, which implies that the human body—the source of that algorithm—is some sort of “meat machine” (McCorduck 85) capable of computing gender. We have talked in the course about how the biological body itself can be considered a technology (cf. Natasha’s post about DNA as code (DNA) and Susan Stryker’s quote about the “flesh as a medium, with all identifications technologized” (Stryker)). On the blog, Natasha, Ryan, and I discussed the difficulty, and perhaps impossibility, of determining a “computational rule for gender.” As Natasha wrote, it may be theoretically possible to find such a rule but practically impossible, since the interactions between brain, mind, and environment are so complex: there are so many “crazy details” (Modelling) that designing such an algorithm is doomed to failure.

What is the value of gendering AI? Circuit boards and command lines have no gender. AI itself has no set agenda, no drive to be gendered. So why do humans insist that AI perform gender? It could be a matter of habituation: we are so accustomed to existing in a gendered space that we feel the need to gender what we create. If this is the case, then the failure to create non-gendered machines stems from the limits of our own imagination, not from gender being an inherent property of human intelligence.

On the other hand, gender could be necessary to replicate and perform human intelligence. We have spent a lot of time in this course considering gender as a culturally constructed product, as a collection of “superficial indicators.” Often, there has been the implication that gender is just a construct, and that constructs are somehow less valid or less important than that which is not constructed. This may not be the case; gender could be a fundamental construct. It could be a “non-superficial” indicator (Ryan) that is built into our minds and our intelligences. If this is the case, then our inability to construct non-gendered artificial intelligence may be because it is impossible to have a non-gendered human intelligence.

From another perspective, people may have failed to create non-gendered AI because the very process of its creation may be fundamentally engendering. The algorithm for gender may be impossible to find or impossible to apply, but what if the programming language in which people would write the algorithm is itself gendered? From Alison Adam’s perspective, the artificial computer languages in which (mostly male) researchers have written AI bind the creation to a masculinely constructed logic (109). This then becomes a matter of being rather than performing: whether or not AI can pass as male is irrelevant, since AI is male. Adam approaches the engendering process from a linguistic perspective and thus focuses almost entirely on the specific symbolic gendering of language.

If language is indeed so rigid that it is always gendered, and if gender is a fundamental indicator of human intelligence, then our failure to create non-gendered machines may be inevitable. And, since the inscription process is bi-directional, since machines also inform human gender, this would suggest that Haraway’s post-gender cyborg is an unattainable goal. In contrast, if the failure is due to the limits of human imagination, exposure to AI could stretch our imagination, stretch our conceptions of gender, and eventually aid in the creation of a cyborgized human. But perhaps this binary is false. Perhaps people can simultaneously fail and succeed at creating a non-gendered AI. Perhaps we are creating a differently-gendered AI that can inform our understanding of both gendered and non-gendered versions. Circuit boards and neural networks could merge and partially gender one another without achieving a complete or absent gendering.

Works Cited

Adam, Alison. Artificial Knowing: Gender and the thinking machine. New York: Routledge, 1998.

Douglass, Jeremy. “Machine Writing and the Turing Test.” ENGL236:Hyperliterature. University of California, Santa Barbara English Department, Fall 2001. 5 March 2009. <http://www.english.ucsb.edu/grad/student-pages/jdouglass/coursework/hyperliterature/turing/>

Hodges, Andrew. “What Did Alan Turing Mean by ‘Machine’?” The Mechanical Mind in History. Eds. Philip Husbands, Owen Holland, Michael Wheeler. Massachusetts: MIT Press, 2008. 75-90.

McCorduck, Pamela. Machines Who Think: A personal inquiry into the history and prospects of artificial intelligence. Massachusetts: A K Peters, Ltd., 2004.

Natasha. “Modelling Gender.” Gender and Technology Spring 2009. 15 February 2009. 5 March 2009. <http://gandt.blogs.brynmawr.edu/2009/02/15/modelling-gender/>

Natasha. “DNA Technology (and Decisions of Passing).” Gender and Technology Spring 2009. 11 February 2009. 5 March 2009. <http://gandt.blogs.brynmawr.edu/2009/02/11/dna-technology-and-decisions-of-passing/>

Ryan. Comment on “Apparently I’m a Man on the Internet?”. Roisin Foley. Gender and Technology Spring 2009. 15 February 2009. 5 March 2009. <http://gandt.blogs.brynmawr.edu/2009/02/15/apparently-im-a-man-on-the-internet/>

Stryker, Susan. Quote in “Notes Towards Day 12: Collective Panel #2”. Gender and Technology Spring 2009. 5 March 2009. <http://gandt.blogs.brynmawr.edu/class-notes/notes-towards-day-12-collective-panel-2/>

Turing, Alan. “Computing Machinery and Intelligence.” Mind. 59.236(1950): 433-460.


[1] The term “artificial intelligence” did not exist in Turing’s lifetime, but the Turing Test is widely considered to be a test of artificial intelligence (Hodges 75).

One Response
  1. Anne Dalke
    March 16, 2009

    Rebecca–

    As with your last paper, which used the concept of ownership to think about our relationship to our bodies, this one is philosophical, using the concept of imitation to think about intelligence. So I guess my first question here parallels the first one I asked last month: why imitation as an organizing idea? What does that concept get you, and how is it limiting, in thinking about what you want to think about: what it means to be/to have/to demonstrate intelligence? Why “imitate,” or copy, rather than perform? Can one perform without imitating? Must the activity, in other words, be one of passing?

    Once past the question of these framing terms, you get into even more intriguing, complex territory. Most striking to me, as you work your way through a variety of possible reasons we haven’t been able to turn gender into an algorithm, a computational rule, is the suggestion that our “very method of knowing may be gendered,” that rather than being the “superficial indicator” we’ve made it in this class, it could be a “fundamental construct”: “the very process of creation may be fundamentally engendering.” It thus “becomes a matter of being rather than performing.”

    You don’t let yourself get trapped in that possibility, but end w/ the hopeful possibility that we might create an artificial intelligence that is “differently-gendered”—but I’m still stuck w/ this possibility of the fundament. It made me gasp when I first read it, but now I’m having second, philosophical thoughts, about this whole matter of “fundamentalism.”

    In a course I taught last year on “Emerging Genres” (and again in one I’m doing this semester on The Story of Evolution and the Evolution of Stories) we talked @ length about the different kinds of stories—“non-narrative foundational, narrative foundational, emergence, anti-stories”– we tell to make sense of the world. Once we accept that anything is “foundational,” or “fundamental”—that the “past is the determinant of and hence best guide to the future” (rather than “the grist from which as yet unknown futures can be shaped”) then change really isn’t possible, is it? How would the story you tell here change, if you didn’t accept the possibility of there being anything “essential” or “fundamental” about gender, humanness, or intelligence?
