The Cyborg Memory: The Brain, the Computer, and the Self
Memory gives us the ability to store information about ourselves: our pasts, our present, our hopes for the future. It is an integral part of the construction of the self. Indeed, it may be impossible to construct a self without memory, without any information about who one was and who one is. Another kind of memory constructs a different entity, the computer. Computers rely on pieces of their hardware, such as RAM and hard drives, to store the 1's and 0's that make up all of their files and all of their programs. Although these two types of memory seem distinct in that one forms the human self and the other forms an electronic computer, it is not that simple. The line between human and machine is blurring in ways as theoretical as the Harawayian cyborg and as concrete as the man who embedded a flash drive in his prosthetic finger. Cyborgs, mixes of the organic and the inorganic, manifest in multiple ways in our culture and in our imaginations, and cyborgs, just like other entities, have memory. This cyborg memory blurs the boundaries between neurons and transistors, creating a contradictory existence for the self it helps define.
In order to best explore memory, the self, and the blurring of boundaries between the human and the machine, basic structural understandings of the brain and the computer are necessary. Understanding this dichotomy of the categorical human and computer allows for its dissection and recombination.
The central nervous system, composed of the brain and the spinal cord, directs the body's actions and processes information from the external environment. The brain is organized hierarchically, with more complex functioning occurring in the cortex, while the midbrain and hindbrain handle basic bodily functions. At the most fundamental level, all of the brain's communications occur through neurons. There are three specialized types of neurons: sensory neurons, which receive information from the environment; motor neurons, which send directions to other parts of the body; and interneurons, which facilitate communication between the other types of neurons.
Neurons communicate electrochemically, using both electrical signals and the release of chemicals. An action potential occurs when a charge builds up in a neuron and then crosses a certain threshold. This creates a chain reaction in which the charge travels the length of the neuron and triggers the release of chemicals, which affect neighboring neurons. Action potentials are binary events; they either occur or they do not. The subtlety of communication lies in the frequency of action potentials: a stronger signal is communicated by the neuron firing more rapidly.
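To make this all-or-nothing firing and rate coding concrete, the following minimal sketch (in Python) simulates a neuron-like accumulator. It is an illustration only; the threshold, step count, and input strengths are arbitrary values chosen for demonstration, not biological measurements.

```python
# A minimal sketch of the "all-or-nothing" action potential and rate coding
# described above. This is an illustration, not a biophysical model: the
# threshold, number of steps, and input values are arbitrary.

def count_spikes(input_strength, threshold=1.0, steps=100):
    """Accumulate charge each step; fire (a binary event) when the
    threshold is crossed, then reset. Returns the number of spikes."""
    charge = 0.0
    spikes = 0
    for _ in range(steps):
        charge += input_strength          # charge builds up in the neuron
        if charge >= threshold:           # action potential: all or nothing
            spikes += 1
            charge = 0.0                  # reset after firing
    return spikes

# A stronger signal is expressed as more frequent firing, not a "bigger" spike.
print(count_spikes(0.05))   # weak input  -> few spikes
print(count_spikes(0.25))   # strong input -> many spikes
```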
The computer is composed of many different parts, each with a specialized function. The power supply, sound card, graphics card, and wireless card play important roles in the existence of the computer, but it is the central processing unit (CPU) and memory that are most relevant to this conversation. Computers are organized hierarchically, with the CPU at the head. The CPU is in charge of organizing the computer; thus, it both receives information and responds to that information. It receives bits of data from either the memory or from some external device, such as a sensor, another computer, or a human. It then processes those data and responds with some action, such as changing the pixels on the screen or contacting another part of the computer.
Everything in the computer is electrical, and the information that the CPU processes comes in long strings of 1’s and 0’s, with each 1 representing a bit that is charged, or “on”, and each 0 representing a bit that is not charged, or “off”. This binary communication is used both between parts of the computer, such as when the wireless card sends data to the CPU, and between the computer and some external entity, such as when the computer sends a file to another computer over a network connection.
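A brief illustration of this binary representation, assuming nothing beyond standard Python formatting: the same strings of 1's and 0's underlie a number, a letter, and a short message.

```python
# A small illustration of the binary representation described above:
# every piece of data the CPU handles reduces to a string of 1's and 0's.

number = 42
letter = "A"

print(format(number, "08b"))        # 00101010 : the integer 42 as 8 bits
print(format(ord(letter), "08b"))   # 01000001 : the character "A" as 8 bits

# A short message becomes one long string of on/off bits:
message = "Hi"
bits = "".join(format(byte, "08b") for byte in message.encode("ascii"))
print(bits)                         # 0100100001101001
```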
The CPU shares some structural similarities with the brain: both receive and send information, both use a binary form of communication, and both work within a hierarchical structure. The two are not identical, however. The brain gives rise to an unquantifiable consciousness, while computers remain constrained by their programming. Later sections will discuss how these boundaries are beginning to break down, but it is important to note that these categories still shape how we understand ourselves and computers.
The word “memory” is an overarching term that is used, in part, to describe a variety of different processes that occur in the human brain.[1] Memory includes processes such as remembering word meanings, motor movements, short term retention of information, and long term storage of life events. These last two are the most explicitly connected to the creation of the self, although knowing what a word means and being able to interact with the environment do factor into the human experience of the self. In retaining and storing information about events, memory gives people the ability to situate themselves in time and space.
Working memory, also known as short term memory, gives people the ability to interact immediately with their environment. Working memory holds information from roughly the past 20 seconds and can contain, on average, seven "chunks" of information (e.g. a seven digit phone number or seven people's names). If the brain pays attention to the information in working memory, that information is encoded into long term memory, which is more stable and retains information for more extended periods such as months, years, and decades. An opposing process, retrieval, occurs when the working memory attempts to find information stored in the long term memory in order to use it. Thus, the general structure of information processing is a bidirectional exchange of information between the working memory and the long term memory.
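The exchange between working memory and long term memory described above can be caricatured in code. The sketch below is a toy model, not a psychological one; the class and method names are invented for illustration, and only the seven-chunk capacity and the 20-second window come from the text.

```python
# A toy model of the working-memory / long-term-memory exchange described
# above. The names here are illustrative inventions, not a real model.

import time

class ToyMemory:
    def __init__(self, capacity=7, retention_seconds=20):
        self.capacity = capacity
        self.retention = retention_seconds
        self.working = []        # (timestamp, chunk) pairs
        self.long_term = set()

    def perceive(self, chunk):
        """New information enters working memory; the oldest chunk is
        displaced once the ~7-chunk capacity is exceeded."""
        self.working.append((time.time(), chunk))
        if len(self.working) > self.capacity:
            self.working.pop(0)

    def decay(self):
        """Chunks older than ~20 seconds fall out of working memory."""
        now = time.time()
        self.working = [(t, c) for t, c in self.working
                        if now - t < self.retention]

    def attend(self, chunk):
        """Paying attention encodes a chunk into long-term memory."""
        if any(c == chunk for _, c in self.working):
            self.long_term.add(chunk)

    def retrieve(self, chunk):
        """Retrieval pulls a stored chunk back into working memory."""
        if chunk in self.long_term:
            self.perceive(chunk)
            return chunk
        return None
```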
Autobiographical memories, a specific subset of long-term memories that involve the person doing the remembering, form an integral part of the self system, providing a “record of past selves and records of events that were at one time significant to the self” (Conway 167). Autobiographical memories serve as individual building blocks that together allow the person to create an interpretation of who they are that they then refer to as the self. Interpretation is the key word, since autobiographical memories are re-imaginings and reconstructions of events, not a factual remembering of the events themselves (Conway 167). It seems that for the construction of the self, what the person perceives is more important than what the person experienced. Not only is the self created by these interpretations, the self also influences the way in which events are interpreted (Conway 167). Thus, in terms of autobiographical memories, memory and the self seem to be caught in a recursive loop in which each writes on the other.
The bidirectional relationship between self and memory means that the self can be understood as a “process”, that which creates the interpretation, and as a “structure”, that which forms through the interpretation (Markus 110). Put another way, the self is “the knower” and the self is “that which is known” (James in Markus 110). Memory, then, is also both “the knower” and “that which is known”; it is at once the active process of storing information and the passive information accessed by the self. In this interpretation, memory and the self are inseparable. It is impossible to have one without the other.
In the computer, as in the brain, memory comes in several different forms. The biggest distinction is between volatile forms, which lose any data stored in them if power is shut off, and non-volatile forms, which can store data even if the power is shut off. The two most common types of volatile memory are static random-access memory (SRAM) and dynamic random-access memory (DRAM). Both are random-access, which means that information is not stored hierarchically, so any given bit can be accessed without needing to access other bits. SRAM is the larger of the two, needing six transistors to store one bit, as opposed to DRAM’s one transistor and one capacitor. SRAM, however, is about seventy times faster than DRAM, accessing information in 1 nanosecond (Church).
The charge (0 or 1) in SRAM remains fairly stable over time, but in DRAM, the charge leaks. In DRAM, the transistor is used to move a charge into the capacitor and to read the capacitor. However, this charge is transient and has to be refreshed several times a second (Church), which is why this type of memory is called "dynamic". The 1's and 0's leak and blend into one another as the charge dissipates, and frequent refreshing is needed to keep them in a binary state. Even at the most basic level of transistors in volatile memory, there is some ambiguity in the computer binary.
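The leak-and-refresh cycle can be sketched as follows; the leak rate, read threshold, and refresh interval are arbitrary stand-ins chosen for illustration, not real DRAM timings.

```python
# A rough sketch of the DRAM behavior described above: each cell is a
# capacitor whose charge leaks away and must be refreshed to stay readable
# as a 1. The leak rate and threshold are arbitrary illustrative numbers.

def step(cells, leak=0.1):
    """One time step: every stored charge dissipates a little."""
    return [max(0.0, charge - leak) for charge in cells]

def read(cells, threshold=0.5):
    """Reading interprets each leaking charge back into a hard 0 or 1."""
    return [1 if charge >= threshold else 0 for charge in cells]

def refresh(cells, threshold=0.5):
    """Refreshing rewrites every cell that still reads as a 1 to full charge."""
    return [1.0 if charge >= threshold else 0.0 for charge in cells]

cells = [1.0, 0.0, 1.0, 1.0]      # the bits 1, 0, 1, 1 stored as full charges
for t in range(6):
    cells = step(cells)
    if t % 3 == 2:                # periodic refresh keeps the binary legible
        cells = refresh(cells)
print(read(cells))                # without refreshing, the 1's would fade to 0's
```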
The second form of computer memory is the non-volatile form, which includes magnetic hard drives and flash memory, also known as solid-state memory. Magnetic hard drives are mechanical, using a magnet to polarize a thin layer of magnetic material into 0's and 1's. Flash memory uses transistors with floating gate structures. Electrons trapped in the gate create a charge that reads as a 1, while the absence of electrons reads as a 0. Unlike DRAM, the electrons do not leak, but are instead firmly affixed in the binary (Church).
There is current research into multi-level cells (MLC), which could theoretically exist in volatile and non-volatile memory and would allow a computer to store more than one bit in a given transistor. At the moment, researchers have succeeded in creating two-bit cells, allowing for four options, 00, 01, 10, and 11, instead of the usual two, 0 and 1 (Perkowski). This technology breaks down the binary in some ways and supports it in others. The information is still expressed in 0's and 1's, but there are now four options, and if research continues, there could potentially be more. MLC gives computers the ability to move from a strictly binary system to a quaternary system (4 options) and potentially to an octal system (8 options) or a hexadecimal system (16 options), allowing for a choice within the binary structure.
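A simplified sketch of the multi-level idea, with made-up charge levels standing in for real device voltages: one cell now distinguishes four levels and therefore carries two bits at once.

```python
# A simplified illustration of the multi-level cell idea described above:
# instead of two charge levels (0 and 1), one cell distinguishes four levels,
# encoding two bits at once (00, 01, 10, 11). The voltage values are
# arbitrary stand-ins, not real device parameters.

LEVELS = {  # charge level -> two-bit value
    0.00: "00",
    0.33: "01",
    0.66: "10",
    1.00: "11",
}

def write(bits):
    """Store a two-bit value as one of four charge levels in a single cell."""
    for level, value in LEVELS.items():
        if value == bits:
            return level
    raise ValueError("a two-bit cell can only hold 00, 01, 10, or 11")

def read(level):
    """Read a charge level back as its two-bit value (nearest level wins)."""
    nearest = min(LEVELS, key=lambda l: abs(l - level))
    return LEVELS[nearest]

cell = write("10")
print(cell, read(cell))   # one cell now carries two bits
```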
These different types of memory give the computer multiple options for storing different types of data. Data that needs to be accessed quickly is stored in volatile forms such as SRAM and DRAM, whereas information critical for the overall composition of the computer is stored in non-volatile forms. Non-volatile memory allows the computer to sustain itself over the period of inactivity when the power is shut off. Both types of memory, volatile and non-volatile, work to construct the computer's self. The computer's self is the aggregate of the operating system (OS), the programs, and the files stored in the hardware, or more specifically, stored in the memory. All computers hold different aggregates of data and therefore have different experiences as machines, even if they are not aware of their differing experiences. My computer, which runs Ubuntu and is used mainly for word processing and internet access, has a different experience than a PC gaming computer or a Mac used for graphic design.
Unlike the human, the computer is unaware of its self. It does not know that other computers with which it is exchanging data may run other operating systems. Indeed, it only runs into difficulty when it encounters a file format that is incompatible with its OS. Even so, human others are aware of computers’ different selves. They differentiate between computers and in doing so give them identities based on the 1’s and 0’s stored in their memory.
Additionally, the computer has the unique property of being able to exist differently in two distinct states. When it is supplied with electricity, it is able to run an operating system, execute programs, and interpret input from external sources. Deprived of electrical power, however, the computer becomes completely inert. No parts function; no data is processed; the computer is effectively dead. This distinction far surpasses the awake/asleep distinction in humans, because in both of these human states, the body continues to function. The computer, however, is nothing more than an empty shell without electricity. Thus, a computer’s memory is completely dependent on a single external source.
So far, the human and the computer have been discussed as separate identities that can be compared to one another but are fundamentally distinct and differentiated. The cyborg, however, blends those identities, creating a hybrid of human and machine. The term cyborg, like the term memory, is a broad designation for many different combinations of the organic and the inorganic, some of which are imaginings of possible futures and some of which exist in our world today.
The Harawayian cyborg is the iconic imagining of a cyborg future. Cyborgs are ironic and paradoxical results of the breakdown of barriers between human and animal, man and machine, and real and non-real (152-153). Haraway discusses the complex dance between self, body, human and machine in the cyborg, writing: “It is not clear who makes and who is made in the relation between human and machine. It is not clear what is mind and what is body in machines that resolve into coding practices” (177). The cyborg is thus a creature that makes as it is made in a bidirectional process that echoes that of the brain and the CPU. It is a figure that both follows a scripted program, as the CPU does, and interprets its own interactions with the environment, as the brain does.
A much more concrete example of the mixing of human and machine comes from entities such as Aimee Mullins and Jerry Jalava, who have modified their bodies in such a way as to surpass the everyday human experience. Mullins, who uses prosthetic legs, refers to herself as "super-abled", since she has abilities that other humans do not, including the ability to dramatically change her height at will (Mullins). She has worked to create a cyborg aesthetic, asserting that there are aesthetics beyond the human one, and that the future lies in such mixings of the human and the technological. Jerry Jalava has mixed human and computer by embedding a USB flash drive into his prosthetic finger. He carries flash memory around with him disguised as a part of his human body, taking the computer memory into himself and making it part of himself.
However, Wajcman raises a critique of these physical representations of the cyborg. For her, a prosthesis or a technological modification alone does not mean that one transcends humanity and enters a cyborg category; otherwise "every old-age pensioner with a pace-maker" would be a cyborg (92). Wajcman is correct in saying that the mixing of technology and human does not automatically create a cyborg. It only does so if, in combining the two, something new is created in a way that affects practical or theoretical discourse.
Jerry Jalava, with his USB drive finger, provides a concrete and perhaps overly simplistic example of the cyborg memory. In more integrated and complex examples of the dissolution of the boundary between human and computer, the different structures and types of memory would begin to merge. Caputi fears that organic memory will be replaced by machine memory (in Halberstam 7), but the cyborg solution would seem to be combining organic memory and machine memory. Working memory and DRAM would mix with one another, walking the line between binary and non-binary with ever dissipating information. Long-term memory and non-volatile memory would mix, the former's ability to produce interpretations fusing with the latter's perfect binary remembering. Since memory is integral to the construction of the self, the possible compositions of the cyborg memory would affect the cyborg self, even as that cyborg self writes back onto the memory structures.
The cyborg's rapid short term storage capacity would combine DRAM and short term memory to create some new form. DRAM, dynamic random-access memory, works to store charges, to code 0's and 1's, in capacitors that slowly leak electricity. DRAM refreshes itself several times a second (Church), trying to keep its binary distinctions in line. Working memory is likewise pressed for time, holding 20 seconds' worth of information. In both, the information comes in bits: for the computer, 0's and 1's; for the brain, seven information packets. The cyborg re-imagining of this storage capacity could be called dynamic short term memory (DSTM), and it could take many different, contradictory forms. It could operate on a time scale between working memory and DRAM, or it could sit at one end of the spectrum. It could do both, oscillating between 20 seconds and a fraction of a second. DSTM would probably "leak" more than either of its parents. Between the forgetfulness of human memory and the dissipating charge, DSTM may find itself grasping for bits. Alternatively, the brain could stabilize the capacitor, and the rigid 0's and 1's could stabilize the brain.
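Purely as a thought experiment, the oscillating, leaky DSTM imagined above might be caricatured in code. Every name and number below is invented for illustration; this is a sketch of the essay's speculation, not a description of any existing system.

```python
# A purely speculative sketch of the essay's "dynamic short term memory"
# (DSTM) thought experiment: a store whose retention window oscillates
# between DRAM's fraction of a second and working memory's ~20 seconds,
# leaking whatever it holds. All names and numbers are illustrative inventions.

import math
import time

class DSTM:
    def __init__(self, short_window=0.05, long_window=20.0, period=10.0):
        self.short_window = short_window   # DRAM-like fraction of a second
        self.long_window = long_window     # working-memory-like 20 seconds
        self.period = period               # seconds per oscillation
        self.chunks = []                   # (timestamp, chunk) pairs

    def current_window(self):
        """Oscillate between the two parent time scales."""
        phase = (math.sin(2 * math.pi * time.time() / self.period) + 1) / 2
        return self.short_window + phase * (self.long_window - self.short_window)

    def store(self, chunk):
        self.chunks.append((time.time(), chunk))

    def recall(self):
        """Anything older than the current (shifting) window has leaked away."""
        now, window = time.time(), self.current_window()
        self.chunks = [(t, c) for t, c in self.chunks if now - t < window]
        return [c for _, c in self.chunks]

mem = DSTM()
mem.store("a borrowed phone number")
print(mem.recall())
```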
The offspring of long term memory and non-volatile memory has equally contradictory potentials. Long term memory, especially autobiographical memory, has great potential for ambiguity. The self forgets memories; the self interprets memories; the self fabricates memories. In contrast, non-volatile memory, such as hard drives and flash drives, remembers all the information exactly as it was entered. The CPU replicates data exactly when it retrieves information from non-volatile memory, while the brain reconstructs data when it retrieves information from long term memory. The cyborg's non-volatile long term memory could exist in a multiplicity of states. The memory could be stored in binary structures as in the computer, but this binary could be paradoxically open to interpretation. There could be spaces between the 0's and 1's. Alternatively, this cyborg memory could faithfully replicate information and still use that information to construct the self, or perhaps the replication would not be able to fully create the interpretations of the self, and the cyborg would find its sense of self through some different mechanism.
However, it may be that memory, whether in a human, machine, or cyborg form, is necessary but not sufficient for the development of the self. The self may be a fundamentally human element that humans then project onto machines. If this is the case, we "self" our laptops the same way we "gender" our boats. There may be a human essence that exists independently of any environmental interactions and that must be present in order for the memories to be able to form an aggregate. Maybe this essence could be called a soul, maybe it could be called an ego, but if it exists, then it has significant implications for the cyborg memory. Cyborgs could perhaps have this soul/ego, but it may depend on exactly what paradoxical re-imagining of memory they possess. If their DSTM operated more like DRAM than like working memory, the balance may shift more towards machine, and the soul/ego may not be able to situate itself. If the non-volatile long term memory had the interpretive abilities of long term memory as opposed to the replication of non-volatile memory, then there may be enough ambiguity for the soul/ego to reside.
However, all this depends on whether "cyborg" is understood to be a spectrum or its own distinct ontological category. If cyborgs fit into a spectrum, then it is possible that there are human cyborgs and machine cyborgs. This seems like a contradiction, but the whole nature of the cyborg is contradictory. If, instead, cyborgs are to be understood as a separate category, as "illegitimate offspring" that are "unfaithful to their origins" (Haraway 151), then this discussion of soul/ego may be irrelevant. The essentialist human part does not mix into the cyborg. The soul/ego may transform into some new entity, but the fact that the soul/ego cannot be shared with the cyborg is not a particular concern, just as the difference between the human self and the computer self is of no concern.
Outside of the soul/ego question, the gendered existence of the human influences the cyborg and the cyborg memory. If gender (not sex, but gender) is programmed into our DNA, then gender may be more hardwired into the human memory and self than culturally constructed. If, however, gender is a cultural construct, if "one is not born a woman, one becomes a woman"[2] (de Beauvoir 13), then the cyborg memory would be better able to choose the gender it remembers and to choose the gendered memories it writes on itself. If the cyborg's non-volatile long term memory uses the same interpretive self-encoding processes as the human long term memory, then the gendered content of those memories could affect the self, depending on how they were processed. In a non-essentialist situation, there is more ambiguity in the processing mechanism, whereas the essentialist processing mechanism limits the range of interpretation and reduces gender to more of a binary.
On the computer end of the cyborg equation, there is some debate as to whether computers are neutrally gendered entities or whether they are gendered male in some way. Alison Adam argues that the language of computers is inherently masculine because it operates in a formal, logical manner associated with masculine culture (109). So, while computers do not have a sex, they are constructed as male. In this case, the fusing of human and computer memory would seem to provide a way out of the formal logical binary through the leaking transistors and the interpretive logic. Adam goes on to argue that the disembodiment of computers distances them from the feminine, because the female gender is connected with the body. Adam's critique is relevant if there is in fact an essential connection between the feminine and the body, and it may not be as much of a problem for corporeal cyborgs with human forms as it would be for cyborgs that more closely approximate Haraway's "sunshine machines" (153). It seems, then, that as with many constructs surrounding the cyborg, gender could go both ways. It could either be fused into something beyond the human and computer genders, or it could reinforce those genders.
The cyborg is a creature of contradiction and bidirectionality, so it follows that the cyborg memory would behave in a similar way. The cyborg memory draws on human memory and computer memory, creating a range of contradictory interpretations of memory and of the cyborg self. In one interpretation, the cyborg lies on a continuum with poles of human and machine, while another interpretation places the cyborg in a distinct category where the hybrid is separate from its origins. The cyborg self and the cyborg memory are impossible to pin down. They are simultaneously multiple contradictory constructs that shift and oscillate, fusing to create new possibilities. Any discussion of the cyborg self and the cyborg memory therefore cannot assert that the cyborg is any one thing, but can only explore the possible imaginings of the cyborg self and cyborg memory.
References
Adam, Alison. Artificial Knowing: Gender and the thinking machine. New York: Routledge, 1998.
Caputi, Jane. In Judith Halberstam. “Automating Gender: Postmodern Feminism in the Age of the Intelligent Machine.” Sex/Machine: Readings in Culture, Gender, and Technology. Ed. Patrick Hopkins. Indiana University Press, 1998. 468-483.
Church, Craig. Computer Engineer for Unisys Corporation. Personal Interview. 9 May 2009.
Conway, Martin A. “Autobiographical Memory.” Memory. Eds. Elizabeth L. Bjork and Robert A. Bjork. Academic Press.
de Beauvoir, Simone. Le deuxième sexe II. Handout from French 248.
Haraway, Donna. “A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late Twentieth Century.” Simians, Cyborgs, and Women: The Reinvention of Nature. New York: Routledge, 1991.
Jalava, Jerry. “USB finger, more details.” Protoblogr.net Blog. 10 March 2009. 11 May 2009. <http://protoblogr.net/blog/view/usb_finger-more_details.html>
James, William. In Hazel Markus. “The Self in Thought and Memory.” The Self in Social Psychology. Eds. Daniel Wegner and Robin Vallacher. Oxford University Press, 1980.
Markus, Hazel. “The Self in Thought and Memory.” The Self in Social Psychology. Eds. Daniel Wegner and Robin Vallacher. Oxford University Press, 1980.
Mullins, Aimee. "Aimee Mullins and her 12 Pairs of Legs." Video of conference presentation. TED. January 2009. 11 May 2009. <http://www.ted.com/index.php/talks/aimee_mullins_prosthetic_aesthetics.htm>
Perkowski, Marek A. "Multi-Level Cell Technology from Intel." Maseeh College of Engineering and Computer Science at Portland State University. 11 May 2009. <http://web.cecs.pdx.edu/~mperkows/ISMVL/flash.html>
Wajcman, Judy. “The Cyborg Solution.” TechnoFeminism. New York: Polity, 2004. 78-101.