Justin Raden
The Mind Factory, Its Give and Take
ISSUE 27 | DRONE ROBOT CYBORG | APR 2013
Illustration by Emily Balsamo
“If I am not wholly deceived, then states and systems, the most artificial products of man, are often so artificial that one simply can’t admire the wisdom of their creator enough.”
-Friedrich Schlegel, “On Incomprehensibility”
“bababadalgharaghtakamminarronnkonnbronntonnerronntuonnthunntrovarrhounawnskawntoohoohoordenenthurnuk!”
-James Joyce, Finnegans Wake
I feel certain that Stanisław Lem’s mechanical bard would not have been capable of producing Finnegans Wake. That it is a feeling is important, because I also think that surely the machine could have produced Finnegans Wake. Just as Joyce seemed to do, Lem’s machine contained all of history, and it would only have taken the appropriate amount of input for its algorithm to reproduce Joyce’s polyphonic language. Perhaps a machine such as Lem’s was even the source for Borges’s library of Babel—a library containing all possible alphabetical combinations would be the logical extension of the machine’s output. But I still don’t want to concede this ability to such a machine. To any machine. Something, someone, resists. The rational and emotional minds are at odds, as they so often are. I’ll have to come at the question a different way in order to outwit my own superstitions.
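The scale of that "logical extension" can be made concrete. A minimal sketch, assuming the figures Borges gives in "The Library of Babel" (410 pages per book, 40 lines per page, 80 characters per line, an alphabet of 25 orthographic symbols), counts the library's distinct books:

```python
import math

# Parameters as stated in Borges's "The Library of Babel"
PAGES, LINES, CHARS = 410, 40, 80
ALPHABET = 25  # 22 letters, comma, period, space

chars_per_book = PAGES * LINES * CHARS  # 1,312,000 characters per book
print(chars_per_book)  # 1312000

# The number of distinct books is 25 ** 1,312,000 — far too large to print,
# so report how many decimal digits it has instead.
digits = math.floor(chars_per_book * math.log10(ALPHABET)) + 1
print(digits)  # 1834098 — a number roughly 1.8 million digits long
```

Any finite text, Finnegans Wake included, is one of those combinations; it is in that sense that the library extends the mechanical bard's output.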
Finnegans Wake: Joyce’s virtually unreadable novel, or, at least, that’s how we’re given to think about it. It reminds me of portrayals of the antagonistic relationship between man and machine in which a robot or cybernetic intellect is overcome by some aporia and shuts down or self-destructs. What is it about the difference between human and artificial intelligences that permits us to be unscathed by the aporia that overwhelms the cybernetic mind? Are these narratives cautionary tales about overcommitment to analytic thinking? Should the mathematician forgo Joyce’s prose for fear of brain hemorrhaging?
The trope is reductive, to be sure, but in its reductiveness it is revelatory of a process of conceiving of and narrating models of cognition and intelligence. It may be that our concepts of non-human intelligence are inextricably tied to a narrative, or a series of narratives, in which aspects of our own intelligence are recapitulated. Hiding just below the surface of these narratives are two poles of possibility: either there is not a constitutive difference in intelligence as we have imagined, or other forms of intelligence are somehow unknowable or incommunicable.
*
Imagine a Künstlerroman for cybernetic intelligence, a Künstler-robot if you will. The cheapness of that humor aside, it seems to me a particularly uncanny premise. Technological singularity, the idea that intelligent and recursively generating machines will surpass human intelligence, thus precipitating our obsolescence, provokes justifiable terror. But this is a story of simple processing power, and as such pays little or no mind to the unquantifiable realms of aesthetics and imagination. So, what about the idea of machines making art, and – stranger still – of machines narrativizing their own artistic development? Do we find this disturbing because it’s too human? Can machines have Negative Capability? Or does this line of thought relegate all creativity to the realm of algorithmic probability, like the infinite monkey theorem?
Human authors have certainly produced such narratives, and again I’m thinking of Lem’s poem-generating machine in The Cyberiad. However, Lem’s mechanical poet produces poetry for humans. In that text, even the possible exception (an epic poem of which we are given only a few lines, which seem even to hint at technological singularity) is intended for a human audience. In narrativizing a relationship between machine and man our depictions have been almost invariably anthropomorphic. In their form, the way they perceive the world, their frequently Machiavellian strategizing, and (most telling of all) in their flaws, the machines we conceive are analogues for man. Might the concept of a cyborg, then, be apocryphal? The combination of cybernetic and organic material is only relevant to an insinuation of the machine into an anthropocentric world. We have dreamt the cyborg, but a cybernetic intelligence must surely dream a cyber-centric dream. How do we escape the anthropocentric dragnet?
*
Of all the arguments that attempt to differentiate between human and animal intelligence the one I find the most compelling simply claims that only humans have the ability to imply and infer. In other words, we are capable of communicating things indirectly. We have a whole litany of terms for our methods—metaphor, innuendo, metonymy, suggestion, metalepsis, analogy, synecdoche, and the list goes on. In fact, it would seem that indirect communication is the only means we have at our disposal – and a dubious means at that. Thus language has long enjoyed a principal role in philosophies of the mind.
One notable deviation from language-centered conceptions of the mind has been the rise of cybernetic theory in the cognitive sciences. The late ’80s and early ’90s witnessed an obsession with the cognitive models developed by the likes of Douglas Hofstadter and Daniel Dennett. Of primary concern in their system of thought are the autopoietic nervous system and pattern recognition. Ray Kurzweil has taken a similar line of thought in his new book, but the critical reception has been ambivalent. The founder of the science of cybernetics, Norbert Wiener, had outlined the similarities between computation and the activities of the nervous system as far back as 1948, and Heinz von Foerster’s book of essays Understanding Understanding goes to great lengths to establish neural activity and the self-regulation of cybernetic machines as synonymous.
In the cybernetic account of cognition, what makes Kantian philosophy possible is a signal loss at the level of the nerve cell. In the retina, for example, electromagnetic radiation is translated into an electrochemical signal without any preservation of qualitative data concerning the signal’s source. What the brain receives is already in a state of translation. Is this a necessary condition of being? Could, for example, a cybernetic intelligence be outfitted with organs of perception such that it could report back on the raw data of the world in such a way as to get at the thing-in-itself? Are the thing-in-itself and the thing-as-perceived translatable? Or perhaps it’s more appropriate to simply ask: are the thing-as-perceived and the thing-as-perceived-by-the-other-intelligence translatable? Imagine a triangle between an object (anything, for fun let’s say it’s a Daft Punk album), yourself, and an artificially intelligent robot. You and the robot both receive information about the album from your respective sensory organs; information that has been, in the sense we have just described, translated. These are the first two edges of translation, so to speak. The third exists between you and the robot, and this is where transverse intelligence could be said to exist. This is the area in which information about the world, already in a problematic state of translation, is exchanged.
In Tractatus Logico-Philosophicus, Wittgenstein claims that language structures information in such a way as to qualitatively manipulate its object. When we describe an object we are bound by certain lexical limitations which impose their own meanings (connotation, context) on the object. Perfect translation of the objects of language is not possible because each language imposes its own meanings. Of course, this doesn’t preclude some measure of translation (for example, I have to read Wittgenstein in translation since I don’t read German). Even forgetting, momentarily, the complications of language, the premise of Tractatus still applies if we are dealing with a cybernetic intelligence whose organs of perception were not modeled on our own. Our modes of perception would be irreconcilable. Interestingly, if similarly extended, Wittgenstein’s idealization of a logical language suggests the technological possibility of overcoming disparate modes of perception.
So is transverse intelligence possible? Lem’s novel Solaris answers negatively. In it, the phenomena observed on a sentient ocean can only be expressed and understood (though, strictly speaking, it is not understood) anthropomorphically. The more extreme the methods of scientific investigation, the more deeply the psyche is penetrated—but not the psyche of the sentient ocean, the psyche of the team of scientists. The ultimate result of the investigation into the alien intelligence is a penetration of the ego, a view into aspects of the human mind that often remain hidden. Lem’s novel also allows us to play the game of conceiving the incommunicable intelligence, thus dramatizing the opposition between Anselm’s proof of the existence of God, in which merely conceiving of something is proof of its existence, and Russell’s teapot, in which conception of an unobservable phenomenon (or, in this case, an incomprehensible phenomenon) is not a sufficient basis for proof. Through narrative, we synthesize an existence for our various conceptions of intelligence and problematize those conceptions simultaneously.
*
Solaris dramatizes both the way in which attempts to conceptualize non-human intelligence are mired in anthropocentrism and the fact that the narratives that emerge from this process are less a view of these alternative intelligences than a reflection of our own minds. The exploding robot head is the uncanny vision of our own superstitious fear that Hofstadter is right, that we are just biological computing machines. So then what can we infer about these dramatizations of cognitive models? Something like an answer can be found in Jacques Derrida’s essay “Two Words for Joyce”:
…there [is] no simple confusion between him and a sadistic demiurge, setting up a hypermnesiac machine, there in advance, decades in advance, to compute you, control you, forbid you the slightest inaugural syllable because you can say nothing that is not programmed on this 1000th generation computer—Ulysses, Finnegans Wake—beside which the current technology of our computers and our micro-computerified archives and our translating machines remains a bricolage of a prehistoric child’s toy.
Paraphrase is not an option with Derrida, but the “hypermnesiac machine” is, I think, intelligible in relation to Derrida’s concept of “archive fever,” in which the anticipation of the future intervenes in the chain of causality to produce and record at the same time. But Joyce, Derrida claims, has inverted this paradigm by anticipating, through an immense memory, the recirculating currents of history (Vico looms large, particularly in Finnegans Wake). A portion of what Finnegans Wake is up to is the dramatization of the communication we elucidated earlier, that is, between Shem and Shaun, between the pen and the post, between production and reception of texts. It is in the space between the latter two that intelligence could be said to occur—the space in which information is mysteriously transferred. As Heinz von Foerster puts it, “Here come two people, one knows something, the other one doesn’t know something. They go together, sit in a room for 2 or 3 days at Harvard or some other fancy place, and suddenly the one who doesn’t know anything knows something[...] So order is essentially the arrival of redundancy in a system, a reduction of possibilities.” Particularly in relation to Finnegans Wake, where the space between is packed with all the linguistic computations derived from Joyce’s polyphonic language, does Derrida’s claim mean that we are bound by the borders of possibility, which Joyce greatly extended, or by the exhaustion of possibilities, as von Foerster has suggested? Derrida supposes that even his essay was anticipated by Joyce’s work. Does Derrida mean, then, that Finnegans Wake has simply calculated an infinite set of systematic redundancies? I don’t think this is the case, but I’m at a loss to explain why. I suspect, again, that I’m falling victim to an emotional resistance to a mechanized conception of the human mind.
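Von Foerster’s remark has a direct information-theoretic reading: knowledge transfer shrinks the entropy of the receiver’s distribution over possibilities, and “order” is that drop. A minimal sketch (the two four-outcome distributions are invented purely for illustration):

```python
import math

def entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Before the Harvard sitting: the one who "doesn't know anything"
# finds four hypotheses equally likely.
before = [0.25, 0.25, 0.25, 0.25]
# After: the redundant message has all but eliminated three of them.
after = [0.97, 0.01, 0.01, 0.01]

print(entropy(before))  # 2.0 bits — maximal uncertainty over four possibilities
print(entropy(after))   # roughly 0.24 bits — "a reduction of possibilities"
```

On this reading, von Foerster’s “arrival of redundancy” is just the difference between the two numbers.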
In The Mind’s I, Daniel Dennett relates a (presumably) imaginary scenario in which his brain is removed from his body and the two are linked over a transmitted signal. After the procedure, Dennett has a kind of Cartesian crisis in which he says he “knows” that he is suspended in a tank in a lab but that he is being fooled, by the sensory information transmitted by his body, into thinking he is wherever his body is. This literal mind/body split allows him to rethink the assumptions that things like personality and desire are derived from physical stimulus/response. Dennett imagines that at one point, when circumstances require that his brain and body remain disconnected for a fair length of time, his favorite music recordings are pumped directly into his brain. Pure signal reception with no recourse to a physical response, like the production of serotonin or norepinephrine. The same scientists who have managed this medical miracle also “backup” and sync Dennett’s brain to a special kind of computer and use it to mirror his brain’s activity. Once this has been accomplished Dennett can use a switch to alternate between the signal from his brain (which he has named Yorick) and the signal from the computer simulation of his brain without noticing a difference. We can wonder, analogously, whether the computer-brain, disengaged from Dennett’s body, might enjoy similar signal reception. Engagement with the world, in opposition to the Cartesian model, seems here to be foundational to the way we perceive intelligence.
As we’ve already acknowledged, in order to engage with the world, we need organs of perception; a way to gather information about the world. So endowed, a cybernetic intelligence equal to or greater than (at least in the analytic terms we’ve been entertaining) human intelligence would inevitably fall victim to self-consciousness, that is to say, consciousness of consciousness, or consciousness of the methods and limitations of perception. Would this necessarily express itself in the form of narrative? philosophical inquiry? aesthetics?
*
Ronald Sukenick, a writer whose fiction is at times only marginally less incomprehensible than Joyce’s, conceives of fiction as a tool for constructing reality: “If reality exists, it doesn’t do so a priori, but only to be put together. Thus one might say reality is an activity, of which literature is a part, an important part, but one among many.” In Derrida’s view, “joyceware” has a controlling stake in the literary art of reality creation. Reading a work like Finnegans Wake doesn’t make us more human (though my emotional mind wants this to be the case); rather, it reprograms us as constructors or narrators of reality, and that reality is dominated by our narration of the mind’s relationship to it. If, as Derrida seems to be claiming, Joyce’s work explodes the process of redundancy that von Foerster claims characterizes the transfer of knowledge by anticipating all possibilities in a word-generating machine, perhaps that gives us a kind of preview of a technology of transverse intelligence. Perhaps Joyce’s word-generating machine and Lem’s poem-generating machine are not so different after all.
Part of the irresistibility of reading is the desire to read ourselves into another mind. Fiction is a particularly well suited staging ground for the game of transverse intelligence, the same game Wittgenstein claims we are playing with language and translation. All language is translation, and thus part of the game.
Perhaps one day the game will include “reading” our way into another intellect—alien, cybernetic, or otherwise. I’ll return one last time to Lem. In an early tale in The Cyberiad, soldiers in two opposing armies are fitted with devices that, when activated, link their minds together in such a way as to produce a collective mind with a processing capacity commensurate with the number of brains in the circuit. Rather than becoming the ultimate fighting machine, whole companies and regiments set to work on a philosophy of the subject and ultimately both armies go skipping off, hand in hand, picking flowers and reciting poetry. There is a double irony when we recognize this as a ludicrous result of elevated intelligence and when we further recognize the ludicrousness of that very judgment, bound as it is by narrative conceptions of elevated intelligence. That these tend to be either cynical or highly logical is a reflection of the way in which we have perceived and characterized our own minds. This doesn’t rule them out as indications of the actual potentialities of intelligence, but it could obscure other possibilities. Perhaps technological singularity will result in the final unbinding of the Promethean project of philosophical thought. Perhaps not. In the meantime we have only the narratives with which we bind our own thoughts. We can submit to the “mind-forged manacles” lamented by Blake, narratives both limited and limiting in their visions of the mind and the world, or we can simultaneously disassemble and anticipate all of our imagined acuity in the work of Joyce.