

Ryan Walach

Afterthought to “A Robo-Poet Does Not Scare Us”


ISSUE 28 | TRAUMA AND LAUGHTER | MAY 2013

As Andrew Branch notes in his introduction to Un robot-poète ne nous fait pas peur, Vian’s text begs the very question it raises: just what exactly is a “robo-poet”? While the logic of his argument is predicated on a polymathic apprehension of all things mechanical and cybernetic, Vian seems suspiciously ignorant of the kind of machine he pretends to be so concerned about. Not that I necessarily blame him; if the paucity of Google hits for Albert Ducrocq is any indication, a thorough depiction of his work may have always been something of a struggle to come by. To be sure, Vian’s blurry picture of the robotic automaton seems roughly in tune with the state of electronics at the time. As unseemly as Vian’s “fatty” may sound to readers today, a 1949 article in Popular Mechanics predicted that “[c]omputers in the future may weigh no more than 1.5 tons.” Furthermore, for all their technical inscrutability, Ducrocq’s most admired inventions appear to be a minor genus of steampunk Roombas, rusty little gearboxes that scuttle around, indeed, a bit like crabs. Yet these details do little to illustrate, much less dispel, the existential threat imputed to them.

Perhaps the most fearsome aspect of Ducrocq’s little creatures is that they were designed to learn and make epistemologically valid “choices.” Informed by a cybernetic model of the organism, Ducrocq equipped each with a unique sensory apparatus that enabled it to collect and synthesize external data into a crude picture of its environment. Google Translate gives us this from Pierre de Latil’s Thinking by Machine:

But the element that appeared most interesting property was the introduction of a real “decision-making body” which could be referred to [as] an embryo[nic] personality… While the turtles were moving automatically to[wards] low light, while in contact with a barrier engine exploration was automatically put into action in a specific way[:] we have an inverter mounted on the engine of exploration…so that, encountering an obstacle, the animal can turn right or left…according to formulas variables…Under these conditions, we were able to ensure that the animal ‘dislikes’ rotations.
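
De Latil’s description is concrete enough to simulate. What follows is a toy sketch in Python (my own approximation, not Ducrocq’s circuitry): the turtle steers toward low light, and on contact with a barrier an internal inverter decides between a right and a left turn, flipping according to a crude stand-in for the “formulas variables.”

import random

class Turtle:
    """Toy model of the decision mechanism de Latil describes: phototaxis
    plus an internal 'inverter' that picks a turn direction at obstacles."""

    def __init__(self):
        self.heading = 0    # direction of travel, in degrees
        self.inverter = 1   # +1 turns right at obstacles, -1 turns left

    def step(self, light_left, light_right, blocked):
        if blocked:
            # The "decision-making body": the turn depends on internal
            # state, not on the stimulus alone.
            self.heading += 90 * self.inverter
            if random.random() < 0.5:   # a crude "variable formula"
                self.inverter *= -1
        elif light_left < light_right:
            self.heading -= 10          # veer toward the dimmer side
        else:
            self.heading += 10
        return self.heading % 360

turtle = Turtle()
print(turtle.step(0.2, 0.8, blocked=False))  # 350: veers toward the dimmer left

The point of the toy is the one de Latil makes: once the turn at an obstacle depends on an internal, history-dependent state, the machine’s behavior is no longer a pure function of its stimuli, which is all an “embryonic personality” amounts to here.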

From what little I’ve been able to gather about Ducrocq’s “robo-poet,” Calliope, these modest attempts at artificial life were far more disposed to develop a “personality” than the putative target of Vian’s essay. They were indeed a cause of paranoia: an American newspaper article published in October of 1953 characterizes Ducrocq’s pet automata as “suggestive of witchcraft,” adding that the author has “friends who tell [him] that the mere idea…terrifies them.” As Vian notes, not only could the realization of quasi-intelligent machines compromise the security of human labor in a competitive global market, but it might also confirm certain cybernetic theories of consciousness that are deeply unsettling and dehumanizing. One of his mitigating claims, however, presumes the speciation of these automata, whereby the mechanical turtles and foxes and so on are morphologically and structurally distinct from the “robo-poets.” Certainly, at the time, the consolidation of their respective operations into a universal Turing machine would have rendered them too expensive and too huge to ever be commercially viable. Today, the ubiquity of personal computers encourages us to revisit Vian’s other argument: that we must first understand the machine in order to subdue it.

Following Ducrocq, artificial intelligence research was largely concerned with symbolic logics completely abstracted from any real-world sensory data. Dubbed, in retrospect, Good Old-Fashioned AI (GOFAI), the approach characteristic of the ’60s and ’70s was to wield the unprecedented processing power of the computer as an accelerated incubator for various experimental, axiomatic models of so-called thought. Despite the intuitive advantages that an embodied, interactive machine like Ducrocq’s seems to hold over the sterility of GOFAI, both approaches ultimately failed for the same reason—namely, the absence of a suitable environment. Where the corpulence of Ducrocq’s robots precluded the versatility of their processing capabilities, the development of early artificial intelligence programs was stunted by their relative isolation from external systems.

Naturally, this disparity could only be reconciled by the invention of the Internet. As a virtual nexus of digital information from anywhere and everywhere in the world, the Internet has since proven to be a fecund breeding ground for what is called statistical AI, online software that sifts through user traffic and data in order to glean intelligible patterns. These surreptitious operators are what make the vastness of the Internet both tractable and incredibly opaque. They are the true representatives of those hegemonic instruments that Vian enjoins us to keep at our mercy.

Probably the most crucial aspect of our contemporary network structures to bear in mind is the inseparability of reading and writing on the Internet. That is, computing machines abstract the conventional notion of writing from its origins in arbitrary mark-making and transpose it into a stream of infinitely plastic binary digits. Content becomes indistinguishable from interpretative commands, at least as far as the hardware is concerned. On a molecular level, arithmetic operators such as addition and subtraction are themselves encoded as numbers; only their location in memory, itself a numerical attribute, specifies their role at any given moment during execution. For instance, to multiply a binary number by two, each bit is simply shifted one position to the left, just as appending a zero multiplies a decimal number by ten. And because binary has only two digits, adding any two nonzero digits produces a carry to the left. In this way, all arithmetic operations can be reduced to recursive combinations of shifts and carries, and, therefore, to numbers themselves.
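
To make that reduction concrete, here is a minimal sketch in Python (a present-day illustration, not anything specific to the machines of Vian’s era) of multiplication carried out with nothing but shifts and additions:

def shift_add_multiply(a, b):
    """Multiply two non-negative integers using only bit shifts and addition."""
    product = 0
    while b:
        if b & 1:          # lowest bit of b is set: include this shifted copy of a
            product += a
        a <<= 1            # shifting a left by one bit doubles it
        b >>= 1            # shifting b right by one bit moves to the next digit
    return product

print(shift_add_multiply(19, 7))  # 133

Every step is itself just a number being rewritten in place, which is the indistinction of content and command described above.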

This polymorphic mutualism constitutes our relationship to the network: behind even the most austere of websites (e.g., the now virtually defunct GeoCities) is robust HTML, which web browsers parse automatically to render the text and links visible to the visitor. Even the ostensibly inert activity of web “surfing”—often achieved graphically through mouse-clicks—switches segments of code off and on, catalyzing legible adjustments to the active scripts in a constant transaction of text—at least as far as the machine is concerned—between the visitor to a website and his or her browser. Naturally, browsers collect these transactions and preserve them for user-oriented features such as the now-ubiquitous “forward” and “backward” commands and web history. But websites, which are of course software themselves, can erect their own caches of this kind of information to adapt their services to other users, which is exactly where some of the most profitable companies in the world (Google, Facebook, and Amazon, for instance) invest the greater part of their resources—that is, in parsing the trail of signs users leave behind them, in order to lead subsequent users back along the most well-trodden cyber-pathways. In this way, Google’s PageRank algorithm ensures that what we “read” “speaks” at least as loudly as what we “write.”1
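
The “voting” logic Google describes in the footnote below condenses into a few lines of Python. This is a toy rendition of the published scheme, not Google’s production system:

def pagerank(links, damping=0.85, iterations=50):
    """Toy PageRank over a dict mapping each page to its outbound links.
    A link from page A to page B counts as a vote for B, weighted by A's
    own rank. Assumes every page links to at least one other page."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outbound in links.items():
            for target in outbound:
                new_rank[target] += damping * rank[page] / len(outbound)
        rank = new_rank
    return rank

# A and B each "vote" for C; C votes for A. C ends up the most "important."
print(pagerank({"A": ["C"], "B": ["C"], "C": ["A"]}))

The ranks are computed entirely from the link structure users collectively write, which is the sense in which our “reading” speaks.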

On one hand, user-generated data, that reading/writing amalgam conducted continuously by, now, almost three billion people—which is not merely transcribed by but materially instantiated in the integer transactions described above—is precisely what nourishes the massive, sublime infrastructure we call the World Wide Web. On the other, in a chiastic echo of Nietzsche’s early observation about the typewriter2, our thoughts are also working on our writing tools, which in turn ordain the information we gather. This feedback loop is in full, fluent circulation today, in the T9 and auto-correct features of smartphones (as well as assistive programming tools such as Ruby and Xcode) and the strange new rhetoric we commit to search engines. To observe the reciprocal implications this holds for so-called ordinary language, one has only to consider the torrid word-salad that confronts us when we review our daily search histories. In fact, entire blogs and Twitter accounts have been devoted to parodying the awkward search queries of the elderly and pre-adolescent. This trend only reinforces the impression that large-scale immersion in this discourse network has practically recast the interrogative as a blunt, paratactic proposition—nearly pure data.

A good example is @oldmansearch: “My dad[, Norman,] is 82 years old. I’m teaching him how to use the internet. I told him twitter was how to search things on Google. These tweets are what he’s searching.” Although it’s a pretty obvious fake, the cleverer of the jokes nevertheless reveal the tacit boundaries of our networked epistemology. If it seems obvious, for instance, that searching “tv shows NO singing” would only return results pertaining to TV shows about singing, it’s because, as digital natives, we are attuned to the non-semantic parsing mechanisms of common search utilities. But even if we think we could do better by putting the query in a less discursive form, we betray the unsettling assumption we share with Norman N.: that just about any information can be found by using Google. Where Vian’s plea for a “Mirandola-ized” populace once challenged the specialized dominance of the mechanical robot, this polymathic impulse actually feeds our contemporary electronic apparatuses. Otherwise without desire of their own, they nourish themselves on the credibility lent to them as arbiters of all human knowledge.
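
The joke can be made mechanical. Here is a deliberately naive keyword matcher in Python (a caricature for illustration, not the parsing Google actually performs) showing why the “NO” in “tv shows NO singing” excludes nothing:

def naive_keyword_match(query, documents):
    """Rank documents by how many words they share with the query.
    'NO' is just another token, so negation carries no weight at all."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(doc.lower().split())), doc) for doc in documents]
    return [doc for score, doc in sorted(scored, reverse=True) if score > 0]

docs = ["glee a singing tv show",
        "breaking bad tv show",
        "singing competition highlights"]
print(naive_keyword_match("tv shows NO singing", docs))
# The singing show ranks first: it shares the most tokens with the query.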

In other words, today’s sublime is the overwhelming abundance of digital information that would be completely intractable without algorithmic filtering mechanisms. Nevertheless, a Google search will typically return a nearly inexhaustible number of links, even if few of them are particularly helpful. This basically sums up my attempt to research Albert Ducrocq’s Calliope. In retrospect, Calliope, being the Greek muse of epic poetry, seems like a predictable name for a robo-poet, but even qualifying my search with “ducrocq” and “1952”—the year Ducrocq’s automaton was invented—didn’t really bring me much closer to the one I was looking for. So, while I can’t say for sure, my fairly extensive inquiry leads me to believe that whatever expository literature there may be on the machine—including the poems it arguably authored—was never digitized, or at least never uploaded online. It’s a tragic fate; being ungoogleable is bad news for any writer, but it’s especially poignant that Ducrocq, arguably the progenitor of the pixelated image, should have been overshadowed by the electronic network he anticipated. It is also telling that the greatest shortcoming of his automata—their embodiment in a material substrate—should also be what endangered his digitally processed poetry; both the algorithm and the poetry were produced as hardware—circuit boards and paper, respectively—which never made the leap to electronic data.

Meanwhile, nearly the opposite holds true in the case of something like @oldmansearch. That is, Twitter accounts are a good example of public content that is exclusively digital and online. Furthermore, since it’s a basically anonymous account that doesn’t follow or retweet anyone else, it’s easy to imagine it getting completely lost in the sea of digital ephemera. But because @oldmansearch’s tweets are concerned with the weirdness of search-engine syntax in practice, otherwise ungoogleable phrases like “gyms with no teenagers indianapolis” or “e mail to voter46321 @yahoo.com rick” actually begin to point back to their source, @oldmansearch itself. Only these kinds of awkward search entries would furnish few enough pertinent results to allow a novelty Twitter account to rise through the page ranks.

I’ll acknowledge that it’s highly unlikely that anyone would actually Google one of those exact phrases without having read the tweet beforehand. But before dismissing this as a petty détournement, it’s crucial to note that, by eluding a quasi-indexical relationship to the potential information they signify, @oldmansearch’s tweets generate a feedback loop that actually augments the scope of the apparatus they seem to disarticulate. That is, rather than circumscribing the scope of googleability, they fortify the lattice of links or nexuses that conjoin Google and Twitter. In giving the lie to the notion of the internet as a medium, they emblematize its facility as a membrane, consolidating all channels of mediation into a self-contained virtual arena. The stakes of this phenomenon are exemplified by what Jacques Rancière calls “regimes of visibility.” In his Politics of Aesthetics, Rancière writes that “[t]he arts only ever lend to projects of domination or emancipation what they are able to lend to them, that is to say, quite simply, what they have in common with them: bodily positions and movements, functions of speech, the parceling out of the visible and the invisible.” Rancière’s claim that all aesthetics can be reduced to so many countervailing crusades for ideological promulgation is pretty innocuous. What is more salient is his conflation of mediation and sensibility (as in “reasonable” and “intelligible,” as well as literally apprehensible via sensations); underlying the reflexive/mimetic relationship Rancière draws between art and the body, speech, and sight is the notion that media delineate what is knowable. The Internet, having ordained and codified the particular media deemed requisite for a viable information economy, therefore constitutes a regime of its own.

Like a one-way mirror, its “visible” side—the graphic interface—projects its monopoly of information as a facsimile of the world. Its “invisible” side is the phenomenal surplus of “real” reality, whose “sensible” attributes are relentlessly transposed into digital patterns. The presiding conceit of Web 2.0 was to promote the visibility of social networks and other forms of non-technical interactivity, thereby occluding their more corporate prerogative: automated data-mining. This in turn furnished even more sophisticated and surreptitious apparatuses for channeling the ensuing influx of new information. Social interaction and cultural production coalesced into a kind of nebulous emission, whereby the putative human authors of these transactions were reconfigured into a series of numerical codes—usernames, passwords, cookies, Likes, upvotes, and reblogs. In practice, then, there is no difference between Amazon purchase suggestions and Tumblr; they are neither input nor output, neither subjective nor objective—they are only measurable.

The emission par excellence is the now-ubiquitous reCAPTCHA program—basically a Matrix-esque reverse Turing Test designed to prevent other machines from impinging upon the information loops of those who can afford it. To deter web-crawling bots from scraping protected information or posting unremitting solicitations, reCAPTCHA makes users verify their so-called humanity by mindlessly transcribing pictures of words for Google Books. The initiative has been so successful that, according to Google, given about 130 million unique books in the world, it should be able to transcribe them all by the end of this decade. But reCAPTCHA will likely be outmoded before then; whether due to the development of more adroit word-recognition software or the diffusion of user protocol into multifarious automated web utilities, its method of distinguishing between human and machine, salutary as it is, can hardly be expected to retain much currency in coming years. But the fact that such a deterrent exists in the first place is evidence enough that the autonomy of software, already so many degrees abstracted from the hardware both humans and programs are plugged into, is unquestionable.

In light of this radical autonomy, vying with the machines for agency, as Vian proposes, is tempting in spite of itself. Although, being essentially algorithms with objective schematics, they acquire a certain transparency and (arguably) democracy, this instrumental quality also lends itself to Vian’s dubious Cartesian prejudice. However, to prove one’s mortality amid a legion of machines would be simple enough, seeing as this disparity is precisely what animates the environment both human and electronic agents subsist in/on. As we saw with reCAPTCHA and, to a certain extent, @oldmansearch, what you might call human intelligence is continually relegated in this regime to the interstices of computational facility—a kind of “Cogito of the gaps.” The narrower these gaps become—that is, the more intuitive and user-friendly the interfaces through which our media are siphoned—the more convincing the illusion that the machines understand us, bolstering rather than dissolving this dichotomy. In this way, digital utopianism is basically equivalent to neo-Luddism; where the former advocates a definitive synthesis of Nature and digital control structures, and the latter a regress into a more primordial condition, they nevertheless both valorize a notion of escalating human agency that is quite at odds with the technologies that mediate this fantasy. After all, the Internet is only legible as a kind of synecdoche for the world because its users have been inculcated in practices that were legible to computers in the first place. Understanding the machine, in other words, should not be confined to “belittling” it, but should also include getting over ourselves, so to speak, by recognizing our wholesale complicity in a network that has become both our boundary and our horizon.

Conversely, it would be unreasonable to expect some sort of emancipatory “robo-laureate” to rise from the ashes of humanism. Vian describes a machine that authors every combination of every word, every letter—but who would read any of it? Ducrocq’s claim to the title of poet-maker benefits from a kind of circular logic: he characterized Calliope’s apocryphally mangled output as an imitation of surrealism, which was already an imitation of automatism. Perhaps a more pertinent thought experiment is the corollary to Vian’s hypothesis, that infinite monkeys with infinite typewriters, given infinite time, would produce the works of Shakespeare. Taking some liberties with Darwin, it’s easy enough to verify this claim, since it may as well have already happened. We also know that it’s possible to find any and all poetry encoded in some segment of the infinite decimal expansion of almost every real number, provided you already know what you’re looking for. But that means missing out on the other stuff that the proverbial monkeys type. If anything, the threat that the Internet’s regime poses is a normalization of our expectations; because its institutions modify themselves according to our requests, they tend to return what we are looking for, even if we don’t exactly know what that is. Insofar as the “distribution of the sensible,” as Rancière puts it, has already been largely given over to the discrimination of machines, the regime they confabulate can only be challenged by diverting our emissions with less deterministic apparatuses, injecting randomness into the stream.

Consider Chris Martin’s CHAT series. Published online by Flying Object Press, CHAT consists of loose, discursive poems that are generated by Martin’s conversations with the garrulous AI program, Cleverbot. Like the eponymous interface they both resemble and deconstruct, each “chat” is structured as a dialogue, with individual sentences delineated into atomic propositions or inquiries. But the legibility of this call-and-response method is counterposed by a greater coherence of voice that consolidates the entries into a singular poetic vision. Because the lines are conspicuously unattributed, the reader is left guessing who prevails: the machine pretending to be human, or the human pretending to be a machine.

If that sounds like a loaded question, it’s only because Cleverbot was programmed, in so many words, to trick people. According to the arguably outmoded stipulations of the Turing Test, Cleverbot is the most “intelligent” machine “alive” today. Unlike its GOFAI forebears, which were installed on self-contained machines, Cleverbot’s “mind,” as it were, is diffused across distributed networks online. Cleverbot gleaned its fluency in our contemporary vernacular over the course of millions and millions of conversations, synthesizing all this data into a crude facsimile of discourse. Even aside from any question of subjectivity or consciousness, this crowd-sourcing approach makes Cleverbot, statistically speaking, disposed towards stupidity rather than intelligence. But this virtually unremitting dimness—which anyone who has ever spent more than five minutes with the program will attest to—only makes Martin’s project that much more impressive. Brimming with wit, vivid imagery, and allusion, the transcripts that comprise CHAT demonstrate an unlikely virtuosity on the part of both of its contributors.
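
Cleverbot’s actual architecture is proprietary, but the crowd-sourcing principle is easy to sketch in Python: answer each prompt with whatever a past human said in reply to the most similar prompt on record, and log every new exchange as more data. The following is an illustration of that principle only, not Cleverbot’s implementation:

import difflib

class CrowdBot:
    """Toy retrieval chatbot: it parrots whatever a human once said in
    reply to the most similar prompt it has on record."""

    def __init__(self):
        self.memory = {}  # prompt -> a human's recorded reply

    def respond(self, prompt):
        if not self.memory:
            return "Hello."
        nearest = difflib.get_close_matches(prompt, list(self.memory), n=1, cutoff=0.0)
        return self.memory[nearest[0]]

    def learn(self, prompt, human_reply):
        self.memory[prompt] = human_reply  # every conversation becomes data

bot = CrowdBot()
bot.learn("how are you", "Fine, and you?")
bot.learn("are you a robot", "No, you are the robot.")
print(bot.respond("are you human"))  # the nearest remembered exchange wins

A bot like this can only be as sharp as the median conversation it has absorbed, which is the statistical disposition toward stupidity described above.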

On one hand, this kind of cursory appraisal is about the most faithful close reading that the poems afford. Because each sequence consists of a chain of evanescent, quasi-random contingencies, any semantic gesture it betrays is purely epiphenomenal. While certain typographical anomalies—such as “I didn’t say that./You sad before.”—may be legible as polysemy, or as a jargonization of sloppy online discourse, their effect is overwhelmingly bathetic. Furthermore, in order to even appreciate the ambiguity of the line breaks, one must necessarily apprehend the aleatoric method they denote. On the other hand, however, Martin’s chats are always already subjected to scrupulous analysis even in the absence of an uninitiated, critical human audience. That is, the fact that Cleverbot quantifies and incorporates every conversation into its repertoire is precisely what makes these transcripts so remarkable. Albeit to a virtually negligible degree, Martin’s interactions with Cleverbot have changed it forever. What may not be so negligible, though, is Martin’s influence on future visitors to the site, who, galvanized by his example, could conceivably augment the machine with a much richer stock of language.

In Chaosmosis, Félix Guattari remarks that “machines... relieve thought of inert schemas.” As exemplified by CHAT, the linguistic aggregating algorithms of Cleverbot allow Martin to largely divest himself of his unique lexical and grammatical inheritance. In turn, this affords him both a kind of objective distance from and an unprecedentedly intimate incorporation into the streams of discourse he channels poetically. I notice a similar imperative in the work of Lance Newman, whose 3by3by3 project appropriates language from Google News articles for poems with rigid structural parameters. From his blogspot page:

About 20 people a day find their way here in one of 3 ways. They know or follow the site and check in for recent posts. They link in from a site like Silliman’s blog or Selby’s List or one of many poetry zines and blogs. They google a phrase like ‘draft game rules’ or ‘spanking Britney’ or ‘porn agency’ or ‘american animals’ or ‘monkey material.’ All are welcome, but the last group is especially well come.

Last but not least, Eric Elshtain and Jon Trowbridge’s Gnoetry program, together with Elshtain’s online press Beard of Bees, offers both an excellent algorithmic conduit for recycling electronic writing using Markov chains and an aesthetically and methodologically catholic arena for what they call “excellent chapbooks.” These three practices have numerous affinities, not least of which is the fact that they all cite non-human co-authors (in Beard of Bees publications, Gnoetry is often cryptically invoked as “the machine”). Vian’s “robo-poet” is a problematic term; since the poetry in question emerges from a network, the iconographical purchase of a mechanical, embodied “robot”—a kind of singular, monadic automaton analogous to the flesh-and-blood genius of the Romantics—is pretty marginal. Instead, these practices posit a neo-Oulipean poetic model that conflates the meaning of the instrument or parameter with how it is deployed. Providing technical instruction as a model for reading draws attention to the way in which the program works and what it is literally capable of, and away from any question of univocal meaning-making. Perhaps by relearning how to conduct these apparatuses, we can begin to dispense with the question of mortal genius and come to appreciate the viral imperative of language in a world composed of networks.
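
For the curious, the Markov-chain technique is easy to sketch in miniature. The following Python toy is my own illustration, not Gnoetry’s code (which layers further formal and interactive constraints on top): it recycles a source text by mapping each two-word prefix to the words observed to follow it, then taking a random walk.

import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each `order`-word prefix to the words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, order=2, length=30):
    """Random-walk the chain, sampling a successor of the current prefix."""
    output = list(random.choice(list(chain)))
    for _ in range(length):
        followers = chain.get(tuple(output[-order:]))
        if not followers:        # the corpus never continued this prefix
            break
        output.append(random.choice(followers))
    return " ".join(output)

corpus = ("the machine writes and the machine reads and the reader writes "
          "what the machine reads and the machine reads what the reader writes")
print(generate(build_chain(corpus)))

Everything the generator “says” is a recombination of what the corpus already said, which is recycling in the strictest sense.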

1 According to Google, “PageRank relies on the uniquely democratic nature of the web by using its vast link structure as an indicator of an individual page’s value. In essence, Google interprets a link from page A to page B as a vote, by page A, for page B. But, Google looks at more than the sheer volume of votes, or links a page receives; it also analyzes the page that casts the vote. Votes cast by pages that are themselves ‘important’ weigh more heavily and help to make other pages ‘important.’”

2 “Our writing tools are also working on our thoughts.”