Thanks to all who have been making the Kickstarter for Neoreaction a Basilisk such a success – at $4469 right now, and happily plugging along towards “Theses on Trump.” Here’s a third excerpt from the main book, as we digress into another discussion of red pills and the notion of “pwning” a person, as in Moldbug’s multi-part essay “How Dawkins Got Pwned” before, inevitably, arriving at Hannibal Lecter. As always, if you enjoy, please consider backing the Kickstarter here.
Moldbug unpacks this in terms of Dawkins’ own famed biological metaphor for ideas as “memes,” focusing on the idea of a parasitic memeplex, which is to say, a hostile and destructive cluster of ideas. Being Moldbug, he approaches this in preposterously Manichean terms, proclaiming that “when we see two populations of memes in conflict, we know both cannot be healthy, because a healthy meme is true by definition and the truth cannot conflict with itself.” Which, hahaha no. I mean, you don’t even need to plunge into postmodern notions of multiple and variant truths to recognize that, when we’re working in any sort of immunology, biological or memetic, the notion of “healthy” and “unhealthy” is not a straightforward binary nor a situation where something is reliably one or the other. Readers interested in theorizing this in detail might try introducing words like “chemotherapy” into the discussion.
But to this end, Moldbug contemplates the idea of a “generic parasitic memeplex” and how one might come up with a generalized immune response to it. The goal here is, as Moldbug puts it, “a formula for total world domination,” which is to say, spoilers, he’s just looking to reverse engineer the Cathedral according to a more or less arbitrarily imported heuristic of contagion, morbidity, and persistence. This results in most of the mistakes you’d expect, which is to say that he identifies seemingly random parts of the Protestant tradition and then comes up with reasons why they’re especially clever and vicious adaptations suitable for maximum pwnage. The high point of this is when he decides that asceticism offers “a clear adaptive advantage” because the only people who can be ascetics are the rich and powerful, so it serves as a status marker. (Yes, you read that right – Moldbug suggests that asceticism is a fast track to popularity.)
But there’s a larger problem, which is that Moldbug is, broadly speaking, using the engineering technology of the red pill to try to build the Cathedral. Pwnage is clearly a red pill sort of concept. Indeed, they’re both firmly from the same technophilic cyberpunk aesthetic that is, at the end of the day, the fundamental connection among Moldbug, Yudkowsky, and Land. And that aesthetic is very much based around a sort of individual targeting. The word we’re circling around, obviously, is “hacking,” and as oversignified as the word is, it’s not actually a bad image for what we’re talking about. The red pill, pwnage, and for that matter the horror reading, monstrous offspring, and Satanic inversions all follow the same basic pattern – a sort of conceptual infiltration of someone’s thought in which their own methods and systems are used against them. Done as a philosophical move – whether on the conceptual level of Deleuze’s monstrous offspring or Thacker’s horror reading or the individual level of Dawkins’ supposed pwnage or Land’s genuine break – it requires the creation of a rhetorical construct to engage in dialogue with the target. The hacker is as fine a model as Satan for this, as is the virologist imagined by Moldbug in his “generic parasitic memeplex” engineering.
The problem is simple: this cannot possibly be how the Cathedral works. It’s not spread by this sort of intimate seduction. And this is evident in the sort of ridiculous parameters Moldbug is setting out for it. Contagion, for instance, takes place, in Moldbug’s mind, both through parental and educational transmission (which is to say as an ideology drummed into people from birth in the same way that “God chose the King so you cannot question him” was) and through social transmission, which he defines as “informal transmission among adults, following existing social networks,” which, if you guessed that his example of how not to do that would be “Nazis,” good work. So he proclaims that “our parasite should be intellectually fashionable. All the cool people in town should want to get infected.” This is stupid in ways so fundamental that it is almost easy to miss amidst the idiosyncratic detail of Moldbug’s approach: why the fuck would the Cathedral still want to be transmitting among adults according to notions of coolness, which is after all pretty fundamentally opposed to the notion of educational transmission? The phrase is not “just as cool as school.” What Moldbug clearly wants in engineering the Cathedral is for social transmission to be a matter of persistence, but because he’s approaching it from a model of pwnage he ends up fundamentally building it wrong. Or, to put it another way, what happens to Dawkins isn’t pwnage. And while it is still worth understanding, this isn’t quite the context we care about doing it in.
Instead, let’s pick at this idea of pwnage through conversation – what we might describe as textual hacking. Framed in those terms, two important examples present themselves. The first is Eliezer Yudkowsky, who grappled briefly and intriguingly with something along these lines in the form of the AI-Box Experiment. Like Roko’s Basilisk, this is an element of Yudkowsky’s thought that is notable for attracting more attention from people who aren’t Yudkowsky than it did from him. Unlike the Basilisk, however, it is not something that forms much of a problem for Yudkowsky’s thought. Indeed, it’s why intelligent people with actual achievements have taken Yudkowsky seriously.
Like any self-respecting bit of Yudkowsky, it exists to solve a deeply idiosyncratic problem. Specifically, it addresses a theoretical argument about whether a particular type of AI research that’s not actually possible right now is safe. The problem is simple enough: obviously we want to build a superintelligent AI to run the world. But that could be dangerous – what if it’s an unfriendly AI that wants to take over the world like in The Matrix or Terminator 3 or something? So we build the AI in a secure and isolated computer that can’t start taking over random systems or anything – a box, if you will. The question is this: is that safe? Yudkowsky argues that it is not, because a superintelligent AI would be able to talk its way out of the box. Or, to offer the hypothesis in his precise formulation, “I think a transhuman can take over a human mind through a text-only terminal.” And he proposes the AI Box experiment as a means of demonstrating that this is true. In it, two people make a monetary bet and then roleplay out a dialogue between a boxed AI and a person given the authority to decide whether to let it out or not, in which the AI tries to talk its way out of the box. And it is important to stress that it is roleplayed: valid exchanges include things like “give me a cure for cancer and I’ll let you out.” “OK here.” “You are now free.”
Depending on your perspective, Yudkowsky either completely misunderstands why this is interesting or understands it too well for his own good. The answer is not, obviously, because this is a pressing issue that requires settling. Rather, it is because the setup is that of a really good science fiction story, and indeed of several classics. What is interesting is less the rules than the content of the debate itself – how the AI presents its case and what strategies it uses to try to talk its way out, and what the human does and doesn’t consider valid evidence of the AI’s good nature. Much less interesting than “who will win and what does that say about AI research” is the simple drama of it – one imagines any actual rendition of the experiment would be fascinating to read. Yudkowsky, however, treated this as an actively useful game that helped demonstrate the correctness of his views. Indeed, he played the game five times under officially codified rules, winning twice against people from within his community, then winning one out of three times against people who he suspected were not actually convinced his proposition was wrong but were “just curious” and willing to offer thousands of dollars as stakes before quitting the game because, as he put it, “I didn’t like the person I turned into when I started to lose.”
Like I said, it’s the sort of thing you really want to read the transcripts of, especially of the three he won. So it’s fascinating that Yudkowsky has refused to release said transcripts, saying that people “learn to respect the unknown unknowns.” Which is to say that he thinks what’s most important about the game is what it reveals for strategies in AI research, as opposed to what it reveals about people. The result is something that mostly just reveals things about Eliezer Yudkowsky like “he’s crap at recognizing his own best ideas.”
But for all of that, it’s clear that Yudkowsky has a healthy respect for the idea that it’s possible to pwn a human consciousness through words alone, and a regard for the artistry and beauty involved in the attempt. Indeed, Yudkowsky has credited the idea (contrary to those who suggested he nicked it from Terminator 3) to the scene in Silence of the Lambs in which Hannibal Lecter convinces a fellow inmate to commit suicide simply by talking to him from another cell – a magnificent instance of textual hacking, albeit one that, having been previously unmentioned, cannot serve as our second example. Although now that we’ve brought it up…
It’s not that Silence of the Lambs itself is particularly interesting or relevant. It’s actually the only part of Thomas Harris’s cycle of novels to be absent from Bryan Fuller’s television adaptation, which is a murder-drenched dramatization of the entire literary style we’ve demonstrated thus far and the bit of plumage we’re currently diving off the path towards. Its basic unit of interaction is a psychoanalytic dialogue: an exchange that never quite settles straightforwardly into a pattern of interrogation or debate or mutual exploration or parallel monologue, but instead twists and winds through all four. Consider this snippet, which interpolates a famous monologue from Red Dragon:
Hannibal: Killing must feel good to God, too. He does it all the time, and are we not created in his image?
Will Graham: Depends on who you ask.
Hannibal: God’s terrific. He dropped a church roof on 34 of his worshipers last Wednesday night in Texas, while they sang a hymn.
Will: Did God feel good about that?
Hannibal: He felt powerful.
This exchange is most obviously interesting in how it navigates a relationship between abstract and material authority. God is simultaneously cast as a genuinely sovereign authority – a Platonic Form that man merely echoes – and as a brutal dictator capriciously executing people to assert his power. It comes wickedly close to satirizing and deconstructing the whole of Moldbug, and undoubtedly does so to Milton’s God. The show does this often, worrying the bone of authority and creation, refracting it over and over again through its Chesapeake Gothic hall of mirrors. Consider, for instance, this revisitation of the exchange two seasons later, this time between Will and an imagined interlocutor:
Abigail Hobbs: Do you believe in God?
Will: What I believe is closer to science fiction than anything in the Bible.
Abigail: We all know it, but nobody ever says that G-dash-d won’t do a G-dash-d-damn thing to answer anybody’s prayers.
Will: God can’t save any of us because it’s… inelegant. Elegance is more important than suffering. That’s his design.
Abigail: Are you talking about God or Hannibal?
Will: Hannibal’s not God. He wouldn’t have any fun being God. Defying God, that’s his idea of a good time. There’s nothing he’d love more than to see this roof collapse mid-Mass, choirs singing… he would just love it, and he thinks God would love it, too.
In the first exchange authority and power are at loggerheads; God’s authority as creator seems necessarily legitimate, and yet he kills to feel powerful. Notably, he does not even kill for power, but rather for the feeling of power, this being strangely decoupled from its actual exertion. The second exchange, however, removes power from the equation. The suffering that exists is not there to make God feel good, but is an irrelevant byproduct of an elegant design. The use of “design” is, within Hannibal, a catchphrase; Will utters it at the climactic moments of his psychological murder reconstructions, marking the moment when he has achieved understanding of the mind whose creation he observes. Notably, this means that Will is profiling God here, a fact that complicates any effort to read this exchange as a redemptive revision of the earlier one.
But the word “design” resonates in other ways for our purposes, implying creation and engineering. If the first exchange seemed to satirize Moldbug, this one seems even more so. It is, after all, the great one-liner critique of Mencius Moldbug: he’s exactly what you’d expect to happen if you asked a software engineer to redesign political philosophy. And crucially, Moldbug basically agrees with it – he just also genuinely believes that the Silicon Valley “disruptor” crowd would be capable of running the world with no problems if only people would let them. Which in turn sheds light on the other part of the second exchange, Will’s subsequent assessment of Hannibal’s desire to defy God. Obviously this casts Hannibal in the role of Milton’s Satan, and we’ll pull that thread in a moment, but consider first the suggestion that God would enjoy Hannibal’s defiance. This is an accusation of perversity, of course, and one Moldbug at least would furiously reject.