For Amusement Only, Really

Some writers have embarrassing juvenilia. I have that too, actually, but mostly I have embarrassing academia. The death of Douglas Engelbart a while ago struck me a bit, because his “mother of all demos” was a fairly sizable topic within my dissertation, which, of course, nobody will ever see because it’s a dissertation and nobody ever sees those, and revising it into a book is not really a priority given my estrangement from academia.

Accordingly, I’m serializing it here on a “whenever I’m hard up for content” schedule. I make no guarantees of its readability, quality, or entertainment value. Indulge me. It took a long period of my life and it seems like it should have some sort of a home.

I also make no guarantees that parts of it are not abandoned mid-revision, or that the citations are not in the odd markup language used by the citation manager software I used when writing it. In any case, hopefully someone will be entertained by this. It is, of course, academic writing, with all of the associated stylistic tics and occasional dryness. 

CHAPTER 1

On August 6, 1991, at 3:31 PM, Tim Berners-Lee, an employee at the European Organization for Nuclear Research (CERN), made a post to the alt.hypertext newsgroup responding to a seemingly innocuous question by Nari Kannan, who asked if there was any development towards “hypertext links enabling retrieval from multiple heterogeneous sources of information.” In his response, Berners-Lee described a project he was working on that would do just this by creating a protocol, HTTP (Hypertext Transfer Protocol), via which hypertext files could be requested and returned by servers on the then-nascent Internet, complete with links that would, if followed, use HTTP to request files from other servers (Berners-Lee). The project was called WorldWideWeb, and this Usenet post marked its public debut.
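(For readers who want the mechanism made concrete: the request-and-return cycle described above can be sketched in a few lines of modern Python. The hostname and path are placeholders of my own, not anything from Berners-Lee’s post.)

```python
# A minimal sketch of the HTTP request/response cycle: a client asks a
# server for a hypertext document, and links inside that document can be
# followed with further HTTP requests to other servers.
import http.client

conn = http.client.HTTPConnection("example.org")   # placeholder host
conn.request("GET", "/index.html")                 # ask for a hypertext file
response = conn.getresponse()                      # the server returns it
document = response.read().decode("utf-8", errors="replace")
conn.close()

print(response.status)  # e.g. 200 if the document was returned
# A link in the returned document, such as
#   <a href="http://another-server.example/page.html">…</a>
# would be followed by issuing the same kind of GET request to that server.
```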
This historical moment poses something of a problem for a media theorist. What role does or did this moment play in terms of the World Wide Web as it is today? On the one hand, from a historical perspective, it is factually the case that Tim Berners-Lee created key parts of the basic technology that underlies the World Wide Web. On the other hand, the development of the Web does not slot so easily into a sort of “great man” theory, and there is a degree to which the centrality of this single event to the history of the Web belongs more to myth than history.[1] Associated with this dualism is a significant problem: on the one hand, there is clearly a set of events in the history of any medium that are historically significant in its establishment as a functional piece of technology. On the other hand, retrospectively, those events play a fundamentally myth-making role – in this case providing a particular narrative of the creation of the Web, one that privileges the specific concerns of Berners-Lee over the vast amount of work done by both his predecessors and followers in the overall development.
To continue with the example of Berners-Lee’s Usenet post, it is clear that the historical line from this post to Google, Facebook, and Wikipedia is lengthy and features other significant developments. Regardless, it is difficult to ignore the transformative effect of this post and Berners-Lee’s subsequent one outlining the project. And it is also important to note that the transformative effect of this moment stands independent of later commercial pushes for the World Wide Web. In truth, the World Wide Web as Berners-Lee launched it was fairly unimpressive, consisting only of a handful of sites. It would not be until December of that year that the Stanford Linear Accelerator Center would set up the first web server outside of Europe. The only existing graphical web browser, also named WorldWideWeb, ran only on the NeXT computing platform, which had near-negligible consumer adoption.[2] The project was, in other words, difficult to use by anyone but a handful of very technically savvy enthusiasts – who were, after all, more or less the only sorts of people using the Internet in 1991.
Over time, however, the sell became easier. In 1993 the largest practical barrier to the World Wide Web’s influence was removed with the release of Mosaic, a graphical web browser that could run on Macs and PCs – thus opening the technology to more users.[3] Over the next few years consumer online services such as Prodigy, America Online, and CompuServe added Web functionality to their services, and, over time, the Web as it is today both came into existence and was successfully marketed to a mass audience. All of this was accompanied by enthusiastic rhetoric touting a techno-utopian future that it is easy to find fault with. But it is important to note that there really was an innovative and desirable product underneath the silliness of catchphrases like the “Information Superhighway” and the associated vision of a utopian information future, most notably framed in Bill Gates’s ephemeral best-seller The Road Ahead. The marketing may have gotten the word out, and developments like Mosaic and America Online may have made it accessible to many more users, but once people got their hands on the Web, it had a striking ability to sell itself. The marketing apparatus and cultural phenomena followed causally from Berners-Lee’s original post. This is what allows the post to serve its mythic role: at the heart of it, Berners-Lee had an idea that was compelling enough that people took the time to improve on the implementation.
On the other hand, it would be irresponsible to get too caught up in the glamour of the post and treat it as some sort of dramatic shift or epistemic break. Marshall McLuhan’s maxim that “the content of a medium is always another medium” (McLuhan 8) is instructive here. Despite its allure, Berners-Lee’s central invention wasn’t all that new. In truth, his contribution was relatively narrow. The primary Internet protocol, IPv4, had been set up by DARPA in 1981. By 1991 the Internet already had significant uses, most notably e-mail and a very robust distributed discussion system in Usenet. Berners-Lee didn’t invent the concept of hypertext – the term was coined by Ted Nelson in the 1960s, and a clear line of influence can be traced back to Vannevar Bush’s 1945 landmark essay “As We May Think.”[4] Berners-Lee’s contribution was to use existing Internet protocols to host hypertext documents and support their sharing and editing. And, notably, although both hypertext and the Internet existed before the World Wide Web, neither was anywhere close to a cultural phenomenon. The Internet was a computer network used primarily for military and academic purposes, and hypertext was a neat idea in data structuring that was familiar to computer geeks, but lacked any major implementations.
The questions, then, are these: how did the World Wide Web progress from a linking of two significant but niche media technologies to a cultural institution that rivals television and film as the most important mass media paradigms of the 20th century? What was it that people saw in the initial concept that was so promising? And, more generally, how do media advance from embryonic forms based on individual, discrete technological innovations into large-scale expressive paradigms whose influence extends over broad swaths of culture?
This is one of the central problems of media theory, before and after the digital age. There are a number of levels on which to approach such questions. The most obvious is a historical level, tracing technological and commercial developments and their implementations in widely-used forms. And indeed, numerous such histories of the Web, film, and other media have been offered. But such historical approaches have certain unsatisfying limitations. Cultural moves and marketing rhetoric can be tracked easily enough, but when one tries to look at the actual development of a medium as a viable paradigm one starts to disappear into a haze of subjective and shared mythology. In some cases one can get quite far back on some major points – Tim Berners-Lee’s original proposal for the World Wide Web, entitled simply “Information Management: A Proposal,” is preserved and he has written extensively on the circumstances of its conception. But such records are the exception, not the rule. For the most part, the particulars of individual design decisions and the contributions of others along the Web’s development render key steps in the medium’s evolution inaccessible. For instance, although his initial proposal survives, the predecessor hypertext program he wrote, called Enquire, has been lost. Similarly, much of the work of his collaborators at CERN and of other early developers of web browsers is far less well documented.
This problem has been noted by others. Lev Manovich has remarked with remorse on the absence of theorists “at the moment when the icons and buttons of multimedia were like wet paint on a just-completed painting, before they became universal conventions and thus slipped into invisibility” (Manovich 7). And it is a theoretically and critically significant problem – it leaves a hole in the narrative of how something moves from an idea to a convention. But the commercial realities of creating successful technologies require communication about how a piece of technology can move from inception to convention. This is the central problem of evangelism – how to get a basic concept to become a paradigm in the mind of someone who has not yet seen the light. The answer settled on by the savvier media evangelists has been the act of demonstration – that is, of showing a medium’s potential future usage.
Demos are not exclusive to new media. All media, in their ascendancy, go through a period of demonstration. Film’s evolution was marked by numerous demos: the Lumière shorts and The Jazz Singer are obvious cases. The main characteristic of these texts[5] is that, aside from their explicit expressive content, they are also self-referential – they actively present their act of mediation as an argument for its appeal. Some demos are interesting in their own right and for their expressive content. To choose a recent example, Super Mario Brothers is without a doubt a demo of the Nintendo Entertainment System, but it is also a classic and noteworthy video game in its own right.[6] Others are interesting purely for their demo content and have little to no expressive content – Douglas Engelbart’s 1968 demo of the oNLine System, widely known as “the mother of all demos,” has basically no expressive content in the way I mean here. Regardless, it is of tremendous historical import, as it includes (among other innovations) the first public showing of a computer mouse. Still other demos are interesting in spite of their expressive content – House of Wax is not a critically well-regarded film (although it is far from reviled), but it remains very interesting as a demo of the narrative and formal potential of 3-D film.[7]
What defines the demo is not, then, the presence or absence of significant expressive content, but rather the presence of a sort of willful reflection in its technique. The demo, I propose, is characterized by a use of technology or method that shows off and announces itself as novel. The demo is thus not limited to the early days of a piece of technology. Color film had existed for decades when The Wizard of Oz was released, and yet the moment drab black and white Kansas gives way to the lush colors of Oz remains a clear demo of the imaginative potential of color film. Similarly, the 1980s revival of 3-D film (and, indeed, the ongoing revival) repeated the act of demonstration despite the prior popularity of the medium. The historical age of a medium provides little barrier to the act of demonstration: it would not be out of line to read Pale Fire as a demo of the novel some 800 years after it was invented. What is significant, in all of these cases, is a self-referential turn where the medium displays the conditions of its operation and reception.[8] 
Despite this, however, the demo seems most important in its role in the earliest days of a technology or medium, where it serves as part of establishing the basic affordances of the invention. It is in this context that its peculiar power seems most arresting and most focused on the rhetoric of radical transformation. These early demos are the ones most concerned with the task of managing the transformation from technology to medial paradigm. The paradox of the demo – its vision of dramatic change and its simultaneous dependence on past forms – is also at its most vivid in these early exemplar texts. The work they need to do in order to show the viability of their paradigm is more dramatic, given that the paradigm is not established. But part and parcel of that is the lack of reference points for the paradigm, requiring that the work be done primarily in the context of the paradigm that the demo is ostensibly rejecting.
In this respect, there are two important observations to make about the demo. The first is that the demo is about more than the marketing of commercial objects. That is not to say that commercial transactions are not a key part of the demo – the release of The Jazz Singer, for instance, was clearly about the sale of tickets to the film and of the Vitaphone systems needed to show the film in theaters. But it was also about the marketing of a new paradigm of cinema. The message of the film is, in part, that synchronized sound is a productive vision of the future of film. This paradigm was inexorably tied to commercial concerns, but it is clear that there is markedly more to it than that. More strikingly, the paradigm seems to have existed as a goal of the film to at least some extent against the intentions of the film’s producers – Jack Warner had gone on record a year before the film’s release declaring the technology doomed. Thus to some extent the desire for use that the demo must create is not merely a product of a marketing paratext of the demo. Rather, the desire is something that is constituted by the text (and, because of the self-referential nature of the text, by the medium) itself. This is not a new observation – as McLuhan observes, “the ‘message’ of any medium or technology is the change of scale or pace or pattern that it introduces into human affairs” (McLuhan 8).
Closely related to this observation, however, is the fact that the demo elicits a desire – or a degree of desire – that it cannot possibly satisfy. The sort of formal display of the demo typical of its modern forms occurs in an uneasy relationship with the content of the demo, such that the medial paradigm often seems to speak louder than the ostensible content. This is evident in Engelbart’s demo, where his example of a document consists simply of the word “WORD” written over and over again, reducing the paradigm in a literal sense to a formal system that displays only potential.
But, of course, the system Engelbart was demoing (from which sprang many of the conventions of “direct manipulation” user interfaces) was appealing not as a formal system but as a medium through which actual, usable content could be created, manipulated, and shared. The manipulation of meaningless strings of characters does not make a killer app. What is crucial about Engelbart’s demo is the degree to which the audience-as-user is invited to project future content and applications onto the form presented in the demo. This sense of futurity is crucial to the demo, which exists not for its own sake but for the sake of potential future work. But that future work, existing largely as a fantasy at the moment of demonstration, does not and never will correspond to actual processes. In reality, the demo is generally not showing a radical shift but a subtle, if significant, tweaking of existing methods and media. The new paradigm offered by the demo is fundamentally ensconced in and defined by the paradigm from which it technologically descends.
In fact, in the demo phase of the paradigm the most genuine differences are often the hardest to discern. Tim Berners-Lee originally presented the World Wide Web as a way of usefully organizing and accessing the information generated by “complex evolving systems” (Berners-Lee “Information Management”). In practice, this does not seem to have been the primary appeal of the invention in its subsequent uses. Indeed, the best method for what Berners-Lee wanted turned out to be the wiki, a later development that grew out of the World Wide Web.[9] The Web’s primary appeal turned out to be its ability to provide ready access to, and broadcast of, a heterogeneous set of systems. Berners-Lee’s initial proposal provides for heterogeneity, but he conceives of this on an IT level – essentially, making sure that the data can be accessed by different computer systems, and therefore by a select group of expert users with specialized needs. But in the end, the Web as designed was primarily a project about, as Berners-Lee described it, “the management of general information about accelerators and experiments at CERN” (Berners-Lee). Although his proposal anticipates a future point where this problem will occur throughout the world, the proposal he maps out is strikingly different from most uses of and descriptions of the Web as it exists today.[10]
In practice, of course, the primary appeal of the World Wide Web was precisely its expansion beyond CERN to include radically heterogeneous types of information – news stories, contact information for old friends, pornographic images, reference materials, etc. – and its intersections with existing broadcast paradigms of print publishing, television, and film. This heterogeneity, however, could not possibly exist until after it had been successfully demoed and had taken root as a genuinely practicable world wide phenomenon. Early demos of the Web’s potential, therefore, had an uneasy relationship with its actual potential.[11] This future potential is always displaced in the demo, however; we can find it as a distant glint that catches the eye and draws the viewer towards newly-imagined – or as-yet-unimagined – possibilities for its use. This is the central feature of the demo – what I call its shininess.
Shininess is an allure that is at once broad-reaching and facile. It is, at its core, an allure of surfaces, focused on technical and formal properties that, like Engelbart’s new kind of document consisting only of repetitions of “WORD,” show only possibility. The surface of the shiny object reflects attempts to probe it, returning only the desires of the gaze itself. This reflectiveness tells us nothing about the actual depth of the object – a cheap chrome veneer is as reflective as the surface of the ocean. But on the other hand, the reflectiveness suggests a potential for limitless depth. That this potential may never, in fact, be achieved does not make it any less appealing; in truth, it makes it all the more so. It is not enough for a medium to be shiny – in the end, the superficial allure must give way to some kind of depth in order for a new medium to become an established paradigm. But on the other hand, it is necessary, in demonstrations of a new medium, to engage in this sort of proleptic gesture, disingenuous as it may be. In the end, although shininess is an allure of surfaces, it remains a potentially deep allure. Indeed, it is the depth of allure and the proleptic nature of the object of desire that makes the aesthetic of shininess so central to an understanding of progress. Because shininess is focused on a vague and elusive future promise, but is still evoked not by idle dreaming but by present, tangible technology, it can be viewed as the fundamental aesthetic of technological progress.
Implicit in the idea of shininess is the oft-cited observation that the future is only understood in light of the past. Accordingly, what looks like potential within something that is aesthetically shiny is, in fact, just the observation of a lack in some earlier piece of technology. In terms of media the most doctrinal version of this observation is McLuhan’s maxim that the content of a medium is always another medium – that is, that any new medium is understood only in terms of how it modifies past media. On the one hand this is self-evidently true: an emerging medium has to explain itself in terms of existing media. This process is central to understanding the demo. But it is not sufficient for understanding it.
To this end, we ought look at Jay David Bolter and Richard Grusin’s Remediation, a book that attempts to create a notion of technological progress that extends from the atavistic focus of new media technology. They coin the eponymous word “remediation” to describe this process, defining it as “the representation of one medium in another” (Bolter and Grusin 45). They distinguish this from the phenomenon of adaptation – where, for example, a book is adapted into a film. What they describe is, in their words, the quoting of a medium itself in another medium – for example, the ways in which Microsoft Encarta appropriates conventions and framings of traditional encyclopedias into digital form.
Remediation, in Bolter and Grusin’s eyes, is the product of two contrasting logics: immediacy, and hypermediacy. The former of these, immediacy, is the familiar utopian image of the transparent and unencumbered representation of reality, presented, initially in Bolter and Grusin’s formulation, in terms of virtual reality. The goal of immediacy is two-fold – first is an attempt to suppress the material trappings of mediation such that it “erases itself so that the user is no longer aware of confronting a medium” (Bolter and Grusin 24). This lack of awareness leads directly to the second aspect of immediacy, that the medium becomes equivalent or indistinguishable from real experience – as they put it, “the medium itself should disappear and leave us in the presence of the thing represented” (Bolter and Grusin 6).
Contrasting with this is hypermediacy, in which the medium becomes discernible, and the interface is actively viewed and interacted with in a fundamentally atavistic fashion. Their primary example of this is the layout of juxtaposed and overlapping windows that constitutes the basic appearance of the modern GUI, though they extend it back to historical “fascination for mirrors, windows, maps, paintings within paintings, and written and read epistles” (Bolter and Grusin 36). This explicit engagement in the materiality of the medium is taken to itself be pleasurable – a pleasure that is explicitly linked in their argument to the Modernist shift in pictorial and narrative arts. This turn to the medium’s visible presence is an atavistic pleasure inasmuch as it involves engagement with the technology as opposed to an embrace of its expressive possibility.
In Bolter and Grusin, the process of remediation is based on the interplay of these two concepts. In remediation, a medium uses a hypermediated presentation of multiple medial forms in order to show “why one medium might offer a more appropriate representation than another,” i.e., why one medium is more immediate (Bolter and Grusin 44). On the surface, this seems like it might adequately address the problem of the demo. After all, it accounts both for the atavistic tendencies of the demo and for its utopian drive towards a novel future.
But upon closer inspection, Bolter and Grusin’s explanation proves deeply unsatisfying. The heart of this problem lies in their conception of immediacy. Although they are aware of the problems of the concept, and refer to the “utterly naïve or magical conviction that the representation is the same thing as what it represents,” they do little to avoid this pitfall in their own argument. Instead, they defend themselves on the grounds that “computer graphics experts, computer users, and the vast audiences for popular film and television continue to assume that unmediated presentation is the ultimate goal of visual representation” (Bolter and Grusin 30). That is to say, while immediacy may not actually work in a seamless fashion, the fantasy of its operation is sufficiently a part of the cultural rhetoric of mediation that it must be taken seriously.
Although Bolter and Grusin are right to recognize the social prevalence of the logic of immediacy, in the end they do not compellingly distinguish between how media actually function on a technological and material level, and the terms of the social and commercial rhetoric and paratext that enframes media with an eye towards immediatist aims. Although the idea that these two are inexorably linked is central to my argument, it is important to avoid confusing social effects (i.e., the rhetoric of immediacy) with the actual workings of the medium. Bolter and Grusin further muddy this issue by taking the social phenomenon of immediacy as evidence for its actually existing in the medium when they return to the familiar myth of the early audiences of the Lumière films panicking in fear that the filmed train was a real train that threatened their lives. Bolter and Grusin explain that “the audience members knew at one level that the film of a train was not really a train, and yet they marveled at the discrepancy between what they knew and what their eyes told them… there was a sense in which they believed in the reality of the image” (Bolter and Grusin 30-31). That is, while the conviction that representation and thing are equivalent is magical and naïve, somehow this naïveté (which they consign to scare quotes shortly thereafter) is present in a secret and unexplained form within the media.[12] This claim is justified simply because people evince a desire for immediacy. Thus in place of the magical and naïve belief in the equivalence of signifier and signified, Bolter and Grusin substitute the magical and naïve belief that a fantasy of a thing is equivalent to the thing’s actually exerting influence on real media practices. Centuries of theorization and wishing for immediacy in no way amount to evidence of the presence of immediacy in the actual media subject to remediation. If anything, such a long-standing fantasy of immediacy – and the failure to achieve it – ought be taken as clear evidence that there is something deeply wrong with immediacy as a concept. Instead, however, Bolter and Grusin treat remediation as a process of reformation, where media proceed ever onwards towards immediate experience.
To be fair, Bolter and Grusin seem aware that this tendency is problematic, devoting much of their third chapter to the problems of immediacy. They spend this chapter attempting to find some theoretical apparatus that allows for the simultaneous presence and impossibility of it. They end by attempting to frame remediation in terms of the film studies debate on the male gaze, allying immediacy with the male gaze and treating hypermediacy as fundamentally deviant and multivalent. In the last paragraph, they turn to Judith Butler. As they put it, Butler argues “that heterosexuality itself depends on homosexuality for its cultural meaning. While the socially accepted practice of heterosexuality seeks to exclude other sexual practices as deviant, it is precisely this exclusion that enables heterosexuality to define itself as normal and normative.” From this, they conclude that “As the sum of all unnatural modes of representation, hypermediacy can then be used to justify the immediacy of linear perspective. It would be for this reason that hypermediacy always reemerges in every era, no matter how rigorously technologies of transparency may try to exclude it. Transparency needs hypermediacy” (Bolter and Grusin 84).
This is a staggering passage. Do Bolter and Grusin really mean to equate immediacy with the (repressive) discourse of heterosexuality, and treat hypermediacy as a (usefully oppressed) discourse through which immediacy gains its legitimacy? And furthermore, do they really intend to link this newly repressive structure of remediation to the progress narrative of an endless march towards immediacy? Even if the final and ultimate legitimization of immediacy is, as they suggest, impossible, it is a shocking claim to suggest that any sort of continual motion toward this goal is a matter of reform and progress, given the comparison.
I have no doubt that Bolter and Grusin would resist such a characterization of their argument. And, to be clear, I am not accusing them, implicitly or explicitly, of embracing the repressive discourse of heteronormativity. Rather, I am pointing to Remediation as a cautionary tale about the dangers of taking the idea of immediacy to be something that is actually experienced in any way, shape, or form by users of a given medium. This is an easy trap to fall into, given the degree to which a utopian rhetoric of immediacy is prevalent in marketing and media theory. But, as I said earlier, the existence of a desire for something cannot be taken as evidence that the thing can have actual effects on experience. In fact, it is much more productive to treat immediacy not as something that actually happens in media, but as a particular fantasy and desire constituted by the description and reception of media.
Bolter and Grusin come close to this realization when they discuss the purported realism of photography, noting that the belief in the immediacy of photography comes from “the belief in some necessary contact point between the medium and what it represents. For those who believe in the immediacy of photography, from Talbot to Bazin to Barthes, the contact point is the light that is reflected from the objects on to the film” (Bolter and Grusin 30). But in this moment, they are just re-constituting a sort of Benjaminian aura, where the film gains authenticity because of its proximity to a mythical original hand. Aura, however, cannot readily be taken as an actual thing that is in a material object – rather, it’s a quasi-mystical experience that, as Benjamin puts it, “is never entirely separated from its ritual function” (Benjamin 795-96). The major medial developments that Bolter and Grusin identify as immediate – photography and film – are the developments that Benjamin opposes to this mystical experience, saying that this ritual value is replaced with “the exhibition value of the work” (Benjamin 797).
Bolter and Grusin bypass this schism in their own account of Benjamin, suggesting that Benjamin represents a sort of hedge between immediacy and hypermediacy. On the one hand, they say, “Benjamin seems to be suggesting that mechanical reproduction is responding to and even satisfying a desire for transparent immediacy – that removing the aura makes the work of art formally less mediated and psychologically more immediate. On the other hand, Benjamin’s mechanical reproduction also seems to evoke a fascination with media” (Bolter and Grusin 74). It is a valid point inasmuch as Benjamin does not, it is true, slot straightforwardly into either of the categories of immediacy and hypermediacy. But it is, in the end, more revealing of their own blind spot regarding immediacy: they treat it both as a quasi-mystical experience and as a transparent experience – a further consequence of their failure to distinguish between fantasies of immediacy (which are, indeed, quasi-mystical) and any actual process of reception (which is not mystical except inasmuch as it pretends to engage with the fantasy).
The root problem here is that Bolter and Grusin take for granted the idea that what we think of as technological progress comes from some sort of increased clarity of representation. This is an easy mistake to make, as that claim is overtly offered as the fantasy upon which new technology is marketed. But there is, I think, an alternative approach to the idea of progress that allows for a more productive understanding of how new technology works – one that allows for the idea that new technology has genuine potential without assuming some transcendent moment of perfect communication. This approach, drawn from the work of the Situationist movement of the 1960s, views technological and medial progress not as something that allows for more perfect communication, but instead as something that allows for new and productive ways of disrupting communication.
One of the central ideas of the Situationist movement is that of the spectacle, mapped out in Guy Debord’s seminal The Society of the Spectacle. Although there is a certain thematic compatibility between the terms “shininess” and “the spectacle,” the relationship between the concepts is subtle. The spectacle, for Debord, is an extremely diffuse phenomenon – an entire worldview that encompasses “news, propaganda, advertising, entertainment” and represents “the dominant model of life” (Debord). Broadly speaking – and there is no other way to talk about the spectacle – it describes the way in which society reinforces its own status quo by endlessly valorizing images, typically images of products and commodities, without meaningful regard to its own material realities, such that it forms “a social relation between people that is mediated by images” (Debord).
A standard example is the sort of compulsory national and international celebrations involved in things like the Olympics. The 2012 Olympic Games in London were, by national consensus, terribly important things. The importance of having the Olympics and making them come off well was the justification for throwing huge amounts of money at things like the budget for Danny Boyle’s opening ceremonies, which included a section praising the United Kingdom’s National Health Service. But these massive expenditures on the opening ceremonies coincided with a government policy of austerity, complete with deep cuts to the very National Health Service it was praising on international television.
Similarly, the Olympics were used as the justification for all manner of things that would normally be considered appalling. Installing missile batteries on top of crumbling tower block apartments? Helicopter-based sniper patrols over London? Bulldozing large portions of the poorer areas of London to build white elephant stadiums? Empowering law enforcement to conduct warrantless searches of private residences out of a suspicion that they might be harboring material that violated Olympic trademarks? All of these things were not just considered acceptable but were largely unquestioned in the face of the overwhelming need to hold the Olympics, while measures not half as radical intended for poverty reduction were, as ever, deemed not even worth talking about.
The narratives surrounding the Olympics are, of course, an inspiring bunch about hard work, champions, self-sacrifice, and other good upstanding values. In practice, on the other hand, the Olympics are a massive profit engine based on corporate sponsorship and television deals. They have next to no value for the country holding them – the overwhelming majority have been money-losing. The real content of the Olympics is perhaps best summed up by the following fact: in order to relieve traffic throughout London, temporary Olympics lanes were established for games officials and their “guests,” typically large donors. The importance of allowing these donors a traffic-free stay in London was so great that even ambulances were barred from the Olympics lanes except in the direst of emergencies.
This is a classic example of the spectacle – the abstract image of “The Olympics,” with all the associated symbolic content about human excellence and struggle, is treated prima facie as an important thing. The Olympics are so important that all other political concerns must be put to the side, but the only reason that can be mustered for their importance is that they are the Olympics, and thus important. And yet they roll on, entrancing the entire planet for a few weeks and making billions of dollars for the people they are supposed to make billions of dollars for.
Similar examples abound within the concerns of media and technological progress. Perhaps the most obvious – the product launches of Apple and the almost religious fervor with which they are received – forms the bulk of the next chapter. But Apple are merely the most adept practitioners of this aspect of the spectacle – they did not invent it, and are not its only users. The development of new media technology and the universally agreed-upon importance of buying it are, in general, part and parcel of the spectacle. Debord talks explicitly about this, treating the technological development implicit in product launches as a subordinate part of the spectacle, stressing that “the spectacle is not the inevitable consequence of some supposedly natural technological development. On the contrary, the society of the spectacle is a form that chooses its own technological content. If the spectacle, considered in the limited sense of the ‘mass media’ that are its most glaring superficial manifestation, seems to be invading society in the form of a mere technical apparatus, it should be understood that this apparatus is in no way neutral and that it has been developed in accordance with the spectacle’s internal dynamics” (Debord).
For a strict reading of Debord, then, the phenomenon of shininess is nothing more than a product of the spectacle – the consumerist desire for new products that it creates in order to fuel itself. But although I think Debord is tremendously useful for understanding shininess, I do break with him here. Debord’s insistence that technological development is wholly artificial and determined by the spectacle seems to me very much wrong. It would, of course, be absurd to suggest that technological development happens without any reference to commercial concerns. But equally, the suggestion that technological development is simply determined by the commercial is unsupportable.
The way in which the Internet proved a genuinely surprising development for which large swaths of the entrenched media industry were unprepared is proof enough of this. A model in which technological development is a mere epiphenomenon of the spectacle would work along predictable lines like those often touted by science fiction writers and professional futurists: smart people would identify likely future technological developments, these developments would get seeded into popular media like science fiction, and then a few decades later the products we’ve all been dreaming of would come out. But the Internet actively broke that. Nothing like contemporary digital media was predicted by science fiction writers. Those that did predict digital media at all dramatically underestimated both its scope and its decentralized nature.
And more to the point, existing commercial interests were wholly unprepared for the practical consequences of digital media. The music industry nearly collapsed because nobody realized until it was too late that the Internet had developed sufficiently that files the size of songs could easily be downloaded and traded there. Large scale digital piracy was simply something that the larger corporate culture failed to anticipate. If Debord’s hypothesis that technological development is determined by the spectacle were correct then something like that shouldn’t happen. This is not, to be clear, a grievous failing on Debord’s part. Like most 20th century Marxists, his view on technology is based primarily on the Industrial Revolution, in which technological development did heavily favor the exact sort of capitalist development Debord suggests. That he did not foresee the way in which digital technology proved deeply problematic for global capitalism can hardly be seen as a significant failure when nobody else saw it either.
Nevertheless, it does require some revisions to Debord’s sense of technological development. As I said, it is clearly folly to completely separate technological development from the workings of the spectacle. Even so, it clearly has some independent characteristics. The unexpected development of digital music piracy is, in the end, explained by the fact that people were very wrong about how far the development of computer technology could go. There is a truism in the world of computer design called Moore’s Law, which pegs the number of transistors that can be fit on a circuit as doubling more or less every two years. In practical terms, this means that there’s a fairly steady rate at which computers get smaller, cheaper, and more powerful. The thing that is not often mentioned about Moore’s Law is that it was formulated in 1965, and that Gordon E. Moore, when proposing it, expected it to last for about another decade. In truth it has still not given up the ghost, meaning that computers got wildly more efficient and cheap than anyone expected that they would. As a result computer technology became a consumer good in a way that few people foresaw, and some networks created for defense and educational purposes, along with a hobbyist project by a programmer at CERN, thus turned out to have implications well beyond what anyone envisioned. The underlying reason for this – that the point at which computer technology would stop quickly developing was further out than people expected – is not something that can be attributed to the spectacle even in the most sweeping of accounts.
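To put the scale of that miscalculation in rough numbers – a back-of-the-envelope sketch of my own, assuming a flat two-year doubling from 1965 and treating circa 2012 as the endpoint purely for illustration:

```python
# Rough arithmetic for the gap between Moore's 1965 expectation and what
# actually happened, assuming a steady doubling every two years.
def growth_factor(start_year, end_year, doubling_period=2.0):
    """Multiplicative increase in transistor counts between two years."""
    return 2 ** ((end_year - start_year) / doubling_period)

print(growth_factor(1965, 1975))  # the extra decade Moore expected: ~32x
print(growth_factor(1965, 2012))  # circa when this was written: ~12,000,000x
```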
It is more accurate, then, to treat technological development not as an epiphenomenon of the spectacle but as an intersection between the spectacle’s self-sustaining processes and the essentially arbitrary vagaries of the physical universe that determine, for instance, how easy it is to increase the density of transistors on an integrated circuit. The technological development that takes place is still driven by the spectacle and its concerns, but there is a gap, often trivially small, but occasionally substantial, between what the spectacle wants and what technological progress actually delivers.
This gap is the essential content of shininess, because the shiny piece of technology does represent a partial intrusion of something from outside the spectacle. Inevitably, given time, the spectacle will manage to integrate this intrusion into itself and proceed as normal. But in the initial moment in which the demo takes place there remains a gap in which the spectacle has been breached – a momentary instance in which it is on the defensive. This is what accounts for the uncanny aspect of shininess’s allure: the way in which it points fleetingly at an exit from the system. It suggests the presence of something previously inexpressible. Even if it does this in an exclusively atavistic way, by highlighting a previously invisible limit of the technology it proposes to supplant, the glint of shininess is still that of an external factor forcing the spectacle to not only confront that which it renders unspeakable, but to praise it.
Within Debord’s Situationist approach the most obvious term to reach for is détournement. Détournement is a form of “extremist innovation” (Debord and Wolman 14) in which familiar objects and conventions are taken out of context and juxtaposed in ways that destabilize the norms of their original context. The approach is at once parodic and serious – a barbed joke that breaks through the stasis of the spectacle in order to more permanently destabilize it. As Debord and Wolman rather charmingly put it, “life can never be too disorienting” (Debord and Wolman 20). The purpose of détournement, then, is not to make an affirmative statement of the way that things should be, but is instead to aggressively resist the way that things are. As Debord puts it in Society of the Spectacle, “détournement is the flexible language of anti-ideology. It appears in communication that knows it cannot claim to embody any definitive certainty… [it] has grounded its cause on nothing but its own truth as present critique.” This parallels neatly the fleeting nature of the demo’s superficially limitless potential. The appeal of shininess is precisely that it offers nothing other than what is inexpressible in the previous medial status quo.
But this in and of itself remains cynical. That medial progress offers a moment of détournement at the dawn of a new piece of technology is worth little as long as that détournement is promptly reintegrated into the consumerist spectacle. But rare as it might be for the demo to be successfully used that way, it is not only a moment in which the spectacle détourns itself, but one that offers the potential for more productive détournements. The gap opened in the discourse when what was previously unspeakable becomes speakable (and, in turn, when what was previously a commonplace feature of the culture becomes archaic and outdated) is exploitable. That shininess is on balance primarily a tool of the spectacle does not mean that its allure is wholly unwarranted.


[1] Berners-Lee himself seems aware of this fact, albeit strangely. His autobiographical account of the events, Weaving the Web, is on the one hand written with the implicit assumption that he was the dominant figure in the creation of the World Wide Web. On the other hand, Berners-Lee’s account is full of lengthy descriptions of other people’s contributions and disclaimers of the magnitude of his own work.
[2] The NeXT platform was uniquely suited to what Berners-Lee was doing, however. Berners-Lee notes the ease with which the development tools of the NeXT let him work, and also notes the existence of “a spare thirty-two-bit piece of memory, which the developers of NeXT had graciously left open for future use by tinkerers like me” (Berners-Lee and Fischetti 28). Of course, Berners-Lee also suggests that some of the motivation for getting a NeXT in the first place was the desire for what was, at the time, a sleek, shiny new toy.
[3] Mosaic was not the first web browser to follow WorldWideWeb, nor even the first graphical one. An important intermediate step came when a text browser was implemented on CERN servers that could be accessed from any computer via Telnet. Beyond that, a number of browsers sprang up before Mosaic, all of which failed to gain mass acceptance for various reasons: Erwise, written at Helsinki University of Technology; ViolaWWW, written by Pei Wei; Tony Johnson’s Midas; and Samba for the Mac (Berners-Lee and Fischetti 55-65). Of the browsers from this era, the only one to continue to have any significant usage was Lynx, adapted from a hypertext browser at the University of Kansas. Lynx, however, was a text-based browser, and survives today primarily because it can be used on extremely low-power computers, and, more significantly, because it is better for screen readers used by sightless users. Mosaic was the first browser to succeed in being multi-platform, easy to install, and graphical. Mosaic was eventually adapted into Netscape Navigator, which in turn became Firefox, which as of 2012 has a 22.8% share of browser use, behind only Internet Explorer and Chrome.
[4] Berners-Lee was, by his own admission, not entirely aware of this history – in particular he was unaware of Vannevar Bush when he made his initial design of the World Wide Web.
[5] The question of what to use as a general term for “expressive works in any given medium” is a vexed one. The concept is important enough to want a single term, but no clear choice presents itself. Rather than create an unsatisfying neologism, I have opted to follow the trend within literary studies of using the word “text” to refer to an increasingly broad category of objects, including ones that have few (or no) textual components.
[6] Indeed, the game is so culturally significant that nostalgia for it and the era of gaming it represents is, as we will see in the fifth chapter, a fundamental part of Nintendo’s marketing of the Wii 20 years later.
[7] As I will discuss in detail in my fourth chapter, which focuses on 3-D film, this can be said of almost all 3-D films, as the medium is ultimately suitable only for this demonstration phase.
[8] Needless to say, I am not suggesting that all self-referential texts are demos. What is necessary is not merely self-referentiality, but a particularly formalist self-referentiality, where what is referred to is not merely the text, but the form of expression, which is foregrounded and made conspicuous in its potentiality.
[9] It is interesting to note, however, that the wiki shares more than a slight resemblance to the original vision of the Web that Berners-Lee proposed. In his view, the browser was intended to be integrated with an editor so that reading and writing were twinned activities. But this feature ended up being basically distilled out of the Web by others. Berners-Lee notes that he “was amazed by this near universal disdain for creating an editor… most were more excited about putting fancy display features into the browsers – multimedia, different colors and fonts – which took much less work and created much more buzz among users” (Berners-Lee and Fischetti 70-71). This creates an interesting situation which is illustrative of the strange ways that media develop – the technology that Berners-Lee was endeavoring to create, it seems, required the technology he actually created instead to take hold before it could develop.
[10] This problem of the disjunction between manifestos and actual medial invention will be a major topic of the third chapter on Scott McCloud’s proposal of the infinite canvas as the future of comics.
[11] To be fair, Berners-Lee, in Weaving the Web, describes the efforts he made to navigate between his “larger vision of creating a global system” and the need to have a “good, visible reason to be doing this at CERN” (Berners-Lee and Fischetti 31). That said, Weaving the Web was written in 1999, well after much of the Web’s future trajectory had become clear. In any case, the documents available from the Web’s creation make it clear that the technology was designed primarily with CERN’s internal needs at heart, and that its outside success was a secondary concern.
[12] The myth of panic at the Lumière train short is an unusually important instance of the mythology of the demo, and one that I will treat in more detail in chapter four.