Rhetoric and the truth claims of documentaries

Rhetoric is the art of organizing words in such a way that they have an effect. At times, this effect can be to achieve a certain outcome, i.e. to persuade someone of a course of action. At other times, it can be to make someone come around to a point of view. Most of the time, it is merely to achieve clarity where there previously was none. Clarity is an underappreciated effect to achieve, and more often than not simply being able to boil down a message to its most basic and easily understood form can rescue a situation from much untold strife. Getting everyone on the same page, even if only so they can disagree more constructively, is a superpower in disguise.

The organizing of words is not confined to writing or speeches, however. It does not even begin there. It begins in one’s head, as a method of making sense of things. It is a function of paying attention to how things fit together, and how they can be made to relate to one another. Everything is related to everything else, and every connexion is a potential line of argument. Most relations are tenuous or non-intuitive, and can be safely put aside in search of better ones. Eventually, after having looked at enough possible connexions and discarded enough bad ones, what remains is a few potential ways of organizing the words such that they can be understood. What this process yields is a lot of bad ways to put something, and a few good ones. The key is to select something from the good category.

This organization of thought and narrowing down of possible alternatives to a few good ones is often pitted against a natural, intuitive way of figuring out what to say. Rhetoric is the art of lying, it is often said, and a person’s true thoughts & feelings are expressed spontaneously, without prompt or preamble. Indeed, it is maintained, the more one thinks about something the less truthful it becomes. At length, this line of argument ends up saying that the process of choosing one’s words cannot but end in dishonesty; whatever truth was to be found in spontaneity is lost once the moment has passed, leaving only the lingering specter of doubt. It is a line of thinking as silly as it is widespread.

The thing of it is, it is seldom clear what the most effective, truthful or proper way of saying something is. If someone asks for directions to a place, is it best to describe the route by means of street names, landmarks or a sequence of left/right turns? All three options lead to the destination, and it is unclear whether one is more truthful than the others. The criterion of spontaneous honesty does not give us any guidance as to which to choose, and would only serve to induce anxiety in those not able to make a choice right there on the spot. If we allow ourselves to ponder whether the asker is familiar with the local street names, can remember an abstract list of left/right turns, or would be best served by the most concrete visual input the city has to offer – then we are in a better position to pick the most appropriate option.

From this, we can gather that rhetoric is the art of figuring out what is useful to say. What flows from the heart is not always the most informative of thoughts, and saying it reflexively might at times lead to confusion. By organizing one’s thoughts in relation to the overall situation, it becomes possible to be a more constructive participant in the conversation. It streamlines the process, as it were.

This brings us to documentaries, which – much like spontaneous honesty – are assumed to be bearers of unmediated truth. If something is presented in the form of a documentary, then it is generally assumed to be the true story. This follows both from convention, and from the fact that many generations of filmmaking have established the genre as an effective vehicle for conveying information. Form and function go together, leading to a very persuasive mode of presentation.

However, just as in the above example of asking for directions, there are always multiple ways of arriving at a certain truth. Documentaries do not spring fully formed out of nothing; they are structured around an organizing principle which puts things in relation to one another. The organizing principle was chosen at some point during production, and then executed in the form of the finished product. Whatever the choice, there were always other ways of presenting the information that were not chosen. And, conversely, had one of those other options been picked, the first would have been left aside. Neither of them is false, but neither is the whole truth. They are, both and equally, choices. (Here, Booth would make an amused aside about the need for an infinite number of documentaries, to cover all bases.)

This places us in a tricky spot when it comes to evaluating the truth claims of documentaries. Or, rather, it makes the truth somewhat orthogonal to the choice of organizing principle. Any given choice is bound to have advantages over another, with corresponding disadvantages. The difference is one of emphasis rather than of veracity, which is an important difference, but – and this is the point of this text – we cannot arrive at a constructive criticism of the significance of this difference if we talk about it in terms of true and/or false. Truth is not the issue; the mode of presentation is.

The purpose of rhetoric is to organize words so as to arrive at clarity. The same goes, mutatis mutandis, for documentaries. As a critical reader and/or viewer, your task is to evaluate the organizing principle to see where it places its emphasis. Should you find that there is an alternate organizing principle, then you are richer for knowing it, and can proceed to compare & contrast until interesting nuggets of insight emerge. These nuggets will, no doubt, be useful in your upcoming attempts to find something useful to say.

Byung-Chul Han: the burnout society

The translation of Byung-Chul Han’s small book the Burnout Society is unfortunate. For one, it immediately places the work in the discourse of the burnout, which connotes all sorts of self-help, positive-thinking, bear-the-burden-alone nonsense that is ever so perpendicular to what the book is about. For another, it is wrong. The original title, Müdigkeitsgesellschaft, is more accurately translated as “the tiredness society” or “society of the tired”. The actual medical condition of being burned out is an aspect of this tiredness, but it is not the main focus of the book. Which, unfortunately, means that the point of the closing reflection on what it means to be tired together tends to get lost on English readers. It comes as a surprise, rather than as a fitting conclusion.

As these words are written, the importance of quarantining oneself against the Covid-19 virus is entering into public consciousness. This is an interesting point in time, since many latent patterns of thought are being directed at and applied to the upcoming quarantine situation. The common sense interpretation of how to deal with the new situation is emerging, and thus we get an unusually clear picture of the common sense interpretation of how to deal with the old situation. For a brief moment in time, we can see the change in action, and contrast what’s new with what’s old.

One common reaction to the quarantine is to say “gosh, I’m gonna get so much done! This sure is going to be a very productive time!”. A quarantine is seen as a temporary reprieve from the restraints that prevent the full forces of creative output from being unleashed into the world, and thus as a potential time of unparalleled getting shit done. Unread books, hobby projects, writing ideas, gardening feats, culinary experiments – whatever it is, now is the time for getting it done. The pent-up creative energy will flow with wild abandon, ushering in a new era of unprecedented personal productivity.

Byung-Chul Han contends that the last few decades have seen a shift from what he calls negative production to positive production. The former is a process of standardization and error elimination, whose goal at all times is to remove flaws in order to maximize efficiency. These flaws can either be technical, in the sense that the productive machine is not optimally configured, or social, in the sense that abnormal elements of society have to be removed or repressed so that the eternal productivity can continue without interruptions. Deviant forms of life, queer sexualities or non-conformist ideologies are examples of such abnormalities. The goal of negative production is to make each part of the production process standardized and interchangeable, including the human components. A worker is a worker, and workers do as they are told.

Positive production, in contrast, relies not on standardized units of production doing what they are told. Instead, these very same units have internalized the imperative to be productive to such an extent that they tell themselves what to do. The specifics vary from person to person, but the imperative to Produce remains a constant. At work, this expresses itself as an ever-increasing effort to attain maximum productivity, to be the utmost exemplar of whatever work is performed, in all its aspects. At home, it expresses itself as a nagging sense that one should do something productive. Academics have a name for this nagging sensation: “I should be writing”. The same goes for any other productive endeavor: I should be reading, painting, remixing, organizing, meditating. Whatever the activity, the same nagging sensation arises that it should indeed be done.

The upcoming Covid-19 quarantine, a period of time within which a person is specifically obligated to stay at home, is a good way to see the two mentalities of production in action. For those working under the paradigm of negative production, this would be a period to unwind – to be themselves, to sleep in, to not give a darn about the Man. Those laboring under the new paradigm, however, have to get themselves ready to (as paradoxical as it might seem) get to work. The imperative is still there, and it is even stronger for there not being anything else to do. The quarantine is a great opportunity and an even greater obligation.

Based on this, we should be able to predict that a substantial number of people will end up more tired at the end of the quarantine than at the beginning of it. The amount of actual work accomplished during the quarantine is beside the point; the tiredness is not a result of sustained effort, but of constantly feeling that the Work should be performed. Positive production allows no time for rest, only for more of itself, more production. Instead of the quarantine being two weeks of rest, relaxation and recovery, it will be two weeks of constant anxiety over not being sufficiently productive during our allotted time. At the end of it, tiredness and exhaustion will be the words of the day.

The point of this book is not to argue that we should go back to negative forms of production. This is a book of philosophy, after all, which means the point is to get us to ask ourselves if this really is how we want to spend our lives (in and out of quarantine). Like all good works of philosophy, it does not lead to a clear-cut, easy-to-implement answer, and leaves readers with more questions than they previously had. I say we acknowledge our tiredness, and then proceed to be as unproductive as we need to be.

Frye: Anatomy of criticism

Northrop Frye’s Anatomy of Criticism was published in 1957. This is not only a bibliographically necessary nugget of information for when you want to compile a list of works cited, but also an important touchstone when reading the book. A great many things have happened since 1957, and it is interesting to waltz through the realms of criticism as they were back then with the knowledge of how things turned out later. Armed with the knowledge of the future, returning to this writ is akin to a tour of what might have been. The present back then was contingent in a way that our present is not.

To take an example: at length, Frye gestures to the emerging trend of replacing criticism proper with the act of producing ranked lists, and the inherent methodological problems of such an approach. For one, merely ranking things is not a critical act; it is merely the application of a more or less explicitly defined set of criteria to a limited set of objects. Going through the motions of such a procedure does not increase our understanding of the works in question, or even of why they were included in the ranking process to begin with. With the modern phenomenon of listicles firmly in mind, we can look back on these musings and nod in extended agreement. Not to mention the trend of modern mission statements to abandon grammatical structure in favor of disjointed yet prominently displayed keywords, where even the pretense of an overarching organizational principle has been abstracted out of the picture.

To take another example: while delineating the different roles of critics and authors, Frye makes a joking aside that Dante, who proclaimed that a certain poem was the best he had ever written, was in so doing an indifferent critic of Dante, and that others had gone on to write better critiques of said poem. Little did Frye know that a mere decade later, the whole death-of-the-author hubbub would flare up in earnest when Barthes kicked the hornets’ nest. And then kept it going for quite a spell.

A funny third example is how Frye points out that there were no standard introductory books on criticism. He then goes on to speculate about what the first page of such a potential book would say, pondering that perhaps it might modestly begin with the question “what is literature?”. Then, he ventures that the second page might expand on this question, in terms of verse and rhythm, and that subsequent chapters would then deal with the complexities of genre and other nebulous yet necessary literary terms. Terms which, although recognizable in action and principle, seem strangely resistant to being theoretically explicated.

It is funny in the sense that there are now several such books, which do not necessarily agree with each other on the finer points of what criticism actually is. It is also funny in the additional sense that a reader can go to their local university library, scour the shelves for every book that looks vaguely introductory to the enterprise of literary criticism, and empirically investigate how Frye’s prediction turned out. So I went and did just that. Here follows, in the order in which they stacked up next to me after I had done the aforementioned scouring, the results.

First out is Persson (2007), titled Varför läsa litteratur? (Why read literature?). An introductory title if there ever was one. The book begins by mentioning that merely asking this question is seen as blasphemous in certain contexts, almost taboo. Persson then goes on to outline how, in practice, this very question has had a variety of responses throughout the ages, relating to the building of such things as character, nationhood and a (well-read) democratic citizenry. He then gestures towards the contemporary trend within organizations to demand a justification (a stronger word than an explanation) for everything that happens within them. Thus, being prepared with answers that have slightly more rhetorical and conceptual bite than “it’s a traditional value held throughout literally all of recorded human history (more often than not constituting said history)” is a modern virtue.

Next up is Barry (2009), with Beginning Theory. It opens with the observation that the “moment of theory” has passed, and that we now find ourselves in the “hour of theory” – the enthusiastic fervor with which theory was introduced has been replaced with the slightly less enthusiastic aftermath in which we can look back upon what has gone before, and calmly set to work organizing and cataloging it. Theory, literary theory included, has become a day-to-day business, and thus it needs standardized books like this one so that everyone in said business is, as far as such things are possible, on the same page.

Observant readers will note that “criticism” seems to have been replaced with “theory”. Just theory in general, with “literary” added on as a reminder that books are somehow involved. Culler (1997) picks up this theme on the first page of Literary Theory, where he differentiates between capital-T Theory in general and literary theory in particular, and then goes on to discuss how the two have been so thoroughly intertwined over the last decades that keeping them separate is a fool’s errand. Non-literary theory (broadly defined) has had an impact on how literature is written, which has then affected how criticism of said literature has taken form, which in turn has influenced literary theory, and to fully understand it all a modern reader has to know a little about every step of this series of events to keep up. Basically, a critic also needs to be a theorist in order to understand the books they claim to critique.

Franzén (2015), in Grundbok i litteraturvetenskap (Introduction to literary studies), takes a slightly more analytic approach, and defines theory in the scientific sense of a comprehensive set of ideas relating to something; the ‘something’, in this case, is literature in its many forms. Franzén notes that there has been a move from writing about literature in a normative sense – i.e. how it should be – to writing about it in a descriptive sense – how it actually does what it does. The book then proceeds to outline a number of themes in this straightforward manner.

Eagleton (1996) opens Literary Theory with the striking formulation “[i]f there is such a thing as literary theory, then it would seem obvious that there is something called literature which it is the theory of”. After this opening salvo, Eagleton takes a closer look at what the category of “literature” includes (e.g. the Iliad) and what it, more importantly, does not include (comic books), and how this selective applicability affects the theory which claims to be about those things included. What is literature indeed.

Peck and Coyle (2002) introduce Literary Terms and Criticism with the assertion that “literary criticism is primarily concerned with discussing individual works of literature”. The authors then immediately clarify that aspects slightly less particular to an individual work, such as its genre or its historical context, also play into the process of criticism. The tension between books always being singular, unique and one of a kind, yet also very possible to group together with other similar monads, is as yet one of the unresolved questions of theory, literary or otherwise.

Next up is Norton’s monumental tome the Norton Anthology of Theory and Criticism (2010), which features 2758 large pages of small print, covering just about every aspect of theory and/or criticism there is. It starts off by proclaiming that there are those who claim to be anti-theory, who hold the position that all this circumlocution is a mere distraction from the real work of getting it done. Slyly, the anthology then points out that this in itself is a theoretical position, whose assumptions can be critically examined and thus better understood. Not said, but heavily implied, is that the following thousands of pages might be of some use in this critical endeavor.

Finally – it was a big stack, dear reader – Bennett and Royle (2009) begin their An Introduction to Literature, Criticism and Theory by posing the rhetorical question: when will we have begun? From this provocation, the authors then set out to problematize the beginning of a text. Do early drafts count, or shall we limit ourselves to the finished publication? What about marginal notes, commentary, public reception or influential works of criticism? When, indeed, can we with confidence proclaim that we have read and understood enough to finally get on with doing either literature, criticism or theory?

It is tempting to say that Frye is still correct in his assertion that there is no standard introductory work on criticism. The prevalence of many introductory works, plural, only serves to underline this point, albeit probably not in the spirit with which Frye made it. But I reckon it would be more fruitful to say that there is indeed a standard of introductory works, and that what unites them is an unwillingness to once and for all proclaim what literature (and the criticism of it) actually is. Literature is at once both the baseline of human expression (in its many forms), and the gradual expansion of the possibilities of human expression. We all agree that there is such a thing as literature, and then immediately start to argue about the finer points beyond this first principle. Establishing a firm definition of what literature is invites future authors to blur the line by new and creative literary feats, and criticism must always – lagging behind as it is – try to keep up with whatever tools it can get its hands on, theoretical or otherwise. Which is indeed a hopeful thought to take into an uncertain future. It certainly makes the present ever so slightly more contingent.

Works cited

Barry, P. (2009). Beginning Theory: an Introduction to Literary and Cultural Theory. Manchester: Manchester University Press.

Bennett, A., & Royle, N. (2009). An Introduction to Literature, Criticism and Theory. Harlow: Pearson Longman.

Culler, J. (1997). Literary Theory: a Very Short Introduction. Oxford: Oxford University Press.

Eagleton, T. (1996). Literary Theory: an Introduction. Cambridge: Blackwell.

Franzén, C. (2015). Grundbok i litteraturvetenskap: historia, praktik och teori. Lund: Studentlitteratur.

Frye, N. (1957). Anatomy of Criticism: Four Essays. Princeton: Princeton University Press.

Leitch, V. (ed). (2010). The Norton Anthology of Theory and Criticism. New York: W. W. Norton & Co.

Peck, J., & Coyle, M. (2002). Literary Terms and Criticism. Basingstoke: Palgrave.

Persson, M. (2007). Varför läsa litteratur?: om litteraturundervisningen efter den kulturella vändningen. Lund: Studentlitteratur.

The I Ching

The I Ching – the book of changes – is a strange thing. It is, all at once, a divinatory practice, a meditative technique, a highly significant cultural document and a vocabulary. All crammed into a very small package, most of which – for western readers – will consist of contextual information, clarifications and useful forewording. The actual text is a mixture of commentary, general life advice and technical documentation, all intertwined. Those looking for a straightforward read will be highly disappointed.

In technical terms, the I Ching is a six-bit binary system with 64 different states. As in computer binary, each bit can be either 0 or 1, yin or yang. Depending on which six bits are given by the divinatory process, the resulting sign can give very different interpretations of the situation you find yourself in. The sequence 000111 gives you the sign Stagnation, a very clear indication that the situation is hopeless and that nothing good can come out of persisting; the general advice is to leave as quickly as humanly possible. This can be contrasted with the at first glance seemingly opposite 111010, the sign for Calm Anticipation, which advises that a great or dangerous moment is imminent, yet that the time to act is not quite here; the general advice is to wait energetically. Two very different moods to find oneself in, yet very compactly conveyed through the use of merely six lines.
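For those who like to see the bookkeeping spelled out, here is a minimal sketch in Python of the six-bit reading just described. The string representation, the bottom-line-first ordering and the two-entry table are my own illustrative assumptions; only the sign names and their general advice come from the text, and a full table would of course hold all 64 signs.

```python
# A minimal sketch of the six-bit reading described above. Each hexagram is
# written as six characters, bottom line first, 1 for yang and 0 for yin.
# Only the two signs named in the text are included here.
SIGNS = {
    "000111": "Stagnation",         # nothing to salvage; leave quickly
    "111010": "Calm Anticipation",  # danger is near; wait energetically
}

def read_sign(lines: str) -> str:
    """Look up a hexagram from its six lines, given bottom to top."""
    if len(lines) != 6 or set(lines) - {"0", "1"}:
        raise ValueError("a hexagram is exactly six yin (0) or yang (1) lines")
    return SIGNS.get(lines, "not in this abridged table")

print(read_sign("000111"))  # Stagnation
print(read_sign("111010"))  # Calm Anticipation
```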

This efficiency is ever so slightly opaque to those who do not know the signs. It is also a remarkable achievement. It manages to place wildly disparate life experiences into the same framework, and thus allows for comparisons between different situations and the appropriate courses of action for each such situation. When under the sign of Stagnation, the only possible way forward is to just drop everything and get out, since nothing can be salvaged. When under the sign of Calm Anticipation, however, the opposite is true – the winning move is to keep your eyes firmly on what’s ahead and stick to the plan. The wisdom imparted by comparing these two signs is that these are two possible life situations to find oneself in, and that being able to tell which applies to the current moment is crucial to getting ahead.

As you might imagine, there are a great number of possible comparisons to make with 64 available signs. To make things even more interesting, each sign is subdivided into six subvariations depending on which line gets emphasized in the divinatory process. Take 111010 as an example. Emphasis on the first line indicates that the danger is still far away, and that the best way to prepare is to live in such a way that the appropriate virtues are firmly in place when it finally does arrive. This can be contrasted with the fourth line, which indicates that the danger is already clear and present, and that the proper move is not to make things worse in a blind panic, but to calmly hold fast. Both indicate that things will get better once the approaching danger can be overcome, but that overcoming this danger is a function of the actions taken in the calm moments of preparation.

Math enthusiasts will quickly figure out that these subvariations add up to 384, a respectable number of possible life situations. When I earlier called the I Ching a vocabulary, this is very much what I meant; being able to systematically distinguish between such a large number of possible situations (and the prudent courses of action for each) is a whole dedicated skill in itself. Being able to talk with confidence about the subtle differences between the different signs and their subvariations is yet another skill, one which may very easily be (as the character of Chidi in the TV series The Good Place so eloquently exemplifies) mistaken for wisdom. It is the allure of what is signified through 101001, Effortless Grace, which ever so slyly emphasizes the former over the latter.
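The arithmetic is easy enough to verify for oneself; here is a throwaway check, assuming nothing beyond the six binary lines and the six possible points of emphasis described above.

```python
# Throwaway check of the count: six binary lines give 2**6 = 64 signs, and
# emphasizing one of the six lines in each sign gives 64 * 6 = 384
# distinguishable situations.
from itertools import product

signs = ["".join(bits) for bits in product("01", repeat=6)]
variations = [(sign, line) for sign in signs for line in range(1, 7)]

print(len(signs))       # 64
print(len(variations))  # 384
```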

The great number of variations points towards one of the inherent paradoxes of the I Ching system. On the one hand, the sheer volume indicates that just about everything ought to be covered in there somewhere. On the other hand, any student of creative writing will surely be able to think up more than six variations for each sign, once they have gotten the general gist of what it is about. Indeed, anyone with sufficient life experience will be able to recall that one time when the sign itself was applicable, but none of the variants really fit. The world is greater than the attempt to systematically categorize it.

This paradox is not a bug, however. It is a feature. Once someone has gotten so used to the signs and variations that they are able to identify the blind spots of the system, they have mastered a vocabulary of situations, remedies and moods so vast as to be able to conceptualize just about anything they stumble upon. If a peculiar situation does not fit into the system, then that too is useful information, and indicates that there is something there that warrants thinking more intently about.

Thinking intently is one of the things the I Ching encourages its practitioners to do. Going through the motions of a divinatory session takes anything from 30 to 90 minutes, during which it is advisable to keep out all distractions. Not only because it is easy to lose count whilst going through said motions, but also because the sheer act of sitting still with the problem firmly in mind is itself a kind of thinking. As Jung almost phrased it, the hands are busy whilst the mind is given space to consciously and unconsciously process the situation. Once the answer is given and a sign appears, the practitioner is more than ready to see how it applies to the present circumstance, in extensive detail.

The I Ching is a peculiar text, a discursive anomaly. It is, I dare say, a small book of big moods.

Dark Souls (2011)

Dark Souls is in many ways the prototypical video game. When you first boot it up, there is a grand cinematic explaining the scope and breadth of the narrative universe – there is a god of lightning, a lord of death, a fire witch, a dragon, an epic battle! It’s all very dramatic and cinematic, and then

New Game

The player character is in a dungeon for some reason, and an unknown NPC throws down a key so as to make a timely escape possible. What follows is a period of getting used to the controls, possibly dying once or twice (the big boulder is a contender for this outcome), and an indirect lesson that sometimes you are not ready to fight the big demons just yet. The broken sword you begin with might be thematically proper, but something more pointy is required for actual combat. Thus players are introduced to the concepts of switching to appropriate gear and running past enemies, as need be.

When looking at gameplay after this point, what is striking is how much of it conforms to the image of video games that kids have. The player character is a dude (or dudette) with a sword, who fights generic enemies (whose individuality can be safely ignored) and bosses (whose uniqueness makes their backstories as interesting as their fighting techniques). All this in a setting steeped in backstory, lore and hidden secrets, which can be uncovered by players enthusiastic and determined enough to give it a go.

In other words, it is very much like when we were young and played early NES games. The graphics were pixelated to perfection, and the physical cartridges the games came on barely held enough information to convey any narrative outside the mechanics. Each and every pixelated enemy had a name, a backstory and a place in the universe. And, more importantly, an entry in the manual that came in the box – lovingly crafted to ensure the differentiation of one colored set of pixels from an identical albeit differently colored set of pixels. The Goombas and Bullet Bills had canonical names, and all the implied narrative infrastructure that comes from having a name.

In those archaic pre-internet days, this narrative infrastructure turned into local myths and legends. Part of it came from simply informing everyone involved about the facts – given time and enough double-checking of the manual, soon enough the Bowsers and the Lakitus were known entities. An even bigger part came from the telling and retelling of ideas of how the implied, never shown but carefully named, kingdoms or future settings had to be organized. The world of Super Mario had a princess and a whole series of monarchs being turned into various creatures, establishing that the mushroom kingdom was indeed a magical kingdom. The world of Mega Man implied a whole host of futuristic machines subverted to the twisted ways of Dr Wily, and so a setting could be imagined around that. And so on and so forth.

Given the lack of available textual information (the manual was only so large, and the cartridge could only contain so many bits), there was plenty of room for imagination and extrapolation. Indeed, even speculation. Many a friend group had informal theories of what may or may not have transpired – I dare not call them fan theories, lest the gamers grow restless – some of which are still remembered fondly to this day. These theories served as a springboard and expression for young imaginative minds, and as an informal social glue in a time when such things were rare indeed. If you ever get the chance, do probe someone about their childhood imaginings of these virtual worlds. There is more there than might meet the eye.

When I say that Dark Souls is a prototypical video game, I mean that it harkens back to this earlier era of mythological expansion and exegesis. An enemy is not just an enemy – they have names and backstories. The bosses are not just slightly tougher enemies – they have intricate relationships with each other and the world they find themselves in. The world is not just something put in place by virtue of the necessity of having to render something on the screen to make the gameplay look appealing – everything is significant, every detail conveys important information, every aspect contributes to the overall story. There is more backstory to be uncovered, and more importantly, more stories to be told. Dark Souls is very good at bringing out the forensic storytellers inside its players.

With the advent of the internet, the social space of this storytelling has shifted from the geographically available friend group to a more global setting. The Dark Souls portion of YouTube has viewerships in the millions, with cooperating and competing exegetes comparing notes. The drive to tell, retell and refine the stories found implied in the games – always implied, just at one remove – is still there, burning like a great bonfire. Or, more accurately, like many small bonfires scattered across the lands.

There are those who speak of Dark Souls only in terms of difficulty, as some great obstacle to be overcome by those worthy enough. While I do acknowledge that this, too, is part of the myth building that eventually leads to storytelling, and that there are parallels to the whole Nintendo Hard thing, I must say that such simplistic takes miss the point. If difficulty is your only point of reference for talking about the game, then I am sad to inform you that you have officially failed at Dark Souls.

Take heart, however, for there is always an opportunity to play again. The age of fire is still with us, for a brief time longer. A new game awaits, and new stories. Tell them well.

Terminator: Dark Fate (2019)

When anticipating the new Terminator movie, I had two sets of expectations as to how it would play out. Interestingly enough, these expectations map almost seamlessly onto two modern entries in the first person shooter genre of computer games, DOOM (2016) and the Wolfenstein series. These represent two radically different takes on the same genre, and thus serve well as templates for how a new movie in a roughly adjacent genre might play out.

The old DOOM, of 90s fame, was an unabashed feast of running, gunning and rock music. When you did not run, you gunned. When you did not gun, you sure did run. At higher skill levels, the player did both at once, in a non-stop action romp accompanied by the most rolling of rocks. To call it a cerebral experience would be an insult, given its heavy-duty focus on running, gunning and nothing else. See the thing, shoot the thing; nothing is too big to blow up, preferably by gun.

This trend continues in the 2016 incarnation, which manages something as seemingly contradictory as an intelligent take on the shooter genre. Rather than trying to smarten things up with an intricate storyline, sophisticated dialogue or morally ambiguous gameplay choices, the developers intentionally pushed all that aside in favor of even more running and gunning (and ripping and tearing). Cleverly, they chose to express this through the actions of the player character, who at times reacts to the increasingly over-the-top story beats in either of two ways: blowing it up or tearing it apart. The most iconic expression of this is a passage where the antagonist lays out how to carefully disassemble a complex device, so as to be able to put it back together later, and the player character responds by simply smashing it to pieces. DOOM is not a game about careful deliberation or consideration; let there be no uncertainty on this point.

Naturally, this is an expectation which fits neatly onto the prospect of a new Terminator movie. It would be all too possible to make a deliberate decision to go all in on the action aspects of the franchise. Big guns, big robots, even bigger explosions, even longer car chases. Drop all pretenses of plot and polish in favor of a big badaboom spectacle, rumbling and tumbling, going back to the series’ bad-to-the-bone origins, all the while knowing that this is exactly what’s what.

As a contrasting template, we have the Wolfenstein series. The originals played out much like the DOOM games, albeit with slightly less rock music and way more Nazis. See a bad guy, shoot a bad guy. See a suspiciously spaced section of wall, press the suspiciously spaced section of wall; three times out of ten, it was a secret door leading to a hidden chamber. Whilst generally slower paced than DOOM, they would be placed in the same overall category by anyone with a rough recollection of computer games from the time. Bang bang.

The modern installments of the franchise, however, are a surprisingly nuanced take on what it means to engage in a sustained campaign of resistance against an overwhelming force with every advantage on its side. The big plot points – with time travel and Nazi moon bases and mecha troopers and the rest of it – are as over the top as all get-out. The discussions amongst the groups of people the player character belongs to, and even more so between the player character and said groupings, tell a story of resigned hope, sustained subversion of the new order, and of a perpetual iron will to effect change despite everything. It also, between the lines, serves as a critique of the idea that one person, no matter how ridiculously overpowered in terms of computer game conventions, can truly change anything by rampaging through the societal institutions defining our time. The world is ever so slightly too large for a one-man fix-all solution, and in order to effect institutional change, numbers are needed.

This rejection of the great man theory of history, too, could serve as a template for expectations of a new Terminator movie. Whilst acknowledging its past as a big badaboom spectacle, it would be very possible for the franchise to distance itself from the assumptions inherent in the genre and strike out in a new direction. Either in terms of simply adjusting the numbers on either side – no longer one robot vs the world, but a whole host of robots against a slightly smaller subset of the world – or by introducing subtle complexities into the concept of killer robots which turn everything on its head. The potential for making things cerebral is, as the case of Wolfenstein has shown, very possible to realize indeed.

The new Terminator movie tries to have it both ways, and partially succeeds. Those expecting a traditional escapade of explosions and excitement get their fill, with some room to spare. At one point, there is a fight in a rapidly descending airplane on fire, because of course there is. At another point, the movie goes out of its way to depict scenes from domestic life in a Hispanic neighborhood, with a quiet dignity and somber pace quite at odds with the aforementioned explosions. As it unfolded, I found myself thinking: this is a Terminator movie?

Make no mistake. It is a Terminator movie. Only, it’s thirty years later, and the movie makes great efforts to acknowledge this. The world has moved on. Skynet belongs to a bygone era, a timeline which has ceased to exist. No one remembers it other than a select few involved with stopping it; the new killer robots have no names, no known motivations, no known backstory. Their only defining feature is that they are legion, and that they are out to get us. Their sheer anonymity makes them that much scarier – Skynet might be the devil we know, but at least we know it. We do not know what fate awaits us in the coming apocalypse, and that makes the old apocalypse stories less interesting. The coming apocalypse will not be instigated by something with a name, and thus we need more nuanced stories to foretell its arrival.

In conveying this message, the new Terminator movie succeeds. As to what comes next, there is no fate but that which we imagine ourselves.

Robert McLuhan: Randi’s prize

An ancient rhetorical truth is that it is easier to get an audience to agree with something if they have already agreed with other things. It does not have to be big things, or important things, or even significant things. The mere act of having once nodded in agreement is a gateway drug, as it were, to further nodding. Part of it is momentum – if a, then b, then probably c too, given the trajectory. Another part is the sheer fact that the orator has been agreeable thus far, and has thus had time to establish themselves as someone who knows what they are talking about. Even if the things agreed upon are the rhetorical equivalent of small talk, the dynamic is much more favorable than if the orator went all in for the main points right from the get go. The audience has become familiar with the voice talking, and rhetorically that goes a long way.

One needs to keep this point in mind whilst reading Randi’s prize. Both in order to understand why the book is structured the way it is – it takes quite a long time to actually get around to talking about the titular prize – and in order to be aware that there comes a point where the book trades in its early merits for future favors. The painstaking historical account given in the early chapters does not, strictly speaking, inform the claims made later on. Yet, upon reading, an uncritical reader might find themselves nodding along out of sheer habit.

Before delving deeper, it might be prudent to specify just what Randi’s titular prize is. Its official title, the One Million Dollar Paranormal Challenge, gives us a hint both as to the amount of money involved, and as to what manner of activities are involved in winning it. The challenge was, to phrase it in its simplest form, to prove under scientifically controlled experimental conditions, agreed upon in writing beforehand, that something paranormal is going on. The ‘something paranormal’ could, by virtue of the vastly varying nature of paranormal claims made historically and at present, be any of a long list of things, from dowsing to mediumship to remote viewing. The exact nature of the scientific tests of these proclaimed paranormal abilities would naturally vary depending on which ability was to be tested. The general gist of it was that if the science bore out, then the prize money would follow.

It has to be said that this is a rather rational bet on the part of James Randi. Things that are outside the scope of modern science are (by definition) not prone to be tested under scientifically controlled experimental conditions (had they already been thus tested, they would have become part of regular science). Claims of paranormal or supernatural activity are thus always-already outside the scope of scientific testing, which means that the likelihood of someone showing up with something that actually works is as close to zero as any human being would rationally factor in. There is a theoretical possibility that it might happen, but mathematically speaking, winning the lottery would be more likely.

Here, we bump into an important dividing line. On one side, we find those who claim that supernatural things are not real, which means they cannot be demonstrated experimentally. On the other side, we find those who claim that supernatural things are real, and that the fact that they cannot be tested experimentally is a fault on the part of science. The author of this blog post leans towards the former (cf. the anomaly on astrology), whilst McLuhan (no relation) leans towards the latter.

The early chapters outline the history of paranormal activity in 19th and early 20th century British and American contexts. They provide an interesting and informative introduction to the cultural practices of séances, table-turning and the consultation of mediums. They also make a point of correcting various accounts made by sceptics (of which Randi is one) about these very same cultural practices. Again and again, it is shown that sceptics got the historical facts wrong, and could have made their cases better had they but bothered to do their homework, rather than just dismissing the whole thing as mere nonsense. The rhetorical trajectory of proving sceptics wrong again and again is firmly established during these chapters, with many examples and at great length.

Here, it should be noted that there is a historical record of paranormal practices throughout time, and that it is important to keep it straight. The book does a great job of taking sceptics to task and demanding that they apply the same attention to detail when discussing paranormal activity as when they do actual science. Being wrong in the name of being right is not a good look, and should not be among the virtues cultivated by those proclaiming to love science.

However – and here our introductory remarks on the rhetorical efficacy of small agreements come into play – the book then leverages these inconsistencies on the part of sceptics into a positive argument for the existence of telepathy and remote viewing. If we agree that the sceptics were wrong on these historically documented facts, the thrust of the argumentation goes, then we will probably also agree that they are wrong when they say that telepathy is impossible. After all, they do have a track record of being wrong – there were, in fact, whole chapters devoted to how wrong they were.

The philosophically appropriate objection to this line of reasoning is that someone being wrong about something does not mean the opposite is correct. Indeed, someone being wrong about one thing does not even mean they are automatically wrong about another thing. It does not follow from sceptics being wrong about séance culture that they are wrong about telepathy – moreover, even if they are, proving this would not constitute proof of telepathy as an actually existing thing. The momentum of rhetorical prowess does not hold sway over these things.

The irony is that if this book had limited itself to correcting the historical record, it would be a nice addition to the collections of esoteric books found in unexpected places (you know the ones). However, since it posits itself as a polemic against sceptics in general (and the titular James Randi in particular), it most likely will not find its way to those whose historical understanding needs amending, nor will it (since the titular prize was terminated in 2015) be anything but a historical, rhetorical and discursive anomaly.

Prefiguring Cyberculture

Do not be fooled by its glossy exterior. Prefiguring Cyberculture might look like a coffee table book, and it does indeed fill the function of a coffee table book admirably: it is big, it prominently features the word “cyber” on the cover, and it even has pictures. In short, those in the market for such display items could do worse than to seek out a copy to strategically place in a prominent spot.

Those daring to pick up the ever so slightly oversized tome and open its pages may or may not be delighted to find that it was published in the early 00s. The dividing line between dismay and delight lies with one’s familiarity with literature pertaining to things cyber. Newcomers might harbor the intuition that this is an outdated scripture whose insights have been superseded by actually existing history, useful only as a way for historians to keep track of what happened when. Aficionados of the genre, however, know that the future is not what it used to be, and that 90s and (very) early 00s cyberoptimism was a radically different beast than what came before or after. In short, knowing its publication date informs a prospective reader of what manner of reading is to come.

This temporal aspect runs through the anthology at every turn. Indeed, its preface even acknowledges that readers in some distant and yet unknown cyberfuture might find its speculations quaint, fanciful or accurate in equal measure. In the same vein, the book’s project to investigate the roots of cyberculture – to prefigure it – means that the “now” is an ever negotiated position. The history of cyber is not only a future endeavor, but also something that harkens back to decades and centuries well before there were learned books on the subject. Historically speaking, merely looking at things prominently featuring the word “cyber” does not tell the whole story.

Those familiar with the genre will not be surprised that one of the first essays is on the topic of Cartesian dualism as it relates to fictional portrayals of artificial intelligence. In more ways than one, this is a prototypical choice of topic for an essay of this era – it takes something really, really old and applies it to something really, really new. The ensuing discussion – regarding how Cartesian dualism has been criticized just about every way it could possibly be criticized (and then some), yet somehow finds purchase in literary depictions of computer intelligences alive without corporeal form – is par for the course. Indeed, I suspect not a few readers will nod and think “yes, this is the content that I crave”. A subset of these readers might then happen upon a second thought: why don’t people write this way any more?

It is a question that radiates from every page, all the while the individual essays are busy discussing this or that historical aspect in detail. It is tempting to propose that one reason might be the arrival of the cyberfuture itself, which has served to make casual longform writing obsolete; we have online video essays, podcasts and extensive subtweeting to replace the old style communicative form of structured written words. The nature of technological change means new technologies are used (otherwise it would not be much of a change). Given the new capacities of our cyberreality, it would be somewhat archaic to keep doing it old style. To phrase it in contemporary parlance: blogs do not generate engagement or drive traffic.

Framed this way, the book finds itself in the ironic position of painstakingly outlining how the written word has predicted futures (plural) up until the point where the written word is firmly something of the past. Once we got here, the tools of our ancestors were replaced with something more modern. The future arrived; time to let go of the past.

On that note, another chapter features a lengthy account of medieval sponges capable of storing the spoken word (replayable upon the proper squeeze), which then transitions into a pondering of just how we think about the various memory devices we use every day. Memory is not just a number ascribed to hard drives, but also the very thing we use to navigate our way through the whole ordeal of being alive. If we do not remember something, it in some sense ceases to exist. If we outsource our memory processes to external machines, then what becomes of the subject, left to its own devices?

If history and memory are cyber, this raises the question of just what is not cyber. Careful analytical readers will possibly object that this all seems a case of overreach, of overapplication of an underdefined concept. This is, potentially, true. But it also pokes at a contemporary trend of things going post-digital. The 90s ended, cyberculture became the default mode of everyday life, and we are now able to grapple with such complex phenomena as Tinder dating rituals without having to discuss at length the various interface affordances of the platform. In very short order, we have gotten past the changes and barged into a future without hesitation or the nostalgic foresight of erecting milestones. One of the few visible legacies remaining is the seemingly mandatory introductory sentence “the improved capacity in communication technologies over the last decades have changed our ways of communication”, with its countless variations on the theme. Indeed, bewildered cyberyouth often find themselves wondering how people did these things (for any given definition of ‘these things’) back in the old days.

The book, ultimately, tries to answer part of the question of where the big ideas of cyber came from. Inadvertently, it also raises the question of where the big questions of cyber went.

Lindner: Walks on the wild side

The subtitle of Rolf Lindner’s Walks on the wild side is “a history of urban studies”, a nod to the fact that most historical research on urban places consists of literally walking into the wilder, seedier parts of town. The first urban explorers – an apt title for the 19th century journalists acting as anthropologists in the blank spots of their own cities – made an entire genre out of describing their descent into the uncivilized wilds. According to the genre conventions of travel literature, such accounts began not at the thresholds of the wilder sides, but in the dressing rooms of the journalists themselves; the more detailed and exotic the description of one’s preparation for the Descent, the more titillating the read. Ideally, not only the outward appearance of the intrepid explorer would change, but their demeanor and behavior had to undergo a transformation as well. Famously, cleanliness and proper manners were the first virtues to go. Walking on the wild side is not as easy as simply ambulating there – you have to look and act the part of a slum-dweller, lest the natives realize you’re not from around there and close ranks, sealing off access to their strange ways forever.

It goes without saying that there is something of a class bias inherent in these early journalistic endeavors to depict the wild men at home. What does not go without saying is how much of this sentiment remains in contemporary urban studies, and how much the endeavor to this day resembles those early exploratory efforts. Those in the know – the civilized world, the academy, the middle classes – step outside the safe zones of their everyday lives and walk into the dark unknown. The resulting travel reports made for great newspaper pieces back in the early days, and those of a critical bent might suggest they make for great articles in the scientific journals of today – with the important difference that the lengthy descriptions of wardrobe changes have been replaced with lengthy descriptions of methodological considerations. Sometimes, the two coincide.

This historical account serves as a backdrop for Lindner’s discussion of the Chicago school sociologists and their urban escapades. Said escapades usually consist of leaving the confines of the university campus and venturing out into the cityscape in search of interesting places to become familiar with and at. Not in the descent-into-darkness fashion of earlier eras, but in a more horizontal fashion – the study of sociology requires you to be in social situations, and thus the way to go about it is to find yourself in these very same situations. Not as an observer, but as a participant. Often, the way to go about it is to simply show up and ask if you can tag along (especially if you are a young student of average delinquency), and then see where things take you. At some point, a return to the campus area is necessary, but it is not the be-all and end-all of sociological activity. The world under study is, and will always be, out there.

What emerges between the lines is an unfettered fascination with the wide range of different forms of life humans can evolve despite being physically proximate to one another, a happiness over being able to partake in a multitude of contexts, and the sheer joy of exploration. These are not traits typically associated with academics (stereotypical or actual), and the contrast can be seen even more starkly in the dissertation titles from that era. A Columbia dissertation styled itself Modes of cultural release in western social systems, which (with a small dose of counterintuitive thinking) can be parsed to be about the consumption of alcohol. Meanwhile, a Chicago dissertation had the slightly more locally flavored title Social interaction at Jimmy’s: a 55th St. bar. It does not take much imagination to picture the different styles of sociological account given under each heading.

The methodological move from being an outside observer to becoming part of the gang (at times literally) is, in Lindner’s telling, not only a historical matter. It is also a methodological poke at the present. The journalistic descents of old into the uncivilized parts of town were unabashedly meant for a middle-class readership, and it would be a minor scandal indeed if one of the dangerous classes had the temerity to point out that these accounts were by definition limited in scope and accuracy. The Chicago sociologists, meanwhile, had the advantage of knowing the ins and outs of the situations they integrated themselves into, but inevitably ran into the problem of how to generalize from particular situations to general tendencies (whether this is an inevitable feature of human social life is a question for another post). Both genres had the advantage of knowing just who their target audiences were (the middle classes and the social context being investigated, respectively), but the same cannot be said about contemporary researchers of urban phenomena. Just who do we, as researchers and explorers, write for? Our employers, our editors, some abstract notion of shareholders, our patrons, the zeitgeist, our friends, our enemies?

Whether or not you have an interest in sociology or urban studies, Walks on the wild side will leave you methodologically confused. Perhaps it will even encourage you to do something as radical as taking a walk into a neighborhood you are not overly familiar with, just to see what’s there. I suspect you will find it is not too different from your current circumstance. But one cannot be sure before taking the methodological step of going there.

Wimsatt & Beardsley: The intentional fallacy

Back in 1946, two gentlemen named Wimsatt and Beardsley published a short text on literary criticism. Its title, designed to draw readers as well as perhaps spark the sort of mild controversy that only literary critics can muster, was The Intentional Fallacy. These words were chosen with care, so as to pinpoint exactly where the contentious issue is to be found. There is a fallacy, increasingly common, and it has to do with intention. Specifically, critics spent too much time focusing on whether an author intended this or that, making cases either way using the critical tools at hand. This in itself is not the problem – such cases can be made, and they can be made well, in ways that illumine the analyzed works in interesting and useful ways. The fallacy consists not in caring what an author may or may not have intended, but in making authorial intent the main locus of one’s critical endeavor, the metric with which success is measured. The purpose of criticism is not to provide an exegesis of what an author intended, after all, and accuracy in this regard should not be the thing that distinguishes adequate critics from excellent ones.

If this seems a rather subtle point, that is because it is a very subtle point indeed. At the core of it lies the distinction between two related but very different questions: “what did the author mean?” and “what does the text say?”. This might seem a merely semantic point, but the methods by which one would go about finding answers to these questions differ greatly.

The first question suggests a method of biographical and historical analysis, which compares the particulars of a text to various ideas and sentiments floating about in the general environs of the person holding the pen. Such an analysis can, as mentioned above, be performed to great effect, but it requires a very careful hand and an even more careful eye for detail; reconstructing past mindsets is a rare skill indeed. Most of it comes down to sentiment, a thing that cannot be proven definitively one way or another, but which can be conveyed through the effort of skilled criticism. The authors themselves can, of course, simply opt to tell us what the whole idea was, were we but to ask them.

The second question is more immediate, and easier to get to work on in a methodological fashion. In essence, it consists of empirically analyzing the words of a text to see what they do, and how they go about doing it. This, too, requires a keen awareness and attention to detail, especially when it comes to those aspiring to critique a poem, where placing a word this way rather than that can imply a world of difference (and different worlds). The poem performs this intricate dance of implication as it is written, and a careful critic can tease out what’s what by means of looking very closely.

The fallacy, according to Wimsatt and Beardsley, lies in conflating the two questions, and in dismissing the efforts to answer the one by reference to how one would have answered the other. This seems abstruse in the abstract, but by virtue of the seventy-odd years of popular culture that have taken place since the date of publication, we have the advantage of a very concrete example to straighten the whole deal out. The time of Rowling has come.

Rowling, famously, proclaimed that Dumbledore (benevolent headmaster and dubious pedagogue) was in fact gay the whole time. This is a clear statement of intent, solidly and unequivocally answering the question of what she meant Dumbledore to be. A careful reading of the books, however, finds scant evidence of the proclaimed gaiety – indeed, no textual evidence whatsoever. Which actualizes the second question in a very dramatic fashion, and forces us to confront the fact that author and text say different things.

One option would of course be to simply accept the proclamation and read Dumbledore as gay from now on. A straightforward solution, but one that lacks the quality of being critical, or (given the lack of in-text occurrences where it would have made a dramatic difference) of even being meaningful. Accepting the oracular proclamation answers the question, but it does not further our understanding of anything. Critically, it is a dead end.

A more interesting option would be to interrogate the significance of a character being written in such a way that their sexuality does not seem to matter in the slightest, to such a degree that it can flip-flop back and forth without anything changing. This, I reckon, is a more interesting question to ponder, and one which furthers our understanding of (among other things) representation in popular media. By sticking to the text as written, we can get to work on the important stuff.

There is more to the intentional fallacy, however. The fate of criticism itself hangs in the balance. If a critic, after painstakingly walking us through a body of work, concludes that it says something very specific, something which fundamentally contradicts the professed values of the author, then this conclusion should stand on the basis of the critical demonstration that led to it. It is a question of the second type, answered by a methodology suited for that kind of question. Given this, the author should not be able to make the whole thing go away by simply saying “no it doesn’t”.

Such a state of things would render the whole critical endeavor meaningless. Which, as fallacies go, is a big one.
