Robert McLuhan: Randi’s prize

An ancient rhetorical truth is that it is easier to get an audience to agree with something if they have already agreed with other things. These do not have to be big things, or important things, or even significant things. The mere act of having once nodded in agreement is a gateway drug, as it were, to further nodding. Part of it is momentum – if a, then b, then probably c too, given the trajectory. Another part is the sheer fact that the orator has been agreeable thus far, and has thus had time to establish themselves as someone who knows what they are talking about. Even if the things agreed upon are the rhetorical equivalent of small talk, the dynamic is much more favorable than if the orator went all in on the main points right from the get-go. The audience has become familiar with the voice talking, and rhetorically that goes a long way.

One needs to keep this point in mind whilst reading Randi’s prize. Both in order to understand why the book is structured the way it is – it takes quite a long time to actually get around to talking about the titular prize – and in order to be aware that there comes a point where the book trades in its early merits for future favors. The painstaking historical account given in the early chapters does not, strictly speaking, inform the claims made later on. Yet, upon reading, an uncritical reader might find themselves nodding along out of sheer habit.

Before delving deeper, it might be prudent to specify just what Randi’s titular prize is. Its official title, the One Million Dollar Paranormal Challenge, gives us a hint both as to the amount of money involved, and to what manner of activities are involved in winning it. The challenge was, to phrase it in its simplest form, to prove under scientifically controlled experimental conditions, agreed upon in writing beforehand, that something paranormal is going on. The ‘something paranormal’ could, by virtue of the vastly varying nature of paranormal claims made historically and at present, be any of a long list of things, from dowsing to mediumship to remote viewing. The exact nature of the scientific tests of these proclaimed paranormal abilities would naturally vary depending on which ability was to be tested. The general gist of it was that if the science bore out, then the prize money would follow.

It has to be said that this is a rather rational bet on the part of James Randi. Things that are outside the scope of modern science are (by definition) not prone to be tested under scientifically controlled experimental conditions (had they already been thus tested, they would have become part of regular science). Claims of paranormal or supernatural activity are thus always-already outside the scope of scientific testing, which means that the likelihood of someone showing up with something that actually works is as close to zero as any rational person would factor in. There is a theoretical possibility that it might happen, but mathematically speaking, winning the lottery would be more likely.

Here, we bump into an important dividing line. On the one side, we find those who claim that supernatural things are not real, which means they cannot be demonstrated experimentally. On the other side, we find those who claim that supernatural things are real, and that the fact they cannot be tested experimentally is a fault on the part of science. The author of this blog post leans toward the former (cf. the anomaly on astrology), whilst McLuhan (no relation) leans towards the latter.

The early chapters outline the history of paranormal activity in the 19th and early 20th century British and American contexts. They provide an interesting and informative introduction to the cultural practices of séances, table-turning and the consultation of mediums. They also make a point of correcting various accounts made by sceptics (of which Randi is one) about these very same cultural practices. Again and again, it is shown that sceptics got the historical facts wrong, and could have made their cases better had they but bothered to do their homework, rather than just dismiss the whole thing as mere nonsense. The rhetorical trajectory of proving sceptics wrong again and again is firmly established during these chapters, with many examples and at great length.

Here, it should be noted that there is a historical record of paranormal activity throughout time, and that it is important to keep it straight. The book does a great job of taking sceptics to task, demanding that they apply the same attention to detail when discussing paranormal activity as they do when doing actual science. Being wrong in the name of being right is not a good look, and should not be among the virtues cultivated by those proclaiming to love science.

However – and here our introductory remarks on the rhetorical efficacy of small agreements come into play – the book then leverages these inconsistencies on the part of sceptics into a positive argument for the existence of telepathy and remote viewing. If we agree that the sceptics were wrong on these historically documented facts, the thrust of argumentation goes, then we will probably also agree that they are wrong when they say that telepathy is impossible. After all, they do have a track record of being wrong – there were in fact whole chapters devoted to how wrong they were.

The philosophically appropriate objection to this line of reasoning is that someone being wrong about something does not mean the opposite is correct. Indeed, someone being wrong about one thing does not even mean they are automatically wrong about another thing. It does not follow from sceptics being wrong about séance culture that they are wrong about telepathy – moreover, even if they are, proving this would not constitute proof of telepathy as an actually existing thing. The momentum of rhetorical prowess does not hold sway over such things.

The irony is that if this book had limited itself to correcting the historical record, it would be a nice addition to the collections of esoteric books found in unexpected places (you know the ones). However, since it positions itself as a polemic against sceptics in general (and the titular James Randi in particular), it most likely will not find its way to those whose historical understanding needs amending, nor will it (since the titular prize was terminated in 2015) be anything but a historical, rhetorical and discursive anomaly.


Prefiguring Cyberculture

Do not be fooled by its glossy exterior. While Prefiguring Cyberculture might look like a coffee table book, there is more to it than that – even though it does fill the function of coffee table book admirably. It is big, it prominently features the word “cyber” on the cover, and it even has pictures. In short, those in the market for such display items could do worse than to seek out a copy to strategically place in a prominent spot.

Those daring to pick up the ever so slightly oversized tome and open its pages may or may not be delighted to find that it was published in the early 00s. The dividing line between dismay and delight lies with one’s familiarity with literature pertaining to things cyber. Newcomers might harbor the intuition that this is an outdated scripture whose insights have been superseded by actually existing history, useful only as a way for historians to keep track of what happened when. Aficionados of the genre, however, know that the future is not what it used to be, and that 90s and (very) early 00s cyberoptimism was a radically different beast than what came before or after. In short, knowing its publication date informs a prospective reader of what manner of reading is to come.

This temporal aspect runs through the anthology at every turn. Indeed, its preface even acknowledges that readers in some distant and yet unknown cyberfuture might find its speculations quaint, fanciful or accurate in equal measure. In the same vein, the book’s project to investigate the roots of cyberculture – to prefigure it – means that the “now” is an ever negotiated position. The history of cyber is not only a future endeavor, but is also something that harkens back to decades and centuries well before there were learned books on the subject. Historically speaking, merely looking at things prominently featuring the word “cyber” does not tell the whole story.

Those familiar with the genre will not be surprised that one of the first essays is on the topic of Cartesian dualism as it relates to fictional portrayals of artificial intelligence. In more ways than one, this is a prototypical choice of topic for an essay of this era – it takes something really, really old and applies it to something really, really new. The ensuing discussion – of how Cartesian dualism has been criticized just about every way it could possibly be criticized (and then some), yet somehow finds purchase in literary depictions of computer intelligences alive without corporeal form – is par for the course. Indeed, I suspect not a few readers will nod and think “yes, this is the content that I crave”. A subset of these readers might then happen upon the second thought: why don’t people write this way any more?

It is a question that radiates from every page, all the while the individual essays are busy discussing this or that historical aspect in detail. It is tempting to propose that one reason might be the arrival of the cyberfuture itself, which has served to make casual longform writing obsolete; we have online video essays, podcasts and extensive subtweeting to replace the old-style communicative form of structured written words. The nature of technological change means new technologies get used (else it would not be much of a change). Given the new capacities of our cyberreality, it would be somewhat archaic to keep doing it old style. To phrase it in contemporary parlance: blogs do not generate engagement or drive traffic.

Framed this way, the book finds itself in the ironic position of painstakingly outlining how the written word has predicted futures (plural) up until the point where the written word is firmly something of the past. Once we got here, the tools of our ancestors were replaced with something more modern. The future arrived; time to let go of the past.

On that note, another chapter features a lengthy account of medieval sponges capable of storing the spoken word (replayable upon the proper squeeze), which then transitions into a pondering of just how we think about the various memory devices we use every day. Memory is not just a number ascribed to hard drives, but also the very thing we use to navigate our way through the whole ordeal of being alive. If we do not remember something, it in some sense ceases to exist. If we outsource our memory processes to external machines, then what becomes of the subject, left to its own devices?

If history and memory are cyber, this raises the question of just what is not cyber. Careful analytical readers will possibly object that this all seems a case of overreach, of overapplication of an underdefined concept. This is, potentially, true. But it also pokes at a contemporary trend of things going post-digital. The 90s ended, cyberculture became the default mode of everyday life, and we are now able to grapple with such complex phenomena as tinder dating rituals without having to discuss at length the various interface affordances of the platform. In very short order, we have gotten past the changes and barged into a future without hesitation or the nostalgic foresight of erecting milestones. One of the few visible legacies remaining is the seemingly mandatory introductory sentence “the improved capacity in communication technologies over the last decades has changed our ways of communication”, with its countless variations on the theme. Indeed, bewildered cyberyouth often find themselves wondering how people did these things (for any given definition of ‘these things’) back in the old days.

The book, ultimately, tries to answer part of the question of where the big ideas of cyber came from. Inadvertently, it also raises the question of where the big questions of cyber went.


Lindner: Walks on the wild side

The subtitle of Rolf Lindner’s Walks on the wild side is “a history of urban studies”, a nod to the fact that most historical research on urban places consists of literally walking into the wilder, seedier parts of town. The first urban explorers – an apt title for the 19th century journalists acting as anthropologists in the blank spots of their own cities – made an entire genre out of describing their descent into the uncivilized wilds. According to the genre conventions of travel literature, such accounts began not at the thresholds of the wilder sides, but in the dressing rooms of the journalists themselves; the more detailed and exotic the description of one’s preparation for the Descent, the more titillating the read. Ideally, not only the outward appearance of the intrepid explorer would change, but their demeanor and behavior had to undergo a transformation as well. Famously, cleanliness and proper manners were the first virtues to go. Walking on the wild side is not as easy as simply ambulating there – you have to look and act the part of a slum-dweller, lest the natives know you’re not from around there and close ranks, sealing off access to their strange ways forever.

It goes without saying that there is something of a class bias inherent in these early journalistic endeavors to depict the wild men at home. What does not go without saying is how much of this sentiment remains in contemporary urban studies, and how the endeavor to this day resembles those early exploratory efforts. Those in the know – the civilized world, the academics, the middle classes – step outside the safe zones of their everyday lives and walk into the dark unknown. The resulting travel reports made for great newspaper pieces back in the early days, and those of a critical bent might suggest they make for great articles in the scientific journals of today – with the important difference that the lengthy descriptions of wardrobe changes have been replaced with lengthy descriptions of methodological considerations. Sometimes, the two coincide.

This historical account serves as a backdrop for Lindner’s discussion of the Chicago school sociologists and their urban escapades. Said escapades usually consist of leaving the confines of the university campus and venturing out into the cityscape in search of interesting places to become familiar with and at. Not in the descent into darkness fashion of earlier eras, but in a more horizontal fashion – the study of sociology requires you to be in social situations, and thus the way to go about it is to find yourself in these very same situations. Not as observer, but as participant. Often, the way to go about it is to simply show up and ask if you can tag along (especially if you are a young student of average delinquency), and then see where things take you. At some point, a return to the campus area is necessary, but it is not the be-all and end-all of sociological activity. The world under study is, and will always be, out there.

What emerges between the lines is an unfettered fascination with the wide range of different forms of life humans can evolve despite being physically proximate to each other, a happiness over being able to partake in a multitude of contexts, and the sheer joy of exploration. These are not typical traits associated with academics (stereotypical or actual), and the contrast can be seen even more starkly in the dissertation titles from that era. A Columbia dissertation styled itself Modes of cultural release in western social systems, which (with a small dose of counterintuitive thinking) can be parsed to be about the consumption of alcohol. Meanwhile, a Chicago dissertation had the slightly more locally flavored title Social interaction at Jimmy’s: a 55th St. bar. It does not take much imagination to picture the different styles of sociological account given under each heading.

Lindner’s account of the methodological move from outside observer to part of the gang (at times literally) is not only historical. It is also a methodological poke at the present. The journalistic descents of old into the uncivilized parts of town were unabashedly meant for a middle-class readership, and it would be a minor scandal indeed if one of the dangerous classes had the temerity to point out that these accounts were by definition limited in scope and accuracy. The Chicago sociologists, meanwhile, had the advantage of knowing the ins and outs of the situations they integrated themselves into, but inevitably ran into the problem of how to generalize from particular situations to general tendencies (whether this is an inevitable feature of human social life is a question for another post). Both genres had the advantage of knowing just who their target audiences were (the middle classes and the social context being investigated, respectively), but the same cannot be said about contemporary researchers of urban phenomena. Just who do we, as researchers and explorers, write for? Our employers, our editors, some abstract notion of shareholders, our patrons, the zeitgeist, our friends, our enemies?

Whether or not you have an interest in sociology or urban studies, Walks on the wild side will leave you methodologically confused. Perhaps it will even encourage you to do something as radical as taking a walk into a neighborhood you are not overly familiar with, just to see what’s there. I suspect you will find it is not too different from your current circumstance. But one cannot be sure before taking the methodological step of going there.


Wimsatt & Beardsley: The intentional fallacy

Back in 1946, two gentlemen named Wimsatt and Beardsley published a short text on literary criticism. Its title, designed to draw readers as well as perhaps spark the sort of mild controversy that only literary critics can muster, was the Intentional Fallacy. These words were chosen with care, so as to pinpoint exactly where the contentious issue is to be found. There is a fallacy, increasingly common, and it has to do with intention. Specifically, critics spent too much time focusing on whether an author intended this or that, making cases either way using the critical tools at hand. This in itself is not the problem – such cases can be made, and they can be made well, in ways that illumine the analyzed works in interesting and useful manners. The fallacy consists not in caring what an author may or may not have intended, but in making authorial intent the main locus of one’s critical endeavor, the metric with which success is measured. The purpose of criticism is not to provide an exegesis of what an author intended, after all, and accuracy in this regard should not be the thing that distinguishes adequate critics from excellent ones.

If this seems a rather subtle point, then it is because this is a very subtle point indeed. At the core of it lies the distinction between two related but very different questions: “what did the author mean?” and “what does the text say?”. This might seem a very semantic point, but the methods one would go about finding answers to these questions differ greatly.

The first question suggests a method of biographical and historical analysis, which compares the particulars of a text to various ideas and sentiments floating about in the general environs of the person holding the pen. Such an analysis can, as mentioned above, be performed to great effect, but it requires a very careful hand and an even more careful eye for detail; reconstructing past mindsets is a skill rare indeed. Most of it comes down to sentiment, a thing that can not be proven definitively one way or another, but which through the effort of skilled criticism can be conveyed. The authors themselves can, of course, simply opt to tell us what the whole idea was, were we but to ask them.

The second question is more immediate, and easier to get to work on in a methodological fashion. In essence, it consists of empirically analyzing the words of a text to see what they do, and how they go about doing it. This, too, requires a keen awareness and attention to detail, especially when it comes to those aspiring to critique a poem, where placing a word this way rather than that can imply a world of difference (and different worlds). The poem performs this intricate dance of implication as it is written, and a careful critic can tease out what’s what by means of looking very closely.

The fallacy, according to Wimsatt and Beardsley, lies in conflating the two questions and ending up dismissing the efforts to answer the one by reference to how they would have answered the other. Which seems abstruse in the abstract, but by virtue of the 70 years of popular culture that have taken place since the date of publication, we have the advantage of a very concrete example to straighten the whole deal out. The time of Rowling has come.

Rowling, famously, proclaimed that Dumbledore (benevolent headmaster and dubious pedagogue) was in fact gay the whole time. This is a clear statement of intent, solidly and unequivocally answering the question of what she meant Dumbledore to be. A careful reading of the books, however, finds scant evidence of the proclaimed gaiety – so scant, in fact, that no textual evidence whatsoever is to be found. Which actualizes the second question in a very dramatic fashion, and forces us to confront the fact that author and text say different things.

One option would of course be to simply accept the proclamation and read Dumbledore as gay from now on. A straightforward solution, but one that lacks the quality of being critical, or (given the lack of in-text occurrences where it would have made a dramatic difference) of even being meaningful. Accepting the oracular proclamation answers the question, but it does not further our understanding of anything. Critically, it is a dead end.

A more interesting option would be to interrogate the significance of a character being written in such a way that their sexuality does not seem to matter in the slightest, to such a degree that it can flip-flop back and forth without anything changing. This, I reckon, is a more interesting question to ponder, and one which furthers our understanding of (among other things) representation in popular media. By sticking to the text as written, we can get to work on the important stuff.

There is more to the intentional fallacy, however. The fate of criticism itself hangs in the balance. If a critic, after painstakingly walking us through a body of work, concludes that it says something very specific, something which fundamentally contradicts the professed values of the author, then this conclusion should stand on the basis of the critical demonstration that led to said conclusion. It is a question of the second type, answered by a methodology suited for that kind of question. Given this, the author should not then be able to make the whole thing go away by simply saying “no it doesn’t”.

Such a state of things would render the whole critical endeavor meaningless. Which, as fallacies go, is a big one.


McLuhan: Understanding media

Not very many know this, but McLuhan is an inherently funny author. Especially if you take him at his word and think in terms of media as the extension of man. Not only does this center the human being as the locus of analysis (if something extends something else, then the characteristics of this something else become of vital importance to understanding what is afoot), it also makes media a very bodily experience. Television is an extension of your eye, and allows you to see really far. A car is an extension of your legs, and allows you to run very fast. A book is an extension of your memory, and allows you to precisely recall things on demand. A house is an extension of your skin, and allows you to stay warm and cozy. Everything becomes very useful all of a sudden, and thinking about media this way centers just in what way it is useful. Depending on which bodily part it extends, its usefulness by necessity takes on different forms.

When McLuhan said that the medium is the message, what he meant was that because it extends your body, you necessarily form yourself around whatever the medium might be. Watching television means facing the set, often sitting down for extended periods of time. Driving a car means being in the car for the duration of the ride. Reading a book means entering into the complex negotiation between eyes, hands and various other body parts that struggle to make themselves relevant as the pages turn. And a house only works as a second skin for as long as you are in it; going outside means you’ve stopped engaging with it as a medium. Whatever is going on content-wise, your body contorts to suit the demands of the medium. This is, in the most direct of senses, the message.

Once you get into the habit of thinking of objects as media, and media as bodily extensions, it’s hard to get out of it. Sofas are extensions of our backsides, keyboards are extensions of fingers, flutes are extensions of our lips – everything is an extension of something, and knowing the specific something of an object allows us to think about it in a clearer manner. If nothing else, it allows us to understand what we are doing all day. And, possibly, why we ache in places we’d seldom think of otherwise.

This only goes so far, however. Being centered around the human body is refreshing and all, but eventually you will run up against something that is not an extension of an individual human being, and the whole thing breaks down. Parliamentary democracy, for instance, does not extend any particular body part, inasmuch as it is a form of collective decision making. It is by definition a group of human bodies coming together, which is something else than extending any one of the individuals present. The whole is greater than the sum of its parts. We can only run with it so far; eventually, other metaphors and ways of thinking will be necessary to complete the picture.

Having encountered a limit to the usefulness of a line of thought does not obviate said usefulness. As with everything else in life, it just means we have to use it in moderation, applying it when it might garner useful insights and picking up some other concept when it does not. A screwdriver does not a complete toolset make, but a toolset would do well to include at least one screwdriver.

This central limitation can spark our imagination in interesting directions, though. If there are things that are not extensions of the human body, then just what are they extensions of? What non-human entity is the intended user of these strange devices and artifices which have sprung into being? Are they friend, foe or utterly indifferent? How are we to think of these non-humans amongst us?

When gazing upon a great piece of machinery, far beyond any single human being in size and mechanical complexity, this is a humbling thought to have. And it is a thought that forces us to question just who it is for. If it works beyond human scale, without human intervention, along trajectories utterly orthogonal to the human form – have we not built ourselves out of relevance?

The medium is, indeed, the message.


Lachman: Turn off your mind

We are living in strange times, where everything seems to be getting stranger every day. All that is solid melts into air, and everything we take for granted turns out to have been a temporary coincidence imprinted upon us by the accident of being impressionable at a certain place at a certain time. Kids these days don’t know the first thing about the most obvious of topics, while the old ones (the supposed fount of established wisdom) are profoundly ignorant about what’s what these days. Strangeness is afoot, and thus there is a need for some stable, familiar, non-controversial comfort food for the soul.

What, then, could be more comfortable than a potted history of the Age of Aquarius, the psychedelic 60s, the explosion of occult mysticism into the mainstream culture? Surely, by now, this is the most familiar footing to be found, if such a thing is to be found anywhere.

And, indeed, it is strangely comforting to read Lachman’s who’s who of the occult 60s. There was magic in the air, most of which eventually boiled down to the trifecta of sex, drugs and a perpetual need to keep enough of a media buzz rolling to ensure sufficient funds were available to keep the magic lantern alight. It should come as no surprise that rock and roll was one of the primary means of ensuring positive vibes and cash flows, but it was far from the only means of keeping it up. Just as the Beatles incorporated superficial elements of Jung and eastern mysticism into their musical works, psychedelic evangelists on the lecture circuit pushed the virtues of turning on, tuning in and dropping out, with the briefest of hand gestures towards ancient spiritual practices. The goal being not so much to shepherd the lost souls of the post-war generation towards enlightenment as it was to secure another gig or another book contract. Or, as the case might be, scoring another hit. All of this was profoundly new and profoundly strange at the time; the alchemy of time passing has turned it into a well-worn familiar cultural touchstone. The UFO arrived, and we were on it.

The comfort of familiarity is at odds with the stated premise of the book, to expose the dark side of the flowery 60s. Beneath the peace, love and understanding lurks a vast subterranean architecture of (distinctly non-spiritual) drug abuse, non-consensual sex and brainwashy cults with varying degrees of manslaughter attributed to their names (or to the names of their invented deities, who sometimes coincided with the personage of the cult founder). Far from being harmless wishy-washy mumbo-jumbo, the new age inherited a legacy of depravity to rival anything the old age could throw at us. Indeed, the frequent, explicitly reverent depictions of past Nazi occult practices (actual or imagined) hint that the new age might very well be a not too subtle continuation of the old age, albeit in slightly more flowery prose. And yet, the familiarity is all-encompassing. The mood is one of high weirdness, but it is weirdness that has been around for so long that imagining a world without it would be a more herculean effort of reconstructive archaeology than simply accepting the presence of a third eye or astral body. The Age of Aquarius did not come to pass, but its failure to materialize brought it about as firmly as any immanentization of the eschaton ever would.

What did come to pass between the publication of the book in 2001 and now was twenty years of accelerated weirdness. Some of this acceleration can be attributed to the passage of time and the opportunity to get used to the ideas – time being the great alchemical cauldron – but the internet is to be blamed and/or praised in equal measure. Getting the word out in the 60s was an ordeal, meaning that the words that did get out had overcome the challenge of effort; while not impossible to do, this ensured there would be less verbiage overall. There was a Crowley, a Lovecraft, one set of Beatles, and any set of derivatives or combinations would have to make an effort to get heard beyond their immediate physical surroundings. High weirdness it might be, but it was also high weirdness with a manageably low rate of iteration; given enough library time, a person could eventually catch up. To be contrasted to the faster pace of today, where being offline for a week means certain portions of occult developments are simply unavailable to you, the iterations having morphed so fast that retracing the steps becomes both impossible and meaningless. By the time the latest doge purveyor has turned out to be a milkshake duck, four new distracted boyfriends have taken their place. There is simply too much strangeness afoot to catch, let alone keep, up.

Part of the familiarity emanating from the book, I suspect, comes from the eternal recurrence of the same motivations then as now. A 60s mass producer of somewhat coherent neo-spiritualist gobbledygook (goo goo g’joob) aiming to pay the bills is eerily similar to the present day mass producer of “content” (goo goo g’joob) aiming to pay the bills. The words, frames of reference and amounts of drugs consumed might be different (might), but the overall aim remains the same. Gotta pay the bills, man, and the capitalist system allows you to do so whilst raging against the very selfsame man.

What, then, might the familiar dark side of the present be? Readers of Turn off your mind will not be enlightened in this regard, but they will end up thoroughly introduced to the weirder aspects of contemporary counterculture. Which, all things considered, is not a bad thing to be; time being cyclical, these things are bound to return again and again in different forms, gently nodding in recognition to those in the know.

Lachman: Turn off your mind

de Beauvoir: the Second Sex

With some books, reading the table of contents is sufficient. Merely by knowing the topics that are covered within the bounds of a book, an educated person can glean the kinds of arguments made, the overall gestalt of the discourse. Some books are quite explicit about this, while others require a more subtle reading between the lines. Sometimes, it is a mixture between the two, where a retroactive glance (perchance to find a specific section) reveals that it was all there all along, plain for everyone with eyes to see. Some books are meant to open those eyes, and the Second Sex is definitely one such book.

To be sure, “The point of view of historical materialism” might not strike the casual reader as a key heading at first glance, but retroactively it stands out as a significant keystone. So too do the headings listed under the keyword “situation”: The married woman; The mother; Social life; Prostitutes and hetaeras; From maturity to old age; Woman’s situation and character. The instant, intuitive takeaway from this string of words is that this is a book about women. The retroactive, even more intuitive takeaway is that this is a no-nonsense, materialist book about women. And about how everything else is not about women.

Reading de Beauvoir – the aforementioned chapter in particular – is a tour de force introduction to gender dynamics. A marriage is not just something that happens once a young man finds a suitable woman to settle down with; it is quite literally the determining factor with regard to how one’s life trajectory will shape itself over the coming (possibly remaining) decades. Becoming pregnant is not just a cute period of time preceding parenthood filled with anecdotes about ice cream; it is quite literally a matter of life and death, where complications could prove fatal to mother and child both. The following period of being a mother – at home, unpaid, isolated – is not an unproblematic given either. At every stage of life, de Beauvoir examines womanhood and shows it to be a constant struggle under unfair conditions against the full brunt of social expectations. All this in a no-nonsense, straightforward way which would require more effort to misunderstand than to comprehend.

The key to understanding the significance of the Second Sex is to know that this – any of it – was simply not done before she did it. Like any other thing that is simply not done, it had been done with regularity and alacrity since time immemorial. It had also been swept under the rug, like so many other embarrassing official secrets one simply does not speak about. Everyone knows it, everyone does it, but no one speaks about it. Until now, in unequivocal terms. Now there is a book which documents it all, for everyone to see. The cat is firmly out of the bag.

The key to understanding the backlash to the book – and indeed to feminism in general – is to see it as an attempt to get the cat into the bag again. To return to a state of blissful willful ignorance, where women’s issues were pushed aside and relegated to those to whom they belonged: women. Women’s issues were private, personal issues, and therefore it was categorically wrong to seek public recourse to solve these problems, no matter how systematically recurring they were. Things were fine the way they were, when women grinned and bore it in silence. Don’t ask, don’t tell. Definitely don’t voice your long-held discomfort using the new vocabulary gained from having these issues formulated in a book.

Things have gotten better since the book’s publication in 1949. Some progress has been made in the seventy years since then. But it would be a mistake to think this dynamic a solved issue. Even now, gender studies is seen as an optional extra, something one does above and beyond the actual work, the things that matter. Bringing up the very real material consequences of actually existing policies that primarily affect women is still seen as a minor speed bump, an inconvenience. An embarrassment, albeit a publicly known one. Everyone knows it, everyone does it. But to speak of it?

That is simply not done. Not even in 2019.

de Beauvoir: the Second Sex