What else does the novel, by the very nature of its elemental form, teach us than that there is some relation, or at least should be, between our internal subjective states and the world in which we move. Foreground / background. Protagonist / context. Romance / history. The family / the city. Wires run from the one to the other, from the outside in and back again. Almost every name of a novelistic subgenre or period movement (realism, naturalism, modernism, postmodernism, to name just a few of the recent ones) names a different mode of wiring. Shifts in genre represent new ideas about how to write the machine. How tangled or untangled it is, how many wires run hither and how many yon, what buttons there are to push to control the voltage and wattage of the link-up, how much bandwidth in total is carried.
Has there ever been a “terrorist attack” as uncanny as the one that happened yesterday in Woolwich? And uncanny is the right word – utterly familiar (tropes of beheading, tropes of “bringing the fight back to the oppressor,” the visibility of violence) yet at the same time utterly not (the refusal of both escape and self-immolative martyrdom, the implicit invocation of the laws of war when it comes to “innocent bystanders,” the further refusal to “let the event speak for itself,” or be spoken for by leadership organisations far away and ex post facto, or through pre-recorded statements aired after the event, and the immediate extinguishing of the fear of further attacks, at least by the same actors, as per Boston). With this one, we seem to slip from the genre called “terrorism” to something else: a gruesome morality play about the calculus of war, the algebra of carnage. Street theatre allegory that trades the fake blood for the real.
So was it the “genre shift” that explains the strange reactions of the bystanders who observed the attack and its aftermath? Women reportedly ran over, in the course of the attack itself, to attempt to help the dying or dead soldier, thinking that the three actors in this play were rehearsing an all-too-common everyday scene we call “a car accident.” Who was it, and why was it, that stayed to film a man whose arms were drenched in blood, who carried a knife and a cleaver in his left hand, while he delivered his final soliloquy? What to make of these recorded conversations between the killers and their audience?
Is there a better answer than that a genre had been disrupted or reinvented, and thus the rules that normally apply (murderers try to escape, bystanders flee, etc.) were unavailable for consultation?
Genre is also another name for myth. While it sometimes postures as science, it has far more in common with superstition. Throw salt over your shoulder, and good luck will follow. One character says something, the other, naturally, touches wood. We now, in our pharmacologically-lexiconed period, are far more likely to call superstitious practices the symptoms of Obsessive Compulsive Disorder. One has to check, and check again, that the water’s not running in the bathroom before one leaves the flat. Push hard three times on the front door to make sure it’s locked… or else another storyline will ensue, the one that has an evening return to a gaping door, the laptop gone, the bedroom drawers dumped. This is literally it – some sort of chemical depletion or superfluity occurs, some traumatic event takes place, and then an almost mystical belief in certain irrational storylines takes over. To disobey the mandates of genre is to open oneself to an unhappy ending.
Last night: this news-story. On television and especially on the web. Fraught conversations about the arithmetic of death. And then a phone call. Bad news of the sort that late night phone calls usually bring. The trope of the middle-aged son and the ailing parent. The novel teaches us to think of the one thing as related, if complex, to the other. At least metaphorically, or even just formally. What is happening out there of course is a prelude to what is about to happen right in here, in the space of the family home and especially the skulls (and bodies) of those that inhabit it.
Think of the script. The call in the night in the movie. The early middle-aged son who ignores the call momentarily, caught up as he is in an argument about the gruesome news on television. The politics of violence, the physics of the world system. The cigarette whose space allows a second thought, a second glance at the mobile phone. Ominous – we can imagine what will happen next. The film that will play out from its start in a graphic sequence of news images morphs into a dark family drama. How does one cope when the worst comes home to roost?
A fallacy (a word quite close to “myth” and “superstition”) that doesn’t have a name, one that is hardwired into the DNA of the novel as a form. I’ve tried to name it in things that I’ve written, in seminars that I’ve led. Sometimes it seems to have more to do with temporality. What happens after what, or at the same time as each other. We could call it presumptive fallacy. Retro-prospective fallacy. The fallacy of coincidence. Sometimes it’s simply about the structural mandate that the foreground be read in the light of the background and vice versa. Contextual fallacy? Flaubert, disrupter through over-fulfilment of so many genre mandates, so early in the game, was aware of the problem. Think of Frédéric waiting for Madame Arnoux while the revolution kicks off a few blocks away in L’Éducation sentimentale. The New Critics liked to label fallacies on the part of the reader. I am more interested in the fallacies inherent in artistic forms themselves, even though obviously these can turn into the former and often do through the sort of training that novels provide.
But of course, myths are also true in a very serious sense. I don’t simply mean that we are what we believe, that what we think is the only thing there is. Although that may well be true. In this case, it is also useful to think of myth or superstition or even fallacy as a customary practice, a mode of operation, running orders against confusion. The world, as we know, lives out the demands of its many operative genres every single day. Perhaps now as much as ever. A myth is habitus, generated by practice, an operating manual written and re-written each time we act.
The novel makes us stupid in one sense, solipsistic, tends to make us look for our angle on things, what does this mean to us? What were the attackers yesterday, in both their words and deeds, during and after the attack, trying to say to me? Or at least to us? There is a counter-instinct, for those disciplined a certain way, to try to climb up the ladder of transcendent wisdom, to disavow the inwrought narcissism of our conditioned response. To gasp and yell when the news commentators reduce a global question to a local one, and a serious question to a matter of insanity or unanchored spite. They might think what they want, but they have no right to act it out here. To force us into these stringent attempts to adjust the genre back to something we’re comfortable with.
But the attempt to climb out of the fray of self-interest, however complex, however Wallace-ianly convoluted and self-reflexive, is of course a trope in yet another sort of story, another sort of myth, one that – we need to remind ourselves – has the deepest affinities with an imperial mindset, one that takes the world panoptically, one for whom impersonality is a transferable skill.
What retards political development – and really contemporary thought as a whole – more right now than an inability to come to terms with the relationship between the self, located wherever it might be, and the world-system as a whole? At least here where we are? What are we, sequestered in the posh uptowns and suburbs of the global system, meant to think or say when we are in the wrong jurisdiction? We know not to fall into the ethical mode, charity is of no use, but there may be an exitless cloverleaf, a highway cul de sac, ahead if
Despite all the complicities of the novel, these generic demands and the demands of its sub-genres, the promise remains that the bad faith strictures themselves make space for revelatory manipulation, clarifying detournage. They even, potentially, lead us toward the formulation of simpler questions, questions more pressing in their semi-solipsistic simplicity. Like this one, which, with a little revision, some shifts in seemingly inevitable consequence, the script I outlined above could be made to ask:
Who has to die in the prime of life, and who is afforded the luxury of death that comes at an actuarially appropriate stage?
I was asked by students and others several times last week what I made of Coetzee’s new novel. I’ve been a bit annoyed with myself that I haven’t really had any good answers yet, and have been forced to make the same gestures towards “bafflement” that just about all the reviews I’ve read have made. But I’m starting to think that it’s our bafflement itself that we should be looking into – that there’s more to be made of it than a shoulder-shrug.
Chris Tayler, in his review of the novel in the LRB, gives us a good start at a list of the questions begged but left unanswered in the course of the narrative:
As a reading experience it’s utterly absorbing, with almost painful levels of meta-suspense as you try to work out where the story is aiming to lead you. Questions are as close as Coetzee comes to direct statements, and the novel is richly generative of these. Is the world it depicts an afterlife, a pre-life, a mere stage in an unending transmigration of souls, a realm of ideal images as discussed in Coetzee’s recent essay on Gerald Murnane in the New York Review of Books, or none of the above? How does the Jesus plot fit in with this? How come Inés has access to sausages? Do the deadpan jokes get less frequent or just ascend to a higher sphere?
One of the things that I try to teach my students is to develop a more nuanced take on literary “difficulty.” Most of us, especially when we’re starting out at reading “difficult” books and thus insecure about our ability to understand, let alone interpret, them, take it on instinct that there is always something to figure out in such works. One acquires a “reader’s guide” to Ulysses, one takes up the challenge of the notes at the end of The Waste Land – one struggles to “solve” the riddles of the poems, to understand the allusions, etc. But what if (so I argue in my first-year seminars) we’re meant in dealing with these texts not so much to penetrate the difficulty but to have an experience of difficulty’s opacity itself? (My favourite example to use in teaching is the beginning of the second section of The Waste Land, where I think Eliot’s putting us through a sort of routine having to do with the “dissociation of sensibility.” We simply can’t see the image described, and perhaps that’s meant to make us feel our own post-lapsarianness…)
Why does Inés have access to meat – and what is La Residencia in the first place?
It has been a preoccupation of Coetzee’s for quite a while, to tantalise the reader with the sense that there are answers to questions raised by the text, that there is an interrogate-able reality lurking behind the narrative itself, and thus, when the answers fail to arrive, perhaps to push the reader back into an awareness of her or his own need for answers in the first place. (Think for instance of Disgrace, where the reader is left in the same position as David Lurie himself – completely unable to understand the reasons why his daughter Lucy does what she does [or doesn’t do what she doesn’t do] in the wake of her rape.) In this case, why, in the end, are we bothered by Inés’s access to sausages? Why are we worried about the nature of La Residencia? It feels as though, at the beginning of the work, Simón would have asked them too – but by the end of the novel, he’s lost his appetite for questions of this sort – his appetite for questions about appetite and its fulfilment. In other words, the reader’s persistence in wondering falls out of sync with the characters in the text – it’s we readers who remain new arrivals at Novilla.
Likewise with the question “How does the Jesus plot fit in with this?” Not only is the abstraction inherent in this sort of typology or allegorical sense incompatible with the putative Jesus’s incessant refusal of such abstraction, but the question is exactly the sort that Coetzee’s fiction time and again refuses to solve for us – or stages the struggle and failure to solve on the part of his characters. Again, think of Lurie’s attempts to place his daughter into a discernible “category” of rape victim after their attack, or even more pressingly, the efforts of the administrators of the camp that Michael K ends up in at the end of his novel to deduce the “meaning” of this man who has come into their care and custody.
Michaels means something, and the meaning he has is not private to me. If it were, if the origin of this meaning were no more than a lack in myself, a lack, say, of something to believe in, since we all know how difficult it is to satisfy a hunger for belief with the vision of times to come that the war, to say nothing of the camps, presents us with, if it were a mere craving for meaning that sent me to Michaels and his story, if Michaels himself were no more than what he seems to be (what you seem to be), a skin-and-bones man with a crumpled lip (pardon me, I name only the obvious), then I would have every justification for retiring to the toilets behind the jockey’s changing-rooms and locking myself into the last cubicle and putting a bullet through my head.
With just a shift of a few details and a reduction in intensity, this passage from Michael K could stand as a rendition of what I was feeling when asked last week “what the new novel means” and probably isn’t all that far away from the sort of frustration that the reviewers felt as they worked up their pieces for the magazines, or so I guess…
Coetzee is often – with obvious justification – labelled a “meta-fictional” writer: his works build on and distort previous literary works, or are “about” the act of writing itself. But they are also books that generate – or should generate – a sort of “meta-reading.” Just as the writer is writing about writing, when we read them, we are reading about reading. Or at least that seems to be the point. Were a new (or even the first) messiah to arrive on earth, would we be so concerned with his meaning and relation to precedent, his conformity or lack of conformity to the models that we would impose, that we would fail to listen to him right from the start? With our inherited instrumental logics and instinct for abstract categorization, our need to extract reified meanings from things, would we be able to read him at all?
2012 is over. Now back to the regularly scheduled programming.
This is not to suggest that minimalism finds its realisation in the repudiation of the category of expression as such. On the contrary, the inaugural model of minimalism, Ernest Hemingway, simply opened up another alternative path to expression, one characterised by the radical exclusion of rhetoric and theatricality, for which, however, that very exclusion and its tense silences and omissions were precisely the technique for conveying heightened emotional intensity (particularly in the marital situation). Hemingway’s avatar, Raymond Carver, then learned to mobilise the minimalist technique of ‘leaving out’ in the service of a rather different and more specifically American sense of desolation and depression – of emotional unemployment, so to speak.
Interesting thought: that the outrolling of literary history and influence reveals that the apophatic isn’t just “mentioning by not mentioning” but in the long run is an index of the fact that there was nothing to mention in the first place. Carver takes up a style that is meant to suggest depths by remaining on the surface only to realise that there’s only ever surface. The ineffable shifts from what can’t be said to what’s not there to be said in the first place. Or even that the adoption of minimalism leads fiction into a perversely Pascalian situation: minimalise, delete your words, and you will believe that there was nothing to delete in the first place.
Interesting synchronicity. The other day I was in a Waterstones and was stunned yet again at the fact that the “headless women” book covers are still proliferating. What are the “headless women” book covers? Well, take a look here or here or here. Or take a look at this one, which happened to be on display on the 3-for-2 rack at the Waterstones in question, and which was written by an author I’ve met a few times.
It’s pretty obvious what’s interesting / discomforting / grating about the proliferation of covers of this sort. Implicit in their ubiquity is a sense on publishers’ parts that female readers, when choosing a novel, want to be able to project themselves into the work, to occupy the place of the female protagonist. If the person pictured on the cover of the book were to possess a head, and in particular a face, this would somehow block their ability to do so: But I don’t have red hair! But my eyes aren’t that colour! My cheekbones aren’t at all like that! It’s notable that works aimed at male audiences don’t take the same tack – often foregoing the depiction of people on the cover altogether.
Pretty condescending, isn’t it? Unfortunately one has a sense that the publishers know what works, and wouldn’t be doing this if it didn’t work to some degree. I’ve seen an argument on twitter – now lost to us, as it was months ago – in which a PR person for a publisher responded to criticism of the practice with something like “I know, I know – it’s awful. But what do you want us to do about it? The books won’t move off the shelves if we don’t.”
Depressing. But here’s the interesting part. It just so happens that I had assigned – and had to prepare to teach early this week – a fantastic essay by Catherine Gallagher called “The Rise of Fictionality,” which was published in Franco Moretti’s magisterial anthology on the novel. (Luckily for you – and for me as I rushed to get the students a copy of it – PUP has the essay on-line here.) The essay is a vivid and succinct historicization of the emergence of fiction as a category in eighteenth-century Britain, a category born out of divergence both from “factual” writing and (and here’s where the brilliance of the piece truly lies) “fantastical” writing as well.
I won’t go into all the nuances of the argument here – do yourself a favour and read the piece. But here’s a few paragraphs that seem especially relevant to the acephalous women of Waterstones:
That apparent paradox—that readers attach themselves to characters because of, not despite, their fictionality—was acknowledged and discussed by eighteenth-century writers. As I have already mentioned, they noticed that the fictional framework established a protected affective enclosure that encouraged risk-free emotional investment. Fictional characters, moreover, were thought to be easier to sympathize or identify with than most real people. Although readers were often called to be privileged and superior witnesses of protagonists’ follies, they were also expected to imagine themselves as the characters. “All joy or sorrow for the happiness or calamities of others,” Samuel Johnson explained, “is produced by an act of the imagination, that realizes the event however fictitious . . . by placing us, for a time, in the condition of him whose fortune we contemplate” (Johnson 1750). What seemed to make novelistic “others” outstanding candidates for such realizations was the fact that, especially in contradistinction to the figures who pointedly referred to actual individuals, they were enticingly unoccupied. Because they were haunted by no shadow of another person who might take priority over the reader as a “real” referent, anyone might appropriate them. No reader would have to grapple with the knowledge of some real-world double or contract an accidental feeling about any actual person by making the temporary identification. Moreover, unlike the personae of tragedy or legend, novelistic characters tended to be commoners, who would fall beneath the notice of history proper, and so they tended to carry little extratextual baggage.
As we have noticed, they did carry the burden of the type, what Henry Fielding called the “species,” which he thought was a turntable for aiming reference back at the reader; a fictional “he” or “she” should really be taken to mean “you.” But in the case of many novel characters, even the “type” was generally minimized by the requirement that the character escape from the categorical in the process of individuation. The fact that “le personnage . . . n’est personne” was thought to be precisely what made him or her magnetic.
Some recent critics are reviving this understanding and venturing to propose that we, like our eighteenth-century predecessors, feel things for characters not despite our awareness of their fictionality but because of it. Consequently, we cannot be dissuaded from identifying with them by reminders of their nonexistence. We have plenty of those, and they configure our emotional responses in ways unique to fiction, but they do not diminish our feeling. We already know, moreover, that all of our fictional emotions are by their nature excessive because they are emotions about nobody, and yet the knowledge does not reform us. Our imagination of characters is, in this sense, absurd and (perhaps) legitimately embarrassing, but it is also constitutive of the genre, and it requires more explanation than the eighteenth-century commentators were able to provide.
That is to say, the “headlessness” of the fictional character – their openness to readerly identification precisely because they are unblocked by connection to a “real person” – may be “absurd and (perhaps) legitimately embarrassing,” as are the images on the covers in the bookshop, but it is also one of the things that makes fiction what it is, and is what accounts for the special mental and emotional states that we experience as we read. But to take this a step further (and here I am drawing out some of Gallagher’s arguments and taking them in a slightly different direction), it’s possible that reflections of Gallagher’s sort (and even the instinct catered to by the contemporary covers) point us to a different sensibility about the ideology of fiction.
In short, we are made anxious about the protagonism of fiction, the structural mandate that it forces or soothes us into identification with the autonomous or semi-autonomous individual as such, that it serves as an advertisement for intricate interiority and in so doing may urge us away from the consideration of the exterior. But if it is the case that the fictionality of the fictional character is grounded on a certain availability, a certain openness, even a certain whateverness, we might be licensed to think that the ideological underpinnings of fiction are far more complex than conventional (literary Marxist) wisdom suggests. Rather than a cult of personality, fiction, at base, might start to seem a space for the emergence of impersonality – and rather than simply markers of readerly solipsism and commercial cynicism, the book covers above might suggest a nascently radical instinct lurking just below the surface of the Waterstones transaction.
At the place where I teach, we still have the students do two courses (one at the beginning of their time with us, and one at the end) in “practical criticism.” We don’t call it that (we just call it “criticism”) but that’s what it is. If we were an American institution, we’d think of it descending out of what is termed “The New Criticism,” but because we are where we are, it’s seen as an import from Cambridge. As the folks to the north-north east describe it on their department website:
Practical criticism is, like the formal study of English literature itself, a relatively young discipline. It began in the 1920s with a series of experiments by the Cambridge critic I.A. Richards. He gave poems to students without any information about who wrote them or when they were written. In Practical Criticism of 1929 he reported on and analysed the results of his experiments. The objective of his work was to encourage students to concentrate on ‘the words on the page’, rather than relying on preconceived or received beliefs about a text. For Richards this form of close analysis of anonymous poems was ultimately intended to have psychological benefits for the students: by responding to all the currents of emotion and meaning in the poems and passages of prose which they read the students were to achieve what Richards called an ‘organised response’. This meant that they would clarify the various currents of thought in the poem and achieve a corresponding clarification of their own emotions.
If you’ve been a reader of this site for a while, or are familiar with my work in “the real world,” you might think that I’d buck against this model of instruction. Any good materialist critic of course should. It approaches the literary work in isolation from its context – the work as an ahistorical entity that emerged autonomously and without the frictional influence of the writer who wrote it or the world that the writer wrote it in.
But on the other hand – and this is why I not only do not buck against it but actively enjoy teaching on this course, perhaps more than any other – it is an extremely valuable method for enabling students to develop “against the grain” critical insights about texts. In the absence of astute attention of the “practical criticism” variety, it’s very difficult for students (or, really, anyone) to develop convincingly novel interpretations of texts. The close attention to the words on the page, and the dynamics of their interaction, not only sets the stage for an appreciation of the “value added” that comes of distilling whatever contextual and personal issues inform the piece once the history is added back in, but, due to the multiplicity and idiosyncrasy of possible interpretations, provides an opening for critical newness – for the saying of something provocatively different about the work.
So how do I teach “practical criticism”? In the seminar groups that I lead, I model and encourage the following “flow chart” of thought: Anticipate what other intelligent readers of this piece might say about it. Try to imagine the “conventional wisdom” about it that would emerge as if automatically in the minds of the relatively well-informed and intelligent. And then, but only then, figure out a perverse turn that you can make within the context of but against this conventional wisdom. “Of course that seems right, but on the other hand it fails to account for…” “On first glance, it would be easy and to a degree justifiable to conclude that…. But what if we reconsider this conclusion in the light of….”
Students tend to demonstrate resistance, early on, to this practice. For one thing, especially in the first year, they don’t really (and couldn’t possibly) have a fully developed sense of what the “conventional wisdom” is that they’re supposed to be augmenting, contradicting, perverting. At this early stage, the process requires them to make an uncomfortable Pascalian wager with themselves – to pretend as though they are confident in their apprehensions until the confidence itself arrives. But even if there’s a certain awkwardness in play, it does seem to exercise the right parts of the students’ critical and analytical faculties so that they (to continue the metaphor) develop a sort of “muscle memory” of the “right” way to do criticism. From what I can tell, encouraging them to develop an instinct of this sort early measurably improves their writing as they move through their degree.
But still (and here, finally, I’m getting to the point of this post) there’s a big problem with all of this. I warn the students of this very early on – generally the first time I run one of their criticism seminars. There’s a big unanswered question lurking behind this entire process. Why must we be perverse? What is the value of aiming always for provocative difference, novelty, rather than any other goal? Of course, there’s a pragmatic answer: Because it will cause your writing to be better received. Because you will earn better marks by doing it this way rather than the other. Because you will develop a skill – one that can be shifted to other fields of endeavour – that will be recognised as what the world generally calls “intelligence.” But – in particular because none of this should simply be about the pragmatics of getting up the various ladders and depth charts of life – this simply isn’t a sufficient response, or at least is one that begs as many questions as it answers. What are, after all, the politics of “novelty”? What are we to make of the structural similarity between what it takes to impress one’s markers and what it takes to make it “on the market,” whether as a human or inhuman commodity? What if – in the end – the answers to questions that need (ethically, politically) answering are simple rather than complex, the obvious rather than the surprising?
In my own work, I’m starting to take this issue up. And I try to keep it – when it’s appropriate – at the centre of my teaching, even if that can be difficult. (And there’s the further matter that to advocate “simple” rather than “complex” answers to things is itself an “against the grain” argument, is itself incredibly perverse, at least within an academic setting. There’s a fruitful performative contradiction at play that, in short, makes my advocacy of non-perversity attractively perverse!)
I’ll talk more about what I’m arguing in this new work some other time, but for now, I’m after something else – something isomorphic with but only complexly related to the issues with “practical criticism” and the issues that it raises. It has to do with politics – in particular the politics of those of a “theoretical” or in particular “radically theoretical” mindset, and the arguments that they make and why they make them.
Take this article that appeared yesterday on The Guardian‘s “Comment is free” website. The title of the piece (which of course was probably not chosen by the author, but is sanctioned I think by where the piece ends up) is “What might a world without work look like?” and the tag under the title continues, “As ideas of employment become more obscure and desperate, 2013 is the perfect time to ask what it means to live without it.” While the first two-thirds of the article is simply a description of the poor state of the labour market, it is the end that gets to the “provocative” argument at play.
But against this backdrop – rising inflation, increasing job insecurity, geographically asymmetrical unemployment, attacks on the working and non-working populations, and cuts to benefits – a debate about what work is and what it means has been taking place. Some discussions at Occupy focused on what an anti-work (or post-work) politics might mean, and campaigns not only for a living wage but for a guaranteed, non-means-tested “citizen’s income” are gathering pace.
The chances of a scratchcard winning you a life without work are of course miniscule, but as what it means to work becomes both more obscure and increasingly desperate, 2013 might be the perfect time to ask what work is, what it means, and what it might mean to live without it. As Marx put it in his 1880 proposal for a workers’ inquiry: “We hope to meet … with the support of all workers in town and country who understand that they alone can describe with full knowledge the misfortunes from which they suffer and that only they, and not saviours sent by providence, can energetically apply the healing remedies for the social ills that they are prey to.”
In other words, the best place to start would be with those who have a relation to work as such – which is to say nearly everyone, employed or otherwise.
It may be a somewhat bad faith line to allege that “interesting perversity” rather than some well-founded and straightforward belief is at work behind an argument of this sort, but in the absence of any substantive suggestions of what the answers to these questions might be, or in fact why these are the right questions to ask at the moment, what else are we to assume? It is provocatively perverse to suggest, at a time of stagnant employment and when people are suffering because they are out of work or locked in cycles of precarity, that we might do away with work altogether. It isn’t the standard line – but it’s a line that allows the author to avoid repeating the conventional wisdom about what a left response to such a crisis might be. This in turn affords an avenue to publication, as well as a place in the temporary mental canons of those who read it.
Unfortunately, of course, the Tories (and their ideological near-cousins in all of the other mainline parties) are also asking the same sort of questions about a world (or at least a nation) without work. How might one keep the tables turned toward what benefits employers? How might one keep wages (and relatedly, inflation) low but still spur “growth”? How might one manage this system of precarious non-work, at once depressing wages but keeping the employable populace alive and not building barricades? In short, the question of “What a world without work might look like” is a question that is just as pressing to the powers that we oppose as to people like the writer of this article.
We’ve seen other episodes of the same. During the student protests over tuition increases (among other things) I myself criticised (and had a bit of a comment box scrap over) the Really Free School and those who were busily advocating the destruction of the university system…. just as the government was doing its best to destroy the university system. That many of those making such “radical” arguments about university education were themselves beneficiaries of just such an education only made matters more contradictory, hypocritical, and frustrating.
In short, in countering some perceived conventional wisdom, in begging questions that seem to derive from a radical rather than a “reformist” perspective, the author (and others of her ilk) ends up embracing an argument that is not only unhelpfully utopian, but actually deeply compatible with the very situation that seems to provoke the advocacy of such a solution. I can’t help but sense that the same instinct towards perversity that makes for a good English paper – and, perhaps even more pressingly, a good work of reputation-building “theory” – is what drives a writer to take a line like this one at a time like this. One might counter that I’m being a bit of a philistine – that I’m closing off avenues of speculative thought and analysis. I’m not. I’m just wondering what the point of writing all this up in a question-begging article in a popular publication is, an article that does little more than raise unanswerable questions and then ends with what might as well be the banging of a Zen gong.