I was asked by students and others several times last week what I made of Coetzee’s new novel. I’ve been a bit annoyed with myself that I haven’t really had any good answers yet, and have been forced to make the same gestures towards “bafflement” that just about all the reviews I’ve read have made. But I’m starting to think that it’s our bafflement itself that we should be looking into – that there’s more to be made of it than a shoulder-shrug.
Chris Tayler, in his review of the novel in the LRB, gives us a good start at a list of the questions raised but left unanswered in the course of the narrative:
As a reading experience it’s utterly absorbing, with almost painful levels of meta-suspense as you try to work out where the story is aiming to lead you. Questions are as close as Coetzee comes to direct statements, and the novel is richly generative of these. Is the world it depicts an afterlife, a pre-life, a mere stage in an unending transmigration of souls, a realm of ideal images as discussed in Coetzee’s recent essay on Gerald Murnane in the New York Review of Books, or none of the above? How does the Jesus plot fit in with this? How come Inés has access to sausages? Do the deadpan jokes get less frequent or just ascend to a higher sphere?
One of the things that I try to teach my students is to develop a more nuanced take on literary “difficulty.” Most of us, especially when we’re starting out at reading “difficult” books and thus insecure about our ability to understand, let alone interpret, them, take it on instinct that there is always something to figure out in such works. One acquires a “reader’s guide” to Ulysses, one takes up the challenge of the notes at the end of The Waste Land – one struggles to “solve” the riddles of the poems, to understand the allusions, etc. But what if (so I argue in my first-year seminars) we’re meant, in dealing with these texts, not so much to penetrate the difficulty as to have an experience of difficulty’s opacity itself? (My favourite example to use in teaching is the beginning of the second section of The Waste Land, where I think Eliot’s putting us through a sort of routine having to do with the “dissociation of sensibility.” We simply can’t see the image described, and perhaps that’s meant to make us feel our own post-lapsarianness…)
Why does Inés have access to meat – and what is La Residencia in the first place?
It has been a preoccupation of Coetzee’s for quite a while: to tantalise the reader with the sense that there are answers to the questions raised by the text, that there is an interrogate-able reality lurking behind the narrative itself, and thus, when the answers fail to arrive, perhaps to push the reader back into an awareness of her or his own need for answers in the first place. (Think for instance of Disgrace, where the reader is left in the same position as David Lurie himself – completely unable to understand the reasons why his daughter Lucy does what she does [or doesn't do what she doesn't do] in the wake of her rape.) In this case, why, in the end, are we bothered by Inés’s access to sausages? Why are we worried about the nature of La Residencia? It feels as though, at the beginning of the work, Simón would have asked these questions too – but by the end of the novel, he’s lost his appetite for questions of this sort – his appetite for questions about appetite and its fulfilment. In other words, the reader’s persistence in wondering falls out of sync with the characters in the text – it’s we readers who remain new arrivals at Novilla.
Likewise with the question “How does the Jesus plot fit in with this?” Not only is the abstraction inherent in this sort of typology or allegorical sense incompatible with the putative Jesus’s incessant refusal of such abstraction, but the question is exactly the sort that Coetzee’s fiction time and again refuses to solve for us – or stages the struggle and failure to solve on the part of his characters. Again, think of Lurie’s attempts to place his daughter into a discernible “category” of rape victim after their attack, or even more pressingly, the efforts of the administrators of the camp that Michael K ends up in at the end of his novel to deduce the “meaning” of this man who has come into their care and custody.
Michaels means something, and the meaning he has is not private to me. If it were, if the origin of this meaning were no more than a lack in myself, a lack, say, of something to believe in, since we all know how difficult it is to satisfy a hunger for belief with the vision of times to come that the war, to say nothing of the camps, presents us with, if it were a mere craving for meaning that sent me to Michaels and his story, if Michaels himself were no more than what he seems to be (what you seem to be), a skin-and-bones man with a crumpled lip (pardon me, I name only the obvious), then I would have every justification for retiring to the toilets behind the jockey’s changing-rooms and locking myself into the last cubicle and putting a bullet through my head.
With just a shift of a few details and a reduction in intensity, this passage from Michael K could stand as a rendition of what I was feeling when asked last week “what the new novel means” and probably isn’t all that far away from the sort of frustration that the reviewers felt as they worked up their pieces for the magazines, or so I guess…
Coetzee is often – with obvious justification – labelled a “meta-fictional” writer: his works build on and distort previous literary works, or are “about” the act of writing itself. But they are also books that generate – or should generate – a sort of “meta-reading.” Just as the writer is writing about writing, when we read them, we are reading about reading. Or at least that seems to be the point. Were a new (or even the first) messiah to arrive on earth, would we be so concerned with his meaning and relation to precedent, his conformity or lack of conformity to the models that we would impose, that we would fail to listen to him right from the start? With our inherited instrumental logics and our instinct for abstract categorization, our need to extract reified meanings from things, would we be able to read him at all?
2012 is over. Now back to the regularly scheduled programming.
This is not to suggest that minimalism finds its realisation in the repudiation of the category of expression as such. On the contrary, the inaugural model of minimalism, Ernest Hemingway, simply opened up another alternative path to expression, one characterised by the radical exclusion of rhetoric and theatricality, for which, however, that very exclusion and its tense silences and omissions were precisely the technique for conveying heightened emotional intensity (particularly in the marital situation). Hemingway’s avatar, Raymond Carver, then learned to mobilise the minimalist technique of ‘leaving out’ in the service of a rather different and more specifically American sense of desolation and depression – of emotional unemployment, so to speak.
Interesting thought: that the outrolling of literary history and influence reveals that the apophatic isn’t just “mentioning by not mentioning” but in the long run is an index of the fact that there was nothing to mention in the first place. Carver takes up a style that is meant to suggest depths by remaining on the surface, only to realise that the depths are only ever surface. The ineffable shifts from what can’t be said to what’s not there to be said in the first place. Or even that the adoption of minimalism leads fiction into a perversely Pascalian situation: minimalise, delete your words, and you will believe that there was nothing to delete in the first place.
Interesting synchronicity. The other day I was in a Waterstones and was stunned yet again at the fact that the “headless women” book covers are still proliferating. What are the “headless women” book covers? Well, take a look here or here or here. Or take a look at this one, which happened to be on display on the 3-for-2 rack at the Waterstones in question, and which was written by an author I’ve met a few times.
It’s pretty obvious what’s interesting / discomforting / grating about the proliferation of covers of this sort. Implicit in their ubiquity is a sense on publishers’ parts that female readers, when choosing a novel, want to be able to project themselves into the work, to occupy the place of the female protagonist. If the person pictured on the cover of the book were to possess a head, and in particular a face, this would somehow block their ability to do so: But I don’t have red hair! But my eyes aren’t that colour! My cheekbones aren’t at all like that! It’s notable that works aimed at male audiences don’t take the same tack – often forgoing the depiction of people on the cover altogether.
Pretty condescending, isn’t it? Unfortunately one has a sense that the publishers know what works, and wouldn’t be doing this if it didn’t work to some degree. I’ve seen an argument on Twitter – now lost to us, as it was months ago – in which a PR person for a publisher responded to criticism of the practice with something like “I know, I know – it’s awful. But what do you want us to do about it? The books won’t move off the shelves if we don’t.”
Depressing. But here’s the interesting part. It just so happens that I had assigned – and had to prepare to teach early this week – a fantastic essay by Catherine Gallagher called “The Rise of Fictionality,” which was published in Franco Moretti’s magisterial anthology on the novel. (Luckily for you – and for me as I rushed to get the students a copy of it – PUP has the essay on-line here.) The essay is a vivid and succinct historicization of the emergence of fiction as a category in eighteenth-century Britain, a category born out of its divergence from both “factual” writing and (here’s where the brilliance of the piece truly lies) “fantastical” writing.
I won’t go into all the nuances of the argument here – do yourself a favour and read the piece. But here are a few paragraphs that seem especially relevant to the acephalous women of Waterstones:
That apparent paradox—that readers attach themselves to characters because of, not despite, their fictionality—was acknowledged and discussed by eighteenth-century writers. As I have already mentioned, they noticed that the fictional framework established a protected affective enclosure that encouraged risk-free emotional investment. Fictional characters, moreover, were thought to be easier to sympathize or identify with than most real people. Although readers were often called to be privileged and superior witnesses of protagonists’ follies, they were also expected to imagine themselves as the characters. “All joy or sorrow for the happiness or calamities of others,” Samuel Johnson explained, “is produced by an act of the imagination, that realizes the event however fictitious . . . by placing us, for a time, in the condition of him whose fortune we contemplate” (Johnson 1750). What seemed to make novelistic “others” outstanding candidates for such realizations was the fact that, especially in contradistinction to the figures who pointedly referred to actual individuals, they were enticingly unoccupied. Because they were haunted by no shadow of another person who might take priority over the reader as a “real” referent, anyone might appropriate them. No reader would have to grapple with the knowledge of some real-world double or contract an accidental feeling about any actual person by making the temporary identification. Moreover, unlike the personae of tragedy or legend, novelistic characters tended to be commoners, who would fall beneath the notice of history proper, and so they tended to carry little extratextual baggage. As we have noticed, they did carry the burden of the type, what Henry Fielding called the “species,” which he thought was a turntable for aiming reference back at the reader; a fictional “he” or “she” should really be taken to mean “you.” But in the case of many novel characters, even the “type” was generally minimized by the requirement that the character escape from the categorical in the process of individuation. The fact that “le personnage . . . n’est personne” was thought to be precisely what made him or her magnetic.
Some recent critics are reviving this understanding and venturing to propose that we, like our eighteenth-century predecessors, feel things for characters not despite our awareness of their fictionality but because of it. Consequently, we cannot be dissuaded from identifying with them by reminders of their nonexistence. We have plenty of those, and they configure our emotional responses in ways unique to fiction, but they do not diminish our feeling. We already know, moreover, that all of our fictional emotions are by their nature excessive because they are emotions about nobody, and yet the knowledge does not reform us. Our imagination of characters is, in this sense, absurd and (perhaps) legitimately embarrassing, but it is also constitutive of the genre, and it requires more explanation than the eighteenth-century commentators were able to provide.
That is to say, the “headlessness” of the fictional character – their availability to us because they are unblocked by connection to a “real person” and thus readily open to readerly identification – may be “absurd and (perhaps) legitimately embarrassing,” as are the images on the covers in the bookshop, but it is also one of the things that makes fiction what it is, and it accounts for the special mental and emotional states that we experience as we read.

But to take this a step further (and here I am drawing out some of Gallagher’s arguments and taking them in a slightly different direction): it’s possible that reflections of Gallagher’s sort (and even the instinct catered to by the contemporary covers) point us to a different sensibility about the ideology of fiction.
In short, we are made anxious about the protagonism of fiction, the structural mandate that it forces or soothes us into identification with the autonomous or semi-autonomous individual as such, that it serves as an advertisement for intricate interiority and in so doing may urge us away from the consideration of the exterior. But if it is the case that the fictionality of the fictional character is grounded on a certain availability, a certain openness, even a certain whateverness, we might be licensed to think that the ideological underpinnings of fiction are far more complex than conventional (literary Marxist) wisdom suggests. Rather than a cult of personality, fiction, at base, might start to seem a space for the emergence of impersonality – and rather than simply markers of readerly solipsism and commercial cynicism, the book covers above might suggest a nascently radical instinct lurking just below the surface of the Waterstones transaction.
At the place where I teach, we still have the students do two courses (one at the beginning of their time with us, and one at the end) in “practical criticism.” We don’t call it that (we just call it “criticism”) but that’s what it is. If we were an American institution, we’d think of it as descending out of what is termed “The New Criticism,” but because we are where we are, it’s seen as an import from Cambridge. As the folks to the north-north-east describe it on their department website:
Practical criticism is, like the formal study of English literature itself, a relatively young discipline. It began in the 1920s with a series of experiments by the Cambridge critic I.A. Richards. He gave poems to students without any information about who wrote them or when they were written. In Practical Criticism of 1929 he reported on and analysed the results of his experiments. The objective of his work was to encourage students to concentrate on ‘the words on the page’, rather than relying on preconceived or received beliefs about a text. For Richards this form of close analysis of anonymous poems was ultimately intended to have psychological benefits for the students: by responding to all the currents of emotion and meaning in the poems and passages of prose which they read the students were to achieve what Richards called an ‘organised response’. This meant that they would clarify the various currents of thought in the poem and achieve a corresponding clarification of their own emotions.
If you’ve been a reader of this site for a while, or are familiar with my work in “the real world,” you might think that I’d buck against this model of instruction. Any good materialist critic of course should. It approaches the literary work in isolation from its context – the work as an ahistorical entity that emerged autonomously and without the frictional influence of the writer who wrote it or the world that the writer wrote it in.
But on the other hand – and this is why I not only do not buck against it but actively enjoy teaching on this course, perhaps more than any other – it is an extremely valuable method for enabling students to develop “against the grain” critical insights about texts. In the absence of astute attention of the “practical criticism” variety, it’s very difficult for students (or, really, anyone) to develop convincingly novel interpretations of texts. The close attention to the words on the page, and the dynamics of their interaction, not only sets the stage for an appreciation of the “value added” that comes of distilling whatever contextual and personal issues inform the piece once the history is added back in, but, due to the multiplicity and idiosyncrasy of possible interpretations, provides an opening for critical newness – for the saying of something provocatively different about the work.
So how do I teach “practical criticism”? In the seminar groups that I lead, I model and encourage the following “flow chart” of thought: Anticipate what other intelligent readers of this piece might say about it. Try to imagine the “conventional wisdom” about it that would emerge as if automatically in the minds of the relatively well-informed and intelligent. And then, but only then, figure out a perverse turn that you can make within the context of but against this conventional wisdom. “Of course that seems right, but on the other hand it fails to account for…” “On first glance, it would be easy and to a degree justifiable to conclude that…. But what if we reconsider this conclusion in the light of….”
Students tend to demonstrate resistance, early on, to this practice. For one thing, especially in the first year, they don’t really (and couldn’t possibly) have a fully developed sense of what the “conventional wisdom” is that they’re supposed to be augmenting, contradicting, perverting. At this early stage, the process requires them to make an uncomfortable Pascalian wager with themselves – to pretend as though they are confident in their apprehensions until the confidence itself arrives. But even if there’s a certain awkwardness in play, it does seem to exercise the right parts of the students’ critical and analytical faculties so that they (to continue the metaphor) develop a sort of “muscle memory” of the “right” way to do criticism. From what I can tell, encouraging them to develop an instinct of this sort early measurably improves their writing as they move through their degree.
But still (and here, finally, I’m getting to the point of this post) there’s a big problem with all of this. I warn the students of this very early on – generally the first time I run one of their criticism seminars. There’s a big unanswered question lurking behind this entire process. Why must we be perverse? What is the value of aiming always for provocative difference, novelty, rather than any other goal? Of course, there’s a pragmatic answer: Because it will cause your writing to be better received. Because you will earn better marks by doing it this way rather than the other. Because you will develop a skill – one that can be shifted to other fields of endeavour – that will be recognised as what the world generally calls “intelligence.” But – in particular because none of this should simply be about the pragmatics of getting up the various ladders and depth charts of life – this simply isn’t a sufficient response, or at least is one that raises as many questions as it answers. What are, after all, the politics of “novelty”? What are we to make of the structural similarity between what it takes to impress one’s markers and what it takes to make it “on the market,” whether as a human or inhuman commodity? What if – in the end – the answers to the questions that need (ethically, politically) answering are simple rather than complex, the obvious rather than the surprising?
In my own work, I’m starting to take this issue up. And I try to keep it – when it’s appropriate – at the centre of my teaching, even if that can be difficult. (And there’s the further matter that to advocate “simple” rather than “complex” answers to things is itself an “against the grain” argument, is itself incredibly perverse, at least within an academic setting. There’s a fruitful performative contradiction at play that, in short, makes my advocacy of non-perversity attractively perverse!)
I’ll talk more about what I’m arguing in this new work some other time, but for now, I’m after something else – something isomorphic with, but only complexly related to, “practical criticism” and the issues that it raises. It has to do with politics – in particular the politics of those of a “theoretical” or in particular “radically theoretical” mindset, and the arguments that they make and why they make them.
Take this article that appeared yesterday on The Guardian‘s “Comment is free” website. The title of the piece (which of course was probably not chosen by the author, but is sanctioned I think by where the piece ends up) is “What might a world without work look like?” and the tag under the title continues, “As ideas of employment become more obscure and desperate, 2013 is the perfect time to ask what it means to live without it.” While the first two-thirds of the article is simply a description of the poor state of the labour market, it is the end that gets to the “provocative” argument at play.
But against this backdrop – rising inflation, increasing job insecurity, geographically asymmetrical unemployment, attacks on the working and non-working populations, and cuts to benefits – a debate about what work is and what it means has been taking place. Some discussions at Occupy focused on what an anti-work (or post-work) politics might mean, and campaigns not only for a living wage but for a guaranteed, non-means-tested “citizen’s income” are gathering pace.
The chances of a scratchcard winning you a life without work are of course miniscule, but as what it means to work becomes both more obscure and increasingly desperate, 2013 might be the perfect time to ask what work is, what it means, and what it might mean to live without it. As Marx put it in his 1880 proposal for a workers’ inquiry: “We hope to meet … with the support of all workers in town and country who understand that they alone can describe with full knowledge the misfortunes from which they suffer and that only they, and not saviours sent by providence, can energetically apply the healing remedies for the social ills that they are prey to.”
In other words, the best place to start would be with those who have a relation to work as such – which is to say nearly everyone, employed or otherwise.
It may be a somewhat bad faith line to allege that “interesting perversity” rather than some well-founded and straightforward belief is at work behind an argument of this sort, but in the absence of any substantive suggestions of what the answers to these questions might be, or in fact why these are the right questions to ask at the moment, what else are we to assume? It is provocatively perverse to suggest, at a time of stagnant employment and when people are suffering due to the fact that they are out of work or locked in cycles of precarity, that we might do away with work altogether. It isn’t the standard line – but it’s a line that allows the author to avoid repeating the conventional wisdom about what a left response to such a crisis might be. This in turn affords an avenue to publication, as well as a place in the temporary mental canons of those who read it.
Unfortunately, of course, the Tories (and their ideological near-cousins in all of the other mainline parties) are also asking the same sort of questions about a world (or at least a nation) without work. How might one keep the tables turned toward what benefits employers? How might one keep wages (and relatedly, inflation) low but still spur “growth”? How might one manage this system of precarious non-work, at once depressing wages while keeping the employable populace alive and not building barricades? In short, the question of “what a world without work might look like” is just as pressing to the powers that we oppose as to people like the writer of this article.
We’ve seen other episodes of the same. During the student protests over tuition increases (among other things) I myself criticised (and had a bit of a comment box scrap over) the Really Free School and those who were busily advocating the destruction of the university system… just as the government was doing its best to destroy the university system. That many of those making such “radical” arguments about university education were themselves beneficiaries of just such an education only made matters more contradictory, hypocritical, and frustrating.
In short, in countering some perceived conventional wisdom, in raising questions that seem to derive from a radical rather than a “reformist” perspective, the author (and others of her ilk) ends up embracing an argument that is not only unhelpfully utopian, but actually deeply compatible with the very situation that seems to provoke the advocacy of such a solution. I can’t help but sense that the same instinct towards perversity that makes for a good English paper – and, perhaps even more pressingly, a good work of reputation-building “theory” – is what drives a writer to take a line like this one at a time like this. One might counter that I’m being a bit of a philistine – that I’m closing off avenues of speculative thought and analysis. I’m not. I’m just wondering what the point is of writing all this up in a question-raising article in a popular publication, an article that does little more than raise unanswerable questions and then ends with what might as well be the banging of a Zen gong.
“Massive open online courses,” or MOOCs, have caught fire in academia. They offer, at no charge to anyone with Internet access, what was until now exclusive to those who earn college admission and pay tuition. Thirty-three prominent schools, including the universities of Virginia and Maryland, have enlisted to provide classes via Coursera.
For his seven-week course — which covers advanced math and statistics in the context of public health and biomedical sciences — [Brian Caffo, who teaches public health at Johns Hopkins] posts video lectures, gives quizzes and homework, and monitors a student discussion forum. On the first day, the forum lit up with greetings from around the world. Heady stuff for a 39-year-old associate professor who is accomplished in his field but hardly a global academic celebrity.
In other words, these systems allow universities to create web-based versions of the courses that they teach their paying students. In general, these MOOCs have thus far been free to take: a sort of public-service-cum-branding operation for the (largely elite) universities that participate.
Ignoring for a minute all of the limitations on what’s happening (especially in terms of credential-distribution and also the probability that the companies that run these systems will eventually start extracting profit from them), these MOOCs seem to be a version of exactly what we want: the (albeit incremental, limited) expansion of access to educational facilities to anyone who would like to use them.
It’d be, at least from one angle, massively hypocritical for me to gleefully feel that the ability to suck down so much of what I want from the internet, even or especially in evasion of the copyright rules in play, represents a sort of technologically-inevitable communization of media and information, but on the other hand to hold that the communization of the commodities that I distribute should somehow be exempt from such liberation.
So of course there are reasons to be pleased by the development of these courses and systems. It is a cheering thought that some kid without access to great teaching is sitting in her bedroom doing MIT engineering courses in her spare time. And why shouldn’t anyone be able, if only virtually, to wander into my lecture hall and hear what I have to say about this novel or that movement? There’s no way that I or anyone else should be able to dismiss that possibility with a shrug.
But on the other hand: as is often the case when capitalist enterprises (or non-capitalist enterprises stuck within a surrounding capitalist system, like publicly-funded or not-for-profit private universities) take up utopian and even pseudo-communist aspirations, we should know by now to check that our wallets are still in our pockets.
First of all, there’s the issue of academic labour. It’s not as if these institutions, under the guidance of the consultants who swarm their corridors, haven’t been at work on a decades-long experiment in reducing staffing costs. So far, that experiment has focused on the casualization of academic labour: the replacement of tenure-track and tenured staff with contingent lecturers and cheap graduate students. It’s hard not to imagine that the development of MOOCs is a sort of sandbox in which universities play with the possibility of even further reductions in staffing. If you could, for instance, record my lectures and then somehow throw me off the payroll (or, as is more likely, simply not hire another me now that I can literally appear in more than one lecture room at the same time), while simultaneously sticking far more students in my now-virtual lecture hall, well, what’s to stop that?
If you think I’m being paranoid, check out the attention to the issue of marking in both articles. According to the one in the Times:
Assignments that can’t be scored by an automated grader are pushing MOOC providers to get creative, especially in courses that involve writing and analysis. Coursera uses peer grading: submit an assignment and five people grade it; in turn, you grade five assignments.
But what if someone is a horrible grader? Coursera studied the peer grading of 2,500 student submissions for a Princeton sociology MOOC by having them graded a second time by Princeton instructors — yes, the professors hand-graded all 2,500 assignments — and found comparable results. Still, Coursera is developing software to flag those who assign very inaccurate grades to give their assessment less weight.
Ah – just as the airlines have passed the work of checking yourself into your flight and getting your bags on the conveyor belt to the consumer (in order to find themselves massive savings on payroll), now students will mark themselves, thus saving universities the cost of employing actual human beings to do such things. Never mind that marking and commenting on students’ work is actually what I consider the aspect of my job that requires the most expertise (anyone can read a lecture aloud, while knowing how to fix problems with students’ writing is an art) – the students in aggregate can achieve enough accuracy in clicking the “like” button or not underneath their peers’ essays that people like me with the red pen in hand late at night are no longer necessary!
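(A brief aside for the technically curious: what might “giving their assessment less weight” actually amount to? The sketch below is a toy in Python – my own guess at the sort of arithmetic involved, with every name and number invented for illustration; nothing here is Coursera’s published method. The point is simply how little machinery it takes to replace the red pen.)

# Toy sketch of consensus-weighted peer grading. NOT Coursera's algorithm --
# just one plausible reading of "flag those who assign very inaccurate grades
# to give their assessment less weight". All names and numbers are invented.
from statistics import mean

def grader_reliability(history):
    """history: (grade_given, consensus_grade) pairs from a grader's past work.
    The further a grader has strayed from consensus, the less weight they get."""
    if not history:
        return 1.0  # no track record yet: full weight
    avg_error = mean(abs(given - consensus) for given, consensus in history)
    return 1.0 / (1.0 + avg_error)

def weighted_grade(grades, weights):
    """Combine peer grades, each scaled by its grader's reliability weight."""
    return sum(g * w for g, w in zip(grades, weights)) / sum(weights)

# One submission, marked by five peers; the fourth is a known outlier:
grades = [72, 70, 68, 95, 71]
weights = [0.9, 0.8, 0.85, 0.2, 0.9]  # e.g. grader_reliability() on past rounds
print(round(weighted_grade(grades, weights), 1))
# -> 71.6, versus an unweighted mean of 75.2: the outlier barely registers.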
(By the way, those of you who think of the credentialing aspect of universities as merely some sort of half-and-half mixture of a tyrannical ISA and a class confirmation machine should remember that the less grades have to do with things, the more other metrics will take over. “Everyone gets an A” in the US system, sure, but that simply means that out of all those students with the same marks, the ones who went to the most elite schools or have the best connections are those that get to proceed to the next level. In other words, strict meritocracy is deeply suspect, but it is also better than its utter absence… While of course I understand the very obvious problems with it, marking fairly and accurately still seems to me an essential part of higher education and my place within it. Sure, eliminate marks, degree classifications and the like – it will only make it all the more likely than it already is, and it’s already plenty likely, that the kid whose parents go to the right cocktail parties will get the opportunities that should have gone to a more deserving candidate…)
Secondly, beyond the issue of academic labour, and very much true to the direction that higher education is rapidly moving in the UK due to the recent and massive government cuts, these MOOCs seem like a precursor step towards the further “consolidation” of the higher education sector. Notice who, for the most part, is involved in these schemes: elite universities. If they could find a way to credential the students who take them, who’s to say that a free or cheap “Harvard Extension Degree” for those who never once pass through the gates of the campus wouldn’t be seen as “better value for money” than a regular (and state-funded) degree at UMASS-Boston down the road? Why even bother continuing to fund the non-elite universities, when there’s a perfectly good and exquisitely branded degree available at low cost and in a radically scalable way right there on everyone’s home computer or mobile device?
Finally, from inside the whale, there is something ominous about how these developments are being pushed on us from within the university – the rhetoric that’s used to push them sets off very clear alarms. Someone came in to one of our recent department meetings to preach to us the virtues of the recording of lectures and their eventual mass distribution. He let us know that this is on its way and we had better get used to it. When I asked, given that I might disagree with his list of virtues, or at least formulate my own list of non-virtues, why I “had better get used to it,” why we “had” to do it, he informed me that it was because it was “already happening elsewhere,” and that if we didn’t do it, we would be “left behind by other universities.” Right. If there’s one way not to go about convincing me to do something, this is it. After all, from austerity outward, this is the mode of collective and mindless non-decision-making that basically rules and systematically fucks up our world on a day-to-day but ever intensifying basis. Just as “if we don’t impose austerity measures the same as or deeper than nation X’s, the banks will destroy us” is basically the trumping argument at play in the wider world, the deployment of the argument “Harvard is doing this, and we must follow suit, whatever the possible consequences” is to me a sign that we are probably about to set sail into the lowering tide that sinks all boats.

The way things have generally been going, it’s hard for me to imagine that it doesn’t end somewhere that looks more like the following than the system that we have now. (Go to 3:48 on the video.)