Archive for January 2013
2012 is over. Now back to the regularly scheduled programming.
This is not to suggest that minimalism finds its realisation in the repudiation of the category of expression as such. On the contrary, the inaugural model of minimalism, Ernest Hemingway, simply opened up another alternative path to expression, one characterised by the radical exclusion of rhetoric and theatricality, for which, however, that very exclusion and its tense silences and omissions were precisely the technique for conveying heightened emotional intensity (particularly in the marital situation). Hemingway’s avatar, Raymond Carver, then learned to mobilise the minimalist technique of ‘leaving out’ in the service of a rather different and more specifically American sense of desolation and depression – of emotional unemployment, so to speak.
Interesting thought: that the outrolling of literary history and influence reveals that the apophatic isn’t just “mentioning by not mentioning” but in the long run is an index of the fact that there was nothing to mention in the first place. Carver takes up a style that is meant to suggest depths by remaining on the surface only to realise that they’re only ever surface. The ineffable shifts from what can’t be said to what’s not there to be said in the first place. Or even that the adoption of minimalism leads fiction into a perversely Pascalian situation: Minimalise, delete your words, and you will believe that there was nothing to delete in the first place.
Interesting synchronicity. The other day I was in a Waterstones and was stunned yet again at the fact that the “headless women” book covers are still proliferating. What are the “headless women” book covers? Well, take a look here or here or here. Or take a look at this one, which happened to be on display on the 3-for-2 rack at the Waterstones in question, and which was written by an author I’ve met a few times.
It’s pretty obvious what’s interesting / discomforting / grating about the proliferation of covers of this sort. Implicit in their ubiquity is a sense on publishers’ parts that female readers, when choosing a novel, want to be able to project themselves into the work, to occupy the place of the female protagonist. If the person pictured on the cover of the book were to possess a head, and in particular a face, this would somehow block their ability to do so: But I don’t have red hair! But my eyes aren’t that colour! My cheekbones aren’t at all like that! It’s notable that works aimed at male audiences don’t take the same tack – often forgoing the depiction of people on the cover altogether.
Pretty condescending, isn’t it? Unfortunately one has a sense that the publishers know what works, and wouldn’t be doing this if it didn’t work to some degree. I’ve seen an argument on Twitter – now lost to us, as it was months ago – in which a PR person for a publisher responded to criticism of the practice with something like “I know, I know – it’s awful. But what do you want us to do about it? The books won’t move off the shelves if we don’t.”
Depressing. But here’s the interesting part. It just so happens that I had assigned – and had to prepare to teach early this week – a fantastic essay by Catherine Gallagher called “The Rise of Fictionality,” which was published in Franco Moretti’s magisterial anthology on the novel. (Luckily for you – and for me as I rushed to get the students a copy of it – PUP has the essay on-line here.) The essay is a vivid and succinct historicization of the emergence of fiction as a category in eighteenth-century Britain, a category born out of divergence both from “factual” writing and (and here’s where the brilliance of the piece truly lies) “fantastical” writing as well.
I won’t go into all the nuances of the argument here – do yourself a favour and read the piece. But here are a few paragraphs that seem especially relevant to the acephalous women of Waterstones:
That apparent paradox—that readers attach themselves to characters because of, not despite, their ﬁctionality—was acknowledged and discussed by eighteenth-century writers. As I have already mentioned, they noticed that the ﬁctional framework established a protected affective enclosure that encouraged risk-free emotional investment. Fictional characters, moreover, were thought to be easier to sympathize or identify with than most real people. Although readers were often called to be privileged and superior witnesses of protagonists’ follies, they were also expected to imagine themselves as the characters. “All joy or sorrow for the happiness or calamities of others,” Samuel Johnson explained, “is produced by an act of the imagination, that realizes the event however ﬁctitious . . . by placing us, for a time, in the condition of him whose fortune we contemplate” (Johnson 1750). What seemed to make novelistic “others” outstanding candidates for such realizations was the fact that, especially in contradistinction to the ﬁgures who pointedly referred to actual individuals, they were enticingly unoccupied. Because they were haunted by no shadow of another person who might take priority over the reader as a “real” referent, anyone might appropriate them. No reader would have to grapple with the knowledge of some real-world double or contract an accidental feeling about any actual person by making the temporary identiﬁcation. Moreover, unlike the personae of tragedy or legend, novelistic characters tended to be commoners, who would fall beneath the notice of history proper, and so they tended to carry little extratextual baggage. 
As we have noticed, they did carry the burden of the type, what Henry Fielding called the “species,” which he thought was a turntable for aiming reference back at the reader; a ﬁctional “he” or “she” should really be taken to mean “you.” But in the case of many novel characters, even the “type” was generally minimized by the requirement that the character escape from the categorical in the process of individuation. The fact that “le personnage . . . n’est personne” was thought to be precisely what made him or her magnetic.
Some recent critics are reviving this understanding and venturing to propose that we, like our eighteenth-century predecessors, feel things for characters not despite our awareness of their ﬁctionality but because of it. Consequently, we cannot be dissuaded from identifying with them by reminders of their nonexistence. We have plenty of those, and they conﬁgure our emotional responses in ways unique to ﬁction, but they do not diminish our feeling. We already know, moreover, that all of our ﬁctional emotions are by their nature excessive because they are emotions about nobody, and yet the knowledge does not reform us. Our imagination of characters is, in this sense, absurd and (perhaps) legitimately embarrassing, but it is also constitutive of the genre, and it requires more explanation than the eighteenth-century commentators were able to provide.
That is to say, the “headlessness” of the fictional character – their availability to us because they are unblocked by connection to a “real person” and thus open to readerly identification – may be “absurd and (perhaps) legitimately embarrassing,” as are the images on the covers in the bookshop, but it is also one of the things that makes fiction what it is, and is what accounts for the special mental and emotional states that we experience as we read. But to take this a step further (and here I am drawing out some of Gallagher’s arguments and taking them in a slightly different direction) it’s possible that reflections of Gallagher’s sort (and even the instinct catered to by the contemporary covers) point us to a different sensibility about the ideology of fiction.
In short, we are made anxious about the protagonism of fiction, the structural mandate that it forces or soothes us into identification with the autonomous or semi-autonomous individual as such, that it serves as an advertisement for intricate interiority and in so doing may urge us away from the consideration of the exterior. But if it is the case that the fictionality of the fictional character is grounded on a certain availability, a certain openness, even a certain whateverness, we might be licensed to think that the ideological underpinnings of fiction are far more complex than conventional (literary Marxist) wisdom suggests. Rather than a cult of personality, fiction, at base, might start to seem a space for the emergence of impersonality – and rather than simply markers of readerly solipsism and commercial cynicism, the book covers above might suggest a nascently radical instinct lurking just below the surface of the Waterstones transaction.
At the place where I teach, we still have the students do two courses (one at the beginning of their time with us, and one at the end) in “practical criticism.” We don’t call it that (we just call it “criticism”) but that’s what it is. If we were an American institution, we’d think of it descending out of what is termed “The New Criticism,” but because we are where we are, it’s seen as an import from Cambridge. As the folks to the north-north-east describe it on their department website:
Practical criticism is, like the formal study of English literature itself, a relatively young discipline. It began in the 1920s with a series of experiments by the Cambridge critic I.A. Richards. He gave poems to students without any information about who wrote them or when they were written. In Practical Criticism of 1929 he reported on and analysed the results of his experiments. The objective of his work was to encourage students to concentrate on ‘the words on the page’, rather than relying on preconceived or received beliefs about a text. For Richards this form of close analysis of anonymous poems was ultimately intended to have psychological benefits for the students: by responding to all the currents of emotion and meaning in the poems and passages of prose which they read the students were to achieve what Richards called an ‘organised response’. This meant that they would clarify the various currents of thought in the poem and achieve a corresponding clarification of their own emotions.
If you’ve been a reader of this site for a while, or are familiar with my work in “the real world,” you might think that I’d buck against this model of instruction. Any good materialist critic of course should. It approaches the literary work in isolation from its context – the work as an ahistorical entity that emerged autonomously and without the frictional influence of the writer who wrote it or the world that the writer wrote it in.
But on the other hand – and this is why I not only do not buck against it but actively enjoy teaching on this course, perhaps more than any other – it is an extremely valuable method for enabling students to develop “against the grain” critical insights about texts. In the absence of astute attention of the “practical criticism” variety, it’s very difficult for students (or, really, anyone) to develop convincingly novel interpretations of texts. The close attention to the words on the page, and the dynamics of their interaction, not only sets the stage for an appreciation of the “value added” that comes of distilling whatever contextual and personal issues inform the piece once the history is added back in, but, due to the multiplicity and idiosyncrasy of possible interpretations, provides an opening for critical newness – for the saying of something provocatively different about the work.
So how do I teach “practical criticism”? In the seminar groups that I lead, I model and encourage the following “flow chart” of thought: Anticipate what other intelligent readers of this piece might say about it. Try to imagine the “conventional wisdom” about it that would emerge as if automatically in the minds of the relatively well-informed and intelligent. And then, but only then, figure out a perverse turn that you can make within the context of but against this conventional wisdom. “Of course that seems right, but on the other hand it fails to account for…” “On first glance, it would be easy and to a degree justifiable to conclude that… But what if we reconsider this conclusion in the light of…”
Students tend to demonstrate resistance, early on, to this practice. For one thing, especially in the first year, they don’t really (and couldn’t possibly) have a fully developed sense of what the “conventional wisdom” is that they’re supposed to be augmenting, contradicting, perverting. At this early stage, the process requires them to make an uncomfortable Pascalian wager with themselves – to pretend as though they are confident in their apprehensions until the confidence itself arrives. But even if there’s a certain awkwardness in play, it does seem to exercise the right parts of the students’ critical and analytical faculties so that they (to continue the metaphor) develop a sort of “muscle memory” of the “right” way to do criticism. From what I can tell, encouraging them to develop an instinct of this sort early measurably improves their writing as they move through their degree.
But still (and here, finally, I’m getting to the point of this post) there’s a big problem with all of this. I warn the students of this very early on – generally the first time I run one of their criticism seminars. There’s a big unanswered question lurking behind this entire process. Why must we be perverse? What is the value of aiming always for provocative difference, novelty, rather than any other goal? Of course, there’s a pragmatic answer: Because it will cause your writing to be better received. Because you will earn better marks by doing it this way rather than the other. Because you will develop a skill – one that can be shifted to other fields of endeavour – that will be recognised as what the world generally calls “intelligence.” But – in particular because none of this should simply be about the pragmatics of getting up the various ladders and depth charts of life – this simply isn’t a sufficient response, or at least is one that begs as many questions as it answers. What are, after all, the politics of “novelty”? What are we to make of the structural similarity between what it takes to impress one’s markers and what it takes to make it “on the market,” whether as a human or inhuman commodity? What if – in the end – the answers to questions that need (ethically, politically) answering are the simple rather than the complex, the obvious rather than the surprising?
In my own work, I’m starting to take this issue up. And I try to keep it – when it’s appropriate – at the centre of my teaching, even if that can be difficult. (And there’s the further matter that to advocate “simple” rather than “complex” answers to things is itself an “against the grain” argument, is itself incredibly perverse, at least within an academic setting. There’s a fruitful performative contradiction at play that, in short, makes my advocacy of non-perversity attractively perverse!)
I’ll talk more about what I’m arguing in this new work some other time, but for now, I’m after something else – something isomorphic with but only complexly related to the issues with “practical criticism” and the issues that it raises. It has to do with politics – in particular the politics of those of a “theoretical” or in particular “radically theoretical” mindset, and the arguments that they make and why they make them.
Take this article that appeared yesterday on The Guardian‘s “Comment is free” website. The title of the piece (which of course was probably not chosen by the author, but is sanctioned I think by where the piece ends up) is “What might a world without work look like?” and the tag under the title continues, “As ideas of employment become more obscure and desperate, 2013 is the perfect time to ask what it means to live without it.” While the first two-thirds of the article is simply a description of the poor state of the labour market, it is the end that gets to the “provocative” argument at play.
But against this backdrop – rising inflation, increasing job insecurity, geographically asymmetrical unemployment, attacks on the working and non-working populations, and cuts to benefits – a debate about what work is and what it means has been taking place. Some discussions at Occupy focused on what an anti-work (or post-work) politics might mean, and campaigns not only for a living wage but for a guaranteed, non-means-tested “citizen’s income” are gathering pace.
The chances of a scratchcard winning you a life without work are of course miniscule, but as what it means to work becomes both more obscure and increasingly desperate, 2013 might be the perfect time to ask what work is, what it means, and what it might mean to live without it. As Marx put it in his 1880 proposal for a workers’ inquiry: “We hope to meet … with the support of all workers in town and country who understand that they alone can describe with full knowledge the misfortunes from which they suffer and that only they, and not saviours sent by providence, can energetically apply the healing remedies for the social ills that they are prey to.”
In other words, the best place to start would be with those who have a relation to work as such – which is to say nearly everyone, employed or otherwise.
It may be a somewhat bad faith line to allege that “interesting perversity” rather than some well-founded and straightforward belief is at work behind an argument of this sort, but in the absence of any substantive suggestions of what the answers to these questions might be, or in fact why these are the right questions to ask at the moment, what else are we to assume? It is provocatively perverse to suggest, at a time of stagnant employment and when people are suffering due to the fact that they are out of work or locked in cycles of precarity, that we might do away with work altogether. It isn’t the standard line – but it’s a line that allows the author to avoid repeating the conventional wisdom about what a left response to such a crisis might be. This in turn affords an avenue to publication, as well as a place in the temporary mental canons of those who read it.
Unfortunately, of course, the Tories (and their ideological near-cousins in all of the other mainline parties) are also asking the same sort of questions about a world (or at least a nation) without work. How might one keep the tables turned toward what benefits employers? How might one keep wages (and relatedly, inflation) low but still spur “growth”? How might one manage this system of precarious non-work, at once depressing wages while keeping the employable populace alive and not building barricades? In short, the question of “What a world without work might look like” is a question that is just as pressing to the powers that we oppose as to people like the writer of this article.
We’ve seen other episodes of the same. During the student protests over tuition increases (among other things) I myself criticised (and had a bit of a comment box scrap over) the Really Free School and those who were busily advocating the destruction of the university system… just as the government was doing its best to destroy the university system. That many of those making such “radical” arguments about university education were themselves beneficiaries of just such an education only made matters more contradictory, hypocritical, and frustrating.
In short, in countering some perceived conventional wisdom, in begging questions that seem to derive from a radical rather than a “reformist” perspective, the author (and others of her ilk) ends up embracing an argument that is not only unhelpfully utopian, but actually deeply compatible with the very situation that seems to provoke the advocacy of such a solution. I can’t help but sense that the same instinct towards perversity that makes for a good English paper – and, perhaps even more pressingly, a good work of reputation-building “theory” – is what drives a writer to take a line like this one at a time like this. One might counter that I’m being a bit of a philistine – that I’m closing off avenues of speculative thought and analysis. I’m not. I’m just wondering what the point of writing all this up in a question-begging article in a popular publication is, an article that does little more than raise unanswerable questions and then ends with what might as well be the banging of a Zen gong.