ads without products

Archive for the ‘theory’ Category

“against the grain”: on critical perversity

with 4 comments

At the place where I teach, we still have the students do two courses (one at the beginning of their time with us, and one at the end) in “practical criticism.” We don’t call it that (we just call it “criticism”) but that’s what it is. If we were an American institution, we’d think of it as descending from what is termed “The New Criticism,” but because we are where we are, it’s seen as an import from Cambridge. As the folks to the north-northeast describe it on their department website:

Practical criticism is, like the formal study of English literature itself, a relatively young discipline. It began in the 1920s with a series of experiments by the Cambridge critic I.A. Richards. He gave poems to students without any information about who wrote them or when they were written. In Practical Criticism of 1929 he reported on and analysed the results of his experiments. The objective of his work was to encourage students to concentrate on ‘the words on the page’, rather than relying on preconceived or received beliefs about a text. For Richards this form of close analysis of anonymous poems was ultimately intended to have psychological benefits for the students: by responding to all the currents of emotion and meaning in the poems and passages of prose which they read the students were to achieve what Richards called an ‘organised response’. This meant that they would clarify the various currents of thought in the poem and achieve a corresponding clarification of their own emotions.

If you’ve been a reader of this site for a while, or are familiar with my work in “the real world,” you might think that I’d buck against this model of instruction. Any good materialist critic of course should. It approaches the literary work in isolation from its context – the work as an ahistorical entity that emerged autonomously and without the frictional influence of the writer who wrote it or the world that the writer wrote it in.

But on the other hand – and this is why I not only do not buck against it but actively enjoy teaching on this course, perhaps more than any other – it is an extremely valuable method for enabling students to develop “against the grain” critical insights about texts. In the absence of astute attention of the “practical criticism” variety, it’s very difficult for students (or, really, anyone) to develop convincingly novel interpretations of texts. The close attention to the words on the page, and the dynamics of their interaction, not only sets the stage for an appreciation of the “value added” that comes of distilling whatever contextual and personal issues inform the piece once the history is added back in, but, due to the multiplicity and idiosyncrasy of possible interpretations, provides an opening for critical newness – for the saying of something provocatively different about the work.

So how do I teach “practical criticism”? In the seminar groups that I lead, I model and encourage the following “flow chart” of thought: Anticipate what other intelligent readers of this piece might say about it. Try to imagine the “conventional wisdom” about it that would emerge as if automatically in the minds of the relatively well-informed and intelligent. And then, but only then, figure out a perverse turn that you can make within the context of but against this conventional wisdom. “Of course that seems right, but on the other hand it fails to account for…” “On first glance, it would be easy and to a degree justifiable to conclude that…. But what if we reconsider this conclusion in the light of….”

Students tend to demonstrate resistance, early on, to this practice. For one thing, especially in the first year, they don’t really (and couldn’t possibly) have a fully developed sense of what the “conventional wisdom” is that they’re supposed to be augmenting, contradicting, perverting. At this early stage, the process requires them to make an uncomfortable Pascalian wager with themselves – to pretend as though they are confident in their apprehensions until the confidence itself arrives. But even if there’s a certain awkwardness in play, it does seem to exercise the right parts of the students’ critical and analytical faculties so that they (to continue the metaphor) develop a sort of “muscle memory” of the “right” way to do criticism. From what I can tell, encouraging them to develop an instinct of this sort early measurably improves their writing as they move through their degree.

But still (and here, finally, I’m getting to the point of this post) there’s a big problem with all of this. I warn the students of this very early on – generally the first time I run one of their criticism seminars. There’s a big unanswered question lurking behind this entire process. Why must we be perverse? What is the value of aiming always for provocative difference, novelty, rather than any other goal? Of course, there’s a pragmatic answer: Because it will cause your writing to be better received. Because you will earn better marks by doing it this way rather than the other. Because you will develop a skill – one that can be shifted to other fields of endeavour – that will be recognised as what the world generally calls “intelligence.” But – in particular because none of this should simply be about the pragmatics of getting up the various ladders and depth charts of life – this simply isn’t a sufficient response, or at least is one that begs as many questions as it answers. What are, after all, the politics of “novelty”? What are we to make of the structural similarity between what it takes to impress one’s markers and what it takes to make it “on the market,” whether as a human or inhuman commodity? What if – in the end – the answers to questions that need (ethically, politically) answering are simple rather than complex, the obvious rather than the surprising?

In my own work, I’m starting to take this issue up. And I try to keep it – when it’s appropriate – at the centre of my teaching, even if that can be difficult. (And there’s the further matter that to advocate “simple” rather than “complex” answers to things is itself an “against the grain” argument, is itself incredibly perverse, at least within an academic setting. There’s a fruitful performative contradiction at play that, in short, makes my advocacy of non-perversity attractively perverse!)

I’ll talk more about what I’m arguing in this new work some other time, but for now, I’m after something else – something isomorphic with but only complexly related to the issues with “practical criticism” and the issues that it raises. It has to do with politics – in particular the politics of those of a “theoretical” or in particular “radically theoretical” mindset, and the arguments that they make and why they make them.

Take this article that appeared yesterday on The Guardian‘s “Comment is free” website. The title of the piece (which of course was probably not chosen by the author, but is sanctioned I think by where the piece ends up) is “What might a world without work look like?” and the tag under the title continues, “As ideas of employment become more obscure and desperate, 2013 is the perfect time to ask what it means to live without it.” While the first two-thirds of the article is simply a description of the poor state of the labour market, it is the end that gets to the “provocative” argument at play.

But against this backdrop – rising inflation, increasing job insecurity, geographically asymmetrical unemployment, attacks on the working and non-working populations, and cuts to benefits – a debate about what work is and what it means has been taking place. Some discussions at Occupy focused on what an anti-work (or post-work) politics might mean, and campaigns not only for a living wage but for a guaranteed, non-means-tested “citizen’s income” are gathering pace.

The chances of a scratchcard winning you a life without work are of course miniscule, but as what it means to work becomes both more obscure and increasingly desperate, 2013 might be the perfect time to ask what work is, what it means, and what it might mean to live without it. As Marx put it in his 1880 proposal for a workers’ inquiry: “We hope to meet … with the support of all workers in town and country who understand that they alone can describe with full knowledge the misfortunes from which they suffer and that only they, and not saviours sent by providence, can energetically apply the healing remedies for the social ills that they are prey to.”

In other words, the best place to start would be with those who have a relation to work as such – which is to say nearly everyone, employed or otherwise.

It may be a somewhat bad faith line to allege that “interesting perversity” rather than some well-founded and straightforward belief is at work behind an argument of this sort, but in the absence of any substantive suggestions of what the answers to these questions might be, or in fact why these are the right questions to ask at the moment, what else are we to assume? It is provocatively perverse to suggest, at a time of stagnant employment and when people are suffering because they are out of work or locked into cycles of precarity, that we might do away with work altogether. It isn’t the standard line – but it’s a line that allows the author to avoid repeating the conventional wisdom about what a left response to such a crisis might be. This in turn affords an avenue to publication, as well as a place in the temporary mental canons of those who read it.

Unfortunately, of course, the Tories (and their ideological near-cousins in all of the other mainline parties) are also asking the same sort of questions about a world (or at least a nation) without work. How might one keep the tables turned toward what benefits employers? How might one keep wages (and relatedly, inflation) low but still spur “growth”? How might one manage this system of precarious non-work, at once depressing wages while keeping the employable populace alive and not building barricades? In short, the question of “What a world without work might look like” is a question that is just as pressing to the powers that we oppose as to people like the writer of this article.

We’ve seen other episodes of the same. During the student protests over tuition increases (among other things) I myself criticised (and had a bit of a comment box scrap over) the Really Free School and those who were busily advocating the destruction of the university system…. just as the government was doing its best to destroy the university system. That many of those making such “radical” arguments about university education were themselves beneficiaries of just such an education only made matters more contradictory, hypocritical, and frustrating.

In short, in countering some perceived conventional wisdom, in begging questions that seem to derive from a radical rather than a “reformist” perspective, the author (and others of her ilk) ends up embracing an argument that is not only unhelpfully utopian, but actually deeply compatible with the very situation that seems to provoke the advocacy of such a solution. I can’t help but sense that the same instinct towards perversity that makes for a good English paper – and, perhaps even more pressingly, a good work of reputation-building “theory” – is what drives a writer to take a line like this one at a time like this. One might counter that I’m being a bit of a philistine – that I’m closing off avenues of speculative thought and analysis. I’m not. I’m just wondering what the point of writing all this up in a question-begging article in a popular publication is, an article that does little more than raise unanswerable questions and then ends with what might as well be the banging of a Zen gong.

 

Written by adswithoutproducts

January 4, 2013 at 2:11 pm

…in the (coming of) age (movie) of its technological reproducibility

leave a comment »

Haven’t seen the film yet, but strange, this from the New York Times review of Tiny Furniture:

One of the knots that Ms. Dunham requires you to untie while you’re watching “Tiny Furniture” is the extent to which she is playing with ideas about fiction and the real, originals and copies. Is the character Aura actually Ms. Dunham (the unique woman who lived in that loft) or is the director playing a copy of herself? Ms. Dunham doesn’t overtly say. One hint, though, might be the character’s unusual first name, which suggests that Ms. Dunham, at the age of 24 and herself a recent graduate, has read the social theorist Walter Benjamin’s 1930s essay “The Work of Art in the Age of Its Technological Reproducibility,” one of the most influential (and commonly classroom-assigned) inquiries into aesthetic production and the mass reproduction of art.

Benjamin argued that an original work of art (say, a Rodin sculpture), has an aura, which creates a distance between it and the beholder. But aura decays as art is mechanically reproduced (say, for postcards). This decay is evident in cinema, where instead of individuals contemplating authentic works of art, as in a museum, a collective consumes images in a state of distraction. While there were dangers inherent in this shift, and while cinema could uphold what he called “the phony spell of a commodity,” its shocks might also lead to a “heightened presence of mind.” (“The conventional is uncritically enjoyed, while the truly new is criticized with aversion.”) Cinema, in other words, might spark critical thinking.

Strange move, if that’s what’s going on. Seems perfectly evocative of the way that certain “canonical” theoretical texts turn, via the way they are presented in undergraduate classrooms at liberal arts colleges and the like, into a generalized soup of “life philosophy” and gnomic multi-use utterances. Someone texts their girlfriend or boyfriend: Please stop texting me to check what I’m doing when I’m drinking with my friends – it’s like I’m living in the panopticon! Or, on a bros’ night out, Dude, she’s like your pharmakon – the medicine that you need but also the poison that’ll kill you.

Loss of the aura indeed. Suppose it’s bound to happen. “Every day the urge grows stronger to get hold of an object at very close range by way of its likeness, its reproduction…” and so forth.

Written by adswithoutproducts

April 2, 2012 at 9:35 am

Posted in benjamin, movies, theory

what before what: theory or literature

with 14 comments

I’m working on what is presumably the final revision of the book and I’ve done something a bit strange, something that feels to me both a) just what I want to do and b) bound to cause problems. Basically, if nearly every literary monograph with any interest in theory or theoretical questions starts with the definition of key terms via philosophy and then turns to the literature, I’m running things in reverse. I’ll develop working definitions of the key terms via a little tour of literary history (broad strokes for the long view, narrowing in on the period in question) and then turn to the philosophical heritage in order to compare and contrast. At any rate, I just put in the following footnote. What do you think – too much?

If we have grown well accustomed to analyses that apply theory to literary texts – in order to understand or critique them, in order to shed light on their inner workings or the world that they represent – in my choice of trajectory here I propose to do something different. This work attempts to expose the theorizations of time implicit in the literary works themselves and explore these theorizations in (generally contrasting) comparison with what we might slightly reductively call the philosophical “conventional wisdom” on the subject. While any attempt to “forget theory” in writing about literature would either be naïve or haunted by invisible philosophical or ideological presuppositions, it on the other hand seems to be a disciplinary bad habit reflexively to consult philosophy in order to define our terms and only then to turn these terms to literary application.

In general, I simply don’t accept the reflexive necessity of consulting philosophy first. I don’t think of theory as a little machine that one builds in grad school, like a woodchipper or a blender, through which one then runs texts – or at least that’s not how I think one should think of theory. I think that literature has as much to tell us – if differently – about so-called “philosophical” issues as philosophy itself. And I further believe that tons of theory is grounded in strange if not bad readings of literature… or even, more importantly, a kind of unconscious or unacknowledged “literariness” that haunts the answers developed.

Written by adswithoutproducts

August 23, 2011 at 8:15 pm

Posted in literature, theory

zizek and linksfaschismus

with 30 comments

I’m not sure there’s a clearer index of the basic intellectual dysfunction of the anglo-american theoretical left than the persistent popularity of Zizek and his work. The dysfunction is this: rather than conceive of themselves as participants in an on-going conversation, a-a theorists see themselves as the passive recipients of truths formulated elsewhere, generally on the continent. These passive recipients then apply these truths as they will; questioning them, revising them, arguing with them, developing one’s own alternative or oppositional versions and takes is not really part of the bargain. Theory is something, in the end, that happens elsewhere – not here.

Unthinking acceptance of the arguments of those deemed to be the master theorists has to be behind their continuing popularity. How else could one square the fact that individuals otherwise engaged in democratic activism, say, line up for hours to hear Zizek give one of his whistlestop talks at Birkbeck? Or that those out on the streets in defence of state-funded education return to their rooms to work on translations of Badiou?

At any rate, there’s an incredibly sensible piece by Alan Johnson on Zizek and his fascist tendencies up at Jacobin. Here’s a bit from the beginning:

Mark Lilla in his book The Reckless Mind predicted that the “extraordinary displays of intellectual philotyranny” that disfigured the twentieth century left would not simply disappear just because the wall had fallen. So it has proved. Since 2000, Žižek has established his “New Communism” on two foundations. First, a system of concepts – Egalitarian Terror, the Absolute Act, Absolute Negativity, Divine Violence, the Messianic Moment, the Revolutionary Truth-Event, the Future Anterieur, and so on. Second, a human type and an associated sensibility – that ideologized and cruel fanatic, contemptuous of morality and trained to enormity, that Žižek calls the “freedom fighter with an inhuman face.” In his passive-aggressive way, Žižek has even admitted what this so-called New Communism amounts to: “[Peter] Sloterdijk even mentions the ‘re-emerging Left-Fascist whispering at the borders of academia,’ where, I guess, I belong.”

Žižek’s philosophy is, to be blunt, a species of linksfaschismus. This is true of its murderous hostility to democracy, its utter disdain for the ‘stupid’ pleasures of bourgeois life, its valorization of will, ruthlessness, terror and dictatorship, and its belief in the salvific nature of self-sacrificial death.

(Hat tip to Sofie Buckland for the link via FB…)

Written by adswithoutproducts

July 16, 2011 at 6:13 pm

Posted in theory, zizek

post-theoretical axiom no. 1: “ideology”

with one comment

The word “ideology” is banned, as it does not exist, not really. It exists only in the way that things like “art in general” exist. Henceforth, we will discuss only “public relations,” the actual tactics and material instantiations of the engineering of consent, the traceable paths of cause and effect involved with it.

We are not named by the policeman on the street who calls to us. We are named by our parents. Of course it is important to remember that we enter into language and then we never leave language. The problem is that the abstraction involved in the deployment of a term like ideology permits, no, nearly mandates, that we stop just about there. The cop, the street, and then us, newly and irrevocably named – nowhere to go from there.

There is a malicious, ill-formed fiction at the heart of most theoretical errors – that is to say, I am starting to think, most theory.

Abstraction is a net that allows us to neglect the hold, to fall without worrying about reaching the next rung.

Written by adswithoutproducts

June 21, 2010 at 12:05 am

Posted in theory

third, but generally tautological, culture

with 4 comments

Via the Valve, an article in the NYT about what the paper is calling (or rather – correct me if I’m wrong – has been calling) “the next big thing” in literary studies – basically the application of evolutionary psychology and/or cognitive science to literature.

I try not to be cranky about this sort of thing – both the NYT’s reportage and this new mode of study itself. And it’s not that I don’t think there are insights potentially to be gleaned from such an approach. Rather, my problem with it is that much of the output that I’ve seen steers heavily in the direction of the massive-research-grant-funded restatement of the obvious and deep tautology. Let me show you a few examples from the article in question. (Obviously, this isn’t entirely fair, as I’m looking at newspaper re-descriptions of research rather than the research itself… But certain patterns familiar from the work in this line that I’ve actually looked at manifest themselves quite clearly in what follows, so I’ll go on…) Here’s an example:

At the other end of the country Blakey Vermeule, an associate professor of English at Stanford, is examining theory of mind from a different perspective. She starts from the assumption that evolution had a hand in our love of fiction, and then goes on to examine the narrative technique known as “free indirect style,” which mingles the character’s voice with the narrator’s. Indirect style enables readers to inhabit two or even three mind-sets at a time.

This style, which became the hallmark of the novel beginning in the 19th century with Jane Austen, evolved because it satisfies our “intense interest in other people’s secret thoughts and motivations,” Ms. Vermeule said.

Now, I am going to look into Vermeule’s work when I’m next in the office and have a minute, as “free indirect style” has basically been the issue at the centre of my own teaching and research for the past decade or so (that is to say, since I started work on my dissertation, or really since I started seriously reading Flaubert and Joyce as an undergraduate…), but can you see the problem here? Here are the claims in order:

1. “evolution had a hand in our love of fiction”

2. free indirect style “enables readers to inhabit two or even three mind-sets at a time”

3. free indirect style evolved because it “satisfies our ‘intense interest in other people’s secret thoughts and motivations'”

Well and good. But to my mind, even though it’s nothing new, only claim 2 holds any interest. How free indirect style manages the delicate play of multiple “mind-sets” is an interesting and ever-renewable issue, as it allows us to negotiate with some of the basic dynamics of fiction and their modern (considered broadly) manifestations. Point 1, on the other hand, is uninteresting because the basic assumption behind this approach (and, sure, an assumption that I share) is that evolution had a hand in everything that we have done, has a hand in everything that we do. Is there a human activity X, in other words, of which the statement “evolution had a hand in our love for X” would not be true? A statement like this simply doesn’t bear any, um, value-added. (More on this in a minute.) Point 3 likewise merely dresses in evo-psych garb something that all of us have always already known about both free indirect style and, well, fiction in general. Was it ever a great mystery that a large part of the appeal of fiction is that it ostensibly allows us access to the elusive interiorities of other people? I suppose there’s something more to say about why this is the case, but not all that much more – it doesn’t seem all that confusing that, whether one is looking for a mate or competing with the next hairy homo sapiens over a hunting ground, thinking into the thoughts of others serves as a valuable skill in the work of gene preservation / distribution.

So just to sum up – I can see running room in the specifically literary claim that Vermeule’s making, but the “scientific” add-ons seem just that – add-ons, supplements from the realm of blinding common sense draped in the discourse of trendy science. (Please note and don’t get me wrong: theoretically inflected work very often performs and performed the same sort of dance…) But an argument that goes “Behavior X seems irrational until we realize that it grants an adaptive advantage. We know that it grants an adaptive advantage because all actual behavior does…” simply doesn’t seem to shed light on much of anything at all.

So critical and theoretical trends come and go. I’m a youngish academic, but even I map my progress according to the rise and fall of Dominant Theoretical Paradigms (I entered the PhD at the peak of the Post-Colonial Bubble, got my first job as Deconstruction self-deconstructed but near the top of the Textual Materialist bubble, my second in the Age of Transatlanticism, and now, according to the paper of record, am doing my persistently untimely work in the Age of EvoPsych…) But I think there’s something special – specially symptomatic – about this trend that merits some attention. Here’s another snippet:

Ms. Zunshine is part of a research team composed of literary scholars and cognitive psychologists who are using snapshots of the brain at work to explore the mechanics of reading. The project, funded by the Teagle Foundation and hosted by the Haskins Laboratory in New Haven, is aimed at improving college-level reading skills.

“We begin by assuming that there is a difference between the kind of reading that people do when they read Marcel Proust or Henry James and a newspaper, that there is a value added cognitively when we read complex literary texts,” said Michael Holquist, professor emeritus of comparative literature at Yale, who is leading the project.

The team spent nearly a year figuring how one might test for complexity. What they came up with was mind reading — or how well an individual is able to track multiple sources. The pilot study, which he hopes will start later this spring, will involve 12 subjects. “Each will be put into the magnet” — an M.R.I. machine — “and given a set of texts of graduated complexity depending on the difficulty of source monitoring and we’ll watch what happens in the brain,” Mr. Holquist explained.

Ah, that sounds like the stuff of the properly science-oriented research grant. My department has been complaining very justly lately that the university-distributed research grants available for us to apply for – actually, which we’re reprimanded on a termly basis for not applying often enough for – are arranged in a way that makes them literally pointless for us to aspire to. Why? For one thing, the arrangement chez nous is that these grants can only be used to pay for research expenses but in no case can be used to buy us out of teaching, that is to say buy us the time out of the classroom that we need to do our research projects. I’d write more if I had time, but I can’t think of a single research-related expense that I need money for, beyond I suppose a couple of books and the like. (This is the back story, by the way, behind the ubiquitous grant-funded fancy-ass home laptop that grantees in the humanities buy “for research purposes.” It seems cagey, but there’s literally nothing else to spend the money on, so you head to the Apple Store…)

But I have a sense that part of the appeal of this new “scientifically” organized work is the fact that it is compatible with the science-oriented funding that we humanities types are increasingly expected to attract, but which rarely for most of us fits the bill in any way that makes writing the grant application worthwhile. In a way, the quote above from Michael Holquist discreetly says all that needs to be said about what’s driving this sort of work: “We begin by assuming that there is a difference between the kind of reading that people do when they read Marcel Proust or Henry James and a newspaper, that there is a value added cognitively when we read complex literary texts.” Again, is the fact that people read Proust a bit differently from the New York Post a finding that requires amply funded research? You need an MRI machine to determine that? And since when is a commercial term like value-added appropriate for use in describing the sort of work that we do? (Oh, well, yes – during the age of impact and its American equivalents…)

I’m definitely not against scientific and especially quantitative approaches to literature – see Franco Moretti’s relatively recent work for a fascinating example of what can happen when you run words through a machine. But I’m still waiting to see an example of EvoPsych / Cognitive Science-based literary work that doesn’t dress ordinary or even banal arguments about literature in trendily mystifying language that ultimately turns out to be 200 proof conventional wisdom. The funny thing to remember, though, is that theory – which seems to be in the crosshairs of many who’ve taken up evo or cognitive approaches – itself emerged in large part in an attempt to assert disciplinary rationality in an increasingly science-minded age. Structuralism, narratology, semiology and the like were all attempts to make what we do into a science rather than an endemically skeptical art…

Written by adswithoutproducts

April 4, 2010 at 12:51 am

Posted in academia, criticism, theory

notes on militant method

with 22 comments

Gabe just left a provocative comment about Zero Books under the “militant preciousness” post:

Maybe it’s my fault for having overly high expectations, but there is a common stylistic let down in the blogs that is accentuated by the way the books promise more than they deliver, which is a perceptive or witty analysis of some cultural phenomenon, and then a final mini-paragraph which says, ‘and perhaps x shows that another way of living is possible’ which has not been earned in any way by the preceding analysis. I’m not convinced this (very enjoyable) polemic and analysis needs this ‘militant’ wrapping at all. And the clear pleasure in the ‘self-marketing’ and being ‘on message’ with the unified branding and catchphrases is pretty striking to an outsider.

I feel that I should answer this the long way around, to make clear just what’s driving my rather palpable frustration with certain things. It’s sort of a long story, but basically the background to many of my positions / much of my current and future work, so hopefully you’ll bear with me.

I don’t have a problem with the “marketing” of the books per se. I have a problem when marketing steps in front of, outruns, thought and argument. That is to say, I lived (as a Very Young Man) through the final years of the dominance of capital-T Theory in English departments, and cringe a bit when I think back on the ways that a sort of hipness or slickness was taken by publishers and even readers as a fully convertible currency in place of thought, practicality, and rigorous argumentation. The whole scene was, to put it bluntly, fucking useless.

Far too often, the form that “political” work in the humanities took was as follows: reassemble theoretical machine in your apartment. Force literary (or other) texts through machine. Scrape up what comes out the other end – generally a fairly bleak picture of our world and our prospects. Strain and mould into monograph. Just before baking, add a few vague, handwaving gestures about practice – gestures generally way out of sync in either their modestness or their hubristic magical thinking with the bleakness of the portrait you’ve just painted. Finally, bake in the glow of your self-admiration – for now you are a servant of revolution, you have changed the world with your book on, say, racial politics in the 19th century novel.

Then all of a sudden, capital-T theory failed. And then one day I was reading an essay about Conrad and imperialism, and noticed something. What the author was discussing was moderately valuable, interesting even. But the rotely grandiloquent claims at the front of the paper seemed to imply that she was in fact, in writing and publishing this paper, doing something about imperialism, racism, and gender imbalance. She gave a sense (and it’s not really her fault – this is just what one did or does in papers like these – it’s a sort of boilerplate that you insert at the front and the back) that a few more papers like this, and, well, we could expect a major improvement in the state of affairs whose backstory she was tracing.

All of a sudden, this seemed criminally untenable to me. It did because it is. And my head was set a-spinning. For this was just the sort of paper that I wrote too – I put the boilerplate in just the same way. Depressing! And so I started thinking about what might be done.

And I’m still thinking. But a few things have become relatively clear to me:

1. We must think steadily, honestly, and realistically about what it is that our works might reasonably do.

2. The fact that they probably won’t spur the immediate resolution of age-old antinomies and contradictions doesn’t mean that they are totally useless.

3. But getting #1 wrong will likely lead them to be useless, yes. Getting #1 right will likely lead to marginal usefulness, and marginal usefulness is better than no usefulness at all.

4. The cultural sphere still is the place where decisions collective and individual are made about who we are, where we’re headed, and what we should do. The base and superstructure are codeterminant. Intervention in culture is still very valuable.

5. You just have to think about which levers you can pull from where you’re standing. And make sure they are the right levers.

So… Writing anything that jumps a bit too quickly and way too far from object of analysis / findings to the pragmatics earned by the former sticks in my craw. Obviously, none of this is easy to sort out, there’s always a leap of some sort, and it’s very difficult to know in advance. But Owen’s work, for instance, seems to me to get the calibration just about right. (As does IT’s, for that matter). Making an argument – even if it largely at this point takes the form of pointing at things and saying that was good, there are obvious reasons to want more of that – that’s counter-intuitive or flies in the face of conventional wisdom and that is actually distributable (and distributed at this point, due to Owen’s voluminous journalism!) to those who are making real-world decisions about real-world things seems to me an object-lesson in one way we might start to do the work of what we call or used to call “theory” but to get it a bit more right this time around.

On the other hand – and here is where my comments over the last few months about “militant dysphoria” are coming from – some of the stuff being said by people (many of whom are writing books for Zero) seems to me to draw us all the way back and then some to the bad old days. The problem gives itself away, to my mind, when they’ve started fantasizing about the landscapes of the Terminator movies, or post-apocalyptic survival scenarios, or when they think vaguely Nazi Death Metal is somehow dialectically recuperable…. though they can’t say quite how, keep drawing up just short of where the connective tissue is supposed to be. This is why I keep asking for an explanation of the mechanics, and I think this is why people get a bit upset when I do.

And this is where the distortive effect of the marketing cart dragging the theoretical horse comes into play. It’s of course very sexy to lead with Absolute Destruction and Fucking Rubble!!!, Radical Moodiness and really Dark Music! But absent the steps that I’ve described above, I can’t help but feel that what we’re getting is something like the chronic perversity of marketing rather than the necessary rigor and clarity of thought that would be effective.

More to say, but it’s time to go to work!

Written by adswithoutproducts

October 5, 2009 at 12:30 pm

Posted in theory