Bits Blog - Page 136

Author | 10-10-2012 01:09 PM
A fundamental axiom in my philosophy of writing program administration is that institutions are, by definition, crystal-latticed structures designed to hold contradictions together in close proximity. Take, for example, a simple question that on the surface would seem to have an obvious answer: Who gets to teach writing?

At my institution, we continue to hunker down through a perfect storm worse than any category 5 hurricane: SACS reaccreditation, program review, a fairly new and tentative administration, assessment, strategic planning, a new state law reducing the credits in the statewide general education/core curriculum, and budget cuts. It’s fascinating to see so many narratives about the university revisited and revised all at once; at the same time it’s inevitable (and simultaneously frustrating and amusing) to see how this storm churns up paradoxical discourses that long ago rested quietly in the sediment of the institution.

For example, as part of our college’s budget cuts, our dean jettisoned our business writing course, since it primarily served students in the College of Business (CoB). We lost not only those students but also nine instructor lines (ouch). Recently, unaware of the fact that the CoB was transferring our course (ENC 3213) into their equivalent course (GEB 3213), an associate provost called a meeting between me and my dean and the folks from CoB. Her concern was “credentializing.” (Our school is hewing to a strict interpretation of the “18-credit-hour” rule of our accrediting body, which states that in order to teach a subject the teacher must have at least 18 graduate credit hours in the subject.) She explained that while it’s clear how someone who has taken a graduate course in Shakespeare could teach our writing course, it seemed to her a bit of a leap to claim that this same person could also teach business writing. Consider this theorem 1: only people with graduate training in business writing should teach business writing.

I explained, first, that anyone working within academia is a business writer. I spend all day composing reports, proposals, memos, and e-mails, and I got my job only because of my skills in writing both a cover letter and resume. I then pointed out the logical extension of her argument. Consider theorem 2: if a teacher needs graduate training in a discipline to teach a course in that discipline, then (I explained) there were only three people at our entire school who could teach our FYC courses—our three tenure-track Composition/Rhetoric faculty. (Curiously, that does not include me. Though my dissertation is in Comp/Rhet and though I worked with some rather significant figures in the field, my doctoral program did not have a Comp/Rhet track, thus my coursework would not qualify me to teach in the program I administer [cf. axiom 1].)

Though unqualified to teach rhetoric, it would seem I am at least an adequate rhetorician, since the associate provost then conceded that perhaps people in English could teach the course—with sufficient orientation. (Though I suspect the logistical nightmare of replacing some 100 teachers of FYC courses had a more persuasive effect than the argument that academics are business writers.) Complicating matters more, our university has a vigorous writing across the curriculum (WAC) program whose base philosophy is that anyone in any discipline (with some training) can teach writing—we even have a second-semester FYC writing course offered in chemistry. 
Thus, theorem 3: writing happens in all disciplines and thus responsibility for writing should be distributed across the university.

Let’s review the emerging geometry:

Axiom: Institutions are designed to hold contradictions in close proximity.
Theorem 1: Only people with training in a discipline can teach in that discipline.
Theorem 2: Thus only people with graduate training in Comp/Rhet can teach writing.
Theorem 3: Writing happens in all disciplines; thus, anyone in any discipline can teach writing.

Maddening. Fascinating.

As we continue to weather out this storm, I keep encouraging our chair to “embrace the illogic” of the institution. From my perspective, since any institution holds mutually exclusive positions simultaneously, getting out of any one jam simply means shifting one’s rhetorical position in the latticework. But there are larger questions embedded here. Who does get to teach writing? At our school writing is embedded in the English department. Where does it reside in your school—and why? As Comp/Rhet continues to mature as a discipline, how much longer can English (or communications, even) contain it? How can WAC/WID survive the twin forks of assessment (too often code for accountability) and accreditation?

Author | 10-04-2012 02:07 PM
In my seminars on popular culture, my students make a class presentation on a popular cultural topic of their choice that forms the basis for their research papers. One requirement for their presentations is to explain both what their topic is and why they chose it. Over the years, this apparently basic task has proven to be more challenging than it appears, so I now offer to students my own reasons for choosing the topics that I write about as a cultural semiotician.

The first point I raise is that selecting a topic for a researched semiotic interpretation should not be a random act. As in a scientific research program, the choice of the topic begins with a need to answer a significant question. The researcher does not yet know the answer, but may have some educated guesses that the project is intended to test. Thus, the choice of a topic represents not only an interest in a question or problem but also a general sense of where to go with it.

It's particularly important to choose a topic not simply because a student "likes" it. In the semiotic interpretation of popular culture this is a particular hazard, because affection for one's topic can result in an inability to maintain the objective stance necessary to conduct an analysis. It is possible, of course, to have a fondness for one's topic and be able to think critically about it (I have a lifelong affection for The Lord of the Rings trilogy, for example, but that does not prevent me from seeing the rather serious political problems with that lovely fantasy), but it's not easy for those who are first learning about popular cultural semiotics to be able to do so. What often results is a paper that is more like a press release or puff piece.

As a reason for choosing certain topics for my own semiotic analysis, I point out the following: why have I returned, on a number of occasions here in my Bits blog, to the popular culture of the 1960s and its historical aftermath? I can assure you that it is not out of an affection for the events of the sixties—quite the contrary. Rather, as I look at the increasingly corporatist/hypercapitalist trajectory of American society (a trajectory that has been uninterrupted no matter which political party holds the White House), I ask myself, quite simply, "what happened?" How did a generation (the largest of its kind in American history) whose cultural ethos profoundly challenged the corporate/capitalist "Establishment" end up participating in the construction of an America that is now making the 1950s look progressive?

As I have stated before, there is no single answer to such a question, but the pursuit of answers, I believe, is crucial to any attempt to reverse the trend. You can't stop something if you do not understand what you are trying to stop. So, I will be returning here to what I regard to be one of the most important cultural questions of our time: the question of how a mass cultural challenge to one sort of society became an embrace of it, of how a decade of sociocultural exploration and experimentation rapidly transitioned to four decades of corporatist entrenchment whereby universities became "brands," students became "consumers," and scholars became "entrepreneurs." Four decades in which the collectivist spirit of sixties youth culture has retreated before a wave of libertarianism. 
Four decades in which the grandest social accomplishments of the thirties and forties (Social Security, Medicare, unionization) are being unraveled by the children and grandchildren of the generations that fashioned them. My thinking so far about this historical conundrum has taken me to many places. There is always something more to consider and some of it is very sensitive stuff. But it is the road that I am now traveling in my popular cultural analyses, simply because I think it is the most significant social phenomenon of our times.

Author | 10-04-2012 08:09 AM
Our semester is already well under way, and now that the dust has settled I am reflecting back on our orientation for new and returning teachers, which we hold each year the week before classes start.

Much of the orientation is built around our teaching manual, “Emerging, A Teacher,” affectionately shorthanded as EAT. EAT covers FAU-specific concerns and contains all the sample papers for our workshop. The rest of the material we use is built into the Instructor’s Manual for Emerging. (Well, technically, it’s the other way around. Much of the material in the Instructor’s Manual originally came from EAT.)

I’ve been curious about how other schools and programs handle orientation. Here’s our schedule for the week:

Day 1: Monday, August 13: 10:30 – 02:30
10:30-11:00 Introduction to the course philosophy and pedagogy
11:00-11:30 Introduction to textbooks, structure, and pacing of the course
11:30-12:00 Introduction to the writing program Blackboard community
12:00-01:00 Lunch on your own
01:00-02:30 Business matters and practical concerns

New Teachers: For Wednesday
Read the Yoshino and Poe essays and come to orientation with a sample class activity for each essay. Refer to the section on class activities in Emerging for help in formulating these materials. Also, read the sections on Yoshino, Poe, Philosophy and Pedagogy, Class Activities, and Peer Revision in Emerging and The First Day and Beyond in EAT.

Day 2: Wednesday, August 15: 10:00 – 02:00
10:00-10:30 Class plans and reading questions
10:30-11:00 From class activities to drafts
11:00-12:00 Reading drafts for promising moments
12:15-01:15 Lunch on your own
01:15-02:00 Peer revision and revision

New Teachers: For Friday
Read the Friedman and Olson essays and come to orientation with a draft of your syllabus (based on the syllabus template on the writing program Blackboard site) as well as planned class activities for the first two weeks of class. Read the sections on Friedman, Olson, Commenting and Grading, and Language and Grammar in Emerging and Four Moments of Metathinking and Midterm and Student Progress in EAT. If you are teaching in a computer classroom, read the material on technology in EAT for Thursday.

Day 3: Thursday, August 16: 10:00 – 02:00
01:00-02:00 Computer classroom orientation
Attend this session if you are teaching in AL 240, AL 337, or AL 346, or if you would like to know more about teaching with computers.

Day 4: Friday, August 17: 09:30 – 04:00
09:30-10:00 Drafted materials workshop
10:00-10:30 The first day: writing samples and policies
10:30-11:30 Commenting
11:30-12:00 University Center for Excellence in Writing
12:00-01:00 Lunch
01:00-03:00 Grading and portfolios
03:00-04:00 Grammar, error, and the handbook

All Teachers: Before classes start
Read the section on Technology in EAT; find your classroom, print your class roster, finalize and copy your syllabus, and make copies of the writing sample. New teachers will want to make sure they have completed the “check-in” process with the human resources department.

What does your orientation look like? What would you change about it if you could?

Author | 09-26-2012 01:43 PM
Every year we offer a “standard” sequence of assignments for our teachers. Returning teachers are invited to use or adapt it; new teachers use it as they become familiar with our program, the course, and writing their own assignments. We test the sequence in the summer and at the same time gather sample student work that we use during our orientation. I thought I would offer this year’s sequence, which we titled “Rights and Bytes: The Technology of Civil Rights,” as a model for how we put assignments together. Feel free to use it or adapt it as needed.

Paper 1: Yoshino
At the end of “Preface” and “The New Civil Rights,” Kenji Yoshino suggests that ultimately the law will play only a partial role in the evolution of a “new” civil rights, one based on the value of authenticity and the common denominators of being human. Write a paper in which you extend Yoshino’s argument by identifying other key areas of society that must play a role in the creation of a new civil rights.

Questions for Exploration: According to Yoshino, why does an exclusive focus on law limit civil rights? What role must conversation play? What’s the difference between civil rights and human rights? How can we make the transition from one to the other? Does covering prevent the evolution of civil rights? What social factors might change covering: peer pressure? popular culture? What’s necessary for a person to achieve authenticity? How might economics, culture, or even religion function in Yoshino’s vision?

Paper 2: Poe and Yoshino
In “Preface” and “The New Civil Rights,” Kenji Yoshino makes his arguments with little reference to or awareness of technology. However, as Marshall Poe makes abundantly clear in “The Hive,” technologies such as Wikipedia are growing rapidly and, more crucially, are becoming an increasingly important facet of our lives. Use Poe’s discussion of Wikipedia to complicate Yoshino’s argument by writing a paper in which you assess the potential of technology to improve civil rights.

Questions for Exploration: Is Wikipedia’s bottom-up model an analogue to Yoshino’s emphasis on conversation as a mechanism of social change? Does the relative anonymity of Wikipedia impede civil rights by promoting covering? How can we harness the collaborative power of a project like Wikipedia for social change? Is the current model of civil rights a cathedral or a bazaar? What might/should a new civil rights look like in these terms? Given the pace of change on Wikipedia (and technology in general), is it realistic to expect it to play a role in slower processes, such as political and legal ones?

Paper 3: Friedman, Poe, Yoshino
So far we’ve considered the relationship between rights and technology in a fairly local context—the United States. However, as Thomas Friedman makes clear in “The Dell Theory of Conflict Prevention,” it’s increasingly difficult to think about anything in a local context as the world and its economies become more and more interconnected. After all, Dell is as tied to its supply chain companies as they are to Dell, and so too the countries involved. Using ideas from all three authors, write a paper in which you evaluate the possibility of universal human rights.

Questions for Exploration: Despite past efforts, can we ever achieve a universal set of human rights? Which model of civil rights might help with that goal—equality or liberty? How might covering on a global scale impede that goal? Can nations have “reason forcing” conversations? How might technology play a role in promoting global human rights? Does the kind of collaboration represented by Wikipedia suggest that it’s possible? Would it require a top-down or bottom-up model? How might supply chains be used not simply to guarantee peace but also to advance human rights? Do economic pressures within supply chains make it more difficult to achieve universal human rights? What are the human costs of globalization? What challenges do mutant supply chains pose, and how might countries collaborate to overcome those challenges?

Paper 4: Olson and One Other
In “The End of Race: Hawaii and the Mixing of Peoples,” Steve Olson demonstrates how technological advances in genetics suggest that race is no longer a biological reality. At the same time, he also indicates that race and racism persist. Write a paper in which you evaluate “the end of race” using ideas from Olson and one other author.

Questions for Exploration: If race has no biological basis, why does it continue to function as a category? What role does covering play in the continuation of race? What relationship does covering have to communities of descent? Does our current equality paradigm for civil rights mandate the continuation of race? Would switching to a liberty paradigm change things? Does Wikipedia offer a model for what a world without race might look like? What do the conflicts within Wikipedia, such as that between Cunc and Sanger, suggest about race and its persistence? What’s the difference between racism and prejudice? Do any of the authors offer tools for us to combat one, the other, or both? How might global economic collaboration affect our understanding of race? Does globalization exacerbate the racialization of culture? Do mutant supply chains form from racial groupings or communities of descent, and why might that difference matter?

Author | 09-20-2012 01:19 PM
I have discovered recently that the word collectivism is being applied to any law or social activity that is designed to promote the common good—you know, things like environmental protection regulations that prevent people from polluting public waterways, or laws that forbid making false claims about the health effects of food products. It appears that collectivism, like communism and socialism, has become an all-purpose accusation against anything that the country’s far right does not like. While this tendency toward semantic creativity comes in part from the libertarian vogue of figures like Ayn Rand (whom the Republican National Convention has brought back into the national conversation), it mainly comes from ignorance of what collectivism and socialism really are.

To keep this blog within the limits of its usual topic area, I will examine the cultural significance of collectivism by looking at a social experiment that flourished, briefly, on America’s popular cultural stage in the late 1960s and early 1970s. This was the communal movement that saw thousands of young Americans banding together in urban apartments and rural farms (some of which, like Black Bear Ranch in California and The Farm in Tennessee, still exist) in an attempt to explore a truly collectivist experience. A group of people who emerged from the Haight-Ashbury district of San Francisco—who called themselves the Diggers and later changed their name to the Free Families—were pioneers and leaders in this widespread pursuit of authentic communalism. In spite of the fact that the vast majority of the collectivist experiments of the Diggers failed, many of the surviving Diggers, like movie actor Peter Coyote, insist that in the end their revolution was successful.

Personally, I wish that that were true, but in the years since the early seventies, America, rather than becoming a more communal society, has instead become ever more individualistic, to the extent that libertarianism thrives not only within the Tea Party but among many college students as well. Indeed, so individualistic has America become that the entire legacy of the Roosevelt era is in peril. What is puzzling to a cultural historian is why, given the Baby Boomers’ high-profile embrace of collectivism in the sixties, this abrupt turnabout happened.

As always, the answer to such questions is overdetermined: that is, there is no single explanation. In fact, in this case the explanation is massively overdetermined, and would require a book-length treatment to cover adequately. Complicating the matter is the fact that those nations that did pursue collectivist policies in the post-war era (countries like England and Sweden that were exemplars of democratic socialism for a time, and like the former Soviet Union and Eastern Bloc states that once modeled themselves on communist lines) have voluntarily abandoned the collectivist ethos as well. Heck, even China, while claiming to be a communist society, is capitalist in everything but name these days. Thus, the question would probably require an entire theory of history to address adequately, and I am both unable to offer such a theory and disinclined to think that any particular theoretical explanation would be adequate—certainly not one within the Hegelian tradition anyway. But I would like to tease out one facet of the American turnabout to see what it tells us about ourselves as Americans. 
It comes down to a matter of conflicting mythologies—those guiding worldviews or value systems that in the American instance are often contradictory and contentious. On the one hand is our much-vaunted individualism, our belief in the primacy of individual experience and personal rights. On the other hand is our tradition—visible in everything from the congregationalism of the Puritan founders of New England, to the populist and progressive movements of the late nineteenth and early twentieth centuries—of communal action for the common good. American history presents us with the often teeter-tottering spectacle of these two impulses coexisting in an uneasy balance, with individualism on top at one time (the 1920s is an example of one such era) and communalism prevailing at another (e.g., the 1930s through the mid-1940s, when Americans collectively battled both economic depression and fascism). Then again, in the 1960s many baby boomers embraced communalism in reaction against what they perceived as selfish materialism, while in the decades since, individualistic consumerism (for most baby boomers as well as for succeeding generations) has held sway. In short, the ingredients for both individualism and communalism are built into the American character, ever contesting each other. And while, in the opinion of this baby boomer at least, it would be nice to arrive at some sort of productive synthesis, the experience of history suggests otherwise. Just read the news, or watch some presidential campaign advertisements. America exists in conflict with itself, and whether the current popularity of virtually anarchistic individualism will give way to a resurgence of genuine collectivism is anyone’s guess.

Author | 09-20-2012 07:36 AM
I’m often asked about an online version of Emerging. Anyone who knows me expects it, since I do so much work with technology. I’m an unabashed Apple fanboy living in a total Mac ecosystem: iMac, iPhone, iPad, iPod. With the advent of electronic textbooks for the iPad, it’s no wonder people close to me keep waiting for the digital version of Emerging.

But of course, anyone who knows anything about intellectual property rights in the digital age knows why it isn’t happening yet. It would be easy if I had just written a textbook: all my words, all original content, and thus no rights and no permissions. But a reader is a completely different species. There’s an entire team at Bedford that focuses only on negotiating the rights to use each reading in the book, and each comes with its own special price tag. What I find most amazing is how random those price tags can be. There are some pieces I never thought we’d be able to afford that are (in relative terms) very cheap. Others I was sure we’d score easily, but they ended up just being too expensive.

There’s no predicting the cost of permissions for print rights. Now, take that, multiply it by fears around digital rights, multiply that by the pace of change in technology, divide by the inertia of print, and raise it all by a factor of 10. That’s the world of digital permissions.

We’re making some progress. For this coming edition of Emerging, we have a few online readings with multimedia elements (called e-pages). But our experience putting those together illustrates the very barriers facing publishers when it comes to an online reader. There was a podcast we wanted to include, but we were denied permission—not because of the podcast but because the makers of the podcast had already had so much trouble getting permission to use the music in the background.

No doubt these problems will, in time, change. But change is going to be slow—not because of publishers or the massive inertia of print, but simply because permissions were a Byzantine system to start with and the digital dimension has only made them more so.

I look forward to my online reader. I hope I live to see it. For now, I’m pretty stoked to have any online readings at all. Hey, it’s progress.

Author | 09-12-2012 02:27 PM
Last year I did a series of posts, Behind the Textbook, about the process of putting together a composition reader. Consider this the (hopefully) last post in that series, since I am right now in the middle of working on the page proofs for the text, which is due out in January.

The work involved at this stage is familiar to all of us as teachers of writing—essentially, it is proofreading the entire book. I log into the Bedford servers, download huge chunks of the text in PDF format, and read through each page looking for errors and omissions. It’s somewhat tedious work and certainly engenders in me a certain degree of sympathy for students, who have to go through something like this process weekly in the classes I teach. However, the entire experience has me reflecting on proofreading writ large.

What exactly do I mean by that?

Well, for starters, I devote very careful attention to this work because of the stakes involved. Soon, these pages will be printed as a book, and at that point the text is out of my hands. I’m not surprised, then, that students often proofread so poorly or not at all. Specifically, I wonder what stakes they see in their writing—just a grade, perhaps? I’d hope for more, and I am wondering how I might get students to invest more heavily in their texts, to see them as something worth the care and attention that proofreading requires.

It’s also a useful reminder to me about just what proofreading is. In this case, it actually is reading the proofs, which isn’t something students are likely to encounter. But on a larger level, it’s the final stage of production and the chance not to change content but only to catch errors. That’s a notion I might find useful in the classroom as students struggle to understand the differences between things like revising, editing, and proofreading.

Finally, part of what enables me to get through this work is knowing that mine are not the only eyes reviewing the proofs. There’s a whole team also doing this work, and that’s reassuring. The mistake I miss may be the one they catch. I’m wondering, then, how to make proofreading a more collaborative act in the classes I teach. Yes, it’s part of the peer revision process, but through the work I am doing now I am coming to see that’s the wrong place to incorporate it. I’m thinking about adding a session for students to collaboratively proof their work before handing it in—a mirror of the process I am going through now.

When I'm finished reading these PDF files, I’ll be thinking about how to encourage my students to invest in their writing, to see proofreading as a crucial final stage, and to work together to polish their texts to the greatest degree possible.

Author | 08-23-2012 10:41 AM
Once upon a time, universities had identities, purposes, goals, mottos, even just functions. Then, in the 1990s, they suddenly developed "missions," in imitation of the corporate fad of the time whereby mission-focused "strategic planning" was all the rage. (I know this firsthand, having sat through endless strategic planning sessions at my university in the 1990s.) Today, strategic planning is regarded as a lumbering dinosaur in need of replacement by nimble-thinking campus president-CEOs who haven't the time to listen to students and faculty about the directions of their campuses. Instead, universities now have brands.

I bring this up in a blog devoted to the teaching of popular cultural semiotics to demonstrate once again that the purpose of such classes is not to celebrate our entertainment/consumer culture but to analyze it for signs of the kind of society we have become. And in seeing universities treated by university personnel as products to be branded and sold to "consumers," we can measure just how far we have gone into the hypercapitalistic obliteration of every other perspective on what life and society can be. When universities are brands rather than settings for learning and scholarship, they are not only being told to behave like businesses, they are being told that they are businesses.

This, of course, is very bad news indeed for the humanities, which have never been cash cows, and are now staring down their own potential extinction. (If you want to see a starkly honest assessment of the future of the humanities as an academic career, I suggest that you look at the site 100 Reasons Not to Go to Graduate School.) Academic pundits from Stanley Fish to Martha Nussbaum have weighed in on this crisis (Fish has taken the position that the humanities shouldn't even attempt to justify themselves to business-model-obsessed administrators, while Nussbaum eloquently makes the traditional case for the extra-economic value of training a citizenry in humanistic values), but they aren't changing anything. Indeed, the crisis is accelerating, much as the effects of global warming are, as is to be expected when our culture is coming to adopt the positions of corporate-think as if they were the only worldviews possible.

The situation has gotten so bad that I expect that some of my readers are simply shaking their heads at this obsolescent baby boomer who just can't get with the program. Well, okay, call me old-fashioned; I do not regard TED and MOOCs as my friends. Like the medieval monks and lay scholars who flocked to Europe's original universities as a haven from the violence and brutalities of the Middle Ages, I entered academia as a haven from the salesmanship and money worship of American society. I think that Death of a Salesman is a far more valuable document than the Forbes 500. The entrepreneur and the CEO are no heroes of mine. For many years, American society made room for people like me, carving out a space for something (the study of literature, philosophy, and art) that never could justify itself in market-based terms.

Ironically enough, I can justify the teaching of popular culture in market terms, both because there is a large demand for it among students and because its study (as I tell my classes) can be useful in such areas as advertising, market research, and the culture industries. But my emphasis is on critical thinking, not "Disneygeering." 
As I also tell my students, I am not personally threatened by this seismic shift in the nature of the modern university. Once upon a time, the American education system made room for people like me, and the career that I began many years ago will almost certainly serve out my time. Though we are moving toward a situation in which most "educators" will be little more than minimum-wage purveyors of digitized learning management systems, I'll be retired by then. It's the next generation that is going to pay the full price here, not mine. And they shall live corporately ever after.

Author | 08-09-2012 07:50 AM
I originally wrote my last blog post exactly the day before the massacre in Aurora, Colorado. Since my post was a cultural analysis of The Dark Knight Rises, I withdrew it once I learned what happened, out of respect for the victims. I return to the topic of Batman now because the tragedy has made me think further about the cultural significance of the entire Batman franchise.

This revised blog post differs from what I wrote originally; its subject is not any particular Batman movie, graphic novel, or comic book, but Batman himself, the passions that he raises, and what it all might mean. Even before Aurora, the new Batman movie was raising passions. Rotten Tomatoes had already had to take the unusual step of closing its comments section when the responses turned especially nasty after a couple of reviewers panned the movie. Now, one rule of thumb for cultural semiotics is that if people start getting passionate about something as trivial as a cartoon superhero who runs around in a bat suit, then maybe things aren't so trivial after all. Something is going on.

I've been pondering this, and realize that we can see Batman as a signifier within a cultural terrain that has been called the "posthuman condition" by a number of cultural philosophers. Though there is a great deal of disagreement about the precise meaning of the word posthuman, I think that it actually works pretty well when looking at a figure like Batman. For Batman is posthuman if we see "posthuman" as being equivalent to "transhuman," for with his body armor and arsenal of machines Batman is indeed something of a cyborg, and the transhuman cyborg is one of the tropes of posthumanity. There is plenty of evidence, from the 1970s Six Million Dollar Man to Donna Haraway's Cyborg Manifesto, The Terminator, and beyond, that the posthuman cyborg is a figure with a great deal of cultural resonance, and with whom people identify. At the same time, the fact that Bruce Wayne disguises himself as a bat is consistent with a posthuman fascination with crossing the lines of conventional species classification, with the Pandorans probably standing as the most popular current avatars (pun intended) of cross-species popularity, but vampires could fit in here as well. Heck, even zombies are posthuman in their way.

But I think that there is something else going on with Batman beyond the posthuman dimension. For one thing, not everyone equates the posthuman with the transhuman. That is, in the opinion of cultural theorists like Cary Wolfe, the celebration of the transhuman cyborg is an extension of Enlightenment attitudes toward science and technology, and ultimately reflects a Eurocentric and anthropocentric concept of humanity that goes back to the Renaissance and the dawn of the modern humanities. For such philosophers, posthumanism breaks from that tradition, deconstructing the European tradition of viewing man both as the "measure of all things" and as a white heterosexual male existing in a hierarchical opposition to the rest of the world. From this perspective, then, Batman is anything but posthuman. A rich white male who runs his own crime-fighting empire, Batman is a hero whose ethics are entrepreneurial and whose power comes from his successful position in a market-driven world (he's a rich banker). 
One might say, accordingly, that Batman is something of a "neoliberal" figure, someone who conforms to the neoliberal socioeconomic ethos that prizes entrepreneurial initiative, deplores governmental planning or intervention, and worships the "market." And here, I think, lies an important clue to at least some of the passion that surrounds this otherwise comical figure. For a culture war is raging in our society between those who embrace a posthumanist vision for humanity (that is, environmentalist, feminist, anti-Eurocentric, animal-rights-embracing, Queer, and so on) and those who cling to a neoliberalism that values technopower, market capitalism, individualism, and Anglocentrism. Batman is a hero to the latter—an often-youthful audience that also frequently holds libertarian views that are quite consonant with neoliberalism—and his fans who identify with him don't take kindly to any criticism.

Of course, most people don't use such terms as neoliberal and posthuman (conservative and liberal, respectively, are the common equivalents), so to put this in more ordinary terms (and cultural studies scholars should always be prepared to do that), Batman is a hero to those (often young men still in school) who are aware that their values, while widely embraced by society at large, are not shared by all their classmates or professors (the professoriate is universally regarded as "liberal" or "socialist"). And when fundamental values get challenged, that usually leads to passionate resistance.

So we don't have to get involved in a hopeless discussion about gun control and violence when it comes to the significance of Batman (after all, Batman eschews the use of guns, and he is hardly the only violent figure in contemporary entertainment). His meaning goes beyond violence into differing visions of what America should be. It is probably no accident that this latest Batman tale has Batman defeating someone who is metaphorically exploiting the Occupy movement; that's what any good neoliberal superhero would want to do. To put it all only half-jokingly, we might expect a new Batman story soon in which the villain's name sounds suspiciously like "Obamacare." I only hope he doesn't wear tights or a silly mask.

Author | 08-01-2012 07:37 AM
Hi, Bits readers! We were at the Conference on College Composition and Communication again this April and we sat down with more of the talented authors who blog here on Bits to talk about writing, blogging, and online community. We hope you enjoy this chance to get up close and personal with Barclay Barrios!

[embed width="425" height="350"]http://www.youtube.com/watch?v=358BfoZZb4I&feature=youtu.be[/embed]

Be sure to check out these conversations with some of our other Bits bloggers if you haven't already:
Jay Dolmage
Elizabeth Wardle and Douglas Downs
Andrea Lunsford
Steve Bernhardt
Susan Naomi Bernstein

Author | 07-26-2012 12:14 PM
This summer I've been rereading some novels that I haven't looked at since graduate school. When I first read them over thirty years ago they were presented to me as literary "classics," and, while I knew that they were first published as serial entertainment, I read them mostly as examples of high art and responded to them accordingly. Since then, the rise of cultural studies has deconstructed the lines between "high" and "low" art, and my reading of the quondam classics has taken a decidedly semiotic turn. Having just finished William Makepeace Thackeray's most popular novel, Vanity Fair, I thought I'd subject it to a cultural-semiotic reading in order to illuminate certain tendencies in current popular culture.

By the time Vanity Fair came to be written in the mid-1840s, England's transition from a feudal to a bourgeois society was well underway, and this is signified in the novel wherein such haute bourgeois families as the Dobbins, Osbornes, and Sedleys, whose fortunes derive from retail trade and finance, are leading characters rather than minor ones. The novel itself, as Wolfgang Iser would say, was written with members of that class in mind as implied readers, and it reflects their point of view. Typical of that point of view is the way that both upper-class and lower-class characters are portrayed in the novel. Reflecting the still-existing love/hate attitude that the bourgeoisie hold towards the upper classes, the feudal aristocrats in the story are at once objects of desire (scenes of luxury and "fashion" pervade the novel) and of moral disdain (from Sir Pitt Crawley, Sr. to Lord Steyne, Vanity Fair's baronets and peers are usually moral bankrupts and cranks). Similarly consistent with the bourgeois perspective, the novel's lower-class characters are condescended to, appearing solely as marginal "furniture," and often as the butts of snide comedy: two-dimensional caricatures who are ridiculed for things (like the servants' liveries) that they are forced to do and wear. Significantly, however (I'll get back to this), what might be called mid-level middle-class characters hardly appear at all.

The scene stealer of the whole novel has always been Becky Sharp, of course, whose social origin as the daughter of an unsuccessful artist and low-opera singer is quite murky. Morally condemned in the novel for her loose sexual conduct, indifference to her son, and her refusal to pay her bills, Becky nevertheless has fascinated readers and audiences since she first appeared in serial form, not only for her titillation value but also because of her indomitable will-to-succeed, with success being equated with high status and fortune.

Well, all of this is basic Victorian literature stuff, so I had best get to the point. What is striking, then, about Vanity Fair, and about so many Victorian novels, is the exclusion of a vision of satisfactory middle-level life. The bourgeois characters either rise to great wealth (the Osbornes and the Dobbins) or crash to poverty and misery (the Sedleys). The upper-class characters are lavishly revealed in their palaces and balls, but are generally morally, and often economically, bankrupt, and in fear of hereditary diseases. What is missing is a well-developed representation of something in between, a mid-level middle class. We do catch a glimpse here and there of such characters (like Vanity Fair's Clapps or Great Expectations' Wemmicks), but a glimpse is all that we get. 
More than a century and a half later, as the mid-level middle class in America comes increasingly under socioeconomic attack, we can see something of a return to the social vision of Vanity Fair in popular culture. For while, in the first decades after the Second World War, a vision of middle-class adequacy was reflected in pop culture (especially in the classic situation comedies of the fifties and the sixties), that began to change with the advent of the "dysfunctional" situation comedies of the eighties and the nineties, wherein that mid-level became an object of derision, while recent series like Breaking Bad and Weeds show that same class in a state of ongoing crisis. The center is not holding.

Thus, there seems to be something inherently unstable in bourgeois ideology, an instability that is built into what we call the "American dream." For a brief period in the mid-twentieth century, that dream embraced a broad-based vision of middle-class competency, but inherent in that dream has always been a desire for more than that, and so, just as in Vanity Fair, the center is falling apart. Goodbye Donna Reed. Hello Kim Kardashian.

Author | 07-12-2012 12:25 PM
One of the most important lessons to learn in the semiotics of popular culture is that if something is entertaining, it calls for an analysis, not an end to the discussion. Unlike high-art artifacts, which may indeed be created for very small audiences (or for no audience at all within their creators' lifetimes), the products of popular culture exist, by definition, in the hope that they will appeal to (i.e., entertain) a mass audience, and thus their analysis should reveal what it is that makes them entertaining and what that says about their audience and the culture in which they live. The point is not to treat the mass cultural artifact as if it were a high cultural one; it is to assess the significance of whatever it is that large numbers of people find entertaining.

I begin all of my popular culture classes with this explanation, because otherwise I fully expect the kind of reaction that inevitably occurs when someone in the popular press dares to present an analysis of the cultural significance of a current entertainment. For example, in a recent review of the movie Ted, Patrick Goldstein of the Los Angeles Times offers what is, in effect, a semiotic analysis of the appeal of the film. Acknowledging what he calls the likely "sociological" explanation for the recent success of such "man-child" themed movies as Ted—that is, the effects of "helicopter parenting, the rise of feminism, video games and a cruddy economy that has a larger percentage of 25–34 males living with their parents than ever before"—Goldstein proposes a counter interpretation that begins by situating Ted within an entire cinematic history from Stan Laurel to Will Ferrell, and ends by arguing that goofball comedy featuring immature men (and women) is simply a staple of the medium. Ted, for Goldstein, is just another instance of what might be called "The Three Stooges Syndrome."

Personally, I would give more credence to the sociological explanations that Goldstein underplays in his analysis, and I would also situate Ted within the American tradition that is so well explored in Leslie Fiedler's great study Love and Death in the American Novel. But no matter: Goldstein's analysis is a competent reading of a definite contemporary trend in American entertainment. What interests me here is the response that his column received in the "Comments" section of the LA Times. The hostility, the name calling, the sheer rage that his brief review elicited would be breathtaking if it weren't so predictable. His greatest sins, according to the comments, are twofold. First, he dared to analyze something that is "only" an entertainment, and second, he makes some generalizations about audiences.

Well, to echo one of the angriest responders: news flash, pal, Ted was a highly calculated entertainment package designed to appeal to a very large, and specific, audience, and its box-office success reveals that the generalizations that its creators made about that audience were perfectly correct. In other words, to make successful entertainments, mass cultural creators rely on their own generalizations about their intended audiences. They are, in effect, semioticians without portfolios. This is why I always have a large number of film and TV majors in my popular culture classes, and I make it explicit to my students that semiotics is just as applicable to the creation of popular cultural artifacts as it is to their analysis. 
Filmmakers, television writers, advertising teams, whatever—all anticipate the responses of their target market/audiences by reading the behavior of masses of people (advertisers even make use of such things as "psychographic" profiling schemes that really get into down-and-dirty stereotyping, because they work). What enrages people about this is that the American mythology of individualism absolutely rejects the predictable realities of mass society and mass behavior, but that is the essence of mass culture. The fact that America is at once the world's leading creator of mass culture and its most vociferous exponent of individualism is just another of the many contradictions that roil our society and that must be taken into account when trying to understand American behavior.

Author | 06-28-2012 07:21 AM
Ever since Bob Dylan channeled the spirit of Joe Hill and Woody Guthrie into rock-and-roll, American popular music has had (and has been expected to have) a political edge. With such senior citizens of political pop as Neil Young and Bruce Springsteen continuing to be popular and influential performers, alongside such more generationally current figures as Kanye West, it is certainly evident that this tradition (notwithstanding all the Justin Biebers and Britney Spears out there) continues. But when cultural analysts consider the political messages of popular music, they tend to restrict their focus to rock and hip hop, often ignoring (or perhaps dismissing) the potent political semiotics of country music. But if we want to understand just what is happening in America today (from the Tea Party to Rick Santorum), it would be a very good idea to listen very closely to contemporary country, which is no more simply about lonesome cowboys and bad whisky than rock-and-roll is about sock hops and Bobby Sue. I was reminded of this recently by a fine column in the Los Angeles Times by David Horsey. Horsey notes the paradoxical fact that even as the middle class (and, we could also say, middle America) falls further and further behind in the hypercapitalist rat race, country music sends messages of dubious reassurance that, in effect, prevent the middle-American victims of contemporary socioeconomic trends from attempting to do anything about what is happening to them. Horsey focuses on the lyrics from two current country music hits—Rodney Atkins's "These Are My People" and Lee Greenwood's "God Bless the USA"—to make his point. In the former song, Atkins describes the lives of stereotypical American rednecks, working at underpaid nowhere jobs during the week and only coming alive during manic weekends filled with beer and church-league softball. In the latter, Greenwood simply waves the flag, declaring that so long as America is the land of the free, everything else (from job losses to home foreclosures) can be endured and managed. [embed width="425" height="350"]http://www.youtube.com/watch?v=hkfokukjVV8[/embed] Such songs are filled with potent political shibboleths (like family and mom and dad) that express an ongoing counter-revolution against the cultural revolution of the 1960s (which, in the mythology of country music, was simply about free love and illicit drugs: Have a look at Forrest Gump—the cinematic equivalent of a country song—for a good example of this mythology). Sometimes that counter-revolution in country music goes back even further than the 1960s to the 1860s, as in a country song I once heard on the radio that openly blamed Abraham Lincoln for all of today's social problems (I kid you not). That country music is politically conservative practically goes without saying, but my point is that it is part of the texture not only of American popular culture but of American political life, and so it bears paying attention to. And what Horsey suggests is that country music not only expresses the counterrevolution against the cultural revolution, but also acts very much in the way that Max Horkheimer and Theodor Adorno classically argued that the "culture industry" did: that is, as a way of distracting its listeners from the social inequities that afflict them. 
By telling people that their nowhere jobs and extreme economic vulnerability don't matter so long as they've got family, freedom, and foaming pitchers of beer, country music not only expresses the feelings of conservative Americans, it keeps them in their place by making them proud of their oppression. But that is an irony that I fear would be lost on a country song.

Author | 06-14-2012 02:08 PM
One of the unavoidable challenges of creating pop-culture-themed textbooks is the rapid pace of cultural change. For example, when Sonia Maasik and I were preparing the sixth edition of Signs of Life in the USA, MySpace was the space for social networking (especially among the young), while Facebook was a small outfit generally used by northeastern college students and adults. Well, that sure changed fast. Similarly, while we were preparing the seventh edition of Signs of Life, vampires were the hottest living dead characters around, what with the exploding popularity of the Twilight series of books and movies, and TV shows like Vampire Diaries and True Blood. So we included a detailed analysis of the phenomenon in our text. Well, with the disappointing box office performance of Dark Shadows (how could Johnny Depp as a vampire disappoint?) and Kristen Stewart's moving on to new fairytale roles, it is clear that vampires are rapidly becoming old hat: yesterday's monsters. Goodbye Buffy, hello . . . zombies.

I expect that by the time we get to work on the eighth edition of Signs of Life, zombies will be getting stale, but right now they seem to have taken over the popular imagination. The phenomenon is so large that I couldn't begin to tackle it in a single blog, but I'll say that what matters most in the teaching of popular cultural semiotics is not being able to keep exactly up to date on whatever the current fad is, but instead being able to show the significance of the changes that inevitably occur. As different as the contemporary "romantic" vampire and the gory-gross zombie are, they share something in common: neither really exists. While there do appear to be some people who are seriously preparing for a "zombie apocalypse," zombies, in their incarnation as cannibalistic corpses (not their original form as the victims of Voodoo rituals), are simply fantasy monsters. Like vampires (and werewolves), they belong to a horror story subphylum with its roots in ancient mythologies (and probably even earlier totemic tales) but which today is a category within the fantasy genre of storytelling. So, for that matter, are stories involving wizards and extraterrestrials, which brings me to my point.

Ever since the explosive popularity in the 1960s of both Star Trek and The Lord of the Rings, fantasy (you can most certainly include costumed superheroes in the list) has moved from the fringes of B-movie and children's/adolescent literature status to become the dominant form of storytelling in our popular culture. And so, while we can see variations on the general theme (Star Wars ruled the later 1970s and 1980s, The X-Files and other Roswell-related stories towered over the 1990s, vampires took over in the "naughts," zombies rule the present, and so on), the overall trend has been consistent. It's easy, as is so often the case in popular culture, to take this for granted. But while fantasy storytelling is the oldest form of storytelling in existence, there is a crucial difference to be noted in the modern fantasy era. This difference is the fact that fantasy storytelling, at least since the latter part of the nineteenth century, had been regarded as something for children and adolescents. Indeed, the English novelist C. P. Snow evinced some irritation at the popularity among college students of J. R. R. Tolkien's novels: Snow thought that Tolkien's books were childish, and though they were published in the 1930s and 1950s, The Hobbit and The Lord of the Rings didn't become international sensations until the 1960s. And, in a cultural sense, fantasy stories were childish in the mid-twentieth century: something for afternoon movie matinees but not serious movie making. No more: fantasy is the lifeblood (pun intended) of today's movie industry. Most of the top-grossing movies of recent years have been out-and-out fantasy in one way or another.

As The Avengers breaks Harry Potter's box office records and The Walking Dead chomp their way through cable TV, it is clear that amid the fads there is a trend—and that trend is a sign of something I have had occasion to refer to often in this blog: it is a signifier of the way that American culture is a youth culture in which what were once deemed mere children's stories of little interest to adults have now become the shared culture of all ages. The implications of this are enormous, and a good class exercise would be to discuss the many ways in which America manifests itself as a youth culture. To clarify things, it would be useful to also explore the signifiers of a more traditional culture in which maturity brings power and prestige rather than being left on the shelf.

As always, I should mention that being a signifier of a youth culture is not the only significance of the current popularity of fantasy storytelling. The matter is overdetermined, and there are numerous other angles that might be explored. I may do so myself in a future blog.

Author | 05-31-2012 01:45 PM
So Jack Kerouac's iconic novel On the Road has finally (more than a half century after Marlon Brando changed his mind about starring in the first of many projected movie renditions of the story) made it to the screen. Far and away Kerouac's best-known and most popular book, On the Road is very useful for teaching popular cultural semiotics, not only because of its enduring appeal to college-age students but because of the crucial role it played in shaping the youth culture America has become.

[embed width="425" height="350"]http://www.youtube.com/watch?v=N9vsE0llyBM[/embed]

Ironically, it rather annoyed Kerouac to find that his breakthrough novel, published in 1957, would become one of the chief how-to texts of the sixties' counterculture. Himself a member of what is now fondly known as "the Greatest Generation," Kerouac had no affection for the baby-booming hippies who transformed the Beat vision into a mass cultural movement. It was Allen Ginsberg, not Kerouac, who avidly crossed generations to become the Pied Piper of the Age of Aquarius. Kerouac was a supporter of the Vietnam War effort, a friend of William F. Buckley Jr., and a jazz aficionado who resented rock-and-roll.

But the fact remains that it was On the Road, more than any other Beat text, that inspired the cultural revolution of the 1960s, and little wonder that it should have. Appealing to a fundamental American mythology of the freedom of mobility and of endless horizons, the book also appealed to a pampered generation of baby boomers who found in its rejection of conventional adult responsibilities a model for the extension of their own youth, a delaying of maturity that continues to this day in the ongoing evolution of America's youth culture—a culture centered in pleasure and entertainment. Thus a second irony: Kerouac wanted to be taken seriously as a high art novelist, but his legacy—the endless pursuit of youthful experience, or what he so famously called "kicks"—has been in pop culture.

But while virtually all Kerouac students celebrate the "mad to live" ethos that On the Road so effectively dramatizes, there is another side to the novel that sheds a somewhat startling light on the question I asked in my last blog: that is, "what happened to the spirit of the sixties?" It isn't just a rejection of conventional middle-class adult responsibilities that On the Road dramatizes; the book equally presents a world where no one feels responsible to anyone for anything. Dean Moriarty not only abandons his various women throughout the novel (the sexism of the book—and the Beats—can be quite stunning), he also abandons Sal Paradise in Mexico when Sal is helpless and ill. Not to be outdone, Sal abandons his California farm-worker girlfriend once he discovers that picking crops in the hot sun isn't as idyllic as he thought it would be, and at the end of the story he (returning the favor) essentially abandons Dean. That all these events were largely autobiographical doesn't make the picture any prettier.

The funny thing, then, is what a conservative book On the Road really is. In its celebration of absolute individualism, of responsibility to no one except one's own momentary need and desire, the novel is much more libertarian than it is communitarian. Indeed, I find it ironic that Kerouac's fans tend to link Kerouac to Che Guevara, when what followed in the wake of the cultural revolution has been just the opposite of communism: namely, a hyper-individualistic dash for the most gold and most toys. 
The world of conspicuous consumption where the rich get richer and the poor get nothing is the logical descendant of the hedonistic individualism that On the Road demonstrates. Finally, I think the fact that the original scroll manuscript of On the Road was sold for $2.46 million in 2001 to the owner of the Indianapolis Colts pretty much says it all: the "greening of America" turned out to be about greenbacks, not a new age. Fitzgerald had it right all along.