Teaching Sociological Theory Practically

As I’ve written on the topic of teaching classical theory several times previously (here, here, and here – and spoken a bit here and here), this post is devoted to thinking about teaching theory differently. Moving forward, if you will. I offer five steps to begin to reconstruct a course on theory.

Step 1: What are your goals?

I think that people think hard about their syllabi. I have no doubts about that. But, how do we unlearn what our professional training has done to us? It isn’t easy. Pick a top-10 sociology program, and you’ll find syllabi that reflect the arbitrary (though seriously thought out) reading choices and vision of theory of the two or three most invested professors. Generally speaking, they converge in some ways. Many, if not most, teach theory by presenting theorists they think are essential to being a sociologist. Bourdieu, Foucault, Marx, Durkheim, etc. From there, they engineer “theory.” Students generally learn little except that there were some big heavyweights, most of whom (usually all of whom) are dead. Of course, in a “contemporary” theory course, some “contemporary” theorists had the pleasure of dying more recently, and thus their writing is more approachable, while others died nearly two centuries ago and are impenetrable from the perspective of a 20-year-old who struggles to read more than 240 characters (which is a problem in itself).

More recently, students may learn some heavyweights were predictably marginalized and should be considered more closely than the old heavyweights. Fair. Some may even learn that the discipline itself is a colonial project of imperialism and should be completely remade from the bottom up. Sure, I suppose this is the critique du jour and somewhat cathartic.

While these choices have some rationale, when we step back and examine our goals – what we really hope to impart – one usually hears buzzy terms like “critical thinking” and “sociological imagination.” All of these are fine goals, and I have no critique of those choosing to stick with this style, but what if we wanted students to learn theory and not theorists? What if critical thinking was only one small component of a goal to present a more coherent view of the social world? Not necessarily a mechanistic view, but one that actually showed the links between theoretical traditions and their explanation of things around us? Courses in social interaction do this all the time, tying units on meaning, emotion, language, self, identity, role, status, and interaction together. In other words, what if theory wasn’t painted as an inchoate body of ideas that represented imaginary camps of ideologically driven social scientists? Instead, what if we noted that there are core concepts that enjoy a significant amount of agreement across time and space and which underscore the sociological project? Theory is not a hammer, but instead a cumulative body of explanations being tested. (I realize, as an aside, some will say this is false, impossible, or secretly ideological. There is no amount of facts or evidence that will convince this group. The logic of praxis demands only some people have true consciousness, while their opponents – who far outnumber them – either need their false consciousness raised or are shills for the system. I’ve got news for them: sometimes a cigar is just a cigar.) I’m confident we can do this with the discipline, or broad swaths of it, more generally.

One might protest here and argue the primary function of the sociological imagination is in providing a lens to help students become good humans and citizens. A worthy goal, no doubt. But, teaching theory with little ideological, humanistic, or critical inflection can achieve this as well, if not better. The reality is the modal sociology student is already oriented towards justice, humanism, and critical reflection. This is unfortunate, because we’ve become another echo chamber that rivals the ones we rail against. We all “know” the world is a terrible place. That any progress made in the last 200 or so years is not enough, and that it won’t be enough until some imagined utopian ideal is built. We know every person’s intentions are malicious, but not because they have agency – because some sort of systemic pressure is their puppeteer. At least, that is, everyone but me and my closest friends. This is perhaps why the small but exciting literature on joy is soooo radical! And so necessary to combat a joyless sociological project driven purely by animus towards anything we deem an enemy, as opposed to balancing this by celebrating groups, solidarity, and effervescent interaction (whoa! Don’t be conservative, Seth!).

Thus, at some point, you have to decide what we are doing. If we are busy teaching methods and stats, we should be using theory courses to teach students to use theory to motivate and make sense of their research. Empirical research is the weapon we use to critique the social world because it is rooted in the facts that provide ammo. Plenty of departments on campus, and classes to boot, are designed to inform students of the injustices of the world and the unfairnesses inherent in modernity, neoliberalism, capitalism, and other isms. Unfortunately, these “theories” are impossible to verify and, worse, impossible to falsify. Though some use them in sophisticated ways as theoretical concepts, usually they are convenient philosophical positions about why things are bad and why one’s own vision of the world is morally better and one’s tribe superior. John Levi Martin referred to this as sociological imperialism that lends a state-backed hubris to the Marxian notion that we somehow know better and are part of the vanguard party that will somehow shepherd our students to the promised land. To be sure, this is not a call for positivism or objectivism or other strawmen – if being pro-science is positivist, then I guess that is a fair pejorative. Rather, it’s a simple point: we are in the business of describing, explaining, and, when possible, predicting social phenomena. The gold standard to work toward is an explanation and subsequent testing of said explanation. When possible, policy-oriented sociologists may try to “control” or alter the environment, though that conversation is best left for another blog post, for another day.

Personally, in my experience, I find students get fired up when they are learning THEORY in the scientific sense. They are surprised we have theories, that there are people empirically testing them, and that they can actually explain people, actions, situations, organizations, and so forth. The sheer diversity of theory boggles their minds. They are excited to learn we aren’t just going to talk about how capitalism is destroying the universe, but rather that the social world has certain properties, even if the details vary. Communities, organizations, small groups, encounters, and the self have generic components that make comparison possible and exciting, and which lend a sort of coherence to the rather chaotic world the discipline paints from one course to the next. All too often, theory is hermetically sealed off from most other courses, despite its required status in nearly all departments. This sort of obfuscation, rooted in some sort of mystical, esoteric pedantry, hurts the subfield of theory and those who are trying to resurrect the idea of a theory specialist. That is why my students are pumped when the course they heard about from friends – a dull class focused on large, dusty tomes that might as well have been written in Sanskrit – turns out not to be the course they are exposed to. The class is not oriented towards getting them to give up their inclinations towards justice or their activist aspirations, if they have cultivated them already. Instead, they will have analytic tools to match these instincts and, perhaps, a sense of what they might do research-wise in a sociology department.

Step 2: Figuring out what Level of Analysis Makes Sense

Now that you’ve clarified your goals, the harder work is ahead. This may not be a popular opinion, or even one that is salient post-1990, but an effective theory course depends on figuring out the level of analysis you plan on working at. A slight digression to illustrate my argument. Durkheim’s classic text Suicide is famous for committing a major epistemic mistake: presuming that macro-level forces or phenomena (integration) can explain micro-level behavior (suicidality). The issue isn’t that integration has no relationship to suicidality, but that its causal relationship is dubious and nearly impossible to prove (or falsify). No need for hand-wringing over this gaping chasm in levels of social analysis. Macro and micro sociologies are just radically distinct in their logics, especially when working from the former “down” towards the latter. The issue is that sociologists have largely imagined these differences away. In part, that’s because our favorite theorists tend to work at the societal level, positing either grand, sweeping models or critical, philosophic treatises (e.g., Marx). We cannot really explain action at this level. It is for these same reasons that Goffman is incompatible with macro-level theories, and suffers from the usual criticisms of not accounting enough for power, inequality, etc. And, when we see efforts to bridge or erase the yawning gap, it leads to conceptual gobbledegook and magical incantations like structuring structures and structured structures.

I don’t think there is a right choice here. I just think a theory course demands we come to terms with this problem and teach what we think is best for whatever reasons we can ascertain. Personally, I start at the very micro, thinking through the dynamics students can see – emotions, meaning, identities, status characteristics – and slowly move up to relationships, interactions/encounters, small groups, organizations, and communities. These “smaller” social units are ontologically real and are easier to touch and feel. Their impact on our self is also discernible. We can then talk about the structures and cultures that are invisible and distant from the everyday reality we experience. Think about how statistics and demography reveal patterns, and then ask how these patterns and our biographies intersect without presuming the former causes the latter – because it doesn’t. To return to the Durkheim example: it is one thing to know that Protestants or unmarried men are at a higher risk of suicide than their counterparts, while it is an entirely different thing to ask why a given Protestant or unmarried man might be at risk of suicide. These questions demand local or more grounded contextual clues, as well as consideration of biological and psychological factors.
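To make this gap concrete in a lecture, a toy simulation works well. Here is a minimal sketch in Python (all rates invented for illustration; these are not real suicide statistics) showing how a stark group-level contrast can coexist with almost no ability to predict any given individual’s outcome:

```python
# Toy illustration of the levels-of-analysis gap. The rates below are
# invented for illustration; they are not real suicide statistics.
import random

random.seed(7)

# Group-level pattern: a fourfold difference in annual rates.
groups = {"low_integration": 0.0020, "high_integration": 0.0005}

for name, rate in groups.items():
    # Simulate 100,000 individuals per group.
    cases = sum(random.random() < rate for _ in range(100_000))
    share_unaffected = 100 * (1 - cases / 100_000)
    print(f"{name}: {cases} cases per 100,000 "
          f"({share_unaffected:.2f}% of members unaffected)")
```

The group-level pattern is dramatic, yet even in the simulated “high-risk” group, roughly 99.8% of members show no outcome – which is exactly the distance between Durkheim’s correlations and individual-level explanation.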

Of course, it is entirely legitimate to start at the macro. Sociology began as a comparative historical endeavor to understand how Europe became what it was. But, the properties and theoretical ideas from this large a standpoint – both in terms of temporal and geospatial realities – are incommensurate with thinking about and talking about individual-level behavior. At least, that is my opinion.

In the end, you must stake out a position, as these issues are bundled up with so many other issues related to global v. local, generic v. particular, diffuse v. specific processes. All of these binaries capture different elements of structure and culture and the challenges one faces when ignoring levels of analysis. Most of our social life – or what we even know of as our social life and, therefore, the things that are causal – surrounds our everyday existence. Our family, friends, workplace, and so forth. The “global” leaks into our bubble through the internet and television, more than ever before. But, the idea that distal invisible forces are causing our behavior is a fundamental error in theorizing. Intellectuals think in universals more than the average person; they tend to cultivate wider networks covering greater geographic distances; and, like all people, they tend to impose their experiences on others. The local is where most of our life unfolds, relatively autonomous from the political and economic forces sociologists imagine as puppet masters. That is not to say these forces have zero impact, because surely they do. But, at the micro-level they are background. The theories that are useful for explaining these things, consequently, reflect this and, I think, are unfairly and prematurely maligned by a segment of the discipline for crimes they have not committed.

Conversely, if we are interested in describing and explaining a society in a particular time or over time, or comparing societies in one time or over time, then we need different tools. Cognitive and affective intra-personal motors are insignificant at this unit of analysis. Situations become too ephemeral and contained in time and space to have real impact. Instead, we are talking about massive aggregated changes that have little to do with why or how people do the things they do, but everything to do with describing and explaining society or, at the lowest macro-levels, organizations and communities (which are really meso, I think, but that is beyond the purpose of this post). Consequently, macro-level processes and forces, or theories, are more abstract, general, and challenging for students to fully wrap their heads around. They also demand a greater historical consciousness than most students (and, unfortunately, many professors) possess, which makes pairing lively examples with dense theories difficult, though not impossible.

Again, no judgment here. I am relatively agnostic, and have theorized at just about every level. It took nearly two decades for me to come to terms with the incommensurability of macro and micro sociologies. For a long time, I figured I could paper over the differences, but as I stepped further outside of the conventional approaches to teaching theory, I recognized the two just were different – neither better than the other, just different. In Step 4, I do offer a way to avoid choosing, but it is not necessarily the best choice. For now, the reader needs to think about what “sociology” means to them and what sociological theory is most useful for students. And, then, come to grips with the possible and the impossible and accept the limits as ok.

Step 3: Read, Read, Read

I recognize the time crunches we all face. But, if one really wants to refashion and refresh their theory course, and transform it into something more than just the usual, then they have to read, read, read. Who has the time?!?! Make the time. The irony, I suppose, is we are all taught to read, read, read and subsequently keep up with our substantive area, but with theory, our backward necrophilic orientation makes it seem as if these “facts” are facts: (1) knowing the dead people, and particularly those your advisor, colleagues, and favorite scholars worship, is enough to be a theorist or teach theory; (2) theory is not a subfield in the same sense as, say, race or immigration, and thus has no body of cumulative work; and (3) no one needs to read a lot of theory to know theory. None of this is true. A theorist or a theory professor needs to read widely, and should try to learn at least 7 to 10 of the major theoretical traditions (I’ll return to this in a second). As an experiment, I ask readers to reflect on their own base of theory knowledge: have you read and do you know all of these well enough to write a lecture on them?

  1. Expectations States/Characteristics/Beliefs Theories
  2. Ecological Theories, both population level (Amos Hawley) and organizational (Hannan/Freeman)
  3. Affect Theory of Social Exchange
  4. Risk Theory (Beck/Luhmann)

I’ll leave it at those four, though I could add so many more examples. There is no reason to constantly dig up older, deader dudes or dudettes when we have a lot of rich, empirically tested traditions! It is mind-boggling that so many want to get into Hegel and Kant when Hawley’s (1986) theorizing and empirics are more refined and impressive than most classical sociologists’ (and many contemporary sociologists’). I suspect some avoid or ignore him because he sometimes writes formally, throwing propositions into his work. Likewise, the microsociological status theories are so important, not least because they’ve developed a research program that has been testing Weberian-esque predictions for half a century. They have the receipts.

More broadly, our allergy to treating theory as a serious subfield stems from two obstacles: one that is likely not going to change, and another I think is changeable. The former is the stubborn inability to define theory in a consistent, coherent way such that most syllabi operate on a relatively shared vision of reality. I recognize that our inclinations as a field are to keep adding and including and expanding the base of both who is deemed a sociologist (I see you, Marx) and what is deemed theory, but it has become untenable. There simply are too many people and epistemic systems that count as sociological. Some of these rest on political movements du jour, some rest on the need to appear original or fresh, and some on the massive unknowable body of “theory” that ends up leaving so many blind spots in a given student’s sociology background that they invent new concepts when well-developed (but now forgotten) conceptual tools already exist. I also realize that I am swimming upstream when I argue that theory and politics are not aligned, at all. If one designs a course as a political statement or adopts some sort of ideological selection mechanism, they aren’t teaching sociological theory. Social theory, maybe, but not sociological theory. Thus, they probably aren’t reading this, or they already disagree.

The second obstacle is far more manageable. It involves reading, extensively. I would begin by organizing readings based on a set of long-standing traditions:

  1. Classical Theory (everyone should know the basics, and nothing more unless they want to specialize in (a) hermeneutics or (b) history of social thought)
  2. Evolutionary/Ecological Theorizing – not only are these still alive and well, but they’ve become far less reductive over the last 20 years and are sites of cutting-edge theorizing
  3. Exchange Theorizing
  4. Symbolic Interactionist Theorizing (and Pragmatism)
  5. Microsociological Theorizing (includes Dramaturgy/Phenomenology/Ethnomethodology)
  6. Conflict Theorizing
  7. Structural Theorizing
  8. Culture-Cognitive Theorizing

This is a baseline and reflects the most common perspectives of theorizing since the classical era. It represents a cumulative body of traditions, all of which adopt scientific methods of varying kinds to examine and test their theories. In theory, one could design a course based on this list and whatever other bodies of theorizing I have unintentionally or intentionally omitted. But, once one has read enough in these areas, designing a course that is grounded in the depth and breadth of the discipline is far more likely than not. And, it will encourage fewer “names” and a more coherent tracing of concepts and ideas.

Consider, again, status characteristics theory (and its descendant, status beliefs). For nearly 50 years, this research program has been testing the theory in the literal sense of Kuhn’s “normal” science, slowly identifying the constraints and limitations of the predictive and explanatory model. A theory with 50 years of empirical evidence is a really mature theory, but two things block its path to greater recognition. One is the bizarre belief that sociology is in the business of finding patterns right up until a pattern is found and formalized, at which point humans suddenly become far too creative, rational, original, and random to act predictably. The hard-science approach to theorizing turns people off. Two, there are some real biases against experimental work in sociology that are nonsensical and, to be honest, boring. Like other well-worn tropes grad students learn from their advisors about this theory or that, these exercises in critique are boring and are not really useful for anything other than keeping sociology in some strange, chaotic status quo.

I digress. What makes status theories so great is that they are direct descendants of Weber’s own thoughts on status. Instead of using historical cases, they translated the logic of the theory into a microsociology, chose small task groups as one setting to examine the phenomenon of status, and used experimental methods to sharpen their lens. Nothing about the theory, besides its strictest adherents, prevents the assumptions, principles, or predictions from being applied beyond these conditions. It is a theory. Additionally, it not only pulls Weberian sociology into a course without having to teach only Weber, it also has clear links to macro theories of stratification, inequality, movements, and so forth. Ridgeway’s 1991 piece brilliantly makes this clear (and can bring Blau’s (1977) incredibly useful macro theory of differentiation and inequality back into the center where it belongs). Finally, I would add that this theory is an excellent way to think through stratification without making sociology the science of stratification. That is to say, much of sociology today tends to be a contest over which axes of inequality are the most oppressive, when we should be specifying the dynamics of stratification, inequality, and oppression generally and then testing them in cases. These axes can be used as specific examples for the theory, offer correctives, or simply be reserved for the substantive classes students will take that are almost totally consumed by them. Instead, arm students with the knowledge of (a) how status characteristics lead to expectations about behavior, reward, and competence that (b) create self-fulfilling prophecies for most people; (c) how repeated interaction in the same task groups can mitigate the most diffuse characteristics such that more specific ones become more powerful explanations for power and prestige in micro orders like work groups; and, finally, (d) how local groups can have counterintuitive power and prestige orders that do not reflect the societal stratification system, which, while not disproving the impact of the latter, raises really important sociological questions about members of the former. A student entering a career in human resources or any organizational setting will benefit from being attuned to how task groups reproduce stratification systems and how they generate new, sometimes surprising, ones. If one’s job is making co-workers’ and employees’ lives better, more equitable, and just, understanding status dynamics seems imperative, rather than using the blunt, oversocialized (conceptual) hammer(s) that most courses posing as sociological theory offer.
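For instructors who want to make mechanism (b) tangible, the self-fulfilling prophecy can even be demonstrated in a few lines of code. What follows is a deliberately crude toy model – not the formal graph-theoretic version of expectation states theory – and every parameter value is an illustrative assumption:

```python
# A toy sketch of how a diffuse status characteristic can become
# self-fulfilling in a task group. All parameters are illustrative
# assumptions, not estimates from the research program.
import random

random.seed(42)

# Four group members; 1 = advantaged state of a diffuse characteristic,
# 0 = disadvantaged state.
members = {"A": 1, "B": 1, "C": 0, "D": 0}

# Initial performance expectations derive solely from the characteristic.
expectations = {m: 0.6 if s == 1 else 0.4 for m, s in members.items()}

def simulate_rounds(expectations, rounds=200, learning_rate=0.02):
    """Each round, the chance to speak is proportional to expectations;
    speaking is read as competence and nudges expectations upward."""
    talk_counts = {m: 0 for m in expectations}
    for _ in range(rounds):
        total = sum(expectations.values())
        r, cum = random.uniform(0, total), 0.0
        speaker = None
        for m, e in expectations.items():
            cum += e
            if r <= cum:
                speaker = m
                break
        if speaker is None:  # guard against floating-point edge cases
            speaker = m
        talk_counts[speaker] += 1
        # Self-fulfilling prophecy: contributions raise the speaker's
        # expectation state; everyone else drifts slightly down.
        for m2 in expectations:
            delta = learning_rate if m2 == speaker else -learning_rate / 3
            expectations[m2] = min(max(expectations[m2] + delta, 0.05), 0.95)
    return talk_counts

print(simulate_rounds(expectations))
```

Run repeatedly, it tends to produce a participation hierarchy mirroring the initial status split even though “ability” never enters the model – which is precisely the theory’s point about diffuse characteristics.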

Step 4: Conceptualizing the Social World

At this point, the strategy revolves around building coherence in your syllabus. This is the most theoretical task of the theorist. The teaching-by-theorist method and its hybrids are not really intellectual labor, but rather the realization of dispositions long internalized in our sociological habitus. Deciding, as a scholar, what the social world looks like from your vantage point and then executing that vision through the careful curation of texts designed not to push a theorist or tradition, but to realize the vision, is an art and a science. Again, some of this will be constrained by the choices made in Step 2 and, of course, Step 1. But, a few alternatives aside, I do not know of a better approach to connecting the readings to your own thinking to the discipline at large. Students will SEE this, and I will offer a money-back guarantee that they will respond to it (though maybe not the first time you try, as there will be kinks you cannot predict, just as any course development or revamp inevitably produces). I’ll give a short example of one strategy I’ve used in the past. The figure directly below is borrowed from Jon Turner’s theory textbook. It is a simple heuristic device that visualizes the levels of analysis.

I begin every first class with this model, talking about all the different theoretical traditions and concepts that fit into these different boxes and levels of social reality. It is an anchor to which I come back frequently throughout the semester. I don’t buy it wholesale, and I modify it as I see fit. But, it is a device whose adoption implicitly and explicitly rejects the old way of teaching theory. It also suggests natural units or weeks of readings. One could design a course, for instance, on the bottom two boxes, their interaction, and the arrows from the two boxes directly above (corporate and categoric units). In a course like this, one might expose students to the various traditions listed above on interaction, pushing to ultimately show how exchange, dramaturgy, phenomenology, and other important microsociologies like interaction ritual theory or the affect theory of social exchange contribute to understanding and studying the panoply of encounters we inhabit.
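For those who think in code rather than boxes, the heuristic can also be rendered as nested data. This is my own loose rendering, not Turner’s actual figure; the unit labels follow the levels named here and in the next paragraph:

```python
# A minimal sketch (my rendering under stated assumptions, not Turner's
# figure) of the levels heuristic as nested data: each unit of analysis
# sits inside, and is constrained by, the level above it.
macro = {
    "institutional_domains": ["economy", "polity", "religion", "education"],
    "stratification_system": ["class", "status", "power"],
}
meso = {
    "corporate_units": ["organizations", "communities", "social movements"],
    "categoric_units": ["gender", "age", "ethnicity"],
}
micro = {
    "encounters": ["focused interaction", "rituals", "exchanges"],
}

# A course built "from the bottom up" works through micro first, then asks
# how meso and macro units load into the encounter.
for level in (micro, meso, macro):
    for unit, examples in level.items():
        print(f"{unit}: {', '.join(examples)}")
```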

Alternatively, if macro sociology is one’s bag, one could think about those middle boxes and the boxes above them. A few weeks on social movements, for instance, could start at the institutional and stratificational levels, noting how systematic inequalities mixed with oppressive machinations can generate new corporate actors (social movement organizations), which then work back on the macro level. This allows for a survey of political and economic sociological theories, movements theory, stratification and inequality, and so on. One need only note, whether taking the first or second approach, the limitations of the strategy: we cannot explain individual-level behavior with the second approach or see the full force and dynamics of the micro with the first.

The advantage of this approach is its scientific underpinnings and the avoidance of an overly functionalist bias. Instead, the student is provided a vision of how the social world is embedded at various levels, and how even the most micro or macro focus requires some thought about the other. The coherence, whether analytic or something one takes as ontologically true, supplies a more powerful working model for those interested in actually doing research. The questions one asks about selves or groups or organizations can easily be sealed off by the hermetic forces of subfields. Theory is, in the ideal, the force pushing people out of those vacuums. Consequently, a scholar of small groups must at least account for the individual-level dynamics and the broader corporate (e.g., organizational) and categoric (e.g., demographic characteristics of the corporate units) levels. It is holistic and represents a severe departure from the usual patchwork of sociology that is justified by some allusion to a ‘big tent.’

Step 5: Use These Theories to Craft Practical Assignments

Now what? The structure and substance of the course are designed; the last part is to try to link the ideas to something more practical. This is tricky, especially with macro-level sociological ideas because, well, observing “society,” “systems,” “stratification,” or other “big” phenomena is really impossible. These are abstract concepts employed to study things that are tough to study otherwise and, like Durkheim’s integration or regulation, invisible like gravity, even if, sometimes, we can observe the consequences.

Here are three examples that do not exhaust the options. First off, despite their protestations, I always put students in groups that endure the whole semester (this is optional, but for the third assignment necessary). In groups, they do several different assignments linked to several different units. The simplest is on, at least most explicitly, stratification, inequality, and value. The class is arbitrarily divided in half and given two identical assignments, distinguishable only in that the A group is tasked with thinking about men and the B group with thinking about women (this categoric distinction can be easily modified to suit one’s purposes). Students are asked to imagine they are the mayor of a town and have $500,000 to divide among 10 occupations (if you give them too much money, it becomes much more complicated). The occupations I use come from the workbook I snagged this assignment from, but they can be changed. The money needs to be somewhat difficult to divide; otherwise hard choices and implicit biases cannot be triggered. The occupations: medical doctor, professor, kindergarten teacher, cop, farm worker, fast food manager, trash collector, EMT, athlete, and bus driver. After distributing the money, students are asked a series of questions culminating with a reflection question about how theory helps us understand their decisions and what this tells us about American (and now Canadian) society. As a group, they must then work through a second worksheet, coming to a compromise about how to divvy up the money and answering similar questions. I have a decade’s worth of data from this, with a few interesting things I can show them after I walk them through the class averages, mode, median, and a few anonymized qualitative responses. (For instance, the gender differences in, say, medical doctors or professors are much less pronounced in Canada than in the States.) It raises some really cool discussions about class, status, inequality, and so forth. (Especially when the inevitable 2-3 students divide the pot equally and are forced to defend the choice, which is a great moment in class conversations.)
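As a practical aside, tabulating that kind of worksheet data is a quick job once the responses live in a spreadsheet. Here is a minimal sketch, assuming a hypothetical CSV named allocations.csv with columns condition, occupation, and dollars (the file name and layout are my inventions, not the actual data format):

```python
# A minimal sketch for summarizing the allocation worksheets.
# The CSV name and columns are assumptions for illustration.
import pandas as pd

# Each row: one student's allocation, e.g.
# condition (A = men, B = women), occupation, dollars
df = pd.read_csv("allocations.csv")

# Average and median allocation per occupation, split by condition,
# to surface implicit biases in how the $500,000 gets divided.
summary = (
    df.groupby(["occupation", "condition"])["dollars"]
      .agg(["mean", "median"])
      .unstack("condition")
)
print(summary)

# The A-B gap for a given occupation (e.g., medical doctor) is the
# discussion starter.
gap = summary[("mean", "A")] - summary[("mean", "B")]
print(gap.sort_values(ascending=False))
```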

Second, after reading/discussing work on integration, interaction rituals, and so forth, and during the unit on encounters and local culture, students break into teams of two from their groups of four and find a field site to sit and observe. A dry run is designed to practice doing field notes and observation, while a second run is meant to record data on what’s happening. The smaller teams each work independently, initially, and then debrief together about what they both saw, what unique things each caught, and what this says about the field site and about the researchers themselves. In some cases, I’ve modified this assignment to push one member of the two-person team to be a participant in the action while the second is just an observer; then, they flip these roles. They are even encouraged to breach, if possible.

Finally, the third assignment is more meta and returns to status expectations theory. When the groups are assigned, I save 10 minutes on the first day of class for them to exchange phone numbers, create WhatsApp or text message groups, and chit-chat. They each have to fill out a worksheet asking for their snap judgments about each group member and their likely contribution, etc. I do not tell them what this assignment is about and, because it occurs long before the status expectations lectures/readings, it is unlikely to be something explicitly on their minds. The goal is to think about how we evaluate others in our first impressions and what markers trigger our biases. Subsequently, each time they do a group assignment they fill out the same worksheet, which also asks them to reflect on the previous worksheet and whether things have changed, stayed the same, surprised them, and so forth. The last worksheet comes after they’ve had the full status expectations lecture, and thus adds more theoretically informed questions along with questions about what they’ve learned and whether they’ve carried these tools beyond the classroom. It is key to remind them, in person and on the sheet, that they are not to share this with anyone, that you will not share it with anyone, and that their grade is independent of the group’s “success.” Not all, but many, “get” the assignment and write some really reflective, penetrating things.

Each of these assignments is designed, then, to get students to observe their reality – not some “distal” reality, like the news or current events or whatever – through the tools being provided. That was, in effect, Garfinkel’s goal with the breaching experiments (which I also use, but felt were perhaps too obvious to include here). The experiential piece pushes back on the “sociology is common sense” bit: yes, many things are obvious to them, but reflecting on why and what it means is the step beyond the concrete to the abstract that is necessary to move towards theorizing.


So, You Are Assigned Classical Sociological Theory in the Fall…

My best advice: RUN! Of course, this is tongue-in-cheek. This is the first of two essays I am writing on teaching theory. It’s been some time since I put words to paper on this (here, here, and here), and my thinking has evolved and changed. This essay is largely devoted to what I think is a nasty hangover in the guise of inertia: the insistence that we teach classical theory as a requirement. I won’t waste too much time explaining why or how I came to adopt my strong, counterculture view on classical theory, but I will say a few words about it – including the contradictory notion that I see value in learning about the classics (yes, I can and happily will hold two opposing views). Next, I offer some suggestions if one has no choice. Then, in the follow-up essay, I will give some practical thoughts that have emerged as I have wrestled with designing and redesigning my own theory courses.

Mired in the Past

Every fall, I begin to wonder why on Earth the classics keep receding further and further in our rearview mirror and yet we keep on teaching them as though they matter more than, say, the sizable chunk of not-classical, not-contemporary sociology that comprised the pre-WWII/post-WWII era – the sociology that directly influenced the now-retiring wave of 1960s/1970s sociologists who trained many of us or our advisors.

It is ridiculous that we continue to force students to contend with Marx’s evolutionary model despite its complete divorce from empirical reality. Or that we push students to struggle with Durkheim’s first text on the division of labor despite his own doubts about his claims (in the very manuscript and elsewhere). Or that we keep searching for new folks to add for one reason or another (why not Plato?!). All the while, theory – as in concepts and their relationships designed to describe, explain, and predict social behavior and organization – remains largely hidden from our students. In the end, the class often feels like a philosophy course and not a social science primer on motivating research, making sense of data, and organizing scientific writing.

Before going too far, let me acknowledge that I do read the classics and I enjoyed reading them as a grad student. But, I did not enjoy my undergrad or graduate classical theory courses, because they felt so divorced from the other substantive courses and, in the graduate case especially, so divorced from what contemporary theorists were doing that it made little sense. I realize, of course, that theory is a contested word in sociology; and, while I err on the side of sociological theory as scientific theory, I am more than comfortable accepting a form of theory that is closer to biology than physics. Either way, in my mind, theory remains a mode of abstraction designed to explain some sort of phenomenon by raising it beyond the specific details of the case. To be sure, theories sometimes describe, as with Weber’s ideal typologies, but classification systems are really only as good as their capacity to inspire explanation. I’m also comfortable with some level of atheoretical work. Good descriptive demography matters and should not be cast out. Finally, I also recognize that sociology is a big tent discipline, but I will die on the hill that we are a science and not critical humanities, journalism, history, or philosophy. Every university has clearly delineated disciplines that train their students in these fields. There is no need for us to encroach on their territory acting as though we can do their thing better than they can. Rather, we should double down on sociology as a science of human societies. To that end, classical theory is such a bear to deal with for several reasons – though I will only list a few.

  1. What is classical? In some ways, classical likely harkens back to a classical liberal education steeped in Greek, Latin, and Hebrew. But our classics are a century-plus old, not archaeological artifacts. In another sense, classical refers to an era that is sharply delineated from the current era. But where is that demarcation? Is the line between 1800 and 1900? Is it pre-WWII? Perhaps, but then where do Parsons, Merton, Shils, Hughes, and others prominently writing in the 1950s and before fit? Is it 1960 to the present? Does a theorist need to be dead? Are Bourdieu and Goffman classic because they’re dead, while Giddens is contemporary because he is alive? It is impossible to even determine this, yet we keep the course on the books. (And don’t get me started on the fact that theory is not really treated as a viable subfield, or theorist a viable specialization, in the current configuration of American sociology, so what is contemporary is primarily dead people despite the fact that many people are still doing theory and theory is perhaps as good a subfield as it has ever been.)
  2. What is sociological theory? I hate to quibble over words, but I do think most recognize the distinction between social and sociological theory. The classical theorists, most of whom had no disciplinary allegiance or professional socialization, blurred the lines, but most of us recognize the differences. The latter tends towards explanation of phenomena while the former strains towards broad systems of meaning and understanding designed to get at big questions that are impossible to falsify, let alone verify, but which feel “deep.” The Frankfurt School and critical sociologies hew towards this end, for example, picking up where Marx’s soteriology and eschatology end. When we dig into questions of “human nature” – not the evolved biological capacities we share with apes, but the philosophic questions of whether we are naturally good or bad – we veer into social theory. When we ask what the “good society” is, we lean into social theory. I am not against social theory. Marcuse’s One-Dimensional Man is what pushed me into theory. But, what is the point of requiring a course that is logically connected to methods and research but which does not work to actually connect the two aspects of research together? Put differently, if we aren’t teaching sociological theory while emphasizing research methods to our students, then we are building an artificial and unnecessary barrier between required courses and the very backbone of the discipline.
  3. Poring over ‘ancient’ texts is best left to Biblical scholars and archival historians. That we have to read Durkheim or Marx over and over implies they lacked the sort of clarity and precision that we would demand of any contemporary theorist. For instance, Durkheim never defines integration or regulation, sometimes arguing they are totally independent of each other and sometimes arguing the former causes or precedes the latter. The missing definitions resist consistent, effective operationalization, while the inconsistency prevents the sort of common language scientific communities require if they are to build cumulative knowledge. The same issues arise around two of the most famous sociological concepts: alienation and anomie. We use them every day, but they are difficult to define and measure because they are too rooted in vague texts. (As an aside, the exaltation of ambiguity seems to spread to how we decide which contemporary theorists are best. Unlike the classical theorists, who had few standards or training to lean on, Bourdieu purposefully obfuscated his whole career, making his theories feel deep and original, but mostly just making them “useful” because of their lack of clarity.)
  4. Finally, something that has become very clear recently is the level of analysis most, though not all, classical theory works at. For the most part, these theorists were speaking to people who were classically educated and interested in the historical and evolutionary past. Their data were not always solid and their intellectual pursuits were often colored (and constrained) by their social milieu, but sociology was the science of society and not American society. Their theories work with time scales much larger than we are used to working with and far more expansive space than ordinary. Consequently, their level of generalization and abstraction rests at the level of biological theorizing, which runs against the grain of most people’s ability and training. We don’t think evolutionarily, despite the fact that evolution and genetics are undeniably important. We do not think historically, but rather impose presentist logic in ahistorical ways. Most criminally, we often ignore the massive theoretical and empirical disjunction between the macro world Weber or Marx were thinking about and the micro world that consumes our own ego and sense of self. That is to say, more often than not we casually and unthinkingly commit the ecological fallacy.

In short, there are so many problems with making classical theory a required theory course as opposed to a survey of the history of the discipline – one, incidentally, that would not artificially separate the now from the then by collapsing the middle era out of which most of the now emerged. Teach the history, but leave theory for teaching theory. The historical course can right past wrongs, can think about how social thought evolves and moves over time, and how scientific communities are social entities subject to social dynamics. But, then, isolate the concepts and principles that are theoretical and teach theory in a course about theory. So, if you are teaching classical theory next term or in the future and want to change, even a tiny bit, what would that look like? I wrote a few ideas on the back of a cocktail napkin and am expanding on them below.

Rethinking What We Are Up To

  1. Leaving the Humanities Heuristic Behind: I was trained in classical theory, initially, through the classic Coser text, Masters of Sociological Thought. It meant a lot to me. It still does. But, at this point, I see little value in the humanities version of theory. The great man (and now, increasingly, woman) model of teaching theorists needs to be thrown away. Unintended consequences abound, not least of which is the tendency to elevate contemporary theorists to some god-like status (I’m looking at you, Bourdieu, and sort of you, too, Foucault). All of which obscures the hard work of learning to use theory to motivate research, make sense of your data and case, and write clear papers using theory. The least one could do is abandon this antiquated model. I know it can be scary, but it is worth doing. What makes this so hard, in particular, is that theory has finally become more inclusive in recognizing and lauding the contributions of marginalized folks, like W.E.B. Du Bois. But, that story needs to be told in a history-of-the-discipline sort of class, and Du Bois’s scientific contributions pulled out from the rest so that they can be incorporated into the framework of a new sort of theory course. He can remain cited and celebrated, but, like the rest, should be set aside to make room for the science.

    The most likely criticism, and the fairest, I think, that this approach would/will/does receive is that the classics are the one remaining source of common socialization in sociology. I do not dispute this argument. But, I also think that isn’t a good argument for pedagogically poor decisions. Moreover, if that is truly the reason, it says a lot more about the discipline itself. An alternative way to view this: if most classical theory courses abandon the biographical heuristic, perhaps textbooks and texts will fill the void with increasingly standardized versions of a new theory course that can substitute as the center, holding the diverse array of sociologists to some sort of anchor.
  2. Seek Wisdom from our Elders: Before the reorganization of theory into classical and contemporary, and the construction of the former into some sort of pantheon or hall of fame, sociology wove theory into the substantive subfields. Admittedly, some of this stems from the simple fact that functionalism dominated the landscape for a significant period of time, supplying a relatively common set of concepts and principles. But lurking behind these assumptions were organizing heuristics much different from today’s. For some, sociological theory was best taught by selecting certain themes around which constellations of texts orbited. At one point years ago, I used S.N. Eisenstadt’s argument that classical theory was centered on three primary problems: the problem of integration, of regulation, and of legitimation – community/solidarity, power/authority, and meaning. To this, I happily added the self, given that at the time of Eisenstadt’s introduction to sociology, symbolic interaction was radically distinct from the center of the field. Once thematized, it was easy to find readings from a plurality of writers that zoomed in on key issues I felt were central to these themes. No biographies, just an attempt at crafting cohesive, coherent pedagogical units.

    Years later, I shifted to Robert Nisbet’s 1972 text on sociological traditions. Nisbet argued that there were five central axes along which society changed, most rapidly and visibly, in the 1800s: community, authority, status, the sacred, and alienation. Like Eisenstadt, Nisbet did not see one theorist as dominating one theme, but rather multiple theorists thinking through why and how these changes came to be and, especially, what they meant for the world they inhabited and, perhaps, the future. Community and authority are self-explanatory, whereas status captures a much larger set of things. At its simplest, Nisbet identified the fact that people’s identities and esteem were once wrapped up in ascribed patterns of status that gave way to an achieved form of status. But, Nisbet took this further to capture the transformation in classes of people and stratification, the emergence of myriad hierarchies, and the differentiation of culture and symbolic reality. One could, for instance, pair Weber’s Class, Status, Party with a reading from The Religion of India on status and class alongside Du Bois’ work on the color line and Veblen’s conspicuous consumption. All revolve around this idea. The sacred, something largely ignored by modern sociology, referred to Weber and Durkheim’s interest in religion as a force of unification and of change; and, of course, as something losing its omnipotence. Finally, most theorists recognized that there were negative consequences wrought by industrialization, atomization, and the like.

    The strength here is that one could choose from a range of themes. And, these problems or themes remain evergreen. Ascribed and achieved status, for instance, remain forces today; for all of these theorists, the change was one of degree, not of kind. Likewise, community and authority, integration and regulation remain central factors in everyday life, local politics, and at the larger levels of social life like the state. The issue, as is always the case with classical theory, is that macrosociology feels far, far away. But, these themes touch on the small groups we belong to as much as they do on the macro-level societies with which we feel subtle connections. Our families, peer groups, classrooms, dorm halls, and so forth all reflect these dynamics. And, one can always toss in microsociological ideas from symbolic interactionism as well as the myriad traditions of the 1950s-1970s that moved Weberian principles of status to the group level.
  3. Embrace the Science, Eschew the Rest: If one stopped here, their classical course would already be fresher. But, let’s say you want to take it to the next level. Embracing the science of the classics and tossing out the rest is the most radical step left – besides throwing the whole class out. Let’s take Durkheim as an example. The first route is teaching Suicide as a theory-building text. Durkheim has two principles: one, suicide rates are a positive function of the structure of social relationships and, two, structures of social relationships vary in their levels of integration and regulation. That’s it. From here, there are so many interesting lessons: can we find how he defines these concepts conceptually? Operationally? Can we conceptualize other things that might vary as such? (For one way a student might formalize the exercise, see the toy sketch just after this list.) From there, the myriad treatises beginning in the 1950s that seek to clarify, expand, critique, and deal with Durkheim’s theoretical frame can introduce the student to how theories evolve in the crucible of empirics and within the larger community of sociological knowledge. Another possibility: Durkheim’s Elementary Forms has a basic thesis regarding the emotional and behavioral dynamics behind the construction and maintenance of collectives, whether couples, small groups, organizations, or bigger amalgams. Forget the rest. That is the gist. One can then introduce the three recent theoretical efforts to leverage this insight and expand it: Goffman’s Interaction Rituals, Collins’ Interaction Ritual Chains, and Lawler’s Affect Theory of Social Exchange. Each offers key additions and clarifications. From there, one can introduce basic social neuroscience, ethology, and psychology showing that Durkheim’s theory is, in fact, supported by strong interdisciplinary empirical evidence. This exercise opens up a range of applications and research puzzles demanding study.
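Here is that toy sketch: a playful operationalization (my construction, not Durkheim’s own formalization) that maps his four types – egoistic, altruistic, anomic, and fatalistic – onto low and high values of integration and regulation, with risk rising as either variable departs from a moderate midpoint:

```python
# A toy operationalization of Durkheim's two variables. Thresholds and
# the risk function are illustrative assumptions, not Durkheim's own.
def suicide_risk(integration: float, regulation: float) -> dict:
    """integration and regulation scaled 0 (very low) to 1 (very high);
    risk rises as either departs from a moderate midpoint."""
    def deviation(x: float) -> float:
        return abs(x - 0.5) * 2  # 0 at the healthy midpoint, 1 at extremes

    types = []
    if integration < 0.25:
        types.append("egoistic")
    elif integration > 0.75:
        types.append("altruistic")
    if regulation < 0.25:
        types.append("anomic")
    elif regulation > 0.75:
        types.append("fatalistic")

    risk = max(deviation(integration), deviation(regulation))
    return {"risk": round(risk, 2), "types": types or ["low-risk"]}

# Low integration and low regulation, e.g., Durkheim's unmarried men:
print(suicide_risk(integration=0.2, regulation=0.2))
# -> {'risk': 0.6, 'types': ['egoistic', 'anomic']}
```

The point of the exercise is not the numbers, which are invented, but forcing students to see what it would take to turn Durkheim’s prose into something definable, measurable, and falsifiable.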

I could go on. Weber’s natural experiment, in which he pits Europe against China and India, provides an opportunity to think about historical methods as theory-building and testing. Mead, and really Blumer, provide the foundations for a diverse array of contemporary experimental programs like the aforementioned Affect Theory of Social Exchange, but also Identity Control Theory and Expectation States Theory. Each of these, moreover, provides insight into how these basic building blocks are married to other theoretical ideas and how this informs the types of questions people ask and try to answer.

Final Thoughts

There are two other points I might make. The first is to resist presentism. I get that many aspects of the classics cannot be interpreted apart from the social milieus in which they were written – all the more reason to shine the light brightly on the science – but I also think it is possible to resist recasting them in ways that would make no sense to the authors. Durkheim is, again, a great example. The idea that his sociology is colonial, imperial, white supremacist, conservative, or whatever the preferred critique du jour makes zero sense. Durkheim was a Jew in the 19th century living in a far more religiously saturated world. He was anything but a part of the system, unless you consider his status as a university professor complicit in some essentialist, oversocialized logic. (And if that is the case, then all professors, then and NOW, are tools of the state.) He was marginal. It is hard, from a modern view in which Jews in many Western countries are viewed as white, to remember that this is a recent construction dating to, say, 50 or 60 years ago at most. If that is not enough to convince the skeptic, go read about the Dreyfus Affair. In his day, his sociology was radical. He cannot help it that religion and civil life split so radically decades after his death. The beauty of my recipe for recasting classical theory is that little of this matters. Either Durkheim’s theory of suicide or interaction ritual/group formation can be empirically verified or it cannot. It either makes sense of data and motivates research or it does not. Just as Darwin’s model of natural selection is either useful or not, the classics are only useful insofar as they provide scientific knowledge. Otherwise, the mismatch between our social milieu and theirs is far too great to overcome. Their education and what their average reader would understand is extraordinarily divorced from what an educated reader today would understand, let alone a 20-year-old. Rather than reduce them to caricatures of some political or social issue today, let’s toss the junk out and leave those debates for philosophers or activists or someone else.

The last fix I would consider? Go big! If you really want to radicalize your classical theory course, make the ultimate criterion for inclusion death. If one no longer breathes our air, then they are fair game. This exercise makes things harder, because it pushes us to compress nearly a century’s worth of theory into a class already suffering from a time crunch. But, it forces us to be creative and think harder about the themes or problems we want to emphasize as central to the sociological project. It cuts the fat, so to speak. It also encourages the sort of scientific emphasis I think is necessary for sociology to take its next steps forward and be treated by the public and policymakers with the dignity it deserves. We should be churning out lay sociologists who are better than just critics of their environment. They should be able to spot the social dynamics we know a lot about rather than be expert complainers. Likewise, the other wing of sociologists we churn out – those expert statisticians and ethnographers – should have the theoretical edge necessary to translate their work into sociological change instead of economic or psychological change. Going big ensures the classical theory course is not perfunctory or radically disassociated from the rest of a sociological education.


Making Sense of Affective Action, Part 1

Besides Charles Cooley and Emile Durkheim, most classical sociological theorists looked askance at emotions. The Cartesian duality that sees rationality, reason, and logic as masculine and emotion and feeling as feminine was alive and well. I’ve tackled the idea that this is an untenable position here and here, so no need to dive too deeply into the basic arguments. Instead, I propose asking and answering the logical follow-up to those essays: what is affective action?

Perhaps the most explicit statement on affective action came from Max Weber, whose outline of sociology famously included a typology of action: traditional, affectual, value-rational, and instrumental-rational. Weber (1978:25) had this to say about affectual action:

Purely affectual behavior also stands on the borderline of what can be considered “meaningfully” oriented, and often it, too, goes over the line. It may, for instance, consist in an uncontrolled reaction to some exceptional stimulus. It is a case of sublimation when affectually determined action occurs in the form of conscious release of emotional tension. When this happens it is usually well on the road to rationalization in one or the other or both of the above senses.

The orientation of value-rational action is distinguished from the affectual type by its clearly self-conscious formulation of the ultimate values governing the action and the consistently planned orientation of its detailed course to these values. At the same time the two types have a common element, namely that the meaning of the action does not lie in the achievement of a result ulterior to it, but in carrying out the specific type of action for its own sake. Action is affectual if it satisfies a need for revenge, sensual gratification, devotion, contemplative bliss, or for working off emotional tensions (irrespective of the level of sublimation).

Much of sociology’s handling of emotions can be read from this quote. For one, there is a healthy skepticism that affective action is social action, or oriented towards others. In part, this is because it is reactive, impulsive, and, especially, lacking in cognition. Humans are special, as the story we’ve told ourselves goes, because we have big beautiful brains, language, and can control these passions! Affect, then, for Weber is something to be channeled into substantive or value rationality. The pursuit of ultimate ends like salvation or truth sublimates our base, animal reactions into human endeavors.

Of course, this is an ideal type, which means Weber is aware that it is a heuristic tool for understanding why empirical cases diverge from expected instrumental motives. But, we can set that aside as the point stands: sociologists struggle with making sense of how feeling, thinking, and doing relate to each other. So, back to our initial question: what would a theory of affective action that is social look like? In this essay, I will talk through two different forms or types of pleasure and then introduce the idea of (affective) action tendencies.

Two Types of Pleasure

Affect, for better or worse, is tied tightly to pleasure and pain broadly defined. It is more "primitive" than cognition, rooted in two drives that all organisms share: approach and aversion (Lang and Bradley 2010). In mammals, this dual system of affective motivation remains intact, but it is more complicated because of our enhanced cognitive capacity and (typically) social nature. We still approach or avoid for the sake of homeostatic drives (hunger) and sensorimotor experiences (pain), but layered on top of these are affective systems that evolved in response to environmental pressures, social complexity, and the like (Panksepp 1998). I've discussed much of this in more detail here and here. For my purposes, the central point is that one of these affective systems, what Panksepp calls SEEKING, is innate to all mammals. It is a dopaminergic system located in the midbrain, but which has dense and numerous neural pathways leading "up" from its subcortical architecture to the grey matter that we believe makes us special. Moreover, pursuing things that stimulate the SEEKING system does not satiate it or scratch the itch; mammals will self-stimulate (Berridge 2018, 2023). Like a cocaine user who burns through their entire stash in search of that initial high, rats, apes, and other mammals will keep scratching because the pursuit itself is pleasurable.

Often called wanting, this system is a tonic or ceaseless motor. Its evolved function, in theory, is to drive neonates to find their primary caretaker for the sake of survival. It compels us to keep track of our mom or dad as they move about the environment; to follow them, like ducklings; and to become hypervigilant when we lose sight of them. The system is elastic, like all affective systems, and intimately tied to learning. Any object – physical, social, or abstract – can become something we SEEK. Our phones become things we keep close track of, for instance. We reach for them, unreflectively, when we first wake up, when we are in a meeting and bored, when we are driving. And the movement of reaching and touching, and even the very thought of those movements, fires dopamine – and dopamine is affectively rewarding. It is pleasurable.

This is not the type of pleasure that sociologists often caricature as their strawman against simplistic economic theories of action or Freudian specters of psychological action. A separate neural system, called the liking system and tied to the release of opioids upon consumption or manipulation of the object we want, is the source of hedonic enjoyment. It is partially dissociable, neurally, and research has shown that while the two often work in sequence, they operate apart in many cases. Addiction is the most obvious example (Berridge 2023). Here, intense craving drives actors to find their fix, but the fix only weakly triggers the liking circuitry; in many cases, it doesn't trigger it at all. People feel no pleasure. But, their dopamine fires intensely during the craving stage. Conversely, evidence for disinterested experiences exists (Chatterjee and Vartanian 2016). When "consuming" art, for instance, the cycling between the wanting and liking systems observed in, say, eating a steak is absent and only the hedonic pleasure spots fire.

What this all suggests is that there are two types of pleasure, if we were to oversimplify things. One is predicated on all the elements associated with wanting things. Curiosity, anticipation, preparation, and so forth can all be pleasurable in theory and practice, while pleasure in the colloquial sense of the term is its own domain. The two can be tied together in many ways, but they are distinct enough to draw a clear line between them. Sociology, then, gets affective action wrong in four ways that this shift in conceptualization reveals. (1) Affect drives action more than sociologists imagine, in part because cognition is impossible without it and is very much caused – either commanded or controlled – by our subcortical affective systems. (2) Unreflective motor responses may be totally kneejerk and thus reactive, but emotional consciousness is still consciousness even without cognitive awareness. Perception, attention, labeling, retrieval and storage of memories, and decision-making are all affective processes as much as, if not more than, cognitive ones. So, Weber is partially correct: some affective action is purely reactive and, thus, perhaps not social in the sense he means. But, most affective action is. (3) Wanting is proactive, and even when habitual or routinized, deliberate in the sense that it is controlled, guided, and intentional. (4) Finally, proactive affective action is sometimes about prediction, drawing on schema or mental representations that suggest expected rewards, but prediction and pursuit are not tightly coupled in reality. A lot of what we do is in spite of predictive models. Indeed, learning the value and salience of objects is affectively rewarding – that is, generating, reinforcing, or updating schema is a piece of affective action.

Below, I expand on this to show the tendencies this proactive model of affective action suggests.

Affective Action Tendencies

Borrowing from Frijda’s (2006) use of the term action-tendencies to describe emotions, I would argue that there are four forms or ways that affective action disposes us to act: curiosity, hedonic pleasure, learning, and desire. This is to say our body and mind are predisposed to act, without prompting or environmental stimuli, to aimlessly search an environment for no specific or cognitively deliberate goal; to engage more deeply with unexpected, unplanned pleasurable objects or activities; to want to understand and explain the environment for no reason besides the fact that there is pleasure in learning; and, because we make predictions about things that will reward us and we purposefully seek that which we desire. The first two forms are purely unreflective impulsions, to borrow Martin’s (2011) terminology. Because our SEEKING or wanting circuits are tonic – continuously firing dopaminergic systems – we do not need to be “motivated” cognitively to work to stave off boredom, manage anxiety, quickly find thrill in cheap thrills, or simply ‘kill time.’ We also do not study this behavior much, mostly because we presume it is outside of the bounds of sociological analysis as solitary action (Cohen 2015). Perhaps, implicitly, it is also because the true sociological roots of this sort of behavior stem from the way our local and global environments shape and channel these tendencies. Everyone is curious, but who is encouraged to develop curiosity, how and when that curiosity manifests, the number, density, and frequency of things that occupy our curiosities vary based on a host of social factors. The cynical sociological response would be: well, of course, curiosity is just a function of things we already know like race or gender, and therefore, why study the behaviors when we can just study the forces? Yet, these behaviors preoccupy a significant amount of time and can become obsessions and addictions that remain interesting above and beyond the simplistic sociological wisdom that inequality is the answer to everything (see, for instance, Benzecry 2011), and, like many less sensational and exciting topics, remain features of society and therefore objects of a science of society.

This point may be most true of hedonic pleasures, which fall far outside most sociological inquiry. For sociology, the world is a dour, relentlessly cold-blooded, horrible, exploitative place. When we zoom out and see the effects of race or class or gender inequities, how could it be anything but? There is no reason not to point out injustices, but without the balance of all the good, the joys, the pleasures, what sort of science are we invested in? What's worse, where I would see small joys in this activity or that, the poison of Marxist "theory" in the discipline makes others see exploitation, indoctrination, and oppression. The logical question would be: what joy, pleasure, and happiness would socialism or whatever its equivalent bring? I digress.

The other two tendencies can be both reflectively and unreflectively activated. Most people reading this essay, if they are still reading it, are reflectively committed to learning. Academics, scientists, theologians, sports and aesthetic fanatics, journalists, and the like are learning addicts. They crave knowledge. But, we are all disposed to learn and find dopaminergic joy in learning. It is the reason we pick up puzzles or magazines in waiting rooms and find tiny moments of pleasure in fidgeting with them. It is why we take pride in figuring out our way around a new city, or trying new restaurants and cataloging those we like and dislike, or mastering a new device or technology we purchase.

Of course, we are also motivated, proactively, to pursue the things we like or think we like. We make a reservation in advance to take our partner out on a date, considering things like favorite foods, the level of romanticism we want, other activities we might tether to the dinner, and so forth. We go to said restaurant, aware to some degree of what we might order. How aware varies, a lot. A steakhouse, for instance, delimits the choices to some degree, but I would guess that most people go there with a particular steak in mind, the sorts of sides they like, and even the alcoholic beverage they plan to drink. Others, maybe along for the ride, know that the steakhouse will have other options like lobster, and prepare themselves accordingly. Still others may opt for the gametime decision. Nevertheless, the goals are set and, once there, can be changed, but usually with some reflection. Desire, as a proactive affective tendency, is never purely rational in the utilitarian sense. One might be having such a good time at the dinner that the previously unplanned desire to have dessert arises organically – seeing another table get a dessert, or acquiescing to one's company's desires.

An arbitrary, low-stakes example, admittedly, pales in comparison to selecting a date for the prom, choosing a college major, pulling the lever in a voting booth, getting up for a religious service or sleeping in, and so forth. But, they all reflect the desires we have. People do not drive aimlessly to a voting place, walk in, and throw a proverbial dart at the board. Their decisions reflect a long set of intermediate chains of actions shaped powerfully by affect. That they are motivated to go vote at all says as much about their desires as not voting speaks volumes about the disenfranchised, the lazy, the apathetic, the uninformed, or the unmotivated voter.

Tendencies, of course, like impulsions, are only as good or useful as the way they manifest in actual action, or what I would call affective action repertoires. In the next post, I will dig deeper into the cluster or constellation of responses that are affective in nature even when they are intimately tied to cognitive processes.

Posted in Emotion, Musings on Sociological Theory, Uncategorized | Leave a comment

Why Not Affectivism?

Sociology is, at least in part, the science of social behavior. Or (just) behavior? From one angle, sociologists look at the mechanisms or forces or dynamics, depending on your persuasion, shaping, constraining, and enabling behavior. We might call this the sociology of social control, or so it was before critical and "post-" social theorists of a wide variety came to call it other things. From another angle, sociologists are interested in why or, oftentimes, how people do the things they do. These diverse social scientists call themselves action theorists. Of course, sociologists are also interested in the question of how the former determines the latter – how does society get inside of us? At one point, sociology broadly accepted the idea of socialization, but for various reasons, it was rejected and the question was left unanswered. Luckily, this question has returned with some urgency, as has the gradual re-incorporation of the term socialization.

In any case, how the two pieces fit together is not always clear, because what people say they do, why they think they do it (at least, when asked after the fact), and what they actually do are often misaligned. Moreover, the reliance on self-reports – which sociology has spent an inordinate amount of time litigating – muddles what we are actually studying: are we asking people about the motivations that propelled their behavior or the ex post facto justifications or "motive talk" for doing what they did (Abrutyn and Lizardo 2023)? As is too often the case, sociologists seem resigned to accept that the former is an impossibility; a question best left in the hands of the psychological sciences. Instead, we are better at (a) asking about the latter and then (b) rejecting our interlocutor's claims and imposing whatever explanation we prefer (Martin 2011), whether it be some grand, unfalsifiable systemic cause (Neoliberalism!) or some sort of pseudo-psychological cause (e.g., because we care deeply about others' expectations and fear reprisal!) (Wrong 1961).

Case in point: Lately, I've been reading a lot of ethnographic texts. I am really interested in the question sociologists appear to have largely ceded to psychology. While it appears, to some extent, that they have largely ignored the question of why people do things, in all reality they haven't ceded the question at all; rather, they've internalized the discipline's ready-made answers to it. If anything, it would seem sociology's reliance on these common stocks of knowledge underscores how society – or the mental life of a community – gets inside our heads with real consequential outcomes (Zerubavel 1999), as well as its own strange blind spots. But, this essay isn't exactly a critique of the discipline, so let's set this all aside for the time being.

The Fallacy of Reason Triumphing over Passion

In any case, I’ve come to realize just how allergic sociologists – today in two thousand and twenty-four – are to emotions. Despite having a flourishing and rich subfield, and three decades of research in neuro- and cognitive science telling us this position is untenable, sociology is largely defined by cognitivism. What is cognitivism? There is an overwhelming emphasis on our big brains and cognitive processes vis-a-vis other processes. But, it is something darker and more problematic than this. It is the vestiges of the roots of the discipline and our unfortunate reverence of the masters to the extent of being backward instead of forward-facing field. What do I mean?

Generally speaking, humans imagine themselves to be significantly more evolved, advanced, and distinct from non-humans. We build houses and tame every ecological space; we go to comedy clubs, churches, hospitals, and museums; we write poetry, debate abstract ideas, and think a lot about thinking. While these beliefs about our special status and superiority are not unique to sociologists (van den Berghe 1990), it is easy to understand why the classical theorists we revere the most carried these biases into their own theorizing. Chimpanzees and orangutans were unknown to most people until the late 19th century, and when faced with creatures that shared some real morphological features with us, the reception was to double down on these differences rather than face the fact that we are animals, primates, apes (de Waal 2019). This idea that we are different from animals was most obvious in our presumed superiority in intelligence, reason, and ability to use logic to tame the animalistic passions that we were all aware of. This was obvious! The role of a parent is to teach a child to control themselves, to self-regulate. What are they regulating? The infantile outbursts, uncontrolled desires, and lack of social norms. This position was also adopted by those who set out for adventure, plunder, and, unfortunately, worse. The peoples they encountered were human, to be sure, but they were either "barbarians" or, worse, "savages." Like children, they were to be tamed, but with one major exception: they could not be socialized into being full humans, as they were irredeemably less evolved. Finally, where did this leave animals? Animals cannot even communicate with language! So not only is this (not totally correct) fact an indictment of their intelligence, but it also meant/means they could not defend themselves in the court of scientific or public opinion.

All three of these "categories" of objects lacked adequate control. Children because they had not yet been socialized into the ways of the community; savages because they were closer to animals and their community had not yet developed "appropriate" norms and beliefs that could provide the "necessary" mechanisms of social control; and animals because, well, they lacked reason, logic, language, and the like. What's more, all of these objects were imagined as lacking key cognitive processes and skills, with only some capable of gaining them. For the father of American social psychology, George Herbert Mead, for instance, humans were different from their pets because children could learn to speak, which meant they could self-reflect and understand others' behavior. The discovery of mirror neurons (Rizzolatti & Sinigaglia 2008), along with decades of naturalistic (de Waal 2019) and experimental research on other primates and human children (Tomasello and Vaish 2013), challenges many of these self-evident assumptions, but they persist nonetheless. Mead (1934:173, 193, etc.) felt the self and everything social was a cognitive achievement, a view undoubtedly shared by the pragmatist circle he traveled in (folks like Charles Peirce or John Dewey) and his colleagues at Chicago (e.g., Park). Likewise, the German sociologist Max Weber had many categories of social action, but all of them were compared to formal or instrumental rationality, which admittedly was impossible in its pure form, but which served as a measuring stick of sorts for determining why people did what they did (Kalberg 1990).

When we sift through the common explanations today, we see the specters of these giants. Rational-choice and cognition-in-culture perspectives both rely heavily on cognitive processes, while symbolic interactionism inherited its progenitors' overemphasis on language and cognition. There is nothing inherently wrong with caring about cognition. Without a doubt, there are traits that humans have that are unique to them; at least unique in so far as evolution appears to have built off of capacities and traits other mammals possess. We are significantly better at taking the role of other people over long periods of time (Tomasello 2019), and are able to keep track of people's reputations, as well as our own, for an entire lifetime. Of course, the counter to this is that role-taking and reputation tracking are as much a product of our brain's memory systems, which rely heavily on affective tagging (Asma and Gabriel 2019), as they are of our inherited ape traits, like paying close attention to faces, to emotions, and to how fair our exchanges are (Turner 2007). Moreover, many mammals express behaviors resembling grief and mourning when they see a "friend" die, as well as concern when a friend is hurt (Lents 2016).

I think the point, in the end, is that sociology has largely ignored emotions. And when we deal with them, we are mostly interested in their construction – that is, either how some emotions are structured by systems and organizations (Kemper 2006) or culturally determined through ideological rules (Hochschild 1979). None of this is wrong. BUT, it does miss several facts. First, before language or the gray matter we cherish so much evolved, mammals evolved complex affective systems that compel us to seek out resources, find friends to play with, mates to lust over, and children to nurture (Panksepp 1998). In addition, the neural pathways going "up" from the subcortical emotion centers of our brains are more numerous and denser than those going "down" (Franks 2006). This is not to mention that key affectual centers in our brains are significantly bigger than those of our closest relatives, chimps, suggesting evolution worked heavily on these regions for survival (Turner 2007). On the one hand, having better control over the emotional noisiness common among apes would have helped us better hunt big game in coordinated hunting groups. On the other hand, for a species lacking most of the natural defenses other mammals and, especially, apes have, group life would have been the most fit strategy. Affect is the bonding force. Durkheim knew this, and it has since been confirmed in neuroscience, the psychological sciences, the ethnological record, and sociology.

Second, nearly every process cognitivists emphasize has its basis in affect! Perception, attention, memory, and even our most prized possession, our sense of self, are affective (Izard 2009; Frijda 2007). Indeed, when sociologists talk about cognition, they sometimes mean consciousness, which can be reflective, but it can also be pre-reflective emotional consciousness (LeDoux and Brown 2017). That is, we are often coding, indexing, storing, and retrieving memories without being cognitively aware of these processes (Holland and Kensinger 2010). This is not to say that cognition doesn't matter, because it certainly does. But, there are a ton of affective processes coordinating cognition, and at times controlling it or, worse, commanding it. That is to say, "affective mechanisms are the core of value generation, of the valence that directs, slows down, speeds up, and gives meaning within decision-making and action release" (Asma & Gabriel, 2019, p. 32). So, while we think we are taming our passions, there is very little we do that is not affectually based; and our emotions are just as essential to making decisions or setting goals (Lerner et al. 2015) as they are to taming our reason, logic, and rationality.

Thus, while we do indeed have comedy clubs, churches, and art museums, these are affective achievements first and foremost. They are amplified by our big brains, but they are affective in their nature. Yes, we write poetry and think about thinking. Both are cognitive achievements in so far as we piece coherent sounds together into words, sentences, paragraphs, and so forth. But, the artist feels motivated to express or debate or argue; they feel motivated to continue on, even when they struggle with their words; and the reader feels the words, keeping attention and the desire to finish the piece, or simply daydreaming and moving on.

Looking for an Emotional Escape Hatch

To return, then, to some comments I made earlier, I have found it exceedingly interesting that sociologists of various ilk have been looking for a way to escape cognitivism. For instance, the dual process model has become a common explanation for how society gets inside of us and which control mechanisms shape action (Vaisey 2009; Lizardo et al. 2016). The model simply pulls from cognitive science to reify an older model of social action: one type, Type 1, is habitual, rooted in automatic processes often inaccessible to reflective thought, while a second type, Type 2, is deliberate and often triggered only in the face of dissonance, incongruence, problems, or unsettled cultural solutions. While these have become the common currency in cultural sociology, where the problem of action is currently receiving its most concerted attention, a growing number of sociologists have sought to show there is plenty of action that simply does not fit either type. Using pragmatism, phenomenology, and other fonts of inspiration, these folks point to aesthetics (Pagis & Summers-Effler 2021; Silver 2022), creativity (Leschziner & Brett 2019), and embodiment (Winchester 2016; Winchester and Pagis 2022) as ways around conventional cognitivist explanations. Instead, terms like somatics, sense and sensation, feeling, and so forth are invoked. Attention and perception are key.

However, my take, and I may be wrong, is that they aren't framing it as escaping cognitivism because we (I, too, belong to the tribe of sociologists) have taken for granted the idea that cognition and consciousness are synonymous and that affective processes like perception, attention, and memory are largely cognitive. They are not throwing the shackles off, not because they don't want to, but because the Cartesian dualism remains firmly entrenched in sociology; their works are, unironically, challenges to this dualism and to cognitivism's primacy. Yet, the invocation of words like senses or sensation is not enough to overcome the barriers the Webers, Meads, and others have erected. For these folks, emotions were understandable so long as they were sublimated to cognitive things like ultimate values, or in so far as they became objects that could be labeled, classified, quantified, and interpreted. Otherwise, emotional action was not meaningful and, therefore, not sociologically interesting. This position, however, is untenable.

As such, I see the incredible flourish of work pushing against cognitivism as a set of entry points to an affective revolution, one in which sociology shifts to an embrace of affectivism. The problem is that the vocabulary for making this case is delimited. So, when Winchester & Pagis (2022) talk about the ways religious practices are learned, unreflectively, they are talking about the pre-reflective affective processes that call us to attention (Frijda 2007; Izard 2009; Asma and Gabriel 2019). Locating value in our environments or determining object salience – including our own self – is an affective act, driven by the mesolimbic dopaminergic reward systems that intensify when we find activities or objects rewarding, when curiosity is met with a sense of control and competency (Panksepp 1998; Berridge 2023), and when we find unexpected rewards (Di Domenico & Ryan 2017). These are the neural circuits and chemicals Panksepp (1998) describes as undergirding the affective system of SEEKING, or wanting – which is dissociable from liking (Berridge 2023). Wanting is the folk psychological name for a set of affectively motivated actions: anticipation, preparation, curiosity, learning, mastery, and the like (Abrutyn & Lizardo forthcoming). All of these activities and desires are closely tied together in their relationship to the mesolimbic reward system. Dopamine is tonic, continuously firing, meaning we are always affectively moving. When we get an unexpected reward from something, instead of continuous firing, it fires rapidly in pulses, calling our attention to the thing and motivating us to approach it.

In short, we feel driven to do things. Eastern Orthodox worshippers, Buddhist meditators, and women who join convents (Lester 2005) all experience these SEEKING feelings, compelling them to want to do things and learn things. Just as my writing this post is a mixture of cognition (I have to think about the words, sentences, and overarching argument) and affect. The desire to finish it, to keep writing, to come back to it to edit. These are affective. Learning to become a better writer involves learning to love learning, just as becoming an Olympian swimmer (Chambliss 1989), an opera fanatic (Benzecry 2011), or a big wave surfer (Corte 2022) is as much about the affective motivation to want to do it as it is about the pleasure of winning or the aesthetic experience or the camaraderie of preparing for a surfing outing or the recounting of the day's events. The motivation to prepare and the subtle joy in the mundanity of preparation, of waiting, of imagining the act, and, of course, the pleasure of doing it. Affectivism is the embrace of the emotional nature of how we think and act. It isn't the pendulum swinging so far that cognition is tossed out. Rather, it puts affect in its rightful place. Once we begin to embrace that piece, the vocabulary for thinking about it, the leveraging of methods designed to observe and understand emotions as independent variables, and the rolling back of outdated understandings of animals, preliterate peoples, and even infants can be undertaken.

In the end, we cannot abandon cognitivism entirely, nor should that be the goal. I think bringing affect into the discourse more seriously, along with developing or borrowing methods designed to examine the sorts of components sociologists tend to take for granted – like the expressive or physiological aspects of affect – is the correct course of action. There is nothing to lose from seeing ourselves as animals. Indeed, we can learn a lot from the observations primatologists have made, including the fact that primates feel the same things we do with or without the capacity to express it in self-reports. They grieve; they remember old friends; they lust after, care for, and play with each other. They also show remarkable amounts of what many have considered to be a hallmark of humanity: self-control. While they may not search for tickets to their favorite comedians, the desire to search for amusement is recognizable. In short, affect evolved to help mammals survive and, regardless of the environmental changes cultural adaptation affords human societies, it remains a force in how we think and act.

Posted in Uncategorized | 2 Comments

Sociology, Science, Suicide…and Durkheim (Part 2)

Our first step in thinking about how to teach Durkheim to undergrads and graduate students involves discussing and evaluating his descriptive model (see Part 1 for background context on this essay). Admittedly, there will be times when I have to "peek" at the next essay, Part 3, because his descriptive model is difficult to untangle entirely from his explanatory model.

I would be willing to bet that the average sociology reader knows there are four types of suicide, according to Durkheim – egoistic, anomic, altruistic, and fatalistic. I'd also be willing to bet that he actually only had three types of suicide (the first three in the list), but a seemingly last-minute decision on his part and the crystallization of a contemporary interpretation obscured this fact. Wait, what?!? Go back, re-read Suicide, and you'll see that these three types are the only types he mentions, save for a single paragraph in a single footnote at the end of the last substantive chapter (more on this soon). In what is called "Book III," where Durkheim shifts to his big theoretical argument – something, strangely and unfortunately, overlooked by most sociological theory texts – fatalism is nowhere to be found. Rather, he conceptualized societies as balanced in a 'tug of war' between three social currents: egoism (or liberalism), altruism (or intensive mechanical solidarity), and anomie (or acute/chronic dysregulation). The dominance of the first meant the cult of the individual was winning out and people were 'bowling alone.' The second meant that society demanded people take their life under certain conditions (e.g., martyrdom). And the third meant moral anchorage was temporarily or indefinitely impossible due to some disruption in political, economic, and/or domestic life. These social currents (or social facts) are more like public opinions or dominant beliefs that spread through a population and were the etiological roots of higher-than-average suicide rates (more on this soon, I promise). Hence, three types were the foundation of both his descriptive classification system and his explanatory model.

******

[Quick Aside for the Uninitiated Reader or for a Refresher: Durkheim's theory is rather parsimonious. In essence, it hinges on two principles. First, the structure of suicide rates is a positive function of the structure of social relationships within a collective, broadly defined (groups, classes of people, geographic communities). Second, the structure of social relationships varies in terms of how integrative and regulative it is. The former is usually interpreted as social connections and solidarity, while the latter is about the group's moral order and its ability to guide behavior consistently. When groups are not integrative or regulative enough, or are too much so, members become vulnerable to suicide (a formal gloss of this curvilinear logic follows the aside). Why they die is not important, as subjective, specific individual decisions for Durkheim have no scientific merit. It is structural, macro, and beyond anyone's individual choices.]

******
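One way to render the aside's two principles formally – my notation, not Durkheim's, and assuming the standard curvilinear reading – is as risk rising with a group's distance from moderate levels of integration and regulation:

$$ R_g = f\big(\,|I_g - I^{*}|,\; |G_g - G^{*}|\,\big), \qquad \frac{\partial f}{\partial |I_g - I^{*}|} > 0, \quad \frac{\partial f}{\partial |G_g - G^{*}|} > 0 $$

where $R_g$ is group $g$'s suicide rate, $I_g$ its level of integration, $G_g$ its level of regulation, and the starred terms are hypothetical "moderate" optima. Too little integration pushes toward egoistic suicide and too much toward altruistic; the parallel reading for regulation yields anomic and (if we accept it) fatalistic suicide.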

So, how did we get to fatalism? I've never dug deep on this, but if Durkheim was like you or me, as he was preparing his manuscript for publication, I am sure a friend, a colleague (maybe his nephew Mauss?), or even his own sharp self noticed a potential "reviewer critique": on the one hand, he had symmetry in egoism and altruism representing two opposing types of suicide based on integration. In that sense, he had met the criteria for a good descriptive model (exhaustive and exclusive types with, potentially, clear explanatory power). But, anomie was its own thing. What was its logical opposite? In a hastily written footnote – and haste is the only conclusion one can draw from a crappy footnote in an otherwise exhaustively researched set of substantive chapters – Durkheim made a careless mistake and added a random type (fatalism) for, what I can only surmise, the sake of symmetry (it's on p. 276 in the 1951 translation for the interested reader). I joke in talks that I dream that a footnote I write will become a full-fledged totem despite its undeserving nature. In elevating this fourth type, modern sociology (ironically) tossed aside Book III and the explanation Durkheim laid out, in favor of a four-fold model that looks nice and clean (see the figure below from Bearman 1991) but was not actually what Durkheim appears to have intended at all!

The lesson, here, is that exegesis is only as good as the exegetes and the degrees of freedom contemporary exegetes have – structurally and symbolically – for dealing with old texts. Thus, smarter (and currently dead) famous sociologists treated Durkheim's work as having four types, which subsequently led to this model's enshrinement in textbooks purporting to teach classical theory, which ultimately made the protestations of someone like me fall on deaf ears ("Who are you to question Durkheim's greatness!?"). The excavation of a four-fold model was not like paleontologists filling in an incomplete skeleton with drawings or fake bones, but rather like an archaeologist recovering a fragment of a text and adducing an entire story from a sliver of writing. We invented a whole type and now must live with this situation; and, because the model and the theory are a "settled" matter, we cannot really develop the four types beyond the superficial reading that most theory textbooks have ingrained in their students' heads.

It is catastrophic for scientifically approaching suicide on two grounds. First, as my old advisor was fond of saying: once you've wrapped your favorite text in velvet, placed it on the altar, lit the incense, and prostrated yourself, there is no self-correction, no modification, no advancement. The ideas are set and we must contend with the taken-for-granted wisdom of the common (mis)interpretation of Durkheim's Suicide. Second, because we are frozen, we are left with a purportedly general classification model that appears exhaustive and exclusive, but which raises doubts about both criteria. To put it differently, sociology has struggled to contribute to the study of suicide in a meaningful way, and this is a microcosm of many of the things we do. Teaching this to grad and undergrad students simply reproduces the problem, when the real lesson this text offers is how to evaluate theory, how to advance theory, and how to abandon the things not worth keeping anymore.

In that spirit, let’s look a little more closely at some of the issues presented by this four-fold model.

The Curious Case of Fatalism

Adding fatalism makes the model elegant looking, visually. It feels complete. But, what exactly is fatalism? Is it a worthy type? Durkheim resoundingly answers in the negative. In that footnote/paragraph, Durkheim notes there is another type of suicide but quickly tells the reader it is insignificant, sociologically, and only perhaps of interest to historians. He then anecdotally – which, again, is strange given the effort he puts into empirically validating his other types – lists three historical cases: slaves, middle-aged childless wives, and newly married men. The first case is always listed as the exemplar. Makes sense. What class of people is more overregulated than slaves? But the other two cases are headscratchers, as he offers no explanation for why he chose them or why they are fatalistic. We can fill in the blanks, I suppose. The childless wives case, which may strike the reader as erring on the side of misogyny, probably comes from his own circle or extended network. Women, in those circles, would have largely been expected to be dutiful hood ornaments to their upwardly aspiring husbands, and their essential role was having children – preferably boys, but children enough to transform the couple into a respectable family. Any middle-class family worth their salt needed kids, and any woman entering her 30s would need kids to maintain her social relationships and her family's status. A childless woman would be a social pariah. Her friends might shun her, while her husband might put distance between them. Consequently, we can presume many of these women became locked in a loveless, unhappy marriage or, worse, were threatened with divorce and spinsterhood. All of this may be what Durkheim was thinking; but we will never know. It is a fair question to ask how analytically and empirically similar this case is to slavery. The two do not really seem to share etiological foundations and, even more vexing, they differ morphologically in very serious ways. These differences are beyond my interest, for now, and thus this discussion will await some future essay.

The third case, bachelors-turned-married-men, is also likely drawn anecdotally from French newspapers, gossip, his own circle, or just 'common sense.' Like a slave, newly married men are suddenly "choked" with obligations, though for a relatively short period of time. In this sense, they go from a state of relative freedom to one of obligation, but is it akin to slavery? Marriage often carries positive status enhancement as well as stability and support (after all, men seem to benefit from marriage as protective against suicidality). I won't beat the proverbial horse, though; I'll let the reader decide what to make of this and whether or not fatalism makes any more sense given these three sort-of-parallel (mostly not) cases. Beyond the empirical weakness of the type, there are theoretical dilemmas too.

The real trouble for making sense of fatalism as an authentic type emerges when we consider it within the context of his chapter on anomie – the only type through which he appears to have intended to link regulation and suicidality. The common interpretation, thanks to Parsons (1951), is that deregulation means normlessness or directionlessness (the word anomie, from the Greek, does combine "a-," a lack, with "nomos," law or order); and normlessness causes negative outcomes. The question, though, is what is the causal force of anomic suicide? Is it too little regulation? A curious puzzle arises when we consider that Durkheim believed economic booms and busts were two disruptions capable of deregulating society or sectors of society. Most people have studied busts, but would busts not contribute to the narrowing of choices and the delimiting of aspirations? The choking of futures? Sounds a bit like anomie may include having too many moral dilemmas and the sudden loss of moral flexibility. What this suggests is that Durkheim made a distinction about deregulation that was different from the egoism-altruism continuum. There was no need for symmetry, because the etiology of suicides caused by regulation had less to do with the structure of social relationships and everything to do with their disruption and disintegration and the sudden moral ambiguity that created. Fatalism radically re-orients the theory, causing the Parsonian version of normlessness to become the explanation, which may account for the surprising lack of empirical verification of Durkheim's most famous type of suicide – especially considering how universally supported egoism is across disciplines.

This issue, however, points to two key takeaways. First, it is a great teaching moment about conceptual clarity. Anomie and egoism, regulation and integration, were never defined clearly. You can't fault mid-century sociologists for trying to operationalize them while working with vague source material. It also reminds us that common interpretations are inextricably tied to a community of scholars and are not always correct. The classics suffer from being Rorschach tests. They would likely never be published today, because they fail at specifying their theoretical logic in precise, parsimonious fashion. But, for graduate education, we should be training students to see these flaws and work out a better, tighter theory, while undergrads should be taught about theory instead of classical lore.

The second takeaway is that, regardless of our own interpretation today, the addition of fatalism violates the exclusivity criterion because Durkheim probably never intended to have fatalism as a type. This is all fine and well and can be easily remedied with some logical exposition, but it suggests we have propped up a theory whose foundations are flawed. The conceptual ambiguity surrounding Durkheim's intended ideas about anomie and regulation, and the fact he saw it necessary to even add fatalism, also raise questions about the veracity of the three-fold model. Of course, it remains difficult to imagine sociology changing course and adopting the three types instead of the four, or to know whether that would make a difference. In fact, we must applaud those who have massaged these inconsistencies to produce legible and compelling accounts, like Bearman's (1991) and Pescosolido's (1990; Pescosolido and Georgianna 1989) network approaches. But, too often we gloss over the inconsistencies without taking in the lessons they teach graduate students and the harm they do to the science of suicide. My colleague and I once wrote a paper attempting to totally re-theorize fatalism (2018 – also stored elsewhere on my website) and something peculiar happened. The editor of the journal loved the paper and noted, ironically, that it was about time someone addressed fatalism, while one of the reviewers pushed us to omit most of the references to fatalism in the service of building a more comprehensive theory of structural and cultural regulation. Such a fascinating interplay between two very bright scholars. In the end, my sense is there is room for keeping fatalism, but perhaps the reviewer was right: the labels themselves are doing more harm than good. But, for now, I will offer one more example revealing the exclusivity problem.

Is it the Dogma or the Moral Community?

Durkheim’s most famous thesis was that Protestants died by suicide more than Catholics. He surmised that the former invited individualism, free thinking, fewer obligations to fewer people, and weaker communities. So important a thesis, Merton (1967) referred to it as sociology’s one law – nevermind extensive research, some using Durkheim’s own data (Halbwachs 1978), pointing to a lot of caveats with this so-called law and also showing the inverse in some cases (Travers 1990; Pescosolido and Georgiana 1989). Why it isn’t likely a law speaks to some interesting problems with his typology and, specifically, with egoism as a type.

The argument, of course, is that religion offers a moral community in which people find strong support and protection against pathology. It is important to note that, sometimes in purposive or unintentional defiance of the common interpretation of Durkheim, researchers presume protection comes by way of the prohibitions some religions level against suicide. A mortal sin, in Catholicism, backed by stigma, threats of excommunication of one's everlasting soul, and, in medieval times, the actual public degradation of the whole family and the reduction of its social standing (Barbagli 2015). Indeed, this operational confusion points to the theoretical confusion surrounding religion – as well as the degree to which modern sociologists are willing to accept orthodoxy despite obvious alternative theses (the alternatives being a good thing!). This confusion, good for science as it may be, does undermine the exclusivity of the model. This is particularly problematic because Durkheim cannot settle on the explanatory logic of religion and suicide (see 1951, pp. 209ff.). He initially argues it's because of the moral community, but then he argues the moral community provides regulative capacities (duh!), thereby making integration the direct and indirect cause (through regulation) of suicide rates. So, is it integration or regulation? If it is the latter, or some mixture of the two, then we cannot retain the notion that there are discrete types and, consequently, a true classification system.

Well, we could – Weber's famous typologies of action and of domination present ideal types that do not actually exist in empirical reality, but serve to study variation in real cases and the consequences of this variation. But, for Durkheim, these are ontologically real phenomena; they are social facts.

The point, ultimately, is that one can (and some have [Johnson 1965]) make the case that there is only one reason for suicide: disintegration, or a lack of integration (see also Halbwachs 1978). Either social organization and networks are in a state of connective scarcity, or they are in a process of declining, slowly or suddenly. But, that is not how we interpret Durkheim or how he is taught, nor how most reviewers tasked with evaluating research testing these theses understand the conceptual or operational definitions. So…yeah, it's not good.

Perhaps, though, you disagree with my first and second contentions. Fair. But, I have one more trick up my sleeve with regard to the exhaustive nature of the typology.

Suicide Suggestion, Contagion, Clustering

In the 1970s, David Phillips (1974) "discovered" the study of suicide suggestion, or what he called the Werther effect. In the 18th century, Goethe published The Sorrows of Young Werther, in which the eponymous protagonist dies by suicide because of unrequited love. Following the publication of the book, several copycat suicides happened in relatively rapid succession – first within Goethe's social network, then in the town, and eventually in the surrounding area. The authorities banned the book. Borrowing the name, Phillips identified a strong association between exposure to newspaper articles about suicide and subsequent spikes in suicide rates among the audience. The longer the exposure and the greater the celebrity, the bigger the effect (Stack 1987). For decades now, using increasingly conservative and sophisticated statistical methods, this relationship has held firm (Stack 2004). And, it seems to be even stronger when the exposure is relational, such as being exposed to a friend or family member's suicidality (Abrutyn and Mueller 2014; Mueller and Abrutyn 2015).
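For readers who want the logic of these studies made concrete, here is a minimal sketch – synthetic data and made-up parameters, not Phillips's actual 1974 design – of the interrupted time-series reasoning behind Werther-effect research: do daily suicide counts rise after a widely publicized story, with the effect decaying as coverage fades?

```python
# A toy Werther-effect test on simulated data. The story date, decay
# window, and effect size are illustrative assumptions, not estimates.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
days = np.arange(120)
story_day = 60  # hypothetical publication date of the suicide story

# Exposure: zero before the story, then fading over roughly two weeks.
exposure = np.where(days >= story_day, np.exp(-(days - story_day) / 14.0), 0.0)

# Simulate daily counts with a true post-story bump, then try to recover it.
true_rate = 5.0 * np.exp(0.4 * exposure)
counts = rng.poisson(true_rate)

# Poisson regression of counts on exposure; a positive exposure
# coefficient (~0.4 here) is the spike-then-decay signature.
X = sm.add_constant(exposure)
fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
print(fit.params)
```

Real studies, of course, add controls for seasonality, day-of-week effects, and competing events; the point is only that the exposure-spike logic is directly testable.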

Phillips astutely noted that Durkheim had rejected the idea of contagion as a mechanism or cause of suicide (see 1951:123-142). Phillips turned, instead, to Durkheim's rival, Gabriel Tarde, whom Durkheim had famously buried in public debates (Abrutyn and Mueller 2014). Tarde had written a book on the "laws" of imitation, which Durkheim ridiculed. (Admittedly, Durkheim's argument is far more nuanced, as he discerns between three definitions of imitation, accepting one as a real possible explanation of suicide, though not a sociologically important one. One more aside: after vanquishing Tarde, I would argue Durkheim stole his ideas when he began writing about the contagion of sacred beliefs and emotions in The Elementary Forms of Religious Life.) With some theoretical ingenuity and synthesis, making Durkheim's work fit a model of contagion is possible (Abrutyn and Mueller 2016), but it is neither implicit nor explicit in his descriptive or, as we will see, explanatory theories. It is totally absent.

Not surprisingly, it wasn't until the 1970s that someone even noticed the correlation between exposure and suicidality, and not for several more decades did someone bother to demonstrate the relationship causally (Abrutyn and Mueller 2014; Mueller and Abrutyn 2015; Mueller et al. 2015). It is still not something taught in sociology courses unless they teach the small boutique literature that includes my own work with Anna Mueller and a few random papers by others (Bearman and Moody 2004; Baller and Richardson 2005, 2009). There is Durkheim and then nothing else, which may explain why sociology has barely contributed to suicidology (the interdisciplinary field) or to the practical policy interventions designed to increase safety and reduce suicide.

Long before Phillips, anecdotal evidence of suicide epidemics was known. Durkheim was aware of small villages, sentry towers, regiments, and medieval monasteries as sites of tragedy. But, social science had also cataloged these sorts of events, unsystematically of course, a few decades before Phillips. The tendency for suicides to cluster in temporal and geographic space is, ironically, the most sociologically interesting case of suicide, given some places are vulnerable and not others (Haw et al. X). That is, some Indigenous communities are sites of suicide clusters, but not all (X); some high schools have clusters, but not all; some prisons have suicide clusters, but not all. The variation in social environment seems eminently relevant to a Durkheimian, or just to an average sociologist interested in the social distribution of pathological behavior!

Durkheim’s theory, however, fails to even make sense of these. One cannot study what isn’t in the purview of the theory motivating research questions and puzzles. And, thus, the exercise of both identifying the descriptive theory that Durkheim lays out and evaluating it provides major lessons for graduate education and also questions whether classical theory is worth doing any more or at least in the conventional sense. In my last essay, I will return to the question of what do we keep and what do we toss. The third part of this series, however, will explore the explanatory model as the most important criteria of any descriptive theory is whether it produces good or bad explanations. Durkheim’s, to foreshadow this assessment, is mixed, but does do better than his descriptive model.

Posted in Musings on Sociological Theory, Suicide | Leave a comment

Sociology, Science, Suicide…and Durkheim (Part 1)

It has become fashionable, once again, to argue that sociology is an impossible science (perhaps it is a constant feature of sociology and not a fad at all). The logic, despite being cloaked in a wide range of new ideologically-drenched rhetoric, remains the same. Society and its agents are like electrons: we may know their position or their speed, but because of their complexity, we can never really know everything. Consequently, sociology should be critical, descriptive, political, or all of the above (for a strong critique of these positions, see Turner 1990; Hammersley 2005). I won't rehash or wade into these debates, as I do strongly believe sociology is a science, has a real body of cumulative knowledge, and has produced some solid general principles that are not, in fact, provincial, and can still contribute to public discourse, civil society, and policy making. I think one of the issues, to be frank, is that we continue to look to the past (and classical sociology) while wringing our hands about how we might professionalize and socialize new students if we stop teaching classical theory. I think this strategy continues to be misguided, and there is a better way forward.

To do this, I predictably return to the sociology of suicide, its main protagonist (Durkheim), and the illustrative lessons we can glean from it. Though I could write a treatise on doing theory, these essays are aimed at how we teach theory, as the expectations we set for undergraduates shape the upstream expectations of graduate students and the obligations we, as teachers, have to push the discipline forward as a science. In this first essay, I would like to lay out the background for the ensuing essays that tackle the substantive issues head-on. This essay focuses on sociological theory and the pitfalls of worshiping the classics.

Durkheim and Sociological Theory

Durkheim is (rightly) considered one of the most important sociologists from the classical (19th century) era. He is also an exemplar of the good and bad of caring this much about one’s scientific disciplinary past. On the one hand, Durkheim’s work was revolutionary in methodological innovation, theoretical ingenuity, and sociological imagination. Some of it, like the fundamentals of his interaction ritual theory found in The Elementary Forms of Religious Life, continues to be generative across social sciences as mounting evidence supports his theoretical claims (see, for instance, Rimé & Páez 2023).

On the other hand, he represents a lot of what is wrong with the discipline's approach to theorizing. First, he is classical for a reason. Not only do his writing and examples sometimes reflect the repugnant (from a contemporary perspective) views held by (Western) white men, but they also can be quite wrong on other levels. Like all classical writers, his style, thought process, and understanding of the world are delimited by time, place, economy, polity, culture, and so on. None of these positional qualifiers makes a difference to the validity of his interaction ritual, and yet rather than home in and extract the basic principle, we continue to insist on making undergraduates read the text over and over again, even though many aspects, like the data itself, are factually incorrect. And herein lies the problem with a backward-facing sociology: instead of building on scientific principles, we remain stuck between a humanities-style interrogation of its ideas and subject matter and a scientific approach to studying its phenomena. Our feet stuck to the shoulders of our giants. When we teach Durkheim, then, we run into the inevitable problems one might expect from any old text. Students struggle with the writing and the philosophic ideas that are far removed from mainstream academia and common sense; the casual misogyny, ethnocentrism, and so forth obfuscate the contribution; and in the end, we are all distracted such that instead of teaching and using theory we are embroiled in debates about the fundamentals of the discipline itself.

When we set aside this set of challenges, even with good intentions, a third problem arises: humans tend to crystallize old texts, shifting from the creative effort of producing the text, and what that should inspire us to do, to either orthodox interpretations or exegetical activities (Goody 1986). Both are rife with pitfalls. The orthodox approach has consequences for the principle of self-correction in science. Too many sociologists continue to treat the "canon" or their favorite works as though they are finished projects, accepted truth, and perfected works. They are not. Durkheim's Suicide is not immune to this problem, as it remains accepted wisdom despite shaky (Breault 2001) and non-existent (Leenars 2004) empirical evidence for three of his four types of suicide. This doesn't matter because we teach and treat it as accepted wisdom. Consequently, to study suicide "correctly" means to study it as Durkheim did, testing and re-testing his vague theoretical ideas and nineteenth-century empirical examples using new data or analytic strategies (Wray et al. 2011). Never mind creative efforts to move beyond the text or the specific methodological guidelines he set (Phillips 1974), as these upset the dogma and doctrine of a beatified sociologist. Hence the second problem: if the ideas are hermetically sealed, then being creative means digging deeper and deeper into the text in search of new interpretations; reading marginal comments and correspondence with fellow travellers; and, ultimately, remaining mired not on the shoulders of giants but among the ghosts of the past.

The question, then, is what do we do? Make no mistake, there is no reality in which the classics are totally ignored. In fact, a History of Sociology course has real value for a variety of reasons, including the sociology of knowledge and ideas. But, what exactly should we be teaching undergraduates? I realize that many sociologists conflate social theory with sociological theory and, thereby, see theory as oriented toward philosophical questions as much as scientific principles. Yet, there are departments that specialize in philosophy and its method, which means we need to take Durkheim's belief in disciplinary boundaries seriously. Thus, when we argue students must "wrestle" with The German Ideology, what we are actually saying is we must expose our students to the holy texts, have them search these texts for critical insights, and, consequently, complete their Jedi training as sociology students. The problem, though, is they leave these classes not knowing what the hell theory is, what it does, why we care so much about it, and what the relationship between theory and methods is. Marx teaches our students nothing about these things. And while Durkheim's Suicide is instructive for a number of these questions, the irony is that his theory is so poorly constructed (by any modern standards of theory building), we end up focused on his empirical generalizations, like Protestants die by suicide at a higher rate than Catholics – despite evidence to the contrary (Halbwachs 1978). What do we do?

Sociological Theory

In its simplest form, a theory is a set of interrelated statements, with each statement characterized by two concepts and their relationship. A scientific theory is good in so far as it leaves little ambiguity regarding the definitions of concepts, the relationship between them, and the interrelationship between the several statements, as consensus around conceptual definitions can only come about where vagueness is dispatched and measurement (and testability) is possible. Theories need not be formalized into propositions or analytic models, though these often provide the clearest, tersest mode of expression (Turner 1990). In part, the terseness of a statement is due to the goal of abstraction. Theories are decidedly not hypotheses because concepts are not variables—though the difference between the two may be more of degree than of kind (Alexander 1990).

Sociology is rife with concepts, but rarely are they placed into statements about the relationship between this concept and that concept. Durkheim's theory of differentiation serves as a good example: social differentiation is a positive function of moral density. The two concepts, "moral density" and "social differentiation," are abstract in that how we measure them, concretely, is left to the imagination of the sociologist. Moral density, for example, may be the actual spatial density of a collective and/or the multiplex nature of social ties, frequency of interaction, and tightness of culture. Meanwhile, why this relationship holds escapes the simple statement, which allows it to be applied to a wide range of specific cases, such as economic niches or socioecological systems, and, therefore, to remain flexible with respect to explanations focused on emergent properties at different levels of social reality.
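For those who like such statements compact, the proposition can be rendered formally. This is just a restatement in notation of my own choosing, not anything Durkheim wrote:

$$D = f(M), \qquad \frac{dD}{dM} > 0$$

where $D$ is the degree of social differentiation and $M$ is moral density. The statement claims only that the function is increasing; like the concepts themselves, its concrete shape is left to the researcher.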

In addition to their abstractness, theories tend to be shaped by one or more of the following five goals: description/classification, explanation, prediction, overarching framework, and control. Each of these goals represents different aspects and aspirations of theory building, with description being the most basic scientific activity. Sociology, of course, is filled with a wide range of classificatory theories, ranging from myriad Weberian typologies to Parsons' infamous AGIL schema. A classification system's value is evaluated by (1) how exhaustive the system is, (2) how mutually exclusive each type is, and (3) how readily the system lends itself to explanatory theory-building. Explanation and prediction are typically conflated but refer to different temporal orientations. It is much easier to explain why X caused Y than to predict that X will cause Y. Sociology typically works best at the former, hence the tendency toward causal process modeling rather than laws or axiomatic theorizing (Turner 1990). There is a second reason to separate explanation and prediction: prediction is too often the strawman critics of sociology as a science invoke when attacking positivism. We are often told that the social world and social behavior are too complex to predict, and thus sociology and sociological theory are not scientific, cannot be, and/or should not be. But, it is worth pointing out that most natural sciences are much better at explaining things than predicting them. A biologist can tell you why a leaf falls in autumn but, because of the myriad variables outside of the laboratory, cannot predict when it will fall; a seismologist can reasonably ascertain the conditions increasing the likelihood of an earthquake, but cannot predict the timing. Neither of these disciplines is criticized for being less scientific than others.
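The first two criteria have a compact formal expression. A minimal sketch, in set notation of my own rather than anything standard in the typology literature:

$$\bigcup_{i=1}^{k} C_i = \Omega \ \ \text{(exhaustive)}, \qquad C_i \cap C_j = \emptyset \ \text{for all } i \neq j \ \ \text{(mutually exclusive)}$$

where $\Omega$ is the domain of cases and $C_1, \dots, C_k$ are the types. Together, the two conditions say a good typology partitions its domain; the third criterion, explanatory fertility, resists this kind of formalization.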

These first three goals fold neatly into the fourth goal: an overarching framework. Evolutionary theory, for example, is the theoretical shell within which the biological sciences are unified (Mayr 2001). It offers classifications (e.g., genus), explanations (e.g., natural selection), and predictions (e.g., environmental change will put pressure on phenotypes and reproductive success). Consequently, it is paradigmatic in that it becomes the orienting frame through which biologists communicate with each other, motivate their own work, and conceptualize the biotic world. In many ways, this fourth goal is the "holy grail" of science, indicative of a theoretical framework that unifies a community of scholars—even if it is not without some criticism and critics—and highlights the fact that the theory seems to genuinely make sense of the data better than the alternative, competing theories. To be sure, it may be the case that sociology, because of its varied phenomena of interest, levels of reality, agnosticism towards methods, and the panoply of theoretical perspectives, resists an overarching framework. But we can imagine a sociology that moves past the popular middle-range theories to sets of broader, more abstract theoretical frameworks that come to characterize interrelated studies while also overlapping in key ways. Indeed, I think we already implicitly work within this world, but whether we are ready to explicitly specify the principles at work and, more importantly, abandon the way we currently professionalize neophytes in favor of a coherent, consistent approach remains an open question.

Finally, the most controversial goal for social science: control. A tension has long existed in sociology. On the one hand, besides Spencer and Weber, the foundational theorists in Europe and, later, in Atlanta and Chicago took as a basic fact that social science could and should alter the social environment they inhabited. From Comte's ridiculed religion of humanity to Marx's commitment to praxis to the treatment of urban spaces as laboratories and the adoption of a pragmatist epistemology, sociology was oriented towards application. On the other hand, eugenics, social Darwinism, Nazism, and the like present cautionary tales of applied social engineering gone awry. How sociology creates an applied wing, then, has political and moral implications as much as scientific ones, and therefore it appears to be not as "simple" as in the natural sciences. Rather than wade any deeper into this thorny issue, I will say two things about control. First, I do believe our subject matter is more difficult to control than that which many natural scientists work with. Second, like prediction, I do think control is a matter of degree, and therefore my first point has to be qualified by the recognition that total control—e.g., harnessing fossil fuels to power engines—is unlikely in many cases, but many branches of sociology provide exquisite empirical evidence and theoretical models informing public policy. Policy is usually not totalistic in its reach, but it can and does shape people's lives across a broad range of outcomes.

The Path Forward

What, then, does this mean? In short, I propose we examine classical theory according to these five aims, teasing out what the theory does and where it falls short, and then move on to the next theory. In so doing, we provide a clearer picture of the theory itself, disassociating it from the noise of the theorist, their writing, their milieu, and their philosophic commitments. This does not preclude intellectual activities that sit on the border of the humanities, such as poring over Durkheim's letters to Mauss or inspecting Marx's marginal comments in an original version of Capital, as these may, at best, alter the fundamentals of the theory and, at worst, provide depth to a history of sociology. At the undergraduate level, the process is less intensive: we present the theory and examine how it shapes current research, while at the graduate level students can actually do the extraction of principles. The former are taught theory while the latter further advance their knowledge of theory while also beginning to understand how to theorize.

To demonstrate the efficacy of this method, the following entries explore Durkheim's Suicide, scrutinizing it with three of the five aims (more on that shortly). It is an exemplar because almost every sociologist knows the theory and, arguably, most believe it is the iron-clad, definitive statement on suicide. How can I be so sure? Having been subjected to reviewers who are not suicide specialists and having reviewed a fair share of what is being written on suicide in sociology and sociology-adjacent fields, Durkheim's theory looms large over every study. Reviewers who don't like a paper often rhetorically ask, "What would Durkheim say about this?", or worse, if you try to study suicide with only passing reference to Durkheim, there is always one reviewer confidently arguing that Durkheim's theory needs more attention. From a reviewer's perspective, almost every paper I read is a re-test of one of Durkheim's classic hypotheses using some new data or analytic technique (see the recent review of the sociology of suicide for more systematic support of this assertion [Wray et al. 2011]). It is low-hanging fruit to show Durkheim was only partially right or, better yet, wrong. Thus, exploring what the theory is, what it actually does and does not do, and pointing to its problems can provide a clear pathway for those looking to actually extend Durkheim instead of playing in his 19th-century sandbox. It makes the theory a living thing, while also challenging his possession of sociological common property. And by removing it from the amber in which it is fossilized and reinjecting its DNA into contemporary sociology, it encourages the very same sociological imagination that Durkheim exudes in his own writing.

In that spirit, the next entry will examine Durkheim in light of the classification system. Doing so will allow for a detailed demonstration of the theory itself, while also pointing to some of the flaws inherent in the theory.

Posted in Musings on Sociological Theory, Teaching | 2 Comments

The Age of Biological or Social Contamination?

For nearly three years, most of the world has been experiencing the ebb and flow of a pandemic. I've written previously about sociology and disease, the panic and grief social distancing caused, and, more recently, the reasons for the US's acute and (relatively) peculiar response to COVID-19. In that last piece, a topic that has become increasingly important to me took root: social contamination. I have been thinking a lot about this idea, as the notion of pollution and purification has been something of a theme in my thoughts on religious evolution. But, the implications for a microsociology of contamination, pollution, and, perhaps, cleansing have become urgent in my own sociological understanding of how we act, think, and feel. Before digging deep into this idea, a short, but I think interesting, diversion into the self is warranted.

A Moral Self

Sociology has long rejected the idea of a purely utilitarian actor who is driven primarily by maximizing their pleasure and minimizing their pain. At the very least, rational action is severely constrained by biotic/ecological factors, broader social structural features like inequality, cultural beliefs, and the idiosyncrasies of personality developed within these different contextual layers. But, sociology tends to have two ways of handling the "problem" of a social self. First, the symbolic interactionist (SI) perspective (Mead 1934; Blumer 1962) imagines the self as fully developed when it can (1) imagine what another person would do if confronted by the same situational cues and (2) have an internal conversation with itself, a real/imagined other, or a generalized community. That means one's self is built from the acquisition of language and other expressive idioms that allow people to understand what others will think and feel about them, to anticipate future behavior, and to be able to talk to themselves in the shower or their car without opening their mouths. To escape the critique of oversocialization (Wrong 1962), SI argues we are all fully capable of creative action, but with one condition: the pragmatic nature of SI means that we are habitual creatures until faced with a problem or obstacle, and then we become conscious, deliberate actors (Gross 2009). The result, in many cases, is a desireless person who does not seem to have wishes, wants, or likes, a picture that runs afoul of the basic science behind motivation (Kringelbach and Berridge 2017).

The alternative to this comes, partly, from Bourdieu's (1977) theory of practice. The self is structured within a field that socializes or cultivates a set of dispositions reflective of classes of people who share a similar field position (and, consequently, similar economic and cultural "capital"). Like muscle memory, this enculturated self acts habitually (Bourdieu called this constellation of dispositions a habitus), but these habits reflect internalized class-based strategies (unevenly distributed and acquired), and thus the problem of rational choice is avoided by making people unaware of their habitual reproduction of social structure. We are still self-interested, within reason, and we are still strategic (though limited by the type and number of strategies our structural position offers), but instead of calculating, we are often just an oversocialized human. Cultural sociologists have tried to resolve the problem of creativity either by adopting a cognitive dual-process model that recognizes, like SI, that deliberate action happens, though probably for similar reasons (Vaisey 2009), or by arguing that "unsettled times" provide the sort of structural and cultural autonomy necessary for innovation (Swidler 1986).

Again, desire seems to be sidelined – though our strategic dispositions allow for the pursuit of preferences or tastes that our habitus provides us. And, yet, I cannot help but ask: are sociologists themselves joyless and desireless? Or do we just think we are so controlled by neo-liberal forces that desire is disingenuous or reflects the capitalist superstructure's imposition of false consciousness? Or maybe we just think this is the domain of psychological shyster-ism and the self-help industrial complex? In any case, it does not actually square with empirical reality and builds a significant chasm between what most people do all day long and what we say they do all day long. (But, this digression is best left for a long essay on another day).

A Morally, Affectually Grounded Homo Sapiens

Once more, I beg the reader's patience, as I need a second diversion before reaching our destination on contamination. The brain clearly evolved in the context of intense selection pressures for cooperation in big game hunting that required bands of humans to tolerate each other (Bowles and Gintis 2011). Some key characteristics of this evolved brain are: the ability to discern at an early age pro-social vs. anti-social adults (Decety and Howard 2011); the ability to calculate fairness in exchanges that encouraged food sharing, reciprocity, delayed gift exchange, and so forth (Tomasello and Vaish 2013); the ability to take the role of others to ascertain motive and intention, keep score of who helps most, and who we owe and are owed something (Tomasello 2020); and, finally, the ability to keep track of our own and others' reputations (Boehm 2012). Humans bucked the dominance hierarchies of their closest ape cousins because of these capacities (Boehm 1999), favoring more level social organization that fit better with a model of shared hunting and the institutionalization of enduring cultural and structural institutions like kinship (Abrutyn and Turner 2022). These innovations did not remove the animal from our human nature but expanded the palette of objects to which we could affectually attach ourselves and, therefore, experience acute pain when we imagined losing them (Panksepp 1998), felt rejected or alienated from them (Abrutyn 2023), or felt them threatened. This evolutionary perspective sees humans as desirous creatures, craving objects with which we develop affectual attachments. Some are innate, like caregivers, potential sexual mates, and foods we prefer; others are built up from repeated exposure and cultural or structural patterns. But, we have things we want and like, and we pursue them, even if some are collectively desired (e.g., meat from the big game hunt was obviously something individual desire aligned closely with collective desire). These bigger points are best left for another essay, because the takeaway is one that sociologists sort of assume, but which is empirically grounded: one of the objects we grew attached to was the rules of the group, an evolved capacity that complements the aforementioned list of evolved neural capacities.

The consequence was an ape with "well-internalized moral values and rules [that] slow us down sufficiently that we are able, to a considerable extent, to pick and choose which behaviors we care to exhibit before our peers" (Boehm 2012:30). That is, while SI and practice theory are busy thinking about habitual action, humans evolved to manage their impressions, because the first impression is often the last impression. Thus, life is mostly filled with moral moments, even if sociologists are wont to reduce the stakes of a given encounter because nothing serious appears to happen. Yet, in reflecting on the mundane commitment most people show to waiting in line at a store or deli, Rawls (1987) argues that this behavior is one of the most moral things people can do! Goffman, too, argued that transgressing minor, seemingly inconsequential rules signaled one is 'that type of person' and could call into question one's entire reputation. Simply put, reputation and its various related elements, like (self-)respect, esteem, honor, status, and so forth, rest at the core of an alternative view of self. Is it a moral self? Absolutely. Weber purposively constructed the idea of status and status groups as the converse of Marx's notion of class. The former was his non-economic, morally-tinged form of social organization and action, while the latter remained embedded in economic categories of organization and action. While Bourdieu tried to marry the two, one cannot help but get the feeling that wealth and material power lie at the heart of his model of habitus and field, whereas Weber's model was better emulated by theorists like Veblen or Simmel.

If reputation-making and reputation-taking are central to a different version of self, and that version is moral, then it comes as no surprise that reputation is inextricably tied to moral emotions like pride, shame, and so forth. When we make claims about the respect we are owed or show the proper deference to others, certain emotions signal we are taking reputation correctly (Kemper 2006). When we try to enhance our reputation – or make reputation – through various strategies that have their own risks, we also experience these emotions as well as weaponize them in some cases (Clark 1990). When we are degraded, through public ritual (Garfinkel 1956) or in private face-to-face encounters (Goffman 1967), we experience the loss of reputation through these emotions. These emotions, thus, are signals of our reputational value, but they are also motivating forces to do the work necessary to create, sustain, enhance, and protect reputation. Pride, for instance, is universal to humans and is a unique, preconscious affectual force driving us to care what others think and to strive to do our best (Tracy 2016). Shame, likewise, is an evolved universal human response to the severe judgment of others (Boehm 2012) and the universal feeling of being small, wanting to hide, feeling mortified, and so on (Scheff 1988). Moral emotions, it seems, undergird our desires as much as more basic emotions like anger or fear. And, these moral emotions are eminently wrapped up in our biological capacities to become affectually attached to various types of objects.

This version of self is desirous. It need not be cognitively deliberate, which means it does not necessarily violate the conventional sociological wisdom that much of what we do is unconscious or more automatic. But, where those theories of self fall short is in their neglect of affect. Affect is often preconscious, and central to all of the ingredients of cognition like memory, comparison, self, and so forth (Frijda 2017). It coordinates (Damasio 1996), and often controls and commands learning, behavioral response, perception, and personality (Davis and Montag 2019). And yet, people feel affect; it is embodied; and, thus, the idea of an unconscious, unaware creature just running routines, or one simply pursuing pre-programmed interests, falls flat. Meanwhile, this more Durkheimian/Goffmanian vision of self fits the current affective and cognitive neuroscience, which rejects the notion of habit: most action is motivated action even when it is not conscious, because all action requires intention, guidance, and control (Miller Tate 2019), and we can sense or feel the "right" behavior. Additionally, the internalized programs themselves make us feel like what we expect to happen is happening, allow us to adopt increasingly flexible yet repeatable lines of action, and are built up from affect like shame or pride tagging past events, experiences, people, objects, and the like as supremely relevant, rewarding, salient, valued, and so on.

Ok, now we can move back to the theme of this essay: contamination. Clearly, we care about how we are seen, as we experience a lot of positive or negative embodied affect in everyday encounters. What are the fundamental things we must do to protect our reputation? What happens when one violates the ceremonial rules and casts one's own credibility, others' credibility, or the situation's credibility into doubt? What happens when a person repeatedly violates these rules? What happens when one's violations are so egregious (relative to the usual violations) that they escape conventional categorization and, thereby, existing mechanisms of social control and sanction? These questions, and others like them, guide the remainder of my thoughts in hopes of establishing a beachhead for a sociology of contamination.

Contamination

In the aforementioned paper on the pandemic (linked above), it became obvious to me that the pandemic was interesting for two reasons. First, there is the element of biological contamination. Remember when we were all quarantined in those early days and, if we were allowed to go out and walk, we would avoid being near people (at all costs)? The anxiety of catching COVID-19, even after we learned how it was transmitted, was enough to make people sanitize their grocery bags outside, carry anti-bacterial gel, and contemplate a life of perpetual mask-wearing and 6 ft. social distancing. But, over time there was also an intense fear of social contamination that had less to do with "catching" the virus.

For progressive-minded people, contamination lurked in the background in two ways. First, those who wantonly disobeyed public health recommendations signaled not merely outsider status, but also heightened risk and danger. They needlessly exposed people to biological risk, raised questions about the shared grounds of reality that so many people take for granted in non-crisis times, and challenged the constitutive nature of ceremonial rules. Second, this was nowhere more apparent than in the sudden rise of public danger: as unreasonable and subjective as it was, the threat of anti-masking, anti-vaxxing public outbursts became a source of social contamination.

For the political right, the imposition of government-mandated rules and the further erosion of imagined or longed-for local political, economic, and cultural autonomy represented a different, yet no less social, form of contamination. For everybody, I think, the loss in the U.S. was the imaginary veil we had built up in public places or workspaces that allowed us to ignore the signs that people we knew well were 'that type of person'. It smoothed everyday interactions, which was nice and also productive for workplace goals. But, it was a thin veneer that was suddenly broken by the seemingly ever-present threat of social contamination. Who did we think we knew that it turned out we didn't actually know?

Contamination means exposure to and, especially, contact with dirty, polluting things (Douglas 1966). As I write "imagine reaching into the toilet and squeezing your feces," it elicits an automatic affectual response, which we would label disgust. Though this is biological in nature, contamination extends to the social as well. Weber (1978:305ff.) was fascinated with status group closure, arguing that all closure implies some degree of control over interpersonal relationships, exchanges, and so forth. At the extreme end are caste or caste-like systems, where physical contact with some negatively privileged groups elicits an automatic response similar to being forced to lick a dirty toilet bowl. Just as charisma could spread from those possessing it to those who get close to the charismatic possessor, the contaminative elements of those who are defiled, discredited, or polluting can spread to us. Indeed, Goffman argued "that individuals can be relied upon to keep away from situations in which they might be contaminated by another or contaminate [them]" (1971:30). In the extreme, this idea meant some classes of people (those with certain stigma markers or those who are institutionalized) presented dangers to the rest of us. Not just run-of-the-mill danger, but the type of danger that would destroy or mortify our own reputation and, therefore, the sanctity of the moral, affectual self.

However, we need not rely on extreme cases to illustrate how powerful contamination is for shaping behavior. At the crux of the issue is our reputation – and questions like whose opinion matters to us, what consequences a sullied or questionable reputation has, and what rules might transform us from 'this type of person' to 'that type of person.' At a party, do you go through the host's bedroom closet, touching personal, private objects? How would you feel if, having left said party and remembered you forgot your phone on the couch, you returned and walked in on the host and her best friend impugning the integrity of other departed guests? If you got caught in the first act or caught someone in the second act, would the term "mortified" suffice? Admittedly, many of the most stringent rules of etiquette designed to elicit self-shaming and self-regulation have weakened in societies rife with the sort of division that leads to different forms of social contamination, as outlined above. What were once "ought to" rules are now mostly "shoulds" (Abrutyn and Carter 2015), and thus some aspects of contamination vary heavily in time and space. Hence, you shouldn't go through a medicine cabinet in a host's house, but with social media, you can deflect some of that shame into a sort of voyeuristic virtual space that escapes some of the punishment that would have happened in the not-too-distant past.

And yet, contamination is real. Many people still avoid homeless encampments, move to the other side of the road when approached, and lock their doors when stopped at a red light near a panhandler in the median. Visibly disabled people still elicit a cocktail of difficult emotions for many people. While de-stigmatizing mental illness has become a noble cause for many, coming face-to-face with folks who violate the most taken-for-granted conventions remains scary. These, of course, are extreme examples. But, for many conservative Americans, having a son or daughter who identifies with one or more categoric distinctions reviled by the modern Republican political movement – think "liberal," gay, trans, non-binary, or whatever – can be contaminative at Thanksgiving dinner and, worse, a source of severe shame and disgust in their circle of friends, where they hide this discrediting source of pollution. The left is not any better: I imagine many readers who have children would be mortified if their child brought home a caricature of the things they consider to be beyond the bounds of the moral order.

In the end, the idea of contamination is a powerful one because it implies a preconscious disgust response to noxious stimuli and intense prohibitions regarding contact. The proscriptions surrounding exogamy often, though not always, rest on stigma theories about the risk, danger, and defilement of marrying out of one's group. The deeply held beliefs about white people marrying black people (still held, unfortunately, by many today) or Christians marrying Jews (also, sadly, held by many today) are not new, but are more diffuse because of the size, scale, and complexity of modernity.

The Punch Line

The fear of contamination, and the ensuing feeling of shame, stigmatization, humiliation, and mortification when one becomes the threat they fear others pose, is likely biological. What makes us contaminated may vary in time and space, but the response surely does not. And thus, humans are not likely to overcome this anytime soon.

The downsides are clear. Some behaviors will always be considered far out-of-bounds and their transgressors considered of great danger. The extremes, of course, are obvious, but so too are things like muttering to oneself in public, smelling of bodily fluids, and so forth. We may call the homeless by new terms ("the unhoused"), but this likely makes our consciences feel better while producing little palpable real-world change. People will always be opposed to things they deem unhygienic, even if what that means in practice varies in time and space. From a socially constructed point of view, there are always "backstages," and thus, in addition to the sorts of bio-social contaminants some people appear to be, there are deeply conventional forms. Bedrooms and certain bathrooms in houses are always "preserves" of self. Who enters and what they are entitled to do is always restricted. But, the term "purity test," which has become commonplace for referring to the sorts of hurdles centrists face in political parties today, reminds us that almost anything can become a line in the sand. Does it reach the level of disgust? Is the person a pollutant in the same sense as in many of the examples provided above? It would be interesting to try to measure this, but the level of anger and fear displayed by many folks who consume Fox News, and have for some time (Rotolo 2022), indicates that people who do not display all the proper signage of membership may be viewed as disgusting, dangerous risks to the moral order.

The silver lining in all of this is that humans come off as moral, emotionally-charged creatures. Like our ape cousins, we are rational in so far as we are goal-oriented planners, but these goals and plans are only as good as our ability to coordinate our affectual responses and cognitive interpretations of the situation. When objects are so taboo that they elicit disgust and the belief that touching them will make one as disgusting as the contaminating object, then the possibilities for strong in-/out-group rules can form. But, it also suggests that humans are compelled to design the types of rules that become essential to their own conception of self and desire to not be a polluted person. Figuring out how to expand the scope of these rules to cover all, or maybe most, people in a community, instead of placing some in categories of good, honorable, and clean and others in categories of bad, stigmatized, and dirty, is the trick. But, the capacity to do so is already there.

Posted in Culture, Emotion | Leave a comment

Cultural Sociology I: Meaning Making and the Psychological Industrial-Complex

In a previous post, I made the argument that sociology needs to go beyond just incorporating culture into a sociology of suicide. It needs a cultural theory altogether. But, what would that look like? What would be its framework? One obvious issue it would need to deal with is meaning, meaning making, and who is "responsible" for meaning making. Sociology has largely ignored the problem of meaning in studying suicide, choosing instead to pursue the population-level study of the distribution of suicide, while suicidology, until rather recently, cared very little about meaning, culture, or anything socially constructed, for that matter. Meaning, however, matters. For instance, consider the political and social implications behind the move from the term "committed" suicide (Smith 1983) to "completed" or "died by" suicide. Semantics, to be sure, but with real implications for intention, motive, causal attribution, and stigma.

Not surprisingly, the problematization of meaning was at the heart of the earliest and most interesting critiques of the Durkheimian macro-structural approach (Douglas 1968; Atkinson 1978). The idea was that Durkheim's approach depended on the veracity of official statistics about suicide. Otherwise, how could we trust the significance of comparisons between and within countries' suicide rates, or causal explanations about why rates in one place or among one group (e.g., religious categories) changed over time? The issue, which is a very legitimate one, is that those doing the documentation of suicide – primarily, but not limited to, medical examiners and coroners – are not always dealing with clear-cut cases of suicide. They have to do inductive and deductive work around so-called "suspicious" deaths (Timmermans 2006). Adding to this less-than-objective determinative process is the fact that medical examiners serve multiple masters, including the medical sphere that licensed them, the legal sphere, which may call them in to testify as expert witnesses, and the domestic sphere, which, in many cases, has plenty of incentive to have a suspicious death declared anything but a suicide (e.g., insurance claims, stigma and shame). Taken together, there are reasons to cast doubt on – or at least reasons to scrutinize – how official statistics are constructed and whether they measure what they purport to measure. [Incidentally, whether statistics do what they aim to do remains an open question, but it is worth noting that Pescosolido and Mendelson (1986) convincingly showed that the social causes identified by Durkheim stood up against these claims of explicit and implicit bias.]

Setting this aside, what would a sociology not committed to the study of the social distribution of suicide look like? In future essays, I will try to tackle the diverse forms a cultural turn in the sociology of suicide might take, but for now there is the question of who makes the beliefs about suicide that America, and perhaps most Western nations, adopt. There is no point in stretching this argument to the point of determinism, as a cultural sociology of suicide would care about the diversity of meanings in play as well. But, taking up the thread in the earliest critiques, it remains a central task to ask who makes the meanings that the public, media, and even scientific communities take for granted. For Douglas, Atkinson, and others, it was/is medical examiners/coroners and their consanguines. But, what about the psychiatric/psychological "industrial-complex?" That is to say, what about the people who are in the business of defining/diagnosing/treating suicide (Abrutyn and Mueller 2021)? These professionals, whose centrality is surprisingly overlooked from the perspective of the sociology of professions and occupations, are a key faction in the construction and perpetuation of meaning. So much so that an entire counter-movement in the interstitial space between psychology, social work, and anthropology has sprung up against them in the guise of critical suicidology (White et al. 2015). Hence, the professionalization process in the 1970s and 80s should be of paramount concern for a sociology of suicide that builds off of culture and its principal mechanism: meaning.

The Briefest History of the Diagnosticians

As an important caveat, one of the fundamental strengths of historical and organizational sociologies is that they focus on the collective and not the individual. It is important, then, to place the following discussion in its proper context: psychologists and psychiatrists, then and now, were not nefariously plotting to scam people or to cause harm – in fact, in my experience the vast majority are committed to the opposite. Many things that happen, when refracted through historical lenses, show that changes – intentional or not – occur when groups of individuals, sometimes as a single group in a movement or other times aggregated, respond to social pressures beyond their control. They feel their material and ideal interests are threatened, whether the threat is real or not, and respond in ways that aim to protect what is theirs. In particular, when a class of people, like therapists or clinicians, feel their livelihood is being squeezed, they will, like any occupational group, respond in ways meant to prevent losing their jobs/careers, their privilege and status, and/or their power.

In the 1970s, several intersecting historical forces created the conditions for professionalization (Horwitz 2002). The government and various community-level organizations began pushing a different mental health agenda, shifting from the stigmatizing institutional model to the disease/medical model. The world was changing, as asylums/institutions were expensive and had come under intense scrutiny from social scientists (Goffman 1961; Scheff 1966) and pop cultural figures (Kesey 1963 – One Flew Over the Cuckoo's Nest) regarding their inhumane treatment of patients. A massive economic restructuring drove insurance companies to begin to change their policies toward mental health and psychiatry, creating a need for diagnostic checklists that could suffice for bureaucratic processing. Gone were the days a therapist could simply say a patient needs "X"; rather, they needed to check boxes for processing the insurance claim. Amidst changes in insurance and government regulation, pharmaceutical companies saw opportunity. Since the 1950s, they had pushed versions of SSRIs similar to today's, but to no avail (Herzberg 2009). Psychoanalysis had no need for drug treatment so long as therapy for anxiety/neuroses continued to be covered by insurance or middle-to-upper-class clients. Finally, the explosive growth of public higher education in the early 1960s did what it did to many established social sciences: it brought a ton of new approaches driven by cadres of newly minted graduates who had grown to challenge hegemonic holds across disciplines as they pursued their own careers. Psychoanalysis' days were numbered as various forms of cognitive science proliferated and pushed new ideas about the etiology, diagnosis, and treatment of mental disorders.

Like the shift to mental illness-as-disease, the claim to professionalize psychiatry was built around the medical model doctors had used less than a century prior: their knowledge and practices would depend on the scientific method and evidence-based research (Conrad and Slodden 2013). A committee was tasked with reviewing the research on mental illness, discerning what hard proof there was for myriad disorders. Following a medical model, the goal was to include in the DSM-III only discrete diseases, as determined by their having discrete etiologies, diagnoses, and prognoses (Horwitz 2002). But, as with any community of scientific knowledge producers, this was not a task achieved in a vacuum safe from politics and competing interests (Merton 1979).

Psychoanalysts, in particular, felt the acute threat to their livelihood this project posed. Neuroses and most psychoanalytic disorders were not founded in anything approaching the rigor suggested by the experimental or clinical trials found in medicine (Horwitz and Wakefield 2007). Psychoanalytic knowledge was built up from individual patient histories, generalized with or without generalizable evidence. The fear was not unfounded: if the DSM-III became the official "bible" of psychology, clients would eventually prefer those who adhered to it, while the APA could, theoretically, become like the AMA, with the power to certify/decertify practitioners. So, they fought back, as did many other clinicians fearing their skill sets would be less valuable in the DSM era. Ultimately, the final product, published in 1980, looked nothing like the vision the committee was tasked with realizing.

For instance, far too many disorders included in the DSM were not discrete in their etiology, diagnoses, or prognoses. Decades of research, independent of and sponsored by drug companies, could not find an organic source of, say, depression, so the committee downplayed this criterion. Of course, this sleight of hand was a major indictment of their claims to parallel medical doctor status, in part because the same treatments became common for a diverse array of disorders, with little understanding of why they worked in some cases and why many patients could never find a single treatment that worked (Karp 2016). Nonetheless, the writing and publishing of expert knowledge is powerful (Goody 1986). And drug treatments for medical disorders had become so normalized that the DSM, together with the presumed efficacy of drugs (a much cheaper alternative for insurance companies than long-term therapy), gave the nascent profession legitimacy. The claim that serotonin levels were directly related to mental disorders (and therefore supposedly treatable with SSRIs) became widely accepted across the field, mass media, pop culture, and, eventually, common sense claims-making, despite the empirical evidence to the contrary (Moncrieff et al. 2022).

With the backing of "science," the book became the source for routinizing training, knowledge claims, practical repertoires, and meaning-making. Psychologists were like doctors, just for the mind. It made no difference that several disorders were totally political and social. For example, homosexuality was only a disorder in so far as a community declared it as such. (Removing it, incidentally, required a concerted and sustained political movement by lay activists and radical psychologists alike [Bayer 1987].) Indeed, by the time Prozac Nation was published and made into a popular movie, the psychological industrial-complex (psychological science, insurance companies, community organizations preferring the disease model over the stigmatizing institutionalization model, and drug companies) was a taken-for-granted force to be reckoned with (Pearlin et al. 2007). Even with the recent meta-study debunking the organic etiological claims of depression (Moncrieff et al. 2022), the damage has been done. "They" are the trusted source for anything labeled mental health-related. Hence their dominance in suicidology since its inception, and their hegemonic hold over the causal explanations we accept as either real – psychache (Shneidman 1995), loneliness/hopelessness (Joiner 2005; Klonsky and May 2015), and escape from psychic pain (Baumeister 1990) – or suppose are real – e.g., mental illness causes suicide (Mueller et al. 2021).

Meaning Makers

When Douglas (1968) and Atkinson (1978) wrote their treatises, they had no clue that the DSM-III would emerge or that pop culture, lay society, and the scientific community would come to revere the psychological/medical approach to mental health. This approach naturally spread into the science of suicide. And, like doctors in the 1950s and 1960s (Starr 1982), psychology's ascent to dominance solidified the notion that suicide is caused by intrapersonal forces, be they mental illness, psychache, or any number of cognitive appraisals like hopelessness and loneliness (Cavanaugh et al. 2003). This is not to say that Durkheim's work or sociology had managed to diffuse their own empirically grounded claims that suicide was caused by social forces. Indeed, to the contrary, Durkheim's work has not made much of a dent because it has virtually nothing to do with why people die by suicide and everything to do with demonstrating that social factors constrain or facilitate suicidality among certain groups or classes of people. Important, indeed, but peripheral to the business of explaining suicide as a social behavior. Of course, it is questionable how well the professionalized version of psychiatry has done, given rates have grown over the last two decades despite billions of dollars spent on psychological research.

Most obviously related to the proclivities of a profession rooted in a diagnostic manual – and its pitfalls – are the efforts to catalogue individual risk factors. Over 150 risk factors have been identified as correlated with suicide, a list so long and so weakly discriminating that it renders these factors useless in explaining or predicting suicide.
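One way to see why a long list of modest correlates fails at prediction is simple base-rate arithmetic. The numbers below are illustrative assumptions of mine, not estimates from the suicide literature; the point is only that, for a rare outcome, even a seemingly strong screen drowns in false positives:

$$P(S \mid +) = \frac{P(+ \mid S)\,P(S)}{P(+ \mid S)\,P(S) + P(+ \mid \lnot S)\,P(\lnot S)} = \frac{0.90 \times 0.0001}{0.90 \times 0.0001 + 0.10 \times 0.9999} \approx 0.0009$$

That is, a hypothetical risk factor flagging 90% of eventual suicides while misfiring on only 10% of everyone else still yields a positive predictive value under one tenth of one percent when the base rate is 1 in 10,000.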

Predictably, the psychological industrial-complex tends not to offer social explanations like those typically reflected in artwork (Stack and Bowman 2012), nor does it consult the insights sociology might lend (Mueller et al. 2021). On the one hand, the vast majority of movies made in the 1900s involving suicide featured social causes most prominently: relationship strains, unrequited love, status disruptions, and so forth. While this may feel anecdotal, stories resonate because they make sense to the viewer, not because of their implausibility. On the other hand, the very notion of what a mental disorder is (Scheff 1966) and, more importantly, what it means – e.g., is it good or bad, a sign of stigma or one of creativity – remains socially constructed (Horwitz 2002; Pearlin et al. 2007). One need only consider the disappearance of once popular diagnoses, like hysteria (Micale 2019), homosexuality (Bayer 1987), or anxiety (Horwitz 2010) – which, incidentally, were largely replaced by the ubiquitous depression diagnosis – to understand how unorganic most mental disorders likely are (besides, perhaps, schizophrenia and bipolar disorders). But, their expert status has cemented, in the minds of the public, media, and scientific community, the idea that suicide, like mental health, is caused by intrapersonal forces and not interpersonal ones.

Unfortunately, the question of meaning-making and meaning-makers is beyond the scope of a short blog post, but I think it serves the larger goal: a cultural sociology of suicide must begin by examining and revealing how psychologists have built up the meanings of suicide in the U.S. and throughout the West, how and why alternative explanations and meanings have emerged and persisted or failed to, and how their science is translated into public-facing knowledge. What might this look like?

For one thing, I am unaware of studies that actually interview and/or observe, ethnographically, psychologists and psychiatrists. What do we know? How do they process clients? How do they think about suicide, mental health, and the use of diagnostics? Anticipating other posts, it would also be prudent to think creatively about how to study just how impactful psychological beliefs about suicide are for those who have completed, attempted, or are thinking about suicide. Do they frame and understand their feelings, attitudes, and actions within a psychological model? How are these beliefs distributed across time and space, and what sorts of factors cause them to expand or contract? And, how do rival beliefs succeed or fail against the dominant hold psychological beliefs presumably have over the majority of Americans?

Admittedly, these questions are not focused on suicide, per se, or on victims, survivors, or prevention science. Instead, they are focused on cultural production and dissemination. To me, that is one good starting point for introducing a cultural theory to suicidology, because it reveals the processes by which suicidology became suicidology, by which its beliefs are constructed and distributed, and by which cultural ideas circulate, reach a peak, and contract. From there, we can begin asking about the mechanisms by which these beliefs become available, accessible, and applicable to people.

Posted in Culture, Suicide, Uncategorized | 1 Comment

A Cultural Theory of Suicide?

Sociology has famously studied suicide using Durkheim's classic structural framework. For the uninitiated or for those needing a refresher, what that means – or at least the common interpretation of what that means – is that (a) suicide rates vary based on the structure of social relationships and (b) the structure of social relationships varies according to a collective's degree of integration or regulation. Plenty of people, including myself, have plumbed the depths of what this means, so I won't waste time on that (for a recent review, see Mueller et al. 2021). I also won't bother critiquing this approach, which can be done without necessarily leaving Durkheim behind, as this is well-covered terrain as well (some of my favorites, in addition to the Mueller et al.: Phillips 1974; Johnson 1965; Nolan et al. 2010; Abrutyn and Mueller 2014, 2018). That said, what Durkheim doesn't do, and in all fairness was not interested in doing, is ask or answer why people choose to die by suicide. This silence is deafening.

By Stack and Bowman's (2011) count, sociology has contributed the second-fewest published studies on suicide since 1980, and the gap between us and the number 1 discipline is massive (~180 papers to 9,000+). We are behind disciplines like molecular biology and law, the former of which does not stand out immediately as a typical field of suicide inquiry. How is it that the 16th most assigned reading in the discipline (according to the Open Syllabus Project) has not revved up the sociological imagination of generations of sociologists?!? Part of the problem is sociology's obsessive gaze backward at some halcyon days of classical yore (which I have commented on here, here, and here). Durkheim's theory is hermetically sealed, while his theoretical frame is often taught as fact; case closed. Suicide consists of four types and is caused by too much/too little integration/regulation. Sure. Fine.

But, we also know suicide – or the meaning of suicide – varies tremendously across time and space (Barbagli 2015; Baechler 1979; Kitanaka 2012). As sociologists, we know meanings matter in so far as humans make sense of their feelings, thoughts, and behaviors (as well as real, imagined, and generalized others' feelings, thoughts, and behaviors). We know that humans are planners, and thus orient themselves to social objects in the course of their planning, while these social objects – physical, social, or ideational – take on meaning through interaction with others. And, thus, we know suicide as a social object varies in meaning according to the cultural beliefs of the time and place. Exceedingly high rates of suicide in a region of south India are interpreted and understood as caused by massive strains between exceedingly high material aspirations and the structural constraints preventing many from realizing them (Chua 2014). Meanwhile, unfathomably high rates of youth suicide are explained by respondents as caused by historical cultural genocide leading to a search for a way to "belong," with suicide presenting one such pathway to belonging (Niezen 2009). The lesson here is that suicidal behavior, like any other social behavior, is performed and, thereby, signals subjective feelings and thoughts in symbolic form that the attempter or decedent presumably wished to express to some intended audience.

Of course, the audience may misinterpret the meanings, applying their own well-worn schema to the decedent’s performance. Likewise, unintended audiences are free to make sense of the suicide as they see fit. Nevertheless, it is a symbolic act that demands meaning-making and, thus, it is saturated in culture. And yes, culture is a sticky concept with myriad meanings and contested vagaries. But, it is the primary tool through which human societies reproduce themselves, distinguish themselves from each other, and convey what feelings, thoughts, and actions are good, right, appropriate, or bad, wrong, and inappropriate.

Thus, like Becker’s (1953) marijuana user, one must become suicidal. They must acquire the meanings that transform their inner feelings and thoughts into something a suicidal person would feel and think. These meanings must be available, which they always are thanks to various modes of artistic expression that teach us about suicide as well as the very real possibility that through traditional and social media will be exposed to a celebrity’s suicide or just a member of our community. So, the action itself is always “out there.” But, is it accessible and applicable to the person’s reality (Patterson 2014:19)? And, if it is accessible and applicable, how many people are vulnerable to those meanings? Is it just a solitary individual whose struggles mirror or are perceived as mirroring a media report of a suicide? Or, does it fit a pattern of folks, like Niezen found among Indigenous youth in a particular community?

These questions scratch the surface of my larger point: Durkheim’s theoretical and methodological strategy is still important, particularly in measuring wholesale changes over time and place when massive disruptions happen, like the Great Recession in 2008. But, it falls short of asking more interesting questions about suicide that sociology is well equipped to ask and answer.

The sociology of culture has become an exciting space where the types of debates and discussions concerning sociological theory's biggest questions are occurring. It is a place of promising marriage between those studying suicide and those asking (1) how/where do beliefs come from, (2) how/why do beliefs cluster or cohere in certain collectives and/or classes of folks, and (3) how/why/when do beliefs come to shape behaviors? Durkheim notably chose suicide for many reasons, but one central one was its extreme finality (which is also the topic's biggest methodological weakness, as one cannot interview a decedent to ask the most pertinent questions about why one chooses to do something). He felt that explaining suicide meant being able to explain most other negative outcomes and, implicitly, positive, desirable outcomes. The problem, as noted above, was that he didn't care to actually ask why people do things, just what social forces impel them to do things.

Fine. But, we can do better. To do so means to heed Atkinson's (1978) serious critique that "its the theoretical and methodological content of Suicide which has fascinated generations of sociologists and not that phenomenon which members of a society call 'suicide'" (p. 10). It is to ask anew what suicide is, why it occurs, and what it means to society and within the framework of local cultural realities as large as communities and as small as families or even dyads. And it is to ask how deciphering the meaning of suicide might offer the science behind suicide better understanding and explanation, and the science of prevention/post-vention better tools that allow us to intervene more efficaciously.

Posted in Culture, Suicide | 1 Comment

Sociology and the Good Society

As we spend summer thinking, not thinking, ruminating, not ruminating on classes, sociology, and the good society, I wanted to point to an oddity in the sociological ideology. An ideology is a set of beliefs about what we believe will happen and why it happens, and, as a retrospective tool, an interpretive lens for putting the past into some cohesive pattern with the present. It is not causal, nor is it scientific – though it may draw from science, be scientific in outlook, and may be causal in how it shapes the behavior of a person or class of persons. All disciplines have an ideological bent that crystallizes and saturates its students and practitioners, emanating from some mix of history, epistemics, and other contingencies like the self-selection of students. In any case, sociologists learn a bunch of stuff early on in undergraduate and graduate training that goes largely uninterrogated and is simply accepted as fact. And these tacit beliefs usually go unnoticed, like most tacit beliefs wherever cosmologies are inherited. One of these little nuggets sits at the heart of this essay and forms one of the true paradoxes of sociology. An (ideological) paradox with immense and far-reaching ramifications. I'll call it the "community paradox," which admittedly is a sloppy, imperfect name, but it works for now.

The community paradox rests on sociology's insistence on teaching classical theory as though all of it were theory and not just (bad) social philosophy constrained by both shaky data and predictably terrible interpretations. The basic paradox rests on a warped view of modernity founded on an incredibly tumultuous and rapidly changing period of time in European history. In short, the argument goes as follows: modernity (usually code for urban, industrial, liberal [in the more classical political theory sense]) tore asunder the traditional bonds of premodern social organization. On the one hand, then, there was something lost: the ascriptive, deep kin-based social solidarity was protective, healthy, natural. On the other hand, it was delimiting, "primitive," inherently inequitable.

The past is simultaneously a golden age to be wielded against the unnatural nature of the present and so different from the present that it might as well be inhabited by aliens, and, therefore, it does not serve as a good model for the good society. In part, this ambivalence rests on a set of similarly constructed binaries meant to distinguish the object of classical theory’s inquiry by throwing “modernity” into sharp relief against the past. Think Gemeinschaft/Gesellschaft; mechanical/organic; primary/secondary; communal/associative; premodern/modern; traditional/modern; sparse/dense; total self/partial self; personal/impersonal; specific/general. It is, of course, simpler to think in big containers, often dichotomous ones, if only because it makes pedagogy much more standardized and consumable.

It need not matter whether these binaries are empirically true. For Marx, the arc of humanity ran from some romantic horde, sharing and caring, to a class-based exploitative system (though at least he saw the arc of humanity as ending well). For Durkheim, old societies were to be admired for their protective, cohesive qualities but distrusted for the violence they do to the individual and her freedom, while modernity was, in theory, perfect but, in practice, pathological. For Weber – who was much better at hiding his views on human nature – the enchantment of irrationally organized societies was always being impinged upon by the sociological version of the law of conservation: routinization, formalization, standardization, and rationalization always threatened to disenchant. He, of course, seemed genuinely concerned about the inevitability of a modern hyper-bureaucratized society ending enchantment once and for all. Simmel, too, contrasted the forms of society that fostered sociality and the totality of self with the metropolis and its tendency to objectify culture and people, reducing them to typifications and stereotypes. I could go on with this list, but will not bore you further.

But, that was then, this is now, a critic may push back. As evidence of the paradox’s implicit sway, I give you Bowling Alone and recent papers on the decline in friendship and so forth (here, for instance). Both sorts of scholarship assume there was a time in which social cohesion was better, denser, richer, more protective. For Bowling Alone, it was the golden age of associationalism in the U.S., while other scattered scholarship is delimited by data constraints (how far back surveys go). The irony in these studies is that the golden age they imagine was intensely critiqued by then-modern sociologists for being too conformist, too narrow in ideals, too inauthentic, too lonely. And the scholars critiquing the massification of society were busy revering the 19th century for its publics and its support of free thinking and localism. So, in the end, golden ageism has been baked into the discipline, but with a careful, studied distance that only intellectuals (whose preferred, comfortable milieu is the very anonymous, cosmopolitan, culturally rich modern urban space) could foment.

Is it correct that small, tight-knit communities are too stifling, too nosey, too reproductionist of all the systems of inequality sociologists bemoan (particularly patriarchy), and, shocker, too “rube-like”? And is it correct that urban spaces denature us? That impersonal ties, achieved statuses over ascriptive categorization, hierarchy, and so forth are the worst things to happen to humans? Below, I’ll address both of these questions a little further, shedding some light on them, but likely will not answer them in ways that are satisfying and definitive – sorry. That said, it is worth keeping a much more complex question in the background: what can sociology actually tell us about the “good society”? If premodernity is littered with social organization too integrative and regulative for individuality, and modernity is too individualized, then, to borrow a fairy-tale metaphor, what is the “just right” porridge to eat or bed to sleep in? Is there a good society, or are sociologists implicitly, subtly saying we are all doomed?

The Premodern Problem, or How We Learned to Love/Hate Bucolic Life

A passage in Marcuse’s One-Dimensional Man (whose page numbers elude me as I write this) captures the gist of my argument. In discussing just how one-dimensional humans are in capitalist societies, he contrasts sex in traditional times with modern times (modern = early 1960s). He imagines the former as sensuous, creative, spontaneous, and unencumbered by technology’s many trappings. The latter is mechanical in nature and confined to unnatural locations (think the backseat of a small car – a VW Beetle, perhaps): cramped, unimaginative, rote, and objectified. Never mind the unsanitary nature of life in the 18th century and before; or the essentially free license men had; or the lack of birth control, which meant not only the spread of STDs but also pregnancies that were far more damaging for the woman and her family than for the man. Nor should we pay attention, according to Marcuse, to the increasingly liberal notions of sexuality unfolding in the 1960s in many corners of the West. What mattered was that the idyllic past was superior to the present in all forms; it was two- and not one-dimensional. Humans were free then, sublimated now.

Admittedly, this take is influenced by Freud far more than the classical sociologists were, but its underlying logic remained indebted to those classic binaries. Equally true, Marcuse, like Marx, was not advocating going backward, because there was something implicitly uncouth and uncivilized about the past. In the distant past were caricatures of foraging societies whose members lacked individuality, lived brutish lives, and were somehow less intelligent. In the more recent past, there were two models: aristocratic or peasant life. No sociologist worth their salt would ever pine for the life of the landed gentry, while peasant life was just as abhorrent as foraging life to our sensibilities.

At a theoretical level, the stakes were obvious. For Durkheim, there was a lot to love in the abstract about the comfort and support of premodern society, but it was also too overbearing, squelching the freedom the individual had presumably attained in modern times. Marx was more pragmatic: technology was a harbinger of material comforts hitherto unheard of or unseen, and thereby should be celebrated!

But, these theoretical decisions have had consequences. In short, one might say that the golden ageism baked into the discipline gives a green light (and a set of models) for contemporary sociologists to reproduce the flaws of the founders. It is almost as though we are trained to always produce a perfect, idyllic foil for explicitly or implicitly judging the present but, ironically, never a model for the future. One might also say that this stance has major downsides for a science of societies.

For one thing, it obscures the biological, neurophysiological, psycho-social, and social continuities between so-called premodern and modern societies (a sleight of hand that adding the prefix “post” achieves, too, but for different reasons and with different consequences). It is correct to say biological evolution has not ceased to work on our brains and bodies; blue eyes, for instance, were virtually non-existent ~7,000 years ago. But it is incorrect, given the overwhelming evidence we have, to suggest the earliest Homo sapiens were radically different from modern humans, at least in terms of our brain’s size and shape. Finally, it is also naïve to presume hierarchy, status seeking, domination, wanton killing, rape, venture capitalism (that is, the capitalism of raiding), and the like were non-existent. Humans are animals, despite our big brains and moral conscience. The evidence suggests aggressive male upstarts have always existed, going back to the last common ancestor we shared with chimps and, likely, with gorillas (Boehm 2001). Societies had to work exceptionally hard to beat back these upstarts. And when conditions were ripe – e.g., real, imagined, or manufactured external threats; the need for third-party adjudication; the need for centralized risk management – the line between temporary centralization/consolidation of the legitimate right to make binding decisions about resource production, distribution, and consumption (in the broadest sense of the term) and durable hierarchicalization grew fine, making social problems a constant lurking in the background.

Put differently, we are not that different from the past. Marx and Marcuse are right to presume we are far more advanced technologically. We live in societies much larger, denser, and more complex than ever before, but just how different are we? Probably not nearly as different as we believe. Rather, the past serves as a contrasting tool, both highlighting whatever flaw the scholar sees as problematic today and remaining hermetically encased and irretrievable. This comfortable process collapses 300,000-plus years of human evolution, acting as a Rorschach test: which time and place in the longue durée is the analyst choosing as their foil against modernity? What are they yearning for that is lacking today? Whatever the answer, the one-dimensionality proposed by Marcuse is, in fact, found in sociology’s stance toward the past and not in the present.

A Portrait of an Era in its Infancy

In contrast to the natural state of premodernity, modernity was distrusted by classical theorists. Unlike premodern times, modernity was unnatural. The hardness of concrete, steel, right angles, and a quickened pace of life easily parallels the hardened social characteristics of urbanity like impersonal social ties, anonymity among denizens of the megalopolis, shrinking family networks, objectified economic relations, growing secondary (read: utilitarian) associations, and the constant surveillance of the State. We are one-dimensional, robbed of everything subjective, communal, sensuous, moral; the world is disenchanted and, though efficient and productive, dehumanizing; mutual interdependence and liberal individualism allow for free thinking and diversity, and yet are vulnerable to overspecialization, alienation, anomie, exploitation, and the like.

Durkheim so feared the disintegrative forces of modernity that he wrote a book on suicide as caused by a confluence of modern forces that severed the strong ties of the past (ties about which he was ambivalent in the first place!) and caused moral confusion. These harsh critiques remain as vivid as the images of smoky factories, dirty and impoverished urban spaces, and squalor in a Dickens novel. And while modernity is not only factories and cities, the classics and their descendants have mostly been engaged with urbanity, industrialism, and the like, which is perhaps why so many have rushed to move on from modernity into some imagined new stage of human life.

(BRIEF DIGRESSION: A sort of irony in the unyielding fear and Cassandra-ism of classical theory is that, for all the distaste for the dangers of modernity, academics are intellectuals, and intellectuals have historically favored denser, culturally richer, more vibrant spaces. The image of salon, coffee house, or beer hall life – real or manufactured by fiction – captures the effervescing qualities of Durkheim’s moral density. But cities have also offered protection, both in their provision of anonymity and in their relative politico-economic autonomy (at least since the Italian city-states [Rashdall 1936]) from the larger political territories in which they are nestled. Bologna, Salerno, and Paris were the first sites of universities, underscoring both the pull cities had in the first place – legal scholars attracted second-born sons who were blocked by primogeniture from their inheritance and who saw few other honorable options beyond priesthood or monastic life – and cities’ central place as engines of further growth.)

Were the classical theorists correct? Have denser populations in highly differentiated social organizational patterns dehumanized us?

Arguably, this position ignores several alternative interpretations of modernity that see life today as perhaps more closely resembling life in foraging societies. The long arc of history might be represented as a negative curvilinear line, with the two poles sharing a host of organizational elements that are in fact natural to the earliest Homo sapiens (Maryanski and Turner 1992), albeit with modifications and some key differences.

Up until 10,000–12,000 years ago, small-scale societies were the rule, not the exception. The two basic organizational features of these societies? The abstract community and the nuclear family, which served as the basic productive unit. Not surprisingly, we share the former with gorillas and chimps, who have a very clear sense of membership in a community, while the latter is a result of selection pressures working on the other durable source of social organization: mother–child bonds. When humans settled down, extended family became increasingly important as lineage shaped property rights, inheritance, defensive measures, and collective risk management. Thus began the social cage of kinship. Fast forward to today, and we see that modernity has been an unabated assault on this cage, eroding the thick webs that ensnared people and making the primary/immediate family the basic unit of social organization for most humans.

The question, then: are we happier with greater autonomy and fewer strong-tie obligations, like other apes and foraging societies, or are we happier in dense, complex social networks? Intriguing question, to be sure. Clearly, our big brains allow us to be flexible in the diversity of milieus we can inhabit, but we are still apes whose neurobiology has not evolved radically since Homo sapiens branched off.

Another example of the curvilinear pattern: stratification and inequality. At either pole, we find much lower levels of stratification and inequality (there is no evidence, to my knowledge, of a truly communal or communist society, despite humans’ best efforts). It is radical to speak of the current world as less stratified and inequitable than the past, but the data do not lie (Nolan and Lenski 2014; Sanderson 1999). In particular, one thing the social cage brought with it was pressure to consolidate and centralize leadership. For 300,000 years, humans worked hard to reduce domination and prevent enduring leaders from calcifying political roles. As the slow march of political differentiation accelerated some 5,000 years ago, so too did the extraordinary amount of oppression, injustice, impoverishment, and so forth.

This is not to say that inequality hasn’t sharpened over the last few decades (it has); or that poverty has been eliminated (it hasn’t); or that we couldn’t or shouldn’t be better (we can and should). It is a simple fact that in the vast majority of agrarian societies, the tiniest proportion of humans lived their best lives while the rest of society either serviced those people or were like the humans in The Matrix: born into the life of a battery to keep the system afloat, birth new batteries, and die once this purpose was served.

I could go on about other similarities. The rise of spirituality, syncretism, agnosticism, and secularism parallels preliterate religious life far more than the trappings of highly organized religions, for example. But the bigger point is that the myriad social cages humans spent millennia erecting as mechanisms of control have been demolished or reduced. Of course, a critic might respond, the amount of control and domination has increased! So, how do we square the ape trait for autonomy and independence with the sharp rise of surveillance, biopolitics, and so forth?

I would argue we are not as controlled as one might suspect, especially when compared to the idyllic small community of the Marcusian imagination. A small percentage of crimes are solved; a not-insignificant portion of the population cheats on their taxes, even a little bit, and never gets caught; people consume immense amounts of porn and can hide their addiction (and I am not talking just run-of-the-mill porn, but the types of sexual behavior that, in a small town in the 18th century, would be known through gossip and would lead to expulsion, in the case of a man, and perhaps being burned at the stake, in the case of a woman). Does the state excessively monitor and severely sanction segments of the population? Unfortunately, yes. But the idea that we are controlled is just another version of oversocialized sociological theory.

More broadly, every society works to monitor and sanction its people. Smaller, denser social spaces, like a chiefdom village or the idyllic American small town, are far more adept at doing this (see Sherwood Anderson’s Winesburg, Ohio, for the amount of knowledge a resident would have of most other residents). Deviance is far more difficult to pull off there, especially deviance that exceeds the local definition of “garden variety.” Modernity is significantly freer than at any point in the last 10,000 years, even if our keystrokes are being recorded.

Implications

There are empirical and theoretical implications of sociology’s adherence to this paradox. Consider the following example.

Durkheim’s larger goal was to demonstrate the power of sociology as a lens shaping research questions, methods, and the subsequent explanations of social behavior. It was not just about suicide, but suicide served as a perfect example for many reasons. The community paradox, however, constrained his analysis and framing and set the rules of the game for studying suicide (or any behavior approached with an explicit or implicit Durkheimian methodological logic). First, suicide is always a social pathology, but it is only in modernity that we can learn something interesting from it. Durkheim validates this decision by arbitrarily and glibly declaring fatalistic suicide a type interesting only to historians and altruistic suicide a relic of underindividuated “primitive” societies (for criticism of these decisions, see this and that). Imagine a general theory of anything that denies two of its four categories as relevant to inquiry! (This has had real effects on the contemporary sociology of suicide insofar as these two types have mostly eluded empirical investigation.)

It also set up an untenable situation that speaks to a larger sociological dilemma (which I’ll unpack shortly): how do we ascertain what enough integration and regulation are? Too much or too little of either is unhealthy, but where is the “just right” porridge? This slippage weakens the explanatory power of his two great suicide-causing dimensions. What’s more, it reinforces the mythology that cities are bad or unnatural. Yet Durkheim offers no alternative, as he sees premodernity as a romanticized Garden of Eden (more protective, yay!) but ultimately a Panopticon of horror (no individuality, boo!). He even names one of his types of suicide (altruistic) after the fact that these societies have the “right” to demand individuals sacrifice themselves. The alternative, for Durkheim, is to embrace the liberal institutions of education and democracy, but at the cost of atrophying social ties and, where capitalism gets a foothold, moral relativism and dysregulation. Durkheim passes no real explicit judgment on the former, but his chapter on anomie offers a scathing critique of the ethos of capitalist urbanity.

The bigger issue here is that Durkheim makes his readers think suicide is a modern malaise and not something endemic to human societies (which it is); therefore, it sits just outside the reach of a comparative, cross-cultural sociology. Or maybe he makes us feel that contemporary suicides are more problematic because contemporary society is not natural. He defines the parameters of sociology as modernity and the study of modern problems in the context of modernity, just as most of the classical theorists do by holding up the social problems most salient to them as exemplars.

Consequently, the old, which encompasses millennia of human evolution, is collapsed into a one-dimensional foil easily compared to today, because today is so radically different (a sleight of hand employed by postmodernists and post-everythingists, too). Even Weber, who is perhaps the most judicious and historically adept of the classical sociologists, draws his own lines (pre-Reformation/post-Reformation) – though, in his defense, he is better at drawing on social continuities, even if his empirics are sometimes questionable.

The last thing I will say requires stepping back from the details and thinking about what this paradox says about sociologists and their view of the world. Philosophy has long wondered what the “good society” is, and this tradition greatly influenced the founders of the discipline; how could it not, as sociology, like most social sciences, grew out of philosophy? The question still animates the discipline’s zealous commitment to social problems and has found purchase in an entire wing broadly defined as “critical.” And yet, the community paradox is infused throughout our DNA. What does that say about us?

The past, if we squint hard enough, was once golden, but also abhorrent in too many ways to count. It is used as both a blunt instrument and a surgical scalpel to critique modernity. The present is unnatural, filled with inequities, and prologue to the future. If these are our options – a golden age to which we objectively cannot return (nor would we really want to) and a corrupt modern world – then what is a good society? Clearly, the best we can offer are utopias, but these rarely reflect the actual generalizable qualities of humans and human societies!

I offer no answers, just questions. Maybe there is no good society? Maybe there just are societies?
