The Age of Biological or Social Contamination?

For nearly three years, most of the world has been experiencing the ebb and flow of a pandemic. I’ve written previously about sociology and disease, the panic and grief social distancing caused, and, more recently, the reasons for the US’s acute and (relatively) peculiar response to COVID-19. In that last piece, a topic that has become increasingly important to me took root: social contamination. I have been thinking a lot about this idea, as the notion of pollution and purification has been something of a theme in my thoughts on religious evolution. But, the implications of a microsociology of contamination, pollution, and, perhaps, cleansing have become urgent in my own sociological understanding of how we act, think, and feel. Before digging deep into this idea, a short but, I think, interesting diversion into the self is warranted.

A Moral Self

Sociology has long rejected the idea of a purely utilitarian actor who is driven primarily by maximizing their pleasure and minimizing their pain. At the very least, rational action is severely constrained by biotic/ecological factors, broader social structural features like inequality, cultural beliefs, and the idiosyncrasies of personality developed within these different contextual layers. But, sociology tends to have two ways of handling the “problem” of a social self. First, the symbolic interactionist (SI) perspective (Mead 1934; Blumer 1962) imagines the self as fully developed when a person can (1) imagine what another person would do if confronted by the same situational cues and (2) have an internal conversation with themselves, a real/imagined other, or a generalized community. That means that one’s self is built from the acquisition of language and other expressive idioms that allow them to understand what others will think and feel about them, to anticipate future behavior, and to be able to talk to themselves in the shower or their car without opening their mouth. To escape the critique of oversocialization (Wrong 1962), SI argues we are all fully capable of creative action, but with one condition: the pragmatic nature of SI means that we are habitual creatures until faced with a problem or obstacle, and only then do we become conscious, deliberate actors (Gross 2009). The result, in many cases, is a desireless person who does not seem to have wishes, wants, or likes, a portrait that runs afoul of the basic science behind motivation (Kringelbach and Berridge 2017).

The alternative comes, partly, from Bourdieu’s (1977) theory of practice. The self is structured within a field that socializes or cultivates a set of dispositions reflective of classes of people who share a similar field position (and, consequently, similar economic and cultural “capital”). Like muscle memory, this enculturated self acts habitually (Bourdieu called this constellation of dispositions a habitus), but these habits reflect internalized class-based strategies (unevenly distributed and acquired), and thus the problem of rational choice is avoided by making people unaware of their habitual reproduction of social structure. We are still self-interested, within reason, and we are still strategic (though limited by the type and number of strategies our structural position offers), but instead of calculating, we are often just oversocialized humans. Cultural sociologists have tried to resolve the problem of creativity either by adopting a cognitive dual-process model that recognizes, like SI, that deliberate action happens, though probably for similar reasons (Vaisey 2009), or by arguing that “unsettled times” provide the sort of structural and cultural autonomy necessary for innovation (Swidler 1986).

Again, desire seems to be sidelined – though our strategic dispositions allow for the pursuit of the preferences or tastes our habitus provides us. And yet, I cannot help but ask: are sociologists themselves joyless and desireless? Or do we just think we are so controlled by neo-liberal forces that desire is disingenuous or reflects the capitalist superstructure’s imposition of false consciousness? Or maybe we just think this is the domain of psychological shyster-ism and the self-help industrial complex? In any case, it does not actually square with empirical reality and builds a significant chasm between what most people do all day long and what we say they do all day long. (But, this digression is best left for a long essay on another day.)

A Morally, Affectually Grounded Homo Sapiens

Once more, I beg the reader’s patience, as I need a second diversion before reaching our destination on contamination. The brain clearly evolved in the context of intense selection pressures for cooperation in big game hunting that required bands of humans to tolerate each other (Bowles and Gintis 2011). Some key characteristics of this evolved brain are: the ability to discern at an early age pro-social vs. anti-social adults (Decety and Howard 2011); the ability to calculate fairness in exchanges that encouraged food sharing, reciprocity, delayed gift exchange, and so forth (Tomasello and Vaish 2013); the ability to take the role of others to ascertain motive and intention, keep score of who helps most, and track who we owe and are owed something (Tomasello 2020); and, finally, the ability to keep track of our own and others’ reputations (Boehm 2012). Humans bucked the dominance hierarchies of their closest ape cousins because of these capacities (Boehm 1999), favoring more level social organization that fit better with a model of shared hunting and the institutionalization of enduring cultural and structural institutions like kinship (Abrutyn and Turner 2022). These innovations did not remove the animal from our human nature but expanded the palette of objects to which we could affectually attach ourselves and, therefore, experience acute pain when we imagined losing them (Panksepp 1998), felt rejected or alienated from them (Abrutyn 2023), or felt them threatened. This evolutionary perspective sees humans as desirous creatures, craving objects with which we develop affectual attachments. Some are innate, like caregivers, potential sexual mates, and foods we prefer; others are built up from repeated exposure and cultural or structural patterns. But, we have things we want and like, and we pursue them, even if some are collectively desired (e.g., with meat from the big game hunt, individual desire obviously aligned closely with collective desire). But, these bigger points are best left for another essay, because the takeaway is one that sociologists sort of assume, but which is empirically grounded: one of the objects we grew attached to was the rules of the group, an evolved capacity that complements the aforementioned list of evolved neural capacities.

The consequence was an ape with “well-internalized moral values and rules [that] slow us down sufficiently that we are able, to a considerable extent, to pick and choose which behaviors we care to exhibit before our peers” (Boehm 2012:30). That is, while SI and practice theory are busy thinking about habitual action, humans evolved to manage their impressions because the first impression is often the last impression. Thus, life is mostly filled with moral moments even if sociologists are wont to reduce the stakes of a given encounter because nothing serious appears to happen. Yet, in reflecting on the mundane commitment most people show to waiting in line at a store or deli, Rawls (1987) argues that this behavior is one of the most moral things people can do! Goffman, too, argued that transgressing minor, seemingly inconsequential rules signaled one is ‘that type of person’ and could call into question their entire reputation. Simply put, reputation and its various related elements, like (self-)respect, esteem, honor, status, and so forth, rest at the core of an alternative view of self. Is it a moral self? Absolutely. Weber purposely constructed the idea of status and status groups as the converse of Marx’s notion of class. The former was his non-economic, morally-tinged form of social organization and action, while the latter remained embedded in economic categories of organization and action. While Bourdieu tried to marry the two, one cannot help but get the feeling that wealth and material power lie at the heart of his model of habitus and field, whereas Weber’s model was better emulated by theorists like Veblen or Simmel.

If reputation-making and reputation-taking are central to a different version of self, and that version is moral, then it comes as no surprise that reputation is inextricably tied to moral emotions like pride, shame, and so forth. When we make claims about the respect we are owed or show the proper deference to others, certain emotions signal we are taking reputation correctly (Kemper 2006). When we try to enhance our reputation – or make reputation – through various strategies that have their own risks, we also experience these emotions as well as weaponize them in some cases (Clark 1990). When we are degraded, through public ritual (Garfinkel 1956) or in private face-to-face encounters (Goffman 1967), we experience the loss of reputation through these emotions. These emotions, thus, are signals of our reputational value, but they are also motivating forces to do the work necessary to create, sustain, enhance, and protect reputation. Pride, for instance, is universal to humans and is a unique, preconscious affectual force driving us to care what others think and to strive to do our best (Tracy 2016). Shame, likewise, is an evolved universal human response to the severe judgment of others (Boehm 2012) and the universal feeling of being small, wanting to hide, feeling mortified, and so on (Scheff 1988). Moral emotions, it seems, undergird our desires as much as more basic emotions like anger or fear. And, these moral emotions are eminently wrapped up in our biological capacities to become affectually attached to various types of objects.

This version of self is desirous. It need not be cognitively deliberate, which means it does not necessarily violate the conventional sociological wisdom that much of what we do is unconscious or more automatic. But, where those theories of self fall short is in their neglect of affect. Affect is often preconscious, and central to all of the ingredients of cognition like memory, comparison, self, and so forth (Frijda 2017). It coordinates (Damasio 1996), and often controls and commands, learning, behavioral response, perception, and personality (Davis and Montag 2019). And yet, people feel affect; it is embodied; and, thus, the idea of an unconscious, unaware creature just doing routines, or one simply pursuing pre-programmed interests, falls flat. Meanwhile, this more Durkheimian/Goffmanian vision of self fits the current affective and cognitive neuroscience, which rejects the notion of habit: most action is motivated action even when it is not conscious, because all action requires intention, guidance, and control (Miller Tate 2019), and because we can sense or feel the “right” behavior. Additionally, the internalized programs themselves make us feel like what we expect to happen is happening, allow us to adopt increasingly flexible yet repeatable lines of action, and are built up from affect like shame or pride tagging past events, experiences, people, objects, and the like as supremely relevant, rewarding, salient, valued, and so on.

Ok, now we can move back to the theme of this essay: contamination. Clearly, we desire not being seen as “that type of person,” as we experience a lot of positive or negative embodied affect in everyday encounters. What is “a” or “the” fundamental thing we must do to protect our reputation? What happens when one violates the ceremonial rules and casts their own credibility, others’ credibility, or the situation’s credibility into doubt? What happens when a person repeatedly violates these rules? What happens when one’s violations are so egregious (relative to the usual violations) that they escape conventional categorization and, thereby, existing mechanisms of social control and sanction? These questions, and others like them, guide the remainder of my thoughts in hopes of establishing a beachhead for a sociology of contamination.

Contamination

In the aforementioned paper on the pandemic (linked above), it had become obvious to me that the pandemic was interesting for two reasons. First, there is the element of biological contamination. Remember when we were all quarantined in those early days and, if we were allowed to go out and walk, we would avoid being near people (at all costs)? The anxiety of catching COVID-19, even after we learned how it was transmitted, was enough to make people sanitize their grocery bags outside, carry anti-bacterial gel, and contemplate a life of perpetual mask-wearing and 6 ft. social distancing. But, over time, there was also an intense fear of social contamination that had less to do with “catching” the virus.

For progressive-minded people, there were two ways contamination lurked in the background. First, those who wantonly disobeyed public health recommendations signaled not merely outsider status, but also heightened risk and danger. They needlessly exposed people to biological risk, raised questions about the shared grounds of reality that so many people take for granted in non-crisis times, and challenged the constitutive nature of ceremonial rules. Second, as unreasonable and subjective as it was, the threat of anti-masking, anti-vaxxing public outbursts became a source of social contamination in itself; this was nowhere more apparent than in the sudden rise in public danger.

For the political right, the imposition of government-mandated rules and the further erosion of imagined or longed-after local political, economic, and cultural autonomy represented a different, yet no less social, form of contamination. For everybody, I think, the loss in the U.S. was the imaginary veil we had built up in public places or workspaces that allowed us to ignore the signs that people we knew well were ‘that type of person’. It smoothed everyday interactions, which was nice and also productive for workplace goals. But, it was a thin veneer that was suddenly broken by the seemingly ever-present threat of social contamination. Who did we think we knew that it turned out we didn’t actually know?

Contamination means exposure and, especially, contact with dirty, polluting things (Douglas 1966). As I write “imagine reaching into the toilet and squeezing your feces,” it elicits an automatic affectual response, which we would label disgust. Though this is biological in nature, contamination extends to the social as well. Weber (1978:305ff.) was fascinated with status group closure, arguing that all closure implies some degree of control over interpersonal relationships, exchanges, and so forth. At the extreme end are caste or caste-like systems, where physical contact with some negatively privileged groups elicits a similar automatic response to being forced to lick a dirty toilet bowl. Just as charisma could spread from those possessing it to those who get close to the charismatic possessor, the contaminative elements of those who are defiled, discredited, or polluting can spread to us. Indeed, Goffman argued “that individuals can be relied upon to keep away from situations in which they might be contaminated by another or contaminate [them]” (1971:30). In the extreme, this idea meant some classes of people (those with certain stigma markers or those who are institutionalized) presented dangers to the rest of us. Not just run-of-the-mill danger, but the type of danger that would destroy or mortify our own reputation and, therefore, the sanctity of the moral, affectual self.

However, we need not rely on extreme cases to illustrate how powerful contamination is for shaping behavior. At the crux of the issue is our reputation – and questions like whose opinion matters to us, what consequences a sullied or questionable reputation has, and what rules might transform us from ‘this type of person’ to ‘that type of person.’ At a party, do you go through the host’s bedroom closet, touching personal, private objects? How would you feel if you left said party, remembered you had forgotten your phone on the couch, and, upon returning, walked in on the host and her best friend impugning the integrity of other departed guests? If you got caught in the first act or caught someone in the second act, would the term “mortified” suffice? Admittedly, many of the most stringent rules of etiquette designed to elicit self-shaming and self-regulation have weakened in societies rife with the sort of division that leads to different forms of social contamination, as outlined above. What were once “ought to” rules are now mostly “shoulds” (Abrutyn and Carter 2015), and thus some aspects of contamination vary heavily in time and space. Hence, you shouldn’t go through a medicine cabinet in a host’s house, but with social media, you can deflect some of that shame into a sort of voyeuristic virtual space that escapes some of the punishment that would have happened in the not-too-distant past.

And yet, contamination is real. Many people still avoid homeless people’s encampments, move to the other side of the road when approached, and lock their doors when stopped at a red light near a panhandler in the median. Visibly disabled people still elicit a cocktail of difficult emotions for many people. While de-stigmatizing mental illness has become a noble cause for many, coming face-to-face with folks who violate the most taken-for-granted conventions remains scary. These, of course, are extreme examples. But, for many conservative Americans, having a son or daughter who identifies with one or more categoric distinctions reviled by the modern Republican political movement – think “liberal,” gay, trans, non-binary, or whatever – can be contaminative at Thanksgiving dinner. And, worse, a source of severe shame and disgust in their circle of friends, from whom they hide this discrediting source of pollution. The left is not any better: I imagine many readers who have children would be mortified if their child brought home a caricature of the things they consider to be beyond the bounds of the moral order.

In the end, the idea of contamination is a powerful one, because it implies a preconscious disgust response to noxious stimuli and intense prohibitions regarding contact. The proscriptions surrounding exogamy often, though not always, rest on stigma theories about the risk, danger, and defilement of marrying out of your group. The deeply held beliefs about white people marrying black people (still held, unfortunately, by many today) or Christians marrying Jews (also, sadly, held by many today) are not new, just more diffuse because of the size, scale, and complexity of modernity.

The Punch Line

The fear of contamination and the ensuing feelings of shame, stigmatization, humiliation, and mortification when one becomes the threat they fear others pose are likely biological. What makes us contaminated may vary in time and space, but the response surely does not. And thus, humans are not likely to overcome this anytime soon.

The downsides are clear. Some behaviors will always be considered far out-of-bounds and their transgressors a great danger. The extremes, of course, are obvious, but so too are things like muttering to oneself in public, smelling of bodily fluids, and so forth. We may give the homeless new labels (“the unhoused”), but this likely makes our conscience feel better while producing little palpable real-world change. People will always be opposed to things they deem unhygienic, even if what that means in practice varies in time and space. From a socially constructed point of view, there are always “backstages” and, thus, in addition to the sorts of bio-social contaminants some people appear as, there are deeply conventional forms. Bedrooms and certain bathrooms in houses are always “preserves” of self. Who enters and what they are entitled to do is always restricted. But, the term “purity test,” which has become commonplace for referring to the sorts of hurdles centrists face in political parties today, reminds us that almost anything can become a line in the sand. Does it reach the level of disgust? Is the person a pollutant in the same sense as many of the examples provided above? It would be interesting to try to measure this, but the level of anger and fear displayed by many folks who consume Fox News, and have for some time (Rotolo 2022), indicates that people who do not display all the proper signage of membership may be viewed as disgusting, dangerous risks to the moral order.

The silver lining in all of this is that humans come off as moral, emotionally-charged creatures. Like our ape cousins, we are rational in so far as we are goal-oriented planners, but these goals and plans are only as good as our ability to coordinate our affectual responses and cognitive interpretation of the situation. When objects are so taboo that they elicit disgust and the belief that touching them will make one as disgusting as the contaminating object, then the possibilities for strong in-/out-group rules can form. But, it also suggests that humans are compelled to design the types of rules that become essential to their own conception of self and desire not to be a polluted person. Figuring out how to expand the scope of these rules to cover all, or maybe most, people in a community, instead of placing some in categories of good, honorable, and clean and others in categories of bad, stigmatized, and dirty, is the trick. But, the capacity to do so is already there.


Cultural Sociology I: Meaning Making and the Psychological Industrial-Complex

In a previous post, I made the argument that sociology needs to go beyond just incorporating culture into a sociology of suicide. It needs a cultural theory altogether. But, what would that look like? What would be its framework? One obvious issue that it would need to deal with is meaning, meaning making, and who is “responsible” for meaning making. Sociology has largely ignored the problem of meaning in studying suicide, choosing instead to pursue the population-level study of the distribution of suicide, while suicidology, until rather recently, cared very little about meaning, culture, or anything socially constructed, for that matter. Meaning, however, matters. For instance, consider the political and social implications behind the move from the term “committed” suicide (Smith 1983) to “completed” or “died by” suicide. Semantics, to be sure, but with real implications for intention, motive, causal attribution, and stigma.

Not surprisingly, the problematization of meaning was at the heart of the earliest and most interesting critiques of the Durkheimian macro-structural approach (Douglas 1968; Atkinson 1978). The idea was that Durkheim’s approach depended on the veracity of official statistics about suicide. Otherwise, how could we trust the significance of comparisons between and within countries’ suicide rates or causal explanations about why rates in one place or among one group (e.g., religious categories) changed over time? The issue, which is a very legitimate one, is that those doing the documentation of suicide – primarily, but not limited to, medical examiners and coroners – are not always dealing with clear-cut cases of suicide. They have to do inductive and deductive work around so-called “suspicious” deaths (Timmermans 2006). Adding to this less-than-objective determinative process is the fact that medical examiners serve multiple masters, including the medical sphere that licensed them, the legal sphere which may call them in to testify as expert witnesses, and the domestic sphere which, in many cases, has plenty of incentive to have a suspicious death declared anything but a suicide (e.g., insurance claims, stigma and shame). Taken together, there are reasons to cast doubt on – or at least reasons to scrutinize – how official statistics are constructed and whether they measure what they purport to measure. [Incidentally, whether official statistics do what they aim to do remains an open question, but it is worth noting that Pescosolido and Mendelson (1986) convincingly showed that the social causes identified by Durkheim stood up against these claims of explicit and implicit bias.]

Setting this aside, what would a sociology not committed to the study of the social distribution of suicide look like? In future essays, I will try to tackle the diverse forms a cultural turn in the sociology of suicide might take, but for now there is the question of who makes the beliefs about suicide that Americans, and perhaps most Western nations, adopt. There is no point in stretching this argument to the point of determinism, as a cultural sociology of suicide would care about the diversity of meanings in play as well. But, taking up the thread in the earliest critiques, it remains a central task to ask who makes the meanings that the public, media, and even scientific communities take for granted. For Douglas, Atkinson, and others, it was/is medical examiners/coroners and their consanguines. But, what about the psychiatric/psychological “industrial-complex?” That is to say, what about the people who are in the business of defining/diagnosing/treating suicide (Abrutyn and Mueller 2021)? Their centrality, which from a sociology of professions and occupations is surprisingly overlooked, makes them a key faction in the construction and perpetuation of meaning. So much so that an entire counter-movement in the interstitial space between psychology, social work, and anthropology has sprung up against them in the guise of critical suicidology (White et al. 2015). Hence, the professionalization process in the 1970s and 80s should be of paramount concern for a sociology of suicide that builds off of culture and its principal mechanism: meaning.

The Briefest History of the Diagnosticians

As an important caveat, one of the fundamental strengths of historical and organizational sociologies is that they focus on the collective and not the individual. It is important, then, to place the following discussion in its proper context: psychologists and psychiatrists, then and now, were not nefariously plotting to scam people or to cause harm – in fact, in my experience the vast majority are committed to the opposite. Many things that happen, when refracted through historical lenses, show that changes – intentional or not – occur when groups of individuals, sometimes as a single group in a movement or other times in the aggregate, respond to social pressures beyond their control. They feel their material and ideal interests are threatened, whether the threat is real or not, and respond in ways that aim to protect what is theirs. In particular, when a class of people, like therapists or clinicians, feel their livelihood is being squeezed, they will, like any occupational group, respond in ways meant to prevent losing their jobs/careers, their privilege and status, and/or their power.

In the 1970s, several intersecting historical forces created the conditions for professionalization (Horwitz 2002). The government and various community-level organizations began pushing a different mental health agenda, shifting from the stigmatizing institutional model to the disease/medical model. The world was changing, as asylums/institutions were expensive and had come under intense scrutiny by social scientists (Goffman 1961; Scheff 1966) and pop cultural figures (Kesey 1962 – One Flew Over the Cuckoo’s Nest) regarding their inhumane treatment of patients. A massive economic restructuring drove insurance companies to begin changing their policies toward mental health and psychiatry, creating a need for diagnostic checklists that could suffice for bureaucratic processing. Gone were the days a therapist could simply say a patient needed “X”; rather, they needed to check boxes for processing the insurance claim. Amidst changes in insurance and government regulation, pharmaceutical companies saw opportunity. Since the 1950s, they had pushed drugs similar to today’s SSRIs, but to no avail (Herzberg 2009). Psychoanalysis had no need for drug treatment so long as therapy for anxiety/neuroses continued to be covered by insurance or middle- and upper-class clients. Finally, the explosive growth of public higher education in the early 1960s did what it did to many established social sciences: it brought a ton of new approaches driven by cadres of newly minted graduates who had grown to challenge hegemonic holds across disciplines as they pursued their own careers. Psychoanalysis’ days were numbered as various forms of cognitive science proliferated and pushed new ideas about the etiology, diagnosis, and treatment of mental disorders.

Like the shift to mental illness-as-disease, the claim to professionalize psychiatry was built around the medical model doctors had used less than a century prior: their knowledge and practices would depend on the scientific method and evidence-based research (Conrad and Slodden 2013). A committee was tasked with reviewing the research on mental illness, discerning what hard proof there was for myriad disorders. Following a medical model, the goal was to include in the DSM-III only discrete diseases, as determined by their having discrete etiologies, diagnoses, and prognoses (Horwitz 2002). But, as with any community of scientific knowledge producers, this was not a task achieved in a vacuum safe from politics and competing interests (Merton 1979).

Psychoanalysts, in particular, felt the acute threat this project posed to their livelihood. Neuroses and most psychoanalytic disorders were not founded in anything approaching the rigor suggested by the experimental or clinical trials found in medicine (Horwitz and Wakefield 2007). Psychoanalytic knowledge was built up from individual patient histories, generalized with or without generalizable evidence. The fear was not unfounded: if the DSM-III became the official “bible” of psychology, clients would eventually prefer those who adhered to it, while the APA could, theoretically, become like the AMA, with the power to certify/decertify practitioners. So, they fought back, as did many other clinicians fearing their skill sets would be less valuable in the DSM era. Ultimately, the final product, published in 1980, looked nothing like the vision the committee was tasked with realizing.

For instance, far too many disorders included in the DSM were not discrete in their etiology, diagnosis, or prognosis. Decades of research, both independent of and sponsored by drug companies, could not find an organic source of, say, depression, so this criterion was downplayed. Of course, this sleight of hand was a major indictment of the claim to parallel medical doctors’ status, in part because the same treatments became common for a diverse array of disorders, with little understanding of why they worked in some cases and why many patients could never find a single treatment that worked (Karp 2016). Nonetheless, the writing and publishing of expert knowledge is powerful (Goody 1986). And drug treatments for medical disorders had become so normalized that the DSM, together with the presumed efficacy of drugs (a much cheaper alternative for insurance companies than long-term therapy), gave the nascent profession legitimacy. The claim that serotonin levels were directly related to mental disorders (and therefore supposedly treatable with SSRIs) became widely accepted across the field, mass media, pop culture, and, eventually, common-sense claims-making, despite the empirical evidence to the contrary (Moncrieff et al. 2022).

With the backing of “science,” the book became the source of routinized training, knowledge claims, practical repertoires, and meaning-making. Psychologists were like doctors, just for the mind. It made no difference that several disorders were thoroughly political and social. Homosexuality, for example, was a disorder only insofar as a community declared it as such. (Removing it, incidentally, required a concerted and sustained political movement by lay activists and radical psychologists alike (Bayer 1987).) Indeed, by the time Prozac Nation was published and made into a popular movie, the psychological industrial-complex (psychological science, insurance companies, community organizations preferring the disease model over the stigmatizing institutionalization model, and drug companies) was a taken-for-granted force to be reckoned with (Pearlin et al. 2007). Even with the recent meta-study debunking the organic etiological claims of depression (Moncrieff et al. 2022), the damage has been done. “They” are the trusted source for anything labeled mental health-related. Hence their dominance in suicidology since its inception, and their hegemonic hold over the causal explanations we accept as either real – psychache (Shneidman 1995), loneliness/hopelessness (Joiner 2005; Klonsky and May 2015), and escape from psychic pain (Baumeister 1990) – or suppose are real – e.g., mental illness causes suicide (Mueller et al. 2021).

Meaning Makers

When Douglas (1968) and Atkinson (1978) wrote their treatises, they had no clue that the DSM-III would emerge or that pop culture, lay society, and the scientific community would come to revere the psychological/medical approach to mental health. This approach naturally spread into the science of suicide. And, like doctors in the 1950s and 1960s (Starr 1982), the notion that suicide is caused by intrapersonal forces, be they mental illness, psychache, or any number of cognitive appraisals like hopelessness and loneliness, was solidified as the result of psychology’s ascent to dominance (Cavanaugh et al. 2003). This is not to say that Durkheim’s work or sociology had managed to diffuse their own empirically grounded claims that suicide is caused by social forces. Indeed, to the contrary, Durkheim’s work has not made much of a dent because it has virtually nothing to do with why people die by suicide and everything to do with demonstrating that social factors constrain or facilitate suicidality among certain groups or classes of people. Important, indeed, but peripheral to the business of explaining suicide as a social behavior. Of course, it is questionable how well the professionalized version of psychiatry has done, given rates have grown over the last two decades despite billions of dollars spent on psychological research.

Most obviously related to the proclivities of a profession rooted in a diagnostic manual – and its pitfalls – are the efforts to catalogue individual risk factors. Over 150 risk factors have been identified as correlated with suicide, rendering these factors useless in explaining or predicting suicide.

Predictably, the psychological industrial-complex tends not to offer social explanations like those typically reflected in artwork (Stack and Bowman 2012), nor does it consult the insights sociology might lend (Mueller et al. 2021). On the one hand, the vast majority of movies made in the 1900s involving suicide featured social causes most prominently: relationship strains, unrequited love, status disruptions, and so forth. While this may feel anecdotal, stories resonate because they make sense to the viewer, not because they are implausible. On the other hand, the very notion of what a mental disorder is (Scheff 1966) and, more importantly, what it means – e.g., is it good or bad, a sign of stigma or one of creativity – remains socially constructed (Horwitz 2002; Pearlin et al. 2007). One need only consider the disappearance of once popular diagnoses, like hysteria (Micale 2019), homosexuality (Bayer 1987), or anxiety (Horwitz 2010) – which, incidentally, were largely replaced by the ubiquitous depression diagnosis – to understand how non-organic most mental disorders likely are (besides, perhaps, schizophrenia and bipolar disorders). But, their expert status has cemented, in the minds of the public, media, and scientific community, the idea that suicide, like mental health, is caused by intrapersonal forces and not interpersonal ones.

Unfortunately, the question of meaning-making and meaning-makers is beyond the scope of a short blog post, but I think it serves the larger goal: a cultural sociology of suicide must begin by examining and revealing how psychologists have built up the meanings of suicide in the U.S. and throughout the West, how and why alternative explanations and meanings have emerged and persisted or failed to, and how their science is translated into public-facing knowledge. What might this look like?

For one thing, I am unaware of studies that actually interview and/or observe, ethnographically, psychologists and psychiatrists. What do we know? How do they process clients? How do they think about suicide, mental health, and the use of diagnostics? Anticipating other posts, it would also be prudent to think creatively about how to study just how impactful psychological beliefs about suicide are for those who have completed, attempted, or are thinking about suicide. Do they frame and understand their feelings, attitudes, and actions within a psychological model? How are these beliefs distributed across time and space, and what sorts of factors cause them to expand or contract? And, how do rival beliefs succeed or fail against the dominant hold psychological beliefs presumably have over the majority of Americans?

Admittedly, these questions are not focused on suicide, per se, or on victims, survivors, or prevention science. Instead, they are focused on cultural production and dissemination. To me, that is one good starting point for introducing a cultural theory to suicidology, because it reveals the processes by which suicidology became suicidology, by which its beliefs are constructed and distributed, and by which cultural ideas circulate, reach a peak, and contract. From there, we can begin asking about the mechanisms by which these beliefs become available, accessible, and applicable to people.


A Cultural Theory of Suicide?

Sociology has famously studied suicide using Durkheim’s classic structural framework. For the uninitiated or for those needing a refresher, what that means – or at least the common interpretation of what that means – is that (a) suicide rates vary based on the structure of social relationships and (b) the structure of social relationships varies according to a collective’s degree of integration or regulation. Plenty of people, including myself, have plumbed the depths of what this means, so I won’t waste time on that (for a recent review, see Mueller et al. 2021). I also won’t bother critiquing this approach without necessarily leaving Durkheim behind, as this is well-covered terrain as well (some of my favorites, in addition to the Mueller et al. piece: Phillips 1974; Johnson 1965; Nolan et al. 2010; Abrutyn and Mueller 2014, 2018). That said, what Durkheim doesn’t do – and, in all fairness, he was not interested in doing – is ask or answer why people choose to die by suicide. This silence is deafening.

By Stack and Bowman’s (2011) count, sociology has contributed the second-fewest published studies on suicide since 1980, and the gap between us and the number one discipline is massive (~180 papers to 9000+). We are behind disciplines like molecular biology and law, the former of which does not stand out immediately as a typical field of suicide inquiry. How is it that the 16th most assigned reading in the discipline (according to the Open Syllabus Project) has not revved up the sociological imagination of generations of sociologists?!? Part of the problem is sociology’s obsessive gaze backward at some halcyon days of classical yore (which I have commented on here, here, and here). Durkheim’s theory is hermetically sealed, while his theoretical frame is often taught as fact; case closed. Suicide consists of four types and is caused by too much/too little integration/regulation. Sure. Fine.

But, we also know suicide – or the meaning of suicide – varies tremendously across time and space (Barbagli 2015; Baechler 1979; Kitanaka 2012). As sociologists, we know meanings matter in so far as humans make sense of their feelings, thoughts, and behaviors (as well as real, imagined, and generalized others’ feelings, thoughts, and behaviors). We know that humans are planners, and thus orient themselves to social objects in the course of their planning, while these social objects – physical, social, or ideational – take on meaning through interaction with others. And, thus, we know suicide as a social object varies in meaning according to the cultural beliefs of the time and place. Exceedingly high rates of suicide in a region of south India are interpreted and understood as caused by massive strains between exceedingly high material aspirations and the structural constraints preventing many from realizing them (Chua 2014). Meanwhile, unfathomably high rates of youth suicide are explained by respondents as caused by historical cultural genocide leading to a search for a way to “belong,” with suicide presenting one such pathway to belonging (Niezen 2009). The lesson here is that suicidal behavior, like any other social behavior, is performed and, thereby, signals subjective feelings and thoughts in symbolic form that the attempter or decedent presumably wished to express to some intended audience.

Of course, the audience may misinterpret the meanings, applying their own well-worn schema to the decedent’s performance. Likewise, unintended audiences are free to make sense of the suicide as they see fit. Nevertheless, it is a symbolic act that demands meaning-making and, thus, it is saturated in culture. And yes, culture is a sticky concept with myriad meanings and contested vagaries. But, it is the primary tool through which human societies reproduce themselves, distinguish themselves from each other, and convey what feelings, thoughts, and actions are good, right, appropriate, or bad, wrong, and inappropriate.

Thus, like Becker’s (1953) marijuana user, one must become suicidal. They must acquire the meanings that transform their inner feelings and thoughts into something a suicidal person would feel and think. These meanings must be available, which they always are thanks to various modes of artistic expression that teach us about suicide, as well as the very real possibility that, through traditional and social media, we will be exposed to a celebrity’s suicide or that of a member of our community. So, the action itself is always “out there.” But, is it accessible and applicable to the person’s reality (Patterson 2014:19)? And, if it is accessible and applicable, how many people are vulnerable to those meanings? Is it just a solitary individual whose struggles mirror, or are perceived as mirroring, a media report of a suicide? Or, does it fit a pattern of folks, like Niezen found among Indigenous youth in a particular community?

These questions scratch the surface of my larger point: Durkheim’s theoretical and methodological strategy is still important, particularly in measuring wholesale changes over time and place when massive disruptions happen, like the Great Recession in 2008. But, it falls short of asking more interesting questions about suicide that sociology is well equipped to ask and answer.

The sociology of culture has become an exciting space where the types of debates and discussions concerning sociological theory’s biggest questions are occurring. It is a place of promising marriage between those studying suicide and those asking about (1) how/where do beliefs come from, (2) how/why do beliefs cluster or cohere in certain collectives and/or classes of folks, and (3) how/why/when do beliefs come to shape behaviors? Durkheim notably chose suicide for many reasons, but one central one was its extreme finality (which is also its biggest weakness, as one cannot interview a decedent to ask the most pertinent questions about why one chooses to do something). He felt that explaining suicide meant being able to explain most other negative outcomes and, implicitly, positive, desirable outcomes. The problem, as noted above, was that he didn’t care to actually ask why people do things, just what social forces impel them to do things.

Fine. But, we can do better. To do so means to heed Atkinson’s (1978) serious critique that “it is the theoretical and methodological content of Suicide which has fascinated generations of sociologists and not that phenomenon which members of a society call ‘suicide’” (p. 10). It is to ask anew what suicide is, why it occurs, what it means to society and within the framework of local cultural realities as large as communities and as small as families or even dyads, and how deciphering the meaning of suicide might offer the science behind suicide better understanding and explanation, and the science of prevention/postvention tools that allow us to intervene more efficaciously.


Sociology and the Good Society

As we spend summer thinking, not thinking, ruminating, not ruminating on classes, sociology, and the good society, I wanted to point to an oddity in the sociological ideology. An ideology is a set of beliefs about what we believe will happen and why it happens, and, as a retrospective tool, an interpretive lens for putting the past into some cohesive pattern with the present. It is not causal, nor is it scientific – though it may draw from science, be scientific in outlook, and may be causal in how it shapes a person or class of persons’ behavior. All disciplines have an ideological bent that crystallizes and saturates their students and practitioners, emanating from some mix of history, epistemics, and other contingencies like the self-selection of students. In any case, sociologists learn a bunch of stuff early on in undergraduate and graduate training that largely goes uninterrogated and is simply accepted as fact. And these tacit beliefs usually go unnoticed, like most tacit beliefs wherever cosmologies are inherited. One of these little nuggets sits at the heart of this essay and forms one of the true paradoxes of sociology. An (ideological) paradox with immense and far-reaching ramifications. I’ll call it the “community paradox,” which admittedly is a sloppy, imperfect name, but works for now.

The community paradox rests on sociology’s insistence on teaching classical theory as though all of it were theory and not just (bad) social philosophy constrained by both shaky data and predictably terrible interpretations. The basic paradox rests on a warped view of modernity founded on an incredibly tumultuous and rapidly changing period of time in European history. In short, the argument goes like this: modernity (usually code for urban, industrial, liberal [in the more classical political theory sense]) tore asunder the traditional bonds of premodern social organization. On the one hand, then, there was something lost: the ascriptive, deep kin-based social solidarity was protective, healthy, natural. On the other hand, it was delimiting, “primitive,” inherently inequitable.

The past is simultaneously a golden age to be wielded against the unnaturalness of the present while also being so different from the present that it might as well be inhabited by aliens and, therefore, does not serve as a good model for the good society. In part, this ambivalence rests on a bunch of similarly constructed binaries meant to distinguish the object of classical theory’s inquiry by throwing “modernity” into sharp relief with the past. Think gemeinschaft/gesellschaft; mechanical/organic; primary/secondary; communal/associative; premodern/modern; traditional/modern; sparse/dense; total self/partial self; personal/impersonal; specific/general. It is, of course, simpler to think in big containers, often dichotomous containers, if only because it makes pedagogy much more standardized and consumable.

It need not matter whether these are empirically true. For Marx, the arc of humanity went from some romantic horde sharing and caring to a class-based exploitative system (though, at least he saw the arc of humanity as ending well). For Durkheim, old societies were to be admired for their protective, cohesive qualities, but distrusted for the violence they do to the individual and her freedom, while modernity was, in theory, perfect, but in practice pathological. For Weber – who was much better at hiding his views on human nature – the enchanted nature of irrationally organized societies was always being impinged upon by the sociological version of the law of conservation: routinization, formalization, standardization, or rationalization was always a threat to disenchant. He, of course, seemed genuinely concerned about the inevitability of a modern hyper-bureaucratized society ending enchantment once and for all. Simmel, too, contrasted the forms of society that fostered sociality and the totality of self vis-a-vis the metropolis and its tendency to objectify culture and people, reducing them to typifications and stereotypes. I can go on and on with this list, but will not bore you further.

But, that was then, this is now, a critic may push back. I give you, as evidence of the paradox’s implicit sway, Bowling Alone and recent papers on the decline in friendship and so forth (here, for instance). Both sorts of scholarship assume that there was a time in which social cohesion was better, denser, richer, more protective. For Bowling Alone, it was the golden age of associationalism in the U.S., while other scattered scholarship is delimited by data constraints (how far back surveys go). But, the irony in these studies is that the golden age they sort of imagine was intensely critiqued by then-modern sociologists for being too conformist, too narrow in ideals, too inauthentic, too lonely. The scholars critiquing the massification of society were busy revering the 19th century for its publics and its support of free thinking and localism. So, in the end, golden ageism has been baked into the discipline, but with a careful, studied distance that only intellectuals — whose preferred, comfortable milieu is the very anonymous, cosmopolitan, culturally-rich modern urban spaces — could foment.

Is it correct that small, tight-knit communities are too stifling, too nosey, too reproductionist of all the systems of inequality sociologists bemoan (particularly patriarchy), and, shocker, too “rube-like”? And, is it correct that urban spaces denature us? That impersonal, achieved ties – replacing ascriptive categorization, hierarchy, and so forth – are the worst thing to happen to humans? Below, I’ll address both of these questions a little further, shedding some light on them, but likely will not answer them in ways that are satisfying and definitive – sorry. That said, it is worth keeping, in the background, a much more complex question: what can sociology actually tell us about the “good society”? If premodernity is littered with social organization too integrative and regulative for individuality, and modernity is too individualized, then, to borrow a fairy-tale metaphor, what is the “just right” porridge to eat or bed to sleep in? Is there a good society, or are sociologists implicitly, subtly saying we are all doomed?

The Premodern Problem, or How We Learned to Love/Hate Bucolic Life

A passage in Marcuse’s One-Dimensional Man (whose page numbers elude me as I write this) captures the gist of my argument. In discussing just how one-dimensional humans are in capitalist societies, he contrasts sex in traditional times with modern times (modern = the early 1960s). He imagines the former as sensuous, creative, spontaneous, and unencumbered by technology’s many trappings. The latter is mechanical in nature and set in unnatural locations (think the backseat of a small car – a VW Beetle, perhaps): cramped, unimaginative, rote, and objectified. Never mind the unsanitary nature of life in the 18th century and before; or the essentially free license men had; or the lack of birth control that threatened not only the spread of STDs but also pregnancies that were far more damaging for the woman and her family than for the man. Nor should we pay attention, according to Marcuse, to the increasingly liberal notions of sexuality that were unfolding in the 1960s in many corners of the West. What mattered was that the idyllic past was superior to the present in all forms; it was two- and not one-dimensional. Humans were free then, sublimated now.

Admittedly, this take is influenced by Freud far more than the classical sociologists were, but its underlying logic remained indebted to those classic binaries. Equally true, Marcuse, like Marx, was not advocating going backward, because there was something implicitly uncouth and uncivilized about the past. In the distant past were caricatures of foraging societies who lacked individuality, lived brutish lives, and were somehow less intelligent. In the more recent past, there were two models: aristocratic or peasant life. No sociologist worth their salt would ever pine for the life of the landed gentry, while peasant life was just as abhorrent as foraging life to our sensibilities.

At a theoretical level, the stakes were obvious. For Durkheim, there was a lot to love in the abstract about the comfort and support of premodern society, but it was also too overbearing, squelching the freedom the individual had presumably attained in modern times. Marx was more pragmatic: technology was a harbinger of material comforts hitherto unheard of or unseen and, thereby, should be celebrated!

But, these theoretical decisions have had consequences. In short, one might say that the golden ageism baked into the discipline gives a green light (and a set of models) for contemporary sociologists to reproduce the flaws of the founders. It is almost as though we are trained to always produce a perfect, idyllic foil for explicitly or implicitly judging the present but, ironically, never a model for the future. One might also say that this stance has major downsides for a science of societies.

For one thing, it obscures the biological, neurophysiological, psycho-social, and social continuities between so-called premodern and modern societies (a sleight of hand that adding the prefix “post” achieves, too, but for different reasons and with different consequences). It is correct to say biological evolution has not ceased to work on our brains and bodies; ~7,000 years ago, blue eyes, for instance, were virtually non-existent. But, it is also incorrect, given the overwhelming evidence we have, to suggest the earliest homo sapiens were radically different from modern humans, at least in terms of our brain’s size and shape. Finally, it is also naïve to presume hierarchy, status seeking, domination, wanton killing, rape, venture capitalism (that is, the kind of capitalism of raiding), and the like were non-existent. Humans are animals, despite our big brains and moral conscience. The evidence suggests aggressive male upstarts have always existed, going back to the last common ancestor we shared with chimps and, likely, with gorillas (Boehm 2001). Societies had to work exceptionally hard to beat back these upstarts, and thus, when conditions were ripe – e.g., real/imagined/manufactured external threats, needs for third-party adjudication, the need for centralized risk management – the fine line between hierarchicalization and the temporary centralization/consolidation of the legitimate right to make binding decisions about resource (in the broadest sense of the term) production, distribution, and consumption made social problems a constant, lurking in the background.

Put differently, we are not that different from the past. Marx and Marcuse are right to presume we are far more advanced technologically. We live in societies much larger, denser, and more complex than ever before, but just how different are we? Probably not nearly as different as we believe. Rather, the past serves as a contrasting tool, both highlighting whatever flaw the scholar sees as problematic today and remaining hermetically encased and irretrievable. This comfortable process collapses 300,000-plus years of human evolution, acting as a Rorschach test: which time and place in the longue durée is the analyst choosing as their foil against modernity? What is it they are yearning for that is lacking today? Whatever the answer may be, the one-dimensionality proposed by Marcuse is, in fact, found in sociology’s stance toward the past and not in the present.

A Portrait of an Era in its Infancy

In contrast to the natural state of premodernity, modernity was distrusted by classical theorists. Unlike premodern times, modernity was unnatural. The hardness of concrete, steel, right angles, and a quickened pace of life easily parallels the hardened social characteristics of urbanity like impersonal social ties, anonymity among denizens of the megalopolis, shrinking family networks, objectified economic relations, growing secondary (read: utilitarian) associations, and the constant surveillance of the State. We are one-dimensional, robbed of everything subjective, communal, sensuous, moral; the world is disenchanted and, though efficient and productive, dehumanizing; mutual interdependence and liberal individualism allow for free thinking and diversity, and yet are vulnerable to overspecialization, alienation, anomie, exploitation, and the like.

Durkheim so feared the disintegrative forces of modernity that he wrote a book on suicide as caused by a confluence of modern forces that severed the strong ties of the past (about which he was ambivalent in the first place!) and caused moral confusion. These harsh critiques remain as vivid as the images of smoky factories, dirty and impoverished urban spaces, and squalor in a Dickens novel. And while modernity is not only factories and cities, the classics and their descendants have mostly been engaged with urbanity, industrialism, and the like, which is perhaps why so many have rushed to move on from modernity into some imagined new stage of human life.

(BRIEF DIGRESSION: A sort of irony in the unyielding fear and Cassandra-ism of classical theory is that for all the distaste for the dangers of modernity, academics are intellectuals, and intellectuals have historically favored denser, culturally richer, more vibrant spaces. The image of salon, coffee house, or beer hall life – real or manufactured by fiction – captures the effervescing qualities of Durkheim’s moral density. But, cities have also offered protection, both in their provision of anonymity and in their relative politico-economic autonomy (at least since the Italian city-states [Rashdall 1936]) from the larger political territories in which they are nestled. Bologna, Salerno, and Paris were the first sites of universities, underscoring the pull cities had in the first place — legal scholars attracted second-born sons who were blocked by primogeniture from their inheritance and who saw few other honorable options beyond priesthood or monastic life — and their central place as engines of further growth.)

Were the classical theorists correct? Have denser populations in highly differentiated social organizational patterns dehumanized us?

Arguably, this position ignores several alternative interpretations of modernity that see life today as perhaps more closely resembling life in foraging societies. The long arc of history might be represented as a curvilinear line, with the poles representing a host of organizational elements that were in fact natural to the earliest homo sapiens and that modern societies share with them today (Maryanski and Turner 1992), albeit with modifications and some key differences.

Up until 10,000-12,000 years ago, small-scale societies were the rule and not the exception. The two basic organizational features of these small-scale societies? The abstract community and the nuclear family, which served as the basic productive unit. Not surprisingly, we share the former with gorillas and chimps, who have a very clear sense of membership in a community, while the latter is a result of selection pressures working on the other durable source of social organization: mother-child bonds. When humans settled down, extended family became increasingly important as lineage shaped property rights, inheritance, defensive measures, and collective risk management. Thus began the social cage of kinship. Fast forward to today, and we see that modernity has been an unabated assault on this cage, eroding the thick webs that ensnared people and making the primary/immediate family the basic unit of social organization for most humans.

The question, then, is whether we are happier with greater autonomy and fewer strong-tie obligations, like other apes and foraging societies, or happier in dense, complex social networks. An intriguing question, to be sure. Clearly, our big brains allow us to be flexible in the diversity of milieus we can inhabit, but we are still apes whose neurobiology has not evolved radically since homo sapiens branched off.

Another example of the curvilinear pattern: stratification and inequality. At either pole, we find much lower levels of stratification and inequality (there is no evidence, to my knowledge, of a truly communal or communist society, despite humans’ best efforts). It sounds radical to describe the current world as less stratified and inequitable than the past, but the data do not lie (Nolan and Lenski 2014; Sanderson 1999). In particular, one thing the social cage brought with it was pressure to consolidate and centralize leadership. For 300,000 years, humans worked hard to reduce domination and prevent enduring leaders from calcifying political roles. As the slow march of political differentiation accelerated some 5,000 years ago, so too did an extraordinary amount of oppression, injustice, impoverishment, and so forth.

This is not to say that inequality hasn’t sharpened over the last few decades (it has); or that poverty has been eliminated (it hasn’t); or that we couldn’t or shouldn’t be better (we can and should). It is a simple fact, though, that in the vast majority of agrarian societies the tiniest proportion of humans lived their best lives while the rest of society either serviced those people or were like the humans in the Matrix: born into the life of a battery to keep the system afloat, birth new batteries, and die once this purpose was served.

I could go on about other similarities. The rise of spirituality, syncretism, agnosticism, and secularism parallels preliterate religious life far more than the trappings of highly organized religions, for example. But, the bigger point is that the myriad social cages humans have spent millennia erecting as mechanisms of control have been demolished or reduced. Of course, a critic might respond that the amount of control and domination has increased! So, how do we square the ape trait for autonomy and independence with the sharp rise of surveillance, biopolitics, and so forth?

I would argue we are not as controlled as one might suspect, especially when compared to the idyllic small community of Marcusian imagination. A small percentage of crimes are solved; a not insignificant portion of the population cheats on their taxes, even a little bit, and never gets caught; people consume immense amounts of porn and can hide their addiction (and I am not talking just run-of-the-mill porn, but the types of sexual behavior that in a small town in the 18th century would be known through gossip and would lead to expulsion, in the case of a man, and perhaps being burned at the stake, in the case of a woman). Does the state excessively monitor and severely sanction segments of the population? Unfortunately, yes. But, the idea that we are all controlled is just another version of oversocialized sociological theory.

More broadly, every society works to monitor and sanction its people. Smaller, denser social spaces, like a chiefdom village or the idyllic American small town, are far more adept at doing this (see Sherwood Anderson’s Winesburg, Ohio, for the amount of knowledge a resident would have of most other residents). Deviance is far more difficult to pull off, especially deviance that exceeds the local definition of “garden variety.” Modernity is significantly freer than at any point in the last 10,000 years, even if our keystrokes are being recorded.

Implications

There are empirical and theoretical implications to sociology’s adherence to this paradox. Consider the following example.

Durkheim’s larger goal was to demonstrate the power of sociology as a lens shaping research questions, methods, and the subsequent explanations for social behavior. It was not just about suicide, but suicide served as a perfect example for many reasons. The community paradox, however, constrained his analysis and framing and set the rules of the game for studying suicide (or any behavior that employs an explicit or implicit Durkheimian methodological logic). First, suicide is always a social pathology, but it is only in modernity that we can learn something interesting from it. Durkheim validates this decision by arbitrarily and glibly declaring fatalistic suicide a type interesting only to historians and altruistic suicide a relic of underindividuated “primitive” societies (for criticism of these decisions, see this and that). Imagine a general theory of anything that denies two of its four categories as relevant to inquiry! (This has had real effects on the contemporary sociology of suicide insofar as those two types have mostly eluded empirical investigation.)
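For readers who want the full typology in view, the standard reconstruction of Durkheim’s scheme maps the four types onto his two dimensions (my tabulation, not a table Durkheim himself drew):

Dimension | Too Little | Too Much
Integration | Egoistic suicide | Altruistic suicide
Regulation | Anomic suicide | Fatalistic suicide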

It also set up an untenable situation that speaks to a larger sociological dilemma (which I’ll unpack shortly): how do we ascertain what enough integration and regulation are? Too much or too little are unhealthy, but where is the “just right” porridge? This slippage weakens the explanatory power of his two great suicide-causing dimensions. What’s more, it reinforces the mythology that cities are bad or unnatural. Yet Durkheim offers no alternative, as he sees premodernity as a romanticized Garden of Eden (more protective, yay!) but ultimately a Panopticon of horror (no individuality, boo!). He even names one of his types of suicide (altruistic) after the fact that such societies have the “right” to demand individuals sacrifice themselves. The alternative, for Durkheim, is to embrace the liberal institutions of education and democracy, but at the cost of atrophying social ties and, where capitalism gets a foothold, moral relativism and dysregulation. Durkheim passes no real judgment, not explicitly, on the former, but his chapter on anomie offers a scathing critique of the ethos of capitalist urbanity.

The bigger issue here is that Durkheim makes his readers think suicide is a modern malaise and not something endemic to human societies (which it is); therefore, it sits just outside the reach of a comparative, cross-cultural sociology. Or, maybe he makes us feel that contemporary suicides are more problematic because contemporary society is not natural. He defines the parameters of sociology as modernity and the study of modern problems in the context of modernity, just as most of the classical theorists do by holding up the social problems most salient to them as exemplars.

Consequently, the old, which encompasses millennia of human evolution, is collapsed into a one-dimensional foil to be easily compared to today because today is so radically different (a sleight of hand employed by postmodernists and post-everythingists, too). Even Weber, who is perhaps the most judicious and historically adept of the classical sociologists, draws his own lines (pre-Reformation/post-Reformation) – though, in his defense, he is better at drawing on social continuities, even if his empirics are sometimes questionable.

The last thing I will say requires stepping back from the details and thinking about this paradox and what it says about sociologists and their view of the world. Philosophy has long wondered what the “good society” is, and this tradition greatly influenced the founders of the discipline; how could it not, as sociology, like most social sciences, grew out of philosophy. It still preoccupies the discipline’s zealous commitment to social problems and has found purchase in an entire wing broadly defined as “critical.” And yet, the community paradox is broadly infused in our DNA. What does that say about us?

The past, if we squint hard enough, was once golden, but also abhorrent in too many ways to count. It is used as both a blunt instrument and a surgical scalpel to critique modernity. The present is unnatural, filled with inequities, and prologue to the future. If these are our options – a golden age to which we objectively cannot return (nor would we really want to) and a corrupt modern world – then what is a good society? Clearly, the best we can offer are utopias, but these rarely reflect the actual generalizable qualities of humans and human societies!

I offer no answers, just questions. Maybe there is no good society? Maybe there just are societies?

Posted in Uncategorized | Leave a comment

Fear and Loathing in the Summer of Covid

As one of Malthus’ four horsemen of human death, disease (and plague and epi/pandemics) has been a central force in human societies. Beyond the obvious illness, death, and general misery diseases bring, and despite being hidden in plain sight from humans until the 1880s, disease has been responsible for far more of human history and evolution than sociologists often realize. Consequently, sociology might benefit greatly from taking more seriously the role disease has historically played and the unique effects it has in modern, complex societies.

Disease as Evolutionary Force
How has disease been a force producing selection pressures on human evolution? First, for a significant span of time, disease gradients, or ecological zones hospitable to humans that are surrounded by invisible ecological (disease) barriers, restricted geographic movement (McNeill 1996 [1976]). Like mountains or oceans (Carneiro 1970), disease reduced the space into which a given society could expand, at least without the appropriate technology for reducing the viability of the disease (e.g., clearing jungles). Less obviously, the lack of mobility also intensified pressures for political evolution, as population growth and density increased the odds of resource scarcity, internal conflict, and the need for centralized risk management (Johnson and Earle 2000; Abrutyn and Lawrence 2010).

McNeill (1996) also underscores two forms of parasitism, macro and micro. Part analogy, part realism, he poses an interesting framework. Humans live by eating other organisms (macroparasitism), while parasites survive by using us as their host (microparasitism). The more “successful” a human society, the more disease becomes a prevalent risk. This became particularly true 12,000-10,000 years ago, when sedentary human societies became increasingly common (Fagan 2004), and it accelerated exponentially with the evolution of urban societies in southern Iraq (Mesopotamia), Egypt, China, and the Indus Valley 5,000 years ago (Adams 1966). “The result of establishing successful government is to create a vastly more formidable society vis-à-vis other human communities,” writes McNeill (1996:72-73):

[Hence,] a suitably diseased society, in which endemic forms of viral and bacterial infection continually provoke antibody formation by invading susceptible individuals unceasingly, is also vastly more formidable from an epidemiological point of view vis-à-vis simpler and healthier societies. Macroparasitism leading to the development of powerful military and political organization therefore has its counterpart in the biological defenses human populations create when exposed to microparasitism.

Success, then, meant greater precarity, which, ironically, set these societies up for greater success. What do I mean? With no value judgment in mind, societies that are bigger, have more advanced technology, and are better organized tend to conquer, colonize, and, sometimes, destroy their counterparts; and while disease may ravage a population in the short term, surviving an epidemic means immunity in the longer run. Immunity usually adds an invisible, devastating weapon that can lay waste to a neighbor, enemy, or innocent bystander in ways that make the bigger society even stronger and more likely to survive. Indeed, while we think of disease-wielding states as bad actors, McNeill reminds us that disease chains – the circuits along which disease travels – are often built upon quite unintentional, normal human activities like foreign trade and tourism/travel; points quite salient in the summer of covid. Thus, like parasites, which are organisms unintentionally seeking only to survive, human evolution sometimes proceeds accidentally and other times, as in modern biological warfare, purposefully.

The Hidden Consequences of Disease
At the nexus of evolution and political economy, there are other reasons to take seriously how diseases affect human societies. For instance, schistosomiasis, a water-borne disease that affects the exposed over their entire life course (Olivier and Nassar 1967), would have been a major problem beginning 5,000 years ago at the dawn of the Urban Revolution (Adams 1966). Because it causes debilitating lethargy, an outbreak meant potential famine as peasants grew increasingly less efficient, dramatically reducing the surplus elites could expropriate; the political elite’s survival in ancient Egypt, China, and Mesopotamia (Ruffe 1921) was thus always tenuous.

A second example comes from a provocative – if controversial – theory posited by Rodney Stark (2006) suggesting disease is intimately connected to Christianity’s spread and eventual success in Rome and Europe. Using various sources of data, Stark argued that the two plagues that struck Rome (166 CE and then 249 CE) produced pressures that favored Christians, who embraced the moral imperative to care for family, friends, and neighbors rather than see them as dangerous. The common solution to dealing with the sick was total quarantine, yet smallpox and measles (McNeill 1996) could be overcome with basic care like fresh water, rest, and monitoring. Christians, Stark argued, would have been more likely to care for their sick and to extend this care to pagan neighbors. Surviving the plague, then, could motivate conversion, either because of the affectual attachment to neighbors who helped and/or because of the supernatural belief that the Christian God proved more efficacious and helpful than the pagan alternatives.

Sociology and Disease
What makes disease outbreaks sociologically interesting, then, is that they are an external exigency that molds human societies; they are social disasters – that is, “physical, cultural, and emotional event[s] incurring social loss, often possessing a dramatic quality that damages the fabric of social life” (Vaughan 1999:292). They are, however, a fascinating sub-category of disasters. On the one hand, diseases are connected to industrial/technological innovation, but not quite like nuclear meltdowns. Rather, the intensivity and extensivity of transportation and communication technology today, particularly the former, greatly amplify the typical disease chain routes, as travel is easier, quicker, and more robust. The latter technological advance contributes by connecting humans in time and space, increasing the possibility of panic and collective trauma and, conversely, potentially greater resistance campaigns to public health mandates.
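To make the amplification point concrete, here is a minimal sketch – my own toy illustration, not a model McNeill or anyone cited here proposes – of a standard SIR (susceptible-infected-recovered) process. Raising the effective contact rate, a crude stand-in for denser and faster travel along disease chains, pulls the epidemic peak dramatically forward; all parameter values are illustrative assumptions:

```python
# Toy SIR model: a higher effective contact rate (beta), standing in crudely
# for faster/denser travel along disease chains, accelerates an outbreak.
# All parameter values are illustrative assumptions, not estimates.

def sir_peak_day(beta: float, gamma: float = 0.1, n: int = 1_000_000,
                 i0: int = 10, days: int = 730) -> int:
    """Euler-integrate SIR dynamics one day at a time; return the day infections peak."""
    s, i = n - i0, float(i0)
    peak_day, peak_i = 0, i
    for day in range(1, days + 1):
        new_infections = beta * s * i / n  # contacts between S and I that transmit
        new_recoveries = gamma * i         # infected who recover and become immune
        s -= new_infections
        i += new_infections - new_recoveries
        if i > peak_i:
            peak_day, peak_i = day, i
    return peak_day

print(sir_peak_day(beta=0.2))  # sparse contact: the peak arrives late
print(sir_peak_day(beta=0.5))  # dense, well-connected contact: the peak arrives much sooner
```

Nothing about the pathogen changes between the two runs; only the connectivity of its hosts does, which is precisely the sociological point about transportation technology.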

On the other hand, diseases are natural disasters, sharing much in common with tornadoes and earthquakes yet being qualitatively different. Specifically, disease shares more in common with Malthus’ other horsemen of death (warfare, pestilence, and famine). For one thing, the effects are totalistic, in that they pose biological, psychological, and sociological risks. Threats of debilitation and potential lethality have obvious physical consequences, as well as metaphysical pain, as actors question why and how a disease could indiscriminately affect loved ones and strangers alike. Disease also threatens social relationships, as all people – kin, kith, and strangers – are potential carriers of something invisible and harmful; something that can contaminate and pollute, as well as injure, maim, and kill.

Covid-19 has exposed just how vulnerable we are. To be sure, Trump has mismanaged and, perhaps, made significantly worse the consequences of the disease, but no country has escaped this unscathed, save for New Zealand. What is most disconcerting is that, eight months into the pandemic, the specific reasons why San Francisco and Vancouver have managed to reduce or constrain the negative outcomes while Los Angeles and Toronto have not remain murky. Indeed, mask-wearing reduces the spread, yet Canada has a lower rate of mask-wearing than the U.S. and, by most accounts, is handling the pandemic “better.”

Additionally, it has raised legitimate questions: how large and dense can a city, region, or nation get before the dangers and risks outweigh the benefits? Is the model of neo-local, geographic mobility preferable to living near extended family and other close systems of support? Are pandemics going to be common? How do we mitigate the inequitable distribution of safety, security, and risk? How do we deal with segments of the population that see risk as individualized, thus putting strangers and friends alike in harm’s way? How does the federal government in the U.S. respond to recalcitrant communities that see state intervention as anathema to local control?

These questions, and many others, remain unanswered and are worth digging into deeply, especially in the connected world we live in today. For instance, while marginalized groups are at higher risk of being exposed to and dying from Covid-19, privileged groups do not have as many options for avoiding exposure as in the past. When yellow fever struck the American South in the late 1800s, the solution for those with means was to simply move further from the urban cores near the Mississippi River. Nearly a third of Memphis’ population – mostly wealthy whites – simply picked up and left, leaving the poor and marginalized to deal with the disease (Rushing 2009). Disease chains, however, now extend further and further into what was once isolated, safe space.

While Malthus’ predictions have been severely criticized, perhaps it is prudent to wait to see the final score. Or, more practically, to begin re-imagining a world in which diseases are real forces to contend with.

Posted in Evolution, Musings on Sociological Theory | 3 Comments

Patrimony and Bureaucracy: Explaining the Age of Trump

And now for something completely different… The Daily Beast recently reported that Donald Trump’s son-in-law

Jared [Kushner] had been arguing that testing too many people, or ordering too many ventilators, would spook the markets and so we just shouldn’t do it… That advice worked far more powerfully on [Trump] than what the scientists were saying. He thinks they always exaggerate.

Never mind the veracity of this claim. Rather, reflect on the emotion it elicited. Reflect on the emotions previously elicited when reports of his sprawling influence on the executive office bubble up. I know this subject has been exhaustively discussed by pundits and social scientists alike, and at the risk of being redundant or even obvious, I look to some sociological theory to maybe offer a different take on why many of us feel the way we do.

Systems of Domination
Alongside Max Weber’s more famous “types of authority” (traditional, legal-rational, charismatic) are his less explored types of domination. Of course, most sociology students know his work on bureaucracies, which is among the most important pieces in his oeuvre. In line with his general historical argument that all social organization evolves towards greater routinization and rationalization, Weber lays out three types of domination: patriarchy, patrimony, and bureaucracy, all of which present different logics of domination and subordination.

The first two are very close cousins, differentiated only by whether or not there is an administrative staff. Over time, as the administration grows larger and the tasks more complex, a proto- and then more ideal-typical bureaucracy evolves. Though patriarchy concerns us a little, it is really patrimony and bureaucracy that stand at the center of our focus. First, however, a table highlighting some of the principal differences between the latter two political forms of domination:

Patrimonial System | Bureaucratic System
No Distinction Between Public and Private Spheres | Offices/Administration Clearly Separated from Incumbent
Subjects Exist to Support the Patrimonial Household | Administration Designed to Manage Needs of Population
Commitment Predicated on Individual’s Authority over Others | Commitment to Impersonal Purposes of Department/State
Domination Rooted in Personal Subjection | Domination Rooted in Legal Norms
Power Used Arbitrarily, w/ Competing Interests the Only Potential Barrier | Power Constrained by Custom, Norms, and Law
Appointment/Promotion Based on Personal Loyalty | Appointment/Promotion Based on Formal Rules
Competence Based on Fidelity and Loyalty | Competence Based on Technical Training
Compensation Based on Benefice and Reward | Compensation Divorced from Ruler, Based on Salary/Wage

Weber, of course, famously argued that the modern world was becoming increasingly rationalized across many different social orders. The Catholic Church, for instance, was a monument to bureaucratized religion (or a hierocracy, in Weberian terminology), which was soon followed in the mid-17th century by the construction of the nation-state and the slow rise of a civil bureaucracy. Patrimonialism, in the form of absolute sovereignty, stubbornly held on for several centuries after, but with the spread of democratic republics throughout the West, bureaucracy became the central organizing principle of state and church (Collins 1986). The bureaucratization of every sphere was accelerated by the growth of rational western law, which was both a consequence and a cause of church bureaucratization (Berman 1983). Eventually, the democratic state and the separation of its branches accelerated the growth of formal economic organizations (corporations), the bureaucratization of universities and education more generally, and so forth. In short, the world we have inherited is one of mass societies composed of millions and millions of people; and the most efficient and effective means of dealing with those masses, thus far, is bureaucracy.

Bureaucracy and its Discontents?
A less obvious point Weber makes – less obvious because most people abhor the “red tape,” snail-like pace, and dehumanizing qualities of, say, the DMV or a university’s administration – is that bureaucracies also improve the lives of most people. For instance, the first feature of bureaucracies in the table underscores the separation of incumbent from personal possession of office. We take for granted just how big a deal this is, but it prevents families from owning a title or office and passing it on from one generation to the next. Combined with formal and explicit promotion/hiring standards and the elevation of achieved status markers (e.g., a bachelor’s degree), bureaucracy offers greater mobility and equity than previous systems of domination. And though regulations can be personally stressful or encourage inertia, there are positives to a system designed to reduce the chances of arbitrary abuses of power – even if such abuses do, obviously, occur. (I realize it is in fashion, especially in the social sciences, to decry all authority and power because it is assumed nefarious and rife with malfeasance, but like the anti-vaccine crowd that struggles to understand how wretched pre-vaccine society was compared to today, I think it genuinely difficult for people to conceptualize how truly unfair and disastrous patrimonial systems of domination were/are for the vast majority of humans.)

In any case, we are all very used to the bureaucratic system of governance in the U.S. Though political opponents often point to the subversion of bureaucracy as a polemic and, indeed, sometimes these accusations are correct, the vast majority of politicians are constrained by the system. To my knowledge, no congressperson or senator has tried to sell their seat or pass it directly to their progeny. At least since the early 1900s, the physical abuse or assault of subordinates by a politician is also rare. And, while people are promoted for reasons other than their qualifications, the civil bureaucracy in the U.S. has been pretty effective at being non-partisan. Indeed, dystopian novels ranging from Brave New World and 1984 to even the popular Hunger Games books/movies demonstrate just how inured we are to bureaucratic domination: all roads of tyranny lead not only to the callous violence found in satire like Brazil, but to the brutalization and dehumanization exemplified by the Nazi or Stalinist regimes.

Trump Administration as Patrimonial Domination
Now, consider some of the points Weber makes about patrimonial domination and think about who this sounds like:

  1. The patrimonial ruler [and his acolytes] sees his authority and office as his “personal right, which he appropriates in the same way as he would any ordinary object of possession” (1978:232)
  2. “The exercise of power is oriented towards the consideration of how far master and staff can go in view of the subjects’ traditional compliance without arousing their resistance.” (227)
  3. In terms of decision-making: “There is a wide scope for actual arbitrariness and the expression of purely personal whims on the part of the ruler and the members of his administrative staff.…Patriarchalism and patrimonialism have an inherent tendency to regulate economic activity in terms of utilitarian, welfare or absolute values [thereby breaking] down the type of formal rationality which is oriented to a technical legal order.” (ibid. 239-40)
  4. “In the patrimonial state the most fundamental obligation of the subjects is the material maintenance of the ruler” (ibid. 1014)
  5. “The ruler recruits his officials in the beginning and foremost from those who are his subjects by virtue of personal dependence, for of their obedience he can be absolutely sure. [When he must recruit extrapatrimonial officials, he insists on] the same personal dependency” (ibid. 1026)
  6. “The patrimonial office lacks above all the bureaucratic separation of the “private” and the “official” sphere….Of course, each office has some substantive purpose and task, but its boundaries are frequently indeterminate” (ibid. 1029-1030, emphasis mine)

What is striking about these points is the fact that the tensions between Trump and democracy are not the kind fiction or social science has feared, but a throwback to ancien régimes. In many ways, this isn’t a surprising or even novel conclusion. Trump appears to have run his organization as a mafia-type system (also patrimonial domination, but without the full weight of the monopoly over the legitimate right to violence). It also may account for why every venture has failed, as patrimonial logic does not work well with formally rational capitalism (see quote #4 above). And thus, there are tensions and real consequences when logics of domination cross streams; especially when they are ill-fitted to each other.

Make no mistake, there are many reasons to be frustrated with the last few years besides this disjunction. Beyond the GOP ramming through retrogressive judicial nominees and the failure of the popular/electoral vote to produce a satisfying outcome, there are many, many things to concern us: Trump’s xenophobia, racism, antisemitism, misogyny, and so on. More problematic, if you were to ask me, is his complete and utter lack of empathy, to the degree that he appears sociopathic. Yet, I think this disjunction in logics is the crux of the problem: I would argue that the disjunctions between his patrimonial style and the crystallized expectations we have about governance and domination are as much a source of consternation as the personality quirks and blemishes of a narcissist. Running a casino into the ground for his own vanity pales in comparison to the incommensurate logic patrimony produces amidst a serious public crisis, like a pandemic.

For instance – and germane to the intro to this essay – non-elected staff – his daughter, son-in-law, valet, and other close confidants – have no clear boundaries between their private and public selves, nor clear mandates tied to the specific projects in which their expertise might be handy. When patrimonialism is grafted uneasily onto a different system of domination, it invites far more confusion and inefficiency than either system would produce under incompetent leadership. In part, this is because the former system attempts to subvert all organs of the state to the ruler’s material and symbolic gain, while bureaucracies – even those rife with corruption – are oriented towards resolving the problem, if only to prevent being voted out of office or fired.

Patrimonialism and its Discontents
It is an interesting and open question whether Weber could have imagined a retrogression of domination. Though I believe Weber was not an historical determinist, having learned much from Marx’s failures, I do think it fair to say that organizations of all kinds lurched towards greater rationalization. By rationalization, I mean formalization (e.g., written rules), standardization, quantification of metrics, and forward-thinking (e.g., predictability as a key value); add to this the motivation to replace human error with non-human technology. Rationalization, for Weber, was not so much an immovable object as an irresistible force.

Had he lived long enough to see Hitler, Stalin, or Mao, he would have seen charismatic individuals embrace bureaucratic domination to pursue their ends, and he probably wouldn’t have been surprised. Though they were brutal and sometimes arbitrary in their decision-making, they sought to use the bureaucracy and were, thus, beholden to rationalized forms of mass brutality.

For Trump, this rationalized brutality is only really present in his immigration policies, which, like those of the aforementioned dictators, used state force and mass incarceration to achieve their goals. It is expected that any party and president will choose political appointees committed, to some degree, to enacting his or her vision and policy goals. Trump, however, has only one goal: using the state to enrich himself materially and affectually (through adoration and fealty). His cabinet, in many ways, has become a set of miniature patrimonies because, on the one hand, its members owe their fealty to Trump, with two key rules governing their decisions: do not outshine Trump’s news coverage and do not contradict him. On the other hand, they have one job: evade the bureaucracy and dismantle it if necessary. Politics as usual expects that Obama’s EPA or labor secretary will be pro-environment and pro-union and that Bush’s will be far more pro-business than not. It is entirely a different thing, however, to choose purely self-interested actors whose motives are, in fact, the opposite of the department they are running. And, in nearly every other regard, his patrimonialism throws into sharp relief just how different he is from every other president. Even Nixon, ultimately, resigned before the customs, norms, and laws constraining his use of power caught up with him.

Thus, while much of the outrage can be correctly pinned on the explicit cruelty of the Trump administration, I would argue that it runs deeper. Amidst the incongruence between two different systems of domination, a sense of structural chaos and, worse, unpredictability emerges. Perhaps this is why sociology exalts Weber’s types of authority, which “make sense” in a modern democratic republic, and not his types of domination, where patrimonialism seems distant structurally, culturally, and phenomenologically from the system most of us are born into and operate within. And while Weber worried about the dehumanizing effects of bureaucracy – effects which unfolded in their full horrifying display during the Holocaust – like most of the classical theorists, he was deeply suspicious of “irrational” systems of domination; systems that usually were extensions of an individual and his household’s personal pleasure. Hence, every action of Trump or his staff becomes subject to being viewed through a patrimonial lens and, thereby, as corrupt. Meanwhile, the bureaucracy atrophies, and sincere as well as partisan fear and anger rage around the norms and laws that seemed to protect the U.S. from the very thing Trump’s most ardent followers decried Obama and Clinton as being guilty of (and which became the rallying cry of the COVID “liberate” protestors): tyranny. The stakes couldn’t be higher.

Posted in Musings on Sociological Theory | Leave a comment

Cultural Trauma and Total Social Facts

Since Durkheim, sociology has had the habit of looking at psychological phenomena and attempting to co-opt them in the name of social facts and forces. A promising phenomenon, one with some relevance for the current COVID pan-pocalypse we are all enduring, is trauma. Once a subset of the neuroses or anxieties found in soldiers who saw combat, PTSD has become a common diagnosis for a wide range of individuals whose experience or experiences have lasting cognitive and affectual consequences. Trauma is no joke.

Predictably, sociologists have borrowed this term, applying it, recently, to moments in which “members of a collectivity feel they have been subjected to a horrendous event that leaves indelible marks upon their group consciousness, marking their memories forever and changing their future identity in fundamental and irrevocable ways” (Alexander 2004:1). In typical sociological fashion, sociological phenomena – like collective or cultural trauma – are not reducible to biology or psychology: “Trauma is not the result of a group experiencing pain. It is the result of this acute discomfort entering into the core of the collectivity’s sense of its own identity” (ibid. 10). Since Durkheim’s “discovery” in the 1950s/1960s and subsequent canonization in the 1970s/1980s, American sociology has hewed closely to his forceful denial of the overlap between psychology and sociology in both The Rules of Sociological Method and Suicide. (Never mind his Elementary Forms of Religious Life, which I could argue was the first and one of the greatest pieces of social psychology sociology has ever produced!) What if, however, cultural trauma could be shown to be empirically distinct from the types of intrapersonal trauma experienced by abused women and children or by soldiers who witnessed their entire platoon decimated, yet still have biological and psychological roots? Would that undermine sociology’s claims, or would it bring these disciplines closer together while asserting sociology’s rightful place?

Total Social Facts
In a series of lectures on psychology and sociology, Durkheim’s nephew, student, and collaborator, Marcel Mauss (1979), argued there were some types of social facts that were total social facts. Social facts are external, “coercive” forces that give shape to how a group’s members tend to think and feel. Some are structurally patterned while others travel like “currents” of electricity or public opinion, spreading from one person to another in a recurring game of telephone. Social facts become internalized through ritualized occasions, both the ceremonial and spectacular discussed by Durkheim and the mundane emphasized by Goffman, but they remain external in the symbolic representations of meanings etched into words, physical and social objects, and even geographically and temporally distinct spaces.

A total social fact is one that is fundamentally rooted in our biology and psychology, yet is made real and meaningful through social interaction and patterned by structural and cultural formations. Mauss, for instance, pointed to language as a total social fact. As Meštrović (1987) interprets this, language is manifest sociologically in the written word, psychologically in speech, and physiologically in evolved features that allow for the mechanics of speech, like the larynx. With advances in the cognitive sciences since the 1980s and their spread into sociology, we can push this idea a little further: sociologically, language is shaped by actors, their environment, and the way in which actors are distributed in space; psychologically, speech is rooted in the mechanisms facilitating the actor-environment interface, like mirror neurons that allow toddlers with no language to track and internalize subtle muscle movements around phonemes; and, biologically, in the larynx and the language centers of the brain. Thus, on the one hand, the speech act is actually a mixture of biological and psychological forces, but on the other hand, it is wholly determined by the structural and cultural context in which significant symbols used in speech are acquired. We need not be afraid of the brain, both biologically and cognitively, because social forces remain ever-present in their ability to coerce patterns of thought and behavior.

Mauss, as Meštrović notes, did not fully leverage the idea of a total social fact, and mostly assumed the listener’s/reader’s familiarity with Durkheim’s and his acolytes’ sprawling work on religion and kinship; so, like Meštrović, we are free to extend the idea with some creative imagination. Can something presumed psychological – PTSD – be a total social fact too?

Return of the Organismic Analogy
One possible way to imagine trauma as both sociological and as a total social fact is to reconsider the organismic analogy. In classical functionalism, society (or, more accurately, collectives) was conceptualized as an organism or supraorganism. Often, this analogy was metaphorical: the large, complex proto-industrial and industrial societies sociology emerged within were composed, like the body, of differentiated “organs” that look different from each other and do different things. However, some, like Herbert Spencer (1873) and, arguably, Durkheim in his Division of Labor, went beyond the metaphor to argue that, in fact, social units were organisms (with special caveats). Sociologists summarily rejected both the analogous and literal organismic perspectives. But, it remains an open question whether the rejection was too hasty or made on empirical grounds.

For instance, in Kai Erikson’s (1978) incredible study of several West Virginian hollow communities that had been suddenly destroyed and dislocated by a severe coal-mining flood, we are presented with evidence that some types of social organization can resemble an organism. Echoing Durkheim’s vision, Erikson compares the Buffalo Creek communities to the types of foraging and pastoral societies made famous by anthropological investigation:

persons who belong to traditional communities relate to one another in much the same fashion as the cells of a body: they are dependent upon one another for definition, they do not have any real function or identity apart from the contribution they make to the whole organization and they suffer a form of death when separate from the larger tissue…a community of this kind being discussed here does bear at least a figurative resemblance to an organism…It is the community that cushions pain, the community that provides a context for intimacy, the community, that represents morality and serves as the repository for old traditions (Erikson 1978:194, emphasis mine).

In interview after interview, Erikson found themes that bubbled up regardless of the individual and the community and network she belonged to prior to the flood. The same pain; apathy; moral disanchorage; disorientation; tendency to develop psychosomatic physical ailments; confusion. He concluded that “when you invest so much of yourself in that kind of social arrangement you become absorbed by it, almost captive to it, and the large collectivity around you becomes an extension of your own personality, an extension of your own flesh” (ibid. 191). This description anticipates the conceptual frame Alexander provides to understand cultural trauma. However, it also suggests something more fundamental than just external social facts: there is something social psychological (Abrutyn 2019) and, even, biological at work. Erikson (1978:191) continues: members of the community were not only “diminished as a person when that surrounding tissue is stripped away, but [were] no longer able to reclaim as [their] own the emotional resources [they] invested in it.”

Cultural, collective traumas, then, have the ability to act as total social facts in cases of tight-knit communities where members have tremendous material and ideological stakes. At the social level, collective trauma is a function of structural factors, like network density, and cultural factors, like the tightness/coherence of culture. Moreover, it is found in the loss of social relationships and identities that triggers the experience of shared trauma (Abrutyn 2019), thus bringing the analysis of trauma into the social psychological and, presumably, psychological. It is accurate to say that the loss of social ties and the other things that represent them – like our identities, routines, and so forth – has the effect of eliciting social pain. Social pain, interestingly, affects areas of the brain that overlap with physical pain (Panksepp & Watt 2014; Tchalova & Eisenberger 2015; Matheson et al. 2016), which brings trauma into the neurobiological. And because collective trauma traverses all three levels, yet remains tethered to each, we can conclude it is indeed a phenomenon that truly resembles Mauss’ vision of total social facts. That said, a set of interesting and germane questions revolves around whether or not collective, cultural trauma can extend beyond these tight-knit networks.
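Since network density does real analytic work here, it is worth being concrete about what it measures: the share of possible ties that actually exist. A minimal sketch with invented toy networks (my own illustration, not Abrutyn’s or Erikson’s measure of any actual community):

```python
# Toy illustration of network density: actual ties divided by possible ties
# in an undirected network. The example networks below are invented.
from itertools import combinations

def density(nodes: set, ties: set) -> float:
    """Return the proportion of possible undirected ties that are present."""
    possible = len(nodes) * (len(nodes) - 1) / 2
    return len(ties) / possible if possible else 0.0

members = {"A", "B", "C", "D", "E"}

# A tight-knit hollow: nearly everyone is tied to everyone else.
hollow_ties = {frozenset(pair) for pair in combinations(sorted(members), 2)}
hollow_ties -= {frozenset({"A", "E"})}  # one missing tie out of ten
print(density(members, hollow_ties))    # 0.9 -- dense; shared trauma plausible

# A depersonalized urban ego network: same people, far fewer ties.
urban_ties = {frozenset({"A", "B"}), frozenset({"C", "D"})}
print(density(members, urban_ties))     # 0.2 -- sparse; pain stays individualized
```

The contrast is the point: on Erikson’s account, it is the dense, multiplex web, not the flood alone, that makes the trauma collective.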

Trauma in a Depersonalized and Digitally Mediated World?
I conclude this essay with less certainty and more speculation. Currently, we are all experiencing the effects of the coronavirus. For some, it is moderate in its effects: our favorite haunts are shut down and we may be working from home. For others, it means strict social isolation and seriously reduced human contact beyond the denizens of our home. And, for others – particularly members of disadvantaged communities and/or classes of people – it is a high-risk situation: working with or without PPE; struggling to balance the need for a paycheck with a lack of, or reduced, childcare; working in essential organizations in which contact with symptomatic and asymptomatic individuals is common or, even, the description of the job.

However, the majority of the Earth’s population lives in urban areas, with another significant proportion living in the types of non-urban communities that are too large and too diverse across an array of categoric distinctions to create supraorganismic conditions. So, rather than the tight-knit worlds of Buffalo Creek, we live in two worlds: an encompassing depersonalized world (Lawler et al. 2009) and a digitally mediated world. The former is everywhere, as most interaction involves people occupying generalized role positions (e.g., doctor or patient) that carry well-worn patterns of thinking and doing that reduce the need for personal relationships. The latter, however, is no less common, especially now in times of social isolation. Both raise questions about whether the disintegrative consequences of COVID can be understood as collective trauma. Depersonalized contacts involve “thin” exchanges that differ from the dense, multiplex nature of tight-knit communities, while it has become painfully obvious that face-to-face interaction is dynamically different from its digitally mediated cousin (e.g., Zoom, Twitter).

In both cases, the question of whether a generation or cohort of individuals is experiencing the type of collective trauma that Buffalo Creek’s communities did remains open. If so, why and how does the process resemble the sudden disintegrative reality imposed on those West Virginians? More interestingly, how is it different? How does it differ across categories of people and regions? If the trauma is in fact different, how so? Is there a sociological process occurring, or are people alone on their islands? These are, ultimately, empirical questions for which I lack the data to even hazard an intelligent answer. But, they are really important questions. The world is not going to suddenly shrink and personalize and, short of the end of electricity, social media and digital communications are not likely to disappear; indeed, one consequence of COVID will probably be the expansion of these technologies into more and more facets of our lives. I’ve made the case elsewhere (here and here) that the pain we are experiencing – the grief, panic, anxiety – is rooted in loss. Loss of recurring ritual occasions; of taken-for-granted routines; of both the annoyances and small gifts daily life provides us; and, even, of the places and other objects that implicitly are extensions of our self. The question, though, is whether or not this pain can be characterized as a total social fact in the sense that it is shared and cultural, yet fundamentally tied to our biopsychological architecture.

If so, what does the search for a new collective identity look like in a digitally mediated or depersonalized world? One consequence Alexander underscores is that the identity anchors are torn apart and a new shared sense of community identity must be built up. But, in less deeply dense networks, how does this proceed and what does it look like? Alternatively, what becomes of the current arrangements of cities? Do we try to create new structural and cultural supports to build these identities? And if so, how? And what do those look like? Again, lots of unanswered questions… And, lots of opportunities for new types of research!


Posted in Culture, Emotion, Musings on Sociological Theory | Tagged | 1 Comment

Initial Results: Teaching Grad Theory

Well, the term is over. Not simply because of Covid, but because UBC is on a 13-week semester and class ended last week. I am reporting, unscientifically, initial evidence from my experiment. First, a note on how the class ended up being organized.

  1. The first week was devoted to “what is theory,” with several papers like Abend’s now-classic piece and Jon Turner’s defense of positivism. At the same time, Paul Reynolds’ primer on theory construction was read to provide a sense of what formal scientific theory does and looks like. The second half of the class was devoted to actually deconstructing a theory-driven article (in this case, Ridgeway/Berger’s 1986 ASR). The point was to find a clear, “formulaic” article that allows us to discuss both the practical side of writing/publishing theory-heavy or theory-driven papers and the exercise of deriving propositional statements from less-than-formalized work. Any such article would suffice here.
  2. Weeks 2-6 were devoted to broad strategies of writing, presenting, publishing, and conceptualizing theory. Formal theory was first, followed by a week on how some theorists/subfields/strands of scholarship develop and engage with a classical theorist or a specific work (in this case, Suicide, but anything will do fine here). Next we read an assortment of theoretical pieces that employed different strategies for synthesizing theory – that is, synthesizing subfields, concepts, and so forth; we then shifted gears to how a theoretical idea, like schema, evolves within a given subfield; and then looked at a range of works that are interpretivist/hermeneutical. The best-received week was the integrative one, perhaps because it was the most creative, but there was a lot of love for the first and second weeks too. The former was enjoyed because it helped them see how propositions come in different presentational forms and also how helpful they are in outlining an article and using it for comps, dissertation ideas, and actual empirical work. The latter was enjoyed because we looked at both theory and empirical work that sought to deal with serious theoretical dilemmas in Durkheim’s work. While the students liked the less-positivist nature of interpretivism, the criticism was that the articles were overly detailed in their historical/archival data and one could get lost too easily in the weeds.
  3. The remaining weeks were devoted to large thematic areas: the self; emotions; commitment; power; social capital; and meso-micro linkages (e.g., networks; Fine’s idiocultures; Glaser/Strauss’ awareness contexts; community; strategic action fields). These weeks were devoted to seeing each of these conceptual themes from different angles while also noting the overlapping, commensurate aspects. If I were to change anything here, it would have been a couple of the readings and some of the themes. Mostly, these are placeholders for me to change year-by-year for the sake of freshness.

Now, the pros and cons.

  1. Pro #1: A nice balance between pragmatic concerns (writing publishable theory) and substantive concerns (a theory class is supposed to give students breadth of some sort). A lot of class – before Covid destroyed in-person meetings – was devoted to how to read theory, which was good. It also exposed students to a wide-ranging body of theory that I think they would otherwise miss. Not focusing on areas or theorists allowed me to assign diverse readings each week, tied together by writing or theorizing strategies (integrative theory, for instance). This meant both a practical side (what do people try to integrate and how many different ways do they try to accomplish it) and a substantive side (what is the actual theory?). We ended up covering a lot. I snuck evolution in, for instance, through the gender stratification paper discussed below and in Turner’s 2007 book on the evolution of emotions and social relationships. I was able to jump back and forth between social psych and organizational sociology. And, ultimately, I gave a decent survey of the discipline. Also, by including exercises devoted to thinking about how theory is used in more empirically-driven articles, we could spend time thinking about the role theory plays in research and not just in navel-gazing or canon-worshipping.
  2. Con #1: In the background, a serious problem kept bubbling up: the macro-micro dilemma. Some sociologists can and do think macro, historically, and in patterns; the vast majority do not. Many are, rightfully so, devoted to contemporary issues, whether race or sexuality, because those issues matter to them. The time, energy, and resources needed to learn history to the level that one is comfortable even engaging big historical theories is also a major cost and, as Marx would have found out had he really tried to raise the proletariat’s historical consciousness, too often met with resistance, frustration, and questions of relevance. For instance, we read a great article by five authors positing a general theory of gender stratification. Now, admittedly, the article was on gender, which meant some students were simply going to critique it on critical grounds. But, the point of the article was to take four major areas that gender scholars focus on in explaining stratification, examine the historical and ethnographic record, and posit a theory that explains variations across time and space in how inequitable gender relations are. It is big and necessarily complex in its parts, but the article does a valiant job condensing a lot of ideas into a small space, drawing links between different research areas, and demonstrating how and why gender stratification varies. In any case, it is a great example of integrative macro-theory, induced from extant empirical evidence in order to permit deductive testing. The criticisms were typical: it is too simple (four big areas with two or three dozen concepts that could be operationalized in five or six dozen ways!!?!); it ignores the experiences of women (that’s the point; experiences are for a different type of sociology, but that doesn’t make this any less valid or worth reading). The lack of engagement with the theory was astounding across the board. I shouldn’t be surprised, as that tends to be the case year after year, no matter how or what I taught. If you asked me, it is really the disconnect between theory as philosophy, theory as critique of everything one hates or sees as the enemy of their group’s mobility, and theory as social science. Students aren’t sure what they are supposed to be getting out of it, and the theorists they often gravitate to are like Rorschach tests: garbled, obscurantist, and thus easy to impose whatever one wants to see upon.
  3. Pro #2: You know what did work out? My students increasingly became aware of the fact that there are 20-24 basic dimensions of social life (on continua like formal-informal, intimate-impersonal, etc.) and that there are a delimited number of conceptual issues, themes, and so on. They came to this realization early, so I like to think the readings and the format of the front end of the class were the cause (but I have no data, nor did I even try to collect such data). To be sure, a couple of students came in believing sociology isn’t a science and that science is a white male oppressive, hegemonic ideology, so they would never be persuaded, though I think one became more critical of that knee-jerk perspective. Ultimately, I think it worked. It also worked that students became proficient at drawing propositions out of various types of work, both journal articles and monographs like Kai Erikson’s Everything in Its Path.
  4. Con #2: Even with the two extra weeks common in US semesters, there are areas I would have missed because of time constraints. For instance, I love evolutionary sociology, but it seemed not to fit and would have required a lot of time talking about evolution itself. Likewise, I study institutions, and it would have been nice to include that work. In the past, and to some extent now, I have used theory to expose students to the stuff electives and methods classes usually don’t expose them to. It is often as though other subfields, like evolutionary sociology, are treated as less important than hotter fields; and if students don’t get such material in my class, would they choose an elective on human societies and social evolution? Doubtful, but that is a hypothesis worth testing.
  5. Con #3: This is a little self-serving, but I continue to miss the joy I felt when I was first exposed to theory in my MA program. I was given free rein by a hardcore Marcusian Marxist trained by a sociobiologist to explore. I read widely: Freud, Fromm, Fanon, Luhmann, Comte, and so on. Two years and so much reading. To be sure, that was as much my own use of time as it was programmatic. But theory, to me, was about reading widely and having serious conversations. A class on theory can be that, but it also raises serious questions about what the point of theory is. If you have students who are learning to write grants or develop research programs, does knowing Nietzsche, Plato, or Rousseau matter? Is it important to debate how much of an influence Kant had on Durkheim? Is it worth spending precious time exploring critiques like Marcuse’s when they are rooted in impossible-to-prove critical analyses? Just what are we up to?
  6. Pro #3: The class is getting closer to what I think is a good theory class. The end result was to produce a theoretical paper, not just a review of a literature or subfield. The five papers (I have read their outlines and intros, to date) are built on genuinely exciting ideas. Will they be executed well the first go-around? I don’t know. Is it a case of randomness (five really great students), or did the class provide certain tools? Also unknown. But I can say that, for the first time, every paper – even those that sit outside of what I would write as theory – looks really strong and potentially publishable under the right circumstances. Students took chances, in many cases, and are trying really difficult but creative things. So, that is a win.

PANIC/GRIEF, or the Pain of Social Distance

Do you feel it? The pain of being stuck inside, apart from the people you love? Apart from the routine movements that fill the rounds of daily life and that are blindly taken for granted? The patterns of interaction or exchange that, like signposts on a road, made these routine movements meaningful – i.e., innocuous small talk with a co-worker whose office you passed every day; the familiar feel of your desk or warm glow of your screen; the distaste for a fellow co-worker or a voice that grates on you? The feeling of others present in unfocused encounters like riding a train or bus, walking through a hotel lobby or department store, or people-watching from your upstairs window? You’re not alone. Because we are apes, our brains evolved to crunch complicated calculations about conspecifics; about one’s position relative to others; about one’s love, lust, or liking for another and their evaluation of one’s self; about one’s status or reputation; about past interactions and anticipation of future ones. Indeed, our supercomputer of a brain seems designed to handle up to 150 relationships in a network of others (Dunbar 1992). Social intelligence, perhaps; a powerful need, definitely. We are designed to be “nosy,” to talk about each other, to gossip, to want to “groom” each other, to compete and be co-present. Why?
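(An aside on where that 150 comes from, for the quantitatively curious. Dunbar derived it by regressing mean group size on neocortex ratio across primates. A rough sketch of the calculation, with coefficients as commonly reported from Dunbar’s work and worth checking against the original 1992 paper:

$$\log_{10} N \approx 0.093 + 3.389\,\log_{10}(CR)$$

where $N$ is predicted group size and $CR$ is the neocortex ratio. Plugging in the commonly cited human ratio of roughly 4.1 gives $N \approx 10^{0.093 + 3.389 \times 0.613} \approx 148$ – hence the famous ~150.)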

The late neuroscientist Jaak Panksepp (1998; 2005; see also van der Westhuizen and Solms 2015) identified seven structurally discrete affectual systems in mammalian brains that he theorized evolved to coordinate – and sometimes command – the behavior of mammals facing the same sorts of evolutionary challenges. [The all-caps was Panksepp’s strategy for emphasizing the motivational component over the usual emphasis on primary emotions; however, each affectual system corresponds with one or a set of related emotions.] (1) SEEKING, or the motivation to pursue resources; (2) RAGE, or the motivation to defend those resources; (3) FEAR, or the motivation to avoid pain and destruction; (4) LUST, or the motivation to bond intimately with some others; (5) PANIC/GRIEF, or the motivation to avoid rejection, isolation, and exclusion; (6) CARE, or the motivation to nurture the young; and (7) PLAY, or the motivation to bond with others through vigorous interaction. In primates, particularly apes, the latter three are larger because of their obvious importance to sociality. A key insight from Panksepp is that these affectual systems are primary systems, like our digestive or endocrine systems: though they may work in coordination with cognitive functions, they usually serve as executive controls (that is, they stimulate and direct other functions like memory or behavior) and, in intense moments, as command functions, taking over the entire body and forcing “instinctual” reactions. This is not to say that we are automatons, but that affect is primary to cognition (Damasio 1994; LeDoux 2000) – which really isn’t controversial.

That all said, our brains are plastic insofar as these affectual systems are designed to aid in learning (Davis and Montag 2019). Memory, for instance, is predicated on information being selected as important or not based on the affectual intensity it elicits and the valence it triggers. Thus, more emotionally intense events are more easily remembered, more easily recalled, and more likely to be deemed relevant to the self and/or a relationship (Conway 2005). And so, as George Herbert Mead theorized, our self is constructed through interactions with others – real, imagined, and generalized – such that we acquire the meanings that make physical, social, and ideational objects significant and, therefore, something around which we can coordinate our actions. The resources we SEEK; when, why, and how we RAGE (that is, how and why we express or suppress anger); and whom we LUST after are a mixture of our evolved affectual systems, unique genetic factors, and, very often, the sociocultural environment in which we acquire Mead’s meanings.

SOCIAL PAIN
One of the most powerful systems, in my opinion, is the PANIC/GRIEF system. Panksepp notes that all neonates trigger the PANIC system automatically when they lose sight of their mothers and, conversely, all mothers trigger PANIC when they lose sight of their young. Any parent who cannot find their child in a crowded mall, or any reader who recalls being separated for even the shortest period in an unfamiliar place, knows the feeling. We are, essentially, wired to feel the emotions centered in PANIC when suddenly isolated, when a bond is dissolved, or when we are excluded or rejected. Interestingly, Panksepp gave the PANIC system a second name, GRIEF, to reflect the intertwined relationship between losing sight of a caregiver and threats to and losses of social bonds. To be sure, isolation, rejection/exclusion, and loss all elicit different culturally appropriate emotions at different levels of intensity based on many factors, including the significance of the person with whom we lose a bond; the duration of the relationship that is threatened and, thereby, the time, energy, and other resources we’ve invested; and the source of blame to which we attribute the affectual response (self, other, group, abstract system). Exclusion and rejection are particularly painful, often eliciting intense social emotions like shame and humiliation (Retzinger 1991; Gilligan 2003), whereas losing a bond to death triggers grief in all of its forms. But the point stands: the loss of anchorage to a person or group, for a variety of reasons, triggers a deeply evolved affectual system that, subsequently, pushes us into action. Why does this matter?

Right now, most if not all of those who chose to read this are sitting alone, at a computer, socially (and responsibly) distancing from others. The pangs of anxiety you feel are the natural activation of FEAR in response to the uncertainty of when this will end and what that end will look like. The activation of GRIEF is premised on not being able to have the small or the large, the mundane or the spectacular co-present rituals affirming our social ties. And, for some of us, when both are activated, we probably do feel a sense of PANIC. That we have three affectual systems devoted to different types of social bonding (LUST/intimacy; CARE/nurturance; and PLAY/social joy) underscores the variety of interactions we are being excluded from, even those we have long taken for granted and which a survey would surely miss, as most people do not realize just how much we yearn for simple exchanges as much as complex ones. Indeed, I would argue the former are more important than the latter, which may help explain why my twitter feed features as many people isolating with their families feeling anxiety and grief as people who are alone for one reason or another. That is, we would expect those stuck with their significant others to feel supported and warm, not social pain. So, why is this the case?

APE SOCIAL NETWORKS
In previous posts on this site, I have presented Alexandra Maryanski’s (1987; also, Maryanski and Turner 1992) network analyses of the four remaining ape species (gibbons, orangutans, gorillas, and chimps). One of the central arguments – counter to the conventional sociological assumption – is that apes, of which we are one, prefer to have only a couple of strong ties and many weak ties. Chimps, for instance, live in communities that share a territory but have few strong ties. Mothers and their young are obviously strongly bonded, but that is about the extent of such ties. Male-male bonds do form between hunting partners who reciprocate meat sharing, but when females reach child-bearing age, they leave the natal group and join another, and so do some of the males. Female-female bonding is also quite rare. Hominins, the lineage that split from chimps (with whom we share a Last Common Ancestor), also preferred autonomy, independence, and few strong ties/moral obligations. That means the earliest human societies likely included a lot of freedom of movement.

Our brains, then, are theoretically designed to make complex social calculations about a lot of weak-tie relationships. There is pleasure found in gossip; in associating with people with whom we share a single interest; in friendly and intense status competitions; in the banter that comes and goes, but which is as routine as the road we take to work every day, the seat we occupy in every lecture of a class, and the familiar occurrences that make us mildly content or annoyed. So, the general feeling is that we are missing significant others, but what we are really missing is the extensive networks we belong to as direct, indirect, fringe, or bridge members. The familiar has melted away.

To put it more colloquially: we are mourning. Your friends and family, over social media and FaceTime, are going through the five stages of grief at different paces and over different social bonds. The world is grieving over social ties and panicking over what this means for the self in the future.

Furthermore, when we take a walk or brave a supermarket, the rules of distancing and the paranoia many feel and express have created unsatisfying small, weak-tie exchanges. We have all become Goffman’s (1963) “normals,” and everyone is now a non-person; someone to fear. The dynamics of interpersonal interaction are now mechanical and conscious instead of fleeting, taken for granted, and mildly contenting.

But, why are people stuck with their significant others feeling isolated and anxious? Well, there is sociological evidence that strong ties are not always our favorite ties (Small 2017). They are exhausting, mentally draining, and filled with obligations – everything we, as apes, dislike. To be sure, I love my family; and I think a lot of good things have come from being stuck for almost two weeks in a small space together. But the lack of freedom of movement and escape from the “social cage,” as Maryanski – putting a twist on Weber’s “iron cage” – calls it, is stressful and anxiety-riddled in its own right. If variety is the spice of life, then bonds built on familiarity and the totality of self blur the thrill of backstage/frontstage divisions; of the transition from one to the next; of intrigue; of being only a tiny sliver of oneself and having secrets that give depth to our weak-tie relationships. Reputation in our family life matters much less on a day-to-day basis. Social calculations are mundane and rote. In a word, the thrill is gone.


What Is the Point of Sociological Theory?

This morning, I will be embarking on graduate contemporary theory for the eighth time in my career. Every year, it has evolved – sometimes quite significantly – making me the guy who won’t commit to a recurring syllabus and who thereby takes full advantage of the amazing teaching load UBC offers research-oriented faculty. This year, it has taken one final step forward, but that is not the point of this specific entry (though I do want to revisit what happened last term with my undergrad theory course and discuss the way I’ve designed the current course in future entries – entries which, I hope, come at a more consistent and quicker pace). For now, I want to build a little on an abbreviated twitter thread I left yesterday. In short, the title of this essay is the subject of the post.

What Is the Point of Theory?
In re-reading Abend’s (2008) piece on the meanings of theory and Turner’s (1985) piece defending positivism, alongside the first couple of chapters of Reynold’s *Primer on Theory Construction*/Homans’ *The Nature of Social Science* – all assigned for day one, the question once again was as salient as it can be. I was once again excited to read Abend’s arguments, which, in essence, put the burden, first, on the discipline to work out, politically, what exactly we mean by theory; and, second, on individual sociologists – in their reviews of papers, for instance – to not just say something like: “it does not sufficiently advance theory,” but rather be self-reflexive and specify which of his seven commonly attributed meanings of theory the reviewer implies. Both of these points seem wise, and in fact useful. That I have never seen a paper of mine reviewed this way or reviewed a paper this way, a decade plus later, I would say Abend’s practices have not been adopted.

What I didn’t like about Abend’s argument, however, kept haunting me. Especially as I read Turner’s argument for science and positivism (which, is a loaded word with myriad meanings and really not the point of this essay, so will be left alone for now). On the one hand, Abend’s first, second, and third types of theory share several key features the others do not; mainly, a commitment to engagement between theory and empirical reality. The others are exercises belonging to the humanities. To be sure, my argument ultimately is not that these do not belong in the discipline – surely they do, and some of my most favorite work comes from scholars mining the depths of a specific theorist – but that Abend does not go far enough in distinguishing what is theory – by the actual definition of the term – from that which is something else. Of course, that is not his intention or rhetorical strategy. Yet, I was left unfulfilled as I have always been in reading his otherwise excellent paper.

If the point of sociological theory is not to engage with the empirical world through some type of methodological strategy, then it is probably best not to call it theory. Why? Well, for one reason, there is a scientific community significantly larger than sociology that has adopted this meaning; and, as we know, what is believed to be real becomes real in its consequences. Intersubjectivity is a foundational element of any community, and if the scientific community, writ large, defines theory in terms of its application to empirical facts and regularities, then who are we to take the term and use it however we want? If it were subversive, I might understand. But I think Abend is right: people have usurped the term in order to fight other political battles in the discipline, and that fact does not make it right or better or appropriate.

So, in a few hours, my students will hear that they have four tasks this term: learning substantive theoretical frameworks/theories (something I am simply no longer able to focus my primary energies on in the course); learning how to read theory and/or the theoretical elements of research articles; learning how to extract, write, and draw formal social scientific theory; and learning how to publish the types of theory offered by Abend, Turner, and Alexander. My first thought about this compromise is that it is a lot. But it really isn’t as much as one might surmise. I have taught theory in several ways for over a decade, and I have realized students will glean what they glean, regardless of your efforts. Like stats and many other classes at the grad level, it is there for the taking and up to the student. So, substantively, I give them as much range and depth as I can, and they choose their level of weekly engagement. In terms of reading theory, every week is split into a first and second half, with the latter reserved for posting PDFs of articles with detailed notes about how people present theory in different ways when writing papers. This also contributes to the substantive goal and to the writing/publishing goals. It’s the latter two that are potentially in conflict.

Scientific Theory
Abend cautions us against pushing a scientistic epistemological and ontological solution to the so-called semantic problem. He worries, practically and politically, about the outcome, and for good reason. But I am less worried about these problems. After all, as Turner has said: if we aren’t doing science, then what are we doing that is unique? Critical literary analysis? Covered in another department whose training is primarily focused on it. Philosophy of science? Covered. Idiographic historical case studies? Covered. We study societies, social organization, social behavior, attitudes, feelings, and the like; and our contribution to social problems comes most cogently from our best methodological practices. Turner’s extremism, unfortunately, obscures the better parts of his argument – arguments I know well, having been his student and having had countless conversations with him. He is much less ideologically rigid than his polemics suggest.

First, Turner is methodologically agnostic. For many reasons, the word positivism has become synonymous with quantitative orthodoxy, but neither Comte nor Turner thinks this way. Positivism, for both, simply states that we should be working towards identifying the key properties of the social world, the law-like relationships that hold regardless of time and space, and proximate rather than ultimate causality. These are lofty goals, and we can debate whether there are laws or not (there are, incidentally). But neither staked out a side as to how we achieve these goals. Turner is somewhat un-empirical, but he prefers historical and ethnographic data to stats (he is quite critical of complex modeling strategies that pretend to be theory). Comte argued that naturalistic observation was as important as anything else.

Second, while Turner says there are better strategies for building cumulative theory, he recognizes that any theory committed to empirical analysis is, by definition, scientific; it just falls short of the ultimate goal of cumulative knowledge. That said, like many general theorists of his generation, he treats middle-range theories, historical explanations, analytic schemata, and the like as inspirations for the sociological imagination in building systems of causal laws. [Side note: I am not even sure Turner is committed to laws, as laws are empirical regularities and less abstract than the systems of interrelated propositional statements he prefers (see 2010, 2010, 2011). His broader argument is that we know a lot more than we act like we do, and that we could teach our students a common theoretical language before they move on to their own specific interests, without robbing them of the sociological imagination.]

Third, Turner accepts that there are other goals of science that can serve as criteria for evaluating the value of theory. In Reynolds’ (1971) archetypal text on the construction of sociological theories, several goals are listed besides cumulative knowledge, and they overlap with and extend Abend’s type 1, 2, and 3 theories. The first is description/classification. Theories can be descriptive! This is actually nice. Note that description for description’s sake is the weakest criterion of theory-building because, presumably, one’s taxonomic efforts should also contribute to explanation and, potentially, prediction – the next two goals.

In terms of explanation, Reynolds, like Turner, is agnostic. He clearly prefers explanations that are independent of time/space, but also realizes historical explanations of specific cases are useful. This heterodox stance also pries open space for qualitative research committed to scientific rigor. In my own experience, qual is essential to revealing mechanisms, processes, and, of course, meanings that should – in a perfect world – inform future deductive research in terms of the questions asked and the instruments developed to answer them. This is not always the case, but I would argue that is more a function of a false divide between quant and qual and, worse, the idea that theory shouldn’t be rooted in scientism. Again, activities that are not scientistic are not lesser, subordinate, or non-sociological; they just aren’t theory. If we push back and see explanation as a central goal of theory, then description strives to reach explanation. Quant folks might read qual folks more, or collaborate with them, to improve surveys and analytic strategies; and new qual folks emerge to deal with the ever-present gaps that science, as an epistemology, purposefully creates.

The bogeyman, here, is prediction. I won’t dive too deeply into it. But I do think sociologists can predict some things. And, by prediction, I mean what other sciences mean – e.g., biologists can predict the general time a leaf will fall and specify why their prediction is as such, but they cannot give you a day or time. We know the conditions under which ethnic or class conflict should arise. Like the leaf that does not fall on a predictable Tuesday, conflict may not happen next week or next month; it may simmer. And, like a chance fire burning the tree to the ground before the leaf falls, other intervening variables may change the trajectory of the conflict. This is not precise, to be sure, and thus sociology does not lend itself to an applied physics, but we know a lot about issues like mental health stigma, poverty, formal organizations, and so on. We can actually predict, within reason, more than we presume. More broadly, prediction is but one of the several goals I’ve delineated, and, as such, not a make-or-break criterion for sociology’s status as a science.

The final two goals Reynolds discusses are “understanding” and control. He dismisses the latter, arguing that scientists have a tough time controlling lots and lots of things (e.g., earthquakes), so most of what we study is probably also quite difficult to control. Understanding, for Reynolds, is related to the construction of paradigmatic (for lack of a better word) systems of causal relationships. It is more a framework that a set of scholars works within, one that informs their decision-making. Evolution in biology, for instance, is a perfect example: it guides most of the assumptions, questions, and explanations biologists deal with. Again, one criterion among others.

To these goals (cumulative knowledge, description, explanation, prediction, paradigmatic understanding, control), I would add the more conventional sociological meaning of understanding as a part of the theory-building process, though perhaps not really a criterion for evaluating theory (I continue to think and evolve on this). I think interpretivist sociology, when done well and with serious rigor, can become explanatory over time. Scholars committed to a particular milieu, process, set of actors, etc. can build, over several projects, clearer explanations. With abductive strategies, this logic is already built into the approach (Tavory and Timmermans 2012). However, I see a place for understanding as a goal of science too, especially when the scholar abandons the outdated pretense of the naive observer. In a highly diverse world, where little pockets of the world are cognitively and geographically distant, illuminating their discrete attributes and meanings may, in fact, lead to powerful social science. I know our work on suicide has benefited from trying to understand a community’s meanings surrounding suicide, and we believe that, with more research, much of what we found can be made more abstract and generalized – not so much the details on the ground, unique to Poplar Grove, but the processes and whatnot. In this sense, understanding can begin the process of developing practical tools for dealing with social problems.

Who Cares?
One could argue these points are implicit, baked into the sociological enterprise. As with everything I muse about regarding theory, it all returns to how we teach it and, by extension, how we train students to use theory and to be theoretically minded. The way we teach it is a mess; it lacks coherence and consistency. If we all began with the guiding principle of scientific criteria, we would no longer be able to teach the canon without re-engaging with what it is we are even teaching. If we began with science, then contemporary theory courses would cease to be eclectic, arbitrary exercises in pet theorists, ideological axes, and the like.

This underlying epistemological stance will, hopefully, guide the course I am teaching this term. The goal is to provide practical tools for those who gravitate towards writing more theoretical pieces, but really for all students to be able to articulate more clearly what their theories are, how they inform their research, and what their contribution is. I have always made clear that my position is not the only position, and that they need to decide for themselves what kind of sociologist they want to be. Moreover, I do not discriminate against those pursuing Abend’s other types of theory. But, to accomplish the four goals of the class and produce well-rounded, competent, prepared sociologists, scientism seems the best thread to tie it all together.
