Economic Pain is Coming for the College Graduates of 2020. But Who Will Suffer, and How Will They Suffer, and For How Long?

It’s official: America’s economy is in decline. Economists with the National Bureau of Economic Research announced yesterday that the United States economy, thanks to COVID-19, entered a recession in February, ending its longest period of expansion in the 166 years for which records have been kept.

Although very few Americans will emerge from this crisis with their financial situations unscathed, I am a university professor, so I think a lot in particular about the economic futures of the students who come in and out of our institutions of higher learning. In light of the economic contraction we’ll likely face over the next couple of years, many of the nation’s newly minted college grads are concerned that they are entering a hostile job market that will leave them unemployed for months, underemployed for years, and, in the long run, less competitive due to “skills obsolescence”: In 2021, after all, the class of 2021 will have more up-to-date skills than the class of 2020, and in 2022, the class of 2022 will have more up-to-date skills than the classes of 2020 and 2021.

If the past is prologue, then they’re right to worry.

The Great Recession of 2007-2009 gives us some idea of what the future might hold for the college graduates of the next couple of years. A recent paper by the economist Jesse Rothstein reveals that the young people who graduated just after that recession experienced significant economic setbacks from which they still have not completely recovered. These setbacks were not due merely to the immediate shocks that came from the depressed job market of 2010: Those graduates continue to face disadvantages in the job market to this day, a phenomenon that labor economists call “scarring.”

If history is a reliable guidepost, the Class of 2020 (along with graduates over the next couple of years) should also expect significant struggles: Although 10% of recent graduates may face unemployment in the short term, their more likely fate is underemployment. According to Jaison Abel and Richard Dietz’s research on employment patterns following the Great Recession, our new college grads are likely to face a job market in which fewer of the jobs that are available (compared to the pre-COVID era) will require a college degree. As many as 50% of recent college grads may confront this reality. And 10% of them might find that their first jobs out of college are so-called “low-skilled service jobs” that tend to pay minimum wage. Although we were wrong to catastrophize that the college students who graduated after the Great Recession would all be forced to take jobs as baristas in cool coffee shops, some did, in fact, become baristas.

If there is a bright side to be found in any of this, it’s in the fact that underemployed college graduates will still probably make more money than non-college graduates who work in the same job categories. Even among the underemployed, a college degree will fetch higher wages.

Nevertheless, the economic disadvantages for the class of 2020 and beyond are likely to be substantial and long-lasting. Indeed, a 2018 analysis from Boston College’s Center for Retirement Research, based on data from the aftermath of the Great Recession, suggests that the Class of 2020 should brace itself for significant student debt and for jobs that bring lower wages and fewer fringe benefits. These economic disadvantages may persist long enough to discourage them in their late 20s and early 30s from marrying and from buying their first homes. And they’re really going to need to pay attention to their retirement savings.

Not every group of college graduates will suffer this fate, of course. In fact, how well our recent grads endure the COVID-19 recession is likely to depend greatly on the fields in which they majored. Math, physics, engineering, education, and nursing majors will need to worry less than most about underemployment or employment in low-skilled service jobs. Those who majored in criminal justice, performing arts, leisure and hospitality, anthropology, or art history, on the other hand, may be about to encounter some stiff economic headwinds.

What is Classic Style? A Primer for Social Scientists

This quarter, I’m teaching a course called Writing About Thinking. The course got a soft launch a couple of years back, when I taught it as an undergraduate seminar at The University of Miami. Now that I’m at UCSD, I am teaching a more advanced version of the course to a very nice group of our PhD students. The course is based on a simple premise: Writing about thinking, which every psychologist must do, is hard, but it’s possible to get better at it by first thinking about thinking. The course, therefore, involves excursions into psychological research on communication, cooperation, memory, syntax, argumentation, and, of course, style.

One of the books we’re reading is Francis-Noël Thomas and Mark Turner’s little book Clear and Simple as the Truth, in which they explicate a style of writing they call Classic Style. It’s an intentionally coy, playful little book that teaches as much about Classic Style by showing what Classic Style is as by telling what Classic Style is.

The Classic Style, as Thomas and Turner lay it out, involves several guiding principles. Here are eight principles that I think are among the most important. 

(1) It is based on the conceit that it is possible to say things about the world that are true, and that it is the writer’s job to point to these things. 

(2) It assumes a writer who “takes the pose of full knowledge” and is competent to explain everything the reader needs to know to understand the subject.

(3) It rests on the gambit that the reader is no less intelligent than the writer. The only difference between the writer and the reader is that the writer happens to know something that the reader doesn’t. The reader is perfectly competent to acquire this truth.

(4) It relies on a writer who is confident in her own abilities. She resists the temptation to argue for the importance of her subject matter, she abstains from complaining about how hard writing is or how hard-won her insights are, and she avoids self-reflection and rumination. The classic-style writer hides her effort, but because she exerted herself so mightily in advance, the end product of her effort appears effortless, as if it could have been written in no other way.

Here, Thomas and Turner convey this idea in what I regard as a triumph of Classic Style:

The classic writer is not like a television cook showing you how to mix mustard and balsamic vinegar. He is like a chef whose work is presented to you at table but whose labor you are never allowed to see, a labor the chef certainly does not expect you to share. There are no salt and pepper shakers on your table.

(5) Because the writer and the reader are intellectual equals, and because the writer is pointing at true things in the world, the two of them can have a conversation. Classic-style writing, when read aloud, sounds like one person talking to another, like a really good tour guide when you’re visiting a museum or a foreign city.

(6) Sentences and paragraphs go somewhere. Each unit of meaning, Thomas and Turner write, “has a clear direction and goal.” The payoff comes at the end of the sentence or passage, but to get to that payoff, the reader must follow a path, made of several steps, along which the writer is leading him.

(7) With all of its reality and pointing and seeing and touring, the Classic Style relies on the same image schemas we use to interact with the physical world. Ideas have weight; they develop. Arguments go somewhere. We follow lines of reasoning. By relying on physical imagery, Classic Style is able to recruit some of the cognitive processes that we use so successfully to navigate the real world.

(8) No topic is so complex that it cannot be explained.

The first part of Clear and Simple as the Truth is the exposition. The second part is “The Museum,” consisting of a variety of classic-style passages, along with Thomas and Turner’s analyses of them. The Museum is well worth a visit, but its examples are not as helpful for social scientists as examples from actual social science might be. I was therefore very pleased to discover yesterday that one of my favorite articles in psychology–Denny Borsboom, Gideon Mellenbergh, and Jaap van Heerden’s “The Concept of Validity” (which I am currently re-reading for a paper I’m working on, and which I blogged about earlier here)–is an exemplar of classic style.

At the opening of the paper, you find this marvel:

Please take a slip of paper and write down your definition of the term construct
validity. Now, take the classic article of Cronbach and Meehl (1955), who invented the concept, and a more recent authoritative article on validity, for instance that of Messick (1989), and check whether you recognize your definition in these works. You are likely to fail. The odds are that you have written down something like “construct validity is about the question of whether a test measures what it should measure.” If you have read the articles in question carefully, you have realized that they do not conceptualize validity like you do. They are not about a property of tests but about a property of test score interpretations. They are not about the simple, factual question of whether a test measures an attribute but about the complex question of whether test score interpretations are consistent with a nomological network involving theoretical and observational terms (Cronbach & Meehl, 1955) or with an even more complicated system of theoretical rationales, empirical data, and social consequences of testing (Messick, 1989).

Who in psychology opens a paper like that? Too few of us.

A little further along, there’s this:

The argument to be presented is exceedingly simple; so simple, in fact, that it articulates an account of validity that may seem almost trivial. It is as follows. If something does not exist, then one cannot measure it. If it exists but does not causally produce variations in the outcomes of the measurement procedure, then one is either measuring nothing at all or something different altogether. Thus, a test is valid for measuring an attribute if and only if (a) the attribute exists and (b) variations in the attribute causally produce variations in the outcomes of the measurement procedure. The general idea is based on the causal theory of measurement (e.g., Trout, 1999).

And then this: 

That the position taken here is so at variance with the existing conception in the literature is largely because in defining validity, we have reversed the order of reasoning. Instead of focusing on accepted epistemological processes and trying to fit in existing test practices, we start with the ontological claim and derive the adequacy of epistemological practices only in virtue of its truth. This means that the central point in validity is one of reference: The attribute to which the psychologist refers must exist in reality; otherwise, the test cannot possibly be valid for measuring that attribute. This does not imply that the attribute cannot change over time or that psychological attributes are unchanging essences (cf. Kagan, 1988). It does imply that to construe theoretical terms as referential requires a realist position about the phenomena to which such terms refer. Thus, measurement is considered to involve realism about the measured attribute. This is because we cannot see how the sentences Test X measures the attitude toward nuclear energy and Attitudes do not exist can both be true. If you agree with us in this, then you are in disagreement with some very powerful philosophical movements that have shaped validity theory to a large extent.

In spite of their scholarly apparatus (such as citations in parentheses, maybe slightly too much meta-discourse), these passages bear all of the marks of Classic Style. No hedging, no apologizing, no showing off, plenty of grounding in spatial imagery (with its taking of positions, reversals of causal orderings, and so on), and a confidence that even a very complicated idea can be expressed in plain English to any reader who is willing to take some time out to “talk” with an expert about it.

As a bonus, the paper itself pushes what I regard as a classic-style view of science, measurement, and validity. On Borsboom and colleagues’ view of measurement, things either exist or they don’t, and it’s only the things that exist that can be measured. And a measure has validity as a measure of that invisible entity (intelligence, self-esteem, reading comprehension, or whatever) only if that invisible entity is real and if that entity is involved in the chain of causal processes that lead to the representations that we take to be “measurements.” Reality is out there, validity is much simpler than you think, and when we do measurement, we take a sounding of real things. I love the fit here between the writers’ medium and their message: Borsboom and colleagues help their case along through clear, confident, conversational writing that asks the reader to do no more than look where the writer is pointing.

The UK Publication of My Upcoming Book, The Kindness of Strangers, has been delayed until September 2020

I just received word from OneWorld Publications, which is publishing The Kindness of Strangers in the UK, that they are delaying publication until September.

By then, one hopes, the world will be in good enough shape that people will have the bandwidth to turn their attention to non-COVID matters.

Until then, please enjoy the UK cover for the book, which I think is just dandy.

Trust in the Time of Coronavirus: Low Trusters are Particularly Skeptical of Local Officials and Their Own Neighbors

A few days ago, I saw the results of a new Pew poll on Americans’ trust in the wake of the Coronavirus outbreak. The poll, based on a random sample of 11,537 U.S. adults, addressed two questions: Which groups of people and societal institutions do Americans trust right now? And how do their background levels of generalized trust influence their trust in those specific groups of people and institutions?

The takeaway is troubling: High trusters and low trusters have comparable amounts of trust in our federal agencies and national institutions, but they have vastly different amounts of trust in the responses and judgments of their local officials and neighbors.

To examine these issues, the Pew researchers first divided the sample into three groups based on their responses to three standard questions for measuring generalized trust. Helpfully, they called these three subgroups Low Trusters, Medium Trusters, and High Trusters.

As many other researchers have found, generalized trust was associated with ethnicity (white Americans have higher levels of generalized trust than Black and Hispanic Americans do), age (the more you have of one, the more you have of the other), education (ditto), and income (ditto). These results are hardly surprising–ethnicity, age, education, and income are among the most robust predictors of trust in survey after survey–but they do nevertheless provide an interpretive backdrop for the study’s more important findings.

What really struck me were the associations of people’s levels of generalized trust and their sentiments toward public institutions and groups of other people. Low, medium, and high trusters had fairly similar evaluations of how the CDC, the news media, and even Donald Trump were responding: On average, people at all three levels of generalized trust had favorable evaluations of the CDC; on average, people at all three levels of generalized trust had lukewarm evaluations of Trump’s response.

Where the three groups of trusters differed more conspicuously was in their evaluations of their state officials, their local officials, and–most strikingly–ordinary people in their communities. About 80% of high trusters thought their local and state officials were doing an excellent or good job of responding to the outbreak. Only 57% of low trusters said the same.

But the biggest gulf in the sentiments of high trusters and low trusters was in their evaluations of ordinary people in their communities. Eighty percent of high trusters said that ordinary people in their community were doing an excellent or good job in responding to the outbreak. Only 44% of low trusters approved.


High trusters, medium trusters, and low trusters also had widely divergent opinions about the responses of ordinary people–both across the country and in their local communities.

Most people, regardless of how much generalized trust they had, thought their state governments, local governments, and local school systems were responding with the right amount of urgency to the outbreak. However, high trusters and low trusters differed greatly in their attitudes toward the responses of their neighbors. Whereas 16% of high trusters thought ordinary people in their local communities were overreacting, 35% of the low trusters–more than twice as many–thought ordinary people in their local communities were overreacting.

What I find troubling about these statistics is that all epidemics, like all politics, are local. The people who should be best equipped to tell you about what’s going on in your community are the people who are paid to know what’s going on in your community and the people who actually live in your community. We’re entitled to clear and accurate information from local officials, and we should be ashamed that local people cannot always trust their judgment. But local officials are not the only source of information that people should be able to trust. An ordinary person in your community could, in principle, be able to tell you whether a teacher at your kid’s school or a cashier at your local grocery store tested positive. How much unnecessary risk do we expose ourselves to when some of us inhabit communities or worldviews that cause us to perceive our local officials and neighbors as liars, incompetents, or chicken-littles?

Social Distancing By the Numbers: Who’s Staying Home?

The New York Times has been doing some excellent reporting about the spread of COVID-19. I particularly admire their graphics, which put the message into a visual form that anyone with the eyes to see can comprehend and appreciate.

One hopes that most Americans now know that COVID-19 spreads through person-to-person contact, and that the best way to avoid contracting or spreading the virus is to avoid interacting with others in close proximity–or better still, to simply stay home. Has this message sunk in? The visualizations published in today’s NYT (which are not only informative, but also beautiful) are based on analyses of anonymized cell phone data from 15 million Americans, and they show just how much (or how little) people in each U.S. county have been curtailing their travel over the past few weeks.

The three lessons these data teach are striking and troubling.

First, there is tremendous county-by-county variation in how much people have reined in their travel. In some counties (in the light pastels and greys below), travel has ground to a near standstill, with the average daily travel declining from five miles a day to around a mile or so:

Clearly, people in those light-pastel and grey counties have stopped driving their cars and have turned instead to walking their dogs:

Second, the declines in travel are not uniformly distributed across the nation. It is particularly noteworthy that counties with stay-at-home orders in place have had much steeper reductions in travel than counties without them. People in counties with stay-at-home orders have curtailed their travel by 80% or so; those in counties without stay-at-home orders have curtailed their travel by maybe 65%. That difference of 15 percentage points might not sound like much, but it’s actually a huge effect, so readily comprehensible to the naked eye that you don’t even have to do any statistics on the data to appreciate the difference:

Third, the counties with stay-at-home orders are mostly concentrated in the Northeast, the West Coast, and the Midwest. Unsurprisingly, given how few stay-at-home orders are in place there, the counties in which people have reduced their travel the least are concentrated in the South. In Duval County, where I grew up, people were still driving about 3.4 miles per day this past Friday, making it the third least stay-at-home large county in the nation. (My family members in Duval County, to my great relief, have been locked down in their homes for two weeks.)

These figures say all we really need to know about staying at home during this crisis: Whether or not you like the idea of state or county officials ordering Americans to stay at home during outbreaks of communicable diseases (for what it’s worth, the federal government arrogated that power long ago, and has exercised it with impunity, as the need has arisen, for centuries), stay-at-home orders seem to be working (bearing in mind the standard caveats about correlation vs. causation). The apparent effectiveness of stay-at-home orders at getting people to stay at home is so striking that it’s almost as if people possess a tendency to heed the directives of people in positions of legitimate authority–particularly when those people have the ability to impose sanctions.

A final lesson, equally clear, is that the Southern states, along with Texas, Oklahoma, Kansas, Wyoming, and a few others, are still in for a great deal of pain.

Empathy: Does “Putting Yourself in the Other Person’s Shoes” Make any Difference?

Are humans hardwired to care about strangers? Glancing over my bookshelves, titles such as Born to Be Good, The Compassionate Instinct, and The Altruistic Brain remind me that many of my scientific colleagues answer this question with a resounding yes. Each of these books, in its own way, teaches that the animal designated Homo sapiens has evolved for compassion. Caring about strangers is just part of who we are. If it doesn’t come effortlessly, all it takes is some patience and some practice. Attend a workshop. Volunteer at a homeless shelter. Read some fiction. Meditate. Compassion is inside of you. You just need to coax it out.

One of the ways we have been taught to coax empathy out is by deliberately trying to take the perspective of a suffering person. “Try to see things from his point of view.” “Imagine how it would feel to walk a mile in her shoes.” “How would you feel if the shoe were on the other foot?” (A surprising number of shoes make an appearance in these aphorisms.) We encourage our kids to take the perspective of the people who might be negatively affected by their nasty or self-centered behavior, hoping that our admonitions are doing something to turn them into better people. But does encouraging people to take the perspective of others actually work?

For a half-century (give or take a few months), experimental psychologists have been working under the assumption that perspective-taking does, in fact, encourage empathy. The social psychologist Ezra Stotland was the first person to try to encourage empathy with what have come to be called “perspective-taking instructions.” According to Stotland’s research, it worked.

By the way, here’s a fun photo of Professor Stotland with Ted Bundy. That’s Ted on the left; Ezra’s on the right. (This really deserves a blog entry of its own.)


Following Stotland’s 1969 lead, researchers have been using perspective-taking instructions in attempts to manipulate empathy experimentally for five decades. In the typical experiment, subjects encounter a stranger in the lab who is going through something difficult in his or her personal life; then, the experimenter asks subjects to do one of several things. To encourage perspective-taking, researchers might instruct subjects to

try to imagine how the person feels about what has happened and how it has affected his or her life. Try not to concern yourself with attending to all of the information presented. Just concentrate on trying to imagine how the person feels.

In a variant of these standard perspective-taking instructions, researchers instruct participants to imagine how they (rather than the suffering person) might feel in a similar predicament:

try to imagine how you yourself would feel if you were experiencing what has happened to the person and how this experience would affect your life. Try not to concern yourself with attending to all of the information presented. Just concentrate on trying to imagine how you yourself would feel.

To encourage still other subjects to remain objective (under the premise that doing so will squelch empathy), researchers instruct subjects to

try to be as objective as possible about what has happened to the person and how it has affected his or her life. To remain objective, do not let yourself get caught up in imagining what this person has been through and how he or she feels as a result. Just try to remain objective and detached.

In the ideal experiment, researchers also assign some subjects to an experimental condition in which they receive no instructions at all. They just learn about a person in need without any prompting to do anything in particular in response. These subjects serve as a control group that enables experimenters to find out both (a) whether perspective-taking increases empathy, and (b) whether remaining objective reduces empathy. Without such a control group, any differences in empathy that arise between people who engage in perspective taking and people who remain objective cannot be attributed to either condition: As a result, we can’t know whether perspective-taking raised empathy above its typical levels, remaining objective lowered empathy below its typical levels, or a little of both. As we’ll see below, this turns out to be really important.
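To see why the control group matters so much, it helps to put toy numbers on the design. The means below are invented purely for illustration (they are not data from any study), but they show how the same two-group gap is consistent with completely different stories:

```python
# Hypothetical mean empathy ratings (on a 1-7 scale) for three conditions.
# These numbers are invented to illustrate the design logic, nothing more.
perspective_taking = 5.0
remain_objective = 4.0

# Without a no-instructions control, this difference is ambiguous:
# did perspective-taking raise empathy, or did "remain objective" lower it?
ambiguous_gap = perspective_taking - remain_objective  # 1.0

# A control group resolves the ambiguity.
control = 5.0  # subjects who received no instructions at all

pt_effect = perspective_taking - control  # 0.0: perspective-taking did nothing
ro_effect = remain_objective - control    # -1.0: "remain objective" suppressed empathy

# The two effects jointly account for the gap, so the gap alone
# can never tell you how much each condition contributed.
assert ambiguous_gap == pt_effect - ro_effect
```

In this toy scenario, the entire perspective-taking-versus-objective difference comes from the objective condition pushing empathy down, which is exactly the kind of pattern a two-group comparison cannot detect on its own.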

My colleagues and I, with the psychologist William McAuliffe in charge, just published a statistical review (called a meta-analysis) of the results of every experimental investigation that we could get our hands on that compared the effects of these instructional sets on self-reported empathic emotion toward a needy stranger.

The paper was published here, but you can download a pre-publication copy of it here.

We found 85 papers in all. From these 85 papers, we extracted 177 comparisons between pairs of the four experimental conditions (imagine-other, imagine-self, remain objective, no instructions).
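For readers curious about the mechanics: the basic ingredient of a meta-analysis like this is a standardized mean difference computed for each two-group comparison, which puts studies that used different rating scales on a common footing. Here is a minimal sketch with invented group statistics (this is not the paper’s actual data or analysis code):

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference between two independent groups,
    scaled by the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

def hedges_g(d, n1, n2):
    """Small-sample bias correction applied to Cohen's d."""
    return d * (1 - 3 / (4 * (n1 + n2) - 9))

# Invented example: a perspective-taking group vs. a remain-objective group.
d = cohens_d(mean1=5.1, sd1=1.2, n1=40, mean2=4.3, sd2=1.3, n2=40)
g = hedges_g(d, 40, 40)
print(round(d, 2), round(g, 2))  # prints: 0.64 0.63
```

Each of the 177 comparisons yields one such effect size, and the meta-analysis then pools them (weighting by precision) within each pair of conditions.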

Here’s a very quick summary of what we found when we meta-analyzed those 177 two-group comparisons. There are some surprises.

1. Imagining how the needy person feels does not generate any more empathy than imagining how you yourself might feel in the same situation.

In other words, “Imagining how he/she might feel” = “imagining how you might feel.”

2. Imagine-other and imagine-self instructions do not generate any more empathy than receiving no instructions at all.

In other words, Perspective-Taking = No Instructions.

3. People instructed to remain objective experience less empathy than people who are not given any instructions at all.

In other words, No Instructions > Remain Objective.

4. People who get perspective-taking instructions experience more empathy than people instructed to remain objective.

In other words, Perspective Taking > Remain Objective. (SEE: TRANSITIVE PROPERTY OF MATHEMATICS.) This contrast is the only reason why perspective-taking instructions appear to boost empathy. They don’t. Instead, remaining objective reduces empathy.

For people who like to stare at the results of meta-analyses, here is a figure that summarizes those results reasonably well.


Take a moment to let these findings sink in. What they show is that perspective-taking instructions do not, as a matter of fact, increase empathy: They’re no better than being given no instructions at all. Instead, it appears that “remain objective” instructions lower empathy.

By the way, we also examined whether perspective-taking instructions affect men’s and women’s empathy differently, or whether they alter our empathy levels differently when we are trying to empathize with people who belong to our own social groups than when we are trying to empathize with people who belong to other social groups. These two factors made no difference.

So, as I see it, there are three big take-aways.

  • While it may be true that trying to take the perspective of needy others encourages empathy for their plights, it’s not true to say that there’s a great deal of experimental evidence for it. Things can be true without being supported by experimental evidence, of course, but the lack of support must surely be some kind of wake-up call to re-think our assumptions.
  • Perspective-taking instructions previously appeared to increase empathy only because they were being compared against “remain objective” instructions, which do, in fact, reduce empathy.
  • It does appear that we know how to restrain people from feeling empathy: just tell them to remain objective and ignore how the needy person might be feeling.

What should we make of these results? I see a glass-half-empty interpretation and a glass-half-full interpretation.

Glass-half-empty: The major tool that social psychologists have counted on for half a century for increasing people’s empathy doesn’t work. We have a nice tool for reducing empathy, though: Just tell people to ignore the suffering person’s feelings.

Glass-half-full: People could be walking around in the world with the same amount of empathy for needy others as they would experience if they were explicitly instructed to take the perspective of needy others. Maybe we take needy people’s perspectives so intuitively that we can’t get any additional bang for the buck by trying to do it deliberately. Maybe, by default, we’re more empathic than callous.

Reference: McAuliffe, W. H. B., Carter, E. C., Berhane, J., Snihur, A. C., & McCullough, M. E. (2019). Is Empathy the Default Response to Suffering? A Meta-Analytic Evaluation of Perspective Taking’s Effect on Empathic Concern. Personality and Social Psychology Review. https://doi.org/10.1177/1088868319887599

The Golden Rule: Gold or Fool’s Gold?

The Golden Rule—do unto others as you would have them do unto you—didn’t get its start with a 1961 Norman Rockwell painting. It’s the ethical bedrock for the major world religions, including Hinduism, Confucianism, Judaism, Christianity, and Islam, so it has been swimming around in people’s consciences for at least two millennia. In his Analects, for example, Confucius wrote, “What you do not want done to yourself, do not do to others.”[1] The Mahabharata of Hinduism gives similar guidance: “Knowing how painful it is to himself, a person should never do that to others which he dislikes when done to him by others.”[2] The book of Leviticus from the Hebrew Bible features a Yahweh who commands his followers, “Thou shalt love thy neighbor as thyself.” Five centuries later, Jesus took the “love thy neighbor” idea even further by using the Parable of the Good Samaritan to assert that people have ethical obligations to help strangers in their times of need—even strangers from other ethnic groups.

Confucius, Yahweh, and Jesus didn’t teach the Golden Rule because they thought it was cute: They taught the Golden Rule because they believed it makes for more ethical people. Not everyone agrees that it does, however—or even that it could. In fact, many modern philosophers think the Golden Rule is philosophical claptrap.

In his defense of the Golden Rule, the philosopher W. T. Blackstone first listed the charges: It’s a flawed ethical principle because it implies that we can figure out how to treat others morally simply by consulting our own wants and needs. It’s flawed because it leads us to treat others immorally if immoral treatment is what we want for ourselves. It’s flawed because it insists that we can look inward to discover what is right, even though this habit of thought breeds ethnocentrism and motivates us to perpetuate society’s moral status quo, no matter how ethically flawed the status quo might be.[3] Or imagine a judge who uses the Golden Rule to justify why she decides to let a convicted mass murderer go free: If the shoe were on the other foot, she would want to avoid prison time, so shouldn’t she extend the same consideration to the killer? Because the Golden Rule seems to have these sorts of limitations, the ethicist Kwame Anthony Appiah has called it “fool’s gold.”[4]

I wonder if the Golden Rule really deserves so much cynicism. It seems to me that most of the philosophers’ worries are quite silly unless you assume that the person attempting to live by the Golden Rule has the intellect and reasoning powers of a five-year-old. A masochist who gets sexual pleasure from abuse at the hands of others, yet seeks to live by the Golden Rule, doesn’t follow it so slavishly as to assume that it obligates him to abuse other people in the same way. Instead, he knows that others might have tastes and preferences that differ from his own. Likewise, the judge who seeks to follow the Golden Rule in her professional decisions doesn’t need to vacate the sentences of mass murderers. Instead, she also considers her obligations to the law-abiding people who would not want convicted murderers running around free.

The philosopher Harry Gensler is the world’s leading exponent of the idea that the Golden Rule can, when read properly, withstand close ethical scrutiny. As he explains in his book Ethics and the Golden Rule, many philosophical objections to the Golden Rule vanish once we have a better grasp on how to implement the rule intelligently and reasonably. Gensler recommends a series of four steps, which he summarizes with the acronym KITA (Know-Imagine-Test-Act). In the Know step, we take time to learn what will help a specific person and what will harm him. A conscientious Golden Rule follower does his homework. After having learned about the other person’s basic needs and desires, a conscientious Golden Rule follower will then implement the Imagine step by trying to imagine how his possible courses of action will affect others. Gensler isn’t talking about idle, half-second flashes of intuition. He’s talking about deliberate effort to work through the possible consequences for everyone who might be affected. The judge has to consider not only how her sentence will affect the convicted criminal, but also how it will affect the citizens who don’t want a convicted murderer released back into their community.

After obtaining the relevant facts and running the simulations to figure out which courses of action will harm and which will help, a conscientious Golden Rule follower can proceed to the third step in KITA, which is a Test for consistency: We must ask whether the action we have in mind is what we would want for ourselves. If the behavior we have in mind passes this consistency test—if we conclude that the behavior we intend to impose on someone else is consistent with how we think we would desire to be treated in exactly the same circumstances, including sharing that person’s beliefs and desires—then we are ready to execute KITA’s fourth step: we can Act.[5]
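Read this way, KITA is almost a decision procedure, and it can be caricatured as one. In the Python sketch below, the people, actions, and “acceptable outcomes” are my own hypothetical encoding of the judge’s dilemma for illustration, not anything Gensler himself specifies:

```python
# A deliberately crude sketch of Gensler's KITA steps (Know-Imagine-Test-Act).
# The people, actions, and "acceptable" outcome sets are hypothetical
# illustrations, not anything Gensler specifies.

def kita(action, people):
    # Know: learn each affected person's needs and desires.
    facts = {name: p["desires"] for name, p in people.items()}
    # Imagine: deliberately work out how the action would affect each person.
    outcomes = {name: action(name, facts[name]) for name in people}
    # Test: in each person's exact circumstances (with *their* desires),
    # would I accept this outcome? Act only if every answer is yes.
    return all(outcomes[name] in people[name]["acceptable"] for name in people)

# The judge's dilemma: everyone affected by the sentencing decision.
people = {
    "murderer": {"desires": "freedom", "acceptable": {"freedom", "fair sentence"}},
    "public":   {"desires": "safety",  "acceptable": {"safe streets"}},
}

def release(name, desires):
    return "freedom" if name == "murderer" else "unsafe streets"

def sentence(name, desires):
    return "fair sentence" if name == "murderer" else "safe streets"

print(kita(release, people))   # False: the public's claims block release
print(kita(sentence, people))  # True
```

Notice that the Know and Imagine steps do the real work: the consistency Test is only as good as the facts and simulated consequences fed into it, which is exactly Gensler’s point about doing one’s homework.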

I appreciate Gensler’s efforts to defend the Golden Rule’s honor. I am not totally satisfied that it does everything we might want from an overarching guide to a moral life, but it does seem to make others’ welfare a primary moral consideration, which sits well with my own Utilitarian leanings. In addition, Gensler’s KITA routine does seem to help us avoid most of the pitfalls associated with a five-year-old’s application of the Golden Rule—even though I am skeptical that most of us would take the time to go through all of those steps in real life. Who has the time to do all that homework?

Even so, the fact that living by the Golden Rule is cognitively difficult doesn’t mean it’s dumb to try.

 

[1] Confucius (trans. 1861), Book 15, Chapter 23.

[2] Krishna-Dwaipayana Vyasa (trans. 1896), Book XII, Section 259, p. 620.

[3] Blackstone (1965).

[4] Appiah (2006), p. 60.

[5] Gensler (2013).

Appiah, K. A. (2006). Cosmopolitanism: Ethics in a world of strangers. New York: Norton.

Blackstone, W. T. (1965). The Golden Rule: A defense. The Southern Journal of Philosophy, 3, 172-177.

Confucius. (trans. 1861). The analects of Confucius (J. Legge, Trans.). Pantianos Classics.

Gensler, H. J. (2013). Ethics and the golden rule. New York: Routledge.

Krishna-Dwaipayana Vyasa. (trans. 1896). The Mahabharata (K. M. Ganguli, Trans.). (n.p.): Author. (Reprinted 2018).

 

Behavioral Altruism is an Unhelpful Scientific Category

Altruism has been a major topic in evolutionary biology since Darwin himself, but altruism (the word) did not appear even once in Darwin’s published writings.[1] The word’s absence from Darwin’s writings is hardly surprising: Altruism had appeared in print for the first time only eight years before The Origin of Species. The coiner was a Parisian philosopher named Auguste Comte.

Capitalizing on the popularity he had already secured for himself among liberal intellectuals in both France and England, Comte argued that Western civilization needed a complete intellectual renovation, starting from the ground up. Not one to shrink from big intellectual projects, Comte set out to do this re-vamping himself, resulting in four hefty volumes. Comte’s diagnosis: People cared too much for their own welfare and too little for the welfare of humanity. The West, Comte thought, needed a way of doing society that would evoke less égoïsme, and inspire more altruisme.

Comte saw a need for two major changes. First, people would need to throw out the philosophical and religious dogma upon which society’s political institutions had been built. In their place, he proposed we seek out new principles, grounded in the new facts emerging from the new sciences of the human mind (such as the fast-moving scientific field of phrenology), human society (sociology), and animal behavior (biology).

Second, people would need to replace Christianity with a new religion in which humanity, rather than the God of the Abrahamic religions, was the object of devotion. In Comte’s new world, the 12-month Gregorian calendar would be replaced with a scientifically reformed calendar consisting of 13 months (each named after a great thinker from the past—for example, Moses, Paul the Apostle, Gutenberg, Shakespeare, and Descartes) of 28 days each (throw in a “Day of the Dead” at the end and you’ve got your 365-day year). Also, the Roman Catholic priesthood would be replaced with a scientifically enlightened, humanity-loving “clergy” with Comte himself—no joke—as the high priest.

Comte’s proposals for a top-down re-shaping of Western society didn’t get quite the reception he was hoping for (though they caught on better than you might think: If you’re ever in Paris or Rio, pay a visit to the Temples of Humanity that Comte’s followers founded around the turn of the 20th century). In England especially, the scientific intelligentsia’s response was frosty. On the advice of his friend Thomas Huxley, Darwin also steered clear of all things Comtean, including altruism.

Nevertheless, altruism was in the air, and its warm reception among British liberals at the end of the 19th century is how the word percolated into everyday language. It’s also why the word is still in heavy circulation today. The British philosopher Herbert Spencer, an intellectual rock star of his day, was a great admirer of Comte, and he played a major role in establishing a long-term home for altruism in the lexicons of biology, social science, and everyday discourse.[2] Spencer used the term altruism in three different senses—as an ethical ideal, as a description of certain kinds of behavior, and as a description for a certain kind of human motivation. (He wouldn’t have understood how to think about it as an evolutionary concept.)[3]

Here, I want to look at Spencer’s second use of the word altruism—as a description of a class of behaviors—because I think it is a deeply flawed scientific concept, despite its wide usage. At the outset, I should note that as a Darwinian concept—an evolutionary pathway by which natural selection can create complex functional design by building traits in individuals that cause them to take actions that increase the rate of replication of genes locked inside their genetic relatives’ gonads—altruism has none of the conceptual problems that behavioral altruism has.

By altruism in the behavioral sense, Spencer meant “all action which, in the normal course of things, benefits others instead of benefiting self.”[4] A variant of this definition is embraced today by many economists and other social scientists, who use the term behavioral altruism to classify all “costly acts that confer benefits on other individuals.”[5] Single-celled organisms are, in principle, as capable of Spencerian behavioral altruism as humans are. Social scientists who subscribe to the behavioral definition of altruism have applied it to a wide range of human behaviors. Have you ever jumped into a pool to save a child or onto a hand grenade to spare your comrades? Donated money to your alma mater or a charity? Given money, a ride, or directions to a stranger? Served in the military? Donated blood, bone marrow, or a kidney? Reduced, re-used, or recycled? Adopted a child? Held open a door for a stranger? Shown up for jury duty? Volunteered for a research experiment? Taken care of a sick friend? Let someone in front of you in the check-out line at the grocery store? Punished or scolded someone for breaking a norm or for being selfish? Taken found property to the lost and found? Tipped a server in a restaurant in a city you knew you’d never visit again? Pointed out when a clerk has undercharged you? Lent your fondue set or chain saw to a neighbor? Shooed people away from a suspicious package at the airport? If so, then you, according to the behavioral definition, are an altruist.[6]

Some economists seek to study behavioral altruism in the laboratory with experimental games in which researchers give participants a little money and then measure what they do with it. The Trust Game, which involves two players, is a great example. We can call the first player the Investor because he or she is given a sum of money—say, $10—by the experimenter, some or all of which he or she can send to the second player, whom we can call the Trustee. The Investor knows that every dollar entrusted to the Trustee gets multiplied by a fixed amount—say, 3—so if the Investor transfers $1, the Trustee ends up with $3 more in his or her account. The Investor also knows that the Trustee will subsequently decide whether to transfer some money back. Under these circumstances, according to some experimental economists, if the Investor sends money to the Trustee, it is “altruistic” because it is a “costly act that confers an economic benefit upon another individual.”[7] But the lollapalooza of behavioral altruism doesn’t stop there: It’s also altruistic, per the behavioral definition that economists embrace, if the Trustee transfers money back to the Investor. Here, too, one person is paying a cost to provide a benefit to another person.
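To make the arithmetic concrete, here is a minimal simulation of one round of the Trust Game. The $10 endowment and the multiplier of 3 come from the description above; the players’ specific choices are my own illustrative numbers:

```python
# One round of the Trust Game. Endowment ($10) and multiplier (3) follow the
# example in the text; the choices of $5 sent and $7 returned are illustrative.

def trust_game(endowment, multiplier, sent, returned):
    """Compute final payoffs for one Investor-Trustee round."""
    assert 0 <= sent <= endowment
    trustee_pot = sent * multiplier          # each dollar sent is tripled
    assert 0 <= returned <= trustee_pot
    investor_payoff = endowment - sent + returned
    trustee_payoff = trustee_pot - returned
    return investor_payoff, trustee_payoff

# The Investor sends $5 of a $10 endowment; the Trustee, now holding $15,
# sends $7 back. Both transfers are "costly acts that confer benefits" and so
# count as behaviorally altruistic under the definition at issue.
print(trust_game(10, 3, sent=5, returned=7))  # (12, 8)
```

Note that both players end up better off than an Investor who sends nothing ($10, $0), which is part of why labeling either transfer “altruistic” invites the skepticism discussed below.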

Notice that motives don’t matter for behavioral altruism. (To social psychologists like Daniel Batson, altruism is a motivation to raise the welfare of another individual, pure and simple. Surprising as it might seem, this is also, in fact, a conceptually viable scientific category. But that’s another blog post.) All that matters for a behavior to be altruistic is that it entails costs to actors and benefits to recipients. Clearly, donating a kidney or donating blood are costly to the donor and beneficial to the recipients, but even when you hold a door open for a stranger, you pay a cost (a few seconds of your time and a calorie or so worth of physical effort) to deliver a benefit to someone else. By this definition, even an insurance company’s agreement to cover the MRI for your (possibly) torn ACL qualifies: After all, the company pays a cost (measured in the thousands of dollars) to provide you with a benefit (magnetic confirmation either that you need surgery or that your injury will probably get better after a little physical therapy).

But a category that lumps together recycling, holding doors for strangers, donating kidneys, serving in the military, and handing money over to someone in hopes of securing a return on one’s investment—simply because they all involve costly acts that confer benefits on others—is a dubious scientific category. Good scientific categories, unlike “folk categories,” are natural kinds—as Plato said, they “carve nature at its joints.” Rather than simply sharing one or more properties that are interesting to a group of humans (for example, social scientists who are interested in a category called “behavioral altruism”), they should share common natural essences, common causes, or common functions. Every individual molecule with the chemical formula H2O is a member of a natural kind—water—because all such molecules share the same basic causes (elements with specific atomic numbers that interact through specific kinds of bonds). These deep properties are the causes of all molecules of H2O that have ever existed and that ever will exist. Natural kinds are not just depots for things that have some sort of gee-whiz similarity.[8]

If behavioral altruism is a natural kind, then knowing that a particular instance of behavior is “behaviorally altruistic” should enable me to draw some conclusions about its deep properties, causes, functions, or effects. But it doesn’t. All I know is that I’ve done something that meets the definition of behavioral altruism. Even though I have, on occasion, shown up for jury duty, held doors open for strangers, received flu shots, loaned stuff to my neighbors, and even played the trust game, simply knowing that they are all instances of “behavioral altruism” does not enable me to make any non-trivial inferences about the causes of my behavior. By the purely behavioral definition of altruism, I could show up for jury duty to avoid being held in contempt of court, I could give away some old furniture because I want to make some space in my garage, and I could hold the door for someone because I’m interested in getting her autograph. The surface features that make these three behaviors “behaviorally altruistic” are, well, superficial. Knowing that they’re behaviorally altruistic gives me no new raw materials for scientific inference.

So if behavioral altruism isn’t a natural kind, then what kind of kind is it? Philosophers might call it a folk category, like “things that are white,” or “things that fit in a bread box,” or “anthrosonic things,” which comprise all of the sounds people can make with their bodies—for example, hand-claps, knuckle- and other joint-cracking, the lub-dub of the heart’s valves, the pitter-patter of little feet, sneezes, nose-whistles, coughs, stomach growls, teeth-grinding, and beat-boxing. Anthrosonics gets points for style, but not for substance: My knowing that teeth-grinding is anthrosonic does not enable me to make any new inferences about the causes of teeth-grinding because anthrosonic phenomena do not share any deep causes or functions.

Things that are white, things that can fit in a bread box, anthrosonics, things that come out of our bodies, things we walk toward, et cetera–and, of course, behavioral altruism–might deserve entries in David Wallechinsky and Amy Wallace’s entertaining Book of Lists[9], but not in Galileo’s Book of Nature. They’re grab-bags.

~

[1] Dixon (2013).
[2] Spencer (1870-1872, 1873, 1879).
[3] Dixon (2005, 2008, 2013).
[4] Spencer (1879), p. 201.
[5] Fehr and Fischbacher (2003), p. 785.
[6] See, for instance, Silk and Boyd (2010), Fehr and Fischbacher (2003); Gintis, Bowles, Boyd, & Fehr (2003).
[7] Fehr and Fischbacher (2003), p. 785.
[8] Slater and Borghini (2011).
[9] Wallechinsky and Wallace (2005).

REFERENCES

Dixon, T. (2005). The invention of altruism: August Comte’s Positive Polity and respectable unbelief in Victorian Britain. In D. M. Knight & M. D. Eddy (Eds.), Science and beliefs: From natural philosophy to natural science, 1700-1900 (pp. 195-211). Hampshire, England: Ashgate.

Dixon, T. (2008). The invention of altruism: Making moral meanings in Victorian Britain. Oxford, UK: Oxford University Press.

Dixon, T. (2013). Altruism: Morals from history. In M. A. Nowak & S. Coakley (Eds.), Evolution, games, and God: The principle of cooperation (pp. 60-81). Cambridge, MA: Harvard University Press.

Fehr, E., & Fischbacher, U. (2003). The nature of human altruism. Nature, 425, 785-791.

Gintis, H., Bowles, S., Boyd, R., & Fehr, E. (2003). Explaining altruistic behavior in humans. Evolution and Human Behavior, 24, 153-172.

Silk, J. B., & Boyd, R. (2010). From grooming to giving blood: The origins of human altruism. In P. M. Kappeler & J. B. Silk (Eds.), Mind the gap: Tracing the origins of human universals (pp. 223-244). Berlin: Springer Verlag.

Slater, M. H., & Borghini, A. (2011). Introduction: Lessons from the scientific butchery. In J. K. Campbell, M. O’Rourke, & M. H. Slater (Eds.), Carving nature at its joints: Natural kinds in metaphysics and science (pp. 1-31). Cambridge, MA: MIT Press.

Spencer, H. (1870-1872). Principles of psychology. London: Williams and Norgate.

Spencer, H. (1873). The study of sociology. London: H. S. King.

Spencer, H. (1879). The data of ethics. London: Williams and Norgate.

Wallechinsky, D., & Wallace, A. (2005). The book of lists: The original compendium of curious information. Edinburgh, Scotland: Canongate Books.

Evolution’s Gravity: A Paean to Natural Selection

Physicists speak of four fundamental forces that govern the interactions among the bits of matter that make up our universe. The strongest of these four forces, aptly known as the Strong Force, is so powerful that it can keep an atom’s positively charged protons from ripping the atom’s nucleus apart as their mutually repellent positive charges push them in opposite directions. The second fundamental force, electromagnetism, is 137 times weaker than the strong force, but its ability to cause bits of matter with opposing electrical charges to attract each other, and to cause bits of matter with like charges to avoid each other, is what gives unique three-dimensional structure to atoms, molecules, and even the proteins that form the building blocks of our body’s cells. At only one-millionth the strength of the strong force, the third fundamental force—the so-called weak force—changes quarks from one bizarre “flavor” to another and gives rise to nuclear fusion reactions.

The weak force deserves a better name: It’s actually the fourth force—gravity—that’s the weakling of the bunch. At only 6/1,000,000,000,000,000,000,000,000,000,000,000,000,000 the strength of the strong force, the influence of gravity on the interactions of protons, quarks, and other subatomic particles amounts to, well, about as close to zero as you can get. When I use the refrigerator magnet that holds up my kid’s school photo to lift the ring of keys on the kitchen table, the magnet easily overcomes the gravitational pull of the entire planet. At Subatomic Beach, gravity is the scrawny guy who’s always getting sand kicked in his face.
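The mismatch is easy to check with textbook physical constants. The quick calculation below compares the gravitational and electrostatic forces between two protons; conveniently, the separation distance cancels out of the ratio, so no distance needs to be assumed:

```python
# How weak is gravity between subatomic particles? Compare the gravitational
# and electrostatic (Coulomb) forces between two protons, using standard
# physical constants. The r² in both force laws cancels out of the ratio.

G   = 6.674e-11    # gravitational constant, N·m²/kg²
k   = 8.988e9      # Coulomb constant, N·m²/C²
m_p = 1.673e-27    # proton mass, kg
e   = 1.602e-19    # elementary charge, C

# F_grav / F_coulomb = (G·m_p²/r²) / (k·e²/r²)
ratio = (G * m_p**2) / (k * e**2)
print(f"{ratio:.1e}")  # ≈ 8.1e-37: gravity loses by roughly 36 orders of magnitude
```

Against the strong force, which is stronger still than electromagnetism, gravity’s showing is even more lopsided, which is the essay’s point about Subatomic Beach.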

But gravity looks like such a weakling in comparison to the other fundamental forces only because we haven’t yet zoomed out to the scales of mass and distance that reveal gravity’s actual power to guide the interactions among bits of matter. For the change of perspective that can reveal gravity’s real power, we have to use a telescope, not a particle accelerator. When we’re studying the interactions of very small things that are separated by small distances, gravity is the only fundamental force that doesn’t matter. But when we’re studying the interactions of large things that are separated by great distances, it’s the only one that does.

Every time the mass of an object increases one-hundredfold, the influence of gravity upon its particles increases tenfold. Because very massive objects like planets and stars have no net electrical charge (the charges of all of their constituent bits more or less cancel each other), it’s the weakling gravity—acting across huge distances, always attracting, never repelling—that causes their interactions. And when an object gets really massive—roughly the size of 100 Jupiters—the gravitational forces acting on the atoms that make up that jumbo object can hold the object’s particles in a spherical shape even when weak force interactions among those particles have turned the center of the object into a nuclear fusion reactor. Gravity is the Charles Atlas of the cosmos. Gravity is a star-maker.


Natural selection, one of the fundamental processes of evolution, has something in common with gravity: a public relations problem. At one level of analysis, natural selection, like gravity, looks like a chump. When you’re looking up close at the tiny bits of stuff that go into making humans—the sequences of DNA that constitute the human genome—and how they came to be arranged in the manner that they are, natural selection doesn’t seem to have done very much. Other evolutionary processes, such as mutation, migration, and drift, seem to have exerted far more powerful influences on our genomes. For that matter, distinctly non-evolutionary events—one-off famines, freezes, floods, and fires—can exert a far more powerful influence on the fate of a species at any given point in time than natural selection can.

However, when you zoom out and look at evolution from a high-altitude vantage point, natural selection is the only evolutionary force that matters at all. This is because natural selection is the only evolutionary force that can produce design. Natural selection, like gravity, acts uniformly and consistently, through deep time, to sift genes according to one hard-and-fast criterion: It increases the prevalence of genes that are good at increasing their own rates of propagation, and it reduces the prevalence of genes that are less good at increasing their rates of propagation.

As Richard Dawkins has described so brilliantly in so many different ways, genes take actions in the world that alter their rates of replication by cloaking themselves in really cool features and gadgets—mitochondria, ribosomes, specialized cells, arms, legs, eyes, ears, neurons, brains, beliefs, desires. Those features that increase the genes’ replication rates get conserved and elaborated upon. Those that reduce the genes’ rates of replication are shuffled off. As the result of aeons and aeons of a gene-sifting process that operates according to a single criterion—does this gene create phenotypic effects that speed up its propagation in the population, or does it slow its propagation?—organisms accumulate design.
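The gene-sifting logic is simple enough to watch in a toy simulation. In the sketch below (the starting frequency and fitness values are illustrative numbers, not drawn from any real population), a gene variant that replicates just 5% faster than its rival climbs from 1% of the population to more than 99% in 200 generations:

```python
# Toy illustration of natural selection's gene-sifting: two gene variants that
# differ only in replication rate. All numbers are illustrative.

def select(freq_a, fitness_a, fitness_b, generations):
    """Track the frequency of variant A under deterministic selection."""
    for _ in range(generations):
        a = freq_a * fitness_a          # A's contribution to the next generation
        b = (1 - freq_a) * fitness_b    # B's contribution
        freq_a = a / (a + b)            # renormalize to a frequency
    return freq_a

# Variant A starts rare (1%) but replicates 5% faster than B. After 200
# generations of sifting, it makes up about 99.4% of the population.
final = select(freq_a=0.01, fitness_a=1.05, fitness_b=1.00, generations=200)
print(round(final, 3))  # ≈ 0.994
```

With equal fitnesses the frequencies never budge, which is the sense in which natural selection alone, and not drift-like shuffling, accumulates design along a consistent criterion.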

None of the other evolutionary forces can produce this kind of complex functional design. The result of all of natural selection’s criterion-based gene-sifting is that organisms end up looking like geniuses for thriving in the environments to which they are adapted. Bacteria, birds, bees, bats, bears, boas–and even Bill and Betty–every one is a genius.

Natural selection, like gravity, is a star-maker.