Human Oxytocin Research Gets a Drubbing

There’s a new paper out by Gareth Leng and Mike Ludwig1, bearing the coy title “Intranasal Oxytocin: Myths and Delusions” (get the full text here before it disappears behind a paywall), that you need to know about if you’re interested in research on the links between oxytocin and human behavior (as I am; see my previous blog entries here, here, and here). Allow me to summarize some highlights, peppered with some of my own (I hope not intemperate) inferences. Caution: There be numbers below, and some back-of-the-envelope arithmetic. If you want to avoid all that, just skip to the final paragraph, where I quote directly from Gareth and Mike’s summary.

Fig 1. It’s complicated.

  1. In the brain, it’s the hypothalamus that makes OT, but it’s the pituitary that stores and distributes it to the periphery. I think those two facts are pretty commonly known, but here’s a fact I didn’t know: At any given point in time, the human pituitary gland contains about 14 International Units (IU) of OT (which is about 28 micrograms). So when you read that a researcher has administered 18 or 24 IU of oxytocin intranasally as part of a behavioral experiment, bear in mind that they have dumped more than an entire pituitary gland’s worth of OT into the body.
  2. To me, that seems like a lot of extra OT to be floating around out there without our knowing completely what its unintended effects might be. Most scientists who conduct behavioral work on OT with humans think (and of course hope) that this big payload of OT is benign, and to be clear, I know of no evidence that it is not. Even so, research on the use of OT for labor augmentation has found that labor can be stimulated with as little as 3.2 IU of intranasal OT, by virtue of OT’s effects on the uterus. That says a lot about OT’s potential to influence the body’s peripheral tissues, because that OT has to overcome the very high levels of oxytocinase (the enzyme that breaks down OT) that circulate during pregnancy. It of course bears repeating that behavioral scientists typically use 24 IU to study behavior, and 24 > 3.2.2
  3. Three decades ago, researchers found that rats that received injections of radiolabeled OT showed some uptake of the OT into regions of the brain that lack much of a blood-brain barrier, but in regions of the brain that do have a decent blood-brain barrier, the concentrations were 30 times lower. Furthermore, there was no OT penetration deeper into the brain. Other researchers who have injected rats with subcutaneous doses of OT have managed to increase the rats’ plasma concentrations of OT to 500 times their baseline levels, but they found only threefold increases in the CSF levels. On the basis of these results and others, Leng and Ludwig estimate that as little as 0.002% of peripherally administered OT finds its way into the central nervous system, and it has not been shown that any of it is capable of reaching deep brain areas.
  4. The fact that very low levels of OT appear to make it into the central nervous system isn’t a problem in and of itself—if that OT reaches behaviorally interesting brain targets in concentrations that are high enough to produce behavioral effects. However, OT receptors in the brain are generally exposed to much higher levels of OT than are receptors in the periphery (where baseline levels generally range from 0 to 10 pg/ml). As a result, OT receptors in the brain need to be exposed to comparatively high amounts of OT to produce behavioral effects—sometimes as much as 5 to 100 nanograms.
  5. Can an intranasal dose of 24 IU deliver 5-100 nanograms of OT to behaviorally relevant brain areas? We can do a little arithmetic to arrive at a guess. The 24 IU that researchers use in intranasal administration studies on humans is equivalent to 48 micrograms, or 48,000 nanograms. Let’s assume (given Point 3 above) that only 0.002 percent of those 48,000 nanograms gets into the brain. If that assumption is OK, then we might expect that brain areas with lots of OT receptors could, as an upper limit, end up with no more than 48,000 nanograms × 0.00002 = 0.96 (~1) nanogram of OT. But if 5-100 nanograms is what’s needed to produce a behavioral effect, then it seems sensible to conclude that even a 24 IU bolus of OT (which, we must remember, is more than a pituitary gland’s worth of OT) administered peripherally is likely too little to drive enough brain activity to change behavior, assuming it can even reach deep brain regions.
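The arithmetic in Point 5 is easy to check for yourself. Here’s a quick sketch in Python; the dose, the IU-to-microgram conversion, and the 0.002% penetration figure all come from the points above, and they are of course rough estimates rather than precise pharmacokinetic parameters:

```python
# Back-of-the-envelope check of the intranasal oxytocin arithmetic above.
# All figures are rough estimates taken from Points 1-5.

IU_TO_MICROGRAMS = 2.0          # ~14 IU in a pituitary gland is ~28 micrograms
dose_iu = 24                    # typical intranasal dose in behavioral studies
cns_penetration = 0.002 / 100   # Leng & Ludwig's ~0.002% CNS penetration estimate

dose_ng = dose_iu * IU_TO_MICROGRAMS * 1000   # micrograms -> nanograms
brain_ng = dose_ng * cns_penetration

print(dose_ng)    # 48000.0 nanograms administered...
print(brain_ng)   # ...of which only ~1 nanogram reaches the CNS, versus
                  # the 5-100 ng apparently needed for behavioral effects
```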

Leng and Ludwig aren’t completely closed to the idea that intranasal oxytocin affects behavior via its effects on behaviorally relevant parts of the brain that use oxytocin, but they maintain a cautious stance. I can find no better way to summarize their position clearly than by quoting from their abstract:

The wish to believe in the effectiveness of intranasal oxytocin appears to be widespread, and needs to be guarded against with scepticism and rigor.


1If you don’t know who Gareth Leng and Mike Ludwig are, by the way, and are wondering whether their judgment is backed up by real expertise, by all means have a look at their bona fides.

2A little bet-hedging: I think I read somewhere that gene expression for oxytocin receptors is upregulated late in pregnancy, which could explain the uterus’s heightened sensitivity to OT toward the end of pregnancy. Thus, it could be that the uterus becomes so sensitive to OT not because 3.2 IU is “a lot of OT” in any absolute sense, but because the uterus is going out of its way to “sense” it. Either way, 3.2 IU is clearly a detectable amount to any tissue that really “wants”* to detect it.


*If you’re having a hard time with my use of agentic language to refer to the uterus, give this a scan.

 

What Does Revenge Want?

In The Princess Bride, the Spanish swashbuckler Inigo Montoya (played memorably by Mandy Patinkin) is the poster child for the seductive power of revenge. Montoya is a man whose entire adult life has been directed and shaped by his desire to avenge the death of his father. (The scene in which he ultimately fulfills this central goal of his life can be seen here in this YouTube clip. Only a few weeks ago did I realize that the other actor in this scene is the great Christopher Guest.)

History provides us with other fascinating examples of the compelling power of the desire for revenge: Geronimo, the famous Apache warrior, described his satisfaction after the slaughter of the Mexican forces that had massacred his mother, wife, and children only a year before: “Still covered in the blood of my enemies, still holding my conquering weapon, still hot with the joy of battle, victory, and vengeance, I was surrounded by the Apache braves and made war chief of all the Apaches. Then I gave orders for scalping the slain. I could not call back my loved ones, I could not bring back the dead Apaches, but I could rejoice in this revenge.”[1] In Blood Revenge, the anthropologist Chris Boehm recounts how one of his informants described how a tribal Montenegrin typically feels after taking revenge against an enemy: “[H]e is happy; then it seems to him that he has been born again, and as a mother’s son he takes pride as though he had won a hundred duels.”[2]

Inigo Montoya, Geronimo, and tribal Montenegrins notwithstanding, the notion that revenge brings satisfaction does not sit easily with all psychologists. Kevin Carlsmith, Tim Wilson, and Dan Gilbert published a paper in 2008, for instance, suggesting that people only believe that revenge will bring satisfaction.[3] The notion that revenge is satisfying was, the authors suspected, another example of an affective forecasting error (a research program I generally like, by the way). They found that people did indeed expect to feel better after seeking revenge, but when participants were given an actual opportunity to punish a bad guy after mistreatment in the lab, the avengers actually felt less satisfied than did victims who stayed their hands.

The Carlsmith et al. finding—people only think revenge will make them feel better, when it in fact makes them feel worse—has become a kind of psychological truism about revenge. Just yesterday, for example, this bit of common knowledge got repeated in an otherwise excellent article on revenge that appeared in the New York Times. In the article, Kate Murphy writes, “But the thing is, when people take it upon themselves to exact revenge, not only does it fail to prevent future harm but it also ultimately doesn’t make the avenger feel any better. While they may experience an initial intoxicating rush, research indicates that upon reflection, people feel far less satisfied after they take revenge than they imagined.”

But Murphy’s claim ignores a lot of data that indicate otherwise.[4] A series of experiments by Mario Gollwitzer and his colleagues has shown that people are in fact capable of experiencing just as much satisfaction from revenge as they were hoping for, especially when they become aware that the transgressor has learned a lesson from the punishment and has decided to mend his or her ways as a result of it. What revenge wants, it appears, is not simply to lob a grenade over a wall at a bad guy, or to blow off a little steam. Instead, what revenge wants is a reformed scoundrel: an offender who realizes that what he or she has done to a victim was morally wrong and who acknowledges an intent to avoid harming the avenger again in the future. When revenge accomplishes these goals, it seems, it really does satisfy. The bad guy has been deterred from repeating his or her harmful actions. The goal has been accomplished. The itch has been scratched.

Let me be clear: Revenge is bad. It’s bad for relationships (generally), it’s bad for business, it’s bad for societies, and it’s bad for world peace. It’s probably even bad for your health. I’ve never met anybody who wanted more revenge in the world (except, of course, when they were the wronged party). But no one has ever gotten rid of any bad thing in the world by understanding it less clearly. Ugly or not, revenge that finds its target can be as exhilarating as winning the Super Bowl. Once we really understand that revenge wants deterrence, we’ll be in a better position to create institutions that can provide alternative means of scratching the itch.

Boehm, C. (1987). Blood revenge: The enactment and management of conflict in Montenegro and other tribal societies (2nd ed.). Philadelphia: University of Pennsylvania Press.

Carlsmith, K. M., Wilson, T. D., & Gilbert, D. T. (2008). The paradoxical consequences of revenge. Journal of Personality and Social Psychology, 95, 1316-1324. doi: 10.1037/a0012165

Funk, F., McGeer, V., & Gollwitzer, M. (2014). Get the message: Punishment is satisfying if the transgressor responds to its communicative intent. Personality and Social Psychology Bulletin. Advance online publication. doi: 10.1177/0146167214533130

Geronimo. (1983). Geronimo’s story of his life. New York: Irvington.

Gollwitzer, M., & Denzler, M. (2009). What makes revenge sweet: Seeing the offender suffer or delivering a message? Journal of Experimental Social Psychology, 45, 840-844. doi: 10.1016/j.jesp.2009.03.001

Gollwitzer, M., Meder, M., & Schmitt, M. (2011). What gives victims satisfaction when they seek revenge? European Journal of Social Psychology, 41, 364-374. doi: 10.1002/ejsp.782

[1] Geronimo (1983).

[2] Boehm (1987).

[3] Carlsmith, Wilson, and Gilbert (2008).

[4] Funk, McGeer, and Gollwitzer (2014); Gollwitzer and Denzler (2009); Gollwitzer, Meder, and Schmitt (2011).

A P-Curve Exercise That Might Restore Some of Your Faith in Psychology

I teach my university’s Graduate Social Psychology course, and I start off the semester (as I assume many other professors who teach this course do) by talking about research methods in social psychology. Over the past several years, as the problems with reproducibility in science have become more and more central to the discussions going on in the field, my introductory lectures have gradually become more dismal. I’ve come to think that it’s important to teach students that most research findings are likely false, that there is very likely a high degree of publication bias in many areas of research, and that some of our most cherished ideas about how the mind works might be completely wrong.

In general, I think it’s hard to teach students what we have learned about the low reproducibility of many of the findings in social science without leaving them with a feeling of anomie, so this year, I decided to teach them how to do p-curve analyses so that they would at least have a tool that would help them to make up their own minds about particular areas of research. But I didn’t just teach them from the podium: I sent them away to form small groups of two to four students who would work together to conceptualize and conduct p-curve analysis projects of their own.

I had them follow the simple rules that are specified in the p-curve user’s guide, which can be obtained here, and I provided a few additional ideas that I thought would be helpful in a one-page rubric. I encouraged them to make sure they were sampling from the available population of studies in a representative way. Many of the groups cut down their workload by consulting recent meta-analyses to select the studies to include. Others used Google Scholar or Medline. They were all instructed to follow the p-curve manual chapter and verse, and to write a little paper in which they summarized their findings. The students told me that they were able to produce their p-curve analyses (and the short papers that I asked them to write up) in 15-20 person-hours or less, and they seemed to find the exercise very empowering.
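For readers who haven’t seen p-curve in action, the core of the method can be sketched in a few lines of Python. This is a simplification of the procedure in the p-curve user’s guide (the real thing works from test statistics and also computes a “half” p-curve), but it conveys the basic logic: under the null of no evidential value, every significant p-value is uniformly distributed between 0 and .05, so the recalibrated “pp-values” can be combined with Stouffer’s method to test for the right skew that true effects produce. The function name and example p-values below are my own, purely for illustration:

```python
# A minimal, illustrative version of p-curve's "right skew" test.
# Under the null (no evidential value), each significant p is uniform on
# (0, .05), so pp = p / .05 is uniform on (0, 1); true effects make the
# pp-values pile up near zero.

from math import sqrt
from statistics import NormalDist

def pcurve_right_skew(p_values, alpha=0.05):
    """Stouffer-style test for right skew among significant p-values.

    A small result suggests the literature contains evidential value;
    a flat (uniform) p-curve yields a result near .5.
    """
    nd = NormalDist()
    pp = [p / alpha for p in p_values if 0 < p < alpha]  # recalibrate to (0, 1)
    z = [nd.inv_cdf(x) for x in pp]                      # probit transform
    return nd.cdf(sum(z) / sqrt(len(z)))                 # Stouffer's method

# A hypothetical literature whose significant p-values pile up near zero:
print(pcurve_right_skew([0.001, 0.003, 0.012, 0.020, 0.049]))
```

(For actual analyses, of course, use the official p-curve app, which implements the full procedure.)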

This past week, all ten groups of students presented the results of their analyses, and their findings were surprisingly (actually, puzzlingly) rosy: All ten of the analyses revealed that the literatures under consideration possessed evidential value. Ten out of ten. None of them showed evidence of intense p-hacking. On the basis of their conclusions (coupled with the conclusions that previous meta-analysts had made about the size of the effects in question), it does seem to me that there really is license to believe a few things about human behavior:

(1) Time-outs really do reduce undesirable behavior in children (parents with young kids take notice);

(2) Expressed Emotion (EE) during interactions between people with schizophrenia and their family members really does predict whether the patient will relapse in the subsequent 9-12 months (based on a p-curve analysis of a sample of the papers reviewed here);

(3) The amount of psychological distress that people with cancer experience is correlated with the amounts of psychological distress that their caregivers manifest (based on a p-curve analysis of a sample of the papers reviewed here);

and

(4) Men really do report more distress when they imagine their partners’ committing sexual infidelity than women do (based on a p-curve analysis of a sample of the papers reviewed here; caveats remain about what this finding actually means, of course…)

I have to say that this was a very cheering exercise for my students as well as for me. But frankly, I wasn’t expecting all ten of the p-curve analyses to provide such rosy results, and I’m quite sure the students weren’t either. Ten non-p-hacked literatures out of ten? What are we supposed to make of that? Here are some ideas that my students and I came up with:

(1) Some of the literatures my students reviewed involved correlations between measured variables (for example, emotional states or personality traits) rather than experiments in which an independent variable was manipulated. They were, in a word, personality studies rather than “social psychology experiments.” The major personality journals (Journal of Personality, Journal of Research in Personality, and the “personality” section of JPSP) tend to publish studies with conspicuously higher statistical power than do the major journals that publish social psychology-type experiments (e.g., Psychological Science, JESP, and the two “experimental” sections of JPSP), and one implication of this fact, as Chris Fraley and Simine Vazire just pointed out, is that the latter set of experiment-friendly journals is more likely, ceteris paribus, to have higher false positive rates than is the former set of personality-type journals.

(2) Some of the literatures my students reviewed were not particularly “sexy” or “faddish”–at least not to my eye (Biologists refer to the large animals that get the general public excited about conservation and ecology as the “charismatic megafauna.” Perhaps we could begin talking about “charismatic” research topics rather than “sexy” or “faddish” ones? It might be perceived as slightly less derogatory…). Perhaps studies on less charismatic topics generate less temptation among researchers to capitalize on undisclosed researcher degrees of freedom? Just idle speculation…

(3) The students went into the exercise without any a priori prejudice against the research areas they chose. They wanted to know whether the literatures they focused on were p-hacked because they cared about the research topics and wanted to base their own research on what had come before, not because they had read something seemingly fishy on a given topic that gave them the impetus to do a full p-curve analysis. I wonder whether this subjective component to the exercise of conducting a p-curve analysis will end up being really significant as this technique becomes more popular.

If you teach a graduate course in psychology and you’re into research methods, I cannot recommend this exercise highly enough. My students loved it, they found it extremely empowering, and it was the perfect positive ending to the course. If you have used a similar exercise in any of your courses, I’d love to hear about what your students found.

By the way, Sunday will be the 1-year anniversary of the Social Science Evolving Blog. I have appreciated your interest.  And if I don’t get anything up here before the end of 2014, happy holidays.

Do Humans Have Innate Concepts for Thinking About Other People?

Gossip is one of life’s greatest consolations and one of our most reliable conversational fall-backs. In a world without gossip, many of us could realize Tim Ferriss’s ideal of the 4-Hour Workweek without even putting any of his advice into practice. Gossip is also, according to the anthropologist Donald Brown (1991), a human universal—one of those pan-human traits that people within every world society can be expected to evince.

Our ability to gossip, as is the case with all ostensive communication, is premised on the idea that our listeners are in possession of concepts that enable them to convert the sounds coming out of our mouths into ideas that resemble those we are trying to convey. Which got me to thinking about the psychology that makes gossip possible: Are there universal “person concepts”—species-typical cognitive representations of particular human traits or attributes—that every human reliably acquires during normal development? If you flew back from your vacation in Tanzania with a Hadza man or woman whom you planned to entertain in your home for a couple of weeks, would the two of you be able to settle into your living room and enjoy a little TMZ* (assuming you spoke Hadza and could translate)? The Hadza are arguably the last full-time hunter-gatherer society on the planet; it’s difficult to imagine a society more different from our own. Could you trust that your Hadza friend had acquired all of the person concepts that would enable him or her to follow the action? Are there any universal and native social concepts upon which all humans rely in order to make social life work?

I’ll get to that in a moment, but first a slightly bigger question: Does the mind contain any native concepts at all? Here in the 21st century, many scholars in the social sciences would answer this question affirmatively, having turned their backs on the most hardcore versions of the Blank Slate theory that Steven Pinker describes in his aptly titled (2002) book, The Blank Slate. (The major Blank-Slater of Western thought, John Locke, famously wrote, “If we will attentively consider new-born children, we shall have little reason to think that they bring many ideas into the world with them.”) Even so, there is still much to be debated and discovered about innate ideas.

For starters, how many innate ideas are there? Conceding that there are more than zero of them is not a particularly bold claim. Are there handfuls? Dozens? Scores? Many evolutionary psychologists and cognitive scientists prefer large numbers here, and not without good reason: It’s difficult to imagine how even the basic behavioral tasks that humans must accomplish to stay alive—finding food, water, and warmth, for starters—could be accomplished unless the mind contained some built-in conceptual content.

To Find Food, a Newborn Baby Needs FOOD

Since Locke brought up the case of “new-born children,” let’s think about babies for a moment. A newborn infant comes into the world with a pressing problem: She must find something to eat. Locke thought the infant came into the world with the ability to experience hunger, but he did not think she came into the world with a concept of FOOD. The so-called Frame Problem, which Daniel Dennett (2006) so vividly described, makes it unlikely that a newborn infant could solve this problem (“Find food”) before she starved unless she had some built-in representation of what FOOD is. The selection pressure for the evolution of a conceptual short-cut here is enormous: Successful food-finding in the first hour after birth is a predictor of infant survival, so that first hour matters. The clock is ticking. Therefore, a cognitive design that requires infants to find food on a blind trial-and-error basis is likely to be a losing design in comparison to a design that comes with a built-in concept of FOOD from the outset.

For human infants, the FOOD concept involves the activity of neurons that respond to the olfactory properties of specific volatile chemicals that human mothers emit via the breast, possibly along with visual and tactile features of the human breast as well (Schaal et al., 2009). Through a matching-to-template process, human infants can quickly locate breast-like objects in their environments, which of course are the only objects in the universe that are specially designed to provide human neonates with nutrition and hydration.

What about More “Complex” Concepts?

Convincing you that human neonates possess an innate concept for FOOD is perhaps an easy sell, but in a recent paper in Current Directions in Psychological Science, Andy Delton and Aaron Sell (2014) argued that humans come to possess a variety of universal and reliably developing social concepts as well, which enable them to regulate the universal components of human social life. For Delton and Sell, there can be “no motivation without representation,” so if there are certain adaptive challenges that humans have evolved behavioral programs to surmount, there should also be concepts within the human mind that enable them to parse their worlds into adaptively meaningful units so that the stimuli that are relevant to achieving those adaptive goals can be easily identified.

Delton and Sell’s list of candidates for intuitive concepts (which they in no way claim to be exhaustive) includes COOPERATOR, FREE RIDER, NEWCOMER, KINSHIP, ROMANTIC PARTNER, ROMANTIC RIVAL, ENTITLEMENT, DISRESPECT, INGROUP, and OUTGROUP, among others (see the Table below). The claim here, again, is that if humans are going to have evolved goals that involve “establishing cooperative relationships,” “deterring free riders,” or “evaluating whether to engage in trade with someone from an outgroup,” they will need concepts to represent what COOPERATORS, FREE RIDERS, and OUTGROUPS actually are. There can be no motivation without representation.

(By the way, to claim that such concepts are “innate” or “native” is not to claim that they are present in the mind from birth, but rather that the human genome possesses the programs for assembling these representations within the mind at developmentally appropriate points in the human life cycle, and with appropriate kinds of environmental inputs. Concepts come and concepts go as we develop. Think of how the concept of FOOD gets overwritten once infants turn away from breast milk and toward other foods during the first three to four years of life. The FOOD concept within the mind/brain changes over ontogeny, but the genes that give rise to that initial FOOD concept—which infants match against environmental inputs on the basis of olfactory, visual, and tactile information—remain in the genome and are passed on to one’s genetic heirs so the concept can be re-constructed during ontogeny.)

From Delton and Sell (2014).

Looking for Universal Concepts in the Dictionary

Another paper was recently published that provides some confirmatory evidence, of a sort, for Delton and Sell’s position. The personality psychologist Gerard Saucier and his colleagues (2014) read through the English dictionaries representing the languages of 12 geographically and linguistically distinct cultural groups from all over the world (see Table below) in hopes of finding the universal concepts that humans use to parse up the actions and dispositions of other humans.

From Saucier et al. (2014).

The logic behind Saucier et al.’s effort was straightforward: All human societies should end up making words to represent the attributes that humans universally use to parse their social lives—presuming, I suppose, that those concepts are worth talking about. (Universal social concepts for which humans universally make words might be only a subset of all universal social concepts: Some universal social concepts might not be worth talking about, though I can’t think off-hand of what such concepts might be. Can you?)

By scouring these dictionaries, Saucier and colleagues ultimately located nearly 17,000 words across the 12 languages that could be used to refer to human attributes. Through a reduction process that enabled them to throw out synonyms and variations on common roots (fool, foolish, foolishly, “to fool,” and “to be fooled” can all be reduced to a single attribute concept, as can all of the other words that gloss in English as “to be foolish”), they were able to reduce the number of attribute concepts within each language to something much more manageable.
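The flavor of that reduction process is easy to convey with a toy example. Saucier and colleagues did the real reduction by expert judgment, not by algorithm, but a crude suffix-stripper (everything below is my own illustrative invention, not their method) shows how morphological variants collapse into a single attribute concept:

```python
# Toy illustration of collapsing morphological variants to one root concept.
# Saucier et al. performed this reduction by hand; this crude suffix-stripper
# merely shows the flavor of the process.

SUFFIXES = sorted(["ishly", "ishness", "ish", "ing", "ed", "ly"],
                  key=len, reverse=True)   # try the longest suffixes first

def crude_root(word):
    for s in SUFFIXES:
        # Only strip if a plausible root (3+ letters) remains
        if word.endswith(s) and len(word) - len(s) >= 3:
            return word[: -len(s)]
    return word

variants = ["fool", "foolish", "foolishly", "fooled", "fooling"]
print({crude_root(w) for w in variants})   # all five collapse to {'fool'}
```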

Having reduced each language’s human attribute lexicon down in this fashion, they then looked for attribute terms that cropped up in either (a) all 12 of the languages they studied; or (b) 11 of the 12 languages they studied. With their “11 out of 12” rule, they were taking a cue from the anthropologist Donald Brown, who argued that “Human Universals” should be manifest in the ethnographic materials for 95% of the world’s societies. Placing the empirical estimate at 100% would be too strict because it doesn’t allow for ethnographers’ oversights. With only 12 dictionaries to work with, 11 out of 12 is as close as you can come to 95%.

What Saucier and colleagues discovered was fascinating. All twelve languages had human attribute concepts corresponding to BAD, GOOD, USELESS, BEAUTIFUL, DISOBEDIENT, STUPID, ALIVE, BLIND, SICK, STRONG, TIRED, WEAK, WELL, AFRAID, ANGRY, ASHAMED, JEALOUS, SURPRISED, BIG, LARGE, SMALL, HEAVY, OLD, and YOUNG. If you use the slightly more lenient “11 out of 12” criterion for judgments of universality, you get to add EVIL, HANDSOME, GOSSIP, HUMBLE, LOVE, CLUMSY, DRUNK, FOOLISH, QUICK, SLOW, UNABLE, WISE, DEAD, SLEEPY, HUNGRY, PAIN, PLEASURE, THIRSTY, HAPPY, SATISFIED, TROUBLED, FAT, LITTLE, SHORT, TALL, MARRIED, POOR, RICH, and STRANGER.

To me, this is a fascinating list. Some of the traits on the list involve moral evaluation (e.g., BAD, GOOD, EVIL, HUMBLE). Others clearly have to do with physical health, condition, or capacity for work (e.g., ALIVE, BLIND, SICK, WELL, QUICK, SLOW, UNABLE, STRONG, WEAK). Others relate to reproductive value (e.g., BEAUTIFUL, HANDSOME, MARRIED), and age (YOUNG, OLD). Many of the universals relate to more temporary motivations, emotions, and behavioral dispositions (e.g., TIRED, AFRAID, ANGRY, ASHAMED, JEALOUS, SURPRISED, PLEASURE, THIRSTY, HUNGRY, PAIN, COLD, HOT). And still others are associated with reliability and judgment (e.g., CLUMSY, WISE, RIGHT, USELESS). I think Delton and Sell would be especially pleased to see that STRANGER even makes it to the list—consistent with their speculation that humans possess innate “NEWCOMER TO A COALITION” and “OUTGROUP” concepts.

I wouldn’t want to overstate the significance of Saucier and colleagues’ findings (although I think the findings are extremely important): As I mentioned above, just because we lack a word for something doesn’t mean we don’t have an innate concept for it (remember that infants can find food because they come into the world with a well-developed FOOD concept, even though they can’t converse with you about food). Saucier’s list of universal person words almost surely does not exhaust the list of evolved person concepts that humans reliably acquire through ontogeny, but it might be a decent rough draft of the set of person concepts that all adults eventually find regular occasions to gossip about. And of course, if you can count on the fact that your Hadza houseguest has concepts for Bad, Good, Beautiful/Handsome, Love, Drunk, Sick, Ashamed, Jealous, Fat, Short, Old, Young, Rich, and Poor, then translating an episode of TMZ for him or her should be no trouble whatsoever.

Postscript: After reading this post, Paul Bloom wrote me to ask why I “didn’t mention the enormous developmental psych literature that looks at exactly this question—work that studies babies with an eye toward exploring exactly which concepts are innate and which are learned, e.g., Carey, Baillargeon, Wynn, Spelke, Gergely, Leslie, and so on.” (Paul was too modest, I think, to put himself on this list, but he should have.) Hat in hand, I couldn’t agree more. If you don’t know the work of the scientists that Paul mentioned, you can look to it for further evidence that humans come into the world with complex social concepts. ~MEM

References

Brown, D. E. (1991). Human universals. Boston, MA: McGraw-Hill.

Delton, A. W., & Sell, A. (2014). The co-evolution of concepts and motivation. Current Directions in Psychological Science, 23(2), 115-120.

Dennett, D. C. (2006). Cognitive wheels: The Frame Problem of AI. New York: Routledge.

Pinker, S. (2002). The blank slate: The modern denial of human nature. New York: Viking.

Saucier, G., Thalmayer, A. G., & Bel-Bahar, T. S. (2014). Human attribute concepts: Relative ubiquity across twelve mutually isolated languages. Journal of Personality and Social Psychology, 107(1), 199-216.

Schaal, B., Coureaud, G., Doucet, S., Delaunay-El Allam, M., Moncomble, A.-S., Montigny, D., . . . Holley, A. (2009). Mammary olfactory signalisation in females and odor processing in neonates: Ways evolved by rabbits and humans. Behavioural Brain Research, 200, 346-358.

*I trust that you were able to infer that by TMZ I meant the celebrity gossip show and not the cancer drug or the Soviet motorcycle manufacturer.

The Myth of Moral Outrage

This year, I am a senior scholar with the Chicago-based Center for Humans and Nature. If you are unfamiliar with this Center (as I was until recently), here’s how they describe their mission:

The Center for Humans and Nature partners with some of the brightest minds to explore humans and nature relationships. We bring together philosophers, biologists, ecologists, lawyers, artists, political scientists, anthropologists, poets and economists, among others, to think creatively about how people can make better decisions — in relationship with each other and the rest of nature.

In the year to come, I will be doing some writing for the Center, starting with a piece that has just appeared on their web site. In The Myth of Moral Outrage, I attack the winsome idea that humans’ moral progress over the past few centuries has ridden on the back of a natural human inclination to react with a special kind of anger–moral outrage–in response to moral violations against unrelated third parties:

It is commonly believed that moral progress is a surfer that rides on waves of a peculiar emotion: moral outrage. Moral outrage is thought to be a special type of anger, one that ignites when people recognize that a person or institution has violated a moral principle (for example, do not hurt others, do not fail to help people in need, do not lie) and must be prevented from continuing to do so . . . Borrowing anchorman Howard Beale’s tag line from the film Network, you can think of the notion that moral outrage is an engine for moral progress as the “I’m as mad as hell and I’m not going to take this anymore” theory of moral progress.

I think the “Mad as Hell” theory of moral action is probably quite flawed, despite the popularity that it has garnered among many social scientists who believe that humans possess “prosocial preferences” and a built-in (genetically group-selected? culturally group-selected?) appetite for punishing norm-violators. I go on to describe the typical experimental result that has given so many people the impression that we humans do indeed possess prosocial preferences that motivate us to spend our own resources for the purpose of punishing norm violators who have harmed people whom we don’t know or otherwise care about. Specialists will recognize that the empirical evidence that I am taking to task comes from that workhorse of experimental economics, the third-party punishment game:

…[R]esearch subjects are given some “experimental dollars” (which have real cash value). Next, they are informed that they are about to observe the results of a “game” to be played by two other strangers—call them Stranger 1 and Stranger 2. For this game, Stranger 1 has also been given some money and has the opportunity to share none, some, or all of it with Stranger 2 (who doesn’t have any money of her own). In advance of learning about the outcome of the game, subjects are given the opportunity to commit some of their experimental dollars toward the punishment of Stranger 1, should she fail to share her windfall with Stranger 2.

Most people who are put in this strange laboratory situation agree in advance to commit some of their experimental dollars to the purpose of punishing Stranger 1’s stingy behavior. And it is on the basis of this finding that many social scientists believe that humans have a capacity for moral outrage: We’re willing to pay good money to “buy” punishment for scoundrels.
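For readers who like to see the mechanics spelled out, the payoff structure of a typical third-party punishment game can be sketched in a few lines of code. The specific numbers below (endowments, and the common convention that each unit spent on punishment deducts three units from the target) are illustrative assumptions, not parameters from any particular study:

```python
# A minimal sketch of the payoff logic in a third-party punishment game.
# Endowments and the 3:1 punishment multiplier are illustrative assumptions.

def third_party_punishment(endowment_s1: int, transfer: int,
                           punisher_endowment: int, punishment_spent: int,
                           multiplier: int = 3) -> dict:
    """Final payoffs after Stranger 1 shares `transfer` with Stranger 2,
    and the third party spends `punishment_spent` of her own money to
    reduce Stranger 1's payoff by `multiplier` units per unit spent."""
    deduction = punishment_spent * multiplier
    return {
        "stranger_1": max(endowment_s1 - transfer - deduction, 0),
        "stranger_2": transfer,
        "punisher": punisher_endowment - punishment_spent,
    }

# A stingy Stranger 1 keeps all 10 units; the third party pays 2 units
# to impose a 6-unit deduction on her.
payoffs = third_party_punishment(endowment_s1=10, transfer=0,
                                 punisher_endowment=5, punishment_spent=2)
print(payoffs)  # punishment is costly: the punisher ends with 3 units, not 5
```

The key inferential point the game is meant to license is visible in the last line: because punishment reduces the punisher’s own payoff, spending anything at all is taken as evidence of a genuine preference for punishing norm violators.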

In the rest of the piece, I go on to point out the rather serious inferential limitations of the third-party punishment game as it is typically carried out in experimental economists’ labs. I also point to some contradictory (and, in my opinion, better) experimental evidence, both from my lab and from other researchers’ labs, that gainsays the widely accepted belief in the reality of moral outrage. I end the piece with a proposal for explaining what the appearance of moral outrage might be for (in a strategic sense), even if moral outrage is actually not a unique emotion (that is, a “natural kind” of the type that we assume anger, happiness, grief, etc. to be) at all.

I don’t want to steal too much thunder from the Center‘s own coverage of the piece, so I invite you to read the entire piece over on their site. Feel free to post a comment over there, or back over here, and I’ll be responding in both places over the next few days.

As I mentioned above, I’ll be doing some additional writing for the Center in the coming six months or so, and I’ll be speaking at a Center event in New York City in a couple of months, which I will announce soon.

Happy International Forgiveness Day!

August is a notoriously slow month for news (and blogging). It’s also somewhat bereft of holidays and official days of observance. According to the web site Holiday Insights, August does host a few official holidays (that is, days of observance that were established by presidential proclamation or acts of Congress). These include U.S. Coast Guard Day, National Lighthouse Day, Aviation Day, Senior Citizen’s Day, and Women’s Equality Day. The unofficial August holidays, I have just learned, also include National Dog Day, Presidential Joke Day (on August 11, 1984, President Reagan joked into a live microphone that the U.S. had officially outlawed “Russia” and would begin bombing five minutes thereafter), and Vesuvius Day (take a guess). But these are the exceptions that prove the rule: August is a month in which we’re not encouraged to be mindful of very much.

Even so, over the past couple of years, I’ve become rather fond of today, the first Sunday in August: This is the day on which a group called the Worldwide Forgiveness Alliance is trying to establish a worldwide observance of International Forgiveness Day. I don’t know anything about this group, other than what is posted on their web site, but their self-described mission is “to evoke the healing spirit of Forgiveness worldwide.” I don’t think they’re taking their cues from scholarly writings or scientific research on forgiveness, reconciliation, and peacemaking. Instead, it appears to be a truly grassroots movement, trying on a very small scale to encourage forgiveness not only as a personal tool for overcoming anger and resentment, but also as a way of repairing relationships between individuals, communities, and groups in conflict.

For the past 18 years, according to their web site, they have hosted an “International Forgiveness Day” event, and today is no exception. So if you’re out in the Bay Area today, and are curious, you might consider getting out to San Rafael to see what they’re up to. Also, their web site indicates that they will be live-streaming the event here, so I suppose you could celebrate remotely from your deck chair or porch swing.

But perhaps even that feels like more effort than the heat will permit you to expend. I sympathize. If that’s the case, consider sparing a thought for forgiveness today. Or raise two cheers for forgiveness. Or why not make a forgiveness day gazpacho? (This is my plan.) I can think of many worse things to celebrate on a slow, muggy Sunday in August.

The Real Roots of Vengeance and Forgiveness

Yesterday, somebody pointed me to this article, which I wrote a few years ago for a magazine called Spirituality and Health. I had not realized until yesterday that the magazine had made the article available on the web. Even though it’s several years old, I still like the way it reads. In fact, it’s as decent a précis of my book Beyond Revenge as you’re going to find anywhere.

I’m not exactly a regular reader of this particular magazine, but their editorial staff have taken an interest in some of my research and writing over the years, including some of our (by which I mean, my and my collaborators’) work on forgiveness and gratitude, for which I have always been grateful. The founder of the magazine, whom I was fortunate enough to know, was T. George Harris, one of the most colorful figures in the history of 20th-century magazine publishing. Some readers of this blog might know of George’s work in helping to turn Psychology Today into the behemoth it eventually became, but there is much more to George’s personal and professional life that’s worth knowing about.

George died last year at the age of 89. I found two really nice chronicles of his life–this one from the local San Diego paper (George was a La Jolla resident), and this one written by Stephen Kiesling, who not only is the editor-in-chief at Spirituality and Health, but also was one of George’s closest friends and fondest admirers.