Is Kim Davis Really Wrestling with a Violated Religious Conscience?


Kim Davis, you’ll surely recall, is the Kentucky Clerk of Courts who ended up spending five days in a jail cell after refusing to comply with a federal judge’s orders to sign marriage licenses for gay couples. She’s out now, and back at work, and so far has been allowing her deputy clerks to sign the licenses in her stead. So most of us were hoping we’d never hear about Kim Davis again.

But she’s back in the news. Upon returning to work, according to Brian Mason, one of her deputies, Davis confiscated all copies of the County’s standard marriage certificate and replaced them with the Kim Davis Custom Edition, which “deletes all mentions of the County, fills in one of the blanks that would otherwise be the County with the Court’s styling, deletes her name, deletes all of the deputy clerk references, and in place of deputy clerk types in the name of Brian Mason, and has him initial rather than sign.”

So the struggle continues. In an interview with ABC News that came out today, Davis tried to explain why signing a marriage license for a gay couple violates her religious conscience, and some of her statements help us get inside Davis’s mind:

“I can’t put my name on a license that doesn’t represent what God ordained marriage to be,” she told a reporter. “They’re not valid in God’s eyes.”

Sounds like a classic claim of a violated religious conscience, doesn’t it? Davis can’t participate in the duties that the State of Kentucky requires of her office, she says, because of a religious objection. Most bloggers are focusing on whether the law entitles Davis to accommodations, and whether such accommodations are possible without placing an undue burden on a class of people, but I have been thinking about a different issue:

Is it possible that Davis only thinks that signing a gay couple’s marriage certificate would violate her religious conscience? Is it possible that even Kim Davis doesn’t understand Kim Davis’s motivations?

If we’re not dealing with a violation of religious conscience, what else might we be dealing with? I see three possible alternatives.

Open-Hearted Ignorance? Why are Clerks of Courts required to sign marriage licenses in the first place? Best I can tell, the Clerk’s function in this setting is merely to aver that all relevant documents are in order (i.e., “I looked for everything; it’s all there.”). Now, I’ve never seen a Kentucky marriage license, so I may be wrong here, but I have seen a California marriage license (my own), and as far as I can tell, the clerk is merely certifying that all of the documents that are required to execute a legal marriage contract are on file, that there are no valid legal objections to the marriage on file, and that the credentials of the person officiating (“solemnizing”) the marriage are approved by the State. In other words, the Clerk’s signature is merely an assurance that the paperwork is right. This fits with my intuitions about what a Clerk of Court’s job is more generally: Opening envelopes, cashing the money order you sent in to pay for your traffic ticket, checking boxes, and not losing things.

I wonder if perhaps Davis simply hasn’t thought carefully enough about what her signature on a marriage license—any marriage license—actually means. She’s in pretty deep now, and has dug in her heels, so she probably cannot be reached with reason alone, but it seems like someone ought to try. Who knows? Perhaps with time, perspective, and some discussion with people who love her and whom she trusts, Davis could be convinced that her signature is really just a bit of truth-telling (“Yep! Everything’s on file!”), and thus is in fact extremely consistent with her Christian values rather than an affront to them.

Disgust Masquerading as a Violated Religious Conscience? Second, I wonder whether Davis might be confusing disgust with a violation of religious conscience. Legal scholars such as Martha Nussbaum, as well as many social scientists, have written on the centrality of disgust to people’s anti-gay prejudices, and the Kim Davis situation may provide an interesting and never-before-seen variation on this theme. If Davis is disgusted by gay people or by gay marriage, then she might be hesitant to associate her name with a gay couple’s marriage license because disgust follows the laws of sympathetic magic: The things associated with you carry some of your traits along with them. In the same way that people don’t want to drink from a glass that once held a cockroach, those who are extremely disgusted by homosexuality or homosexual marriage might be unwilling to associate themselves—or their signatures—with a marriage license—a piece of paper—that has been filed by two gay people. After all, the signature has been a highly cherished representation of the self, one’s will, and one’s intentions, for 3,500 years.

If Davis’s problem is in fact not that her religious conscience has been violated, but instead is that she is disgusted by the idea of gay marriage, then the religious language can be removed from the debate entirely and we can start discussing instead what society should do with an elected public official who finds part of her work disgusting.

Weaponized Conscience? Finally, there is the possibility that Davis isn’t experiencing a genuine violation of her religious conscience at all, but instead is simply using that plea as a cudgel that enables her to impose burdens on a class of people whose behavior she (or, perhaps, she believes, God) dislikes. By Davis’s own account, she believes God has been “using” her through this debacle (“For God to use somebody like me, it’s so humbling,” she has said). I’d certainly like for her to explain that statement. It could mean a lot of things, but one thing it could mean is that this entire dust-up isn’t actually about “the U.S. versus God & Davis.” Instead, in Davis’s heart of hearts, it could be about “God versus the U.S., via Davis.”

Take a moment to let that one sink in.

No Truth Serum

None of us, save perhaps those people in Davis’s most intimate inner circle, are ever going to know what’s really going on in Davis’s heart. Is her conscience really, truly, legitimately violated? Is she absolutely sure? Could she be disabused of that intuition by taking a little time to think about what a Clerk of the Courts actually does (remember, their job is merely to open envelopes, cash checks, check boxes, and not lose things) and what the Clerk’s signature on a marriage license actually means? Is her perceived violation of religious conscience just a case of projected disgust dressed in its Sunday best? Or is her plea of violated religious conscience a disingenuous one—a rhetorical weapon that Davis is using to impose a burden on a class of people whose behavior she (or, she thinks, God) dislikes?

The last possibility is the most interesting—and the most troubling, actually, for it raises an ominous but delightfully “meta” sort of question about religious conscience. And it’s a question that defenders of religious liberty—both liberal and conservative—need to take seriously. Should one’s religious (or non-religious) conscience be bothered if someone has debased the very notion of religious conscience by using it as moral cover for a less conscionable goal? I’m not saying she has, and I’m not saying she hasn’t, because I do not know.

But I’d certainly like to know, because the liberal compact is something like this: “You’re free to believe what you want, and I’m going to defend that freedom on your behalf.” But it seems to me that there’s a caveat: “If you’re just messing around and don’t actually believe what you claim to believe, then don’t count on me to make good on my side of the bargain either.”

h/t Deb Lieberman.


Thinking Outside the Box: The Power of Apologies in Cooperative Agreements

TWO kids I know—let’s call them Jeff and Mimi—wanted a cat, so they begged their reluctant parents for months. Eventually the parents gave in, but they forced the kids into an agreement: “The cat box will need to be cleaned every day. We expect you to alternate days. If you miss your day, we’ll take fifty cents out of your allowance and give it to your sister/brother. If you like, you can think of the fifty-cent transfer as a ‘fine’ that we pay to your sister/brother for your failure to hold up your end of the agreement.” The kids agreed and Corbin the cat was purchased (or, rather, obtained from the Humane Society). The litter box-cleaning arrangement went well for three whole days, but compliance started to wane on day four. Hostilities began to simmer. Jeff became reluctant to clean the litter box on “his” days because of Mimi’s failures to keep up her end of the bargain, and vice versa.

“Recriminations” began to pile up.

After two weeks, the agreement was declared dead. The parents became the chief cleaners of the litter box. The father continues to wonder whether he should have read more game theory before even entertaining the idea of getting a cat.

Cooperative agreements like these tend to be dicey propositions: What’s Mimi supposed to do if Jeff fails to clean the box on his appointed day? Should she view it as a sign that Jeff no longer intends to honor the agreement (in which case she should stop honoring it herself, notwithstanding the $.50 fine that Jeff had to pay her), or should she view it as a one-off aberration (in which case she might want to continue honoring the agreement)? It’s not clear, and that lack of clarity can cause problems for the stability of such agreements.

Luis Martinez-Vaquero and his colleagues addressed this issue in a recent article that caught my eye. The paper is complex, but it’s full of interesting results (some quite counter-intuitive) about when strategic agents should be expected to make commitments, honor commitments, retaliate when those commitments are broken, and so forth. I suggest you give it a read if you are at all interested in these issues. But what really grabbed my interest was the authors’ exploration of the idea that the key to getting such agreements to “work” (by which I mean, “become evolutionarily stable*”) is to build in an apology-forgiveness system: one that causes agreement-violators to pay an additional cost (over and above the fine specified in the agreement itself) after a failure to cooperate, which in turn can persuade the defected-against partner to persist in the agreement despite the fact that it has been violated.

The researchers’ results enabled them to be surprisingly precise about the conditions under which highly cooperative strategies that used apologies and forgiveness in this way would evolve*: The costs of cooperating (cleaning the litter box) must be lower than the cost of the apology (the amount of money the deal-breaker voluntarily passes to his/her sibling), which in turn must be lower than the fine for non-compliance that is specified within the agreement itself (fifty cents). When those conditions are in place, you can get the evolution of actors who like to make agreements, accept agreements, honor agreements, and forgive breaches of agreements so that cooperation can be maintained even when those agreements are occasionally violated due to cello lessons that run late, or unscheduled trips to the emergency room, or geometry exams that simply must be studied for.
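That ordering (cost of cooperating < apology cost < fine) is easy to play with in a toy simulation. To be clear, the sketch below is not the authors’ evolutionary model; it is a minimal repeated-interaction cartoon of the litter-box agreement, with parameter values (c = 1, a = 2, f = 3, plus an assumed cooperation benefit b = 3) that I made up to satisfy their ordering.

```python
import random

def agreement_payoffs(rounds, c=1.0, a=2.0, f=3.0, slip=0.1,
                      apologies=True, seed=0):
    """Toy repeated agreement between two partners. Each round a player
    intends to cooperate (paying cost c; the partner gains benefit b),
    but may 'slip' and defect by accident with probability `slip`.
    A slipping player pays the contractual fine f to the partner; if the
    apology system is in place, the slipper also pays an apology cost a,
    and the agreement continues. Without apologies, the first slip kills
    the agreement. Values are illustrative only, chosen so c < a < f.
    Returns the pair's total payoffs."""
    rng = random.Random(seed)
    b = 3.0  # benefit from a partner's cooperation (assumed)
    payoffs = [0.0, 0.0]
    for _ in range(rounds):
        slips = [rng.random() < slip, rng.random() < slip]
        for i in (0, 1):
            j = 1 - i
            if slips[i]:
                transfer = f + (a if apologies else 0.0)
                payoffs[i] -= transfer   # fine (plus apology) paid out...
                payoffs[j] += transfer   # ...to the wronged sibling
            else:
                payoffs[i] -= c          # clean the litter box
                payoffs[j] += b          # partner enjoys a clean box
        if any(slips) and not apologies:
            break  # agreement collapses after the first breach
    return tuple(payoffs)
```

With no slips, each partner nets rounds × (b − c). With the apology system switched on, occasional slips get absorbed and the surplus keeps flowing; without it, the agreement dies at the first accident, much as Jeff and Mimi’s did.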

I’ve written here and there (and here) about the value of apologies and compensation in promoting forgiveness, but the results of Martinez-Vaquero and colleagues suggest (to me, anyway) that forgiveness-inducing gestures such as apologies and offers of compensation can come to possess a sort of fractal quality: People often overcome defections in their cooperative relationships through costly apologies, which promote forgiveness. Throughout the history of Western Civilization, various Leviathans have capitalized on the conflict-minimizing, cooperation-preserving power of costly apologies by institutionalizing these sorts of innovations within contracts and other commitment devices that specify fines and other sanctions if one party or the other fails to perform. But after the fine has been paid for failure to perform, what’s to keep the parties motivated to continue on with their agreement? Martinez-Vaquero et al.’s paper suggests that a little “apology payment” added on top of the fine might just do the trick. Apologies within apologies.

By the way, Jeff and Mimi’s parents are reviewing the terms of the old agreement later this week. Perhaps it can be made to work after all.


Martinez-Vaquero, L. A., Han, T. A., Pereira, L. M., & Lenaerts, T. (2015). Apology and forgiveness evolve to resolve failures in cooperative agreements. Scientific Reports, 5, 10639. doi:10.1038/srep10639


*By which I mean “come to characterize the behavior of individuals in the population via a cultural learning mechanism that causes individuals to adopt the strategies of their most successful neighbors.”


Human Oxytocin Research Gets a Drubbing

There’s a new paper out by Gareth Leng and Mike Ludwig[1] that bears the coy title “Intranasal Oxytocin: Myths and Delusions” (get the full text here before it disappears behind a pay wall) that you need to know about if you’re interested in research on the links between oxytocin and human behavior (as I am; see my previous blog entries here, here, and here). Allow me to summarize some highlights, peppered with some of my own (I hope not intemperate) inferences. Caution: There be numbers below, and some back-of-the-envelope arithmetic. If you want to avoid all that, just go to the final paragraph where I quote directly from Gareth and Mike’s summary.

Fig 1. It’s complicated.

  1. In the brain, it’s the hypothalamus that makes OT, but it’s the pituitary that stores and distributes it to the periphery. I think those two facts are pretty commonly known, but here’s a fact I didn’t know: At any given point in time, the human pituitary gland contains about 14 International Units (IU) of OT (which is about 28 micrograms). So when you read that a researcher has administered 18 or 24 IU of oxytocin intranasally as part of a behavioral experiment, bear in mind that they have dumped more than an entire pituitary gland’s worth of OT into the body.
  2. To me, that seems like a lot of extra OT to be floating around out there without us knowing completely what its unintended effects might be. Most scientists who conduct behavioral work on OT with humans believe (and of course hope) that this big payload of OT is benign, and to be clear, I know of no evidence that it is not benign. Even so, research on the use of OT for labor augmentation has found that labor can be stimulated with as little as 3.2 IU of intranasal OT during childbirth by virtue of its effects on the uterus. That says a lot about OT’s potential to influence the body’s peripheral tissues, because that OT has to overcome the very high levels of oxytocinase (the enzyme that breaks up OT) that circulate during pregnancy. It of course bears repeating that behavioral scientists typically use 24 IU to study behavior, and 24 > 3.2.[2]
  3. Three decades ago, researchers found that rats that received injections of radiolabeled OT showed some uptake of the OT into regions of the brain that did not have much of a blood brain barrier, but in regions of the brain that did have a decent blood brain barrier, the concentrations were 30 times lower. Furthermore, there was no OT penetration deeper into the brain. Other researchers who have injected rats with subcutaneous doses of OT have managed to increase the rats’ plasma concentrations of OT to 500 times their baseline levels, but they found only threefold increases in the CSF levels. On the basis of these results and others, Leng and Ludwig speculate that as little as 0.002% of the peripherally administered OT is finding its way into the central nervous system, and it has not been proven that any of it is capable of reaching deep brain areas.
  4. The fact that very low levels of OT appear to make it into the central nervous system isn’t a problem in and of itself—if that OT reaches behaviorally interesting brain targets in concentrations that are high enough to produce behavioral effects. However, OT receptors in the brain are generally exposed to much higher levels of OT than are receptors in the periphery (where baseline levels generally range from 0 to 10 pg/ml). As a result, OT receptors in the brain need to be exposed to comparatively high amounts of OT to produce behavioral effects—sometimes as much as 5 to 100 nanograms.
  5. Can an intranasal dose of 24 IU deliver 5 – 100 nanograms of OT to behaviorally relevant brain areas? We can do a little arithmetic to arrive at a guess. The 24 IU that researchers use in intranasal administration studies on humans is equivalent to 48 micrograms, or 48,000 nanograms. Let’s assume (given Point 3 above) that only .002 percent of those 48,000 nanograms is going to get into the brain. If that assumption is OK, then we might expect that brain areas with lots of OT receptors could—as an upper limit—end up with no more than 48,000 nanograms * .00002 = .96 (~1) nanogram of OT. But if 5 – 100 nanograms is what’s needed to produce a behavioral effect, then it seems sensible to conclude that even a 24 IU bolus of OT (which, we must remember, is more than a pituitary gland’s worth of OT) administered peripherally is likely too little to produce enough brain activity to produce a behavioral change—assuming that it’s even able to get into deep brain regions.
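For anyone who wants to check that back-of-the-envelope arithmetic, here it is in a few lines of Python, using only the figures quoted above (1 IU ≈ 2 micrograms, and Leng and Ludwig’s 0.002% penetration estimate):

```python
# Sanity check on the arithmetic above; all figures come from the post itself.
IU_TO_UG = 2.0                  # 14 IU ~ 28 micrograms, so 1 IU ~ 2 micrograms
dose_ng = 24 * IU_TO_UG * 1000  # 24 IU -> 48,000 nanograms
brain_fraction = 0.00002        # Leng & Ludwig's 0.002% penetration estimate
brain_ng = dose_ng * brain_fraction
print(round(brain_ng, 2))       # 0.96 ng, versus the ~5-100 ng apparently needed
```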

Leng and Ludwig aren’t completely closed to the idea that intranasal oxytocin affects behavior via its effects on behaviorally relevant parts of the brain that use oxytocin, but they maintain a cautious stance. I can find no better way to summarize their position clearly than by quoting from their abstract:

The wish to believe in the effectiveness of intranasal oxytocin appears to be widespread, and needs to be guarded against with scepticism and rigor.

[1] If you don’t know who Gareth Leng and Mike Ludwig are, by the way, and are wondering whether their judgment is backed up by real expertise, by all means have a look at their bona fides.

[2] A little bet-hedging: I think I read somewhere that there is upregulated gene expression for oxytocin receptors late in pregnancy, so this could explain the uterus’s heightened sensitivity to OT toward the end of pregnancy. Thus, it could be that the uterus becomes so sensitive to OT not because 3.2 IU is “a lot of OT” in any absolute sense, but because the uterus is going out of its way to “sense” it. Either way, 3.2 IU is clearly a detectable amount to any tissue that really “wants”* to detect it.

*If you’re having a hard time with my use of agentic language to refer to the uterus, give this a scan.


What Does Revenge Want?

In The Princess Bride, the Spanish swashbuckler Inigo Montoya (played memorably by Mandy Patinkin) is the poster child for the seductive power of revenge. Montoya is a man whose entire adult life has been directed and shaped by his desire to avenge the death of his father. (The scene in which he ultimately fulfills this central goal of his life can be seen here in this YouTube clip. Only a few weeks ago did I realize that the other actor in this scene is the great Christopher Guest).

History provides us with other fascinating examples of the compelling power of the desire for revenge: Geronimo, the famous Apache warrior, describes his satisfaction after the slaughter of the Mexican forces that had massacred his mother, wife, and children only a year before: “Still covered in the blood of my enemies, still holding my conquering weapon, still hot with the joy of battle, victory, and vengeance, I was surrounded by the Apache braves and made war chief of all the Apaches. Then I gave orders for scalping the slain. I could not call back my loved ones, I could not bring back the dead Apaches, but I could rejoice in this revenge.”[1] In Blood Revenge, the anthropologist Chris Boehm recounts how one of his informants described how a tribal Montenegrin typically feels after taking revenge against an enemy: “[H]e is happy; then it seems to him that he has been born again, and as a mother’s son he takes pride as though he had won a hundred duels.”[2]

Inigo Montoya, Geronimo, and tribal Montenegrins notwithstanding, the notion that revenge brings satisfaction does not sit easily with all psychologists. Kevin Carlsmith, Tim Wilson, and Dan Gilbert, for instance, published a paper in 2008 that seemed to suggest that people only believe that revenge will bring satisfaction.[3] The notion that revenge is satisfying was, the authors suspected, an affective forecasting error (affective forecasting is an idea that I generally like, by the way): people merely predict that revenge will be satisfying. They found that people did indeed expect to feel better after seeking revenge, but when given an actual opportunity to punish a bad guy after mistreatment in the lab, avengers actually felt less satisfied than did victims who stayed their hands.

The Carlsmith et al. finding—people only think revenge will make them feel better, when it in fact makes them feel worse—has become a kind of psychological truism about revenge. Just yesterday, for example, this bit of common knowledge got repeated in an otherwise excellent article on revenge that appeared in the New York Times. In the article, Kate Murphy writes, “But the thing is, when people take it upon themselves to exact revenge, not only does it fail to prevent future harm but it also ultimately doesn’t make the avenger feel any better. While they may experience an initial intoxicating rush, research indicates that upon reflection, people feel far less satisfied after they take revenge than they imagined.”

But Murphy’s claim ignores a lot of data that indicates otherwise.[4] A series of experiments from Mario Gollwitzer and his colleagues has shown that people are in fact capable of experiencing just as much satisfaction from revenge as they were hoping for, especially when they become aware that the transgressor has learned a lesson from the punishment and has decided to mend his or her ways as a result of it. What revenge wants, it appears, is not simply to lob a grenade over a wall at a bad guy, or to blow off a little steam. Instead, what revenge wants is a reformed scoundrel—an offender who both realizes that what he or she has done to a victim was morally wrong and who acknowledges his or her intent to avoid harming the avenger again in the future. When revenge accomplishes these goals, it seems, it really does satisfy. The bad guy has been deterred from repeating his or her harmful actions. The goal has been accomplished. The itch has been scratched.

Let me be clear: Revenge is bad. It’s bad for relationships (generally), it’s bad for business, it’s bad for societies, and it’s bad for world peace. It’s probably even bad for your health. I’ve never met anybody who wanted more revenge in the world (except, of course, when they were the wronged party). But no one has ever gotten rid of any bad thing in the world by understanding it less clearly. Ugly or not, revenge that finds its target can be as exhilarating as winning the Super Bowl. Once we really understand that revenge wants deterrence, we’ll be in a better position to create institutions that can provide alternative means of scratching the itch.

Boehm, C. (1987). Blood revenge: The enactment and management of conflict in Montenegro and other tribal societies (2nd ed.). Philadelphia: University of Pennsylvania Press.

Carlsmith, K. M., Wilson, T. D., & Gilbert, D. T. (2008). The paradoxical consequences of revenge. Journal of Personality and Social Psychology, 95, 1316-1324. doi: 10.1037/a0012165

Funk, F., McGeer, V., & Gollwitzer, M. (2014). Get the message: Punishment is satisfying if the transgressor responds to its communicative intent. Personality and Social Psychology Bulletin, 40, 986-997. doi: 10.1177/0146167214533130

Geronimo. (1983). Geronimo’s story of his life. New York: Irvington.

Gollwitzer, M., & Denzler, M. (2009). What makes revenge sweet: Seeing the offender suffer or delivering a message? Journal of Experimental Social Psychology, 45, 840-844. doi: 10.1016/j.jesp.2009.03.001

Gollwitzer, M., Meder, M., & Schmitt, M. (2011). What gives victims satisfaction when they seek revenge? European Journal of Social Psychology, 41, 364-374. doi: 10.1002/ejsp.782

[1] Geronimo (1983).

[2] Boehm (1987).

[3] Carlsmith, Wilson, and Gilbert (2008).

[4] Funk, McGeer, and Gollwitzer (2014); Gollwitzer and Denzler (2009); Gollwitzer, Meder, and Schmitt (2011).

A P-Curve Exercise That Might Restore Some of Your Faith in Psychology

I teach my university’s Graduate Social Psychology course, and I start off the semester (as I assume many other professors who teach this course do) by talking about research methods in social psychology. Over the past several years, as the problems with reproducibility in science have become more and more central to the discussions going on in the field, my introductory lectures have gradually become more dismal. I’ve come to think that it’s important to teach students that most research findings are likely false, that there is very likely a high degree of publication bias in many areas of research, and that some of our most cherished ideas about how the mind works might be completely wrong.

In general, I think it’s hard to teach students what we have learned about the low reproducibility of many of the findings in social science without leaving them with a feeling of anomie, so this year, I decided to teach them how to do p-curve analyses so that they would at least have a tool that would help them to make up their own minds about particular areas of research. But I didn’t just teach them from the podium: I sent them away to form small groups of two to four students who would work together to conceptualize and conduct p-curve analysis projects of their own.

I had them follow the simple rules that are specified in the p-curve user’s guide, which can be obtained here, and I provided a few additional ideas that I thought would be helpful in a one-page rubric. I encouraged them to make sure they were sampling from the available population of studies in a representative way. Many of the groups cut down their workload by consulting recent meta-analyses to select the studies to include. Others used Google Scholar or Medline. They were all instructed to follow the p-curve manual chapter-and-verse, and to write a little paper in which they summarized their findings. The students told me that they were able to produce their p-curve analyses (and the short papers) in 15-20 person-hours or less.
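For readers who want the intuition behind the method: under a true effect, significant p-values pile up near zero (a right-skewed p-curve), whereas a literature of pure false positives yields uniformly distributed p-values. The sketch below simulates that logic; it is only an illustration of the skew idea, not the actual p-curve procedure (which uses the formal tests described in the user’s guide), and the function names and parameter values are my own.

```python
import math
import random

def z_test_p(n, d, rng):
    """Two-sided p-value from a two-sample z-test (sd = 1 assumed),
    with group size n and true mean difference d."""
    m0 = sum(rng.gauss(0.0, 1.0) for _ in range(n)) / n
    m1 = sum(rng.gauss(d, 1.0) for _ in range(n)) / n
    z = (m1 - m0) / math.sqrt(2.0 / n)
    # Normal CDF via the error function: Phi(x) = 0.5 * (1 + erf(x / sqrt 2))
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

def right_skew(n_studies=2000, n=50, d=0.5, seed=1):
    """Among the significant results (p < .05), return the share with
    p < .025. Well above one half means a right-skewed p-curve, i.e.
    evidentiary value; near one half is what a pile of false positives
    looks like."""
    rng = random.Random(seed)
    ps = [z_test_p(n, d, rng) for _ in range(n_studies)]
    sig = [p for p in ps if p < 0.05]
    return sum(p < 0.025 for p in sig) / len(sig)
```

With a true effect of d = 0.5 the share of significant p-values below .025 comes out well above one half, while with d = 0 it hovers around one half: exactly the diagnostic contrast the p-curve method formalizes.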

This past week, all ten groups of students presented the results of their analyses, and their findings were surprisingly (actually, puzzlingly) rosy: All ten of the analyses revealed that the literatures under consideration possessed evidentiary value. Ten out of ten. None of them showed evidence for intense p-hacking. On the basis of their conclusions (coupled with the conclusions that previous meta-analysts had made about the size of the effects in question), it does seem to me that there really is license to believe a few things about human behavior:

(1) Time-outs really do reduce undesirable behavior in children (parents with young kids take notice);

(2) Expressed Emotion (EE) during interactions between people with schizophrenia and their family members really does predict whether the patient will relapse in the subsequent 9-12 months (based on a p-curve analysis of a sample of the papers reviewed here);

(3) The amount of psychological distress that people with cancer experience is correlated with the amounts of psychological distress that their caregivers manifest (based on a p-curve analysis of a sample of the papers reviewed here);


(4) Men really do report more distress when they imagine their partners’ committing sexual infidelity than women do (based on a p-curve analysis of a sample of the papers reviewed here; caveats remain about what this finding actually means, of course…)

I have to say that this was a very cheering exercise for my students as well as for me. But frankly, I wasn’t expecting all ten of the p-curve analyses to provide such rosy results, and I’m quite sure the students weren’t either. Ten non-p-hacked literatures out of ten? What are we supposed to make of that? Here are some ideas that my students and I came up with:

(1) Some of the literatures my students reviewed involved correlations between measured variables (for example, emotional states or personality traits) rather than experiments in which an independent variable was manipulated. They were, in a word, personality studies rather than “social psychology experiments.” The major personality journals (Journal of Personality, Journal of Research in Personality, and the “personality” section of JPSP) tend to publish studies with conspicuously higher statistical power than do the major journals that publish social psychology-type experiments (e.g., Psychological Science, JESP, and the two “experimental” sections of JPSP), and one implication of this fact, as Chris Fraley and Simine Vazire just pointed out, is that the experiment-friendly journals are likely, ceteris paribus, to have higher false-positive rates than the personality-type journals.

(2) Some of the literatures my students reviewed were not particularly “sexy” or “faddish”—at least not to my eye (Biologists refer to the large animals that get the general public excited about conservation and ecology as the “charismatic megafauna.” Perhaps we could begin talking about “charismatic” research topics rather than “sexy” or “faddish” ones? It might be perceived as slightly less derogatory…). Perhaps studies on less charismatic topics generate less temptation among researchers to capitalize on undisclosed researcher degrees of freedom? Just idle speculation…

(3) The students went into the exercise without any a priori prejudice against the research areas they chose. They wanted to know whether the literatures they focused on were p-hacked because they cared about the research topics and wanted to base their own research upon what had come before—not because they had read something seemingly fishy on a given topic that gave them impetus to do a full p-curve analysis. I wonder if this subjective component to the exercise of conducting a p-curve analysis is going to end up being really significant as this technique becomes more popular.

If you teach a graduate course in psychology and you’re into research methods, I cannot recommend this exercise highly enough. My students loved it, they found it extremely empowering, and it was the perfect positive ending to the course. If you have used a similar exercise in any of your courses, I’d love to hear about what your students found.

By the way, Sunday will be the 1-year anniversary of the Social Science Evolving Blog. I have appreciated your interest.  And if I don’t get anything up here before the end of 2014, happy holidays.

Do Humans Have Innate Concepts for Thinking About Other People?

Gossip is one of life’s greatest consolations and one of our most reliable conversational fall-backs. In a world without gossip, many of us could realize Tim Ferriss’s ideal of a Four-Hour Work Week without even putting any of his advice into practice. Gossip is also, according to the anthropologist Donald Brown (1991), a human universal—one of those pan-human traits that people within every world society can be expected to evince.

Our ability to gossip, as is the case with all ostensive communication, is premised on the idea that our listeners are in possession of concepts that enable them to convert the sounds coming out of our mouths into ideas that resemble those we are trying to convey. Which got me to thinking about the psychology that makes gossip possible: Are there universal “person concepts”—species-typical cognitive representations of particular human traits or attributes—that every human reliably acquires during normal development? If you flew back from your vacation in Tanzania with a Hadza man or woman whom you planned to entertain in your home for a couple of weeks, would the two of you be able to settle into your living room and enjoy a little TMZ* (assuming you spoke Hadza and could translate)? The Hadza are arguably the last full-time hunter-gatherer society on the planet; it’s difficult to imagine a society more different from our own. Could you trust that your Hadza friend had acquired all of the person concepts that would enable him or her to follow the action? Are there any universal and native social concepts upon which all humans rely in order to make social life work?

I’ll get to that in a moment, but first a slightly bigger question: Does the mind contain any native concepts at all? Here in the 21st century, many scholars in the social sciences would answer this question affirmatively, having turned their backs on the most hardcore versions of the Blank Slate theory that Steven Pinker describes in his aptly titled (2002) book, The Blank Slate. (The Major Blank-Slater of Western thought, John Locke, famously wrote, “If we will attentively consider new-born children, we shall have little reason to think that they bring many ideas into the world with them.”) Even so, there is still much to be debated and discovered about innate ideas.

For starters, how many innate ideas are there? Conceding that there are more than zero of them is not a particularly bold claim. Are there handfuls? Dozens? Scores? Many evolutionary psychologists and cognitive scientists prefer large numbers here, and not without good reason: It’s difficult to imagine how even the basic behavioral tasks that humans must accomplish to stay alive—finding food, water, and warmth, for starters—could be accomplished unless the mind contained some built-in conceptual content.

To Find Food, a Newborn Baby Needs FOOD

Since Locke brought up the case of “new-born children,” let’s think about babies for a moment. A newborn infant comes into the world with a pressing problem: She must find something to eat. Locke thought the infant came into the world with the ability to experience hunger, but he did not think the infant came into the world with a concept of FOOD. The so-called Frame Problem, which Daniel Dennett (2006) so vividly described, makes it unlikely that a newborn infant could solve this problem (“Find food”) before she starved unless she had some built-in representation of what FOOD is. The selection pressure for the evolution of a conceptual short-cut here is enormous: Successful food-finding in the first hour after birth is a predictor of infant survival, so that first hour matters. The clock is ticking. Therefore, a cognitive design that requires infants to find food on a blind trial-and-error basis is likely to be a losing design in comparison to a design that comes with a built-in concept for FOOD from the outset.

For human infants, the FOOD concept involves the activity of neurons that respond to the olfactory properties of specific volatile chemicals that human mothers emit via the breast, possibly along with visual and tactile features of the human breast as well (Schaal et al., 2009). Through a matching-to-template process, human infants can quickly locate breast-like objects in their environments, which of course are the only objects in the universe that are specially designed to provide human neonates with nutrition and hydration.

What about More “Complex” Concepts?

Convincing you that human neonates possess an innate concept for FOOD is perhaps an easy sell, but in a recent paper in Current Directions in Psychological Science, Andy Delton and Aaron Sell (2014) argued that humans come to possess a variety of universal and reliably developing social concepts as well, which enable them to regulate the universal components of human social life. For Delton and Sell, there can be “no motivation without representation,” so if there are certain adaptive challenges that humans have evolved behavioral programs to surmount, there should also be concepts within the human mind that enable them to parse their worlds into adaptively meaningful units so that the stimuli that are relevant to achieving those adaptive goals can be easily identified.

Delton and Sell’s list of candidates for intuitive concepts (which they in no way claim to be exhaustive) includes COOPERATOR, FREE RIDER, NEWCOMER, KINSHIP, ROMANTIC PARTNER, ROMANTIC RIVAL, ENTITLEMENT, DISRESPECT, INGROUP, and OUTGROUP, among others (see the Table below). The claim here, again, is that if humans are going to have evolved goals that involve “establishing cooperative relationships,” “deterring free riders,” or “evaluating whether to engage in trade with someone from an outgroup,” they will need concepts to represent what COOPERATORS, FREE RIDERS, and OUTGROUPS actually are. There can be no motivation without representation.

(By the way, to claim that such concepts are “innate” or “native” is not to claim that they are present in the mind from birth, but rather, that the human genome possesses the programs for assembling these representations within the mind at developmentally appropriate points in the human life cycle, and with appropriate kinds of environmental inputs. Concepts come and concepts go as we develop. Think of how the concept of FOOD gets overwritten once infants turn away from breast milk and toward other foods during the first three to four years of life. The FOOD concept within the mind/brain changes over ontogeny, but the genes that give rise to that initial FOOD concept—which the infants match against environmental inputs on the basis of the olfactory, visual, and tactile information—remain in the genome and are passed on to one’s genetic heirs so the concept can be re-constructed during ontogeny.)

From Delton and Sell (2014)

Looking for Universal Concepts in the Dictionary

Another paper was recently published that provides some confirmatory evidence, of a sort, for Delton and Sell’s position. The personality psychologist Gerard Saucier and his colleagues (2014) read through the English dictionaries representing the languages of 12 geographically and linguistically distinct cultural groups from all over the world (see Table below) in hopes of finding the universal concepts that humans use to parse up the actions and dispositions of other humans.

From Saucier et al. (2014)

The logic behind Saucier et al.’s effort was straightforward: All human societies should end up making words to represent the attributes that humans universally use to parse their social lives—presuming, I suppose, that those concepts are worth talking about. (Universal social concepts for which humans universally make words might be only a subset of all universal social concepts: Some universal social concepts might not be worth talking about, though I can’t think off-hand of what such concepts might be. Can you?)

By scouring these dictionaries, Saucier and colleagues ultimately located nearly 17,000 words across the 12 languages that could be used to refer to human attributes. Through a reduction process that enabled them to throw out synonyms and variations on common roots (fool, foolish, foolishly, “to fool,” and “to be fooled” can all be reduced to a single attribute concept, as can all of the other words that gloss in English as “to be foolish”), they were able to greatly simplify the number of attribute concepts within each language to more manageable numbers.
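The reduction step can be pictured as grouping words by a shared root. Here is a toy sketch of that idea; the crude suffix-stripping rule and the word list are my own illustration, not Saucier et al.’s actual (hand-coded) procedure:

```python
from collections import defaultdict

def crude_root(word: str) -> str:
    # Strip a few common English suffixes -- a crude stand-in for the
    # root-matching judgments that were made by hand in the actual study.
    for suffix in ("ishly", "ish", "ly", "ing", "ed"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

words = ["fool", "foolish", "foolishly", "fooled", "fooling", "wise", "wisely"]

concepts = defaultdict(list)
for w in words:
    concepts[crude_root(w)].append(w)

# Seven words collapse to two candidate attribute concepts.
print(sorted(concepts))  # ['fool', 'wise']
```

The point is only that many surface forms map onto far fewer underlying attribute concepts, which is what makes a 17,000-word list tractable.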

Having reduced each language’s human attribute lexicon down in this fashion, they then looked for attribute terms that cropped up in either (a) all 12 of the languages they studied; or (b) 11 of the 12 languages they studied. With their “11 out of 12” rule, they were taking a cue from the anthropologist Donald Brown, who argued that “Human Universals” should be manifest in the ethnographic materials for 95% of the world’s societies. Placing the empirical estimate at 100% would be too strict because it doesn’t allow for ethnographers’ oversights. With only 12 dictionaries to work with, 11 out of 12 is as close as you can come to 95%.
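The “11 out of 12” criterion is simple to express: count how many of the twelve languages have a word for each attribute concept, and keep the concepts at or above the cutoff. A sketch with made-up coverage numbers (the concept-to-language counts below are illustrative placeholders, not the study’s data):

```python
# Hypothetical data: concept -> number of the 12 languages with a word for it.
coverage = {"GOOD": 12, "STRANGER": 11, "JEALOUS": 11, "SARCASTIC": 6}

n_languages = 12
cutoff = n_languages - 1  # "11 out of 12"

universal = sorted(c for c, n in coverage.items() if n >= cutoff)
print(universal)         # GOOD, JEALOUS, and STRANGER pass; SARCASTIC does not
print(cutoff / n_languages)  # 11/12 is about 0.917, the closest one can get
                             # to Brown's 95% rule with only 12 dictionaries
```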

What Saucier and colleagues discovered was fascinating. All twelve languages had human attribute concepts corresponding to BAD, GOOD, USELESS, BEAUTIFUL, DISOBEDIENT, STUPID, ALIVE, BLIND, SICK, STRONG, TIRED, WEAK, WELL, AFRAID, ANGRY, ASHAMED, JEALOUS, SURPRISED, BIG, LARGE, SMALL, HEAVY, OLD, and YOUNG. If you use the slightly more lenient “11 out of 12” criterion for judgments of universality, you get to add EVIL, HANDSOME, GOSSIP, HUMBLE, LOVE, CLUMSY, DRUNK, FOOLISH, QUICK, SLOW, UNABLE, WISE, DEAD, SLEEPY, HUNGRY, PAIN, PLEASURE, THIRSTY, HAPPY, SATISFIED, TROUBLED, FAT, LITTLE, SHORT, TALL, MARRIED, POOR, RICH, and STRANGER.

To me, this is a fascinating list. Some of the traits on the list involve moral evaluation (e.g., BAD, GOOD, EVIL, HUMBLE). Others clearly have to do with physical health, condition, or capacity for work (e.g., ALIVE, BLIND, SICK, WELL, QUICK, SLOW, UNABLE, STRONG, WEAK). Others relate to reproductive value (e.g., BEAUTIFUL, HANDSOME, MARRIED), and age (YOUNG, OLD). Many of the universals relate to more temporary motivations, emotions, and behavioral dispositions (e.g., TIRED, AFRAID, ANGRY, ASHAMED, JEALOUS, SURPRISED, PLEASURE, THIRSTY, HUNGRY, PAIN, COLD, HOT). And still others are associated with reliability and judgment (e.g., CLUMSY, WISE, RIGHT, USELESS). I think Delton and Sell would be especially pleased to see that STRANGER even makes it to the list—consistent with their speculation that humans possess innate “NEWCOMER TO A COALITION” and “OUTGROUP” concepts.

I wouldn’t want to overstate the significance of Saucier and colleagues’ findings (although I think the findings are extremely important): As I mentioned above, just because we lack a word for something doesn’t mean we don’t have an innate concept for it (remember that infants can find food because they come into the world with a well-developed FOOD concept, even though they can’t converse with you about food). Saucier’s list of universal person words almost surely does not exhaust the list of evolved person concepts that humans reliably acquire through ontogeny, but it might be a decent rough draft of the set of person concepts that all adults eventually find regular occasions to gossip about. And of course, if you can count on the fact that your Hadza houseguest has concepts for Bad, Good, Beautiful/Handsome, Love, Drunk, Sick, Ashamed, Jealous, Fat, Short, Old, Young, Rich, and Poor, then translating an episode of TMZ for him or her should be no trouble whatsoever.

Postscript: After reading this post, Paul Bloom wrote me to ask why I “didn’t mention the enormous developmental psych literature that looks at exactly this question—work that studies babies with an eye toward exploring exactly which concepts are innate and which are learned, e.g., Carey, Baillergeon, Wynn, Spelke, Gergely, Leslie, and so on.” (Paul was too modest, I think, to put himself on this list, but he should have.) Hat in hand, I couldn’t agree more. If you don’t know the work of the scientists that Paul mentioned above, you can look to it for further evidence that humans come into the world with complex social concepts. ~MEM


Brown, D. E. (1991). Human universals. Boston, MA: McGraw-Hill.

Delton, A. W., & Sell, A. (2014). The co-evolution of concepts and motivation. Current Directions in Psychological Science, 23(2), 115-120.

Dennett, D. C. (2006). Cognitive wheels: The Frame Problem of AI. New York: Routledge.

Pinker, S. (2002). The blank slate: The modern denial of human nature. New York: Viking.

Saucier, G., Thalmayer, A. G., & Bel-Bahar, T. S. (2014). Human attribute concepts: Relative ubiquity across twelve mutually isolated languages. Journal of Personality and Social Psychology, 107(1), 199-216.

Schaal, B., Coureaud, G., Doucet, S., Delaunay-El Allam, M., Moncomble, A.-S., Montigny, D., . . . Holley, A. (2009). Mammary olfactory signalisation in females and odor processing in neonates: Ways evolved by rabbits and humans. Behavioural Brain Research, 200, 346-358.

*I trust that you were able to infer that by TMZ I meant the celebrity gossip show and not the cancer drug or the Soviet motorcycle manufacturer.

The Myth of Moral Outrage

This year, I am a senior scholar with the Chicago-based Center for Humans and Nature. If you are unfamiliar with this Center (as I was until recently), here’s how they describe their mission:

The Center for Humans and Nature partners with some of the brightest minds to explore humans and nature relationships. We bring together philosophers, biologists, ecologists, lawyers, artists, political scientists, anthropologists, poets and economists, among others, to think creatively about how people can make better decisions — in relationship with each other and the rest of nature.

In the year to come, I will be doing some writing for the Center, starting with a piece that has just appeared on their web site. In The Myth of Moral Outrage, I attack the winsome idea that humans’ moral progress over the past few centuries has ridden on the back of a natural human inclination to react with a special kind of anger–moral outrage–in response to moral violations against unrelated third parties:

It is commonly believed that moral progress is a surfer that rides on waves of a peculiar emotion: moral outrage. Moral outrage is thought to be a special type of anger, one that ignites when people recognize that a person or institution has violated a moral principle (for example, do not hurt others, do not fail to help people in need, do not lie) and must be prevented from continuing to do so . . . Borrowing anchorman Howard Beale’s tag line from the film Network, you can think of the notion that moral outrage is an engine for moral progress as the “I’m as mad as hell and I’m not going to take this anymore” theory of moral progress.

I think the “Mad as Hell” theory of moral action is probably quite flawed, despite the popularity that it has garnered among many social scientists who believe that humans possess “prosocial preferences” and a built-in (genetically group-selected? culturally group-selected?) appetite for punishing norm-violators. I go on to describe the typical experimental result that has given so many people the impression that we humans do indeed possess prosocial preferences that motivate us to spend our own resources for the purpose of punishing norm violators who have harmed people whom we don’t know or otherwise care about. Specialists will recognize that the empirical evidence that I am taking to task comes from that workhorse of experimental economics, the third-party punishment game:

…[R]esearch subjects are given some “experimental dollars” (which have real cash value). Next, they are informed that they are about to observe the results of a “game” to be played by two other strangers—call them Stranger 1 and Stranger 2. For this game, Stranger 1 has also been given some money and has the opportunity to share none, some, or all of it with Stranger 2 (who doesn’t have any money of her own). In advance of learning about the outcome of the game, subjects are given the opportunity to commit some of their experimental dollars toward the punishment of Stranger 1, should she fail to share her windfall with Stranger 2.

Most people who are put in this strange laboratory situation agree in advance to commit some of their experimental dollars to the purpose of punishing Stranger 1’s stingy behavior. And it is on the basis of this finding that many social scientists believe that humans have a capacity for moral outrage: We’re willing to pay good money to “buy” punishment for scoundrels.
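The payoff structure of the game described above is easy to lay out in a few lines. This is only a sketch under my own placeholder numbers: real experiments vary the stakes, the third party’s endowment, and the punishment “exchange rate,” and a 3:1 fine ratio is just one common design choice, not a fixed feature of the paradigm:

```python
def third_party_punishment(endowment: int, share: int,
                           punish_spend: int, fine_ratio: int = 3,
                           third_party_stake: int = 10):
    """One round of a stylized third-party punishment game.

    Stranger 1 starts with `endowment` and transfers `share` to Stranger 2.
    The third party may spend `punish_spend` of her own stake, reducing
    Stranger 1's payoff by `fine_ratio` times that amount.
    Returns the final payoffs (Stranger 1, Stranger 2, third party).
    """
    s1_payoff = endowment - share - fine_ratio * punish_spend
    s2_payoff = share
    tp_payoff = third_party_stake - punish_spend
    return s1_payoff, s2_payoff, tp_payoff

# Stranger 1 keeps everything; the third party pays 2 to fine her 6.
print(third_party_punishment(endowment=10, share=0, punish_spend=2))  # (4, 0, 8)
```

The key inferential question in the post is whether the third party’s willingness to pay that 2 reflects genuine outrage on a stranger’s behalf, or something else entirely.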

In the rest of the piece, I go on to point out the rather serious inferential limitations of the third-party punishment game as it is typically carried out in experimental economists’ labs. I also point to some contradictory (and, in my opinion, better) experimental evidence, both from my lab and from other researchers’ labs, that gainsays the widely accepted belief in the reality of moral outrage. I end the piece with a proposal for explaining what the appearance of moral outrage might be for (in a strategic sense), even if moral outrage is actually not a unique emotion (that is, a “natural kind” of the type that we assume anger, happiness, grief, etc. to be) at all.

I don’t want to steal too much thunder from the Center‘s own coverage of the piece, so I invite you to read the entire piece over on their site. Feel free to post a comment over there, or back over here, and I’ll be responding in both places over the next few days.

As I mentioned above, I’ll be doing some additional writing for the Center in the coming six months or so, and I’ll be speaking at a Center event in New York City in a couple of months, which I will announce soon.