Tag Archives: trust

Trust in the Time of Coronavirus: Low Trusters are Particularly Skeptical of Local Officials and Their Own Neighbors

A few days ago, I saw the results of a new Pew poll on Americans’ trust in the wake of the Coronavirus outbreak. The poll, based on a random sample of 11,537 U.S. adults, addressed two questions: Which groups of people and societal institutions do Americans trust right now? And how do their background levels of generalized trust influence their trust in those specific groups of people and institutions?

The takeaway is troubling: High trusters and low trusters have comparable amounts of trust in our federal agencies and national institutions, but they have vastly different amounts of trust in the responses and judgments of their local officials and neighbors.

To examine these issues, the Pew researchers first divided the sample into three groups based on their responses to three standard questions for measuring generalized trust. Helpfully, they called these three subgroups Low Trusters, Medium Trusters, and High Trusters.

As many other researchers have found, generalized trust was associated with ethnicity (white Americans have higher levels of generalized trust than Black and Hispanic Americans do), age (the more you have of one, the more you have of the other), education (ditto), and income (ditto). These results are hardly surprising–ethnicity, age, education, and income are among the most robust predictors of trust in survey after survey–but they do nevertheless provide an interpretive backdrop for the study’s more important findings.

What really struck me were the associations between people’s levels of generalized trust and their sentiments toward public institutions and groups of other people. Low, medium, and high trusters had fairly similar evaluations of how the CDC, the news media, and even Donald Trump were responding: On average, people at all three levels of generalized trust had favorable evaluations of the CDC; on average, people at all three levels of generalized trust had lukewarm evaluations of Trump’s response.

Where the three groups of trusters differed more conspicuously was in their evaluations of their state officials, their local officials, and–most strikingly–ordinary people in their communities. About 80% of high trusters thought their local and state officials were doing an excellent or good job of responding to the outbreak. Only 57% of low trusters said the same.

But the biggest gulf in the sentiments of high trusters and low trusters was in their evaluations of ordinary people in their communities. Eighty percent of high trusters said that ordinary people in their community were doing an excellent or good job in responding to the outbreak. Only 44% of low trusters approved.

 

High trusters, medium trusters, and low trusters also had widely divergent opinions about the responses of ordinary people–both across the country and in their local communities.

Most people, regardless of how much generalized trust they had, thought their state governments, local governments, and local school systems were responding with the right amount of urgency to the outbreak. However, high trusters and low trusters differed greatly in their attitudes toward the responses of their neighbors. Whereas 16% of high trusters thought ordinary people in their local communities were overreacting, 35% of the low trusters–more than twice as many–thought so.

What I find troubling about these statistics is that all epidemics, like all politics, are local. The people who should be best equipped to tell you what’s going on in your community are the people who are paid to know what’s going on in your community and the people who actually live in your community. We’re entitled to clear and accurate information from local officials, and we should be ashamed that local people cannot always trust their judgment. But local officials are not the only source of information that people should be able to trust. An ordinary person in your community could, in principle, tell you whether a teacher at your kid’s school or a cashier at your local grocery store tested positive. How much unnecessary risk do we expose ourselves to when some of us inhabit communities or worldviews that cause us to perceive our local officials and neighbors as liars, incompetents, or chicken-littles?

Behavioral Altruism is an Unhelpful Scientific Category

Altruism has been a major topic in evolutionary biology since Darwin himself, but altruism (the word) did not appear even once in Darwin’s published writings.[1] The word’s absence from Darwin’s writings on the subject is hardly surprising: Altruism had appeared in print for the first time only eight years before The Origin of Species. The coiner was a Parisian philosopher named Auguste Comte.

Capitalizing on the popularity he had already secured for himself among liberal intellectuals in both France and England, Comte argued that Western civilization needed a complete intellectual renovation, starting from the ground up. Not one to shrink from big intellectual projects, Comte set out to do this re-vamping himself, resulting in four hefty volumes. Comte’s diagnosis: People cared too much for their own welfare and too little for the welfare of humanity. The West, Comte thought, needed a way of doing society that would evoke less égoïsme, and inspire more altruisme.

Comte saw a need for two major changes. First, people would need to throw out the philosophical and religious dogma upon which society’s political institutions had been built. In their place, he proposed we seek out new principles, grounded in the new facts emerging from the new sciences of the human mind (such as the fast-moving scientific field of phrenology), human society (sociology), and animal behavior (biology).

Second, people would need to replace Christianity with a new religion in which humanity, rather than the God of the Abrahamic religions, was the object of devotion. In Comte’s new world, the 12-month Gregorian calendar would be replaced with a scientifically reformed calendar consisting of 13 months (each named after a great thinker from the past—for example, Moses, Paul the Apostle, Gutenberg, Shakespeare, and Descartes) of 28 days each (throw in a “Day of the Dead” at the end and you’ve got your 365-day year). Also, the Roman Catholic priesthood would be replaced with a scientifically enlightened, humanity-loving “clergy” with Comte himself—no joke—as the high priest.

Comte’s proposals for a top-down re-shaping of Western society didn’t get quite the reception he was hoping for (though they caught on better than you might think: If you’re ever in Paris or Rio, pay a visit to the Temples of Humanity that Comte’s followers founded around the turn of the 20th century). In England especially, the scientific intelligentsia’s response was frosty. On the advice of his friend Thomas Huxley, Darwin also steered clear of all things Comtean, including altruism.

Nevertheless, altruism was in the air, and its warm reception among British liberals at the end of the 19th century is how the word percolated into everyday language. It’s also why the word is still in heavy circulation today. The British philosopher Herbert Spencer, an intellectual rock star of his day, was a great admirer of Comte, and he played a major role in establishing a long-term home for altruism in the lexicons of biology, social science, and everyday discourse.[2] Spencer used the term altruism in three different senses—as an ethical ideal, as a description of certain kinds of behavior, and as a description for a certain kind of human motivation. (He wouldn’t have understood how to think about it as an evolutionary concept.)[3]

Here, I want to look at Spencer’s second use of the word altruism—as a description of a class of behaviors—because I think it is a deeply flawed scientific concept, despite its wide usage. At the outset, I should note that as a Darwinian concept—an evolutionary pathway by which natural selection can create complex functional design by building traits in individuals that cause them to take actions that increase the rate of replication of genes locked inside their genetic relatives’ gonads—altruism has none of the conceptual problems that behavioral altruism has.

By behavioral altruism, Spencer meant to refer to “all action which, in the normal course of things, benefits others instead of benefiting self.”[4] A variant of this definition is embraced today by many economists and other social scientists, who use the term behavioral altruism to classify all “costly acts that confer benefits on other individuals.”[5] Single-celled organisms are, in principle, as capable of Spencerian behavioral altruism as humans are. Social scientists who subscribe to the behavioral definition of altruism have applied it to a wide range of human behaviors. Have you ever jumped into a pool to save a child or onto a hand grenade to spare your comrades? Donated money to your alma mater or a charity? Given money, a ride, or directions to a stranger? Served in the military? Donated blood, bone marrow, or a kidney? Reduced, re-used, or recycled? Adopted a child? Held open a door for a stranger? Shown up for jury duty? Volunteered for a research experiment? Taken care of a sick friend? Let someone in front of you in the check-out line at the grocery store? Punished or scolded someone for breaking a norm or for being selfish? Taken found property to the lost and found? Tipped a server in a restaurant in a city you knew you’d never visit again? Pointed out when a clerk has undercharged you? Lent your fondue set or chain saw to a neighbor? Shooed people away from a suspicious package at the airport? If so, then you, according to the behavioral definition, are an altruist.[6]

Some economists seek to study behavioral altruism in the laboratory with experimental games in which researchers give participants a little money and then measure what they do with it. The Trust Game, which involves two players, is a great example. We can call the first actor the Investor because he or she is given a sum of money—say, $10—by the experimenter, some or all of which he or she can send to the other actor, whom we might call the Trustee. The Investor knows that every dollar he or she entrusts to the Trustee gets multiplied by a fixed amount—say, 3—so if the Investor transfers $1 to the Trustee, the Trustee now has $3 more in his or her account as a result of that $1 transfer. Likewise, the Investor knows that the Trustee will subsequently decide whether to transfer some money back. Under these circumstances, according to some experimental economists, if the Investor sends money to the Trustee, it is “altruistic” because it is a “costly act that confers an economic benefit upon another individual.”[7] But the lollapalooza of behavioral altruism doesn’t stop there: It’s also altruistic, per the behavioral definition that economists embrace, if the Trustee transfers money back to the Investor. Here, too, one person is paying a cost to provide a benefit to another person.

Notice that motives don’t matter for behavioral altruism. (To social psychologists like Daniel Batson, altruism is a motivation to raise the welfare of another individual, pure and simple. Surprising as it might seem, this is also, in fact, a conceptually viable scientific category. But that’s another blog post.) All that matters for a behavior to be altruistic is that it entails costs to actors and benefits to recipients. Clearly, donating a kidney or donating blood is costly to the donor and beneficial to the recipient, but even when you hold a door open for a stranger, you pay a cost (a few seconds of your time and a calorie or so worth of physical effort) to deliver a benefit to someone else. By this definition, even an insurance company’s agreement to cover the MRI for your (possibly) torn ACL qualifies: After all, the company pays a cost (measured in the thousands of dollars) to provide you with a benefit (magnetic confirmation either that you need surgery or that your injury will probably get better after a little physical therapy).

But a category that lumps together recycling, holding doors for strangers, donating kidneys, serving in the military, and handing money over to someone in hopes of securing a return on one’s investment—simply because they all involve costly acts that confer benefits on others—is a dubious scientific category. Good scientific categories, unlike “folk categories,” are natural kinds—as Plato said, they “carve nature at its joints.” Rather than simply sharing one or more properties that are interesting to a group of humans (for example, social scientists who are interested in a category called “behavioral altruism”), they should share common natural essences, common causes, or common functions. Every individual molecule with the chemical formula H2O is a member of a natural kind—water—because all such molecules share the same basic causes (elements with specific atomic numbers that interact through specific kinds of bonds). These deep properties are common to every molecule of H2O that has ever existed and that ever will exist. Natural kinds are not just depots for things that have some sort of gee-whiz similarity.[8]

If behavioral altruism is a natural kind, then knowing that a particular instance of behavior is “behaviorally altruistic” should enable me to draw some conclusions about its deep properties, causes, functions, or effects. But it doesn’t. All I know is that I’ve done something that meets the definition of behavioral altruism. Even though I have, on occasion, shown up for jury duty, held doors open for strangers, received flu shots, loaned stuff to my neighbors, and even played the trust game, simply knowing that they are all instances of “behavioral altruism” does not enable me to make any non-trivial inferences about the causes of my behavior. By the purely behavioral definition of altruism, I could show up for jury duty to avoid being held in contempt of court, I could give away some old furniture because I want to make some space in my garage, and I could hold the door for someone because I’m interested in getting her autograph. The surface features that make these three behaviors “behaviorally altruistic” are, well, superficial. Knowing that they’re behaviorally altruistic gives me no new raw materials for scientific inference.

So if behavioral altruism isn’t a natural kind, then what kind of kind is it? Philosophers might call it a folk category, like “things that are white,” or “things that fit in a bread box,” or “anthrosonic things,” which comprise all of the sounds people can make with their bodies—for example, hand-claps, knuckle- and other joint-cracking, the lub-dub of the heart’s valves, the pitter-patter of little feet, sneezes, nose-whistles, coughs, stomach growls, teeth-grinding, and beat-boxing. Anthrosonics gets points for style, but not for substance: My knowing that teeth-grinding is anthrosonic does not enable me to make any new inferences about the causes of teeth-grinding because anthrosonic phenomena do not share any deep causes or functions.

Things that are white, things that can fit in a bread box, anthrosonics, things that come out of our bodies, things we walk toward, et cetera–and, of course, behavioral altruism–might deserve entries in David Wallechinsky and Amy Wallace’s entertaining Book of Lists[9], but not in Galileo’s Book of Nature. They’re grab-bags.

~

[1] Dixon (2013).
[2] Spencer (1870-1872, 1873, 1879).
[3] Dixon (2005, 2008, 2013).
[4] Spencer (1879), p. 201.
[5] Fehr and Fischbacher (2003), p. 785.
[6] See, for instance, Silk and Boyd (2010), Fehr and Fischbacher (2003); Gintis, Bowles, Boyd, & Fehr (2003).
[7] Fehr and Fischbacher (2003), p. 785.
[8] Slater and Borghini (2011).
[9] Wallechinsky and Wallace (2005).

REFERENCES

Dixon, T. (2005). The invention of altruism: August Comte’s Positive Polity and respectable unbelief in Victorian Britain. In D. M. Knight & M. D. Eddy (Eds.), Science and beliefs: From natural philosophy to natural science, 1700-1900 (pp. 195-211). Hampshire, England: Ashgate.

Dixon, T. (2008). The invention of altruism: Making moral meanings in Victorian Britain. Oxford, UK: Oxford University Press.

Dixon, T. (2013). Altruism: Morals from history. In M. A. Nowak & S. Coakley (Eds.), Evolution, games, and God: The principle of cooperation (pp. 60-81). Cambridge, MA: Harvard University Press.

Fehr, E., & Fischbacher, U. (2003). The nature of human altruism. Nature, 425, 785-791.

Gintis, H., Bowles, S., Boyd, R., & Fehr, E. (2003). Explaining altruistic behavior in humans. Evolution and Human Behavior, 24, 153-172.

Silk, J. B., & Boyd, R. (2010). From grooming to giving blood: The origins of human altruism. In P. M. Kappeler & J. B. Silk (Eds.), Mind the gap: Tracing the origins of human universals (pp. 223-244). Berlin: Springer Verlag.

Slater, M. H., & Borghini, A. (2011). Introduction: Lessons from the scientific butchery. In J. K. Campbell, M. O’Rourke, & M. H. Slater (Eds.), Carving nature at its joints: Natural kinds in metaphysics and science (pp. 1-31). Cambridge, MA: MIT Press.

Spencer, H. (1870-1872). Principles of psychology. London: Williams and Norgate.

Spencer, H. (1873). The study of sociology. London: H. S. King.

Spencer, H. (1879). The data of ethics. London: Williams and Norgate.

Wallechinsky, D., & Wallace, A. (2005). The book of lists: The original compendium of curious information. Edinburgh, Scotland: Canongate Books.

The Trouble with Oxytocin, Part III: The Noose Tightens for The Oxytocin–>Trust Hypothesis

Might be time to see about having that Oxytocin tattoo removed…

When I started blogging six months ago, I kicked off Social Science Evolving with a guided tour of the evidence for the hypothesis that oxytocin increases trusting behavior in the trust game (a laboratory workhorse of experimental economics). The first study on this topic, authored by Michael Kosfeld and his colleagues, created a big splash, but most of the studies in its wake failed to replicate the original finding. I summarized all of the replications in a box score format (I know, I know: Crude. So sue me.) like so:

Box Score_Dec2013

By my rough-and-ready calculations, at the end of 2013 there were about 1.25 studies’ worth of successful replications of the original Kosfeld results, but about 3.75 studies’ worth of failed replications (see the original post for details). Even six months ago, the empirical support for the hypothesis that oxytocin increases trust in the trust game was not looking so healthy.

I promised that I’d update my box score as I became aware of new data on the topic, and a brand new study has just surfaced. Shuxia Yao and colleagues had 104 healthy young men and women play the trust game with four anonymous trustees. One of those four trustees (the “fair” trustee) returned enough of the subject’s investment to cause the subject and the trustee to end up with equal amounts of money; the other three trustees (designated as the “unfair players”) declined to return any money to the subject at all.

Next, subjects were randomly assigned to receive either the standard dose of intranasal oxytocin, or a placebo. Forty-five minutes later, participants were told that they would receive an instant message from the four players to whom they had entrusted money during the earlier round of the trust game. The “fair” player from the earlier round, and one of the “unfair” players, sent no message at all. The second unfair player sent a cheap-talk sort of apology, and the third unfair player offered to make a compensatory monetary transfer to the subject that would make their payoffs equal.

Finally, study participants took part in a “surprise” round of the trust game with the same four strangers. The researchers’ key question was whether the subjects who had received oxytocin would behave in a more trusting fashion toward the four players from Round 1 than the participants who received a placebo instead.

They didn’t.

In fact, the only hint that oxytocin did anything at all to participants’ trust behaviors was a faint statistical signal that oxytocin caused female participants (but not male participants) to treat the players from Round 1 in a less trusting way. If anything, oxytocin reduced women’s trust. I should note, however, that this females-only effect for oxytocin was obtained using a statistically questionable procedure: The researchers did not find a statistical signal of an interaction between oxytocin and subjects’ sex, and without such a signal, their separation of the men’s and the women’s data for further analyses really wasn’t licensed. But regardless, the Yao data fail to support the idea that oxytocin increases trusting behavior in the trust game.
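The subgroup problem here can be made concrete with a toy calculation. The sketch below uses invented cell means (none of these numbers come from the Yao study); the point is only the logic: splitting the data by sex is licensed when the OT effect reliably *differs* between the sexes, which is exactly what the treatment-by-sex interaction contrast measures.

```python
# The subgroup complaint in miniature: a sex-specific analysis is licensed
# only if the treatment-by-sex interaction is reliably nonzero. These cell
# means are invented for illustration; none come from the Yao study.

cell_means = {
    ("OT", "female"): 6.0, ("placebo", "female"): 7.0,  # OT looks lower...
    ("OT", "male"):   7.2, ("placebo", "male"):   7.0,  # ...only among women
}

ot_effect_female = cell_means[("OT", "female")] - cell_means[("placebo", "female")]
ot_effect_male = cell_means[("OT", "male")] - cell_means[("placebo", "male")]

# The interaction contrast is the difference between the two simple effects.
# A formal test of this contrast (not just its point estimate) should come
# before any analysis of the sexes separately.
interaction = ot_effect_female - ot_effect_male
```

Without a statistical test showing that this contrast differs from zero, a "females-only" effect is just a difference between one noisy estimate and another.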

It’s time to update the box score:

Box_Score_Jun2014

In the wake of the original Kosfeld findings, 1.25 studies worth of results have accumulated to suggest that oxytocin does increase trust in the trust game, but 4.75 studies worth of results have accumulated to suggest that it doesn’t.
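For the curious, here is the bookkeeping behind those numbers as a minimal Python sketch. The fractional weights are the ones I assigned in Part I (Baumgartner 50/50, Mikolajczak 75/25, the rest scored as full failures), with Yao now added as a full failure:

```python
# Box-score arithmetic: each study contributes a fraction counted as a
# successful replication; the remainder counts as a failure to replicate.

success_fraction = {
    "Baumgartner et al. (2008)": 0.50,
    "Mikolajczak et al.":        0.75,
    "Barraza (2010)":            0.00,
    "Klackl et al. (2012)":      0.00,
    "Ebert et al. (2013)":       0.00,
    "Yao et al. (2014)":         0.00,
}

successes = sum(success_fraction.values())                # 1.25 studies' worth
failures = sum(1 - f for f in success_fraction.values())  # 4.75 studies' worth
```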

It seems to me that the noose is getting tight for the hypothesis that intranasal oxytocin increases trusting behavior in the trust game. But let’s stay open-minded a while longer. As ever, if you know of some data out there that I should be including in my box score, please send me the details. I’ll continue updating from time to time.

The Trouble with Oxytocin, Part I:
Does OT Actually Increase Trusting Behavior?

It’s the holiday season, when many people try to clear a little mental space for thoughts about peace on earth and good will toward humanity. In this spirit, I thought I’d inaugurate this blog with a close look at an endocrine hormone that, according to some researchers, can promote trust, generosity, empathy, and, yes, even world peace. I’m referring, of course, to oxytocin (OT).

I’ve been involved with a few research projects on OT over the past few years, mostly in collaboration with my former PhD student Ben Tabak (plus some other colleagues here in Miami), but I’ve made no secret of my concerns about the validity of the techniques that scientists use to measure and manipulate OT experimentally. I also remain unconvinced that intranasally administered OT even makes it into the human brain in the first place. (Many experts think the brain is involved in the control of behavior, so this particular gap in our scientific knowledge seems to me like a problem that OT researchers should be taking a lot more seriously.)

I’ll probably write about these issues in the future, but for now I want to look closely at a much more circumscribed OT-related idea that took the scientific world by storm a few years back. This is the notion that spraying a little OT up people’s noses causes them to become more trusting toward strangers. Let’s look at the initial test of this hypothesis, as well as the evidence that emerged in the wake of the initial experiment, with the goal of estimating the strength of the evidence both for, and against, this charming idea.

The Kosfeld (2005) Experiment

In the very first experiment on oxytocin’s effect on trusting behavior, which bore the definitive title “Oxytocin increases trust in humans” [1], Kosfeld and colleagues randomly assigned 58 healthy men to receive either OT, or an equivalent amount of placebo, via a nasal spray. After the sprays had been given a chance to “kick in” (50 minutes), participants played four rounds (each time with different partners) of the Trust Game—one of the workhorses of experimental economics. The Trust Game is a two-player game in which one player takes on the role of the Investor (these are the subjects whose oxytocin-influenced behavior matters for our purposes here), and the other takes on the role of the Trustee. The Trust Game is hard to describe succinctly, but the Kosfeld paper has a helpful illustration.

Trust Game_Kosfeld

The Trust Game is a two-stage game. In Stage 1, the Investor chooses how much money (in the Kosfeld experiment, either 0, 4, 8, or 12 “monetary units,” or “MU”) from a bolus of 12 MUs (which the experimenter provides) to transfer to an anonymous Trustee. (Participants are told that these MUs will be converted into real cash after the experiment ends.) The experimenters typically triple the transfer on its way to the Trustee. As a consequence, if the Investor sends 4 MU to the Trustee from her bolus of 12 MU (second branch from the left, marked “4”), the Trustee will finish Stage 1 with her original 12 MU, plus the additional 4 MU * 3 = 12 MU that result from the 4-MU transfer from the Investor (after the experimenters multiply that transfer by 3). In contrast, the Investor will be left with 12 – 4 = 8 MU at the end of Stage 1.

In Stage 2, the Trustee is given a choice to send as much or as little of her 24 MU back to the Investor as she wishes. This is called a back-transfer. If the Trustee chooses to send 0 back, she keeps all 24 MU for herself. Anything she sends back to the Investor gets subtracted from the Trustee’s 24 MUs, and is added to the 8 MU that remained in the Investor’s account at the end of Stage 1. The game is called the Trust Game under the assumption that people generally like money and prefer to have as much of it as possible. Under this assumption, it makes sense to conceptualize Investors’ choices about how much to send to their Trustees during Stage 1 as measures of their trust that the Trustees will reciprocate during Stage 2.
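For readers who like to see the arithmetic spelled out, here is a minimal Python sketch of the two stages as described above. The 12-MU endowments, the restricted transfer amounts, and the tripling rule come from the Kosfeld design; the function names and structure are mine:

```python
# A sketch of the two-stage Trust Game payoff arithmetic described above.

MULTIPLIER = 3   # each MU the Investor transfers is tripled in transit
ENDOWMENT = 12   # MUs the experimenter gives each player at the start

def stage1(transfer):
    """Stage 1: the Investor transfers 0, 4, 8, or 12 MU to the Trustee.

    Returns (investor_balance, trustee_balance) at the end of Stage 1.
    """
    assert transfer in (0, 4, 8, 12)
    investor = ENDOWMENT - transfer
    trustee = ENDOWMENT + MULTIPLIER * transfer
    return investor, trustee

def stage2(investor, trustee, back_transfer):
    """Stage 2: the Trustee sends back any amount she holds (not multiplied)."""
    assert 0 <= back_transfer <= trustee
    return investor + back_transfer, trustee - back_transfer

# The worked example from the text: a 4-MU transfer leaves the Investor
# with 8 MU and the Trustee with 12 + 12 = 24 MU.
investor, trustee = stage1(4)
```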

So, the key question is this: Did OT increase Investors’ Stage 1 transfers in the Kosfeld experiment? That is, did OT increase their trusting behavior? Here’s what the authors wrote: “The investors’ average transfer is 17% higher in the oxytocin group (Mann-Whitney U-test; z = -1.897, P = 0.029, one-sided), and the median transfer in the oxytocin group is 10MU, compared to a median of only 8MU for subjects in the placebo group” (p. 674). The figure below, also from the Kosfeld paper, shows the distribution of transfers for the OT group and the placebo group.

OT_TRUST_DISTRIB_KOSFELD_CORRECT

Look at the far right side of the figure: The difference in the percentages of participants in the OT and placebo conditions who transferred all of their MUs (12) to their four Trustees is really quite arresting. The authors summarize this result on p. 674: “Out of the 29 subjects, 13 (45%) in the oxytocin group showed the maximal trust level [that is, they entrusted all of their MUs to their Trustees on all 4 rounds], whereas only 6 of the 29 subjects (21%) in the placebo group showed maximal trust.” Mind you, a statistical purist would likely have winced at the researchers’ use of a one-tailed statistical test—especially since the difference in the distributions for the two groups would not have registered as statistically significant at p < .05 (which signals that the results would be expected less than 5% of the time in a world in which the null hypothesis is true) with a two-tailed test. Nevertheless, just by looking at the figure you can understand why the authors got excited by their data.
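The one-tailed-versus-two-tailed point is easy to check for yourself. For a symmetric test statistic like the Mann-Whitney z, the two-sided p-value is simply double the one-sided value, so the reported p = .029 corresponds to roughly p = .058 two-sided. A normal approximation from the reported z makes this concrete:

```python
# One-sided vs. two-sided p-values from the z = -1.897 reported in the paper.
# For a symmetric statistic, p_two_sided = 2 * p_one_sided, so the reported
# one-sided p of .029 crosses the conventional .05 line when doubled.
from math import erf, sqrt

def normal_cdf(z):
    """Standard normal cumulative distribution function, via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

z = -1.897
p_one_sided = normal_cdf(z)            # ~0.029, matching the reported value
p_two_sided = 2 * normal_cdf(-abs(z))  # ~0.058, no longer below .05
```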

The Kosfeld paper has become a citation classic. Google Scholar tells me that it has been cited 1,673 times as of today (by way of comparison, Watson and Crick’s 1953 Nature paper on the structure of DNA, which has also been sort of important for science, has been cited 9,130 times). But is it correct? That is to say, are the Kosfeld findings robust enough to license the conclusion that oxytocin really does increase trust in humans? Allow me to lay out the post-Kosfeld evidence so you can make up your own mind. I have located five post-Kosfeld experiments that examined the effects of intranasal OT on trusting behavior in the trust game, and I restrict my remarks to those experiments only. (I’m ignoring studies on people’s self-reported trust of strangers, for example, as well as a few other experiments that have used experimental games other than the trust game.) I have scored each of these five replication experiments as either a successful replication or a failure to replicate (or some admixture of success and failure). (Caveat lector: None of these studies is an exact replication of Kosfeld.)

The Post-Kosfeld Experiments

Replication 1: Baumgartner et al. (2008). Baumgartner and colleagues ran a reasonably close replication of the Kosfeld experiment, though they modified the protocol so participants could play the trust games while their brains were being scanned via fMRI.[2] Forty-nine men, randomly assigned to receive either OT or placebo, played a series of six trust games (interleaved with six other kinds of games, which I’m ignoring) with anonymous partners. At the end of the first six trust games, Investors received the feedback that only 50% of their Trustees had made back-transfers. After this disappointing feedback, the Investors played six new trust games (interleaved with some other games) with six new anonymous partners. The figure below, from the supplemental online materials for the paper, shows the main results.

Baumgartner_FIGURE_SOI

As you can see on the left side of the figure, OT did not meaningfully increase trust during the first six “Pre-Feedback” rounds. Baumgartner and colleagues mostly ignored those results, however, and focused their discussion instead on the right side of the figure: In the six “Post-Feedback” Trust Games, OT participants entrusted significantly more money to their Trustees, on average, than did the placebo participants.

But it seems to me that we, as dispassionate consumers, are ill-advised to discount the lack of OT-vs.-placebo differences on the Pre-Feedback rounds: I myself am going to score them as an unambiguous “failure to replicate.” Nevertheless, it’s nearly Christmas, and science would stop progressing if we were unwilling to open our minds to new ideas, so I’m happy to score the results from the post-feedback rounds as a “successful replication” of Kosfeld. I am going to score Baumgartner, then, as a 50% successful replication and a 50% failure to replicate.

Replication 2: Mikolajczak et al. [3] Mikolajczak and colleagues randomly assigned 60 healthy men to either OT or placebo, and then had them play ten trust games with partners who had been described as “reliable,” and ten with partners who had been described as “unreliable” (and some other trials that aren’t directly relevant here). Men in the OT group entrusted more money, on average, to partners who had been described as “reliable” than did men in the placebo group, although there was no OT-vs.-placebo difference in the amounts entrusted to partners who had been described as “unreliable.” The results for the “reliable” partners can be interpreted as a reasonably successful replication of Kosfeld, and a good story can be told for why the results for “unreliable” partners are not a failure to replicate Kosfeld, but I’m not sure whether we can just ignore the lack of OT effects for unreliable partners entirely. I am going to score Mikolajczak as a 75% successful replication and a 25% failure to replicate. I admit that this is a hard one to call, though, and other people of good will could come to different conclusions about how to score this study.

Replication 3: Barraza (2010). Jorge Barraza [4] found that 44 healthy men who received OT did not invest more money in four consecutive trust games than did 22 men who received placebo (disclosure: I was an outside reader of Jorge’s dissertation, and co-authored a paper based on some of the results he obtained during that work). I’m calling this one a 100% failure to replicate. Take note that Investors played their four games with a single anonymous partner, with feedback on the back-transfers after each game, which makes this experiment a bit different from the others included here. Even so, it’s a mistake to exclude Barraza if we want to know whether Kosfeld and colleagues were right to claim that “Oxytocin increases trust in humans.”

Replications 4 and 5: Klackl et al. (2012) and Ebert et al. (2013). Only two more to go. Klackl and colleagues performed a fairly close replication of the 2008 Baumgartner paper with 40 healthy men (sans fMRI) and found that participants who received OT did not, on average, send more money to partners during six pre-feedback games, or during six post-feedback games.[5] (This study, therefore, is not only a failure to replicate Kosfeld, but also a failure to replicate Baumgartner.) Finally, Ebert et al. found that 26 people (13 who had been diagnosed with Borderline Personality Disorder and 13 non-diagnosed controls; mostly women) were no more trusting of 20 strangers in a series of trust games following OT administration than they were following administration of a placebo (all 26 participants did OT trials on one occasion, and placebo trials on another occasion, with counterbalancing).[6] On this basis, I’m calling Ebert, too, a 100% failure to replicate.

Summing Up

So, does OT increase trust in humans? The Kosfeld experiment found a faint statistical signal (remember, p = .029, one-tailed) for an effect of OT across a series of trust games with different Trustees, but statistical hard-liners who would insist on a p value less than .05—two-tailed—might reasonably argue that Kosfeld did not even find a phenomenon in need of replication to begin with. That said, the post-feedback rounds from Baumgartner look quite consistent with the claim that OT increases trusting behavior, as do Mikolajczak’s results for “reliable” partners (though I can’t convince myself to call Mikolajczak a 100% successful replication because of the failure to find effects for the “unreliable” partners). On the other hand, the pre-feedback rounds from Baumgartner, and the results from Barraza, Klackl, and Ebert, look to me like out-and-out failures to replicate Kosfeld.  (Plus, I’m going to weight 25% of the Mikolajczak results as a failure to replicate; again, I don’t think we can just ignore the lack of effects for unreliable partners, or pretend that the original Kosfeld hypothesis explicitly entails such a pattern.)
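The statistical hard-liners’ point is easy to check: for a symmetric test statistic, a one-tailed p-value doubles when converted to its two-tailed equivalent, which pushes Kosfeld’s result past the conventional .05 cutoff. A minimal sketch of that arithmetic (the doubling rule assumes a symmetric test statistic):

```python
# Kosfeld's reported one-tailed p-value for the OT effect on trust.
p_one_tailed = 0.029

# For a symmetric test statistic, the two-tailed p is double the
# one-tailed p (capped at 1.0).
p_two_tailed = min(2 * p_one_tailed, 1.0)

print(p_two_tailed)           # 0.058
print(p_two_tailed < 0.05)    # False: misses the two-tailed cutoff
```

In other words, evaluated two-tailed, the original effect would not have cleared p < .05 in the first place.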

Adding up these scores, then, leads me to conclude that the original Kosfeld results have been succeeded by 1.25 studies’ worth of successful replications and 3.75 studies’ worth of failures to replicate. Here’s the box score for the replications:

 

Replication        Success    Failure
1. Baumgartner       .50        .50
2. Mikolajczak       .75        .25
3. Barraza             0        1.0
4. Klackl              0        1.0
5. Ebert               0        1.0
Total                1.25       3.75
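For anyone who wants to audit my arithmetic, the tally works out as a few lines of Python (a sketch of my scoring only; the weights are the ones I assigned above, not anything reported in the original papers):

```python
# My success weights for each replication; each study's failure
# weight is the complement (success + failure = 1 per study).
scores = {
    "Baumgartner": 0.50,
    "Mikolajczak": 0.75,
    "Barraza":     0.00,
    "Klackl":      0.00,
    "Ebert":       0.00,
}

success = sum(scores.values())
failure = sum(1.0 - s for s in scores.values())

print(success, failure)     # 1.25 3.75
print(failure / success)    # 3.0 -- failures outnumber successes 3:1
```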

With the relevant post-Kosfeld data favoring failures to replicate by 3:1, I think a dispassionate reader is justified in not believing that OT increases trusting behavior–at least not in the context of the trust game. Should we do a few more studies just to make sure? Fine by me, but it seems to me that we, as a field, should have some sort of stop-rule that would tell us when to turn away from this hypothesis entirely–as well, of course, as how much data in support of the hypothesis we would need to justify our acceptance of it. In addition, I’m struck by the fact that no one has ever gotten around to reporting the results of an exact replication of Kosfeld. In light of the Many Labs Projects’ recent successes in identifying experimental results that do and do not replicate, I’d personally be content to believe the results of several (five, perhaps?) large-N, coordinated, pre-registered exact replications of the Kosfeld experiment. But until then, or until new data come in that are relevant to this question, I know what I am going to believe.

By the way, if you don’t like how I scored the studies, I would be curious to know how you would synthesize these results to come to your own conclusion. Also, there could be other data on this topic out there that I have failed to include. If you’ll let me know about them, I’ll get around to incorporating them here and updating my box score accordingly.

References

1. Kosfeld, M., et al., Oxytocin increases trust in humans. Nature, 2005. 435: p. 673-676.

2. Baumgartner, T., et al., Oxytocin shapes the neural circuitry of trust and trust adaptation in humans. Neuron, 2008. 58: p. 639-650.

3. Mikolajczak, M., et al., Oxytocin makes people trusting, not gullible. Psychological Science, 2010. 21: p. 1072-1074.

4. Barraza, J.A., The physiology of empathy: Linking oxytocin to empathic responding. 2010, Unpublished Doctoral Dissertation, Claremont Graduate University: Claremont, CA.

5. Klackl, J., et al., Who’s to blame? Oxytocin promotes nonpersonalistic attributions in response to a trust betrayal. Biological Psychology, 2012. 92: p. 387-394.

6. Ebert, A., et al., Modulation of interpersonal trust in borderline personality disorder by intranasal oxytocin and childhood trauma. Social Neuroscience, 2013. 8: p. 305-313.