
Behavioral Altruism is an Unhelpful Scientific Category

Altruism has been a major topic in evolutionary biology since Darwin himself, but the word altruism did not appear even once in Darwin’s published writings.[1] Its absence from Darwin’s treatment of the subject is hardly surprising: Altruism had appeared in print for the first time only eight years before The Origin of Species. Its coiner was a Parisian philosopher named Auguste Comte.

Capitalizing on the popularity he had already secured for himself among liberal intellectuals in both France and England, Comte argued that Western civilization needed a complete intellectual renovation, starting from the ground up. Not one to shrink from big intellectual projects, Comte set out to do this revamping himself, resulting in four hefty volumes. Comte’s diagnosis: People cared too much for their own welfare and too little for the welfare of humanity. The West, Comte thought, needed a way of organizing society that would evoke less égoïsme and inspire more altruisme.

Comte saw a need for two major changes. First, people would need to throw out the philosophical and religious dogma upon which society’s political institutions had been built. In its place, he proposed that we seek out new principles, grounded in the facts emerging from the young sciences of the human mind (such as the fast-moving field of phrenology), human society (sociology), and animal behavior (biology).

Second, people would need to replace Christianity with a new religion in which humanity, rather than the God of the Abrahamic religions, was the object of devotion. In Comte’s new world, the 12-month Gregorian calendar would be replaced with a scientifically reformed calendar consisting of 13 months (each named after a great thinker from the past—for example, Moses, Paul the Apostle, Gutenberg, Shakespeare, and Descartes) of 28 days each (throw in a “Day of the Dead” at the end and you’ve got your 365-day year). Also, the Roman Catholic priesthood would be replaced with a scientifically enlightened, humanity-loving “clergy” with Comte himself—no joke—as the high priest.

Comte’s proposals for a top-down reshaping of Western society didn’t get quite the reception he was hoping for (though they caught on better than you might think: If you’re ever in Paris or Rio, pay a visit to the Temples of Humanity that Comte’s followers founded around the turn of the 20th century). In England especially, the scientific intelligentsia’s response was frosty. On the advice of his friend Thomas Huxley, Darwin also steered clear of all things Comtean, including altruism.

Nevertheless, altruism was in the air, and its warm reception among British liberals at the end of the 19th century is how the word percolated into everyday language. It’s also why the word is still in heavy circulation today. The British philosopher Herbert Spencer, an intellectual rock star of his day, was a great admirer of Comte, and he played a major role in establishing a long-term home for altruism in the lexicons of biology, social science, and everyday discourse.[2] Spencer used the term altruism in three different senses—as an ethical ideal, as a description of certain kinds of behavior, and as a description of a certain kind of human motivation. (He wouldn’t have understood how to think about it as an evolutionary concept.)[3]

Here, I want to look at Spencer’s second use of the word altruism—as a description of a class of behaviors—because I think it is a deeply flawed scientific concept, despite its wide usage. At the outset, I should note that as a Darwinian concept—an evolutionary pathway by which natural selection can create complex functional design by building traits in individuals that cause them to take actions that increase the rate of replication of genes locked inside their genetic relatives’ gonads—altruism has none of the conceptual problems that behavioral altruism has.

By altruism in this second, behavioral sense, Spencer meant “all action which, in the normal course of things, benefits others instead of benefiting self.”[4] A variant of this definition is embraced today by many economists and other social scientists, who use the term behavioral altruism to classify all “costly acts that confer benefits on other individuals.”[5] Single-celled organisms are, in principle, as capable of Spencerian behavioral altruism as humans are. Social scientists who subscribe to the behavioral definition of altruism have applied it to a wide range of human behaviors. Have you ever jumped into a pool to save a child or onto a hand grenade to spare your comrades? Donated money to your alma mater or a charity? Given money, a ride, or directions to a stranger? Served in the military? Donated blood, bone marrow, or a kidney? Reduced, re-used, or recycled? Adopted a child? Held open a door for a stranger? Shown up for jury duty? Volunteered for a research experiment? Taken care of a sick friend? Let someone in front of you in the check-out line at the grocery store? Punished or scolded someone for breaking a norm or for being selfish? Taken found property to the lost and found? Tipped a server in a restaurant in a city you knew you’d never visit again? Pointed out when a clerk has undercharged you? Lent your fondue set or chain saw to a neighbor? Shooed people away from a suspicious package at the airport? If so, then you, according to the behavioral definition, are an altruist.[6]

Some economists seek to study behavioral altruism in the laboratory with experimental games in which researchers give participants a little money and then measure what they do with it. The Trust Game, which involves two players, is a great example. We can call the first actor the Investor because he or she is given a sum of money—say, $10—by the experimenter, some or all of which he or she can send to the other actor, whom we might call the Trustee. The Investor knows that every dollar he or she entrusts to the Trustee gets multiplied by a fixed amount—say, 3—so if the Investor transfers $1 to the Trustee, the Trustee ends up with $3 more in his or her account as a result of that $1 transfer. Likewise, the Investor knows that the Trustee will subsequently decide whether to transfer some money back. Under these circumstances, according to some experimental economists, if the Investor sends money to the Trustee, it is “altruistic” because it is a “costly act that confers an economic benefit upon another individual.”[7] But the lollapalooza of behavioral altruism doesn’t stop there: It’s also altruistic, per the behavioral definition that economists embrace, if the Trustee transfers money back to the Investor. Here, too, one person is paying a cost to provide a benefit to another person.
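(For readers who like to see the arithmetic spelled out, here is a minimal sketch of the payoff structure just described. The $10 endowment and the tripling of transfers come from the example above; the particular amounts sent and returned, and the function itself, are illustrative assumptions of mine rather than the design of any specific experiment.)

```python
def trust_game(endowment=10, multiplier=3, sent=4, returned=6):
    """Payoffs for one round of the Trust Game as described above.

    The Investor starts with `endowment`, sends `sent` to the Trustee,
    and the experimenter multiplies the transfer by `multiplier`.
    The Trustee then returns `returned` (anything from 0 up to
    multiplier * sent) to the Investor.
    """
    assert 0 <= sent <= endowment
    assert 0 <= returned <= multiplier * sent
    investor_payoff = endowment - sent + returned  # the trusting transfer is costly up front
    trustee_payoff = multiplier * sent - returned  # returning money is costly to the Trustee
    return investor_payoff, trustee_payoff

# Illustrative round: the Investor sends $4 (which becomes $12 in the
# Trustee's account) and the Trustee returns $6.
print(trust_game(sent=4, returned=6))  # (12, 6)
```

Both transfers in that little sketch meet the behavioral definition: the Investor’s $4 is a cost that benefits the Trustee, and the Trustee’s $6 is a cost that benefits the Investor.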

Notice that motives don’t matter for behavioral altruism. (To social psychologists like Daniel Batson, altruism is a motivation to raise the welfare of another individual, pure and simple. Surprising as it might seem, this is also, in fact, a conceptually viable scientific category. But that’s another blog post.) All that matters for a behavior to be altruistic is that it entails costs to actors and benefits to recipients. Donating a kidney or giving blood is clearly costly to the donor and beneficial to the recipient, but even when you hold a door open for a stranger, you pay a cost (a few seconds of your time and a calorie or so of physical effort) to deliver a benefit to someone else. By this definition, even an insurance company’s agreement to cover the MRI for your (possibly) torn ACL qualifies: After all, the company pays a cost (measured in the thousands of dollars) to provide you with a benefit (magnetic confirmation either that you need surgery or that your injury will probably get better after a little physical therapy).

But a category that lumps together recycling, holding doors for strangers, donating kidneys, serving in the military, and handing money over to someone in hopes of securing a return on one’s investment—simply because they all involve costly acts that confer benefits on others—is a dubious scientific category. Good scientific categories, unlike “folk categories,” are natural kinds—as Plato said, they “carve nature at its joints.” Rather than simply sharing one or more properties that are interesting to a group of humans (for example, social scientists who are interested in a category called “behavioral altruism”), they should share common natural essences, common causes, or common functions. Every individual molecule with the chemical formula H2O is a member of a natural kind—water—because all such molecules share the same basic causes (elements with specific atomic numbers that interact through specific kinds of bonds). These deep properties are the causes of all molecules of H2O that have ever existed and that ever will exist. Natural kinds are not just depots for things that have some sort of gee-whiz similarity.[8]

If behavioral altruism is a natural kind, then knowing that a particular instance of behavior is “behaviorally altruistic” should enable me to draw some conclusions about its deep properties, causes, functions, or effects. But it doesn’t. All I know is that I’ve done something that meets the definition of behavioral altruism. Even though I have, on occasion, shown up for jury duty, held doors open for strangers, received flu shots, loaned stuff to my neighbors, and even played the trust game, simply knowing that they are all instances of “behavioral altruism” does not enable me to make any non-trivial inferences about the causes of my behavior. By the purely behavioral definition of altruism, I could show up for jury duty to avoid being held in contempt of court, I could give away some old furniture because I want to make some space in my garage, and I could hold the door for someone because I’m interested in getting her autograph. The surface features that make these three behaviors “behaviorally altruistic” are, well, superficial. Knowing that they’re behaviorally altruistic gives me no new raw materials for scientific inference.

So if behavioral altruism isn’t a natural kind, then what kind of kind is it? Philosophers might call it a folk category, like “things that are white,” or “things that fit in a bread box,” or “anthrosonic things,” which comprise all of the sounds people can make with their bodies—for example, hand-claps, knuckle- and other joint-cracking, the lub-dub of the heart’s valves, the pitter-patter of little feet, sneezes, nose-whistles, coughs, stomach growls, teeth-grinding, and beat-boxing. Anthrosonics gets points for style, but not for substance: My knowing that teeth-grinding is anthrosonic does not enable me to make any new inferences about the causes of teeth-grinding because anthrosonic phenomena do not share any deep causes or functions.

Things that are white, things that can fit in a bread box, anthrosonics, things that come out of our bodies, things we walk toward, et cetera–and, of course, behavioral altruism–might deserve entries in David Wallechinsky and Amy Wallace’s entertaining Book of Lists[9], but not in Galileo’s Book of Nature. They’re grab-bags.

~

[1] Dixon (2013).
[2] Spencer (1870-1872, 1873, 1879).
[3] Dixon (2005, 2008, 2013).
[4] Spencer (1879), p. 201.
[5] Fehr and Fischbacher (2003), p. 785.
[6] See, for instance, Silk and Boyd (2010); Fehr and Fischbacher (2003); Gintis, Bowles, Boyd, and Fehr (2003).
[7] Fehr and Fischbacher (2003), p. 785.
[8] Slater and Borghini (2011).
[9] Wallechinsky and Wallace (2005).

REFERENCES

Dixon, T. (2005). The invention of altruism: Auguste Comte’s Positive Polity and respectable unbelief in Victorian Britain. In D. M. Knight & M. D. Eddy (Eds.), Science and beliefs: From natural philosophy to natural science, 1700-1900 (pp. 195-211). Hampshire, England: Ashgate.

Dixon, T. (2008). The invention of altruism: Making moral meanings in Victorian Britain. Oxford, UK: Oxford University Press.

Dixon, T. (2013). Altruism: Morals from history. In M. A. Nowak & S. Coakley (Eds.), Evolution, games, and God: The principle of cooperation (pp. 60-81). Cambridge, MA: Harvard University Press.

Fehr, E., & Fischbacher, U. (2003). The nature of human altruism. Nature, 425, 785-791.

Gintis, H., Bowles, S., Boyd, R., & Fehr, E. (2003). Explaining altruistic behavior in humans. Evolution and Human Behavior, 24, 153-172.

Silk, J. B., & Boyd, R. (2010). From grooming to giving blood: The origins of human altruism. In P. M. Kappeler & J. B. Silk (Eds.), Mind the gap: Tracing the origins of human universals (pp. 223-244). Berlin: Springer Verlag.

Slater, M. H., & Borghini, A. (2011). Introduction: Lessons from the scientific butchery. In J. K. Campbell, M. O’Rourke, & M. H. Slater (Eds.), Carving nature at its joints: Natural kinds in metaphysics and science (pp. 1-31). Cambridge, MA: MIT Press.

Spencer, H. (1870-1872). Principles of psychology. London: Williams and Norgate.

Spencer, H. (1873). The study of sociology. London: H. S. King.

Spencer, H. (1879). The data of ethics. London: Williams and Norgate.

Wallechinsky, D., & Wallace, A. (2005). The book of lists: The original compendium of curious information. Edinburgh, Scotland: Canongate Books.

Thinking Outside the Box: The Power of Apologies in Cooperative Agreements

Two kids I know—let’s call them Jeff and Mimi—wanted a cat, so they begged their reluctant parents for months. Eventually the parents gave in, but they forced the kids into an agreement: “The cat box will need to be cleaned every day. We expect you to alternate days. If you miss your day, we’ll take fifty cents out of your allowance and give it to your sister/brother. If you like, you can think of the fifty-cent transfer as a ‘fine’ that you pay to your sister/brother for your failure to hold up your end of the agreement.” The kids agreed and Corbin the cat was purchased (or, rather, obtained from the Humane Society). The litter box-cleaning arrangement went well for three whole days, but compliance started to wane on day four. Hostilities began to simmer. Jeff became reluctant to clean the litter box on “his” days because of Mimi’s failures to keep up her end of the bargain, and vice versa.

“Recriminations” began to pile up.

After two weeks, the agreement was declared dead. The parents became the chief cleaners of the litter box. The father continues to wonder whether he should have read more game theory before even entertaining the idea of getting a cat.

Cooperative agreements like these tend to be dicey propositions: What’s Mimi supposed to do if Jeff fails to clean the box on his appointed day? Should she view it as a sign that Jeff no longer intends to honor the agreement (in which case she should stop honoring it herself, notwithstanding the $.50 fine that Jeff had to pay to her), or should she view it as a one-off aberration (in which case she might want to continue honoring the agreement)? It’s not clear, and that lack of clarity can create problems for the stability of such agreements.

Luis Martinez-Vaquero and his colleagues addressed this issue in a recent article that caught my eye. The paper is complex, but it’s full of interesting results (some quite counter-intuitive) about when strategic agents should be expected to make commitments, honor commitments, retaliate when those commitments are broken, and so forth. I suggest you give it a read if you are at all interested in these issues. But what really grabbed my interest was the authors’ exploration of the idea that the key to getting such agreements to “work” (by which I mean, “become evolutionarily stable*”) is to build in an apology-forgiveness system: After a failure to cooperate, the violator pays an additional cost, over and above the fine specified in the agreement itself, which can persuade the defected-against partner to persist in the agreement even though it has been violated.

The researchers’ results enabled them to be surprisingly precise about the conditions under which highly cooperative strategies that used apologies and forgiveness in this way would evolve*: The costs of cooperating (cleaning the litter box) must be lower than the cost of the apology (the amount of money the deal-breaker voluntarily passes to his/her sibling), which in turn must be lower than the fine for non-compliance that is specified within the agreement itself (fifty cents). When those conditions are in place, you can get the evolution of actors who like to make agreements, accept agreements, honor agreements, and forgive breaches of agreements so that cooperation can be maintained even when those agreements are occasionally violated due to cello lessons that run late, or unscheduled trips to the emergency room, or geometry exams that simply must be studied for.
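(If you like seeing conditions like that written down concretely, here is a tiny sketch of the ordering as I have summarized it, not a reproduction of the paper’s actual model. Only the fifty-cent fine comes from the litter-box story; the cost of cleaning and the size of the apology payment are numbers I have made up purely for illustration.)

```python
def apology_supports_cooperation(cost_of_cooperating, apology_cost, fine):
    """Check the ordering described above (a simplified reading of the
    result, not the full model): cooperating must be cheaper than
    apologizing, and the apology must be cheaper than the fine written
    into the agreement."""
    return cost_of_cooperating < apology_cost < fine

# Litter-box example: cleaning "costs" roughly 10 cents of effort and the
# voluntary apology payment is 25 cents (both assumed figures); the fine
# specified in the agreement is 50 cents.
print(apology_supports_cooperation(0.10, 0.25, 0.50))  # True
```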

I’ve written here and there (and here) about the value of apologies and compensation in promoting forgiveness, but the results of Martinez-Vaquero and colleagues suggest (to me, anyway) that forgiveness-inducing gestures such as apologies and offers of compensation can come to possess a sort of fractal quality: People often overcome defections in their cooperative relationships through costly apologies, which promote forgiveness. Throughout the history of Western Civilization, various Leviathans have capitalized on the conflict-minimizing, cooperation-preserving power of costly apologies by institutionalizing these sorts of innovations within contracts and other commitment devices that specify fines and other sanctions if one party or the other fails to perform. But after the fine has been paid for failure to perform, what’s to keep the parties motivated to continue on with their agreement? Martinez-Vaquero et al.’s paper suggests that a little “apology payment” added on top of the fine might just do the trick. Apologies within apologies.

By the way, Jeff and Mimi’s parents are reviewing the terms of the old agreement later this week. Perhaps it can be made to work after all.

Reference:

Martinez-Vaquero, L. A., Han, T. A., Pereira, L. M., & Lenaerts, T. (2015). Apology and forgiveness evolve to resolve failures in cooperative agreements. Scientific Reports, 5, 10639. doi:10.1038/srep10639


 

*By which I mean “come to characterize the behavior of individuals in the population via a cultural learning mechanism that causes individuals to adopt the strategies of their most successful neighbors.”

 

The Real Roots of Vengeance and Forgiveness

Yesterday, somebody pointed me to this article, which I wrote a few years ago for a magazine called Spirituality and Health. I had not realized until yesterday that the magazine had made the article available on the web. Even though it’s several years old, I still like the way it reads. In fact, it’s as decent a précis of my book Beyond Revenge as you’re going to find anywhere.

I’m not exactly a regular reader of this particular magazine, but their editorial staff have taken an interest in some of my research and writing over the years, including some of our (by which I mean, my and my collaborators’) work on forgiveness and gratitude, for which I have always been appreciative. The founder of the magazine, whom I was fortunate enough to know, was T. George Harris–one of the most colorful figures in the history of 20th-century magazine publishing. Some readers of this blog might know of George’s work in helping to turn Psychology Today into the behemoth it eventually became, but there is much more to George’s personal and professional life that’s worth knowing about.

George died last year at the age of 89. I found two really nice chronicles of his life–this one from the local San Diego paper (George was a La Jolla resident), and this one written by Stephen Kiesling, who is not only the editor-in-chief at Spirituality and Health but was also one of George’s closest friends and fondest admirers.

 

The Trouble with Oxytocin, Part III: The Noose Tightens for the Oxytocin→Trust Hypothesis

[Image: Might be time to see about having that Oxytocin tattoo removed…]

When I started blogging six months ago, I kicked off Social Science Evolving with a guided tour of the evidence for the hypothesis that oxytocin increases trusting behavior in the trust game (a laboratory workhorse of experimental economics). The first study on this topic, authored by Michael Kosfeld and his colleagues, created a big splash, but most of the studies in its wake failed to replicate the original finding. I summarized all of the replications in a box score format (I know, I know: Crude. So sue me.) like so:

[Image: Box score, December 2013]

By my rough-and-ready calculations, at the end of 2013 there were about 1.25 studies’ worth of successful replications of the original Kosfeld results, but about 3.75 studies’ worth of failed replications (see the original post for details). Even six months ago, the empirical support for the hypothesis that oxytocin increases trust in the trust game was not looking so healthy.

I promised that I’d update my box score as I became aware of new data on the topic, and a brand new study has just surfaced. Shuxia Yao and colleagues had 104 healthy young men and women play the trust game with four anonymous trustees. One of those four trustees (the “fair” trustee) returned enough of the subject’s investment to cause the subject and the trustee to end up with equal amounts of money; the other three trustees (designated as the “unfair players”) declined to return any money to the subject at all.

Next, subjects were randomly assigned to receive either the standard dose of intranasal oxytocin, or a placebo. Forty-five minutes later, participants were told that they would receive an instant message from the four players to whom they had entrusted money during the earlier round of the trust game. The “fair” player from the earlier round, and one of the “unfair” players, sent no message at all. The second unfair player sent a cheap-talk sort of apology, and the third unfair player offered to make a compensatory monetary transfer to the subject that would make their payoffs equal.

Finally, study participants took part in a “surprise” round of the trust game with the same four strangers. The researchers’ key question was whether the subjects who had received oxytocin would behave in a more trusting fashion toward the four players from Round 1 than the participants who received a placebo instead.

They didn’t.

In fact, the only hint that oxytocin did anything at all to participants’ trust behaviors was a faint statistical signal that oxytocin caused female participants (but not male participants) to treat the players from Round 1 in a less trusting way. If anything, oxytocin reduced women’s trust. I should note, however, that this females-only effect for oxytocin was obtained using a statistically questionable procedure: The researchers did not find a statistical signal of an interaction between oxytocin and subjects’ sex, and without such a signal, their separation of the men’s and the women’s data for further analyses really wasn’t licensed. But regardless, the Yao data fail to support the idea that oxytocin increases trusting behavior in the trust game.

It’s time to update the box score:

[Image: Box score, June 2014]

In the wake of the original Kosfeld findings, 1.25 studies’ worth of results have accumulated to suggest that oxytocin does increase trust in the trust game, but 4.75 studies’ worth of results have accumulated to suggest that it doesn’t.

It seems to me that the noose is getting tight for the hypothesis that intranasal oxytocin increases trusting behavior in the trust game. But let’s stay open-minded a while longer. As ever, if you know of some data out there that I should be including in my box score, please send me the details. I’ll continue updating from time to time.

Of Crackers and Quackers: Human-Duck Social Interaction is Regulated by Indirect Reciprocity (A Satire)

[Image: a mallard duck]

Watching the ducks on a neighborhood pond can be an entertaining and rewarding pastime. I myself, along with my nine-year-old co-investigator, have taken daily opportunities to feed some ducks on a nearby pond over the past several months. In doing so, we not only had fun but also managed to conduct some urban science that led us to a new scientific discovery: Mallards (Anas platyrhynchos L.) engage in indirect reciprocity with humans. Scientists have known for decades, of course, that indirect reciprocity was critical to the evolution of human social interaction in large-scale societies, but we believe we are the first to identify indirect reciprocity at work in human-duck social interaction.

Here’s how we made this discovery.

On random days, we take a soda cracker along with us to feed to a single lucky duck. On the other days, we take our walks without a cracker. What my young co-investigator and I have noticed is that on cracker days, after we’ve fed the cracker to the first duck that approaches us (the “focal duck,” which we also call “the recipient”), other ducks (which we call “entertainment ducks,” or “indirect reciprocators”) appear to take notice of our generosity toward the recipient. Almost immediately, the indirect reciprocators start to perform all sorts of entertaining behaviors: They swim toward us eagerly, they waddle up to us enthusiastically, they stare at us with their dead, obsidian eyes, they quack imploringly. It’s all very amusing and my co-investigator and I have a great time. Take note of the fact that we always bring only a single cracker with us on cracker days. As a result, the indirect reciprocators have absolutely nothing to gain from the entertainment they provide. In fact, they actually incur costs (in the form of energy expended and lost foraging time) when they do so. Thus, their indirect reciprocity behavior is altruistic.

Our experience with the indirect reciprocators is very different on non-cracker days. If a focal duck comes up to us on a non-cracker day, there’s just no cracker to be had, no matter how charming or insistent the request. Dejected, the focal duck typically waddles or paddles away within a few seconds. Now, what do you suppose the entertainment ducks do after we refuse to feed the focal duck? That’s right. They withhold their entertainment behaviors. This pattern, of course, is exactly as one would expect if the entertainment ducks were regulating their entertainment behaviors according to the logic of indirect reciprocity.

Theorists typically assume that the computational demands for indirect reciprocity to evolve are quite extensive. For instance, indirect reciprocators need to possess computational machinery that enables them to acquire information about the actions of donors—either through direct sensory experience of donor-recipient interactions, or (more rarely) language-based gossip, or (even more rarely) social information stored in an external medium, such as written records or the reputational information that’s often available in online markets. Indirect reciprocators also need to be able to tag donors’ actions toward recipients as either “beneficial” or “non-beneficial,” store that social information in memory, and then feed that information to motivational systems that can produce the indirect reciprocity behaviors that will serve as rewards to donors. However, the indirect reciprocity we’ve identified in our mallards suggests that those computational requirements may be fulfilled in vertebrates more commonly than theorists originally thought.

Neither of us could figure out for sure whether the focal ducks were transmitting information about our generosity/non-generosity to the indirect reciprocators through verbal (or non-verbal) communication, but we think it is unlikely. Instead, we suspect that the indirect reciprocators were directly observing our behavior and then using that sensory information to regulate their indirect reciprocity behavior.

In support of this interpretation, we note that on several cracker days, it was not only other ducks that engaged us as indirect reciprocators, but individuals from two different species of turtles (which we believe to be Trachemys scripta and Apalone ferox) as well. The turtles’ indirect reciprocity behaviors, of course, were different from those of the ducks, due to differences in life history and evolutionary constraints: The turtles didn’t reward our generosity through waddle-based or quack-based rewarding, but rather, by (a) rooting around in the mud where the focal duck had received the cracker earlier, and (b) trying to grab the focal duck by the leg and drag it to a gruesome, watery death. The fact that turtles engaged in their own forms of indirect reciprocity suggests that they, at least, were obtaining information about our generosity via direct sensory experience, rather than through duck-turtle communication or written or electronic records: It is widely accepted, after all, that turtles don’t understand Mallardese or use eBay.

The involvement of turtles as indirect reciprocators also suggests that indirect reciprocity might be even more prevalent–and more complex–than even we originally suspected. Indirect reciprocity evolves to regulate interactions not only within species (viz., Homo sapiens) and between species (viz., between Homo sapiens and Anas platyrhynchos L., as we have documented here), but also among species (Homo sapiens as donors, Anas platyrhynchos L. as recipients, and Trachemys scripta and Apalone ferox as indirect reciprocators).

Finally, we should point out that although our results are consistent with the indirect reciprocity interpretation that we have proffered here, other interpretations are possible as well. We look forward to new work that can arbitrate between these two accounts (and perhaps others). We also see excellent opportunities for simulation studies that can shed light on the evolution of indirect reciprocity involving interactions between two or even three different species, which my co-Investigator thinks she might pursue after she has mastered long division.

h/t Eric P.

Why Do Honor Killings Defy the First Law of Homicide? And Will Smaller Families Lead to Fewer Of Them?

Few categories of human rights violations more deeply scandalize the liberal (with a little-L) moral sensibility than honor killings do. Reliable numbers are hard to come by, but by most credible accounts it seems likely that several thousand Muslim women each year (and more than a few men) are stoned, burned, hanged, strangled, beheaded, stabbed, or shot to death for the sins of getting raped, falling in love, or dressing immodestly. But to anyone who thinks about human behavior from an evolutionary point of view, honor killings are not just morally outrageous: They’re also really puzzling.

As Martin Daly and Margo Wilson documented in their marvelous book Homicide, killers are very rarely the genetic relatives of their victims. Instead, they’re most often strangers, or rivals, or cuckolded lovers (who, of course, are not each other’s kin even if married—at least, not in the sense that matters to natural selection). Indeed, the typically low level of kinship between the victims of homicides and the people who kill them is so predictable that we could get away with calling it “The First Law of Homicide.” When two genetic relatives are involved in a homicide, it’s usually either as co-victims or co-perpetrators, not as victim and perpetrator.

In a sense, a general reluctance to harm or kill one’s genetic relatives is not exactly breaking news. We’ve understood since William Hamilton’s 1963 and 1964 papers that natural selection creates organisms that appear designed to maximize their inclusive fitness (which, for any given gene, incorporates the reproductive success of the individual in whom that gene is physically located, as well as the reproductive success of other individuals who are carrying copies of it around) rather than their simple direct fitness. Genes “want” to maximize the total number of copies of themselves that are floating around in the world, even if some of those copies are located in other individuals’ gonads. The principle of kin selection virtually guarantees that we’re walking around with instincts that restrain us from harming our relatives, even when they’ve irritated us. To be clear, I’m not saying people never kill their kin (mental illness is a real wild card here), but the fitness disincentives of doing so were so high as our psychology was evolving that the perceived incentives to do so now have to be very high indeed.

Which is what makes honor killings so puzzling. In a recent article, Andrzej Kulczycki and Sarah Windle summarized data on the circumstances behind more than 300 honor killings across Northern Africa and the Middle East. What jumps off the page when you look at their data is how flagrantly honor killings flout the First Law of Homicide: About three-quarters of honor killings are carried out by family members of the victim. To be specific, the victims’ brothers carry out 29% of them, fathers (and, to a much lesser extent, mothers) carry out about 25%, and “other male relatives” carry out an additional 19% of them. (Of the remaining 25%, virtually all are carried out by the victims’ husbands/ex-husbands.)

I’m really interested in that 75% that violate the First Law of Homicide. For the perpetrators of honor killings to override their intuitive aversions to killing their own daughters or sisters, the perceived costs of “dishonor” must be very high indeed. We can’t precisely measure the fitness value of honor for someone who lives in a so-called culture of honor, of course, but the link between fitness and honor is undeniable. If you live in an honor culture, your honor determines your (and your children’s) job prospects, marriage prospects, ability to recruit help from neighbors, ability to secure a loan, and protection against those who would otherwise do you harm. Honor is an insurance policy, a social security check, and a glowing letter of recommendation rolled into one bundle. The fitness costs of tarnished honor in an honor culture can be steep.

One of the things I came to appreciate about honor while doing research for one of my books is that honor is a sacred commodity. It doesn’t follow the laws we expect actual physical stuff to obey, or the normal laws of economics, or even the normal rules that govern our everyday psychology. It follows the laws of Sacred Things. If you feel sad one day, you can be pretty sure that the feeling won’t last forever. Dishonor doesn’t work like that. Dishonor doesn’t wash off or fade away with time. Dishonor has to be purged or atoned for. More importantly for my argument here, dishonor does not dilute. The dishonor that a “dishonorable” behavior creates for a family is not like a fixed quantity of scarlet paint that can be used to make only a finite number of scarlet letters. When a young woman “dishonors” her family, there’s enough dishonor to thoroughly cover every one of her brothers and sisters, no matter how many brothers and sisters she has.

There’s an interesting prediction waiting in the wings. If I’m right that dishonor does not dilute, then the perceived fitness-associated costs of a single act of dishonor will be larger for a father and mother with many children than for a father and mother with only a few children. This has implications for reducing honor killings. Let me illustrate with a thought experiment.

The Costs of Dishonor to a Father Are Higher in Large Families

Say I am a father with nine children and one of my daughters has done something (or, more likely, has had something done to her) that has brought dishonor upon herself and each of her eight siblings. (Believe me, I am more appalled by having to write sentences like these than you are by having to read them, but I can’t come up with a better way to think through these issues than to try to step into the shoes of someone who is actually factoring honor-related concerns into their social decision-making.) As the father of these nine children, I have had my fitness reduced by 9d, because each of my children will suffer an honor-related fitness cost of d. (It might be better to quantify the hit to my fitness as 9d × 0.5 = 4.5d, because my genetic relatedness to my children with respect to a rare allele that I possess is 0.5 rather than 1.0, but that won’t change anything in what’s to come. Can we please agree to work with 9d so as to make the math prettier?) So, if I am a father of nine children, and I can restore my family’s honor by murdering my dishonored daughter, I can recover 8d units of fitness (by restoring the damaged honor of my other eight children), and it costs me (I know, the thought sickens me as well) the fitness decrement I suffer through murdering one of my offspring.

If, on the other hand, I have only two children, then the perceived fitness cost of my daughter’s dishonor is 2d (a cost of d is imputed to each of my two children), and I’d be able to recover only 1d of fitness (for my remaining, unmurdered child) by murdering the dishonored daughter. So, for a father with only two children, the calculus is not so clear: Am I better off in the long run to have two children whose honor is tarnished, or only one child whose honor is restored? For any plausible value of d, it’s hard to imagine that the decision-making scales would tilt in favor of killing the dishonored daughter if doing so would leave me with only one child. I’m betting that the father of two will stay his hand under circumstances in which the father of nine might not.
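(For anyone who wants the back-of-the-envelope comparison in one place, here is a bare-bones sketch of the toy model I have been describing. The honor cost d and the cost I assign to losing a child are placeholders invented purely for illustration, not estimates of any real fitness quantities.)

```python
def perceived_net_change(n_children, d, cost_of_lost_child):
    """Perceived fitness change, in the toy model above, if a parent
    'restores the family's honor' in the grim way the post describes.

    Before: all n_children carry the dishonor cost d (dishonor does not dilute).
    After:  the remaining n_children - 1 children have their honor restored,
            but one child is lost entirely.
    """
    honor_recovered = (n_children - 1) * d
    return honor_recovered - cost_of_lost_child

# With d = 1 unit and an entirely made-up cost of 5 units for losing a child:
print(perceived_net_change(9, d=1, cost_of_lost_child=5))  # 3: large family, the calculus tilts
print(perceived_net_change(2, d=1, cost_of_lost_child=5))  # -4: small family, the hand is stayed
```

The point of the sketch is only the comparison between family sizes: the sign flips as the number of siblings whose honor can be “restored” shrinks.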

If I’m right about this, then a demographic shift toward smaller families in developing societies could eventually help to solve the problem of honor killings. I couldn’t find any direct evidence to support this prediction, but Manuel Eisner and Lana Ghuneim recently published a study in which they surveyed 856 Jordanian adolescents from 14 different schools to examine the predictors of their attitudes toward honor killings. They found that even when they controlled for the students’ sex (male vs. female), their religion (Muslim vs. non-Muslim), whether their mothers worked outside of the home (a good proxy for modernization), and the parents’ educational levels (also a good proxy for modern thinking), children with four or more siblings had more favorable attitudes toward honor killings than did children with three or fewer siblings. Not an exact test of my prediction, but to the extent that kids adopt their parents’ views, it seems to me that these results are at least tantalizingly consistent.

Do the human rights groups that want to reduce honor killings and other kinds of honor-related violence around the world ever talk about family size as a truly exogenous (and, in principle, modifiable) cause of honor killings? People are pinning their hopes for solving so many other problems around the world on reductions in family size, so perhaps I’m not being too pie-in-the-sky to add “reductions in honor-related violence” to that list of “Ways In Which We’d Be Better Off If People Had Fewer Kids.” As families shrink, I’m guessing that spared lives become subjectively more valuable than restored family honor.

A Refreshingly Human-Sounding Public Radio Interview: Yours Truly on Morality, Revenge, Forgiveness and Evolution

I have a friend who won’t listen to public radio in the U.S. It’s not that he objects to public radio programming or public radio values: It’s just that he doesn’t like the sonic quality of public radio programs. In the United States, at least, public radio is very heavily produced. I generally cannot be on a radio show that is syndicated to NPR (National Public Radio) stations unless I’m willing to schlep myself over to an ISDN studio because NPR requires “that noiseless ISDN sound.” Turn your radio right now to an NPR station and you’ll get a decent sampling of what I’m describing. Sometimes, I like that sound, but I must agree with my friend. It does sound rather sterile.

Ever since my friend mentioned this to me, I have been struck by how slick I generally sound (relative to real life) when I am on public radio shows in the United States. It’s not always a kind of slick that I like. Some of it has to do with the ISDN sound, but some of it also has to do with the editing after the interview is finished. Everyone involved ends up, I think, sounding smarter and more eloquent than they did during the interview itself. That’s not always a bad thing–nobody wants to sound like an idiot if he or she can help it–but as a listener, all of that sweet perfection can make you wonder if you’re at risk of getting a cavity.

I therefore found my recent interview with Charlotte Graham from Radio New Zealand, for her show Summer Nights, quite refreshing–particularly (though not only) from an aural point of view. It’s really just an uninterrupted and unedited phone call between me in Miami (at 9:00 PM my time) and Charlotte in New Zealand (where it was 3:00 in the afternoon of the following day). The phone line wasn’t, to say the least, ISDN quality, and both of us (though I to a rather greater extent than Charlotte) exhibited a healthy dose of the errors and disfluencies that characterize most people’s real conversations. Even so, we managed to cover some decent conceptual territory on evolution, culture, morality, revenge, and forgiveness.

Here’s a link to the interview. Hope you enjoy it.

I’m feeling Edge-y about Human Evolutionary Exceptionalism

Unless your Internet has been broken for the past few days, by now you’re probably aware that John Brockman, via his Edge.org web site, has published the responses to his Annual Edge Question of the Year. For more than 15 years, John has been inviting people who think and write about science and the science-culture interface to respond to a provocative question. The question for 2014 was “What scientific idea is ready for retirement?” Brockman explains:

Science advances by discovering new things and developing new ideas. Few truly new ideas are developed without abandoning old ones first. As theoretical physicist Max Planck (1858-1947) noted, “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.” In other words, science advances by a series of funerals. Why wait that long? What scientific idea is ready for retirement? Ideas change, and the times we live in change. Perhaps the biggest change today is the rate of change. What established scientific idea is ready to be moved aside so that science can advance?

I received an invitation to participate this year, and it didn’t take me long to settle on a topic. Something has been bugging me about the application of evolutionary thinking to human behavior for a while, and the Question of the Year was the perfect place to condense my thoughts into a 1000-word essay. What scientific idea, in my opinion, is ready for retirement? I nominated Human Evolutionary Exceptionalism. Here’s how I framed the problem:

Humans are biologically exceptional. We’re exceptionally long-lived and exceptionally cooperative with non-kin. We have exceptionally small guts and exceptionally large brains. We have an exceptional communication system and an exceptional ability to learn from other members of our species. Scientists love to study biologically exceptional human traits such as these, and that’s a perfectly reasonable research strategy. Human evolutionary exceptionalism, however—the tendency to assume that biologically exceptional human traits come into the world through exceptional processes of biological evolution—is a bad habit we need to break. Human evolutionary exceptionalism has sown misunderstanding in every area it has touched.

In my essay, I went on to describe examples of how human evolutionary exceptionalism has muddled the scientific literatures on niche construction, major evolutionary transitions, and cooperation. You can read my entire essay over here, but for this blog I’m reproducing what I had to say about the major evolutionary transitions—for reasons that I will make clear presently. The most critical part below is bolded and italicized. Here’s what I wrote:

Major Evolutionary Transitions. Over the past three billion years, natural selection has yielded several pivotal innovations in how genetic information gets assembled, packaged, and transmitted across generations. These so-called major evolutionary transitions have included the transition from RNA to DNA; the union of genes into chromosomes; the evolution of eukaryotic cells; the advent of sexual reproduction; the evolution of multicellular organisms; and the appearance of eusociality (notably, among ants, bees, and wasps) in which only a few individuals reproduce and the others work as servants, soldiers, or babysitters. The major evolutionary transitions concept, when properly applied, is useful and clarifying.

It is therefore regrettable that the concept’s originators made category mistakes by characterizing two distinctly human traits as outcomes of major evolutionary transitions. Their first category mistake was to liken human societies (which are exceptional among the primates for their nested levels of organization, their mating systems, and a hundred other features) to those of the eusocial insects because the individuals in both kinds of societies “can survive and transmit genes . . . only as part of a social group.”…

Their second category mistake was to hold up human language as the outcome of major evolutionary transition. To be sure, human language, as the only communication system with unlimited expressive potential that natural selection ever devised, is biologically exceptional. However, the information that language conveys is contained in our minds, not in our chromosomes. We don’t yet know precisely where or when human language evolved, but we can be reasonably confident about how it evolved: via the gene-by-gene design process called natural selection. No major evolutionary transition was involved.

This past Monday morning, right as I was about to go to the Edge web site to check out some of the other essays, someone e-mailed me to let me know about an uncanny coincidence. Just hours before my Edge essay came out—in which I called for the retirement of the misconception (among others) that human language was the outcome of a major evolutionary transition—Martin Nowak had published an essay on the Templeton Big Questions web site in which he was pushing in exactly the opposite direction. Here’s what Nowak had to say (my emphasis in boldface and italics):

I would consider these to be the five major steps in evolution: (i) the origin of life; (ii) the origin of bacteria; (iii) the origin of higher cells; (iv) the origin of complex multi-cellularity and (v) the origin of human language. Bacteria discovered most of biochemistry, higher cells discovered unlimited genetics; complex multicellularity discovered intricate developmental processes and animals with a nervous system. Humans discovered language.

Human language gave rise to a new mode of evolution, which we call cultural or linguistic evolution. The enormous speed of human discovery and invention is driven by this new mode of evolution. An idea or concept that originates in one brain can quickly spread to others. Structural changes (memories) are imprinted from one brain to another. Prior to human language the most crucial information transfer of evolution was mostly in terms of genetic information. Now we have genetic and linguistic evolution. The latter is much faster.  Presumably the collective information in human brains evolves at a much faster rate than any previous evolutionary system on earth. The growing world wide connectivity speeds up this linguistic evolutionary process.

Now, Nowak and I both agree that human language is a Very Special Way of transmitting information, but I say human language was not the outcome of a major evolutionary transition. Nowak says it was. We can’t both be right, so what’s going on? With respect, I think it’s Nowak who’s muddling things.

It was John Maynard Smith and Eörs Szathmáry who were actually responsible for popularizing the idea that human language was a major transition in evolution (see my essay and read between the lines; you’ll know whom I’m talking about even though I followed Brockman’s instructions to talk about ideas rather than the people who promote them). But as I wrote in my essay, Maynard Smith and Szathmáry made a category mistake when they did so: Here’s the first sentence from the description of their book, The Major Transitions in Evolution: “During evolution, there have been several major changes in the way that genetic information is organized and transmitted from one generation to the next.”

The critical word in the last sentence is “genetic.” Evolutionary transitions are about information stored in DNA, not about information in people’s minds. So, by Maynard Smith and Szathmáry’s own definition of major evolutionary transitions, human language categorically, absolutely cannot be one of them.

This has got to be incredibly obvious to anyone who takes a moment to think about it, so I’m not quite sure why influential people keep the misconception going. Equally puzzling to me, though, is why Maynard Smith and Szathmáry committed this error in the first place. Those gents are/were smart (Maynard Smith died in 2004; Szathmáry is still with us), and few people have ever had cause to doubt Maynard Smith’s judgment (though read Ullica Segerstrale’s biography of Bill Hamilton to learn about a striking exception to that rule). In any case, I can’t understand why Nowak continues to promulgate the notion that the evolution of human adaptations for nice things like human societies (as he did here) and human language (as in the Big Questions Online piece)—often using Maynard Smith and Szathmáry’s book for citation firepower—is comparable to actual Major Evolutionary Transitions that involved actual “major changes in the way that genetic information is organized and transmitted from one generation to the next.”

Human language is fascinating, puzzling, and a prime target for theory-building and research. Ditto for human cooperation and human societies. But these interesting features of human life are made neither grander, nor more comprehensible, by trying to get them into The Major Evolutionary Transitions club. They just don’t have the proper credentials.