
33 Reasons to Join Our Lab

  1. Our lab has 17 Papers in Nature (four others in press).
  2. Our lab is currently funded by two NIH grants and three crappy little NSF grants.
  3. Our lab partners with the private sector.
  4. Our lab has alumni organizations in Los Angeles, New York City, London, and Boston.
  5. Our lab had 11 post-docs last year. 14 of them got tenure-track jobs.
  6. Our lab does strategic planning retreats.
  7. Our lab has a growler refilling station.
  8. All the students in our lab do New York Times op-eds as first-year projects.
  9. Our lab maintains a database of job offers we’ve turned down.
  10. Our lab is seeking an angel investor.
  11. Our lab already tested that.
  12. Our lab has a Michelin star.
  13. Did you ever slow down and really think about how such a thing could ever evolve? Our lab did.
  14. Our lab installed whiteboards on the outside of the building.
  15. Two years ago, our lab director got one MacArthur genius grant for his research and another to fix the nonprofit sector.
  16. Our lab was featured on Anthony Bourdain: No Reservations.
  17. Our lab is a member of the National Academy.
  18. Our lab director has an amazing TED talk titled “Can attractive CVs change the world?”
  19. Our lab has its own Battle of the Bands.
  20. Tom Wolfe is writing a hit piece about our lab.
  21. Our lab has a condiments station.
  22. When our lab goes to scientific conferences, somebody stays behind as a designated survivor.
  23. I have a Science Review paper coming out on our lab’s Nature papers.
  24. Our lab has a Tesla recharging station.
  25. This second-year dude in our lab used data from Google Scholar and Spotify to prove we’re in the same cultural clade as Charles Darwin, Marvin Minsky, and Bowie.
  26. Our lab director has no qualms about dialing it in.
  27. What did you think of Ethan Hawke’s performance in the biopic about our lab?
  28. Our lab knows a shoe can be both stylish and comfortable.
  29. September through May, our lab hosts a “Second Saturdays” arts and culture event.
  30. As soon as our lab gets back from winter break, we’re going to disrupt six different research areas.
  31. This American Life is doing its next live event from our lab’s journal club.
  32. Our lab is a hub for Spirit Airlines.
  33. Our lab didn’t even bother to test that: It’s axiomatic.

~

A P-Curve Exercise That Might Restore Some of Your Faith in Psychology

I teach my university’s Graduate Social Psychology course, and I start off the semester (as I assume many other professors who teach this course do) by talking about research methods in social psychology. Over the past several years, as the problems with reproducibility in science have become more and more central to the discussions going on in the field, my introductory lectures have gradually become more dismal. I’ve come to think that it’s important to teach students that most research findings are likely false, that there is very likely a high degree of publication bias in many areas of research, and that some of our most cherished ideas about how the mind works might be completely wrong.

In general, I think it’s hard to teach students what we have learned about the low reproducibility of many of the findings in social science without leaving them with a feeling of anomie, so this year, I decided to teach them how to do p-curve analyses so that they would at least have a tool that would help them to make up their own minds about particular areas of research. But I didn’t just teach them from the podium: I sent them away to form small groups of two to four students who would work together to conceptualize and conduct p-curve analysis projects of their own.

I had them follow the simple rules specified in the p-curve user’s guide, which can be obtained here, and I provided a few additional ideas that I thought would be helpful in a one-page rubric. I encouraged them to make sure they were sampling from the available population of studies in a representative way. Many of the groups cut down their workload by consulting recent meta-analyses to select the studies to include; others used Google Scholar or Medline. They were all instructed to follow the p-curve manual chapter and verse, and to write a short paper summarizing their findings. The students told me that they were able to produce their p-curve analyses (and the write-ups) in 15-20 person-hours or less, and they seemed to find the exercise very empowering.
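For readers curious about the machinery, here is a minimal sketch (in Python, using SciPy; the function name is my own, and this is an illustration of the core logic, not the official p-curve app) of the right-skew test at the heart of the method: under the null of no true effect, significant p-values are uniform on (0, .05), so a pile-up of very small p-values signals evidentiary value.

```python
import numpy as np
from scipy import stats

def p_curve_stouffer(p_values, alpha=0.05):
    """Stouffer-style test for right skew among significant p-values.

    Under the null of no true effect, significant p-values are uniform
    on (0, alpha), so each conditional "pp-value" p/alpha is uniform too.
    A significantly negative Stouffer Z means the curve is right-skewed,
    which is the p-curve signature of evidentiary value.
    """
    sig = np.array([p for p in p_values if 0 < p < alpha])
    if len(sig) == 0:
        raise ValueError("no significant p-values to analyze")
    pp = sig / alpha                           # uniform on (0, 1) under H0
    z = stats.norm.ppf(pp)                     # probit-transform the pp-values
    z_stouffer = z.sum() / np.sqrt(len(sig))
    p_right_skew = stats.norm.cdf(z_stouffer)  # small => evidentiary value
    return z_stouffer, p_right_skew
```

For example, a set of p-values like [0.001, 0.003, 0.004, 0.01, 0.02] comes out significantly right-skewed, whereas a flat set like [0.01, 0.02, 0.03, 0.04, 0.045] does not. The official p-curve app does considerably more than this (it also tests whether the curve is flatter than one would expect under 33% power, and it works from reported test statistics rather than raw p-values), so treat this only as a sketch of the logic.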

This past week, all ten groups of students presented the results of their analyses, and their findings were surprisingly (actually, puzzlingly) rosy: All ten of the analyses revealed that the literatures under consideration possessed evidentiary value. Ten out of ten. None of them showed evidence for intense p-hacking. On the basis of their conclusions (coupled with the conclusions that previous meta-analysts had made about the size of the effects in question), it does seem to me that there really is license to believe a few things about human behavior:

(1) Time-outs really do reduce undesirable behavior in children (parents with young kids take notice);

(2) Expressed Emotion (EE) during interactions between people with schizophrenia and their family members really does predict whether the patient will relapse in the subsequent 9-12 months (based on a p-curve analysis of a sample of the papers reviewed here);

(3) The amount of psychological distress that people with cancer experience is correlated with the amounts of psychological distress that their caregivers manifest (based on a p-curve analysis of a sample of the papers reviewed here);

and

(4) Men really do report more distress when they imagine their partners’ committing sexual infidelity than women do (based on a p-curve analysis of a sample of the papers reviewed here; caveats remain about what this finding actually means, of course…)

I have to say that this was a very cheering exercise for my students as well as for me. But frankly, I wasn’t expecting all ten of the p-curve analyses to provide such rosy results, and I’m quite sure the students weren’t either. Ten non-p-hacked literatures out of ten? What are we supposed to make of that? Here are some ideas that my students and I came up with:

(1) Some of the literatures my students reviewed involved correlations between measured variables (for example, emotional states or personality traits) rather than experiments in which an independent variable was manipulated. They were, in a word, personality studies rather than “social psychology experiments.” The major personality journals (Journal of Personality, Journal of Research in Personality, and the “personality” section of JPSP) tend to publish studies with conspicuously higher statistical power than do the major journals that publish social psychology-type experiments (e.g., Psychological Science, JESP, and the two “experimental” sections of JPSP). One implication of this fact, as Chris Fraley and Simine Vazire recently pointed out, is that the lower-powered, experiment-friendly journals are likely, ceteris paribus, to have higher false-positive rates than the personality journals.

(2) Some of the literatures my students reviewed were not particularly “sexy” or “faddish,” at least not to my eye. (Biologists refer to the large animals that get the general public excited about conservation and ecology as the “charismatic megafauna.” Perhaps we could begin talking about “charismatic” research topics rather than “sexy” or “faddish” ones? It might be perceived as slightly less derogatory.) Perhaps studies on less charismatic topics generate less temptation among researchers to capitalize on undisclosed researcher degrees of freedom? Just idle speculation…

(3) The students went into the exercise without any a priori prejudice against the research areas they chose. They wanted to know whether the literatures they focused on were p-hacked because they cared about the research topics and wanted to base their own research upon what had come before–not because they had read something seemingly fishy on a given topic that gave them impetus to do a full p-curve analysis. I wonder whether this subjective component of conducting a p-curve analysis will turn out to matter a great deal as the technique becomes more popular.

If you teach a graduate course in psychology and you’re into research methods, I cannot recommend this exercise highly enough. My students loved it, they found it extremely empowering, and it was the perfect positive ending to the course. If you have used a similar exercise in any of your courses, I’d love to hear about what your students found.

By the way, Sunday will be the 1-year anniversary of the Social Science Evolving Blog. I have appreciated your interest.  And if I don’t get anything up here before the end of 2014, happy holidays.

The Trouble with Oxytocin, Part II: Extracting the Truth from Oxytocin Research

Two weeks ago, the Society for Personality and Social Psychology (SPSP) held its annual meeting in Austin, TX. I tried to get there myself, as I had been invited to give a talk on the measurement of oxytocin in social science research as part of the “Social Neuroendocrinology” pre-conference. However, some things were brewing on the home front that kept me in Miami. Undeterred, the pre-conference organizers arranged for me to give my talk via Skype, which worked out reasonably well.

In this essay, I’ve turned some of that talk into the second installment in my “The Trouble with Oxytocin” series (the first installment is here). It’s a bit wonkish, focusing as it does on the importance of a bioanalytical technique called extraction, but it’s an important topic nonetheless. Many of the social scientists who are studying oxytocin have decided that they can skip this step entirely. As a result of their decision to take this shortcut, it’s quite possible that many scientific claims about the personality traits, emotions, and relationship factors that influence circulating oxytocin levels are—how to put this diplomatically?—without adequate basis in fact. I’ll substantiate this claim anon, but first, a bit of nomenclature.

A Bit of Nomenclature

Applied researchers generally measure oxytocin in bodily fluids by immunoassay—a technique so ingenious that the scientists who developed it received a Nobel Prize in 1977. Simplifying greatly, to develop an immunoassay for Substance X, you inject animals (probably rabbits) with Substance X and wait for the animal(s) to produce an immune reaction. To the extent that one of the antibodies an animal produces in response to Substance X is sensitive to Substance X, but not to other substances that can masquerade as Substance X, you may be in a position to conclude that you have successfully produced a “Substance X antibody.” With that antibody in hand, you’ve got the most important ingredient for developing an immunoassay.

Antibodies can be used to make several types of immunoassays, but two types are prominent in the oxytocin field: Radioimmunoassays (RIA) and Enzyme-Linked Immunosorbent Assays (ELISA, or EIA). Both methods are widely accepted (although ELISAs don’t require the analysts to handle radiation—a benefit to be sure). I wanted to familiarize you with these terms here at the outset only because I don’t want my toggling back and forth between them to distract you. The focal issue for our purposes here is the issue of extraction.

To Be Exact, You Must Extract

Extraction is a set of preliminary processes an analyst can use to separate Substance X from other substances in a sample of (for instance) blood plasma that might interfere with the immunoassay’s ability to quantify precisely how much Substance X is in the sample. I’m going to skip the details, but you can read up here. Antibodies can bind to all sorts of substances that are not Substance X (for example, proteins, other peptides, or their degradation products) if you’re not careful to remove that other stuff first. More relevant for our purposes here, researchers have known for a really long time that a failure to extract before conducting immunoassays for plasma oxytocin will result in profound overestimates of how much oxytocin is actually in the sample.

This is not some well-kept industry secret. The manufacturers of some of the more widely used commercial ELISAs have been admonishing the users of their assays to extract samples since at least 2007. Below is a snip from an instruction manual bearing a 2006 copyright. (The admonition gets repeated in this 2013-copyright instruction manual also):

[Image: excerpt from the assay kit’s instruction manual]

What the manufacturers are showing here (see the two columns of data on the left) is that when they performed their oxytocin assay on a sample of human blood plasma without performing an extraction step, they read off an oxytocin concentration of 2,761 pg/ml (picograms [10⁻¹² grams] per milliliter). When they performed the extraction step on the same sample, they got a value of 3.4 pg/ml—three orders of magnitude smaller. Plain English translation: “There are some substances in human blood plasma that fool our antibody into believing they’re oxytocin molecules. You’d better get rid of those imposters before you run our assay on your sample. After you do that, we think you’ll be OK.” Keep this value of 3.4 pg/ml in mind. As I’ll show you below, it’s the sort of value, more or less, that one ought to be expecting from assays that actually measure oxytocin.
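A quick back-of-the-envelope check, using the two figures from the manufacturer’s example above, confirms the scale of the discrepancy:

```python
import math

# Manufacturer's example values for the same plasma sample
unextracted = 2761.0  # pg/ml, raw (unextracted) plasma
extracted = 3.4       # pg/ml, after extraction

ratio = unextracted / extracted   # how badly the raw reading overshoots
orders = math.log10(ratio)        # expressed as orders of magnitude

# ratio is roughly 800-fold, i.e., close to three orders of magnitude
```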

Like I say, the need for extraction is no secret. Basic biological researchers who study oxytocin have been extracting their samples since The Waltons had a prime-time slot on CBS. But extraction takes a lot of time, so it is expensive. Perhaps this is why a team of researchers started to skip the extraction step in the early 2000s.[1] In no time at all, other social scientists were following in their footsteps, and with that, a Pandora’s box was opened. Most social scientists just stopped extracting, often citing the originators of this custom to justify their choice.

In what follows, I’ll chronicle what happened to the social science literature on oxytocin as a result of this fateful methodological choice. Table 1, below, is from a paper that Armando Mendez, Pat Churchland, and I published last year.[2] It illustrates the typical oxytocin values one can expect to see in samples of extracted plasma measured by radioimmunoassay versus the values one can expect to see when using one of the commercial ELISAs on raw (i.e., unextracted) plasma.

[Table 1. From McCullough, Churchland, and Mendez (2013)]

A few things stand out in Table 1. First, when you measure oxytocin in blood plasma using RIA on extracted samples, you typically find that healthy, non-pregnant women and men have oxytocin levels of somewhere between 0 and 10 picograms per milliliter of blood plasma. This is consistent with that value of 3.4 pg/ml that I suggested you keep in mind from the 2006 instructions that came with that assay kit.

Below are some values that Ben Tabak, our neuroscience/biochemistry colleagues, and I obtained on 35 women whose oxytocin we measured in five different samples of plasma. Mean values were in the 1-2 picogram range.[3]

[Figure: Adapted from Tabak et al. (2011)]

The Tabak et al. (2011) sample was small. We had oxytocin values for only a few dozen women, so I won’t be offended if you don’t want to place too much trust in them. But here are some values that Tim Smith and his colleagues obtained with an RIA on extracted samples from 180 male-female couples: again, their mean values hovered around 1-2 picograms per milliliter.[4]

[Figure: From Smith et al. (2013)]

So this is very reassuring.  The values that we got, and the values that Smith and his colleagues got, are very consistent with the 1-10 pg/ml range that we’ve come to expect over the past 35 years.

[Table 1, repeated. From McCullough, Churchland, and Mendez (2013)]

But now take a look at the right side of Table 1 above to see what happens when you assay plasma for oxytocin using commercial ELISAs without extraction. It doesn’t matter whether you’re studying healthy non-pregnant women, healthy non-pregnant men, pregnant women, or new mothers: you’re going to get mean oxytocin values in the 200-400 pg/ml range, that is, values that are 100 to 200 times higher than what you get with RIAs on extracted samples.

Consider, for instance, the data below, which come from this paper, which the authors accurately described in the abstract as “[u]tilizing the largest sample of plasma OT to date (N = 473).” They found a mean value for men of approximately 400 pg/ml and a mean value for women of around 359 pg/ml.[5]

[Figure: From Weisman et al. (2013)]

Mean values of 200, 300, and 400 pg/ml for oxytocin in unextracted plasma are not exceptions to an otherwise orderly corpus of findings. They are what you should expect to find if you perform an oxytocin assay without extraction. For instance, the data below, from this paper, show the sorts of oxytocin values you can expect to find in the plasma of pregnant and recently pregnant women when you use ELISA on raw plasma:[6]

[Figure: From Feldman et al. (2007)]

The values above are measured in picomolars rather than in pg/ml, but oxytocin has a molecular mass of 1007 Daltons, so by sheer coincidence one picomolar of oxytocin is roughly equivalent to one pg/ml. In other words, these authors also got mean values for oxytocin using an ELISA on raw plasma that are way too high—and look at the upper end of those ranges—3,648 pg/ml! There’s just no good reason for believing that there could be 300 picograms of OT—much less 3,648—in a milliliter of blood plasma.
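The conversion described above is easy to verify with a few lines of code (a minimal sketch; the 1007-Dalton molecular mass is the figure given in the text, and published values differ only slightly):

```python
MW_OXYTOCIN = 1007.0  # g/mol, i.e., ~1007 Daltons

def pm_to_pg_per_ml(pmol_per_liter):
    """Convert a picomolar (pmol/L) concentration to pg/ml.

    1 pmol/L x 1007 g/mol = 1007 pg/L = 1.007 pg/ml, so for oxytocin
    the two units are nearly interchangeable.
    """
    pg_per_liter = pmol_per_liter * MW_OXYTOCIN  # pmol x g/mol = pg
    return pg_per_liter / 1000.0                 # per liter -> per milliliter
```

So a reported concentration of 300 pM corresponds to roughly 302 pg/ml, which is why values in the hundreds of picomolars are just as implausible as values in the hundreds of pg/ml.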

Why are these ELISAs giving such high values? There’s nothing wrong in principle with using an ELISA to measure OT in plasma, even though some of the commercial assays have used antibodies whose sensitivity and specificity are far from ideal. (This is an extremely important issue, by the way, but not the one to tackle here.) Instead, the predominant reason why researchers are getting such wacky values from these ELISAs is that they’re skipping the extraction step.

How do I know? Because I know what happens if you do extract your samples before you assay them via ELISA. Our research group found that when you extract your samples before you analyze them with a certain commercial ELISA kit, the mean values drop from somewhere around 358 pg/ml to somewhere around 1.8 pg/ml—just as you’d expect, given the admonitions in the manufacturer’s instructions.[7] And here are some extracted values that Karen Grewen and her colleagues got for 20 healthy breastfeeding mothers when they used the same ELISA that gave Weisman et al. those values in the 300-400 pg/ml range for raw plasma.[8] ELISAs can give plausible values if you extract first.

[Figure: From Grewen, Davenport, and Light (2010)]

Estimating OT from Unextracted Samples: Is There Any Signal Amidst the Noise?

Of course, none of this would matter very much if there were some way to statistically transform the OT values you obtain from unextracted plasma into the values you would have obtained from extracted plasma, but that doesn’t seem to be the case: The evidence currently available suggests that the values from the two methods are, quite possibly, uncorrelated.

We looked at this issue in our 2011 paper.[7] We had 39 plasma samples, which we analyzed with one of the most widely used commercial ELISAs, both before and after extraction. The correlation coefficients ranged from .09 to -.14, depending on distributional assumptions. Kelly Robinson and her colleagues just came to the same conclusion with their own data—52 samples of blood plasma from seals.[9] In fairness, I have to acknowledge another study that revealed a very high correlation (0.89) between the oxytocin values derived from extracted samples and those obtained from unextracted samples, but that study was based on very little data (11 samples of blood serum, rather than plasma, from Rhesus monkeys), so it would be a mistake to give it too much weight.[10]

Conclusion

So, what shall we conclude about oxytocin assays on unextracted plasma, given the data we have to go on at this point? Well, on the plus side, raw plasma is cheaper and quicker to assay than extracted plasma. Nobody disputes that. On the minus side, if you don’t extract those samples before you assay them, you apparently convert those ingenious oxytocin assays into random number generators, and there are cheaper ways to generate random numbers.

For ten years, many social scientists who study oxytocin have been side-stepping an expensive but evidently crucial extraction step. If you’ve come to believe that the trust of a stranger, or sharing a secret, or sensitive parenting, or mother-infant bonding, or your mental health, can influence (or is influenced by) how much oxytocin is coursing through your veins, you might want to take a second look. Chances are, those findings came from studies that used immunoassays on unextracted plasma (it’s easy to know for sure: just check the papers’ Method sections), and if so, there’s little compelling reason to think the results are accurate.

Now, if any researchers out there have data that can prove that we should be taking the results from immunoassays on unextracted samples at face value, they would do the field a great favor to make those results public, and at that point I will happily concede that all my worrying has been for nought. Even better, perhaps someone could conduct a large, pre-registered study on the correlation of OT values from extracted versus raw plasma. Pre-registration is easy (for example, here), and would increase the inferential value of such a study immensely. In any case, more data on this topic would be most welcome. I, for one, would love to know whether we should be taking the results of studies on raw plasma seriously, or whether we’d be better off by dragging them into the recycle folder.

References

1.         Kramer, K.M., et al., Sex and species differences in plasma oxytocin using an enzyme immunoassay. Canadian Journal of Zoology, 2004. 82: p. 1194-1200.

2.         McCullough, M.E., P.S. Churchland, and A.J. Mendez, Problems with measuring peripheral oxytocin: Can the data on oxytocin and human behavior be trusted? Neuroscience and Biobehavioral Reviews, 2013. 37: p. 1485-1492.

3.         Tabak, B.A., et al., Oxytocin indexes relational distress following interpersonal harms in women. Psychoneuroendocrinology, 2011. 36: p. 115-122.

4.         Smith, T.W., et al., Effects of couple interactions and relationship quality on plasma oxytocin and cardiovascular reactivity: Empirical findings and methodological considerations. International Journal of Psychophysiology, 2013. 88: p. 271-281.

5.         Weisman, O., et al., Plasma oxytocin distributions in a large cohort of women and men and their gender-specific associations with anxiety. Psychoneuroendocrinology, 2013. 38: p. 694-701.

6.         Feldman, R., et al., Evidence for a neuroendocrinological foundation of human affiliation: Plasma oxytocin levels across pregnancy and the postpartum period predict mother-infant bonding. Psychological Science, 2007. 18: p. 965-970.

7.         Szeto, A., et al., Evaluation of enzyme immunoassay and radioimmunoassay methods for the measurement of plasma oxytocin. Psychosomatic Medicine, 2011. 73: p. 393-400.

8.         Grewen, K.M., R.E. Davenport, and K.C. Light, An investigation of plasma and salivary oxytocin responses in breast- and formula-feeding mothers of infants. Psychophysiology, 2010. 47: p. 625-632.

9.         Robinson, K.J., et al., Validation of an enzyme-linked immunoassay (ELISA) for plasma oxytocin in a novel mammal species reveals potential errors induced by sampling procedure. Journal of Neuroscience Methods, in press.

10.       Michopoulos, V., et al., Estradiol effects on behavior and serum oxytocin are modified by social status and polymorphisms in the serotonin transporter gene in female rhesus monkeys. Hormones and Behavior, 2011. 58: p. 528-535.