In scientific papers we are very strict about citing sources. Not only do we put a list of our references at the end of papers, but we also indicate which reference gave us which fact right there in the text: “junco fledglings have big fuzzy eyebrows (LaBarbera 2012).” This makes fact checking easy.
Scientists writing for the general public don’t usually do this. Depending on the form of a science-for-a-general-audience column, references may appear only at the end, or they may not be there at all. When researchers write about their own research without any citations, saying “My research shows…” and “Many studies have found…” but not actually citing anything, it’s up to you either to blindly believe them (don’t do this) or to check their sources yourself. If their research is good, this shouldn’t be hard.

Sometimes a “research” column is like a coot: fine at first glance, but when you look close, really weird feet.
You know what I mean.
The first source to check is the researcher. Does Googling them give you a lab website on a real university website with headings like Students and Research and CV? Good. Does it give you a page on the website of a politically-motivated group masquerading as a research group? Bad!
On the researcher’s website, you should be able to find a list of their publications. This will be under a heading of Publications, or else it will be on their CV. The next step is to look at the titles of those publications and try to find the one(s) that contain the research they wrote about in their column. If the column was “Ants Have Weird Heads and Hate You,” the papers might be “Aesthetic traits of the head in a social insect” and “Human-ant interactions in an urban landscape” — they’ll be sciencier-sounding than the column, but obviously relevant.
Looking at those same publications: do you see journal titles, like Science or Animal Behaviour, or do you see some variation of in prep. or working paper or in review? If there isn’t an actual journal title, then that paper has not been accepted for publication anywhere, meaning it hasn’t passed peer review. Peer review is that thing where science is checked by other scientists before we throw it out into the world; it’s where your paper comes back from the journal with comments from other researchers like “This is the wrong statistical test to use with these data, please re-do” and “This sample size seems too small to show what you say it shows,” and you have to appease these reviewers before you get the paper published. A published paper, after peer review, seemed like solid science to at least three scientists. An unpublished paper could be utter nonsense. If the column is based on unpublished papers—or on a book; books are not peer-reviewed—then that is a big red flag.
Diving into the publications themselves lets you do two more checks. (Most researchers have links to PDFs of their publications on their website, so you can use those PDFs to do this.) Did the column cover a topic that isn’t obviously the author’s area of expertise—for example, an economist talking about genetics? Then you’ll want to look at the list of authors on the relevant publication. Right below the names will be their institutional affiliations, which will include departments, e.g. “Dept. of Integrative Biology, UC Berkeley.” Check to see if anyone on the paper is in the right department to be an expert on the topic in question: does the economics paper have an author from a department with the word Biology or Genetics in its title? If so, good. If not, the authors may be way out of their field and totally lost.
This isn’t a perfect test, since sometimes researchers sit in one department but do great work across several fields. However, good researchers should pass the rest of these tests, especially the next one, so if they fail this test and the next one, they’re probably bad news.
Remember the part in the column where the researcher said “Many other studies have found x” to back up their point? You need to find out whether there really are other studies. In the relevant publication, check the Introduction (at the beginning) and the Discussion (at the end) for a similar statement, like “Other studies have also found x,” and then look for citations. Does it say “Other studies have also found x (Amat 1991, Johansson 2006)”? Great! Pass! There probably are other studies that have found x. Does it not mention any other studies at all, anywhere? Whoops. Then the statement in the column was probably a lie by someone counting on you not to know how to check up on them.
This source-checking shouldn’t take you long, and it’s a good idea when you have a bad gut feeling about some writing that claims to be based on research. If it turns out to be good research, then, awesome, you’ve learned something new! If it doesn’t, then you’ve avoided being fooled.
(Note: this post was inspired by a recent column in the New York Times’ Opinionator—the one with the economist talking about genetics. If you read that and thought it seemed a bit weird, try doing the checks I describe here on it.)
Spoilsport!
Why can’t I do my research in the bottom of a pint glass and throw together some random words that essentially say “I reckon …” and get paid for it?
Often you CAN – that’s the problem! But hey, cut me in on some of those big bucks and this post will come right down… :-)
I wish.
Have you read “Bad Science” (Ben Goldacre)?
I like your Kookaburra. Also, great post about giving credit for others’ research.
I loved the comparison to the coot. Exactly right.