When I taught a science course for non-science majors at Cornell University, my favorite lecture was on science in the popular media. My learning objective was straightforward: I wanted students to pick up a newspaper, flip to the science section, and detect bullshit.
This is an important life skill because, these days, blogs, newspapers, and magazines routinely print unscientific bullshit. They do it because it sells. I have never seen an article so neatly package all of the problems that lead to this state of affairs as John Bohannon’s recent “Chocolate Diet” charade. If you haven’t read it yet, please do. I’ll wait.
Here’s the breakdown: Diana Löbl and Peter Onneken are working on a “diet science” documentary and want to show how diet news gets reported. So they enlist Bohannon to run a deliberately shoddy study. Participants will either make no dietary changes, go on a diet, or go on a diet with a bar of chocolate. By including only 15 people in their study, and measuring a wealth of factors, they were guaranteed to find that the chocolate dieters fared better in some way than the other groups. The finding would be meaningless, but it would be “statistically significant” – within the confines of their prohibitively small study. They lucked out, and the five chocolate eaters lost weight.
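That “guaranteed to find something” claim is worth making concrete. With tiny groups and many measured outcomes, pure noise will clear the p < 0.05 bar somewhere almost by design: with 18 independent outcomes and no real effect, the chance of at least one “significant” result is roughly 1 − 0.95^18, about 60%. Here is a minimal simulation of that effect; the group size of five, the 18 outcomes, and the permutation test are my illustrative assumptions, not Bohannon’s actual analysis.

```python
import random
import statistics

def permutation_p(a, b, n_perm=200, rng=None):
    """Two-sided permutation test p-value for a difference in group means."""
    rng = rng or random.Random(0)
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = a + b  # fresh list; shuffling it leaves a and b untouched
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:len(a)]) -
                   statistics.mean(pooled[len(a):]))
        if diff >= observed:
            extreme += 1
    return extreme / n_perm

rng = random.Random(42)
group_size = 5    # chocolate group vs. control, 5 people each
n_outcomes = 18   # weight, cholesterol, sleep quality, ...
n_studies = 100   # repeat the whole "study" many times

studies_with_a_hit = 0
for _ in range(n_studies):
    # Both groups are drawn from the SAME distribution,
    # so any "significant" difference is pure noise.
    p_values = []
    for _ in range(n_outcomes):
        chocolate = [rng.gauss(0, 1) for _ in range(group_size)]
        control = [rng.gauss(0, 1) for _ in range(group_size)]
        p_values.append(permutation_p(chocolate, control, rng=rng))
    if min(p_values) < 0.05:
        studies_with_a_hit += 1

print(f"{studies_with_a_hit} of {n_studies} null studies found a "
      f"'significant' result on at least one outcome")
```

Run it and most of the simulated null studies come back with a publishable-looking finding, which is exactly the trap the chocolate study set.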
Next, they needed to publish their study somewhere. Fortunately for Bohannon, it’s getting harder and harder to distinguish “fake” publishers from real ones. Real journals conduct real peer review, where third-party scientists must evaluate an article before it can be published. Given the intentionally poor design of this study, it (hopefully) wouldn’t pass peer review at any credible journal.
A fake publisher, on the other hand, conducts no review at all: it collects a publication fee, then immediately accepts and publishes the article. Bohannon’s paper sailed into the International Archives of Medicine without any peer review.
Once the paper was published, it took off in the science and diet sections of many, many magazines, blogs, and newspapers. You can easily understand why: “Chocolate helps you lose weight” is the sort of headline The Daily Mail and its ilk dream about.
Back to my students. To consider a news piece credible, students first needed to ask three questions.
1. Is the finding reported by a scientist? Who? Does the scientist work for a biased source?
2. Was the finding published in a peer-reviewed journal?
3. Can you find the article? Do the conclusions the authors draw in the abstract reflect what is reported in the newspaper?
How would my students fare if they were given this article? It turns out, not so well. Here are my answers, working from Bohannon’s press release.
Question 1: Is the finding reported by a scientist? Does the scientist work for a biased source?
Johannes Bohannon is presented as the research director of the nonprofit Institute of Diet and Health, a fictional institute. The institute’s website is intentionally minimalist: it doesn’t look incredibly legitimate, but it also doesn’t scream crazy. I imagine most students would give it a pass, despite the possibility that, say, this fictional institute is funded by Nestlé.
Question 2: Was the finding published in a peer-reviewed journal?
Yes, of course it was! And here lies the real problem predatory journals pose for the everyday reader. The International Archives of Medicine (IAM) is a legitimate-sounding journal. The journal’s website has an ISSN, citation metrics, and assurances that it is included in the JournalGuide whitelist of reputable titles. As Bohannon reports, this journal used to be reputable: it was once published under BioMed Central, a trustworthy open-access publisher.
In short, one would need to do some serious digging to realize the journal is bogus.
Question 3: Can you find the article? Do the conclusions the authors draw in the abstract reflect what is reported in the newspaper?
Yes and yes. This is an easy one, because Bohannon wrote both the press release and the article.
It’s clear that we have a problem here. What prevents someone from doing poor work (or fabricating work) and “publishing” the work in predatory journals? Nothing. And unless some sort of watchdog agency is set up to accredit peer-reviewed journals, this will not change any time soon. If real scientists can’t distinguish a quality journal from a predatory one, what chance do non-scientists have? As it is, I find Beall’s list of questionable publishers to be the most useful resource for investigating journals. But this is a list built by a science librarian who saw a problem, and can’t be expected to be definitive or exhaustive. The JournalGuide, cited on IAM’s website, appears to be a reasonable resource in its own right (it’s listed in this Nature article on resources for evaluating journals), but obviously it’s not perfect if IAM is listed.
If policing publishers isn’t the answer, then what can be done? Bohannon’s piece makes one thing abundantly clear: the popular media did not do their homework on this story. Many magazines ran it without contacting Bohannon at all. When he was contacted, it was never by someone who knew enough about science to ask the right questions. In short, science journalism lacks skepticism. It seems that most outlets will run an article without asking the three questions I listed above. And even if they do, it’s clear that a more careful tack must be taken in the future to prevent spreading misinformation.
Until that day arrives, I urge you, the reader, to bring a healthy dose of skepticism to every new miracle finding you read, especially related to human health, sexuality, and diet. Do your homework, because no one else is doing it for you.
For as long as I can remember, I’ve enjoyed video games. If there’s leveling up, character optimization, or strategic battles to be had, I’m in. At my worst, I can burn an entire Sunday; I’ll admit that I’ve banned myself from playing DOTA2 after one too many late nights.
At their best, video games can be a vehicle for me to work on my goals. For example, I’ve been changing the language settings on my video games to Spanish. I’m not learning a ton of practical vocabulary, sure, but immersion is a great thing, even if it’s silly.
A newer concept for me is gamification. At its core, gamification is about turning stuff you want to get done into a game. Duolingo is a simple example: complete vocabulary lessons to level up and unlock new ones. When accomplishing tasks results in leveling up and earning rewards, the same compulsion that led me to waste my teen years playing EverQuest will now work in my favor. That’s the idea, at least, and I manage to keep up with Duolingo fairly well.
HabitRPG is MUCH more ambitious with this concept. It goes all out, giving you an avatar with the ability to earn mounts, group up with friends, and fight bosses. You get to choose your “quests,” so it can be tailored to whatever purpose you’d like. If you set a daily goal to study for one hour, you’ll gain experience points and gold for completing it, but take damage each day you fail to check it off. You can then spend accumulated gold on rewards that you specify, or on in-game swag.
There’s no doubt that consistency and repetition are the keys to building positive habits. This is where a tool like HabitRPG shines: it rewards positive habits in a fun and imaginative way.
I’m setting up my HabitRPG account to encourage more productive use of my free time. In a future blog post, I’ll reflect on how gamification can be used in the laboratory.
Last year, I wrote about using Evernote as my digital lab notebook. With the release of Findings, a new digital notebook software from the people who created my favorite reference management software Papers, I thought I would reflect on my digital notebook needs.
A digital notebook should be:
- Indexed and searchable, with both automatic (embedded-text search) and manual (tags) search functions. Evernote handles this quite well: I can easily manage my tags and organize them how I wish, and searching my notebook will also find terms in embedded Word documents and PDFs, for example.
- Integrated across devices. The great thing about drinking the Apple Kool-Aid is that these apps work well across platforms. We don’t tend to have a laptop on the experimental benchtop; being able to pull up my notebook on my iPhone is great.
- Multimedia friendly. My notebook is a mix of text, snapshots, data annotated in PowerPoint, Excel files, Word documents, and PDFs. Again, Evernote handles this quite well. It falters a bit when printing out my notebook: usually my images don’t come out formatted quite right, and I end up with a single image per page.
- Traceable. Ultimately, a lab notebook is for tracing the lineage of data. Whether this is at the troubleshooting or the write-up phase, I need to understand what the starting material, protocol, and resulting output were at each stage of the experiment. Science is seldom perfect, and a good lab notebook can prevent some confusing mix-ups (was that DNA sample prepared before or after I optimized the pH of buffer X?). Here is where Evernote isn’t perfect, largely because this is a science problem.
I’m looking forward to trying out Findings and reporting back how it improves on these key issues (and others I haven’t thought about).
Wow. These reviews are extremely incriminating; reviewers obviously spotted some of the key problems with this paper:

“This paper claims that cells from any somatic tissue can be reprogrammed to a fully pluripotent state by treatment for a few days with weak acid. This is such an extraordinary claim that a very high level of proof is required to sustain it and I do not think this level has been reached.”
Retraction Watch readers are of course familiar with the STAP stem cell saga, which was punctuated by tragedy last month when one of the authors of the two now-retracted papers in Nature committed suicide.
In June, Science‘s news section reported:
Sources in the scientific community confirm that early versions of the STAP work were rejected by Science, Cell, and Nature.
Parts of those reviews have surfaced, notably in a RIKEN report. Science‘s news section reported:
For the Cell submission, there were concerns about methodology and the lack of supporting evidence for the extraordinary claims, says [stem cell scientist Hans] Schöler, who reviewed the paper and, as is standard practice at Cell, saw the comments of other reviewers for the journal. At Science, according to the 8 May RIKEN investigative committee’s report, one reviewer spotted the problem with lanes being improperly…
I adapted this code and produced my first plot with error bars in R! Thanks Martin!
Ah, the barplot. Loved by some, hated by some, the first graph you’re likely to make in your favourite office spreadsheet software, but a rather tricky one to pull off in R. Or, that depends. If you just need a barplot that displays the value of each data point as a bar — which is one situation where I like a good barplot — the barplot() function does just that:
Done? Not really. The barplot (I know some people might not use the word plot for this type of diagram, but I will) one typically sees from a spreadsheet program has some gilding: it’s easy to get several variables (“series”) of data in the same plot, and often you’d like to see error bars. All this is very possible in R, either with base graphics, lattice or ggplot2, but it requires a little more work. As usual when it…
Well, the BBC and The New York Times have both published pieces on the Russian hackers “CyberVor”. The claim is that 1.2 billion user names and passwords have been stolen from some 420,000 websites. The sites and users affected, the nature of the vulnerability, and the severity of the threat have not been disclosed.
Skeptics have pointed out that, well, things don’t really add up. The biggest problem is that The New York Times was fed the CyberVor piece by Hold Security, the very firm that stands to profit from the breach by charging $120/year for its services. The New York Times piece, to my eye, does not validate the information provided by Hold Security. The truth is that it’s in Hold Security’s interest to exaggerate the breach, and in The New York Times’ interest to report the story as quickly as possible. Without released facts or data, this entire story could have been fabricated by Hold Security. That is unlikely, as The New York Times piece claims that two unaffiliated sources verified the database as authentic. Still, experts seem to think the threat could be exaggerated.
So what does Hold Security know about CyberVor? According to The New York Times piece (which means according to Hold Security, the firm selling the solution to the CyberVor problem), CyberVor is made up of fewer than a dozen men in South Central Russia. Hold Security knows this because it has been in communication with them. Seriously.
I’ll be interested to see how this develops. As it stands, I see a lot of big claims with no evidence or specifics, and the group making the claims profiting from the resulting panic. I also find it odd that CyberVor and Hold Security communicate. Is that normal? Do hackers usually chat with data security firms? I can’t even verify that CyberVor exists from anyone other than Hold Security because, well, Hold Security coined the term CyberVor, and that is all we have to go on.
As other skeptics have advised, however the threat plays out, taking cyber security seriously is a wise decision. I recently invested in password management software, and highly recommend it.