Getting hit by lightning is not fun! If you would like to help me in my recovery efforts, which include moving to the SW, feel free to hit the fundraiser at A New Life on GiveSendGo, use the options in the Tip Jar in the upper right, or drop me a line to discuss other methods. It is thanks to your gifts and prayers that I am still going. Thank you.
Back many moons ago, when I taught some basic science courses for a small university (tempted to put that in quotes), I encouraged a certain amount of skepticism and critical thinking in my students via a fictional sensational news/marketing push/story. I can’t remember all of it (stupid lightning), but enough to lead into today’s post.
The push story was that anyone contracting a particular gum infection, “gumjooboo,” stood a 99 percent chance of dying from it. Thing was, there was a special toothbrush now for sale that, while very expensive, was 99.9999 percent effective in preventing gumjooboo. I then walked the students through parsing and researching the claim to show that gumjooboo only affected a small percentage of a particular tribe in one small area of the Amazon basin. This led into some discussions on probability, odds, etc., and a bit of discussion on interpreting scientific papers.
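The gumjooboo pitch makes a neat worked example of why base rates matter: a scary fatality rate means almost nothing if almost nobody contracts the thing in the first place. Here is a minimal sketch with invented numbers (the base rate below is made up purely for illustration):

```python
# Hypothetical numbers for the "gumjooboo" thought experiment.
base_rate = 1e-7                 # chance a random person ever contracts it (invented)
fatality_if_infected = 0.99      # the scary headline number
brush_effectiveness = 0.999999   # the toothbrush's claimed prevention rate

# Absolute risk of dying, with and without the expensive brush.
risk_without_brush = base_rate * fatality_if_infected
risk_with_brush = base_rate * (1 - brush_effectiveness) * fatality_if_infected

print(f"Risk without the brush: {risk_without_brush:.2e}")
print(f"Risk with the brush:    {risk_with_brush:.2e}")
print(f"Absolute risk reduction: {risk_without_brush - risk_with_brush:.2e}")
```

Both risks are vanishingly small, so the “99.9999 percent effective” brush buys you essentially nothing in absolute terms, which is exactly the parsing exercise the students were walked through.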
First, and I say this as a former (well, I still commit a bit of it) science journalist and former member of the National Association of Science Writers, take any media story about scientific research with a tun of salt. You might be amazed at how often the media presentation is 180 degrees from what the study actually says or shows. There are a number of reasons for it, including lack of specialization, lack of experience, and even deliberate misrepresentation.
Specialized reporting is not what it used to be in the corporate media. Newspapers and television stations, to say nothing of the networks, once had dedicated science and medical reporters. You had people like Jon Van covering science at the Chicago Tribune. You had people like Jules Bergman at ABC covering aviation and space. These were people with years, even decades, of experience. You don’t find that as much anymore in corporate media. Instead, you far too often find people right out of J-school tasked with covering various science and medical studies. Funny thing is, we saw that coming: Jon, the medical reporter at the Tribune, and I had a very interesting discussion on that subject many years back. Sadly, they were right in their predictions. New media is mixed, but there are some excellent science and medical reporters out there.
So, let’s take a quick look at how to read a scientific paper. In fact, let’s do so in part in the context of current events.
First, in what publication is it appearing? If Scientific American, make that several tuns of salt right at the start. I, personally, no longer trust anything they print. If it is a paper on astrophysics and it’s in a biology journal or the Journal of Irreproducible Results, that should be a red flag. If it’s a medical study and in a medical or biological journal, that’s a good start.
Next, you are going to have a title. The title should be even-handed, maybe even boring, as “reputable” journals avoid sensationalism. If it is highly sensational, such as claiming the President’s mother is an alien or that gas stoves are causing massive brain damage and asthma, good odds it is not a valid study, and that it’s not in a reputable or good publication. Though to be fair, The Enquirer seems to be gaining ground on many so-called scientific journals.
Up next is the list of authors. It should include current employers for each (J. Blowhard, National Institutes of Health) and, in many online papers, links to previous papers, etc. It should also lead to affiliations, that is, the organizations to which each researcher belongs. Now, if you find in a paper on the horrors of using gas stoves that one or more of the authors worked for a company that removed gas from buildings, or for a group dedicated to eliminating gas as a fuel, that’s another red flag. Always check current and past employers, professional affiliations, and previous papers.
Now, sometimes it is up front and above board, and as such listed high up. Most times, however, it is buried towards the bottom of the paper. The “it” in question is the disclosure of who funded the research. Sometimes the funder is named openly; quite often it is a foundation or fund with a noble-sounding name. Always check that out, as quite often the major source of funding for that noble-sounding trust or whatever is a major industry organization or even a single company. If that organization or company is dedicated either to eliminating the horror that is gas, or to promoting the competition to gas, well, yeah, that’s another red flag.
If you really want some fun that’s not a gas, go look up how much FDA nutrition research over the years was funded by trusts and funds bearing names like Kellogg, or even directly by major food companies. It’s not even that hard, as it is well documented. Funding is king, and often is the key to understanding and evaluating the research paper in question. Even when the U.S. government funds research, look to see if it is taxpayer funding or courtesy of a grant to the government by an industry-funded trust.
Next up should be an abstract. This is a synopsis of the paper and its conclusions. To be honest, it is all that is read by far too many corporate media reporters, and is why such reporting is often “just a bit outside.” Abstracts can be confusing, and it can be easy to read into them what you want to read into them. Good ones are not confusing, but remember that you’re dealing with scientists and engineers talking among themselves, not authors used to talking to the public.
First up in the paper proper should be the background. What led to the paper? Why did they do it the way they did it? There is usually lots of good information here, and it is often fun to read between the lines. One of my favorites remains a research study on coffee filtration, which, when you read a bit between the lines, boiled down to: we are coffee-heads, some of us have lipid problems, so we decided to see if filtering made a difference, because we are NOT giving up our coffee. It is also a good place to start spotting red flags, since if the background is sketchy, the study is sketchy.
Next thing to look at is the methodology. Most good studies are looking at a real-world situation, and therefore the methodology should mirror the real world as much as possible. Not so easy on things like black holes, but on possible pollutants from gas stoves, dead easy. Therefore, if you see a methodology that basically sealed an area (layers of plastic, foam bars, etc.) until it was almost air-tight and guaranteed to raise concentrations, that’s a big red flag.
Every good paper should have a section on prior research. It’s part of the discussion of why this research was needed and what the paper contributes to the discussion. Remember, real science is about questioning, researching, debating, and testing. Science is never settled, and thinking back on watching a grad student all but dance in Spacelab Control when she was proved right on a theory and her professor wrong still makes me smile. Note, the professor wasn’t upset, he found it a good thing. That’s real science.
If a paper either doesn’t have such a section, or it is woefully incomplete, it is not a valid scientific study and paper. For example, if a study uses a limited sample, a small area, and questionable methodology to reach a conclusion, and fails to discuss an easily found paper on PubMed that involved samples relating to half a million children worldwide, there is an issue there.
There should also be a conclusion, but by this point, unless it says Jeffrey Epstein didn’t hang himself, you should have all the data you need. If a paper is nothing but a series of red flags, it is not a paper but propaganda. The next question is who is behind it, and why. Why would anyone want to force a large segment of the population onto an underpowered and problematic grid, and away from a clean energy source that is harder to control and/or cut off than electricity? Good questions.
You should always have questions at the end of a paper. In good papers, most of mine are along the lines of ‘who is doing the next step research on this’ and ‘where can I find more information.’ In bad papers and propaganda, more towards the above.
When it comes to papers and the media coverage of same, trust no one. Rather, trust but verify. Especially if research is being used to push major policy decisions.
UPDATE: Got reminded that you also need to check whether data is being accurately compared. Bad papers/propaganda have a tendency to make apples-to-oranges comparisons (it’s another one of those red-flag things), so be sure it is apples to apples, and not an attempt at a quick tap dance.
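One fast way to catch an apples-to-oranges comparison is to check whether the figures being compared share a denominator. A toy sketch with invented numbers, showing how raw counts and per-capita rates can point in opposite directions:

```python
# Invented numbers: raw incident counts make region A look worse,
# but the per-capita rates tell the opposite story.
regions = {"A": {"incidents": 900, "population": 1_000_000},
           "B": {"incidents": 300, "population": 100_000}}

rates = {}
for name, d in regions.items():
    rates[name] = d["incidents"] / d["population"] * 100_000  # per 100k residents
    print(f"Region {name}: {d['incidents']} raw incidents, "
          f"{rates[name]:.0f} per 100k")
```

Region A has triple the raw count, yet region B’s per-capita rate is more than three times higher, which is exactly the tap dance to watch for.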
UPDATE II: In regard to the whole gas debacle, check out this very good thread that takes it completely apart. It is very much worth the read. Also, if you think they really have stopped the effort to ban gas, please think again.
I’m a retired patent attorney, and your article has certainly struck a chord with me. I would also add my two cents on conclusions. In my experience, a lot of the scientists I have worked with tended to use multivariate statistics. Unfortunately, or fortunately, depending on your perspective, examiners in the US Patent Office like to see simple experiments where one variable, the key one in a patent, is isolated in order to prove the point of the invention. It took me quite a while to understand some of these statistical models and, importantly, their underlying assumptions, in order to figure out why the multivariate models did not show truth when a single variable was studied. It took me a little longer to understand that scientists have a strong desire to show their work is valid, a desire that can sometimes be undermined by simple experiments. My take: true scientists aren’t afraid to answer questions about their work and to do additional work to answer those questions.
Excellent point(s) and thank you for bringing them up! The input is very much appreciated.
I took a statistics class with the famous duo Box and Hunter, and on the first day they presented a graph with the number of storks in a pond on the x-axis vs. the number of babies born in the nearby hospital on the y-axis, and got a straight line.
It was a demonstration of a “lurking variable,” and the same logic, the instructors pointed out, prevents one from concluding that cigarettes cause cancer. There could be a lurking variable, “genetic factor X,” where a person highly susceptible to cancer is also strongly drawn to smoking.
But the courts today don’t care, and the recent Roundup settlement is a case in point.
Dennis – agreed. Not only is correlation not causation, there are times it’s not even correlation.
I quote George Box a lot, to the discomfort of those around me.
Excellent article. I’ve written scientific papers for decades and your analytic approach is the right one to determine motive, method, analysis, conclusion, and future work.
Thank you!