Issues with Data in Sports Nutrition


By: Brad Dieter, PhD

Read Time: 10 minutes if you are focused; 15 if you are eating lunch

TL;DR: Applying molecular biology to sports nutrition and exercise physiology is hard. We need to be really careful about how we draw conclusions from data.

What a lot of people reading this may not know is I spend a large majority of my time in a lab. My training is in molecular exercise physiology and biostatistics, and I am currently doing a research fellowship in molecular mechanisms of chronic disease, specifically diabetes and end organ failure in the kidney.

Essentially, I spend a ton of time looking at data from molecular biology assays* and trying to interpret them into something meaningful at the level of the human body.

Complexity

To set the stage a bit: an assay is an analytic procedure, used in laboratory medicine, pharmacology, environmental biology, and molecular biology, for qualitatively assessing or quantitatively measuring the presence, amount, or functional activity of a target entity (the analyte). Essentially, it means taking a sample, running a test, and getting some arbitrary piece of data (usually numbers in an Excel sheet). Now, despite my almost decade of training, let me be the first to say that I spend a majority of my time confused and puzzled.

Frankly, most of the time I wrestle with translating assay results into physiological meaning because of the complexity of the assays themselves. For example, attempting to quantify cell-signaling mechanisms from Western blots done on tissue (e.g., mTOR signaling) is fraught with issues.

Let us imagine that you read a hypothetical study about whey protein inducing muscle protein synthesis. In the paper, consuming 10 grams of whey protein results in a 1.2-fold increase in phosphorylation of P70S6K. This leads to the inevitable conclusion that 10 grams of whey protein can activate muscle protein synthesis signaling in adult men. This is where we need to step back and ask a critically important question: is this conclusion accurate, and if not, what does it really mean?
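To make the "1.2-fold increase" concrete, here is a minimal sketch of how a fold change is typically derived from blot densitometry: the phospho-protein signal is normalized to total protein, then compared to baseline. All band intensities below are hypothetical illustrative numbers, not values from any study.

```python
def fold_change(phospho, total, phospho_base, total_base):
    """Normalize the phospho-signal to total protein, then to baseline."""
    return (phospho / total) / (phospho_base / total_base)

# Hypothetical band intensities (arbitrary densitometry units)
baseline = {"phospho": 1000.0, "total": 2000.0}
post_whey = {"phospho": 1350.0, "total": 2250.0}

fc = fold_change(post_whey["phospho"], post_whey["total"],
                 baseline["phospho"], baseline["total"])
print(f"p-P70S6K fold change: {fc:.2f}")  # 1.20 with these numbers
```

Note that the reported number depends entirely on what you normalize to; normalizing to a loading control instead of total protein can give a different fold change from the same bands.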

First, there are the nuances of the assay itself: tissue prep (buffers used, homogenization time, fractionation of cellular compartments, etc.), protein quantification (DC assays versus Bradford assays or spectrophotometry), sample prep (dilution, sample buffers used, boiling time), SDS-PAGE electrophoresis, the actual transfer to the membrane, antibody conjugation, blocking buffers, wash buffers, exposure time, and so on.

[Figure: example Western blot]

Each step along the way introduces noise and can drastically alter how meaningful the data are.

The Devil is in the Details

This brings us directly to the next piece, the rigors of the data analysis. Was the exposure on the linear part of the curve? Is there saturation? Were the correct loading controls used? Is the full blot presented?

In my lab, I often run Western blots and analyze the data at different points on the exposure curve, and I get drastically different absolute and relative results. Here is an example we ran in our lab to demonstrate this. We took the same amount of protein and exposed it for different lengths of time. We then examined the exposure where ALL the data fell in the correct range (the lighter bands) and the exposure where the data did not all fall in the correct range (the darker bands).

[Figure: the same samples at a short exposure (lighter bands) and a long, saturating exposure (darker bands)]

You can see that the darker exposure, which visually looks like better data, actually compresses the apparent differences between conditions, especially the increase from condition 2 to condition 3.
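The compression at long exposures can be sketched with a toy saturation model. This is an assumption for illustration only (a simple exponential detector response, not real densitometry data), but it shows the mechanism: once bands approach the detector's maximum, true fold changes shrink toward 1.

```python
import math

def measured(signal, exposure, i_max=255.0, k=0.5):
    """Toy detector response: roughly linear at low exposure,
    saturating toward i_max as exposure * signal grows."""
    return i_max * (1 - math.exp(-k * exposure * signal))

true_signal = [1.0, 2.0, 4.0]  # hypothetical conditions 1-3, true 2-fold steps

for exposure in (0.2, 3.0):  # short (linear range) vs long (saturated)
    bands = [measured(s, exposure) for s in true_signal]
    folds = [b / bands[0] for b in bands]
    print(f"exposure={exposure}: fold changes vs condition 1 = "
          + ", ".join(f"{f:.2f}" for f in folds))
```

With these made-up parameters, the short exposure recovers fold changes close to the true 2x and 4x, while the long exposure reports much smaller ratios, mirroring what the darker blot above does to the condition 2 to condition 3 increase.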

[Figure: quantification of the two exposures]

Another issue with this assay is that unless the whole blot is shown, it is quite possible that the bands shown are non-specific or the wrong protein. (I spent six months during my PhD unraveling a wrong-molecular-weight mystery on a Western blot. I ended up concluding that more than 20 published papers had reported data on a band of the wrong size. The issue came down to an antibody problem from a specific company, and research labs not calling them to verify the weird migration down the SDS-PAGE gel.)
This is why the devil is in the details...

Nobody Knows the Trouble I’ve Seen

This is what greatly troubles me. Currently, there is a lot of speculating, interpreting, and making health claims from data that are 1) from an isolated study, 2) messy and hard to interpret, and 3) possibly misleading.

Now don’t get me wrong, there are a lot of people on the internet way, way, way smarter than me, but their training does not adequately prepare them to understand the nuances of the data they are reading and interpreting. It has a lot more to do with playing in the proverbial dirt and the hands-on experience of lab work than it does with being smart. That experiential knowledge is essential when deriving meaning from these data. I often struggle with this issue: how can we better inform the non-laboratory scientist of these issues?

Now we get to a much more “meta” issue. How many of these data and studies have any real meaning?

There is a big difference between statistical significance in a biochemical assay and physiological meaningfulness in a human body. For example, did that “supposed” activation of the mTOR pathway result in more muscle? Better health? Did it combat sarcopenia?

I think this is something that is greatly underappreciated and under-discussed.

Let me offer the figures below as a perfect example of where statistical significance and physiological meaningfulness diverge. Which diet would you choose to follow: the one with the statistical significance, or the one with the apparently larger results? (Robert Frost would suggest choosing the latter.)
[Figure: two hypothetical diets, one with a statistically significant but small effect, one with a larger but non-significant effect]
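The significance-versus-meaningfulness split falls straight out of the arithmetic of a t-test: the statistic scales with sample size, so a tiny effect in a huge study can be "significant" while a large effect in a small pilot is not. The numbers below are hypothetical summary statistics, invented purely to illustrate that point.

```python
import math

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Welch's t statistic computed from summary statistics."""
    return (mean1 - mean2) / math.sqrt(sd1**2 / n1 + sd2**2 / n2)

# Diet A: trivial 0.3 kg extra weight loss, but 200 people per group
t_a = welch_t(0.3, 1.0, 200, 0.0, 1.0, 200)
# Diet B: large 4 kg extra weight loss, but only 5 people per group
t_b = welch_t(4.0, 5.0, 5, 0.0, 5.0, 5)

print(f"Diet A: t = {t_a:.2f} (clears the ~1.97 cutoff at this sample size)")
print(f"Diet B: t = {t_b:.2f} (below the ~2.31 cutoff for 5 per group)")
```

Diet A's 0.3 kg difference reaches p < 0.05 while Diet B's 4 kg difference does not, yet only one of those effects would matter to a person choosing a diet.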

Hubris and Humility

Let me also offer something for you to chew on. The more complicated statistics get, the less anyone understands them, even the best in the world. For example, I exchanged emails last week about a fairly common but complex analytic method with a top senior scientist who received his PhD in biostatistics and epidemiology and has conducted numerous large studies. He said, “We haven’t used that in our own work in the past because frankly no one on our team here really knows how to accurately interpret the results into anything meaningful”.

If that doesn’t give you pause, I am not sure what would.