Evidence-based nutrition: is the required proof of efficacy for nutrients too high?

…evidence-based nutrition (EBN), has seemingly swallowed EBM [evidence-based medicine] whole without either asking how well it might fit, or adapting it to the unique features of the nutrition context.

This passage is from a new paper by Robert Heaney, Connie Weaver, and Jeffrey Blumberg titled “EBN (Evidence-Based Nutrition) Ver. 2.0” (1).

This shorter paper more concisely summarizes a paper published late last year, “Evidence-based criteria in the nutritional context” (2), which grew out of a 2008 workshop attended by Blumberg, Heaney, and Weaver along with contributors Michael Huncharek, Theresa Scholl, Meir Stampfer, Reinhold Vieth, and Steven Zeisel.

EBN shapes nutrient recommendation policies at the population level, so it affects all of us in one way or another.  Below I summarize these important papers, which raise this question:

Is the same high level of certainty required regarding the nutrient intake recommendations to prevent disease as is needed for drugs used to treat disease? (2)

What are EBM and EBN?

EBM is a hierarchy of evidence levels developed to standardize the interpretation of medical treatments.  RCTs (randomized, placebo-controlled trials) are rightly considered the highest-quality design because they support strong causal inference.  Nonrandomized studies, observational research, case studies, and expert opinion follow in evidence rank.  EBM draws conclusions about treatments based on the level and amount of research.

As Blumberg et al. describe (2), these rules were adopted by the nutritional science field in the 1990s with the 1997 Dietary Reference Intakes and the 2005 Dietary Guidelines for Americans.  The FDA published EBN criteria for nutrient claims here, and the American Dietetic Association has criteria here.  So how does EBM differ from EBN?

(i) medical interventions are designed to cure a disease not produced by their absence, while nutrients prevent dysfunction that would result from their inadequate intake; (ii) it is usually not plausible to summon clinical equipoise for basic nutrient effects, thus creating ethical impediments to many trials; (iii) drug effects are generally intended to be large and with limited scope of action, while nutrient effects are typically polyvalent in scope and, in effect size, are typically within the “noise” range of biological variability; (iv) drug effects tend to be monotonic, with response varying in proportion to dose, while nutrient effects are often of a sigmoid character, with useful response occurring only across a portion of the intake range; (v) drug effects can be tested against a nonexposed (placebo) contrast group, whereas it is impossible and/or unethical to attempt a zero intake group for nutrients; and (vi) therapeutic drugs are intended to be efficacious within a relatively short term while the impact of nutrients on the reduction of risk of chronic disease may require decades to demonstrate – a difference with significant implications for the feasibility of conducting pertinent RCTs.

In contrast to drug interventions, where the null hypothesis is that there is no health benefit, essential nutrients must have some health benefit; otherwise they would not be defined as such.  Thus the questions instead must be:

(i) What is the full spectrum of dysfunctions or diseases produced by low intake of a nutrient? and (ii) How high an intake is required to ensure optimal physiological function or reduced risk for disease across all body systems and endpoints?

The effects of nutrients on diseases can take decades to manifest as measurable outcomes, which vary by individual, organ system, etc.  The Recommended Dietary Allowances (RDAs) generally focus on “single organ system endpoints,” often defining amounts to prevent the disease for which consensus is strongest (they term this the “index” disease).  Vieth and Heaney are prominent vitamin D researchers, so it is fitting that they provide the example that the amount of vitamin D needed to reduce the risk of falls and fractures (nonindex) is greater than that needed to prevent rickets or osteomalacia (index diseases).

A very recent example is the Institute of Medicine’s review of calcium and vitamin D, which concluded that nobody needs more than 600 IU of vitamin D unless they are over 70 years old.  The report dismissed the research linking vitamin D and calcium to disease as not conclusive enough, except for bone health.  Vieth, William Grant, and John Cannell have publicly responded in protest so far; Grant writing that over 100 diseases have some research link to vitamin D, and Cannell writing that input from Heaney (and 13 other vitamin D researchers) was not considered.  Just today (March 29) a debate was held at the Harvard School of Public Health between Walter Willett, Bess Dawson-Hughes, JoAnn Manson, and Patsy Brannon (the latter two were on the IOM committee that analyzed the vitamin D research).  A video is posted here.  Dr. Manson made the point a couple of times that randomized controlled trials are simply not yet available for many of the vitamin D/disease links that observational studies show.  Dr. Willett questioned whether such ultra-conservative recommendations should be made, or whether we should go with the best evidence we have now instead of demanding RCTs (in fact, both Willett and Dr. Dawson-Hughes questioned how the IOM arrived at some of its efficacy and risk conclusions).  The IOM recommendations for vitamin D are a prime example of taking EBN too far.  Of course many of the vitamin D links require more study, but we shouldn’t ignore them and issue recommendations to the public that are far too low until the evidence accumulates to some semi-arbitrary level of confidence.

Just because RCTs on nonindex diseases do not find significant differences in disease outcomes between groups does not mean those effects don’t exist.  RCTs, for example, may be likely to use control groups whose intakes are too close to the typical intake (since true placebo/no-intake groups are not possible), turning the study into a test of “is more better?”  They illustrate this graphically (1).
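To make that concrete, here is a minimal simulation of my own (not from the paper) of an RCT run against a hypothetical sigmoid dose-response curve, as in their point (iv) above; all intakes, units, and noise levels are invented for illustration:

```python
# Minimal sketch: when the control group's habitual intake already sits
# on the plateau of a sigmoid dose-response curve, a trial of "more"
# vs. "usual" intake has little power, even though the nutrient matters.
# All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def response(intake):
    """Hypothetical sigmoid dose-response with its midpoint at 400 units."""
    return 1.0 / (1.0 + np.exp(-(intake - 400.0) / 80.0))

def trial_significant(control_intake, supplement, n=200, noise_sd=0.3):
    """Simulate one two-arm trial; True if the group difference has |z| > 1.96."""
    control = response(control_intake) + rng.normal(0, noise_sd, n)
    treated = response(control_intake + supplement) + rng.normal(0, noise_sd, n)
    diff = treated.mean() - control.mean()
    se = np.sqrt(control.var(ddof=1) / n + treated.var(ddof=1) / n)
    return abs(diff / se) > 1.96

def power(control_intake, supplement, reps=1000):
    return np.mean([trial_significant(control_intake, supplement) for _ in range(reps)])

# Deficient controls: the trial spans the steep part of the curve.
print(power(control_intake=300, supplement=200))  # ~1.0
# Replete controls: both arms sit on the plateau ("is more better?").
print(power(control_intake=700, supplement=200))  # ~0.1, despite a real requirement
```

The same supplement dose gives near-certain detection in one population and near-chance detection in the other; nothing about the nutrient changed, only where the control group sits on the curve.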

It is not ethical to use the RCT design to test whether a low amount of a nutrient causes disease, because some of the subjects would inevitably develop the disease.  This is where observational studies greatly help: they can assess disease endpoints (without intervening and causing harm) across a spectrum of subjects consuming low to high amounts of a nutrient.  The authors link to Hill’s seminal 1965 paper, which gives general guidelines for assessing causation from observational research.

To return to their question of whether nutrient intake recommendations to prevent disease require the same high certainty as drugs used to treat disease, Blumberg et al. suggest that nutrient recommendations can be made at a lower level of certainty than is demanded for drugs (and nutrients with a high benefit:risk ratio may require less certainty of efficacy).  Unlike drugs, for which irrefutable proof of efficacy should be demanded, decisions on nutrient recommendations should instead be based on whether the evidence shows probable harm from an inadequate (or excessive) intake.  They show this graphically (2).
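One way to see their argument is through a break-even certainty: the probability that a benefit is real at which recommending becomes worthwhile falls as the benefit:risk ratio rises.  This is my own toy calculation, not the authors’, and all values are hypothetical:

```python
# Toy expected-value framing: recommend a higher intake when
#   P(effect is real) * benefit - risk >= 0
# so the certainty required before acting is just risk / benefit.
# Benefit and risk are in arbitrary, hypothetical units.

def certainty_needed(benefit: float, risk: float) -> float:
    """Smallest probability of a real effect at which recommending breaks even."""
    return risk / benefit

# Drug-like profile: large intended effect, but meaningful potential harms.
print(certainty_needed(benefit=10.0, risk=5.0))   # 0.5 -> demand strong proof
# Nutrient-like profile: modest benefit, minimal harm at sensible intakes.
print(certainty_needed(benefit=10.0, risk=0.5))   # 0.05 -> act on weaker evidence
```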

They note a lack of standardization of the factors that influence the certainty of a recommendation from RCTs, cohort studies, or case-control studies, and list some of them (2).

Apart from the benefit:risk ratio, other factors they list as influencing confidence include the consequences of type II error (relationships that are real but not yet conclusively proven), deployment cost, opportunity cost, and the multiplicity of lines of supporting evidence.

Within RCTs, the interactions of nutrients must also be considered with regard to certainty.  The authors give several examples in the Supporting Information, like calcium and vitamin D on bone, or interactions between B vitamins and the endogenous antioxidant network.  If RCTs are designed without taking such existing knowledge into account, they may produce false results.
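As a made-up numerical illustration of how an ignored interaction can hide a real effect (the vitamin D/calcium pairing and all numbers here are hypothetical), these are expected z-statistics for a trial analyzed with and without regard to co-nutrient status, computed analytically:

```python
# Hypothetical scenario: the nutrient only benefits subjects whose
# co-nutrient (e.g., calcium) intake is adequate. Pooling everyone
# dilutes the effect; analyzing the replete subgroup reveals it.
import math

effect = 0.5          # benefit, in SD units, within the co-nutrient-adequate subgroup
frac_adequate = 0.3   # fraction of enrolled subjects who are co-nutrient-adequate
n_per_arm = 200

def expected_z(effect_size, n_per_arm):
    """Expected z-statistic for a two-arm comparison of means (unit SD)."""
    return effect_size / math.sqrt(2 / n_per_arm)

# Pooled analysis: the average effect shrinks to effect * frac_adequate.
print(expected_z(effect * frac_adequate, n_per_arm))   # ~1.5, below 1.96
# Subgroup analysis on the adequate subjects only (fewer per arm):
print(expected_z(effect, n_per_arm * frac_adequate))   # ~2.7, detectable
```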

I have to interject my own slight skepticism here, focusing on one nutrient that Heaney, Weaver, and Blumberg bring up in their newest paper (1).  They note a 24-year delay in mandatory folate fortification caused by demanding too high a level of confidence; acting sooner might have spared at least 6,000 infants from neural tube defects.  But they do not mention the growing concern that, with the reduction of this index disease, secondary consequences are arising: the form used for fortification increases blood concentrations of unmetabolized folic acid and natural folates, and some research links this with certain cancers and cognitive impairment.  What may have been a scientific triumph in the reduction of NTDs may, with a lack of foresight, have unintentionally led to an increase in nonindex diseases.  What if many times more people will now develop diseases than the number of NTD cases prevented?  Maybe thousands of cancer cases were prevented by the extended delay prior to fortification.  Although it seems logical that nutrients should not require the same level of confidence as drugs, this may be difficult to generalize to each nutrient.  Even though the hypothetical graphics above incorporate a benefit:risk ratio, will we be able to estimate it accurately from current data?  Perhaps the folate example can be considered in the context of new recommendations, but wouldn’t it then increase conservatism among researchers and prompt calls for a higher evidence level?

In the Supporting Information, the authors argue that although nutrient toxicity is certainly a concern (they use the example that mercury may accompany omega-3 sources), it generally isn’t a concern to the same degree as with drugs, which are usually more targeted and potent in their effects.  By their model, this greater safety net would allow more aggressive recommendations based on lesser certainty.  So what should now be done about folate?  If lesser evidence is needed to make a nutrient recommendation that may benefit health, the same logic should run the other way: a reasonable body of research exists on the concerns about folate fortification, so we should be more proactive about it.  Weighing the systemic effects of each nutrient, singly and together in different contexts (diseases, genetic interactions, etc.), is invariably difficult, though I think that with constant improvement in research design, use of technology, and building on existing knowledge, fewer cases of concern will surprise us (like the vitamin D deficiency epidemic or folic acid and NTDs; Willett gives the example of how long it took us to discover the negative health effects of partial hydrogenation: almost 100 years!  But research moves much more quickly now).  Those examples, though, are not what future research will bring us; it is the subtle effects of nutrients on our chronic health status that are still largely ill understood, and the mass micro-measurements that nutrigenomics brings to the table.  How should such data be gathered and interpreted if we demand that each subtlety be held to RCTs, which generally measure only group averages?  Individual differences will require different design considerations as well.  Keith Grimaldi gives a nice example of this (and of other concerns these papers raise) in his post “Is it the vitamins or the trials?”  Also see “So vitamins fail again, this time it’s folate and B12. Really?“.

Another concern: would lowering the confidence level required for nutrient recommendations worsen the already problematic abuse of health claims by food and supplement companies?  This was not brought up in the articles, but it seems inevitable.  Food companies already piggyback on recommendations by regulatory bodies (think of the low-fat phase, or advertising specific vitamins), so loosened confidence constraints might be abused further; this seems to fall squarely within the realm of public health.

Blumberg et al. have considered future research needs.  They suggest that besides measurements like drug and alcohol use and physical activity level, which are commonly recorded as covariates in studies, it would be ideal to collect DNA (especially for SNP analysis, which demonstrably influences many study variables), fasting and postprandial serum/plasma (for metabolomics and proteomics analysis), urine, and relevant tissue samples, and to archive primary data so that study results can be re-analyzed once new biological relationships are discovered and newer technology is available.  Intake biomarkers should be gathered to assess compliance and inter-individual variation in bioavailability, and different forms of nutrients should be considered in reviews and meta-analyses.  They note that including more than one endpoint (in a “global index”) would improve the ability of studies to detect whether a nutrient has subtle effects that may not be statistically significant individually (a sketch follows below).  With regard to meta-analyses, they suggest that meta-analyses restricted to RCTs miss “differing design features [that] can provide insight into variability in the physiologic reasons for heterogeneity.”  Considering the covariates is crucial to using meta-analyses to examine not only the average effect of a nutrient, but how much that effect can vary across studies.
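As a toy illustration of the “global index” idea (my own sketch; the authors do not prescribe a specific pooling method), Stouffer’s combined z-score is one simple way to pool endpoints.  With the hypothetical numbers below, four endpoints that are each non-significant on their own are jointly significant:

```python
# Stouffer's method: combined z = sum(z_i) / sqrt(k).
# Hypothetical per-endpoint z-scores, all pointing the same direction
# but none reaching 1.96 (two-sided p < 0.05) individually.
import math

endpoint_z = [1.2, 1.5, 0.9, 1.4]

combined_z = sum(endpoint_z) / math.sqrt(len(endpoint_z))
print(combined_z)  # 2.5 -> the pooled signal crosses the usual threshold
```

Such considerations lead Heaney et al. (1) to write: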

It is likely that most reviews of nutrients will come to erroneous conclusions if they are not performed by individuals who are content experts in the relevant biology.

These arguments make sense and raise important problems that many scientists may not currently consider.  The risk of making nutrient recommendations on less-than-conclusive proof must be weighed against the risk of not making recommendations when a relationship is real but conclusive proof is still lacking (which seems to be the case for vitamin D right now).  Their concluding paragraph puts it perfectly:

To sum up, it is both appropriate and necessary to make recommendations in the absence of definitive proof, particularly when it is recognized that not changing an existing recommendation is itself a recommendation. That fact cannot be sidestepped. With nutrients, the question is always not “whether” but “how much?”

If there is anything I hope readers take from this post, it is that you have to carefully consider not only research methodologies but also how they are interpreted in the bigger picture, based on predefined levels of confidence.  Regulatory bodies can set evidence-based criteria however they choose, leading to differing results between them.  Is it any wonder there are so many different recommendations on nutrients and foods, even before considering so many other influences?  Consensus is extremely difficult to reach (see the vitamin D debate video linked in this post for a perfect current example).

References

1. Heaney RP, Weaver CM, & Blumberg J (2011). EBN (Evidence-Based Nutrition) Ver. 2.0. Nutrition Today. DOI: 10.1097/NT.0b013e3182076fdf

2. Blumberg J, Heaney RP, Huncharek M, Scholl T, Stampfer M, Vieth R, Weaver CM, & Zeisel SH (2010). Evidence-based criteria in the nutritional context. Nutrition Reviews, 68(8), 478–84. PMID: 20646225

  • keith grimaldi (http://twitter.com/eurogene)

    Excellent review Colby! Thanks for putting all of this information together in one place. It will be a useful link to give to the knee-jerk “you need to do a proper clinical trial” responses. As you say, nutrition trials are never agent vs. placebo, never the nice all-or-nothing that can be achieved with a new drug that is not available elsewhere. Adherence, or the lack of it, is one big problem.

    Another problem that you touch on is that while we can wait for a new drug to be thoroughly tested, we need to eat. Recently this has become an issue for infants (or rather their mothers): exclusive breastfeeding for at least 6 months is recommended to all mothers by the WHO, but there is good evidence that those at risk of developing celiac disease should have gluten introduced gradually between 4 and 6 months, and this is the official advice of ESPGHAN (the European pediatric gastroenterologists). Some clinical trials are in progress; meanwhile babies need feeding, and the trials will take years and still may not give definitive answers… maybe the answer is that genetics is useful, and 6 months of exclusive breastfeeding is not best for all. I think that applying EBN we would apply the ESPGHAN advice to those with the HLA genes involved in CD and the WHO advice to the rest (see: http://bit.ly/e3Kr4X, http://1.usa.gov/f1XrUu, http://bit.ly/ha6H4b and http://bit.ly/9xo8lt).

    PS I’m also jealous, I know how much work went into your post and I am very behind in keeping my blog up to date!

  • Jeremy Cherfas (http://jeremy.cherfas.myopenid.com/)

    Interesting post, especially if you are interested in comparing “nutrients” with “drugs”. If you are interested in food and diet, though, you can begin to see why it is so very difficult to get the nutritional-medical establishment even to consider the possibility that nutrient deficiencies may be more amenable to changes in diet, and especially to increases in dietary diversity, than to supplements, fortification and, lately, biofortification.

    I’d be interested in your thoughts on this.

