Why?
Because their findings depend on the criteria used to determine which studies were included. When considering a meta-analysis on the impact of low-carb diets (LCDs), variables that would affect the outcome might include the definition of an LCD (i.e., how many grams of carbohydrates per day constitutes a low-carb diet), the duration of the diet, the number of databases searched, how risk of bias was assessed and applied, and how the causes of heterogeneity were investigated, to name just a few from the more complete (AMSTAR) list seen here:
And in fact, a study analyzing the quality of meta-analyses of low-carb diets was recently published in Obesity Reviews, and its findings fall in line with my freely admitted confirmation bias: that low-carb diets are as good or as bad as any other diet, and that at the end of the day, what matters more than the diet prescribed is diet adherence.
The authors found that,
"critically low quality (low-carb diet/LCD) meta-analyses showed superiority of LCD for weight loss while moderate quality showed inconsistent results, and high quality showed little or no difference"Of course all of the studies included looked at overall losses between different prescribed diets, but in my opinion, that may not be the best way to evaluate them.
Because as the DIETFITS study so elegantly illustrated, there are people who do incredibly well with low-carb or low-fat diets, while others do incredibly poorly, all within the same study population.
I would argue further that this is true for any diet.
All this to say, be wary both of any study or meta-analysis that crowns one diet better than another, and of anyone suggesting that a particular diet isn't worth trying. One person's best diet is another person's worst.
(Photo by Jenna Hamra from Pexels)