The idea of creating expert panels has a certain logic to it. After all, who is better qualified to determine best medical practices than medical experts? The medical field itself has been edging toward a similar approach in recent years with "evidence-based medicine," an approach that assumes it is possible to determine what works and what doesn't by reviewing published medical literature. If the medical literature is as bad as he says it is, shouldn't the first-cut prescription be to fix the publishing process in medicine? On the whole, I'm skeptical. But I do like Hanson's betting markets on scientific ideas.
Evidence-based medicine has some value, but it can provide misleading information. Determining which studies to review, for example, can introduce biases. Whether investigators accept published data at face value or repeat primary data analyses also matters. If the data in a published study were poorly analyzed or, for argument's sake, completely invented, relying on it can lead to faulty conclusions. It's an unfortunate reality, but our medical literature is significantly contaminated by poorly conducted studies, inappropriate statistical methodologies, and sometimes scientific fraud.
Studies published in the medical literature are mostly produced by academics who face an imperative to publish or watch their careers perish. These academics aren't basing their careers on their clinical skills and experiences. Paradoxically, if we allow the academic literature to set guidelines for accepted practices, we are allowing those who are often academics first and clinicians second to determine what clinical care is appropriate.
Consciously or not, those who provide the peer review for medical journals are influenced by whether the work they are reviewing will impact their standing in the medical community. This is a dilemma. The experts who serve as reviewers compete with the work they are reviewing. Leaders in every community, therefore, exert disproportionate influence on what gets published. We expect reviewers to be objective and free of conflicts, but in truth, only rarely is that the case.
On the other hand, today I also read this about Stanton Glantz's metastudies on smoking and heart attacks.
Sure enough, the Institute of Medicine's study did not include the hospital data from the UK, nor did it include a study of the entire US which showed no association between the introduction of smoking bans and declines in heart attack rates. You can get any result you like out of a metastudy if you're careful about which studies to include (among a host of other decisions about weighting data from various studies); that's why you really have to trust the guy who wrote the study, or, better, have several metastudies that all point in the same direction.
The absence of this data from the IoM's report is troubling, since Dr Siegel had made the committee aware of this contradictory evidence.
The fact that this data was omitted, and that Stanton Glantz was given another starring role in the creation of the IoM's report, should be of concern to anyone who expects impartial research from such organisations.
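To see how much study selection can move a pooled result, here is a minimal sketch of a fixed-effect, inverse-variance-weighted meta-analysis. All numbers are hypothetical and purely illustrative; they are not taken from any of the studies discussed above. The point is only the mechanism: dropping one large null study can turn a near-zero pooled estimate into a sizable apparent effect.

```python
def pooled_effect(studies):
    """Inverse-variance weighted mean of (effect, variance) pairs."""
    weights = [1.0 / var for _, var in studies]
    total = sum(w * eff for (eff, _), w in zip(studies, weights))
    return total / sum(weights)

# Hypothetical relative changes in heart-attack rates after a smoking ban
# (negative = decline); smaller variance means a larger, more precise study.
small_studies = [(-0.20, 0.010), (-0.15, 0.008), (-0.25, 0.012)]
large_null_study = (0.00, 0.001)  # one big study showing no association

with_all = pooled_effect(small_studies + [large_null_study])
selected = pooled_effect(small_studies)  # the null study quietly omitted

print(f"pooled, all studies included: {with_all:+.3f}")
print(f"pooled, null study excluded:  {selected:+.3f}")
```

With the large null study included, its heavy weight pulls the pooled estimate close to zero; exclude it and the apparent decline roughly quadruples, even though not a single data point was altered.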
HT on WSJ piece: Book of Joe