October 17, 1990

Simon Dack, M.D. (as well as individual letters to all members of the JACC editorial board)

Dear Dr. Dack:

The Journal of the American College of Cardiology (JACC) continues to be one of the most clinically relevant, high-caliber journals in the field of cardiology. However, a recent article published in JACC raises the concern that additional measures may be helpful to maintain the goal of publishing high-quality articles with valid conclusions.

In the September JACC, a study by Geltman et al. (1) evaluated myocardial perfusion in patients with angina who had angiographically normal coronary arteries. The fundamental framework from which they performed their statistical analysis, in my opinion, has absolutely no validity. Although I realize this sounds presumptuous, I ask you to show this article to any statistician in an attempt to find even one who says the method of analysis used in this article has even some validity. (Please see the enclosed "Analysis of Statistical Methods.")

What can be done to provide a more uniform level of quality in the articles published in JACC? It may be useful to consider the following:

1) Routinely have a statistician conduct a prereview of articles submitted to JACC, consisting of a cursory evaluation of the basic statistical framework of each article and of any major variances from standard statistical methods. This brief prereview could then be forwarded with the submitted article to the clinical reviewers, who retain final responsibility for reviewing it. The clinical reviewer would not be bound in any way by this preliminary opinion on the validity of the statistical analysis. Even assuming that a given article is accepted, this step may allow the clinical reviewer to suggest ways to optimize the validity of the analysis conducted on the data presented. If the volume of work would be too great to prescreen all submitted articles initially, a more selective approach can be considered.
When an article has been reviewed by the clinical reviewer and publication appears likely, an expert in statistics can review the paper's statistical methods at that time.

2) Consider asking each clinical reviewer to specifically assess whether all the conclusions the authors draw in an article are direct and logical extensions of the data presented. (Unfortunately, as I am sure you have noticed, it is all too frequent that data are presented in a statistically valid way, but the conclusions reached in the article are an unreasonable extension beyond what a more conservative assessment of the data's implications would support.) Though the problems with the Geltman article were not primarily in this particular area, this problem is not uncommon, given the inherent pressure on authors to present their own data in the most flattering light to increase its potential significance and viability for publication.

Thank you for your time and consideration in this matter. I hope that you can consider the suggestions made in this letter without being initially put off by any tone of pretentiousness. That certainly was not my intent.

Sincerely yours,

Eric F. Roehm, M.D., F.A.C.C.

cc: Anthony DeMaria, M.D.; Suzanne Knoebel, M.D.; Dan McNamara, M.D.; Charles Fisch, M.D.; Richard Cannon, M.D. (JACC editors)