LETTERS TO THE EDITOR
deed be concealing or distorting actual differences in interests which were resolved prior to the vote. Rather than examining extreme or deviant issues, whose value may be "diluted" in averages as Drs. Fontana and Wunderlich suggest, a more interesting question to pursue would be a more refined description of just who HSA interest groups are. The empirically defined groups of "consumers" and "providers" are too broad. Scratch beneath the surface of many consumers and one will often find a shade of green not unlike that of a dollar bill. I would propose, as I did in my paper, that if enough data were collected, multi-dimensional scaling techniques could be used to describe the separate political interest groups composed of both consumers and providers, thereby expanding our understanding one step further.

Randolph M. Grossman
Consultant
Chi Systems, Inc.
330 E. Liberty St., Suite 4A
Ann Arbor, MI 48104
On Studying Data Reliability

Dr. Gittelsohn's article¹ (p. 682) states that mortality data are coded by ICDA-8, while hospital data are coded by the CPHA version, H-ICDA. This was indeed true in Vermont during the study years; however, this unique situation applies only to hospitals in Vermont and Rhode Island and introduces a selective bias, making it incorrect to generalize the results to other, more typical states. Only hospitals subscribing to PAS (a computerized discharge abstract processing system), sponsored by the Commission on Professional and Hospital Activities in Ann Arbor, Michigan, use the H-ICDA version for diagnostic coding. PAS subscribers account for a minority of total hospital discharges in the US, primarily smaller hospitals. In 1973 only approximately 1,800 of over 7,900 American hospitals subscribed to PAS; today the only states where all hospitals subscribe to PAS are Vermont and Rhode Island.
The majority of larger hospitals in the US, representing the bulk of total hospital discharges, do not subscribe to PAS, and most of these code discharge diagnoses using ICDA-8; most states also code mortality data in ICDA-8. A study of PAS hospitals in a PAS state is not randomized, and certainly not representative of US hospitals in general. This purports to be a study comparing cause of death and diagnoses in American hospitals; yet almost one-third (12/37) of the citations are from foreign journals. Was any professional health records expertise utilized? It seems inappropriate to study hospital records without involving the practitioners who are professionally trained and credentialed, who best know how to read, interpret, and maintain medical records, and who are responsible for diagnostic coding and indexing. There is no mention of involving Registered Record Administrators (RRA) or Accredited Record Technicians (ART), no references from medical record science texts or standard references, none from Medical Record News or the Journal of the American Medical Record Association, and apparently no consultation with the American Medical Record Association. It would enhance the authors' credibility when studying data reliability to consult the professionals entrusted with ensuring that reliability. Who else knows the problems better?

Rand Baird, BS, RRA, MPH
733 N. Kings Rd.
Los Angeles, CA 90069
REFERENCE
1. Gittelsohn AM and Senning J: Studies on the reliability of vital and health records: I. Comparison of cause of death and hospital record diagnosis. Am J Public Health 69:680-683, 1979.
Editor's Note: Mr. Baird's baccalaureate degree is in Medical Record Administration.
Authors' Rejoinder

Failure to mention MR News and JAMRA may have been an oversight, but we are not familiar with relevant articles in these publications. We should appreciate citations pertaining to the question of data reliability contained therein. The selection of references was based on all available English language citations relating to cause of death reliability, identified using computerized searches and standard bibliographic technique. Certainly RRAs and ARTs deserve recognition for the important work in which they are engaged. The omission was not intentional. Many individuals and groups have contributed to the development and maintenance of the hospital abstracting and vital record data systems under consideration. We herewith recognize all of them for their contributions.

The issues raised concerning H-ICDA, PAS, representativeness, randomization, and selection bias do not apply, for we are not attempting to describe the experience in all American, Canadian, English, and New Zealand hospitals. Rather, we are concerned with the correspondence between two statewide data sets in Vermont, where we have been working for a number of years. If a person's death is ascribed to an overwhelming disease such as lung cancer on the death certificate, then it is indeed remarkable that the hospital record of the terminal stay contains no reference to the condition, and vice versa. This means that there is an error in one record or both. The magnitude of this discrepancy diminishes the credibility of the two record systems. We feel that the reporting of such findings may have a salutary effect on the design and implementation of vital and health data quality control programs.

Alan M. Gittelsohn, PhD, et al.
Professor, Biostatistics
Johns Hopkins University
School of Hygiene and Public Health
Baltimore, MD 21205
Documenting Sentinel Health Events

Counting sentinel health events (the occurrence of an unnecessary disease, an unnecessary disability, or an unnecessary untimely death) has been proposed as a method for assessing the quality of medical care. In 1976,¹ 91 conditions in which unnecessary dis

AJPH November 1979, Vol. 69, No. 11