Health insurance companies, employing a two-tier system to pigeonhole doctors by cost, have been shown to misclassify those physicians 22% of the time (RAND Corp. study, New England Journal of Medicine).

“Consumers, physicians, and purchasers are all at risk of being misled by the results produced by these tools,” concluded the researchers who analyzed aggregated claims data for 2004 and 2005 from four Massachusetts insurance companies. Using “commercial software,” they looked at data from 12,789 physicians in 10 specialties and constructed “homogeneous episodes of care” to create cost profiles for healthcare episodes such as treatments for diabetes, heart attack or urinary tract infection.
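As a rough sketch of the profiling approach described above — the toy claims data, grouping rule, and field names here are our own assumptions for illustration, not the study’s actual “commercial software” method:

```python
from collections import defaultdict
from statistics import mean

# Toy claims records: (physician_id, episode_type, cost).
# Invented data for illustration; NOT the study's data.
claims = [
    ("dr_a", "diabetes", 900), ("dr_a", "diabetes", 1100),
    ("dr_b", "diabetes", 2000), ("dr_b", "uti", 400),
    ("dr_c", "uti", 300), ("dr_c", "heart_attack", 15000),
]

# Group claims into per-physician, per-episode-type buckets,
# loosely mimicking "homogeneous episodes of care"
episodes = defaultdict(list)
for physician, episode_type, cost in claims:
    episodes[(physician, episode_type)].append(cost)

# A physician's cost profile: mean cost per episode type
profile = {key: mean(costs) for key, costs in episodes.items()}
print(profile[("dr_a", "diabetes")])  # mean of 900 and 1100 -> 1000
```

A profile like this is then compared across physicians to rank them — which is where the quartile cutoff described below comes in.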

Physicians in the lowest 25% were defined as “lower cost”; in this binary analysis, all others were deemed “not lower cost.” Unfortunately, depending upon specialty, anywhere from 29-67% of physicians were called “lower cost” when they weren’t, while anywhere from 10-22% did not receive the “lower cost” badge of honor when they should have. The AMA cites this as proof that the health insurance industry’s cost-of-care rating systems have major flaws that must be resolved. We agree, to say the least.
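To make the two error rates concrete, here is a minimal sketch — with invented counts, not the study’s data — of how each rate is computed for a binary “lower cost” label:

```python
# Illustrative sketch of the two error rates in a binary "lower cost"
# classification. All counts below are hypothetical.

def misclassification_rates(labeled_lower, truly_lower):
    """Given the set of physician IDs labeled "lower cost" by a profiling
    tool and the set who truly are lower cost, return (a) the share of
    the labeled group that is wrongly labeled and (b) the share of the
    truly lower-cost group that the label missed."""
    false_positive_share = len(labeled_lower - truly_lower) / len(labeled_lower)
    missed_share = len(truly_lower - labeled_lower) / len(truly_lower)
    return false_positive_share, missed_share

# Hypothetical example: 10 physicians labeled lower cost, 4 of them wrongly;
# 8 truly lower-cost physicians, 2 of them missed by the label.
labeled = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
truly = {1, 2, 3, 4, 5, 6, 11, 12}

fp, missed = misclassification_rates(labeled, truly)
print(f"Labeled 'lower cost' in error: {fp:.0%}")    # 4 of 10 -> 40%
print(f"Truly lower cost but missed:   {missed:.0%}")  # 2 of 8 -> 25%
```

The study’s 29-67% and 10-22% figures correspond to these two rates, varying by specialty.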

“The RAND study shows that physician ratings conducted by insurers can be wrong up to two-thirds of the time for some groups of physicians,” said AMA President J. James Rohack. We note that such injustices are becoming all too common, as 70-75% of all malpractice suits are also wrongfully filed. Rohack went on to call for resolution, but declined to suggest exactly how that might be achieved.

What can be deduced from this? First and foremost, like any scientific method, classifications of this kind must be thoroughly researched and well thought out, to ensure that the results are accurate and that physicians are not unfairly maligned. This profiling clearly fails in most material respects. Some might say it is impossible to build a valid, unbiased rating system by which to compare health care providers, because there are so many variables. Even if the flawed results are put aside, it’s still apples and oranges. For the sake of argument, consider this: what is inexpensive in one area of New York City may be inordinately high in another, and considered obscene in a rural area in the Midwest. One doctor’s experience and skill may far exceed another’s. One facility may be a bare-minimum clinic, while another in the very same area may be state-of-the-art. If a lower-cost upscale NYC physician moves to the cornfields of Kansas, is he suddenly high cost by comparison?

At the heart of the matter, cost remains relative; meaningful figures simply cannot be determined by cost alone. Publishing such comparisons is ill-advised, and drawing conclusions from them is inherently flawed.