Originally posted by skypilot
...On a percentage basis, Manila was at 2.9 percent versus 2.19 percent for UC Irvine, hardly an earthshaking difference, especially considering the small sample size.
You also have to factor in where these graduates did their residencies which is even more important. If they were really incompetent they should not have been allowed to complete their residency.
Also for what and how severe were the infractions? Would the location where the physician is practicing be a factor?
http://www.ctnow.com/media/acrobat/2003-06/8395716.pdf
Well, actually they are pretty different considering the sample size (not considered small by the AMA, which is receptive of IMGs and isn't racist, lest the knee-jerk response be "ulterior motive?") and the *range* (though note that the larger the sample, the smoother the continuum, making it inherently EASIER to point to one school that gets singled out and another that barely does not, where the results are not "earthshaking"ly different).
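Whether a 2.9% vs. 2.19% gap is statistically meaningful depends entirely on the denominators, which the excerpt doesn't give. A quick sketch of the standard two-proportion z-test shows how you'd check (the cohort sizes below are made-up assumptions for illustration, NOT from the article):

```python
import math

def two_prop_z(x1, n1, x2, n2):
    """Two-proportion z-test: returns (z, two-sided p) via the pooled estimate
    and the normal approximation."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)          # pooled proportion under H0: p1 == p2
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability
    return z, p_value

# Hypothetical cohort sizes -- the article's actual denominators are not given here.
z_small, p_small = two_prop_z(29, 1000, 18, 822)      # 2.9% vs ~2.19%, small cohorts
z_big, p_big = two_prop_z(290, 10000, 219, 10000)     # same percentages, 10x the grads
print(f"small cohorts: z = {z_small:.2f}, p = {p_small:.2f}")
print(f"large cohorts: z = {z_big:.2f}, p = {p_big:.4f}")
```

At small (assumed) cohort sizes the gap is not significant, while the identical percentages become highly significant at 10x the size, which is exactly why the denominators matter and why replication across multiple data sets carries more weight than any single pairwise comparison.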
But even more damning is that there were three databases looked at, two (independent) states and an independently collated national one. In all three, the same 4 schools were in the bottom 5%, adding to statistical validity.
Also keep in mind that the article notes the national stats are under-representative (e.g., in the more complete data set for California, 10.24% Manila vs. 7.39% UC Irvine, the latter of which is ALSO in the bottom decile!).
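To see why the same 4 schools landing in the bottom 5% of three independent data sets is so unlikely by chance, here's a back-of-the-envelope sketch (the total number of schools is a made-up assumption; under chance alone, bottom-5% membership in each data set is independent):

```python
import math

# Illustrative assumption: N schools ranked in 3 independent data sets,
# with "bottom 5%" membership being pure chance in each.
N = 1200             # hypothetical number of schools appearing in the rankings
p_one = 0.05         # chance a given school lands in one bottom-5% list
p_all3 = p_one ** 3  # chance it lands in all three independent lists

lam = N * p_all3     # expected number of schools in all three lists by chance
# Poisson tail: probability that 4 or more schools repeat across all three lists
p_ge_4 = 1 - sum(math.exp(-lam) * lam**k / math.factorial(k) for k in range(4))
print(f"expected repeats by chance: {lam:.3f}, P(>=4 repeats): {p_ge_4:.2e}")
```

Under these assumptions you'd expect well under one school to repeat across all three lists by luck, so four repeats is on the order of a one-in-fifty-thousand coincidence, which is the sense in which the replication "adds to statistical validity."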
Certainly, where residency was done would be important, and there may be geographic clusters in where these physicians practice that should be examined as (statistical) factors in determining the root CAUSE of the problems. Further, "really incompetent" is not a useful criterion: what is important about the study is the correlation it found with certain schools, not the amount of incompetence or the root cause of the problems, although certainly schools/residency programs/licensing boards should (and will) examine the possibilities (more below).
But if either of those factors were partly determined by the school attended (e.g., troublesome states/residency programs are most receptive to school x), the results are nonetheless damning to the schools in question; and if not dependent, they still should not be dismissed. The claim isn't that the troublesome schools necessarily don't teach well, but that the correlation exists (and, separately, that those schools have lower admissions standards). The two int'l schools *may* be getting penalized in terms of residency choices, and the two domestic ones *in theory* may be getting penalized by racist residency policies, but the result is the same: stats don't say anything beyond what they state, i.e., grads from some schools have had increased rates of being disciplined, and this is not to be dismissed via extenuating hypotheticals. I would think a valid (statistical) question is: do the stats have any predictive value?
As to the argument (taken independently) that the worst students should not have passed their residency:
1) there will always be a distribution of results, with tails, so unless the residency programs at the bottom turn out to be non-random (TBD), the argument makes no sense;
2) the argument is a "turn" in debate-speak, i.e., their schools should arguably not have let them graduate in the first place. Yes, training continues after school, so schools cannot *necessarily* be accused of graduating more incompetents; likewise, if doctors practiced without residency, probably more total disciplinary action would be taken (less training), so the quality of residency training certainly matters. But by the same token, if the ultimate competency of their practicing grads is of concern, those schools that for example may be "penalized" in the residency matching process (still a dependency in the endpoints) arguably ought to hold back more students, or perhaps negotiate better state residency agreements. Nevertheless, med school candidates should look carefully at the implications of going to school x, as the stat as a probability still has meaning [think of why any such stats might matter to "you", not how they necessarily reflect on the actual quality of the school].
The bigger picture is that with any certification/assessment process, there will be a range in the measure (ideally, a normal curve), and those at the bottom, wherever the bottom is, will be termed "problematic". The interesting result isn't some supposedly arbitrary numerical cutoff defining incompetence, but that the tail is not random: those data points correlate well with 4 schools, across several data sets. I'd personally like to see the same data matched to USMLE, MCAT, and college GPA, as I too have hypotheses, but alas, this will never happen, and thus they are untestable.
Either way, if policy-makers above the school level decide to change policy (likely for the wrong, that is, unsupported, reasons), they could still make "good" decisions, i.e., by improving the assessment methods (e.g., add the CSA for all, which is going to happen eventually anyway), or they could make dumb decisions, like penalizing int'l schools (via state or federal residency program requirements) over hurting domestic minority ones. But the more I think about it (as I edit my original post), the more I'm convinced that nothing *new* will come about at those higher levels, since it would be dangerous politically to even go there and invite the "race" card (consider this: the Establishment has had the same raw data for years). Thus, the study matters only for intellectual curiosity and for those reflecting on how they might be affected probabilistically. And *maybe*, hopefully, to speed up what's already coming down the pipeline -- CSA for all (thus helping competent IMGs). Who wants to write their congressmen?
-Pitman