Measuring Patients’ Experiences With Individual Physicians:
Are We Ready for Primetime?
Presented at:
Academy Health Annual Research Meeting
San Diego, CA
7 June 2004
Commonwealth Fund and Robert Wood Johnson Foundation
Dana Gelb Safran, ScD
The Health Institute
Institute for Clinical Research and Health Policy Studies
Tufts-New England Medical Center
___________________________________________________________________________
Focusing on Physicians
Survey-based measurement of patients’ experiences with individual physicians is not new.
What’s new: Efforts to standardize and potential for public reporting.
IOM report Crossing the Quality Chasm gave “patient-centered care” a front row seat.
Methods and metrics have been honed through 15 years of research and several recent large-scale demonstration projects.
But putting these measures to use raises many questions about feasibility and value.
Ambulatory Care Experiences Survey Project
Statewide demonstration project in Massachusetts
Collaboration: 6 payers, 6 physician network organizations, Massachusetts Medical Society, Massachusetts Health Quality Partners
Testing the feasibility and value of measuring patients’ experiences with individual primary care physicians and practices
Primary impetus: plans seeking to standardize surveys
IOM “Chasm” report further propelled the work
Principal Questions of the Statewide Pilot
What sample size is needed for a highly reliable estimate of patients’ experiences with a physician?
What is the risk of misclassification under varying reporting frameworks?
Is there enough performance variability to justify measurement?
How much of the measurement variance is accounted for by physicians as opposed to other elements of the system (practice site, network organization, plan)?
Sampling Framework

Region       Payers                         PNOs               Sites  Physicians
Eastern MA   Tufts, BCBSMA, HPHC, Medicaid  PNO1, PNO2, PNO3    34      143
Central MA   BCBSMA, Fallon, Medicaid       PNO4, PNO5          23       35
Western MA   BCBSMA, HNE, Medicaid          PNO6                10       37

Note: both commercially insured and Medicaid patients were sampled in some networks; only commercially insured patients in the others.
Measures from the Ambulatory Care Experiences Survey (ACES)

Communication
Comprehensiveness: whole-person orientation; health promotion/patient empowerment
Integration: team, specialists, lab
Continuity: longitudinal; visit-based
Organizational access
Interpersonal treatment
Trust

(Domains surrounding the core construct of primary care.)
Sample Size Requirements for Varying Physician-Level Reliability Thresholds

Number of responses per physician needed to achieve desired MD-level measurement reliability:

Measure                                  Reliability: 0.7   0.8   0.95
ORGANIZATIONAL/STRUCTURAL FEATURES OF CARE
  Organizational access                                23    39    185
  Visit-based continuity                               13    22    103
  Integration                                          39    66    315
DOCTOR-PATIENT INTERACTIONS
  Communication                                        43    73    347
  Whole-person orientation                             21    37    174
  Health promotion                                     45    77    366
  Interpersonal treatment                              41    71    337
  Patient trust                                        36    61    290
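The required sample sizes above are governed by the Spearman-Brown relationship: the reliability of a physician's mean score rises with the number of patient responses averaged. A minimal sketch of the calculation — the intraclass correlation (ICC) of 0.06 used below is a hypothetical value for illustration, not a figure reported in the study:

```python
import math

def spearman_brown(n: int, icc: float) -> float:
    """Reliability of a physician's mean score based on n patient
    responses, given the per-response intraclass correlation (ICC)."""
    return n * icc / (1 + (n - 1) * icc)

def responses_needed(target: float, icc: float) -> int:
    """Smallest n whose aggregated reliability reaches the target
    (Spearman-Brown prophecy formula, solved for n)."""
    return math.ceil(target * (1 - icc) / (icc * (1 - target)))

# Illustration with a hypothetical per-response ICC of 0.06:
for target in (0.70, 0.80, 0.95):
    print(target, responses_needed(target, 0.06))
```

Measures with a lower single-response ICC require larger samples, which is why the required n varies so widely across the rows of the table.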
What is the Risk of Misclassification?

Not simply 1 − α_MD
Depends on:
Measurement reliability (α_MD)
Proximity of score to the cutpoint
Number of cutpoints in the reporting framework
Risk of Misclassification at Varying Distances from the Benchmark and Varying Measurement Reliability (α_MD)

Probability of misclassification (%) at varying thresholds of MD-level reliability:

MD mean score distance
from benchmark (points)   α_MD=.70   α_MD=.80   α_MD=.90
          1                 38.0       34.5       27.4
          2                 27.1       21.2       11.5
          3                 18.0       11.5        3.6
          4                 11.1        5.5        0.8
          5                  6.3        2.3        0.1
          6                  3.3        0.8     <0.001
          7                  1.6        0.3     <0.001
          8                  0.7     <0.001     <0.001
          9                  0.3     <0.001     <0.001
         10                  0.1     <0.001     <0.001
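Under classical test theory these probabilities follow a normal approximation: a physician's observed score has standard error SD_true · sqrt((1 − α_MD)/α_MD), and the misclassification risk is the normal tail beyond the cutpoint. A sketch — the true-score SD of 5 points is an assumption chosen because it approximately reproduces the tabled values, not a parameter stated on the slide:

```python
import math

def misclassification_prob(distance: float, reliability: float,
                           true_sd: float = 5.0) -> float:
    """Probability that a physician whose true score lies `distance`
    points from the cutpoint is observed on the wrong side of it.
    `true_sd` (assumed 5.0 here) is the between-physician SD of
    true scores."""
    # Standard error of the observed score, from classical test theory
    se = true_sd * math.sqrt((1 - reliability) / reliability)
    # Upper-tail normal probability of crossing the cutpoint
    return 0.5 * math.erfc(distance / (se * math.sqrt(2.0)))
```

For example, misclassification_prob(3, 0.9) is about 0.036, consistent with the 3.6% entry in the table; the corresponding 1.96·SE half-widths (roughly 3.3, 4.9, and 6.4 points at reliabilities 0.9, 0.8, and 0.7) approximate the "areas of uncertainty" shown in the classification figures.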
[Figure: Certainty and Uncertainty in Classification — Comparison with a Single Benchmark. A single benchmark at the 50th percentile (score 65 on the 0–100 scale); the shaded area of uncertainty around it has half-width 3.26 points at α_MD=0.9, 4.9 points at α_MD=0.8, and 6.3 points at α_MD=0.7. Physicians outside the band are classified as significantly below or significantly above the benchmark.]
[Figure: Certainty and Uncertainty in Classification — Cutpoints at 10th & 90th Percentile. Cutpoints at scores 53 and 76 (0–100 scale) define Bottom, Middle, and Top tiers; around each cutpoint the area of uncertainty has half-width 3.26 points at α_MD=0.9, 4.9 points at α_MD=0.8, and 6.3 points at α_MD=0.7.]
[Figure: Probability of misclassification under a 3-tier reporting framework (Substantially Below Average / Average / Substantially Above Average), with cutpoints at the 10th and 90th percentiles (scores 52.9 and 76.3; median 64.6; scale 0–100). A physician at a cutpoint faces 50% misclassification risk regardless of measure reliability; at the median the risk is about 2.4% at α_MD=0.7, 0.6% at α_MD=0.8, and 0.01% at α_MD=0.9, and it is essentially zero by a score of 88.]
[Figure: Probability of misclassification under a 5-tier reporting framework (Substantially Below Average / Below Average / Average / Above Average / Substantially Above Average), with cutpoints at the 10th, 25th, 75th, and 90th percentiles (scores 52.9, 58.5, 70.8, and 76.3; median 64.6). A physician at any cutpoint faces 50% misclassification risk regardless of measure reliability (shown for 0.5–0.9); between cutpoints the risk falls as reliability rises, but because the tiers are narrow it remains substantial — up to roughly 20% at α_MD=0.9 and nearly 39% at α_MD=0.5 for scores near a cutpoint.]
[Figure: Variability Among Physicians (Communication) — histogram of physician mean scores; x-axis: MD mean score (75–100%), y-axis: number of doctors (0–60).]
___________________________________________________________________________
[Figure: Variability among physician group scores by region (Communication) — group mean scores (60–100%) with the 25th–75th percentile range of group scores, for the Eastern, Central, and Western regions.]
[Figure: Variability Across Practice Sites (Communication) — site mean scores (50–100%) with the 25th–75th percentile range of site scores.]
[Figure: Variability Among Physicians within Sites (Communication) — physician mean scores with the 25th–75th percentile range of MD scores, shown separately for Sites A-1 through A-4.]
[Figure: Allocation of Explainable Variance — Doctor-Patient Interactions. Stacked bars (0–100%) for Communication, Whole-person orientation, Health promotion, Interpersonal treatment, and Patient trust, partitioning explainable variance among doctor, site, network, and plan. The individual doctor accounts for the large majority (roughly 62–84%) on every measure, with the practice site accounting for most of the remainder.]
[Figure: Allocation of Explainable Variance — Organizational/Structural Features of Care. Stacked bars (0–100%) for Organizational Access, Visit-based Continuity, and Integration, partitioning explainable variance among doctor, site, network, and plan. Doctor and site together account for the large majority of explainable variance on each measure.]
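The variance-allocation results come from a nested (multilevel) model. The idea can be sketched with a simplified two-level decomposition — physicians nested within sites — using method-of-moments ANOVA estimators for a balanced design; the ratings below are simulated for illustration, and the actual study additionally modeled network and plan levels:

```python
import random
from statistics import fmean

def nested_variance_shares(scores):
    """scores[site][doctor] is a list of patient ratings (balanced design).
    Returns (doctor_share, site_share): percent of system-level
    ("explainable") variance at each level, via method-of-moments
    estimators for a balanced nested ANOVA."""
    s, d, n = len(scores), len(scores[0]), len(scores[0][0])
    doc_means = [[fmean(doc) for doc in site] for site in scores]
    site_means = [fmean(dm) for dm in doc_means]
    grand = fmean(site_means)

    # Mean squares for site, doctor-within-site, and patient-level error
    ms_site = n * d * sum((m - grand) ** 2 for m in site_means) / (s - 1)
    ms_doc = n * sum((doc_means[i][j] - site_means[i]) ** 2
                     for i in range(s) for j in range(d)) / (s * (d - 1))
    ms_err = sum((y - doc_means[i][j]) ** 2
                 for i in range(s) for j in range(d)
                 for y in scores[i][j]) / (s * d * (n - 1))

    # Variance components (clipped at zero); patient residual excluded
    var_doc = max((ms_doc - ms_err) / n, 0.0)
    var_site = max((ms_site - ms_doc) / (n * d), 0.0)
    total = var_doc + var_site
    return 100 * var_doc / total, 100 * var_site / total

# Simulated ratings: doctor effects (SD 5) set larger than site effects (SD 3)
random.seed(0)
data = [[[70 + se + de + random.gauss(0, 10) for _ in range(45)]
         for de in [random.gauss(0, 5) for _ in range(8)]]
        for se in [random.gauss(0, 3) for _ in range(25)]]
doc_share, site_share = nested_variance_shares(data)
```

Because the doctor-level variance component is generated larger than the site-level one, the doctor's share of explainable variance dominates — the pattern the study reports for the doctor-patient interaction measures.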
Summary and Implications
With sample sizes of 45 patients per physician, most survey-based measures achieved physician-level reliability of .7-.85.
With a 3-level reporting framework, risk of misclassification is low – except at the boundaries, where risk is high irrespective of measurement reliability.
Individual physicians and practice sites accounted for the majority of system-related variance on all measures.
Within sites, variability among physicians was substantial.
Summary and Implications (cont’d)
Feasibility of obtaining highly reliable measures of patients’ experiences with individual physicians and practices has been demonstrated.
The merits and value of moving quality measurement beyond health plans and network organizations are clear.
By adding these aspects of care to our nation’s portfolio of quality measures, we may reverse declines in interpersonal quality of care.