Proposed ICHPR 2005 Invited Sessions

4/18/05


Session 1:

PREDICTING HIGH-COST USERS OF MEDICAL CARE AND THE PERSISTENCE OF HIGH EXPENDITURES OVER TIME

Organizer: Steve Cohen

The concentration of health care expenditures in a small percentage of the population has motivated efforts to ensure sufficient sample representation of this subgroup in health care expenditure studies. The impact of this skewed distribution on the precision of national health expenditure estimates is examined, and innovative strategies to identify future high-cost cases are presented.

1. Using the SF-12 to Predict Health Care Expenditures

John Fleishman, Ph.D., and Joel Cohen, Ph.D., Center for Financing, Access and Cost Trends, Agency for Healthcare Research and Quality; and Mark Kosinski, M.S., QualityMetric


Risk adjustment models often use demographic or diagnostic information to predict health care expenditures.  Some research, based on samples from restricted populations, suggests that patient-reported health status measures can enhance prediction of expenditures and improve risk adjustment.  This project examines relationships between measures of physical and mental health status, based on the SF-12 fielded in the MEPS self-administered questionnaire in 2000, and expenditures for health care in 2001.  We examine the extent to which the SF-12 physical and mental health scores improve prediction over and above demographic characteristics and self-reported chronic conditions.  We also examine whether the SF-12 adds to predictions based on prior expenditures.  The results can point to potential enhancements in risk adjustment models.
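A minimal sketch of the incremental-prediction question posed here, using simulated data and hypothetical variable names (age, female, n_chronic, pcs, mcs, totexp) rather than actual MEPS fields: fit an expenditure regression with and without the SF-12 component scores and compare adjusted R-squared.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "age": rng.integers(18, 85, n),
    "female": rng.integers(0, 2, n),
    "n_chronic": rng.poisson(1.0, n),
    "pcs": rng.normal(50, 10, n),   # SF-12 physical component score (hypothetical)
    "mcs": rng.normal(50, 10, n),   # SF-12 mental component score (hypothetical)
})
# Simulated expenditures: worse physical health (lower PCS) -> higher spending
df["totexp"] = np.exp(6 + 0.02 * df.age + 0.3 * df.n_chronic
                      - 0.03 * (df.pcs - 50) + rng.normal(0, 1, n))

base = smf.ols("totexp ~ age + female + n_chronic", data=df).fit()
full = smf.ols("totexp ~ age + female + n_chronic + pcs + mcs", data=df).fit()
print(f"adj. R2 without SF-12: {base.rsquared_adj:.3f}   with SF-12: {full.rsquared_adj:.3f}")
```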

 

2. An Evaluation of the Performance of Prediction Models to Identify High Expenditure Cases

Steven B. Cohen, Ph.D., Trena Ezzati-Rice, M.S., and William Yu, M.S., Center for Financing, Access and Cost Trends, Agency for Healthcare Research and Quality


In order to satisfy analytic objectives for nationally representative population-based surveys, the adopted sample designs often include oversampling techniques to ensure that sufficient sample sizes are achieved for specific policy-relevant subgroups. This strategy is attractive in terms of both cost efficiency and precision, with respect to meeting underlying survey design requirements. For population subgroups defined by characteristics that are more static in nature, such as race/ethnicity, gender, age interval, and chronic conditions of long duration, ensuring sufficient sample size through the implementation of an oversampling strategy is a relatively straightforward operation. Alternatively, achieving sample size targets for population subgroups that are more dynamic in nature, such as the poor or near poor, individuals with high levels of medical expenditures, and the uninsured, is a more difficult enterprise. In this paper, the performance of alternative prediction models to identify future high expenditure cases is evaluated.

Examples of these applications are drawn from the Medical Expenditure Panel Survey (MEPS). Given the high concentration of health care expenditures in a given year among a relatively small percentage of the population, a prediction model that can accurately identify the persistence of high levels of expenditures is an important analytical tool. This type of modeling effort also enhances the ability to discern the causes of high health care expenses and the characteristics of the individuals who incur them. This feature also applies to prediction models that can accurately identify those individuals with persistently low or average levels of expenditures. The models that are presented have particular relevance as statistical tools to facilitate efficient sampling strategies that permit the selection of an over-sample of individuals likely to incur high levels of medical expenditures in the future. Furthermore, such modeling efforts are particularly attractive to assist in the targeting of disease management programs to high cost cases, which may facilitate reductions in the concentration of overall future year health care expenditures.
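A hedged sketch of the general idea, not the authors' actual model: simulated prior-year characteristics feed a logistic regression that flags persons likely to fall in the top expenditure decile the following year, the kind of prediction that could drive an over-sampling strategy. All variable names and data below are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 20000
age = rng.integers(18, 90, n)
prior_exp = np.exp(rng.normal(7, 1.2, n))          # prior-year expenditures (simulated)
n_chronic = rng.poisson(1.0, n)
# Next-year expenditures depend on prior spending and chronic conditions
next_exp = np.exp(0.6 * np.log(prior_exp) + 0.25 * n_chronic
                  + 0.01 * age + rng.normal(0, 1, n))
high_cost = (next_exp >= np.quantile(next_exp, 0.90)).astype(int)

X = np.column_stack([age, np.log(prior_exp), n_chronic])
X_tr, X_te, y_tr, y_te = train_test_split(X, high_cost, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
p = model.predict_proba(X_te)[:, 1]
print("c-statistic for identifying future high-cost cases:", round(roc_auc_score(y_te, p), 3))
# Persons with high predicted probability could be over-sampled in the next panel.
```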







3. The impact of diagnosis accuracy on predictive power of cost prediction models using the MEPS

Joel Cohen and John Fleishman, AHRQ

 

A number of models have been developed that use claims data for purposes of risk adjustment and expenditure prediction.  One widely used algorithm, developed by DxCG Inc., uses ICD-9 codes, typically obtained from large claims databases, to generate expenditure predictions.  Applying this algorithm to household survey data presents some challenges.  One issue is whether using 3-digit ICD-9 codes versus 5-digit ICD-9 codes has a major impact on the expenditure predictions.  A second issue revolves around insurance status:  Different prediction models have been developed for data from different claims databases (private payer, Medicare).  Household survey respondents, however, may have multiple sources of insurance in the course of a year, and procedures for dealing with such individuals need to be developed.  This presentation will discuss these issues in the context of using DxCG models to predict expenditures for respondents in the Medical Expenditure Panel Survey, a nationally representative household survey.
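To illustrate the 3-digit versus 5-digit question, the sketch below simply truncates fully specified ICD-9 codes to their 3-digit categories before they would be passed to a grouper; the codes are made up, and the proprietary DxCG algorithm itself is not shown.

```python
def truncate_icd9(code: str) -> str:
    """Collapse a 4- or 5-digit ICD-9 code to its 3-digit category."""
    code = code.replace(".", "")
    # E codes keep one extra leading character (E + three digits)
    return code[:4] if code.startswith("E") else code[:3]

full_codes = ["250.01", "410.71", "V70.0", "E849.0"]
print([truncate_icd9(c) for c in full_codes])   # ['250', '410', 'V70', 'E849']
```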

 

Potential Discussants

Alan Monheit, Ph.D., University of Medicine and Dentistry of New Jersey; Jack Feldman, Ph.D., NORC

Session 2:

Statistical Issues in the Hospital CAHPS (HCAHPS) Survey

Organizer: James O’Malley, Harvard Medical School.


1. Overview of HCAHPS, design of the HCAHPS instrument, and political issues

Paul Cleary, Harvard Medical School

This talk begins with an overview of the HCAHPS project, including the role it plays in providing consumers with information about the quality of care at hospitals, and describes the cyclical process of designing the instrument. The discussion of survey content will include the choice of appropriate scales for the items and the use of screener items, criterion variables, demographic items, and rating items. Common pitfalls such as misleading and inappropriately ordered questions will also be discussed. In the latter part of the talk, the roles played by the various statistical methods used in designing a survey will be presented. Particular emphasis will be placed on psychometric concepts such as validity and reliability, as these are often not covered in statistics programs. The constraints placed on the survey design by political forces will also be discussed.


2: Issues concerning sample size calculation and reporting in HCAHPS

Marc Elliott, Rand Corporation


This talk will cover the finite-population versus infinite-population controversy, the difficulties of constructing reports for hospitals when there is substantial variation in sample size (particularly when several hospitals have small sample sizes), and whether case-mix adjustment should be used when making reports. The issue of shrinkage, in which hospitals with small sample sizes are pulled toward the mean more than hospitals with large sample sizes, and the hospitals' concerns about this phenomenon, will also be discussed.


3: Hierarchical Factor Analysis for Survey Data with Structured Nonresponse

James O’Malley, Harvard Medical School.


Health care quality surveys in the USA are administered to individual respondents (hospital patients, health plan members) to evaluate the performance of health care units (hospitals, health plans). Due to both planned item nonresponse (caused by screener items and associated skip patterns) and unplanned nonresponse, quality measures, such as item means, are based on different subsets of the survey respondents. For better understanding and more parsimonious reporting of dimensions of quality, we analyze relationships between quality measures at the unit level by applying techniques such as factor analysis to a covariance structure estimated at the unit level within a hierarchical model. At the lower (patient) level we first fit generalized variance-covariance functions that take into account the nonresponse patterns in the survey responses. A between-unit covariance matrix is then estimated using a hierarchical model, which evaluates the fitted generalized variance-covariance functions to account for sampling variation. Maximum quasi-likelihood and Bayesian inferential procedures are used for model fitting. At the second (plan or hospital) level, we propose comparing two analytic strategies: (1) estimating an unstructured covariance matrix and applying an exploratory factor analysis to summarize relationships, and (2) estimating a factor analytic structure integrated into the model (thus more closely related to confirmatory factor analysis). The latter strategy allows specific hypotheses concerning the number of factors, and the grouping of items into related factors that define composite items, to be tested. Results from this work will be presented along with a description of the computational difficulties encountered and the solutions used to overcome them.
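A deliberately simplified sketch of analytic strategy (1) above: compute unit-level (hospital-level) item means and run an exploratory factor analysis on them. It omits the nonresponse-aware variance-covariance functions and the hierarchical estimation step the full method uses; all names and data are illustrative.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
n_hosp, n_per_hosp, n_items = 100, 50, 6
# Two latent hospital-level quality dimensions drive the six survey items
latent = rng.normal(0, 1, (n_hosp, 2))
loadings = np.array([[1, 0], [1, 0], [1, 0], [0, 1], [0, 1], [0, 1]], float)
records = []
for h in range(n_hosp):
    item_means = latent[h] @ loadings.T
    resp = item_means + rng.normal(0, 1, (n_per_hosp, n_items))  # patient-level noise
    records.append(pd.DataFrame(resp).assign(hospital=h))
patients = pd.concat(records)

hosp_means = patients.groupby("hospital").mean()          # unit-level item means
fa = FactorAnalysis(n_components=2).fit(hosp_means.values)
print(np.round(fa.components_, 2))   # rows = factors, columns = items
```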


Discussant: Ron Hays, Rand Corporation

Session 3:

Imputation in high-dimensional complex surveys

Organizers: Tom Belin and Recai Yucel


1. Multiple imputation using chained hierarchical models

Speaker: Recai Yucel, University of Massachusetts, Amherst


Multiple imputation is an increasingly popular method for handling missing data due to item nonresponse in surveys. When using multiple imputation, it is beneficial to reflect the sample design in the imputation model. If the sample design involves clustering, one way to represent the cluster effects is via random effects in the imputation model. Although this idea has been developed in detail for imputing continuous variables, it is less well-developed for imputing categorical variables and mixtures of categorical and continuous variables. In this paper, we describe two approaches to producing multiple imputations for such variables. The first approach extends the general location model proposed by Olkin and Tate (1961) to include random effects. Imputations under this approach are drawn from the joint predictive distribution of the missing values, and thus follow the fully model-based paradigm for multiple imputation. This approach is problematic in highly multivariate problems, however, due to the number of parameters in the imputation model. For such situations, we propose an extension of the methods given by Raghunathan et al. (2001), in which we produce imputations by fitting chained hierarchical models and by drawing missing values variable-by-variable from the chained models. We illustrate and compare these techniques using simulated data.
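The chained, variable-by-variable idea can be sketched with scikit-learn's IterativeImputer, which cycles through the incomplete variables and models each one conditionally on the others. This is only the flat, single-level version; the random-effects (hierarchical) extension discussed in the talk is not implemented here, and the data are simulated.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(3)
n = 1000
x1 = rng.normal(0, 1, n)
x2 = 0.8 * x1 + rng.normal(0, 0.6, n)
x3 = -0.5 * x1 + 0.4 * x2 + rng.normal(0, 0.6, n)
X = np.column_stack([x1, x2, x3])

X_missing = X.copy()
X_missing[rng.random((n, 3)) < 0.2] = np.nan        # roughly 20% missing completely at random

# Each incomplete variable is modeled conditionally on the others, cycling until
# convergence; sample_posterior=True draws imputed values rather than plugging in means.
imputations = [
    IterativeImputer(sample_posterior=True, max_iter=20, random_state=m).fit_transform(X_missing)
    for m in range(5)                               # five completed data sets
]
print(np.round(np.corrcoef(imputations[0], rowvar=False), 2))
```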


2. Multiple Imputation by Ordered Monotone Blocks: The case of the Anthrax Vaccine Clinical Trial

M. Baccini (University of Florence), S. R. Cook (Columbia University), C. Frangakis (Johns Hopkins University), F. Li (Johns Hopkins University), Fabrizia Mealli (University of Florence), and D. B. Rubin (Harvard University)


Multiple imputation generally involves specifying a joint distribution for all variables in a data set; in the Bayesian setting, the data model is often supplemented by a prior distribution for the parameter vector governing the distribution of the variables. The Anthrax Vaccine Trial data created new challenges for multiple imputation because of the large number and different types of variables in the data set and the limited number of units within each treatment arm: in order to ensure that no data from one treatment arm contaminate imputed data from another arm, imputations must be done independently across treatment arms. In addition, data models for multiple imputation are often based on the multivariate normal or general location model, neither of which is appropriate for the Anthrax Vaccine data set, or for other complex observational and experimental data, which often include continuous, semi-continuous, ordinal, categorical, and binary variables. An intuitive method for handling such complex data sets with missing values is to instead specify, for each variable with missing values, a univariate conditional distribution given all other variables. Such univariate distributions take the form of regression models (e.g., linear regression, logistic regression), which are straightforward to work with and can accurately reflect different data types. Software such as MICE (van Buuren et al., 1999) and IVEware (Raghunathan et al., 2001) imputes missing data this way. Imputation based on univariate conditional distributions is valid for monotone missing data if, for each variable, the univariate distribution involved is conditional only on those other variables that are more observed than the variable being imputed. However, when missing data are not monotone, univariate imputation strategies have the theoretical drawback that the collection of fully conditional distributions may not correspond to any joint distribution for all the variables, i.e., the conditional distributions may be incompatible. The multiple imputation proposed here, and implemented for the Anthrax Vaccine Trial as the motivating case, aims to capitalize on the simplicity of univariate conditional modeling while minimizing incompatibility. Rubin (2003) presents a potentially incompatible MCMC algorithm similar to the one proposed here, though less sophisticated than our current plan. We propose a new method, imputation by ordered monotone blocks, which extends the theory for monotone patterns of missing data to arbitrary patterns by breaking the problem into a collection of smaller problems in which the missing data do form a monotone pattern. Different types of univariate models are used, depending on whether the variables are continuous, semi-continuous, binary, ordinal, or categorical.


3. Multiple Imputation of Missing Income Data in the National Health Interview Survey
Nathaniel Schenker, National Center for Health Statistics


The National Health Interview Survey (NHIS) provides a rich source of data for studying relationships between income and health and for monitoring health and health care for persons at different income levels. However, the nonresponse rates are high for two key items, total family income in the previous calendar year and personal earnings from employment in the previous calendar year. To handle the problem of missing data on family income and personal earnings in the NHIS, multiple imputation of these items, along with personal earnings status and ratio of family income to the Federal poverty threshold (derived from the imputed values of family income), was performed for the survey years 1997 – 2002. (There are plans to create multiple imputations for the years 2003 and beyond as well, as the data become available.) Files of the imputed values, as well as documentation, are available at the NHIS Web site (http://www.cdc.gov/nchs/nhis.htm). This presentation describes the approach used in the multiple-imputation project and evaluates the methods via analyses of the multiply imputed data. The analyses suggest that imputation corrects for biases that occur when estimates are based on just the complete cases, and that multiple imputation results in gains in efficiency as well.
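Analyses of the multiply imputed income files are meant to be run on each imputed data set and then pooled with Rubin's combining rules. The sketch below shows only the pooling step, with made-up per-imputation estimates and standard errors.

```python
import numpy as np

q = np.array([41200., 40850., 41500., 41010., 41330.])   # estimate from each of m = 5 imputations (illustrative)
u = np.array([250., 240., 260., 255., 245.]) ** 2         # within-imputation variances (SE squared)

m = len(q)
qbar = q.mean()                        # combined point estimate
ubar = u.mean()                        # average within-imputation variance
b = q.var(ddof=1)                      # between-imputation variance
total_var = ubar + (1 + 1 / m) * b     # Rubin's total variance
print(f"pooled estimate {qbar:.0f}, standard error {np.sqrt(total_var):.0f}")
```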

Session 4:

USING RANDOMIZED STUDIES TO INFORM THE ANALYSIS OF OBSERVATIONAL STUDIES

Organizer: Therese Stukel


Invited panel format. Panelists would have 10 minutes to speak. The moderator (Therese) would prepare a list of questions for the panel, then open discussion to the audience.


1. Miguel Hernan, Assistant Professor of Epidemiology, Harvard School of Public Health.
Squaring a "correct" analysis of observational data from the Nurses' Health Study with the findings from the Women's Health Initiative RCT.


2. Muhammad Mamdani, Senior Scientist and Pharmaco-epidemiologist, Institute for Clinical Evaluative Sciences, Toronto, has used the Ontario Drug Benefit data (universal coverage for adults aged 65+) to perform several observational drug studies. These include (i) studies comparing the effects of the COX-2 inhibitors, Vioxx and Celebrex, on population rates of GI bleeding and congestive heart failure (Lancet, BMJ), which obtained the same results as the subsequent clinical trials; and (ii) studies of the effects of statins on hip fracture, which could not replicate clinical trial results, possibly due to unmeasured confounding by key risk factors: obesity, weight-bearing exercise, smoking, and ethnicity.


3. Steve Pauker, Vice-Chair for Clinical Affairs and Associate Physician-in-Chief, Department of Medicine at New England Medical Center, and founder of the Division of Clinical Decision Making. Dr. Pauker evaluated the results of the Nurses' Health Study with a view to why the effects of HRT might differ from those of the Women's Health Initiative. Using simulation studies, he demonstrated that much of the gap between the studies is potentially due to ascertainment bias of silent ischemic events and to socio-economic and lifestyle differences.


Panelist #4: TBN

Session 5:

Methods in longitudinal data analysis

Organizer: Jim Lubitz


Talk 1. Validation of life table approaches to estimating population health status.

Liming Cai, NCHS

Longitudinal databases are ideal for describing the health histories of individuals. Unfortunately, there are no national databases that have followed individuals for an extended time to track health events and health changes at the person level. Therefore, life table techniques have been used to simulate individuals' health experiences by building simulated populations. But how accurately do the results from simulated populations reflect actual health experiences? The Cardiovascular Health Study (CHS) offers an opportunity to address this question because it has 14 years of annual follow-up data on a 6,000-person study group. We will report the results of simulating cohorts by fitting two life table models, the multistate life table (MSLT) and the semi-Markov process model (SMP), to the actual CHS data. The SMP model incorporates new approaches to dealing with left-censored data. We will focus on comparing distributional statistics for estimates such as active life expectancy, age at onset of disability, and years in various health states. A split sample of the CHS will be used, with estimation and verification sub-samples.


Talk 2. New findings on non-response from the Medicare Current Beneficiary Survey (MCBS)

John Kautter, Ph.D., RTI

The MCBS is a continuous survey of about 12,500 Medicare beneficiaries, begun in 1991. Respondents remain in the survey for 3.5 years. Because there are data on the use and cost of health care services and on mortality for all Medicare beneficiaries, both respondents and non-respondents, the MCBS offers an unusual opportunity to study the effect of non-response, both refusals at the onset of the survey and attrition throughout the 3.5-year follow-up period. A recently completed study by RTI International, under contract with the Centers for Medicare and Medicaid Services, analyzed the extent and impact of non-response in the MCBS for data years 1997-99. The study found that initial refusers were healthier than respondents (perhaps a busy, active senior phenomenon). Correction for non-response, using traditional approaches, succeeded in bringing estimates for respondents and non-respondents very close on measures such as per capita Medicare spending.

Talk 3. “Handling Incomplete Data in Longitudinal Clinical Trials”

Geert Molenberghs (Limburgs Universitair Centrum, Belgium)

The randomized controlled trial (RCT) is often used to establish a causal effect of a new treatment on a response. The allocation of the different treatments must be truly randomized. By keeping the groups as similar as possible at baseline, the effects of factors other than the intervention itself are minimized, and hence differences in response on a clinical outcome can be ascribed solely and entirely to differences in treatment allocation. However, in practice, this paradigm is jeopardized in two important ways. First, some patients may not receive the treatment as planned in the study protocol because they do not comply with it; some may take more than planned on their own initiative, and in rare cases patients may even gain access to medication allocated to the other treatment arm(s). Second, some patients may leave the study, some rather early after their enrollment in the trial, some at a later stage. In such cases, virtually no data or, at best, only partial data are available. This is bound to happen in studies that run over a relatively long period of time and/or when the treatment protocol is highly demanding. Thus, in reality, missing data are an almost ever-present problem in clinical trials. In RCTs, these missing data undermine the randomization basis for estimates of treatment efficacy. Regardless of the cause, inappropriate handling of the missing information can lead to bias. In the RCT setting, a commonly used method of analyzing longitudinal data with non-response is to set a subject's missing responses equal to their last observed response (last observation carried forward, LOCF). There are several objections to the use of this method. Other methods include complete case analysis (CC) and simple forms of imputation. These are often used without questioning the possible influence of their assumptions on the final results, even though several authors have written about this topic. We will contend that analyzing the data as if they were complete after carrying the last observation forward is an unscientific approach. One should shift from an LOCF analysis to the scientific methods at hand, which make the assumptions transparent and thus allow the sensitivity of the conclusions to these assumptions to be assessed and reported. Material to be discussed appears in Molenberghs, G., Thijs, H., Jansen, I., Beunckens, C., Kenward, M.G., Mallinckrodt, C., and Carroll, R.J. (2004), "Analyzing incomplete longitudinal clinical trial data," Biostatistics, 5, 445-464.
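A small illustration of why LOCF can mislead, using simulated longitudinal data with outcome-dependent dropout: the LOCF-completed data flatten the estimated time trend, while a likelihood-based mixed model fit to the available data recovers it. This is an illustrative sketch, not an analysis from the cited paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
rows = []
for subj in range(300):
    intercept = rng.normal(10, 2)
    for t in range(5):
        rows.append((subj, t, intercept - 1.0 * t + rng.normal(0, 1)))  # true slope = -1
long = pd.DataFrame(rows, columns=["id", "time", "y"])

# Subjects with low starting values tend to drop out after visit 2
dropout_time = 2 + (long.groupby("id")["y"].transform("first") > 10) * 2
observed = long[long.time <= dropout_time].copy()

# (a) LOCF: fill each subject's series forward to all 5 visits, then fit OLS
grid = pd.MultiIndex.from_product([range(300), range(5)], names=["id", "time"]).to_frame(index=False)
locf = grid.merge(observed, how="left", on=["id", "time"]).sort_values(["id", "time"])
locf["y"] = locf.groupby("id")["y"].ffill()
print("LOCF slope:       ", round(smf.ols("y ~ time", data=locf).fit().params["time"], 2))

# (b) Mixed model on the data actually observed
mm = smf.mixedlm("y ~ time", observed, groups=observed["id"]).fit()
print("Mixed-model slope:", round(mm.params["time"], 2))   # much closer to the true slope of -1
```
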
Session 6:
Methods of Risk Adjustment for Skewed Outcome Data

Organizer: Julianne Souchek

1. Addressing Skewness and Kurtosis in Risk Adjustment - Alberto Holly and Yevhen Pentsak, Institute of Health Economics and Management, University of Lausanne, Switzerland.

In prospective risk adjustment, we need to predict the health care expenditures of sickness fund enrollees. The prediction is usually based on the estimation of a linear model in which the dependent variable is sickness fund payments to providers - for both inpatient and outpatient services - and the explanatory variables are the risk adjusters that help predict these payments.

However, health care expenditures are usually highly skewed and highly peaked, and their far-right tail is too long even for a lognormal distribution. These characteristics imply that the third and fourth moments of the distribution of the positive expenses are essential in order to model it. In this research project we consider a linear model of the form y = x'β + ε, and our primary interest is to estimate the vector of regression coefficients β without transforming the data (for example, by taking the logarithm of y). To this end, we assume that the conditional distribution of y given x belongs to a four-parameter family of distributions, these parameters being related to the first four conditional moments of y given x.

Although OLS procedures yield a consistent estimator of β, it should be less efficient than the maximum likelihood estimator (MLE) of β, which explicitly takes into account the additional information on the third and fourth moments. Similarly, the distribution of the OLS t-ratio should be affected by the departure from normality of the distribution of y conditional on x, since the asymptotic variance of the OLS estimator also depends on these moments. In order to evaluate the order of magnitude of these expected effects, we consider in detail two particular cases: Pearson's type IV distribution and the generalised gamma distribution; the use of the latter has recently been advocated by Manning, Basu and Mullahy (2003).
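As a small illustration of the distributional point, the sketch below fits a generalised gamma and a lognormal to simulated heavy-tailed expenditures and compares log-likelihoods. This covers only the marginal-distribution step, not the four-parameter conditional regression developed in the talk.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
expenses = stats.gengamma.rvs(a=0.6, c=0.5, scale=2000, size=5000, random_state=rng)

gg_params = stats.gengamma.fit(expenses, floc=0)    # returns (a, c, loc, scale)
ln_params = stats.lognorm.fit(expenses, floc=0)     # returns (s, loc, scale)

ll_gg = stats.gengamma.logpdf(expenses, *gg_params).sum()
ll_ln = stats.lognorm.logpdf(expenses, *ln_params).sum()
print(f"log-likelihood: generalised gamma {ll_gg:.0f}  vs  lognormal {ll_ln:.0f}")
```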

Besides efficiency considerations, we address the following research questions:

  1. What are the properties of the OLS estimator of the variance?

  2. What are the properties of the OLS t-ratio?

  3. What are the properties of the optimal predictor when the true distribution is either a Pearson’s type IV or a generalised gamma distribution?

We apply these ideas to merged hospital records and insurance data and to a health-based risk adjustment model in Switzerland.


2. Using Diagnosis-Based Risk Adjustment and Self-Reported Health Status to Predict Mortality - Kenneth Pietz, PhD, and Laura A. Petersen, MD, MPH, VA Medical Center and Baylor College of Medicine

Background: Both diagnosis-based risk adjustment variables and self-reported health status have been found to predict mortality.

Objectives: (1) To compare the ability of two diagnosis-based risk adjustment systems and health self-report obtained at approximately the same time to predict long-term mortality. (2) To determine whether health self-report data contain health information not contained in diagnosis-based risk adjustment systems.

Methods: This was a cohort study using VA administrative databases. We tested the ability of Diagnostic Cost Groups (DCGs), Adjusted Clinical Groups (ACGs), and SF-36V (SF-36 for veterans) Physical Component Score (PCS) and Mental Component Score (MCS) to predict one-year and five-year mortality. The additional predictive value of adding PCS and MCS to ACGs and DCGs was also evaluated. Logistic regression models were compared using Akaike’s information criterion and the c-statistic. The outcome was all-cause mortality.
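A minimal sketch of this kind of nested-model comparison on simulated data: a baseline logistic model with age and gender versus one that adds health-status scores, compared on AIC and the c-statistic. The PCS/MCS columns here are placeholders, not the SF-36V scoring algorithm.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)
n = 10000
d = pd.DataFrame({"age": rng.integers(40, 90, n),
                  "male": rng.integers(0, 2, n),
                  "pcs": rng.normal(40, 10, n),
                  "mcs": rng.normal(48, 10, n)})
logit_p = -6 + 0.06 * d.age - 0.05 * (d.pcs - 40)          # lower PCS -> higher mortality
d["died"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

base = smf.logit("died ~ age + male", data=d).fit(disp=0)
full = smf.logit("died ~ age + male + pcs + mcs", data=d).fit(disp=0)
for name, m in [("age+gender", base), ("age+gender+PCS/MCS", full)]:
    print(f"{name:22s} AIC={m.aic:8.1f}  c-statistic={roc_auc_score(d.died, m.predict(d)):.3f}")
```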

Results: The sample contained 35,337 VA beneficiaries at eight medical centers during fiscal year (FY) 1998 who voluntarily completed an SF-36V survey. The c-statistics for PCS and MCS combined with age and gender were 0.757 for 1-year mortality and 0.765 for 5-year mortality. For DCGs with age and gender the c-statistics for 1- and 5-year mortality were 0.793 and 0.781, respectively. Adding PCS and MCS to the DCG model increased the c-statistics to 0.811 for 1-year and 0.796 for 5-year mortality.

Conclusions: The diagnosis-based risk adjustment variables showed slightly better performance than the health self-report variables in predicting mortality. Health self-report may add health risk information in addition to age, gender and diagnosis for predicting mortality.



3. Risk Adjustment with Flexible Link and Variance Function Models - Anirban Basu Ph.D., University of Chicago; Bhakti Arondekar Ph.D., GlaxoSmithKline; Paul Rathouz Ph.D., University of Chicago.

Background: Traditional models such as ordinary least squares (OLS) regression and transformation models such as log-OLS regression have been shown to be problematic in modeling skewed outcome variables such as costs. Researchers have suggested the use of generalized linear models (GLM) to overcome these problems. However, specifications of a link function and/or variance function in GLM are seldom driven by theory and are often difficult to ascertain using available diagnostic tests. Recently we proposed an extension to the estimating equations in generalized linear models that estimates parameters in the link function and variance structure simultaneously with the regression coefficients. Rather than focusing on the regression coefficients, the purpose of these models is consistent estimation of the mean of the outcome as a function of a set of covariates, and of various functionals of the mean function.

Objective: To illustrate the biases that may arise in using alternative estimators to model expenditure data.

Methods: We estimate the potential 1-year cost savings if HF could be prevented in post-MI patients, using data for years 1998, 1999 and 2000 from a large claims database. We model the total medical expenditures for each patient over the 1-year period post index date. Covariates include HF, age, sex, death and comorbidities at index hospitalization, type of insurance, procedures performed, year, and the type of MI. The estimators that we consider are ordinary least squares (OLS) regression, log-transformed OLS regression with and without heteroscedastic smearing, gamma regression with log link, and the extended estimating equations (EEE) model, which estimates both the link and variance parameters for the data along with the regression coefficients.
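The sketch below contrasts three of the named estimators on simulated skewed cost data: OLS on raw costs, log-OLS with Duan's smearing retransformation, and a gamma GLM with log link. The EEE model, which also estimates the link and variance parameters, is not available in standard libraries and is not shown; all data and effect sizes are simulated.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 8000
hf = rng.integers(0, 2, n)                          # heart-failure indicator (simulated)
age = rng.integers(40, 90, n)
X = sm.add_constant(np.column_stack([hf, age]))
# Log-scale data-generating process -> skewed, heteroscedastic costs
cost = np.exp(7 + 0.8 * hf + 0.01 * age + rng.normal(0, 1.2, n))

ols = sm.OLS(cost, X).fit()
log_ols = sm.OLS(np.log(cost), X).fit()
smear = np.mean(np.exp(log_ols.resid))              # Duan's smearing factor
gamma = sm.GLM(cost, X, family=sm.families.Gamma(link=sm.families.links.Log())).fit()

# Predicted incremental cost of HF for a 65-year-old under each estimator
x1, x0 = np.array([[1, 1, 65]]), np.array([[1, 0, 65]])
print("OLS:      ", round(ols.predict(x1)[0] - ols.predict(x0)[0]))
print("log-OLS:  ", round(smear * (np.exp(log_ols.predict(x1)[0]) - np.exp(log_ols.predict(x0)[0]))))
print("gamma GLM:", round(gamma.predict(x1)[0] - gamma.predict(x0)[0]))
```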

Results: 10,462 patients were eligible for the study. Complete one-year follow-up data were available for 7,428 patients. No significant differences were found in observed variables between those with complete one-year data and those without. Significant differences in estimated cost savings were found between estimators. The EEE model was found to be the appropriate estimator based on a broad set of goodness-of-fit tests and tests of over-fitting. Based on this estimator, the potential 1-year cost saving due to preventing HF in post-MI patients who develop HF was estimated to be $14,700 (1,135).

Conclusions: Careful selection of estimator is important for modeling cost data. The EEE estimator seems to perform better than alternative estimators studied.


Potential Discussants: Willard G. Manning, U Chicago, John Mullahy, U Wisconsin, Alan Zaslavsky, Harvard University

Session 7:
Population Needs-Based Funding Models (International Health Policy session)
Organizers: Lisa Lix, Manitoba Centre for Health Policy, University of Manitoba; Therese Stukel, Institute for Clinical Evaluative Sciences, University of Toronto


The basic objective of most publicly funded health care systems is to allocate health care resources to populations based on need. Health need, as opposed to health status, is defined as the ability to benefit from a health intervention. Defining the empirical relationship between need and resource allocations is the single most intractable task in establishing population-based funding formulas. Need is typically measured directly (morbidity, disability) or indirectly (mortality, SES) using population-based census or survey data.

Many jurisdictions have developed population need-based funding models based on indirect measures of need to establish regional funding levels for health care resource allocations. Others have implemented a “risk-adjustment” model to predict future costs using clinical diagnoses from health administrative data.

The purpose of this session is to share international perspectives on the use of direct, indirect and use-based measures of population need in funding formulae.


Speakers:

1. Professor Peter C. Smith, Centre for Health Economics and Department of Economics and Related Studies, University of York, UK. Professor Smith has acted as consultant to numerous agencies, including government departments, the OECD, the World Health Organization and the World Bank. He is also joint editor-in-chief of the journal Health Care Management Science, a member of council of the Royal Statistical Society and a commissioner at the Audit Commission. His work in health care policy includes the development of the "York formulae", which were implemented in 1995 as the basis for allocating National Health Service funds to geographical areas. Since then, he has been involved in numerous studies relating to risk adjusted capitation and the associated topic of equity in health and health care.

2. Dr. Thérèse A. Stukel, Senior Scientist and Vice President, Research, at ICES; Professor of Biostatistics and Health Services Research, Dartmouth Medical School, Hanover, NH; and Adjunct Professor, Department of Health Policy, Management and Evaluation, University of Toronto. She recently developed boundaries for the Ontario Ministry of Health for Local Health Integration Networks (LHINs). She is directing a methodology group for the "LHIN Funding Model Principles Think Tank" of the Ontario Ministry of Health to develop a population needs-based funding model, based on indirect and direct measures of need, for Ontario LHINs by Spring 2005. She will also study the value added by including diagnosis groups obtained from health services utilization data using DCGs.

3. Dr. Peter Crampton, Senior Lecturer and Acting Department Head, Wellington School of Medicine and Health Sciences, University of Otago, New Zealand. Dr. Crampton has a background in public health medicine and general practice. His research is focused in two broad areas: 1) social indicators and social epidemiology, and 2) health services research related to primary health care funding and organisation. He works closely with the New Zealand Ministry of Health in a variety of policy areas related to public health and primary care. New Zealand recently introduced a population-based funding model for primary care, which incorporated an area-based measure of deprivation developed by Dr. Crampton.

Discussants:

1. Dr. Robert Reid, Department of Health Care and Epidemiology, University of British Columbia, Canada. His interests revolve around primary care organization and design as well as the translation of preventive care research into clinical practice. Dr. Reid’s research has examined the use of the ACG system to develop population-based measures of need.

2. Dr. Cameron Mustard, President and Senior Scientist, Institute for Work and Health, Toronto, and Professor, Department of Public Health Sciences, Faculty of Medicine, University of Toronto, Canada. He is also Associate Director and Fellow of the Population Health Program of the Canadian Institute of Advanced Research, a member of the Board of Directors of the Canadian Institute for Health Information, Chair of the Canadian Population Health Initiative Council, and is a member of the Research Advisory Council of the Workplace Safety and Insurance Board of Ontario. His research interests include the epidemiology of socioeconomic health inequalities across the human life course, and the distributional equity of publicly funded health and health care programs in Canada.

Session 8:

Advanced Methods for Estimating Health Disparities

Organizers: Anirban Basu, Douglas Staiger

1. Racial disparities in self-rated health at older ages: the contribution of neighborhood-level factors – Kathleen Cagney1, Christopher Browning2, and Ming Wen3. 1University of Chicago, 2Ohio State University, 3University of Utah.


Objectives: Racial differences in self-rated health at older ages are well-documented; African-Americans consistently report poorer health, even when education, income and other health status indicators are controlled. The extent to which neighborhood-level characteristics mediate this association remains largely unexplored. We ask whether neighborhood social and economic resources help to explain the self-reported health differential between African-Americans and Whites.

Methods: Using the 1990 Decennial Census, the 1994-95 Project on Human Development in Chicago Neighborhoods Community Survey, and selected years of the 1991-2000 Metropolitan Chicago Information Center Metro Survey, we examine the impact of neighborhood structure and social organization on self-rated health for a sample of Chicago residents 55+ (N=636). We use multi-level modeling techniques to examine both individual and neighborhood-level covariates.

Results: Findings indicate that affluence, a neighborhood structural resource, contributes positively to self-rated health and attenuates the association between race and self-rated health. When the level of affluence in a community is low, residential stability is negatively related to health. Collective efficacy, a measure of neighborhood social resources, is not associated with health for this older population.

Discussion: Analyses incorporating individual and neighborhood-level contextual indicators may further our understanding of the complex association between sociodemographic factors and health.


2. Valuation of Arthritis Health States Across Ethnic Groups and Between Patients and Community Members - Julianne Souchek1, Margaret Byrne2, Adam Kelly1, Marsha Richardson1, Chong Pak1, Harlan Nelson1, Maria Suarez-Almazor1. 1Baylor College of Medicine/Michael E. DeBakey VA Medical Center, Houston, TX; 2University of Pittsburgh, Pittsburgh, PA


Objectives: To examine differences in the valuation of health states by patients and community members from different ethnic backgrounds.

Methods: We surveyed 193 community members identified by random digit dialing: 64 white (W), 65 African-American (AA) and 64 Hispanic (H). The patient sample included 198 individuals diagnosed with osteoarthritis (OA) and drawn sequentially from health-provider institution clinic lists, 66 per ethnic group. Participants were interviewed face to face and asked to rate two different scenarios describing patients with arthritis (mild and severe) using a visual analog scale (VAS), standard gamble (SG), and time trade-off (TTO). Differences were adjusted for cohort, age, age-squared, gender, and education.

Results: The difference between the utility scores for mild OA and severe OA was significantly smaller for AA than W by the VAS, TTO, and SG methods. The difference between mild and severe states was smaller for H than W by the SG method. For the severe OA state the odds that AA had scores > 0.80 relative to W was 2.22 using the TTO method. Preferences for the mild OA state were not different among ethnic groups. Using the SG method, the odds that the scores were > 0.80 in the public cohort vs. the patient cohort were greater than 1 for severe OA and for mild OA. The public gave the severe OA state a higher preference score than patients did using the VAS method. Education and age had significant, independent effects on utility scores. Age increased the SG utility scores, and the difference between severe and mild health states was less by VAS for older individuals. Education ameliorated the effects of other variables on TTO and SG scores.

Conclusions: Our findings show significant differences between ethnic groups in the valuation of health, with AA reporting less difference between the mild OA state and the severe OA state than W by the VAS, TTO and SG methods of valuing health states. H were less willing than W to risk death to move from a severe OA state to a mild OA state. Members of the public were less willing than patients to risk death to achieve perfect health. These differences suggest that in health decision-making, valuations of health states cannot be used interchangeably across ethnic groups, and that the benefits and risks of treatments should be discussed thoroughly with patients before they make a treatment choice.




3. Implementing the IOM Definition of Disparities: An Application to Mental Health Care - Thomas G. McGuire, Margarita Alegria, Benjamin L. Cook, Kenneth B. Wells, Alan Zaslavsky, Harvard University

In a recent report, the Institute of Medicine (IOM) defines a health service disparity between population groups to be the difference in treatment or access not justified by the differences in health status or preferences of the groups. This paper proposes an implementation of this definition, and applies it to disparities in outpatient mental health care. The Health Care for Communities (HCC) survey re-interviewed 9,585 respondents from the Community Tracking Study in 1997-98, oversampling individuals with psychological distress, alcohol abuse, drug abuse, or mental health treatment. The HCC is designed to make national estimates. We modeled expenditures using a Generalized Linear Model (GLM) with quasilikelihoods and a probit model. We adjust for group differences in health status by transforming the entire distribution of health status for minority populations to approximate the white distribution. We compare disparities according to the IOM definition to other methods commonly used to assess health services disparities. Our method based on the IOM definition finds significant service disparities between Whites and both Blacks and Latinos. Estimated disparities from this method exceed those for competing approaches, due to the inclusion of effects of mediating factors (such as income) in the IOM approach. A rigorous definition of disparities is needed to monitor progress against disparities and to compare their magnitude across studies. With such a definition, disparities can be estimated by adjusting for group differences in models for expenditures and access to mental health services.
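A sketch of the distribution-transformation step described above, using a simple rank-and-replace mapping: each minority respondent's health-status value is replaced by the white-distribution value at the same quantile, so the adjusted groups have comparable health-status distributions. Data are purely illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
white_health = rng.normal(50, 10, 4000)
minority_health = rng.normal(46, 11, 1500)          # worse average health status (simulated)

# Rank-and-replace: map each minority value to the white value at the same quantile
ranks = stats.rankdata(minority_health) / (len(minority_health) + 1)
transformed = np.quantile(white_health, ranks)

print("means before:", round(white_health.mean(), 1), round(minority_health.mean(), 1))
print("means after: ", round(white_health.mean(), 1), round(transformed.mean(), 1))
```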

Session 9:

SELECTION BIAS IN OBSERVATIONAL STUDIES

Organizers: Anirban Basu, Douglas Staiger

When observational data are used to estimate economic endpoints, selection bias may limit an investigator's ability to generate an unbiased estimate of the causal relationship between treatment and the economic outcome. Selection bias arises when factors that influence the treatment choice, such as patient health and provider skills, also influence outcomes independently of the treatment. Adequately accounting for these factors is necessary if observational data are to be used in economic evaluations. This session focuses on work that addresses selection bias in observational studies.

1. Using Instrumental Variable Analysis and Propensity Scores to Compare Outcomes of Localized Breast Cancer Treatments in a Medicare Population - Daniel Polsky, Ph.D., University of Pennsylvania

Objective. To use instrumental variable (IV) analysis and propensity scores to correct for observational data bias in a study of the relationship between treatment for localized breast cancer (breast-conserving surgery with radiation therapy (BCSRT) versus mastectomy (MST)) and economic outcomes. The economic outcomes studied are five-year post-treatment survival, quality-adjusted life years, total medical costs, and cost-effectiveness.

Data Sources. Medicare claims for a national random sample of 2,907 women (age 67 or older) with localized breast cancer who were treated between 1992 and 1994; Medicare’s Physician Provider file; Area Resource File; and the American Hospital Association’s Annual Survey of Hospitals, 1992-1994.

Study Design. We constructed instrumental variables for treatment received from a linear probability model of the effects of economic factors and patient characteristics on actual treatment. We then estimated a linear probability model of three-year survival with both the observational data (actual treatment) and the instrumental variables for treatment. We also used five propensity score strata; the strata were validated, and treatment effects were then estimated within each stratum.
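A simplified sketch of the propensity-score arm of this design on simulated data: estimate the probability of receiving BCSRT, cut it into five strata, and average the within-stratum survival differences. The instrumental-variable analysis and the actual Medicare covariates are not reproduced here.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
n = 3000
health = rng.normal(0, 1, n)                        # prognostic factor that also drives treatment choice
bcsrt = (rng.random(n) < 1 / (1 + np.exp(-0.8 * health))).astype(int)
# True treatment effect on survival is zero; health confounds the naive comparison
survived = (rng.random(n) < 1 / (1 + np.exp(-1.5 * health))).astype(int)

d = pd.DataFrame({"health": health, "bcsrt": bcsrt, "survived": survived})
d["ps"] = LogisticRegression().fit(d[["health"]], d["bcsrt"]).predict_proba(d[["health"]])[:, 1]
d["stratum"] = pd.qcut(d["ps"], 5, labels=False)    # five propensity score strata

naive = d.groupby("bcsrt")["survived"].mean()
print("naive survival difference:     ", round(naive[1] - naive[0], 3))
by_stratum = d.groupby(["stratum", "bcsrt"])["survived"].mean().unstack()
print("stratified survival difference:", round((by_stratum[1] - by_stratum[0]).mean(), 3))
```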

Principal Findings. Contrary to the results of randomized clinical trials, which found no difference in survival, analysis with the observational data found highly significant differences in survival among the three treatment alternatives: 79.2% survival for breast-conserving surgery only (BCSO), 85.3% for MST, and 93.0% for BCSRT. The IV results and the propensity score results, in contrast, were consistent with the clinical trial results.

Conclusions. Observational data on health outcomes of alternative treatments for localized breast cancer should not be used for cost-effectiveness studies without appropriate adjustment. Both the instrumental variable method and propensity score methods produce results that are consistent with randomized clinical trials. The appropriate method depends on whether there is more potential for bias from non-linearity or from unobserved variables.