Summary Chapter 1, Introduction (Mattias Fritz)





Erik Mohlin 18240




Article: Johannesson M. Theory and Methods of Economic Evaluation of Health Care. Boston: Kluwer Academic Publishers; 1996: 173-192

(Nicklas Pettersson)


Remark: Instead of trying to draw the different figures in the chapter I have used references in the text.


First a few definitions:

  • cost-benefit analysis (CBA) = monetary valuation of the costs and benefits of different interventions

  • cost-effectiveness analysis (CEA) = nonmonetary valuation of the health outcomes of alternative interventions (years of life gained etc.)

  • cost-utility analysis (CUA) = a kind of CEA in which the benefit measure reflects individuals' preferences for the health outcomes


10 Cost-utility analysis

A treatment may lead to health effects and side-effects other than the one measured in the study. To take account of all effects, quality-adjusted life-years (QALYs) were introduced; they incorporate both quality and quantity of life. When QALYs are used in a CEA it is often called a CUA instead. (There are also other measures than QALYs, e.g. healthy-years equivalents (HYEs), which take account of individual preferences.)


The chapter is divided into three sections. The first covers the different methods of measuring the quality weights used in QALYs; the second and third cover the relationship between QALYs, HYEs and individual preferences. Next, the implications of discounting for QALYs and HYEs are discussed, and thereafter the relationship between CUA and CBA is analysed. The chapter ends with a cost-utility application and some conclusions.


10.1 QALYs and their measurement

QALYs are constructed by assigning every life-year a weight between 0 (immediate death (ID), or a health status equivalent to death) and 1 (full health, Q*), and then summing the weights over the years. One example is arthritis, where a quality weight of 0.7 per year could be used.
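A minimal sketch of the computation (the arthritis weight of 0.7 is the example from the text; the five-year profile itself is hypothetical):

```python
# QALYs: one quality weight per life-year, each between
# 0 (death) and 1 (full health), summed over the years.
def qalys(weights):
    return sum(weights)

# Hypothetical profile: two years in full health followed by
# three years with arthritis at the text's example weight 0.7.
profile = [1.0, 1.0, 0.7, 0.7, 0.7]
print(qalys(profile))  # 4.1 QALYs over 5 life-years
```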


There are two lines of reasoning behind quality weights:

1st: The weights should be determined by some kind of socio-political process or ”decision-maker”, and should be politically acceptable. On this view they need not have anything to do with individual preferences. The decision-maker view is problematic in economic evaluation because it is not clear who makes the decision (the decision-maker is usually not identified in studies). It also lacks theoretical foundation, so the weights become more or less arbitrary, and it presupposes a perfect political system in which decision-makers act as perfect agents for the population.

2nd: The weights should be based on individual preferences. If the number of QALYs is based on the individual's own quality weights, the individual should prefer the treatment that leads to the most QALYs (assuming that other costs not included in the QALYs are equal across treatments). This preference-based view is the chapter's main concern. In practice the difference between the two views may not be so clear-cut.


There are four sources of the quality weights used to construct QALYs. 1st: An assumption by the researcher (which seems unreliable); though if a sensitivity analysis shows that a specific weight does not change the conclusion of a study, it may be unnecessary to measure that weight at all. 2nd: An expert judgement (e.g. by physicians). This is close to the decision-maker approach, and the question is why the experts would actually know the individuals' preferences. 3rd: Weights from other studies. Here there is a risk that guesses turn into accepted truths, i.e. earlier mistakes are repeated. 4th: One of the following three measurement methods:


1st (fig. 1, p. 176): Rating scale, also called visual analogue scale (VAS). The patient/respondent locates a health status along a line with ID and Q* as endpoints. If the scale is not already between 0 and 1 it is normalized, leaving a quality weight between 0 and 1. This is easy to use, but the problem is that it only produces an ordinal ranking involving no choice between health states; therefore no trade-offs or differences between the observed health states can be computed.
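A sketch of the normalization step, assuming a conventional 0-100 scale (the score of 70 is a hypothetical response):

```python
def vas_weight(score, worst=0.0, best=100.0):
    """Normalize a rating-scale (VAS) score to a 0-1 quality weight,
    anchoring `worst` at immediate death and `best` at full health."""
    return (score - worst) / (best - worst)

print(vas_weight(70))  # hypothetical response -> quality weight 0.7
```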


2nd (fig. 2, p. 177): Standard gamble method (SG). The health state is compared to a gamble with probability p of Q* for a specific number of years and probability 1-p of ID. The value of p at which the individual is indifferent between the gamble and the assessed health state (Q) equals the quality weight; e.g. indifference between arthritis for 10 years and a gamble with probability 0.7 of full health and 0.3 of immediate death yields the quality weight 0.7. The method is based on von Neumann-Morgenstern expected utility theory, in which a valid cardinal utility function (CUF) assigns numerical values to different outcomes such that the expected utilities of gambles over these outcomes rank the gambles in agreement with the individual's preferences.

According to this theory, the SG method can be used to estimate a CUF over outcomes (fig. 3, p. 178). The utilities of the best and worst possible outcomes are fixed arbitrarily, and the value of every other outcome is then measured relative to these reference states using gambles. The reference states are Q* for the same number of years, and ID. At the indifference probability p the cardinal utility is u(Q) = p*u(Q*) + (1-p)*u(ID), and with u(ID) = 0 this gives u(Q) = p*u(Q*), so the method measures the utility of the assessed state as the fraction p of the utility of Q*. The advantage is the foundation in utility theory; the disadvantage is that some people may not understand the probability concept used. The choice between Q* and ID is also a very hypothetical question (it might arise in surgery, but then the time horizon is probably unknown), so the SG method is difficult to use and validate in reality.
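In code form, the indifference condition above is just the following (a sketch using the anchors u(ID) = 0 and u(Q*) = 1 from the text):

```python
def sg_weight(p_indifference, u_full=1.0, u_death=0.0):
    """Standard-gamble quality weight: at the indifference probability p,
    u(Q) = p*u(Q*) + (1-p)*u(ID), which is simply p when the anchors
    are u(Q*) = 1 and u(ID) = 0."""
    return p_indifference * u_full + (1 - p_indifference) * u_death

print(sg_weight(0.7))  # the arthritis example: quality weight 0.7
```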


3rd (fig. 4, p. 180): Time trade-off (TTO). Quality weights are assessed by finding the point where the individual is indifferent between T years in state Q and X years in Q*, and then computing X/T. The difference from SG is that the trade-off is made in healthy years rather than in the utility of healthy years. Both methods can be illustrated in the same graph:


[Figure: utility on the vertical axis, years on the horizontal axis; X years at full health Q* (utility 1) gives the same utility as T years in the poorer state Q.]
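A sketch of the TTO computation; the 7-versus-10-years indifference point is hypothetical, chosen to reproduce the arthritis weight 0.7 used earlier:

```python
def tto_weight(x_healthy_years, t_years_in_state):
    """Time trade-off quality weight: indifference between T years in
    state Q and X years in full health Q* gives the weight X/T."""
    return x_healthy_years / t_years_in_state

# Hypothetical respondent: indifferent between 7 healthy years and
# 10 years with arthritis -> weight 0.7, the same number the SG
# method should give if the QALY assumptions hold.
print(tto_weight(7, 10))  # 0.7
```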


The advantage of TTO is that it fits the theory of utility and QALYs well, since it directly exhibits two combinations of life-years and quality with the same cardinal utility, and it is easier to use than the SG method (fig. 5, p. 181). On the other hand, the choice between two time periods with different health states seems quite unrealistic. For CUA the author prefers either SG or TTO to VAS; but if VAS scores could be converted into TTO or SG weights, he would prefer VAS because of its practical advantages, though such a conversion must first be shown to be reliable. There is no obvious choice between TTO and SG: if the underlying assumptions of QALYs are valid, both should yield the same number. Both may also suffer from a starting-point bias.

Turning to the question of whose quality weights should be used, a natural candidate is the target population of a health care programme. One could also use individuals in the actual health state, though there is a risk that these subjects exaggerate the severity of their own state. Others argue that a general population should be used, since it represents potential patients and more than one quality weight can be assessed from each respondent. A problem with a general population sample, though, is that it lacks experience of the health states; there is also a problem with anchoring between questions if many weights are to be assessed from each individual.

The methods above are the usual ones, but other approaches to quality weights have also been used, e.g. multiattribute utility functions: the levels of different attributes of a health state are observed and then entered into a utility function to derive a quality weight.


10.2 QALYs and individual preferences

To be useful, QALYs must reflect the individual's preferences, i.e. the individual should rank different treatments such that the preferred treatment is the one that gives the most QALYs. The only theoretical foundation that has been given for QALYs is that they constitute a cardinal utility function (CUF). In practice this means that the expected utility of a gamble, i.e. the cardinal utilities of its outcomes weighted by their probabilities, ranks different gambles over health profiles in accordance with the individual's preferences. A health profile is a combination of a number of life-years and a health status in each year until death.


To describe the assumptions under which QALYs are a valid CUF, a simplified model is used (Pliskin's model, which is directly related to TTO and SG) with the following assumptions: utility depends solely on Q and T; all health states are preferred to death and no health state is worse than death; a lifetime health profile is a pair (Q,T); the health state is constant, with no variation over time; and QALYs are not discounted and are risk-neutral (the generally used model; compare the risk-adjusted model below). Pliskin identifies three conditions for QALYs to be a valid CUF, which restrict the shape of the utility function even further:

1st (fig. 6, p. 186): T and Q must be mutually utility independent; e.g. indifference between a health state for one year and a gamble for one year implies indifference between the health state for an arbitrary number of years and the gamble over that same number of years. The shape of the utility function over life-years is then the same for all health states, and all the utility functions start at zero. If the ranking of gambles over quality is independent of life-years, the ranking of gambles over life-years is also independent of quality.

2nd (fig. 7, p. 187): The constant proportional trade-off property, i.e. the proportion of remaining life one is willing to trade for a specified quality improvement is independent of the amount of remaining life. The quality weight then equals the fraction X/T of healthy years.


If both these assumptions hold, a specific shape of the utility function over life-years is imposed, known as constant proportional risk posture over life-years (fig. 8, p. 188); this is also called constant relative risk aversion. Such a utility function exists if c = -T*U''(T)/U'(T) is constant, where U' and U'' are the first and second derivatives of the utility function with respect to life-years. If the assumptions hold, we also know that risk-adjusted QALYs (RAQs) are a valid CUF: U(Q,T) = [V(Q)*T]^r, where r = 1 - c is a risk parameter (r = 1 means risk neutrality). V(Q) is a value function for quality measured on a scale from 0 (ID) to 1 (Q*); it measures the desirability of Q relative to Q* as a fraction of healthy years, i.e. the quality weight computed from TTO. RAQs can also be expressed in terms of SG weights: U(Q,T) = U(Q)*T^r, where U(Q) is measured on a similar scale and gives the desirability of Q as a fraction of the utility of the same number of years in Q*, i.e. the SG quality weight. In short, the assumptions of the RAQ model are that constant proportional risk posture over life-years holds in all health states and that the risk parameter r is the same in all health states (i.e. relative risk aversion is constant). Note that using RAQs would necessitate estimating the risk parameter r. RAQs have seldom been used in actual applications, and were derived for health profiles with constant quality over time.
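A sketch of both RAQ formulas (the weight 0.7, the 10-year horizon and the risk parameter 0.8 are hypothetical inputs):

```python
def raq_tto(v_q, t, r):
    """Risk-adjusted QALYs from a TTO weight: U(Q,T) = [V(Q)*T]^r."""
    return (v_q * t) ** r

def raq_sg(u_q, t, r):
    """Risk-adjusted QALYs from an SG weight: U(Q,T) = U(Q)*T^r."""
    return u_q * t ** r

# With risk neutrality (r = 1) both collapse to weight * T, i.e. the
# usual risk-neutral QALYs: 0.7 * 10 = 7.0.
print(raq_tto(0.7, 10, 1.0), raq_sg(0.7, 10, 1.0))  # 7.0 7.0

# A risk-averse subject (r < 1) assigns the same profile fewer
# risk-adjusted QALYs.
print(round(raq_tto(0.7, 10, 0.8), 2))  # 4.74
```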


For the commonly used risk-neutral QALYs (RNQs), the shape of the utility function has to be constrained even further (fig. 9, p. 190): RNQs(TTO) = U(Q,T) = V(Q)*T and RNQs(SG) = U(Q,T) = U(Q)*T. If risk neutrality holds (i.e. r = 1), then V(Q) = U(Q), and both the quality weights and the number of QALYs will be the same for the TTO and SG methods. The assumptions of RNQs can thus be stated as: risk neutrality over life-years holds in all health states, i.e. the utility function with respect to life-years is linear in all health states. Importantly, the TTO and SG methods have the same theoretical foundation in the theory of QALYs: if one is valid, so is the other, and TTO is thus not derived from SG. TTO also has a theoretical advantage over SG: when estimating RNQs under certainty (no probabilities over outcomes), the number of QALYs based on TTO will always rank health profiles according to individual preferences as long as the constant proportional trade-off assumption holds; this is not true for SG (fig. 8, p. 188). If risk is introduced, neither measure is a valid CUF, and neither will then rank risky health profiles in the same way as expected utility.


Relaxing the assumption of constant quality over time, the individual has a probability distribution over health states at each point in time, and the QALYs are simply the sum over time of the quality weights multiplied by their probabilities. For QALYs to be a valid CUF when quality is not constant, additive utility independence between periods has to be assumed, i.e. the utility contribution of the health state in one period is independent of the health states in other periods. Additive utility independence does not by itself imply risk neutrality; for that, one has to assume both risk neutrality with respect to life-years in all health states and additive utility independence. If additive utility independence holds, the utility of a health state is the same in every year and can be multiplied by the probability of that health state to obtain its contribution to the total utility. If it does not hold, the utility of a health state depends on the health states in previous years (unless the health state is constant, in which case the utility is the same every year anyway).
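A sketch of the expected-QALY computation under additive utility independence; the states, weights and probabilities below are hypothetical:

```python
# Each life-year has its own probability distribution over health
# states; under additive utility independence the years' expected
# quality weights simply add up.
def expected_qalys(period_distributions, weights):
    return sum(
        sum(p * weights[state] for state, p in dist.items())
        for dist in period_distributions  # one distribution per year
    )

weights = {"full_health": 1.0, "arthritis": 0.7}
profile = [
    {"full_health": 1.0},                    # year 1: certain full health
    {"full_health": 0.5, "arthritis": 0.5},  # year 2: 50/50
    {"arthritis": 1.0},                      # year 3: certain arthritis
]
print(expected_qalys(profile, weights))  # 1.0 + 0.85 + 0.7 = 2.55
```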


The essential feature of QALYs is that the quality weight for a given health state is always the same: it is measured once and can then be reused. This makes QALYs well suited to, for example, Markov models, where all persons are assumed to be identical.


In the general case with health status varying over time, QALYs thus have to be based on risk neutrality over life-years in all health states and on additive utility independence. Under uncertainty, where the assumption of additive utility independence would have to be dropped, one would instead have to measure the utility of whole health profiles directly, rather than measuring quality weights for health states. HYEs (mentioned above) are one way of doing this, i.e. assessing health profiles instead of health states.

Article: Designing and Conducting Cost-Benefit Analyses (Christofer Ohlsson)

Authors: Magnus Johansson and Milton C. Weinstein (p 1085-1092)


INTRODUCTION


In cost-benefit analysis both costs and benefits are measured in monetary terms. Unlike cost-effectiveness analysis, cost-benefit analysis permits a direct comparison of benefits and costs in the same units. The challenge is to measure health benefits in monetary units. The consequences of a program should be measured as the willingness to pay of the individuals who bear the consequences. These consequences may include benefits such as increased probabilities of survival or improved quality of life, and offsetting adverse effects such as side effects and inconvenience of treatment. Thus, the major requirement in the design of cost-benefit (compared to cost-effectiveness) studies is a method for measuring willingness to pay for the net consequences of health programs.


HUMAN CAPITAL


Until recently, most cost-benefit studies in the health field used the human capital method to value health improvements. This method values life saving and health improvements in terms of the additional economic productivity associated with improved health, as measured by earnings in the labour force, and in terms of decreased health care costs. However, this approach does not measure the value that individuals place on their own health and survival. Instead, social choice theory implies that the value of health improvements should be based on individuals' own willingness to pay for these improvements. These values can in turn be based either on revealed preferences in markets or on direct elicitations known as contingent valuations.


REVEALED PREFERENCES


The revealed preferences approach observes decisions that individuals actually make concerning health risks, thereby inferring their willingness to trade money for these consequences. Examples of sources of revealed preference data are the labor market (where wage premiums are offered to induce workers to accept riskier jobs) and consumer decisions such as whether to use automobile safety belts or smoke detectors. However, it is often difficult to apply revealed preferences to health care programs, because health care services are usually not purchased directly on a market.


An alternative method is to assess willingness to pay for the consequences themselves (such as changes in mortality probabilities and health status). For example, willingness to pay for a hypertension treatment program could be estimated by revealed preferences from labour market studies in which health effects comparable in magnitude to those of the hypertension program are valued. One disadvantage of decoupling the valuation of consequences from the program context is that the context effects may be important - the importance attached to health risks depends on numerous attributes of the source of those risks and not just their magnitude. Still, if evidence from actual decision-making behaviour is desired, it may be necessary to extrapolate from one context of health risk reduction to another.


CONTINGENT VALUATION


The chief alternative to using revealed preference to estimate the willingness to pay for improved health consequences is to use surveys to investigate the expressed willingness to pay of individuals. This method is known as the contingent valuation method and has led to increased interest in the application of cost-benefit analysis to medical care.


OPEN-ENDED CONTINGENT VALUATION QUESTIONS


Contingent valuation questions are classified as either open-ended or binary valuation questions. In open-ended valuation questions the researcher tries to measure each respondent's maximum willingness to pay, whereas in binary valuation questions the respondent accepts or rejects a single price (bid) level for the good. An example of an open-ended contingent valuation question: “This question concerns how you value your treatment against high blood pressure…bla bla bla…At present, a patient treated for high blood pressure pays on average SEK 350 a year in user fees. How much is the highest amount you would be willing to pay per year in the form of user fees for your current treatment? ………… SEK per year”


It is possible to ask the respondent directly for the maximum willingness to pay, but usually aids are used to make answering easier. One aid used in interviews is the bidding game: a first bid is made to the respondent, who accepts or rejects it, and the bid is then raised or lowered depending on the answer; the process goes on until the maximum willingness to pay is reached. Another aid is payment cards, which display a range of willingness-to-pay values from which the respondent may choose. The major problem with the bidding game is the risk of starting-point bias (see the discussion of biases below); payment cards can lead to similar problems because the multiple choices offered may affect the selection made.
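A sketch of the bidding-game procedure, assuming a respondent whose true maximum willingness to pay is SEK 900 (the starting bid of SEK 350 echoes the user-fee example above; everything else is hypothetical):

```python
def bidding_game(accepts, first_bid, step, rounds=10):
    """Run a bidding game: raise the bid after an acceptance, lower it
    after a rejection, halving the step to close in on the point of
    indifference. `accepts` is a callback standing in for the respondent."""
    best_accepted = 0.0
    bid = first_bid
    for _ in range(rounds):
        if accepts(bid):
            best_accepted = max(best_accepted, bid)
            bid += step
        else:
            bid = max(best_accepted, bid - step)
        step /= 2
    return best_accepted  # approximates the maximum willingness to pay

# Hypothetical respondent with a true maximum WTP of SEK 900. Note that
# the path depends on the first bid -- the starting-point bias risk.
print(bidding_game(lambda bid: bid <= 900, first_bid=350, step=400))  # 900.0
```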


BINARY CONTINGENT VALUATION QUESTIONS


Problems with starting point bias in contingent valuation questions have led to the use of binary valuation questions. In a binary contingent valuation question, the respondent is asked to accept or reject a single bid, which they would have to pay in exchange for a program or some improvement in health status. An example of a binary contingent valuation question: “This question concerns how you value your treatment against high blood pressure…bla bla bla…At present, a patient treated for high blood pressure pays on average SEK 350 a year in user fees. Would you choose to continue your current treatment against high blood pressure, if the user fees for the treatment were raised to SEK 2000 per year? …..YES …..NO


A respondent who accepts the bid in a binary contingent valuation question is assumed to have a maximum willingness to pay in excess of the bid, whereas a respondent who rejects the bid is assumed to have a maximum willingness to pay less than the bid. The population of respondents is stratified into subsamples, each offered a different bid. By analyzing the subsamples, it is possible to calculate the proportion of respondents who are willing to pay each bid (price). This is illustrated in Fig 1: The curve is interpreted as an aggregate demand curve for the program or health improvement.





Binary valuation questions are the most commonly used elicitation technique. An advantage of this technique is that it resembles a market situation, because individuals are used to deciding whether to buy a good at a specific price. The binary approach also avoids the problem of starting-point bias, because the individual is given only one bid.


The disadvantage of the binary approach is that each respondent provides substantially less information than with open-ended questions. To estimate the mean willingness to pay, the relationship between the bid and the proportion of respondents who are willing to pay it, i.e. the curve in Fig 1, has to be estimated. The mean willingness to pay in the population is the area below the curve, and the median willingness to pay is the price at which the proportion of acceptance is 0.5, i.e. the price at which 50% would answer yes and 50% would answer no. To estimate the relationship between the proportion willing to pay and the bid, either regression analysis or nonparametric methods can be used (for details, see page 1088).
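A minimal nonparametric sketch of these two estimates; the bids and acceptance proportions are hypothetical, and the mean is truncated at the highest bid offered (a conservative choice in the spirit of the NOAA guidelines below):

```python
import numpy as np

# One subsample per bid; the proportion accepting traces the demand
# curve of Fig 1 (acceptance falls as the bid rises).
bids = np.array([0.0, 500, 1000, 2000, 4000])      # SEK per year
accept = np.array([1.00, 0.85, 0.60, 0.35, 0.10])  # proportion saying yes

# Mean WTP: area under the acceptance curve (trapezoid rule),
# truncated at the highest bid offered.
mean_wtp = np.sum((accept[:-1] + accept[1:]) / 2 * np.diff(bids))

# Median WTP: the bid at which acceptance crosses 0.5, by linear
# interpolation (np.interp needs increasing x, hence the reversal).
median_wtp = np.interp(0.5, accept[::-1], bids[::-1])

print(f"mean WTP ~ SEK {mean_wtp:.0f}, median WTP ~ SEK {median_wtp:.0f}")
# mean WTP ~ SEK 1750, median WTP ~ SEK 1400
```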


In practice, it is useful to use both the nonparametric method and logistic regression to analyze data from a binary contingent valuation study. The general recommendation in the literature is currently to use binary rather than open-ended contingent valuation questions because binary questions more closely reflect real decisions made by the respondents on the market and because open-ended questions suffer from starting-point bias (if bidding games or payment cards are used) or nonresponse (if maximum willingness to pay is assessed directly).


POTENTIAL BIAS IN A CONTINGENT VALUATION STUDY


The survey instrument in a contingent valuation study is of the utmost importance. When constructing it, a number of sources of potential bias must be borne in mind. These can be divided into five main areas: incentives to misrepresent responses, implied value cues, scenario misspecification, sample design and execution biases, and inference biases. The first three concern the design of the instrument, whereas the last two concern more general issues relating to the sampled population and to the way the results are used.


  1. Incentives to misrepresent responses produce two types of bias: strategic and compliance bias. Strategic bias means that respondents feel it is in their self-interest to state a lower or higher amount than their true value, e.g. the free-rider problem. The best-known type of compliance bias is interviewer bias: respondents overstate or understate their true valuation to please the interviewer.

  2. Implied value cues: starting-point bias (the respondent's valuation is affected by some suggested amount in the question, e.g. the first bid in a bidding game); range bias (instead of a single starting point, a range of potential amounts affects the respondent, e.g. payment cards); relational bias (the description of the good presents information about its relationship to other goods, thereby influencing the amount given by the respondent); importance bias (can arise if the valuation instrument suggests that the good being valued is particularly important, which may lead respondents to exaggerate their valuation); and position bias (induced if the ordering of questions suggests some value of the good that influences the valuation).

  3. Scenario misspecification occurs when the respondent does not respond to the intended contingent scenario, usually because the question is formulated incorrectly: theoretical misspecification (the scenario represents an incorrectly specified consequence of the policy change being valued); amenity misspecification (the good being valued by the respondent differs from the one intended by the researcher); and context misspecification (also a case of misunderstanding, but of the market context rather than the good itself).

  4. Sample design and execution biases are mainly statistical problems. They arise if the population surveyed does not correspond to the population to whom the benefits will accrue. Nonresponse also biases the results if the respondents' willingness to pay differs from the unmeasured willingness to pay of the nonrespondents.

  5. Inference biases can arise if preferences have changed between the time of the contingent valuation survey and the time the results are used for decision making. They may also occur if the analyst adds together willingness-to-pay amounts for a number of different goods to evaluate a policy package, even though the goods were valued independently of each other.


The following conditions are useful to think about in the design of a contingent valuation study – that is, the scenario should be: theoretically accurate, policy-relevant, understandable by the respondents as intended, plausible to the respondent, and meaningful to the respondent. This illustrates the importance of a plausible and meaningful scenario to enhance validity. The first two criteria can be fulfilled through correctly specifying the problem, and the other three can be satisfied by careful pilot testing of the instrument.


THE NOAA PANEL REPORT AND THE NOAA PROPOSED REGULATIONS


There has been a major controversy regarding the use of the contingent valuation method to measure so-called “existence values” in environmental economics. Existence values are values of environmental resources not directly related to any use of the resources. The premise is that people are willing to pay to preserve a wilderness area even though they never expect to visit the area. Existence values can be viewed as a form of altruism, and the issue is thus similar to the question of how to measure the willingness to pay of individuals for health programs that affect others.


There has been a fierce debate among economists about whether the contingent valuation method can be used to assess existence values. Following this debate, the government agency responsible for damage assessments in connection with oil spills, the National Oceanic and Atmospheric Administration (NOAA), appointed a panel of economic experts to evaluate the use of the contingent valuation method in determining existence values. The panel identified a number of problems with the method, but at the same time stated that the method seemed to yield some useful information (see page 1090). The panel also proposed a number of guidelines for contingent valuation studies:

  • Use the binary approach for eliciting willingness to pay, rather than open-ended questions.

  • Use interviews rather than mail surveys. Face-to-face interviews are the preferred approach, but telephone interviews are acceptable in some cases.

  • A budget constraint should be imposed on the respondents by reminding them of alternative spending possibilities and making it clear that the payment will reduce the consumption of other goods.

  • Respondents shall be given the option of not responding to a contingent “yes or no” question (by adding a “no answer” or “don’t know” option).

  • Answers shall be followed up with a question about the reason for the response – as a way to sort out responses that reflect true valuations of the good from protest bids and other types of answers.

  • Use a “conservative design” of contingent valuation studies: when there is ambiguity about some aspect of the survey or the analysis of the data, the option that will tend to underestimate willingness to pay should be chosen.


CONTINGENT VALUATION AND CLINICAL RESEARCH


If a clinical trial is used as a basis for a cost-benefit analysis, there are two different ways to use the contingent valuation method. The contingent valuation method could be incorporated directly in a clinical trial that compares two or more alternative treatments to investigate the willingness to pay for the treatments in the different groups. One test of validity could then be carried out by testing if the willingness to pay increases with the size of the health effects. If, for example, an experimental treatment reduces the risk of some event, one could measure the perceived risk reduction of the patients and test whether the willingness to pay increases with the perceived risk reduction.


Alternatively, the results of the clinical trial could be used to describe the health consequences of different treatments, and then these health improvements could be valued using the contingent valuation method in either a general population sample or a sample of patients with the disease under study. By valuing the size of the health improvement in different subsamples, it can be tested if willingness to pay increases with the size of the health improvement.


The authors conclude by stating that cost-benefit analysis and cost-effectiveness analysis should not necessarily be viewed as mutually exclusive approaches. To use cost-effectiveness analysis for decision making, the willingness to pay per effectiveness unit (e.g. life-years or QALYs gained) has to be determined. The contingent valuation method (or revealed preference studies) could then be used to estimate the willingness to pay per unit of effectiveness in order to provide an external standard for cost-effectiveness ratios.


It is also important to remember that the contingent valuation method should still be regarded as an experimental method. Much more basic research remains to be done to test its validity before it can be used with any confidence in decision making about the allocation of resources in medical care.


Article: At what coronary risk level is it cost-effective to initiate cholesterol lowering drug treatment in primary prevention?


Author: M. Johannesson


Summary author: Fredrik Nilsson


Introduction


Coronary heart disease is one of the most common causes of death in Western societies, and the cholesterol level is one of its most important risk factors. Considerable efforts have been made to develop effective cholesterol-lowering drugs, and it has become important to show that they are cost-effective, i.e. provide good value for money; this is especially important for interventions that may involve large fractions of the population. In secondary prevention, i.e. in patients with pre-existing coronary heart disease, treatment with cholesterol-lowering drugs has been shown to be cost-effective in most patient populations, owing to the high absolute risk of coronary heart disease in this group.


In primary prevention, cholesterol-lowering treatment is unlikely to be cost-effective for all patients with elevated cholesterol levels, so it is crucial to determine in which patient populations treatment should be initiated. In this study the author estimates at what risk of coronary heart disease it is cost-effective to initiate cholesterol-lowering drug treatment in Sweden for men and women of different ages.


Methods


The cost-effectiveness was estimated as the incremental cost per quality-adjusted life-year (QALY) gained of cholesterol lowering drug treatment compared to no treatment. QALYs are constructed by weighting different health states between 0 (dead) and 1 (full health). The analysis included both direct and indirect costs of the intervention and morbidity, and the full future costs of decreased mortality. Estimations were carried out for men and women separately at eight different ages (35, 40, 45, 50, 55, 60, 65 and 70 years). A treatment duration of 5 years was used and it was estimated at what 5-year-risk of coronary heart disease it would be cost-effective to initiate cholesterol lowering drug treatment. For the treatment to be cost-effective the cost per QALY gained had to be at or below a specific cost-effectiveness threshold. The threshold value corresponds to how much society is willing to spend in order to gain a QALY. The thresholds used were: $40 000, $60 000 and $100 000 per QALY gained.
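A sketch of the threshold comparison; the incremental cost and QALY gain are hypothetical numbers, while the three thresholds are the ones used in the study:

```python
def cost_per_qaly_gained(delta_cost, delta_qalys):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained
    by treatment compared to no treatment."""
    return delta_cost / delta_qalys

# Hypothetical treatment: $30,000 extra net cost, 0.6 QALYs gained.
icer = cost_per_qaly_gained(30_000, 0.6)  # $50,000 per QALY gained
for threshold in (40_000, 60_000, 100_000):  # the study's thresholds
    verdict = "cost-effective" if icer <= threshold else "not cost-effective"
    print(f"threshold ${threshold:,}: {verdict}")
```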


The cost-effectiveness model

Based on incidence and survival data, the model can be used to estimate the average 5-year-risk of a coronary event and the life-expectancy among persons initially free from cardiovascular disease in Sweden. These estimates are shown in Table 1.


Table 1. The average risk of coronary heart disease and the predicted life-expectancy for men and women in Sweden who are initially free from cardiovascular disease

Age (years)    5-year risk of a CHD event (%)    Life expectancy (years)
               Men       Women                   Men       Women
35             0.23      0.06                    43.18     47.74
40             0.61      0.18                    38.45     42.90
45             1.29      0.42                    33.83     38.13
50             2.36      0.81                    29.32     33.49
55             3.83      1.39                    25.02     29.00
60             5.41      2.22                    20.94     24.67
65             6.99      3.34                    17.20     20.56
70             8.12      4.64                    13.84     16.73

The estimated life-expectancy among 35-year-old men and women corresponds almost exactly to the life-expectancy in the general Swedish population. Among 70-year-olds the estimated life-expectancy is somewhat higher than in the general population, which is logical since the estimate is for a population initially free from cardiovascular disease.


To estimate at what risk level treatment is cost-effective, the coronary risk in the model was raised until the cost per QALY gained of treatment corresponded to the threshold value of a QALY gained used in the estimations.
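A sketch of that search, using a toy stand-in for the article's model (a fixed net treatment cost and a QALY gain proportional to baseline risk; both numbers are purely illustrative, not the article's):

```python
def risk_cutoff(icer_at_risk, threshold, lo=0.0, hi=1.0, tol=1e-6):
    """Bisection for the 5-year risk at which the cost per QALY gained
    equals the threshold, assuming the ICER falls as baseline risk rises."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if icer_at_risk(mid) > threshold:
            lo = mid  # still too expensive per QALY: raise the risk
        else:
            hi = mid
    return (lo + hi) / 2

# Toy model: $2,000 net cost over 5 years; QALY gain of 0.35 * risk.
icer = lambda risk: 2_000 / (0.35 * risk)
print(f"cutoff at $60,000/QALY: {risk_cutoff(icer, 60_000):.1%}")  # ~9.5%
```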


Costs

The costs of the intervention were divided into the costs of the drug, of laboratory tests and of physician visits. The morbidity-related costs after a coronary event were divided into health care costs (direct costs) and lost productivity (indirect costs) due to the event. The difference between total consumption and production in added life-years was also included in the cost-effectiveness analysis.


Quality of life adjustment

To use QALYs, data about quality of life weights in persons free of coronary heart disease are needed. Different quality weights were used in different age-groups.


Results


The optimal risk cut-off

Table 2 shows at what 5-year-risk of coronary heart disease it is cost-effective to initiate cholesterol lowering drug treatment for men and women at different ages.


Table 2. The optimal risk cut-off for men and women at different ages for different valuations of a quality-adjusted life-year (QALY) gained. Lipid lowering treatment is cost-effective if the 5-year-risk of coronary heart disease exceeds the per cent risk shown in the table. The percent of the population eligible for treatment according to the risk cut-off is shown in parentheses.




The optimal risk cut-off is shown for three different threshold values of a QALY gained. Irrespective of the threshold value, the optimal risk cut-off increased with age and was higher for men than for women. If society is willing to pay $60 000 to gain a QALY, it was cost-effective to initiate treatment if the 5-year risk of coronary heart disease exceeded 2.4% for 35-year-old men, 4.6% for 50-year-old men and 10.4% for 70-year-old men. The corresponding risk cut-off values for women were 2.0%, 3.5% and 9.1%.


The fraction of the population eligible for treatment increased with age and was higher for men than women.


Discussion

The results from this study can serve as a basis for developing treatment guidelines for cholesterol lowering in primary prevention based on cost-effectiveness. By comparing the absolute risk of a patient with the risk cut-off value for that age and gender it can be determined if cholesterol lowering drug treatment is cost-effective.


From a cost-effectiveness viewpoint, a problem with the current recommendations (the “Sheffield table” for primary prevention and the European Society of Cardiology guidelines) is that the risk cut-off for treatment is independent of age and gender. As shown in this article, the optimal risk cut-off value varies greatly, especially with age. The risk cut-off values in the above guidelines are reasonably similar to the article's estimates at older ages, but are well above the estimates at younger ages. Thus, from a cost-effectiveness viewpoint, the guidelines seem overly conservative as a basis for treatment decisions in younger and middle-aged men and women.

