DR. MASTROTOTARO: To some degree you can, yes.
The other point about this particular one that they showed in the analysis is that it is actually from a group of sensors that were manufactured with different sensitivities than the other ones we used in the study. One of the things that we were doing in the clinical trial was investigating not only how well sensors in general perform, but how sensors with various sensitivities -- whether produced in different ways or simply having different resulting sensitivities -- behaved.
We knew that there were a few batches with lower sensitivity based on in vitro tests of samples that we tested prior to using the devices. But we wanted to see just how low we could go with sensitivity and what kind of range was appropriate for the system. In interest of making sure that we sent everything -- we sent everything to the FDA. So, every sensor, regardless of the range, was sent. Sensors -- we now know sensors with sensitivities that are somewhat lower do not perform as well as other ones with higher sensitivity and, therefore, manufacturing devices today, a sensor like this would never be used in the field.
DR. HABIG: I think I will ask one more question then about this one.
Was this a sensor used by a patient at home, or do you know exactly the conditions under which those glucose readings were obtained? Was it a patient at home with their glucose meter?
DR. MASTROTOTARO: This is John Mastrototaro again.
Yes. All the data from the study that was presented today were from patients using the system at home with their meter.
DR. HABIG: Okay. I mean, just visually looking at it, it looks like the early part of the results from the glucose meter is high -- aberrantly high, perhaps -- and in the later part, they are closer or low. It occurs to me that there could be a glucose meter issue here that certainly compounds things, if it is a sensitivity issue.
DR. MASTROTOTARO: This is John Mastrototaro again. One potential issue that we did not address is that the comparative meter readings we used in the analysis were assumed to be the gold standard.
If there were meter values that were outliers, they are still part of the data analysis and in terms of presenting the data to you today, it is presented as if the system -- the CGMS system was at fault when in actuality there could be cases when it was the meter that was at fault.
DR. HABIG: This is still Bob Habig.
But that will be true in the marketplace. Those meters will be out there, and if people's meters aren't working, if they don't keep them running well, then this could happen in the marketplace as well.
DR. MASTROTOTARO: That is true.
DR. HABIG: Then I guess my last question is: does the sponsor know whether this is the worst case -- FDA found it and has used it -- and could there be a bunch more of these, or is this really the worst one? Of the two slides, one was a really good fit and this is a really bad fit. Is there some sense of how atypical this is?
DR. MASTROTOTARO: This is John Mastrototaro again.
Qualitatively looking at the slides, and comparing ones that look pretty with ones that don't look as pretty, this is in the bottom, probably, 2 to 3 percent of the slides. In fact, because it has three days' worth of data, it is probably one of only a few that has that much data that looks like that.
DR. HABIG: Thank you. No further questions.
DR. NIPPER: Thanks, Dr. Habig.
Janine Janosky, Dr. Janosky.
DR. JANOSKY: Primarily, the questions will deal with biostatistical issues but not 100 percent.
It seems to me that during the calibration phase, you are actually looking at association. You are trying to associate the values that you are getting from the meter with the values that you are getting from your device and the relationship between that.
Once we get past the calibration phase, what are we looking at though? Should we still be looking at association or should we exclusively be looking at agreement? Because now the issue is not what is the relationship between these two devices. It is what is the actual value that I should be reporting and then ultimately that I might make a decision upon.
So, talk about that issue a little bit and address why you are not dealing exclusively with agreement when we look at that validation phase.
DR. GROSS: This is Todd Gross.
Let me first say that I would characterize the calibration as one of unit conversion rather than association in the strict statistical sense, because the sensor has known operating principles that suggest it does respond to glucose. We simply need to know what the conversion factor is from sensor output to milligrams per deciliter of glucose.
Once that conversion has been made, we are expressing the sensor's output in milligrams per deciliter and comparing it to meter values that are expressed in milligrams per deciliter. The relationship demonstrates the ability of the sensor to track up and down, and that information is useful, but it is not the only piece of information that clinicians would need.
So knowing that there is a high correlation between the sensor and the observed meter values is useful for knowing that there are highs and lows. Particularly in cases where the calibration from sensor to sensor may have some error in it, it is still useful to know that the sensor is tracking up and down.
In terms of agreement, we have looked at two different measures of agreement and with the help of the FDA, we have focused more on categorical agreement because that expresses the ability of the sensor to identify the specific excursions, high and low. But we feel that both pieces of information are useful.
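The two measures Dr. Gross distinguishes -- association and categorical agreement -- can be sketched as follows. This is only an illustration: the paired readings are hypothetical, and the 70 and 180 mg/dL cut-points are assumed for the example, not taken from the study's definitions.

```python
# Hypothetical paired readings (mg/dL); not data from the study.
meter = [60, 95, 130, 210, 75, 250]
sensor = [65, 100, 150, 190, 68, 240]

def category(value, low=70, high=180):
    """Classify a glucose value as low, normal, or high (illustrative cut-points)."""
    if value < low:
        return "low"
    if value > high:
        return "high"
    return "normal"

# Association: Pearson correlation of the paired values, computed directly.
n = len(meter)
mean_m = sum(meter) / n
mean_s = sum(sensor) / n
cov = sum((m - mean_m) * (s - mean_s) for m, s in zip(meter, sensor))
var_m = sum((m - mean_m) ** 2 for m in meter)
var_s = sum((s - mean_s) ** 2 for s in sensor)
r = cov / (var_m * var_s) ** 0.5

# Categorical agreement: fraction of pairs landing in the same glucose category.
agreement = sum(category(m) == category(s) for m, s in zip(meter, sensor)) / n
```

The point of the distinction is visible in the example: the pair (75, 68) tracks closely and barely hurts the correlation, yet it straddles the low cut-point and counts as a categorical disagreement.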
DR. JANOSKY: Ultimately, you will want to be able to look at one point and be satisfied that that point is the actual value at that point in time. Is that true? Or is it ultimately that you just want to look at the rhythm?
MR. GREGG: Dr. Marcus, could you respond to that, please?
DR. MARCUS: I think the recognized clinical utility of self blood glucose metering systems, which we all utilize and which have been invaluable, is to enable the patient to make acute changes at the time when an event occurs, either hyper- or hypoglycemia. The purpose of the CGMS sensor is to look at patterns and trends, which may occur unbeknownst to the patient, and to interpret those in a retrospective manner that will allow the patient to have a better outcome in the future.
So, what is of interest to the clinician is the patterns and trends. A visual review of the graphs that are generated, together with the self blood glucose metering results, will tell the clinician how good the fit is, and that is one of the discriminations the physician will perform in deciding whether to weight the data very heavily or not as heavily as he might otherwise.
DR. JANOSKY: Okay. If that is the proposed clinical application, you are not evaluating it along those lines. Do you have data that support that, irrespective of point estimations, the trends are in agreement? Because you are looking at point estimations, but now you are telling me that the clinical application is more likely trend estimations. Those are two very different concepts, two different ways of evaluating the device.
So, have you evaluated this device based upon trend estimations, not via points but via curves, via whatever might be going on?
DR. GROSS: We haven't performed any analyses that look at anything other than the paired measurements between the meter and the sensor.
Obviously, the optimal analytical framework would be to have the true glucose at various points in time. The sensor measures every five minutes, so we would like to have true glucose every five minutes. That wasn't feasible in a home use study. So, we have not yet created a data set that would allow us to more accurately measure trends beyond what we have done with the correlation, which measures trends, but clearly in a situation with a reduced amount of data available -- 11 or 12 meter measurements per day.
DR. JANOSKY: Okay. So, you don't have data supporting trends in terms of curve functions, increasing functions, even though it was suggested that perhaps that was a clinical use of the device?
DR. MASTROTOTARO: This is John Mastrototaro.
I would like to discuss that a little bit, because that is what we wanted to try to do with the categorical agreement, though I don't think the way we did the categorical agreement exactly addressed what you are proposing. To show you an example on this slide: when we are doing the categorical agreement, what we would have liked to be able to say is that when the meter values were low and the sensor was low, we caught that downward trend in glucose levels, and, conversely, when the meter values went high and the CGMS tracked that high trend, we would like to say, "yes," we successfully monitored and tracked that trend.
So, if you looked at this particular slide, in this case there is one event of a high excursion here and one event of a low excursion there. Yet, in actuality, in order to get a quantitative measure in the categorical agreement that we did, each time there is a meter value here, we tried to get a paired sample out of it. So, this would potentially result in four values when it is really only one event; likewise, on the high side, there are two values when there should really be only one event.
One of the problems you also see in this is that because the timing of the blood glucose measurement differs from the sensor reading, you can actually get a blood glucose value of 90, say, in this particular example, coupled with a sensor value of 50 or so. In the categorical agreement, that pair would be counted as a disagreement, and yet if you are looking just at this plot, you would say that, yes, the sensor does track that low glucose value.
So, that is one of the problems, I think, that we had with the way the categorical agreement was done, and we did not come up with a quantitative way of addressing it. Qualitatively, we could certainly go through slide by slide and say, yes, it was high when it was high and low when it was low, but we didn't think that would be an appropriate way to do it.
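The counting problem Dr. Mastrototaro describes -- several paired samples falling inside one excursion -- can be sketched by collapsing consecutive runs of the same category into single events. The category sequence below is hypothetical, purely to illustrate the pair-based versus event-based tallies.

```python
from itertools import groupby

# Hypothetical sequence of meter categories at paired-sample times during one day.
# Four consecutive "low" readings belong to one excursion, not four events.
paired_categories = ["normal", "low", "low", "low", "low", "normal", "high", "high"]

# Pair-based counting: every paired sample is tallied separately.
low_pairs = paired_categories.count("low")  # 4 pairs from a single excursion

# Event-based counting: collapse consecutive runs of the same category,
# so each excursion contributes one event regardless of its duration.
events = [key for key, _ in groupby(paired_categories)]
low_events = events.count("low")  # 1 event
```

Under pair-based counting, one sustained low excursion with four meter sticks contributes four entries to the agreement table; under event-based counting, it contributes one, which is closer to what the panel's question about trends is asking.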
DR. JANOSKY: I will take it in a slightly different direction for awhile.
Let's think about linear regression, which is what you were using for your calibration method -- or attenuation, or whatever you prefer to call that particular phase -- where you are trying to determine what factors you need to apply to these values to get to where you want to be.
You are not constrained in what those values are. Is that correct? So, for one patient, those values might be hovering around a hundred -- unlikely, but they might be. For another patient, those values might go anywhere from 200 or so down to 70 or so. So, for any one patient, the variability during that phase of the values being used to determine the model is not consistent. Is that true?
So that the variability for those observations comparing each patient to another patient is quite different.
DR. MASTROTOTARO: This is John Mastrototaro.
If you are referring to the meter values that are used for comparison, certainly in some patients they may be all around 150 and in others they may vary quite broadly. So, yes, from one person to the next, the values that are used to generate the calibration may be very different.
DR. JANOSKY: Okay. So, given that piece of information, if we use linear regression, which is what you had used during that phase, is it reasonable to generate a model and impute values outside that range? If you take a patient whose recorded values during that phase ran, let's say, from 70 to 150, and then you go into the future and you are seeing values in the 200 range, is that calibration method then still appropriate to be used? You are outside of the range of values over which you fit that regression.
DR. MASTROTOTARO: John Mastrototaro again.
Let's say that in one day their blood sugars were relatively stable, 70 to 150. The meter values entered in that one day are used to generate the regression model for that one day. The following day, if their blood glucose values varied on different scales, maybe from 150 to 300, let's say, then all the values entered on that day are used to generate the regression for that day.
Also, if you did use a regression -- if for some reason you had only Accu-Chek readings in a certain small range, but you didn't have values later on when the glucose was, indeed, out of that range -- because the sensor responds linearly, it should still pick up those excursions beyond the range of the values that was used to generate the regression model.
In fact, in the 1-point calibration, if you took it to the extreme, all else being equal, because of the linearity of the product, you are assuming that you can measure glucose values beyond the one value that was used to calibrate.
DR. JANOSKY: Okay. It is the latter that I am getting to. And is that reasonable from a linear regression perspective in all the assumptions that accompany linear regression? And also in the issue of variability across ranges of blood glucose values, is it fair to make that conclusion?
DR. GROSS: This is Todd Gross.
The regression that is used to calibrate the sensor is not an unconstrained linear regression in which both parameters -- both the slope and the intercept -- are allowed to vary freely; rather, it is a constrained linear regression in which the intercept is fixed. Given that the sensor responds linearly to glucose in vitro, it can be very accurately calibrated using a single point.
The regression calibration is done in order to allow all of the meter measurements taken during a single calendar day to contribute to an aggregate single-point calibration, still with a fixed intercept. And I absolutely agree that in a linear regression model you should be very concerned about the range of predictor values; there is a caution against using the resulting regression equation to predict values outside of the range that was used to create it.
That restriction doesn't apply in this case because of the fixed intercept and the known linearity of the glucose sensor.
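The fixed-intercept (through-origin) calibration Dr. Gross describes can be sketched as follows. The nanoamp and meter values are hypothetical, and the zero intercept reflects the stated operating principle (essentially no current in the absence of glucose); this is an illustration of the constrained least-squares fit, not the sponsor's actual algorithm.

```python
# Hypothetical paired readings for one calendar day:
# sensor current (nanoamps) and the corresponding meter glucose (mg/dL).
nanoamps = [20.0, 35.0, 28.0, 45.0]
meter_mg_dl = [100.0, 180.0, 140.0, 230.0]

# With the intercept fixed at zero, least squares reduces to a single
# parameter: slope = sum(x*y) / sum(x*x).
slope = sum(x * y for x, y in zip(nanoamps, meter_mg_dl)) / sum(
    x * x for x in nanoamps
)

def calibrate(current_na):
    """Convert a sensor current (nA) to an estimated glucose value (mg/dL)."""
    return slope * current_na

# Because the model is a line through the origin, it can be evaluated
# outside the range of currents seen during calibration.
estimate = calibrate(60.0)  # beyond the 20-45 nA calibration range
```

This is also why the day's meter sticks act as "an aggregate single-point calibration": with the intercept pinned at zero, every pair contributes to estimating one number, the slope, and extrapolation beyond the observed range rests on the sensor's linearity rather than on the regression alone.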
DR. JANOSKY: I am hearing something differently than I had heard before. Fixing the intercept? Is that --
DR. GROSS: As John has mentioned, the sensitivity checks that are performed each day the sensor is used look at the ratio between the milligrams of glucose in the meter reading and the nanoamps being read by the sensor at that point in time.
In moving to a regression calibration, we maintained the known operating principle of the sensor, which is that in the absolute absence of glucose the sensor will, in general, produce no electrical current. So, we don't allow the intercept of the model to vary from a fixed point in creating the calibration.
DR. JANOSKY: But that is not a fixed point within the regression.
DR. GROSS: It absolutely is. The regression calibration is done using a model with a fixed intercept.
DR. JANOSKY: Is that consistent with what was presented to us today?