Medical Devices Advisory Committee




Agenda Item: Open Committee Discussion

DR. REJ: Thank you, Mr. Chairman.

I have a couple of questions regarding the calibration. If I understood the sponsor's presentation correctly, in the design of the studies presented to the FDA and the panel, you limited the calibration to four points so that you could use the other conventional blood glucose meter readings as a check on your algorithm. Is that correct?

DR. MASTROTOTARO: Yes, that is correct.

DR. REJ: Okay. But now Dr. Gutman said it is limited to four. Did I misinterpret it? If this device becomes available, would all blood glucose values measured on a finger stick be used in the calibration algorithm? Is that correct?

DR. MASTROTOTARO: That is correct.

DR. NIPPER: Excuse me. A technical point. The transcribers will need the names of the responders so that they can tell who is talking.

Thanks.

DR. MASTROTOTARO: This is John Mastrototaro.

And the answer is "yes," if they entered, say, six finger sticks in a day, then all six would be used.

DR. REJ: Is there a maximum number?

DR. MASTROTOTARO: There is not, no.

DR. REJ: Okay. So, it would be as many as -- as few or as many as they did.

In the presentation of statistical data by Dr. Gross, his first listed -- or first procedure that was to be used to validate this device was the Bland-Altman plot. I didn't see it anywhere in the submission. So, either I overlooked it, or do you have it in a graphic someplace so that we could see that analysis?

DR. GROSS: We have not prepared a Bland-Altman plot. We have provided the average of the difference scores and the standard deviation of the difference scores, which is the foundation of the Bland-Altman analysis.

Also, as we pointed out, numerical agreement is not what we would feel to be the most appropriate measure for this device. So, we have only gone as far as providing those summary measures.

DR. REJ: I would disagree with that premise.

Since you stated that you did the Bland-Altman plot, I assume that you did have those internally in --

DR. GROSS: No, I am sorry. We have not prepared a Bland-Altman plot.

DR. REJ: Okay. But your prepared statement says that you did do a Bland-Altman analysis. So, you must have them internally that you haven't shared. Is that correct?

DR. GROSS: No. We have calculated the difference between each paired meter value and sensor value and have calculated the average difference and the standard deviation of the differences only.

DR. REJ: Okay. And would that look -- this is as close to 1 as I could find in any of the data. This is from the FDA. I am referring to Dr. Campbell's graphic of the calibrated CAL BG versus CGMS-2. Would a Bland Altman plot of the data from this device look substantially like that graphic?

DR. GROSS: Could I see that?

DR. REJ: Maybe Dr. Campbell could put that linear fit -- here it is limited -- I believe that these data are limited to just those which were paired for calibration. Is that correct, Dr. Campbell?

DR. CAMPBELL: Yes.

DR. REJ: So, would the dispersion be different or about the same if you used the pairs that were not used for calibration?

DR. NIPPER: Dr. Campbell, you will need to go to a microphone, please? You could sit next to Dr. Janosky there or find another --

DR. REJ: I am sorry to refer to the FDA, but I think that is important.

DR. CAMPBELL: Give me a second. I have the picture for the validation set as well, but it will just take a second for me to find it.

This is the calibration. This is Greg Campbell. This is the validation set. I actually did but didn't bring the plots where you look at the actual blood glucose instead of the predicted one on the X axis. So, I hope that answers your question.

DR. REJ: Yes. Okay. So, it is not substantially different.

DR. GROSS: If I may, though, the traditional Bland-Altman plot would plot the differences on the Y axis as -- I am sorry -- this is Todd Gross again -- would plot the difference scores on the Y axis, as Dr. Campbell has shown. However, it would plot the average of the two values on the X axis.

DR. REJ: But the Y axis would be essentially unchanged. Correct?

DR. GROSS: It would, yes.
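
[For illustration: a minimal sketch, in Python, of the Bland-Altman computation described above -- difference scores on the Y axis, the average of the two methods on the X axis, with the mean difference and standard deviation of the differences as summary measures. The paired readings are hypothetical, not data from the submission.]

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical paired glucose readings (mg/dL); not data from the submission.
meter  = np.array([72, 95, 110, 138, 160, 185, 210, 245], dtype=float)
sensor = np.array([80, 101, 104, 150, 148, 190, 198, 260], dtype=float)

diff = sensor - meter            # difference scores (Y axis)
mean = (sensor + meter) / 2.0    # average of the two methods (X axis)

bias = diff.mean()                             # mean difference
sd = diff.std(ddof=1)                          # standard deviation of the differences
limits = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement

plt.scatter(mean, diff)
plt.axhline(bias)
plt.axhline(limits[0], linestyle="--")
plt.axhline(limits[1], linestyle="--")
plt.xlabel("Average of meter and sensor (mg/dL)")
plt.ylabel("Sensor minus meter (mg/dL)")
plt.title("Bland-Altman plot (hypothetical data)")
plt.show()
```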

DR. REJ: I guess I will have to ask another question for Dr. Campbell at this stage.

No, actually for Mr. Dawson. The plot that you show -- you have the responses of two sensors. Your first example showed good agreement with the validation data and another that showed lesser agreement. That is the one with an R squared of .13. If I could see that? Your second example.

I would like to ask both the sponsor and the FDA statistician, this is the model using -- I see four points that were used -- no, actually more -- six points that were used in calibration for this example. Is that correct?

MR. DAWSON: That is right, for this particular set.

DR. REJ: Would this plot look substantially different if every one of the calibration blood glucoses were used -- I am sorry -- the validation blood glucoses were used in the calibration? In other words, that you used all 16 points or however many points there are on that graph for the calibration itself, would that look substantially different?

MR. DAWSON: I really couldn't say that. I really don't know.

DR. REJ: Maybe the sponsor could --

MR. DAWSON: Would you want to just make an estimate based on looking at the symbols for the calibration points?

DR. REJ: Yes.

MR. DAWSON: Okay. Down at the bottom of the trend chart, you see a couple of plus signs. Those are calibration points.

DR. REJ: Right.

MR. DAWSON: And they are rather distant from the line. So, it is possible that the statistical diagnostic results would be even worse if it were based on all of the observations.

DR. REJ: If all the observations were used in the calibration.

DR. GROSS: This is Todd Gross.

Obviously, it is difficult to say how the shape of that would change, but I can make two points. One is that in general, additional values result in better fit and that here the calibration -- the identification of calibration values appears to have been successful in getting the preprandial value since we see many of them in the low range. Adding postprandial values, I would predict, would improve the fit.

DR. REJ: Okay. And, clearly, these data would be available on analysis when one downloaded it to the PC and saw that the residuals between the values used in the calibration and the sensor output were large. So, that might alert the person interpreting this that there might be something wrong either with the sensor or the way it was implanted, and that this would be obvious from looking at the download. Is that correct?

Is there something in your software that looks at these residuals and if that is too great, it alerts the health care provider about that?

DR. GROSS: This is Todd Gross again.

First of all, let me clarify that the graphing utility would provide a plot of each calendar day. This is a plot for the entire sensor. The graphing utility would regress to all of the data points. So, we would expect there to be a better fit, but we have provided summary statistics, the correlation coefficient and the mean absolute relative error in particular, that would aid the clinician in evaluating the fit, in addition to the visual impact of that fit.

DR. REJ: But there is no specific warning. In this case, again, a very preliminary look at this graph indicates to me there are about as many below as above in that. I mean, there may be a bias to the high side for the finger stick measurements compared to the sensor, but in general there are some that are kind of on the low side and some that are on the high side. Your device as it currently is or the software doesn't provide the residual values to the person, other than in this format?

Did I make that clear, the question?

DR. GROSS: Yes, I understand. Did you want to respond?

DR. MASTROTOTARO: This is John Mastrototaro again.

The two things that would be supplied with each daily trend plot would be the mean absolute error between the meter values that were entered and the sensor values, and also the correlation coefficient for each day.

DR. REJ: Is there some sort of diagnostic built in so that if it exceeds a certain limit, maybe less weight would be given to --

DR. MASTROTOTARO: We were actually planning, after reviewing all of the daily mean absolute errors and correlation coefficients, to come up with some recommendations on bounds for those that the physician or health care professional could use as a gauge to determine whether it is data that they should weigh more heavily or not in their analysis.

DR. REJ: I think that would be useful if this device comes to market.
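
[For illustration: a sketch of the kind of per-day screening just described -- a daily mean absolute relative error and correlation coefficient checked against recommended bounds. The bounds below are placeholders for illustration only, not values proposed by the sponsor.]

```python
import numpy as np

def daily_summary(meter, sensor, max_mare=0.18, min_r=0.79):
    """Summarize one day of paired meter/sensor glucose values (mg/dL).

    max_mare and min_r are placeholder bounds, not the sponsor's limits."""
    meter = np.asarray(meter, dtype=float)
    sensor = np.asarray(sensor, dtype=float)
    mare = np.mean(np.abs(sensor - meter) / meter)   # mean absolute relative error
    r = np.corrcoef(meter, sensor)[0, 1]             # Pearson correlation coefficient
    return {
        "n": meter.size,
        "meter_min": meter.min(),
        "meter_max": meter.max(),
        "mean_abs_rel_error": mare,
        "correlation": r,
        "weigh_heavily": bool(mare <= max_mare and r >= min_r),  # crude flag for the clinician
    }

# One hypothetical day with seven finger sticks.
print(daily_summary([70, 95, 120, 168, 190, 230, 150],
                    [78, 90, 131, 160, 205, 221, 142]))
```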

DR. MASTROTOTARO: The other thing I would like to mention about this particular one that they showed: in our submission, Volume 2 of 2, dated February 9th, you can actually see these data on page 220 in your binder -- it is in Section 2 of the binder. It turns out that this particular one is from one of three or four small batches of sensors that we deemed had low sensor sensitivity, based on a retrospective analysis of some sensors that were held back.

So, we had identified certain lots from that category. The other thing to notice is that the data doesn't look quite the same way when you look at the entire range as opposed to just blowing up from 50 to 200 or 250 in the graph.

DR. REJ: But were there any points outside of those ranges?

DR. MASTROTOTARO: No, there were not.

DR. REJ: Okay.

One last question on the calibration and then I have one clinical question.

In your algorithm for doing the calibration, you are assuming the same -- basically the same output from the sensor over the 72 hours? In other words, using basically an average signal versus measured value -- I am not sure -- in your initial model, you used a 1-point calibration. Then you went on to basically a 4-point calibration, essentially using an average or linear regression over four measurements over a period of time.

Was that applied to -- equal weighting given to all four measurements?

DR. MASTROTOTARO: This is John Mastrototaro again.

That is correct and that was done on a day-by-day basis. So, for example, if there were two days of data with four finger sticks per day, we did not regress all eight of those values equally and come up with a calibration equation that spanned all 48 hours, but rather we did a regression to the four points on day one, generated a curve and a slope for that and then separately did a regression for day two. That is how the product is envisioned to be used, so the calibration is done on a daily basis with all the finger sticks --

DR. REJ: That was my question. If the sensor is limited -- has a limited life span, obviously, something is changing with it and it was my understanding from that that the single calibration point was used or the single algorithm was used for the lifetime of the sensor. That is not correct. It is for a 24 hour period. Okay.
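
[For illustration: a minimal sketch of the day-by-day calibration just described -- a separate linear fit of finger-stick glucose against sensor signal for each calendar day, using however many finger sticks were entered that day, all weighted equally. The grouping and the ordinary least-squares fit are an illustration of the idea, not the sponsor's actual algorithm.]

```python
import numpy as np
from collections import defaultdict

def calibrate_by_day(day_index, sensor_signal, meter_bg):
    """Fit glucose = slope * signal + offset separately for each day.

    day_index     : day number for each paired point
    sensor_signal : raw sensor signal at each finger stick (arbitrary units)
    meter_bg      : finger-stick glucose entered by the patient (mg/dL)
    """
    by_day = defaultdict(list)
    for d, sig, bg in zip(day_index, sensor_signal, meter_bg):
        by_day[d].append((sig, bg))

    fits = {}
    for d, pairs in by_day.items():
        sig, bg = (np.array(v) for v in zip(*pairs))
        slope, offset = np.polyfit(sig, bg, 1)   # all of that day's finger sticks, equally weighted
        fits[d] = (slope, offset)
    return fits

# Two hypothetical days with four finger sticks each.
days   = [1, 1, 1, 1, 2, 2, 2, 2]
signal = [12.0, 18.5, 25.0, 30.0, 11.0, 17.0, 24.0, 33.0]
bg     = [80, 125, 170, 205, 75, 115, 160, 225]
print(calibrate_by_day(days, signal, bg))
```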

The second question, on the clinical side, was, I think, maybe raised by Dr. Marcus, but perhaps Dr. Mestman can also comment on it: you envisioned using this in the treatment of your patients on an occasional basis. Can you help clarify for me when you would use it and when you wouldn't use it, when you would recommend using it?

DR. MARCUS: Alan Marcus.

I would use it -- for instance, if I had a patient who had hypoglycemic unawareness, I would use the sensor to help pick out time periods when the patient was hypoglycemic and then to have the patient during that instance write down symptoms that they were having, being able to retrospectively educate the patient to recognize which symptoms were present or to pick up times when they were unaware of it.

If I had a patient who is poorly controlled, I would be able to pick up periods where there were obvious control issues after meals. If I had a patient whose blood sugars fluctuated widely throughout a 24 hour period, especially in response to activity and insulin, that would be a patient who would probably benefit from more diabetes education. I would be able to monitor insulin administration, meal types and foods.

DR. REJ: Those are clear examples, but it doesn't really address the occasional use per patient. Would you see the patient using it maybe 10 days a year, or once a week? I am just trying to get a feel for this.

DR. MARCUS: I can imagine the patient using it while they are in the process of initiating control; it would be more intense then. That may be three days every week or two weeks. After control is initiated, I couldn't foresee using it more than three days every six months or maybe not at all.

DR. REJ: Thank you very much.

DR. NIPPER: Dr. Mestman, would you like to comment?

DR. MESTMAN: That is all right. Thank you.

DR. REJ: Thank you.

DR. NIPPER: Woody.

DR. LEWIS: Sherwood Lewis.

I would like to direct my first question to Dr. Mastrototaro -- I think I got the name right. The same overhead that you showed was also shown by Dr. Mestman and Dr. Marcus; maybe you might refer to that very early overhead, the glucose sensor profiles, because a couple of things are not clear to me in it.

You indicate sensor values, blood values and finger stick values and aside from my not being sure of the distinction between blood and finger stick, I would like also to ask whether you ever used the sensor outputs in non-diabetic individuals and what those profiles would look like.

You have indicated here a non-diabetic with blood glucose values, and I presume they were done by the Accu-Chek or by some other, perhaps laboratory-based, instrument. That is what makes me uncertain as to what that profile really indicates.

DR. MASTROTOTARO: This is John Mastrototaro speaking.

In terms of the item labeled "blood" versus "finger stick," the blood measure is from a YSI glucose analyzer. The finger stick is from a standard meter device.

In response to your second question, actually in this particular feasibility study we conducted, we had a Type 1 volunteer in the study, whose spouse participated as a non-diabetic patient and we tracked both of their blood sugars. So, we have used the product in some non-diabetic volunteers previously, yes.

DR. LEWIS: But that is not portrayed or displayed in any of your graphics.

DR. MASTROTOTARO: That is correct.

DR. LEWIS: My second question is related to that, since it appears that the Accu-Chek was the glucose meter used in all of these studies -- you can correct me if that is not the case.

What had been done to establish the performance characteristics of the Accu-Chek itself? It seems that you are using this as the gold standard. It is what you calibrate your sensor with and it is what you base all of your comparisons on.

I wonder if other glucose meters had been used or considered and for what reason it was pursued in this fashion.

DR. MASTROTOTARO: This is John Mastrototaro responding.

We have used the one touch, too, in one of the earlier feasibility studies, but we did elect to use the Accu-Chek Advantage in this one. It is our contention that based on the fact that all of these are approved devices, that most meters are all substantially equivalent. That is based on the FDA's approval of those meters.

DR. LEWIS: Thank you.

DR. NIPPER: Thank you.

Dr. Cooper.

DR. COOPER: I just have sort of a question about your reaction. I was impressed by Dr. Campbell's comments, especially the prediction model, and some of that I have seen before, like the slope and the regression to the mean. I wonder if you all have tried those kinds of things, or what your response would be?

DR. GROSS: Well, first of all -- this is Todd Gross -- I would like to thank the FDA's statistical team for their thorough analyses with this data set. In terms of the issues that Dr. Campbell raised, I would first of all say that we have provided results from the clinical study that involve all of the sensors that were tested using the calibration methods that we have described.

We would first of all say that the performance is acceptable for the intended use. There are clearly areas where the regression -- where the calibration of the sensor can be improved and we are currently exploring those. I think that some of the issues that Dr. Campbell raised are things that we can explore further, but the current product and the current intended use match very well.

In terms of -- I don't know if you want me to address the specific statistical issues that he raised in terms of the slope of the residuals being .08. Yes, it is statistically significantly different from zero, but I think it shows actually that the calibration was very successful. It simply points out that there is still room to improve it.

In terms of homoscedasticity of variance issues, I think that a more formal analysis is necessary in order to conclude, for instance, that the results that we have provided are somehow influenced or biased by unequal variances, and that is still an open question.

The errors-in-variables problem exists whenever you do regression calibration and, again, we would point to that as an area where the calibration can be improved through further analysis.

DR. COOPER: So, you have no inherent initial negative reaction saying that those suggestions just really won't work or anything.

DR. GROSS: Yes. This is, obviously, the -- I mean, we have in telephone conferences discussed these issues only briefly. So, I wouldn't want to comment without considering further what was done.

DR. COOPER: I understand. Thank you.
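
[For illustration: the errors-in-variables point raised above can be handled with a Deming-type fit, which allows measurement error on both axes. The sketch below assumes an error-variance ratio of one, purely for illustration; it is not a method from the submission.]

```python
import numpy as np

def deming_fit(x, y, delta=1.0):
    """Deming regression: allows for measurement error in both x and y.

    delta is the ratio of the error variance in y to the error variance in x;
    delta = 1 (orthogonal regression) is assumed here for illustration."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xbar, ybar = x.mean(), y.mean()
    sxx = np.mean((x - xbar) ** 2)
    syy = np.mean((y - ybar) ** 2)
    sxy = np.mean((x - xbar) * (y - ybar))
    slope = (syy - delta * sxx
             + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    intercept = ybar - slope * xbar
    return slope, intercept

# Hypothetical meter (x) versus sensor (y) pairs, both measured with error.
meter  = [70, 95, 120, 150, 180, 220, 260]
sensor = [78, 90, 131, 140, 195, 214, 251]
print(deming_fit(meter, sensor))
```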

DR. NIPPER: Dr. Kroll.

DR. KROLL: Yes. My first question relates to how you would verify the function of a sensor or the individual sensors before the release to the users.

DR. MASTROTOTARO: This is John Mastrototaro.

One of the things that we have done a lot of over the past year is go through the manufacturing process and automate a lot of the steps of sensor manufacture. We now have a scheme that was actually reviewed by the FDA as part of this whole process, the GMP inspection.

One of the things that we do is perform many in-process checks for each batch and lot of sensors, in addition to final acceptance testing after sterilization of the product. We believe that because of the reproducibility we now have with the manufacturing processes and also the system checks that are done on sample devices along the way, we can ensure that the sensors are stable when they are presented to a user.

DR. KROLL: Okay. Good.

The next group of my questions relates to problems with calibration. One that wasn't brought out very much is that when you do a calibration, you have a certain amount of imprecision on the X axis. Have you looked at methods, or statistical methods, to try to evaluate how bad that imprecision is?

DR. MASTROTOTARO: John Mastrototaro again.

Are you referring to the meter values?

DR. KROLL: This is the meter values.

DR. MASTROTOTARO: Yes, that is correct. When we were initially looking at the 1-point calibration approach, the way we thought to address that was to have the patient actually make two sequential meter readings and then average the two values to help reduce some of that potential error.

However, as I showed in one of the correlation plots of the first finger stick value versus the second, there can be some significant error there, and that is really one of the reasons why we have gone to the multiple-sample regression approach: by coming up with a sensitivity factor for the sensor to calibrate it, which is averaged over each of the meter values that are entered into the device, we will, hopefully, diminish the effect of a potential outlier meter value.

DR. KROLL: Well, I am not concerned with outliers so much, but with how to use some techniques just to compare them, like Deming regression, which then can look at the -- you assume a certain amount of error on the X axis.

DR. MASTROTOTARO: One thing that we have done is take sensor data from a study and force it to equal exactly the meter values; then we introduced a plus or minus 20 percent type of error into the meter values and saw what impact that had on the resulting comparison. Basically, it will increase your mean absolute error by about 10 percent if you include this plus or minus 20 percent variability in individual meter readings.

DR. KROLL: Okay. And would that information be available to physicians who were using the sensor?

DR. MASTROTOTARO: I don't quite -- would we add information that says that the mean absolute error with the meter use could be --

DR. KROLL: Well, for example, could you give them a table that says, if you know that the meter has a 10 percent error -- imprecision error -- that would vary the values by a certain amount, so that they could interpret what they are seeing a little bit differently.

DR. MASTROTOTARO: We certainly could.
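
[For illustration: a sketch of the sensitivity check Dr. Mastrototaro described -- force the sensor trace to agree exactly with the meter values, perturb the meter values by up to plus or minus 20 percent, and see how the mean absolute error moves. The uniform perturbation and the sample values are assumptions for illustration.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Start from perfect agreement: sensor forced to equal the meter values exactly.
meter = np.array([70, 95, 120, 150, 168, 190, 230, 260], dtype=float)  # hypothetical mg/dL
sensor = meter.copy()

# Introduce up to +/-20 percent random error into the meter readings.
perturbed = meter * (1.0 + rng.uniform(-0.20, 0.20, size=meter.size))

mae_exact = np.mean(np.abs(sensor - meter) / meter) * 100
mae_perturbed = np.mean(np.abs(sensor - perturbed) / perturbed) * 100

print(f"mean absolute relative error, exact meter values: {mae_exact:.1f}%")
print(f"mean absolute relative error, +/-20% meter error: {mae_perturbed:.1f}%")
```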

DR. KROLL: Related to that, in some of the examples you showed us, for example, if -- I don't know what page it is on, but it is Patient 123 Census 331009 in which you show us -- using a regression calibration, you have meter values and you have the sensor values based on a type of regression.

The meter values there actually have a fairly decent spread over the range, but what systems would you put into place so that when you do a calibration, you can look at the spread of values that the meters are picking up and make certain that spread is sufficient? If it is very narrow -- if it falls over a range of 50 milligrams per deciliter -- that is probably going to give you a really inadequate calibration.

DR. MASTROTOTARO: This is John Mastrototaro.

Actually, what will happen -- that is a thing that we have found very interesting. If you go to about two or three pages after that, where there is the summary statistics, which are supplied to the patient, there is the section -- if you find that table --

DR. KROLL: Hold it up so we can see what you are talking about, John. There you go. Thank you.

This is an Excel spreadsheet.

DR. MASTROTOTARO: Yes. There is a section where the meter values -- Slide 825. It is a little hard to see on the screen. I apologize. This is a picture right off of the computer screen. There is a section, which shows the output from the meter values that were entered on each day. So, you can see what the minimum and maximum values were of the meter values entered. That is shown over here. The number of readings for example on the 5th of October was seven readings. Their average was 168 and it gives the minimum and maximum.

One of the problems that we will have with the linear regression is not so much that it won't regress well with fewer readings -- I mean, with a narrower range. That will be part of an issue, having a narrow range. It will also affect the correlation coefficient number that you are calculating in that column, because if you have a very narrow band, that means you haven't evaluated values over a wide enough range and you will get a poorer correlation coefficient number when the range is narrow as well. But the mean absolute error in percent should not be as affected by that.

DR. KROLL: Well, again, I don't put a lot of trust in correlation coefficients and I don't think they necessarily pick that up. I am interested in the individual case: there needs to be some alert to the physician, or even to the patient, with the realization that they need to get values that spread over the range in order to get a decent calibration, because that greatly affects the slope and the intercept and would affect how the rest of the values are interpreted.

For example, you could have a scheme where the patient took a meter sample right before they ate and then took one, let's say, an hour after they ate. That way you would have a wide range of values, so you could get something fairly predictive.

You could predict it in a patient, but you would have criteria for rejecting a calibration when that range is too small. Really, what we are saying is you can't always pick it up with a correlation coefficient; it ends up being a bad data set.

Then also you don't have great statistical criteria, except you do have a minimum and a maximum and you could get an idea of what that range is, how those values are distributed.
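
[For illustration: a sketch of the range check Dr. Kroll is asking for -- flag a day's calibration when the finger-stick values span too narrow a glucose range. The 50 mg/dL cutoff echoes his example and is not a criterion from the submission.]

```python
import numpy as np

def calibration_range_ok(meter_values, min_span_mg_dl=50.0):
    """Return (acceptable, span) for one day's finger-stick values.

    The 50 mg/dL threshold mirrors the example raised in discussion;
    it is illustrative, not a criterion from the submission."""
    meter_values = np.asarray(meter_values, dtype=float)
    span = meter_values.max() - meter_values.min()
    return span >= min_span_mg_dl, span

print(calibration_range_ok([150, 160, 170, 185]))   # narrow spread: flagged as inadequate
print(calibration_range_ok([80, 140, 210, 280]))    # wide pre-/post-meal spread: acceptable
```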

DR. MASTROTOTARO: This is John Mastrototaro.

When we have done the linear regression calibration approach, it is not a true linear regression where you actually let the offset and slope vary to whatever values the fit would give. We actually limit the offset value. So, we have a stake in the ground on the offset side, and by doing that, if you have a narrow range, you won't have as big an issue as you would otherwise if you did not put that stake in the ground for the offset value.

DR. KROLL: I guess what I am -- I am going to come back to this later. What I am concerned about is that if you only go up to a value of 200, how do you know that a value of 300 is related to that?

Let me go on. Related again to calibration, do you have some type of fixed statistical criteria for rejection of a poor calibration? In other words, it would come up and say, based on the numbers and calculations made, that the calibration for this day is considered inadequate.

DR. MASTROTOTARO: This is John Mastrototaro answering.

Yes. Actually, when it determines the regression slope required on a daily basis to convert the sensor signals to glucose, if that slope value is outside of a certain range, the product will not generate a daily trend plot for that day. That will account for problems if you had a sensor which lost sensitivity for some reason or got pulled out of the body or something like that; the slope value would be too high and it would not present the data for that day.

DR. KROLL: Okay.
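
[For illustration: a sketch combining the last two answers -- a daily fit with the offset pinned ("a stake in the ground") and the daily trend plot suppressed when the fitted slope falls outside an allowed window. The pinned offset and the slope limits below are placeholders, not the device's actual parameters.]

```python
import numpy as np

FIXED_OFFSET = 0.0          # placeholder "stake in the ground"; not the device's value
SLOPE_LIMITS = (2.0, 12.0)  # placeholder allowed slope range (mg/dL per signal unit)

def daily_calibration(signal, meter_bg):
    """Fit glucose = slope * signal + FIXED_OFFSET for one day's pairs and
    refuse to produce a trend plot when the slope is out of bounds."""
    signal = np.asarray(signal, dtype=float)
    meter_bg = np.asarray(meter_bg, dtype=float)
    # Least-squares slope with the offset held fixed.
    slope = np.sum(signal * (meter_bg - FIXED_OFFSET)) / np.sum(signal ** 2)
    accepted = SLOPE_LIMITS[0] <= slope <= SLOPE_LIMITS[1]
    return slope, accepted

# A plausible day versus a day where the sensor appears to have lost sensitivity.
print(daily_calibration([12, 18, 25, 30], [80, 125, 170, 205]))        # slope ~6.8, accepted
print(daily_calibration([2.0, 2.5, 3.0, 3.5], [80, 125, 170, 205]))    # slope ~54, rejected
```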

My last question refers really to verification of the performance of the sensor. First of all, how do you know that it is linear over the range that you have given, which is, what, 40 to 400?

DR. MASTROTOTARO: This is John Mastrototaro.

We have evaluated the sensor in vitro certainly over the entire range to show that it behaves linearly throughout the extent from 0 to 400. That formed the basis of what we used in vivo.

DR. KROLL: But is that done on, let's say, a certain number out of -- or, let's say, 1 out of 20 or 1 out of a hundred of every batch of sensors you produce or is it done on every sensor?

DR. MASTROTOTARO: In terms of testing the linearity and the sensitivity for sensors, we actually do sampling as they are built now within the batch at different stages of the assembly process. Then there is AQL testing done when sensors come back from sterilization from each batch, and they all have to have a certain range of sensitivity and a certain linearity or correlation when they are tested in solutions that vary from 0 to 400 milligrams per deciliter.

DR. KROLL: All right. And, again, related to that, how do you verify that the sensors can pick up both the high and the low end? In other words, if you have a value very close to 40 and a value close to or near 400, that it is going to be accurate at those extremes.

DR. MASTROTOTARO: John Mastrototaro.

We have performed accuracy testing and precision testing at low glucoses, normal range of glucoses and high glucose values in vitro to address that.

DR. KROLL: Do you stress it? Do you go below 40 and above 400?

DR. MASTROTOTARO: In most of our testing, we actually go from 0 to 400, but we typically have not evaluated greater than 400. We have in some of our earlier in vitro experiments gone to 600 milligrams per deciliter and, even in recent studies we have done -- I know we did a test where it was up near 500 recently -- it was still linear at 500.

DR. KROLL: Did you test different types of conditions? For example, you could have somebody who was dehydrated, or different types of medical conditions in which the interstitial water could be greatly affected, with, let's say, greater or lesser salt content.

MR. GREGG: Dr. Mestman, would you discuss the clinical trial and the different activities of the individuals?

DR. MESTMAN: Jorge Mestman.

In the study that we did -- the feasibility studies -- all the patients were ambulatory, and we didn't have any patient with an illness besides diabetes to study. So, all of our patients were ambulatory patients and we checked the meter and the blood sugars at different times of the day.

The activities of the patient were unlimited. So, many of these patients were exercising on a daily basis. We checked the blood sugars at any time before or after exercise and we didn't see any difference.

DR. KROLL: So, you don't have any checks then on potentially people who would be dehydrated or would have an excessively high sodium or a low sodium?

DR. MESTMAN: No. The answer is "no."

DR. KROLL: Thank you.

DR. NIPPER: In the interest of staying relatively on schedule, I want to stop questioning at this point. Mr. Reed has kindly agreed to hold his questions until after lunch. I understand that Dr. Gutman wants to make a couple of closing comments before we adjourn for lunch.

I want to thank the sponsors for forthcoming answers to the questions and I will turn it over to Dr. Gutman for closing comments.

DR. GUTMAN: I just want to offer some perspective. I actually almost would have preferred it at the end of questions, but maybe this will be helpful.

The statistics in this submission are obviously very important, and our statisticians were putting forth that information to suggest improvements in techniques. As the company has indicated, the communication, because of the short time since we received the submission, has been brief. We haven't really had a great deal of time to negotiate with them all of the nuances that were put on the table.

Although we do have some questions -- in the context of the questions, we actually have specific questions about, say, the issue of calibration -- we are not asking the panel today to solve all of the calibration or statistical issues that are on the table. You are certainly welcome to raise any of them, particularly any that we have missed. You are welcome to offer any solution we or the company missed, but you are not required to deal with all of the issues that are on the table.

What you are very strongly being asked to do is to look at the product globally, in terms of a threshold decision that we are going to have to grapple with at the end of the day, and that is: as configured, calibrated and labeled, is this product now effective enough to go into the marketplace? We are going to ask you to render a judgment on that.

It is our belief -- I believe in truth in labeling, so I will tell you it is the team's belief -- that more data are required to characterize this, and one of the questions on the table -- and I will put it on again and again this afternoon -- is whether that characterization can be done post-approval or whether it, in fact, requires more studies before we actually approve this and put it on the market.

So, that is something for you to think about as you are framing questions, and something for you to think about as you plan to advise us in the afternoon on the bottom line. Look at the whole picture. It may be that the calibration issues and the other outstanding issues are enough that they need to be challenged in additional clinical studies before we say this is approvable.

It may be that because of the risk profile and the clinical benefits this product is ready to go on the market and it is better suited for some kind of unusual either analytical or clinical or outcome study and you need to give us your best pass on where we and the company should go.

But, again, I feel like -- I don't want to lead or mislead you. I don't want to thwart the discussion. I just want to keep you a little focused. Thanks.

DR. NIPPER: We depend on Dr. Gutman to refocus us as needed and we appreciate those comments.

I want to focus the group on lunch at this point. I want to thank you for your kind attention and for the noticeable reduction in cell phone calls that happened after we reconvened. We appreciate it.

We will be back again at 1:00 p.m. and we will continue open committee discussion at that point.

[Whereupon at 12:10 p.m., the meeting was recessed, to reconvene at 1:10 p.m., the same afternoon, April 26, 1999.]
