Additive and Proportional Error Models in NONMEM
model makes the most sense to use. The following model is suggested in the manual for Xpose 2.0:

  $ERROR
  DEL=0
  IF(F.EQ.0) DEL=1
  IPRED=F
  W=(F**2+THETA(.)**2)**0.5 ; constant + proportional
  W=W+DEL
  IRES=DV-IPRED
  IWRES=IRES/W
  Y=IPRED+W*EPS(1)

Niclas Jonsson
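For readers outside NONMEM, the arithmetic in the $ERROR block above can be sketched in Python. This is a hypothetical illustration; the function name and example numbers are mine, not from the thread:

```python
import math

def iwres_combined(dv, f, theta_const):
    """Mimic the Xpose 2.0 $ERROR block: W = sqrt(F**2 + THETA**2),
    with the DEL guard so IWRES never divides by zero when F == 0."""
    delta = 1.0 if f == 0 else 0.0
    ipred = f
    w = math.sqrt(f**2 + theta_const**2)  # constant + proportional SD
    w = w + delta                          # guard: W becomes 1 when F == 0
    ires = dv - ipred                      # individual residual
    return ires / w                        # individual weighted residual

# Hypothetical example: observation 12.0 against prediction 10.0
print(round(iwres_combined(12.0, 10.0, 2.0), 4))
```

Note that the DEL guard only matters at F == 0; everywhere else W is the combined standard deviation that scales EPS(1).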
cautioned me about the following: "You will have to be careful here, though, so that 0 isn't a valid value of F. Are you sure your problems with the first model are not due to a non-fixed SIGMA?" It was my understanding that THETA(.) in the
line W=(F**2+THETA(.)**2)**0.5 would provide the standard deviation (proportional part). The Sigma would be the variance for the constant CV portion. I did not use a fixed Sigma. If I do fix Sigma, would I add another Theta value to include the constant CV part, i.e. W=((F**2)+THETA(.)**2)**0.5+THETA(..)? The simplest control file will run without errors if the Sigma is fixed at 1.

In my case, I have 1-7 zero concentrations prior to the first detectable concentration. The average Tlag was about 2 hours, although there were a few subjects without any apparent Tlag. I took note of the mixture model discussed recently; however, the presence or absence of Tlag is not determined by any known event. The Tlag estimate is 0.622, which appears to be short.

In a previous message via nmusers, Leonid Gibiansky suggested that I use a simpler model, e.g. Y=F*(1+EPS(1))+EPS(2). If this model is used, I do not know how to get IWRES for use with Xpose. Clearly, there are DV values that need to be handled as true 0's. Is it possible to increase the weighting for 0 concentrations so that the model can use these data to estimate the lag time better? Obviously, I would not want to influence the fits in a bad way. I can take out all concentrations equal to 0 except those that occur between the dose time and the tim
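For the simpler model Y=F*(1+EPS(1))+EPS(2) mentioned above, IWRES is not produced automatically, but it can be computed afterwards from the estimated SIGMA values, since the residual SD is sqrt(F**2*SIGMA(1,1)+SIGMA(2,2)). A hypothetical sketch in Python (the function name and the example variances are mine):

```python
import math

def iwres_two_eps(dv, f, var_prop, var_add):
    """IWRES for Y = F*(1+EPS(1)) + EPS(2): the residual SD is
    sqrt(F**2 * SIGMA(1,1) + SIGMA(2,2)), where var_prop and var_add
    stand in for the estimated variances of EPS(1) and EPS(2)."""
    w = math.sqrt(f**2 * var_prop + var_add)
    return (dv - f) / w

# Hypothetical estimates: 20% CV (variance 0.04) and additive SD 0.5
print(round(iwres_two_eps(12.0, 10.0, 0.04, 0.25), 4))
```

The same W expression can be coded directly in $ERROR to table out IWRES for Xpose.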
explain why a weighting like W=SQRT(THETA(5)**2+THETA(6)**2/F**2) used by Justin is equivalent to a combination error model as the NONMEM guide suggested? What is the advantage of this form over the standard combination error model (http://www.cognigencorp.com/nonmem/nm/99may232001.html)? Any suggestion is appreciated.

--
Yaning Wang
Department of Pharmaceutics
College of Pharmacy
University of Florida
_______________________________________________________
From: Leonid Gibiansky
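The equivalence Yaning asks about can be checked numerically. With log-transformed data, Y = LOG(F) + W*EPS(1) corresponds to a normal-scale SD of approximately F*W, and F*SQRT(THETA(5)**2+THETA(6)**2/F**2) equals SQRT(THETA(5)**2*F**2+THETA(6)**2), i.e. the standard proportional + additive SD. A short Python check (the parameter values are arbitrary):

```python
import math

def log_scale_w(f, th5, th6):
    # Justin's weighting, applied on the log scale
    return math.sqrt(th5**2 + th6**2 / f**2)

def combined_sd(f, th5, th6):
    # standard additive + proportional SD on the normal scale
    return math.sqrt(th5**2 * f**2 + th6**2)

# F * W reproduces the combined SD at any prediction level
for f in (0.5, 5.0, 50.0):
    print(f, round(f * log_scale_w(f, 0.2, 1.0), 6),
          round(combined_sd(f, 0.2, 1.0), 6))
```

The two columns agree exactly, which is the algebraic identity behind the equivalence.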
about MM model. The control stream that I used was copy-paste from a real project that worked just fine both for simulations and estimation. Modeling requires common sense and diagnostics: the same model that is good for one dataset can be terrible for another one. Moreover, for any error model you propose, I can present you with a hypothetical situation that would violate the model assumptions. That is why modeling is an iterative process: you try one model (whether it is the error model variation, or the number of compartments, or the type of nonlinearity), look at the diagnostics, correct the model, etc., until you are happy with the outcome.

The proportional error problem that you pointed out is obvious, and indeed, it manifests itself sometimes: I've seen it on several real data sets. If you face it, you just need to correct the error model to be in agreement with your data.

For the log-transform, I would like to re-iterate that this is simply a trick to implement the exponential error model in NONMEM. What you and Nick say is that the proportional (or additive + proportional) model is good enough in most cases, and I would agree with that. But in some rare cases (I've seen it in a problem with noisy data for PD biomarkers), the true exponential model is much better, and then you have no choice except to log-transform.

As to bioanalytical data with negative concentrations, I do not believe that you will get them (in any FDA-submitted analysis) any time soon. Moreover, this could be irrelevant to the use of the additive part of the error model: more often than not, this additive part is much larger than the assay error, so it comes from some other sources, and I guess those "other sources" cannot result in negative values. In those cases, error models with positive predictions would be more mechanistic.

Thanks
Leonid

--------------------------------------
Leonid Gibiansky, Ph.D.
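The "log-transform trick" Leonid describes is exact, not an approximation: an exponential error model DV = F*exp(eps) becomes a purely additive error model on the log scale, log(DV) = log(F) + eps. A minimal Python check (the numbers are arbitrary):

```python
import math
import random

# Exponential error model: DV = F * exp(eps), eps ~ N(0, sigma^2).
# Taking logs gives log(DV) = log(F) + eps, i.e. additive error on
# the log scale -- exactly what log-transforming the data lets
# NONMEM fit with a plain additive EPS.
random.seed(1)
f, sigma = 10.0, 0.3
eps = random.gauss(0.0, sigma)
dv = f * math.exp(eps)
assert math.isclose(math.log(dv), math.log(f) + eps)
print("identity holds")
```

This is why the transformed model keeps predictions strictly positive, which is the property Leonid points to for concentration data.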
President, QuantPharm LLC
web: www.quantpharm.com
e-mail: LGibiansky at quantpharm.com
tel: (301) 767 5566

Martin Bergstrand wrote:

Dear Leonid,

As I have pointed out once before on NMusers (http://www.cognigencorp.com/nonmem/current/2009-April/1661.html), the error model that you are using can be very problematic. The RUV model only has the desired properties as long as THETA(7) is smaller than TY (TY=IPRED in your example). If TY << THETA(7), this error model will give rise to an almost infinite RUV and hence completely unrealistic predictions (e.g. DV: 10^-50 to 10^50). If you don't understand wha
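Bergstrand's warning can be illustrated numerically. Assuming a log-scale weighting of the form W = SQRT(THETA_prop**2 + THETA_add**2/TY**2) (my reconstruction of the kind of model under discussion, not a quote of Leonid's control stream), the back-transformed +/-3 SD prediction range explodes once TY falls well below the additive THETA:

```python
import math

def w_log_scale(ty, th_prop, th_add):
    # combined-error weighting applied on the log scale
    return math.sqrt(th_prop**2 + th_add**2 / ty**2)

th_prop, th_add = 0.2, 1.0  # hypothetical estimates
for ty in (10.0, 0.01):
    w = w_log_scale(ty, th_prop, th_add)
    # +/- 3 SD interval after back-transformation to the normal scale
    lo, hi = ty * math.exp(-3 * w), ty * math.exp(3 * w)
    print(ty, round(w, 2), f"{lo:.2e}", f"{hi:.2e}")
```

For TY = 10 the interval is sensible, but for TY = 0.01 the weight is about 100 SDs on the log scale, so simulated DVs span well over a hundred orders of magnitude, which is the pathological behavior described above.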