Andy Bush | 6 Sep 00:53 2007

Specifying random effects for multiple covariates via lmer

While working through the text "Applied Longitudinal Analysis" by
Fitzmaurice, Laird and Ware, I encountered a fairly simple case study (pp
210-7) in which a longitudinal model specifies three random effects: (1)
random intercepts for id, (2) random slopes for covariate1 (Age | id), and
(3) random slopes for covariate2 (log(ht) | id).  I've had no difficulty
formulating lmer models with correlated random intercepts and slopes for
either of the covariates individually but have not succeeded when I try to
compose a model with correlated random intercepts and slopes for two
covariates.

Following is code that works well with the individual covariates separately:

m1 = lmer(LFEV1 ~ Age + loght + InitAge + logbht + (1 + Age | id),
          data = fev, na.action = na.omit, method = "REML")

m2 = lmer(LFEV1 ~ Age + loght + InitAge + logbht + (1 + loght | id),
          data = fev, na.action = na.omit, method = "REML")

I have yet to write code that succeeds with both covariates simultaneously.
I am running V 2.5.1 of R and the most recent version of lme4.

Any advice will be appreciated. If more detail is needed, I can send it but
thought this might be sufficient to pose the problem.


Douglas Bates | 6 Sep 04:22 2007

Re: Specifying random effects for multiple covariates via lmer

On 9/5/07, Andy Bush <ajbush@...> wrote:
> While working through the text "Applied Longitudinal Analysis" by
> Fitzmaurice, Laird and Ware, I encountered a fairly simple case study (pp
> 210-7) in which a longitudinal model specifies three random effects: (1)
> random intercepts for id, (2) random slopes for covariate1 (Age | id), and
> (3) random slopes for covariate2 (log(ht) | id).  I've had no difficulty
> formulating lmer models with correlated random intercepts and slopes for
> either of the covariates individually but have not succeeded when I try to
> compose a model with correlated random intercepts and slopes for two
> covariates.

> Following is code that works well with the individual covariates separately:

> m1=lmer(LFEV1~Age + loght + InitAge + logbht + (1 + Age | id),data=fev,
>        na.action=na.omit, method="REML")

> m2=lmer(LFEV1~Age + loght + InitAge + logbht+(1 + loght | id),data=fev,
>        na.action=na.omit, method="REML")

Maybe I am missing the point, but wouldn't the model you are
considering be written as

lmer(LFEV1 ~ Age + loght + InitAge + logbht + (loght + Age | id),
     data = fev, na.action = na.omit, method = "REML")

That provides correlated random effects for the intercept, the
coefficient for loght and the coefficient for Age at each level of the
id factor.
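
As a minimal sketch (assuming the 'fev' data frame from the original post,
and using the method = "REML" syntax shown elsewhere in this thread), the
suggested fit can be checked by printing its variance components:

## Hypothetical sketch: fit the suggested model and inspect the estimated
## random-effects covariance (intercept, loght and Age within each id).
library(lme4)
m3 <- lmer(LFEV1 ~ Age + loght + InitAge + logbht + (loght + Age | id),
           data = fev, na.action = na.omit, method = "REML")
VarCorr(m3)   # variances and correlations of the three id-level effects
summary(m3)   # fixed effects plus the random-effects summary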


Andy Bush | 6 Sep 13:36 2007

Re: Specifying random effects for multiple covariates via lmer

Thank you for taking the time to try to help me, Douglas. I had already
formulated the suggested model as model m3 shown below.  Please note that id
is defined as "ordered" and all other variables are defined as "numeric".
Here are the models I've tried:

m1=lmer(LFEV1~Age+loght+InitAge+logbht+(1+Age|id),data=fev,
       na.action=na.omit,
       method="REML")
m2=lmer(LFEV1~Age+loght+InitAge+logbht+
       (1+loght|id),data=fev,
       na.action=na.omit,
       method="REML")
m3=lmer(LFEV1~Age+loght+InitAge+logbht+
       (Age+loght|id),
       data=fev,
       na.action=na.omit,
       method="REML")

lmer handles models m1 and m2 with no difficulty (and the results agree
quite closely with what SAS Proc Mixed returns). However, lmer generates
the following warning messages for m3, which led me to wonder whether I
had misspecified the model:

1: Estimated variance-covariance for factor 'id' is singular
 in: `LMEoptimize<-`(`*tmp*`, value = list(maxIter = 200L, tolerance =
1.49011611938477e-08,  
2: nlminb returned message false convergence (8)
 in: `LMEoptimize<-`(`*tmp*`, value = list(maxIter = 200L, tolerance =
1.49011611938477e-08,  
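
One generic thing to try with warnings like these (a hedged suggestion,
not something from the thread itself) is to reduce the correlation among
the columns of the random-effects model matrix, for example by centering
the covariates before refitting:

## Hypothetical sketch, assuming the 'fev' data frame used above: center
## Age and loght so the random intercept is less strongly confounded with
## the random slopes, then refit the three-effect model.
fev$cAge   <- fev$Age   - mean(fev$Age,   na.rm = TRUE)
fev$cloght <- fev$loght - mean(fev$loght, na.rm = TRUE)
m3c <- lmer(LFEV1 ~ cAge + cloght + InitAge + logbht + (cAge + cloght | id),
            data = fev, na.action = na.omit, method = "REML")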


Douglas Bates | 6 Sep 14:31 2007

Re: Specifying random effects for multiple covariates via lmer

On 9/6/07, Andy Bush <ajbush@...> wrote:
> Thank you for taking the time to try to help me, Douglas. I had already
> formulated the suggested model as model m3 shown below.  Please note that id
> is defined as "ordered" and all other variables are defined as "numeric".
> Here are the models I've tried:
>
> m1=lmer(LFEV1~Age+loght+InitAge+logbht+(1+Age|id),data=fev,
>        na.action=na.omit,
>        method="REML")
> m2=lmer(LFEV1~Age+loght+InitAge+logbht+
>        (1+loght|id),data=fev,
>        na.action=na.omit,
>        method="REML")
> m3=lmer(LFEV1~Age+loght+InitAge+logbht+
>        (Age+loght|id),
>        data=fev,
>        na.action=na.omit,
>        method="REML")
>
> lmer handles models m1 and m2 with no difficulty (and the results agree
> quite closely with what SAS Proc Mixed returns). However, lmer generates
> the following warning messages for m3, which led me to wonder whether I
> had misspecified the model:
>
> 1: Estimated variance-covariance for factor 'id' is singular
>  in: `LMEoptimize<-`(`*tmp*`, value = list(maxIter = 200L, tolerance =
> 1.49011611938477e-08,
> 2: nlminb returned message false convergence (8)
>  in: `LMEoptimize<-`(`*tmp*`, value = list(maxIter = 200L, tolerance =
> 1.49011611938477e-08,

Bush, Andrew J | 6 Sep 17:58 2007

lmer vs lmer2

Dear Douglas,

In frustration, I invoked lmer2 this morning and I'm pleased to be able
to tell you that lmer2 copes well and quickly with the model having a
random intercept and two random covariate slopes.  I have not been able
to get lmer to converge for the model on the same data.

Andy

-----Original Message-----
From: dmbates@...
[mailto:dmbates@...] On Behalf Of Douglas
Bates
Sent: Wednesday, September 05, 2007 9:22 PM
To: ajbush@...
Cc: r-sig-mixed-models@...
Subject: Re: [R-sig-ME] Specifying random effects for multiple
covariates via lmer

On 9/5/07, Andy Bush <ajbush@...> wrote:
> While working through the text "Applied Longitudinal Analysis" by
> Fitzmaurice, Laird and Ware, I encountered a fairly simple case study
(pp
> 210-7) in which a longitudinal model specifies three random effects:
(1)
> random intercepts for id, (2) random slopes for covariate1 (Age | id),
and
> (3) random slopes for covariate2 (log(ht) | id).  I've had no
difficulty
> formulating lmer models with correlated random intercepts and slopes

Douglas Bates | 6 Sep 18:17 2007

Re: lmer vs lmer2

On 9/6/07, Bush, Andrew J <abush@...> wrote:
> Dear Douglas,

> In frustration, I invoked lmer2 this morning and I'm pleased to be able
> to tell you that lmer2 copes well and quickly with the model having a
> random intercept and two random covariate slopes.  I have not been able
> to get lmer to converge for the model on the same data.

Thanks for the information.

I expect to remove the confusion between lmer and lmer2 in the near
future.  The development version of the lme4 package has an lmer
function that is close to the current lmer2 in design.  It should
exhibit the same convergence behavior and be slightly faster on models
fit to large data sets than is the current lmer2.

This version has been in development for longer than I had expected.
I still have a few "infelicities" to resolve in the Laplace method for
generalized linear mixed models before I make test versions available.

I would be interested in the data set if you would be willing to
provide it.  I could perhaps incorporate it in the lme4 package so
others would have access to it.

> -----Original Message-----
> From: dmbates@...
[mailto:dmbates@...] On Behalf Of Douglas
> Bates
> Sent: Wednesday, September 05, 2007 9:22 PM
> To: ajbush@...

Martin Maechler | 12 Sep 15:46 2007

Re: lmer vs lmer2

>>>>> "DB" == Douglas Bates <bates@...>
>>>>>     on Thu, 6 Sep 2007 11:17:17 -0500 writes:

    DB> On 9/6/07, Bush, Andrew J <abush@...> wrote:
    >> Dear Douglas,

    >> In frustration, I invoked lmer2 this morning and I'm pleased to be able
    >> to tell you that lmer2 copes well and quickly with the model having a
    >> random intercept and two random covariate slopes.  I have not been able
    >> to get lmer to converge for the model on the same data.

    DB> Thanks for the information.

    DB> I expect to remove the confusion between lmer and lmer2 in the near
    DB> future.  The development version of the lme4 package has an lmer
    DB> function that is close to the current lmer2 in design.  It should
    DB> exhibit the same convergence behavior and be slightly faster on models
    DB> fit to large data sets than is the current lmer2.

    DB> This version has been in development for longer than I had expected.
    DB> I still have a few "infelicities" to resolve in the Laplace method for
    DB> generalized linear mixed models before I make test versions available.

    DB> I would be interested in the data set if you would be willing to
    DB> provide it.  I could perhaps incorporate it in the lme4 package so
    DB> others would have access to it.

Yes, indeed.  
The example might be particularly interesting as a test case, not
only because some software implementations "converge" with

Henric Nilsson (Private | 13 Sep 09:51 2007

Re: lmer vs lmer2

Quoting Martin Maechler <maechler@...>:

>>>>>> "DB" == Douglas Bates <bates@...>
>>>>>>     on Thu, 6 Sep 2007 11:17:17 -0500 writes:
>
>     DB> On 9/6/07, Bush, Andrew J <abush@...> wrote:
>     >> Dear Douglas,
>
>     >> In frustration, I invoked lmer2 this morning and I'm pleased to be
>     >> able to tell you that lmer2 copes well and quickly with the model
>     >> having a random intercept and two random covariate slopes.  I have
>     >> not been able to get lmer to converge for the model on the same data.
>
>     DB> Thanks for the information.
>
>     DB> I expect to remove the confusion between lmer and lmer2 in the near
>     DB> future.  The development version of the lme4 package has an lmer
>     DB> function that is close to the current lmer2 in design.  It should
>     DB> exhibit the same convergence behavior and be slightly faster on
>     DB> models fit to large data sets than is the current lmer2.
>
>     DB> This version has been in development for longer than I had expected.
>     DB> I still have a few "infelicities" to resolve in the Laplace method
>     DB> for generalized linear mixed models before I make test versions
>     DB> available.
>

Reinhold Kliegl | 13 Sep 11:13 2007

Re: lmer vs lmer2

Hi,

There appears to be one very striking outlier on logFEV1. If one  
removes this observation, both lmer and lmer2 converge to almost  
identical fixed-effect estimates. There are, however, substantial  
differences in the random-effect estimates, in particular in the  
covariances. I suspect this reflects instability due to the high  
correlation between Age and Height (0.89).
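
That correlation can be checked with a one-liner (hypothetical snippet,
assuming the data frame 'a' used in the code below):

## Correlation between the two covariates that share a random slope;
## reported above as about 0.89.
with(a, cor(Age, Height, use = "complete.obs"))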

I was also wondering why you included initial age and initial height
as separate predictors. As far as I could see, these values are always
identical to the first measurements of Height and Age within each
subject.
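
A quick way to verify that claim (hypothetical code, assuming the rows
of 'a' are ordered by measurement occasion within each ID):

## Compare each subject's first recorded Age/Height with the Init* columns.
firstAge    <- ave(a$Age,    a$ID, FUN = function(x) x[1])
firstHeight <- ave(a$Height, a$ID, FUN = function(x) x[1])
all(firstAge == a$InitAge)        # TRUE if InitAge is just the first Age
all(firstHeight == a$InitHeight)  # TRUE if InitHeight is just the first Height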

Best
Reinhold

 > a <- a[a$LogFEV1 > -0.5, ]
 > lmer2(LogFEV1 ~ Height + Age + (Height + Age | ID), data = a)
Linear mixed-effects model fit by REML
Formula: LogFEV1 ~ Height + Age + (Height + Age | ID)
    Data: a
    AIC   BIC logLik MLdeviance REMLdeviance
  -4644 -4593   2331      -4688        -4662
Random effects:
  Groups   Name        Variance   Std.Dev.  Corr
  ID       (Intercept) 6.3740e-02 0.2524674
           Height      3.4275e-02 0.1851349 -0.919
           Age         1.1319e-05 0.0033643 -0.011 -0.176
  Residual             3.3700e-03 0.0580516

Reinhold Kliegl | 13 Sep 12:32 2007

Re: lmer vs lmer2

Hi,

Actually, I just took a look at the SAS code. They also removed ID=197!

They also included InitHeight and InitAge as subject-level  
predictors. If one runs the following model

lmer(LogFEV1 ~ Height + Age + InitHeight + InitAge + (Height + Age | ID),
     data = a, subset = ID != 197)
or
lmer2(LogFEV1 ~ Height + Age + InitHeight + InitAge + (Height + Age | ID),
      data = a, subset = ID != 197),

one can compare AIC and BIC values for SAS and lmer.

Interestingly, lmer and lmer2 yield smaller AIC (-4632) and BIC
(-4570) values than what is listed in the SAS output (-4576; -4550).
Smaller is better. Does this also mean that the lmer estimates are
"better"?

Best
Reinhold

On Sep 13, 2007, at 9:51 AM, Henric Nilsson (Private) wrote:

> Quoting Martin Maechler <maechler@...>:
>
>>>>>>> "DB" == Douglas Bates <bates@...>
>>>>>>>     on Thu, 6 Sep 2007 11:17:17 -0500 writes:
>>

Andrew Robinson | 13 Sep 12:53 2007

[OT] was: lmer vs lmer2 is: great book!

On Thu, Sep 13, 2007 at 09:51:17AM +0200, Henric Nilsson (Private) wrote:
> The data set in question, and, I believe, most others from Fitzmaurice,
> Laird and Ware's (2004) book on longitudinal data analysis, is

Which, I would like to add, is a great book.  As part of a review for
Statistics in Medicine I asked Fitzmaurice to count the occurrences of
"For example," and "In other words," ... it was 300 and 400 times,
respectively.

I heartily recommend it as an introductory text to the area.

Andrew

> available along with accompanying SAS programs at
> 
> http://biosun1.harvard.edu/~fitzmaur/ala/
> 
> In particular, the data used above is here
> 
> http://biosun1.harvard.edu/~fitzmaur/ala/fev1.txt
> 
> and the SAS code is here
> 
> http://biosun1.harvard.edu/~fitzmaur/ala/prog8_8.html
> 
> 
> HTH,
> Henric
> 

dave fournier | 5 Oct 07:46 2007

lmer vs lmer2


Hi,

I checked this example out with ADMB-RE using a modification of our
glmmADMB program and have found the following:

1) Parameter estimates with ADMB-RE are stable, and I get almost the same
   ones with or without the group 177 observations.

2) I get almost exactly the same LL estimate as SAS.

3) My estimates for the fixed effects are similar to those in lmer2,
   except for the Intercept.

Here are the estimates for lmer2 without group 177
    Estimate Std. Error t value
(Intercept) -1.948119   0.095877  -20.32
Height       1.640650   0.032800   50.02
Age          0.019379   0.001310   14.79
InitHeight   0.143977   0.111043    1.30
InitAge     -0.014618   0.007501   -1.95

these are the ADMB-RE estimates without group 177
  LL = 2294.85
   real_b           -2.0369e+000 1.0393e-001
   real_b           1.6460e+000 3.4587e-002
   real_b           1.9275e-002 1.3685e-003
   real_b           2.4857e-001 1.1984e-001

Douglas Bates | 5 Oct 00:41 2007

Re: lmer vs lmer2

On 10/5/07, dave fournier <otter@...> wrote:
>
> Hi,
>
> I checked this example out with ADMB-RE using a modification of
> our glmmADMB program  and have found the following:
>
> 1)
>
> Parameter estimates with ADMB-RE are stable and
> I get almost the same ones with or without the group 177 observations.
>
> 2) I get almost exactly the same LL estimate as SAS.
>
> 3) My estimates  for the fixed effects are similar to those in
>     lmer2 except for the Intercept
>
> Here are the estimates for lmer2 without group 177
>     Estimate Std. Error t value
> (Intercept) -1.948119   0.095877  -20.32
> Height       1.640650   0.032800   50.02
> Age          0.019379   0.001310   14.79
> InitHeight   0.143977   0.111043    1.30
> InitAge     -0.014618   0.007501   -1.95
>
> these are the ADMB-RE estimates without group 177
>   LL = 2294.85
>    real_b           -2.0369e+000 1.0393e-001
>    real_b           1.6460e+000 3.4587e-002
>    real_b           1.9275e-002 1.3685e-003

dave fournier | 5 Oct 15:48 2007

lmer vs lmer2

Thanks for that Doug, and I apologize for my bad eyesight.
I really can't see the screen in my old age!

It was unfortunate that, when I removed the wrong observations from the
data, the LL turned out to be almost identical to the one from the SAS
analysis.

Doing it properly, when I remove the observations for group 197 from
the analysis, I obtain the estimates

   real_b           -1.9486e+00 9.5787e-02
   real_b            1.6408e+00 3.3554e-02
   real_b            1.9368e-02 1.3501e-03
   real_b            1.4427e-01 1.1077e-01
   real_b           -1.4614e-02 7.4902e-03

which are identical to lmer2, for all practical purposes:

(Intercept) -1.948119   0.095877  -20.32
Height       1.640650   0.032800   50.02
Age          0.019379   0.001310   14.79
InitHeight   0.143977   0.111043    1.30
InitAge     -0.014618   0.007501   -1.95

However, what I was interested in was the application of slightly robust
methods in NLMMs (once you go robust, the model is nonlinear even if the
original model is linear).  So I fit the entire data set using a
conservative robust likelihood,

