Outline

  1. A simplified HCAHPS measurement model
  2. Robust Maximum Likelihood estimation
  3. Alternatives: Weighted Least Squares
  4. Issues with missing data

Overview

This week, we turn our attention to something important for scale development: the nature of item scores.

SEM and CFA were originally developed using normal (distribution) theory. That is, variables were assumed to be multivariate normal (and therefore continuous), since that makes Maximum Likelihood Estimation (MLE) a lot easier.

Let’s first start with an example.

1. A simplified HCAHPS measurement model

Though the HCAHPS questions probably align with several underlying constructs, we’re going to focus on only three of them:

  • Nurse Communication (3 items)
    • CMS_1_num: did nurses treat you with courtesy and respect?
    • CMS_2_num: did nurses listen carefully to you?
    • CMS_3_num: did nurses explain things in a way you could understand?
  • Doctor Communication (3 items)
    • CMS_6_num: did doctors treat you with courtesy and respect?
    • CMS_7_num: did doctors listen carefully to you?
    • CMS_8_num: did doctors explain things in a way you could understand?
  • Hospital Environment (2 items)
    • CMS_10_num: were your room and bathroom kept clean?
    • CMS_11_num: was the area around your room quiet at night?

Recall period: “During your hospital stay…”

Answer Choices for all items

  1. Never
  2. Sometimes
  3. Usually
  4. Always

Normal v. ordinal

To say that a variable is normally distributed, we assume that each person’s vector of scores is a random sample from a multivariate normal distribution:

\[ \mathbf{y}_i \sim \mathcal{N}(\boldsymbol{\mu}, \boldsymbol{\Sigma}) \]

which leads to convenient forms for statistical inference.

However, let’s see what our data look like. For example, here’s the histogram and density plots for CMS_11: was the area around your room quiet at night?
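The cahps data aren’t reproduced here, but a quick sketch with simulated responses (the proportions are made up to mimic the ceiling effect typical of these items, not the real CAHPS margins) shows the shape such a plot takes:

```r
# Simulated 4-category responses with a strong ceiling effect
# (hypothetical proportions, not the real CAHPS data)
set.seed(1)
x <- sample(1:4, size = 500, replace = TRUE,
            prob = c(0.04, 0.10, 0.33, 0.53))
hist(x, breaks = 0:4, main = "A skewed 4-category item")
```

Most of the mass piles up at 4 ("Always"), which is nothing like a normal density.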

So we know that the assumptions are violated here. The question is: what do we do about it?

Three options:

  1. Ignore it (not recommended!)
  2. Use a “robust” ML estimator (probably OK)
  3. Use an alternative based on categorical theory (the best IMO)

Estimation in CFA and SEM

The nuts and bolts of estimation: two estimates of the population covariance matrix of the measured variables are computed:

  • \(\mathbf{S}\) the actual covariance matrix of the data
  • \(\Sigma(\theta)\) the model-implied covariance matrix (using the path diagram)
    • This matrix depends on the parameter estimates, \(\mathbf{\theta}\)

In MLE, maximizing the likelihood is equivalent to minimizing the following discrepancy function:

\[ \hat{F} = \log\lvert\hat{\Sigma}(\theta)\rvert + \operatorname{tr}\!\left[\mathbf{S}\,\hat{\Sigma}(\theta)^{-1}\right] - \log\lvert\mathbf{S}\rvert - p \]

where \(p =\) the number of observed variables.

The test statistic \(T = (N-1)f\), where \(f\) is the minimum of \(\hat{F}\), has a large-sample \(\chi^{2}\) distribution with \(df = p^{*} - q\), where \(p^{*} = p(p+1)/2\) is the number of non-redundant entries in the sample covariance matrix and \(q\) is the number of free parameters, but only with these assumptions:

  1. Observed variables have a multivariate normal distribution (or at least bivariate normal pairs)
  2. N is large enough (for asymptotic convergence of \(T \rightarrow \chi^{2}\))
  3. Parameters are not on the boundary (e.g., a variance of \(0\))
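To make the discrepancy function concrete, here’s a minimal base-R sketch with made-up 2 × 2 matrices (not from the CAHPS data):

```r
# Hypothetical sample and model-implied covariance matrices
S     <- matrix(c(1.0, 0.5, 0.5, 1.0), nrow = 2)
Sigma <- matrix(c(1.0, 0.4, 0.4, 1.0), nrow = 2)
p     <- nrow(S)  # number of observed variables

# The ML discrepancy: 0 when Sigma reproduces S exactly, > 0 otherwise
F_ml <- log(det(Sigma)) + sum(diag(S %*% solve(Sigma))) -
  log(det(S)) - p
F_ml
```

Plugging in `Sigma <- S` gives exactly 0, the “perfect fit” baseline that the \(\chi^{2}\) test works from.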

Who cares?

  • If variables are not multivariate normal, \(T \nrightarrow \chi^{2}\), so p-values and any fit statistic based on \(T\) will be incorrect
  • There are further issues with using the ordinary estimate of \(\mathbf{S}\) when the data are not normal

Now let’s look at our model:

mod1 <- "
  # Latents
  NurseCom =~ CMS_1_num + CMS_2_num + CMS_3_num
  DocCom =~ CMS_6_num + CMS_7_num + CMS_8_num
  Environ =~ CMS_10_num + CMS_11_num
"

# load lavaan, then fit the model and see results
library(lavaan)
mod1_fit <- cfa(mod1, data = cahps)

summary(mod1_fit, fit.measures = T, standardized = T)
## lavaan 0.6-19 ended normally after 44 iterations
## 
##   Estimator                                         ML
##   Optimization method                           NLMINB
##   Number of model parameters                        19
## 
##                                                   Used       Total
##   Number of observations                           940        1000
## 
## Model Test User Model:
##                                                       
##   Test statistic                               131.527
##   Degrees of freedom                                17
##   P-value (Chi-square)                           0.000
## 
## Model Test Baseline Model:
## 
##   Test statistic                              3824.869
##   Degrees of freedom                                28
##   P-value                                        0.000
## 
## User Model versus Baseline Model:
## 
##   Comparative Fit Index (CFI)                    0.970
##   Tucker-Lewis Index (TLI)                       0.950
## 
## Loglikelihood and Information Criteria:
## 
##   Loglikelihood user model (H0)              -4940.392
##   Loglikelihood unrestricted model (H1)      -4874.628
##                                                       
##   Akaike (AIC)                                9918.783
##   Bayesian (BIC)                             10010.855
##   Sample-size adjusted Bayesian (SABIC)       9950.512
## 
## Root Mean Square Error of Approximation:
## 
##   RMSEA                                          0.085
##   90 Percent confidence interval - lower         0.072
##   90 Percent confidence interval - upper         0.098
##   P-value H_0: RMSEA <= 0.050                    0.000
##   P-value H_0: RMSEA >= 0.080                    0.730
## 
## Standardized Root Mean Square Residual:
## 
##   SRMR                                           0.032
## 
## Parameter Estimates:
## 
##   Standard errors                             Standard
##   Information                                 Expected
##   Information saturated (h1) model          Structured
## 
## Latent Variables:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   NurseCom =~                                                           
##     CMS_1_num         1.000                               0.359    0.806
##     CMS_2_num         1.366    0.048   28.626    0.000    0.490    0.876
##     CMS_3_num         1.302    0.049   26.356    0.000    0.467    0.802
##   DocCom =~                                                             
##     CMS_6_num         1.000                               0.390    0.813
##     CMS_7_num         1.425    0.047   30.232    0.000    0.556    0.898
##     CMS_8_num         1.282    0.047   27.394    0.000    0.500    0.809
##   Environ =~                                                            
##     CMS_10_num        1.000                               0.501    0.676
##     CMS_11_num        0.853    0.079   10.754    0.000    0.427    0.523
## 
## Covariances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   NurseCom ~~                                                           
##     DocCom            0.095    0.007   13.985    0.000    0.679    0.679
##     Environ           0.122    0.010   11.789    0.000    0.682    0.682
##   DocCom ~~                                                             
##     Environ           0.122    0.011   11.155    0.000    0.623    0.623
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##    .CMS_1_num         0.069    0.004   16.118    0.000    0.069    0.350
##    .CMS_2_num         0.073    0.006   11.977    0.000    0.073    0.233
##    .CMS_3_num         0.121    0.007   16.283    0.000    0.121    0.356
##    .CMS_6_num         0.078    0.005   16.312    0.000    0.078    0.340
##    .CMS_7_num         0.074    0.007   10.680    0.000    0.074    0.194
##    .CMS_8_num         0.132    0.008   16.453    0.000    0.132    0.345
##    .CMS_10_num        0.298    0.026   11.366    0.000    0.298    0.543
##    .CMS_11_num        0.484    0.028   17.541    0.000    0.484    0.726
##     NurseCom          0.129    0.009   14.256    0.000    1.000    1.000
##     DocCom            0.152    0.010   14.511    0.000    1.000    1.000
##     Environ           0.251    0.031    8.139    0.000    1.000    1.000

Let’s unpack some more estimation issues.

With multivariate normal data, in addition to what was stated above:

  • the observed covariance matrix \(\mathbf{S}\) computed from the data is a consistent estimator of the population covariance matrix \(\mathbf{\Sigma}\).
  • That means that as \(N \to \infty\), the probability that \(\mathbf{S}\) differs from \(\mathbf{\Sigma}\) by more than any fixed amount goes to \(0\).
  • If the variables are not normal, this consistency may not hold.

This means that we may not get a good estimate of the covariance matrix from our data, so our \(\chi^{2}\) test might be inaccurate, along with everything built on it (such as RMSEA) and on the residuals (SRMR).

  • Sometimes these violations aren’t too bad, but if you want to be sure…

2. Robust Maximum Likelihood Estimation

Robust estimation

  • Used in regression and other areas to deal with violations of normality (or other distributional assumptions)
  • Generally, the same estimation procedure is used, but standard errors and test statistics are adjusted
    • The effect is usually to reduce bias in standard errors, typically making them larger
    • The effect on \(\chi^{2}\) tests is a little more complicated

Though this is good, robust techniques were derived to deal with violations of normality with continuous data, not ordinal-categorical data. Regardless, let’s see what happens when we use a robust version of ML estimation.

mod1_robust <- cfa(mod1, data = cahps, estimator = "MLR")
summary(mod1_robust, fit.measures = T, standardized = T)
## lavaan 0.6-19 ended normally after 44 iterations
## 
##   Estimator                                         ML
##   Optimization method                           NLMINB
##   Number of model parameters                        19
## 
##                                                   Used       Total
##   Number of observations                           940        1000
## 
## Model Test User Model:
##                                               Standard      Scaled
##   Test Statistic                               131.527      76.636
##   Degrees of freedom                                17          17
##   P-value (Chi-square)                           0.000       0.000
##   Scaling correction factor                                  1.716
##     Yuan-Bentler correction (Mplus variant)                       
## 
## Model Test Baseline Model:
## 
##   Test statistic                              3824.869    2079.849
##   Degrees of freedom                                28          28
##   P-value                                        0.000       0.000
##   Scaling correction factor                                  1.839
## 
## User Model versus Baseline Model:
## 
##   Comparative Fit Index (CFI)                    0.970       0.971
##   Tucker-Lewis Index (TLI)                       0.950       0.952
##                                                                   
##   Robust Comparative Fit Index (CFI)                         0.973
##   Robust Tucker-Lewis Index (TLI)                            0.955
## 
## Loglikelihood and Information Criteria:
## 
##   Loglikelihood user model (H0)              -4940.392   -4940.392
##   Scaling correction factor                                  2.518
##       for the MLR correction                                      
##   Loglikelihood unrestricted model (H1)      -4874.628   -4874.628
##   Scaling correction factor                                  2.140
##       for the MLR correction                                      
##                                                                   
##   Akaike (AIC)                                9918.783    9918.783
##   Bayesian (BIC)                             10010.855   10010.855
##   Sample-size adjusted Bayesian (SABIC)       9950.512    9950.512
## 
## Root Mean Square Error of Approximation:
## 
##   RMSEA                                          0.085       0.061
##   90 Percent confidence interval - lower         0.072       0.051
##   90 Percent confidence interval - upper         0.098       0.072
##   P-value H_0: RMSEA <= 0.050                    0.000       0.040
##   P-value H_0: RMSEA >= 0.080                    0.730       0.002
##                                                                   
##   Robust RMSEA                                               0.080
##   90 Percent confidence interval - lower                     0.062
##   90 Percent confidence interval - upper                     0.099
##   P-value H_0: Robust RMSEA <= 0.050                         0.003
##   P-value H_0: Robust RMSEA >= 0.080                         0.524
## 
## Standardized Root Mean Square Residual:
## 
##   SRMR                                           0.032       0.032
## 
## Parameter Estimates:
## 
##   Standard errors                             Sandwich
##   Information bread                           Observed
##   Observed information based on                Hessian
## 
## Latent Variables:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   NurseCom =~                                                           
##     CMS_1_num         1.000                               0.359    0.806
##     CMS_2_num         1.366    0.078   17.465    0.000    0.490    0.876
##     CMS_3_num         1.302    0.082   15.781    0.000    0.467    0.802
##   DocCom =~                                                             
##     CMS_6_num         1.000                               0.390    0.813
##     CMS_7_num         1.425    0.079   17.965    0.000    0.556    0.898
##     CMS_8_num         1.282    0.082   15.673    0.000    0.500    0.809
##   Environ =~                                                            
##     CMS_10_num        1.000                               0.501    0.676
##     CMS_11_num        0.853    0.092    9.320    0.000    0.427    0.523
## 
## Covariances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   NurseCom ~~                                                           
##     DocCom            0.095    0.014    6.983    0.000    0.679    0.679
##     Environ           0.122    0.018    6.710    0.000    0.682    0.682
##   DocCom ~~                                                             
##     Environ           0.122    0.016    7.775    0.000    0.623    0.623
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##    .CMS_1_num         0.069    0.007    9.801    0.000    0.069    0.350
##    .CMS_2_num         0.073    0.011    6.907    0.000    0.073    0.233
##    .CMS_3_num         0.121    0.013    9.344    0.000    0.121    0.356
##    .CMS_6_num         0.078    0.008    9.490    0.000    0.078    0.340
##    .CMS_7_num         0.074    0.011    6.651    0.000    0.074    0.194
##    .CMS_8_num         0.132    0.012   10.667    0.000    0.132    0.345
##    .CMS_10_num        0.298    0.034    8.677    0.000    0.298    0.543
##    .CMS_11_num        0.484    0.034   14.326    0.000    0.484    0.726
##     NurseCom          0.129    0.020    6.460    0.000    1.000    1.000
##     DocCom            0.152    0.022    6.974    0.000    1.000    1.000
##     Environ           0.251    0.038    6.574    0.000    1.000    1.000

If you look closely, the unstandardized factor loadings are identical between the two models, but the robust standard errors are substantially larger (roughly 1.5 to 2 times). Similarly, the \(\chi^{2}\) section now includes a scaled version of the test, and there are robust versions of the other fit statistics.
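A quick sanity check on the output: the scaled statistic is (up to rounding of the printed correction factor) just the standard statistic divided by the scaling correction factor:

```r
# Standard test statistic divided by the Yuan-Bentler scaling
# correction factor (values copied from the lavaan output above)
131.527 / 1.716  # approximately 76.6, matching the Scaled column
```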

3. Weighted Least Squares

An alternative to maximum likelihood estimation is weighted least squares.

Using this approach requires a different conceptualization of the model.

Muthén (e.g., 1984) developed an approach based on the “underlying latent response distribution” model (see the Edwards et al. chapter).

The basic idea is to introduce a new latent response variable, \(y^{*}\), for each indicator:

The arrows from the small circles to the rectangles are usually denoted with squiggly lines. This is because they’re not continuous regression type paths, but rather a set of thresholds:

\[ y = \begin{cases} 1 & \text{if } y^{*} \leq \tau_{1} \\ 2 & \text{if } \tau_{1} < y^{*} \leq \tau_{2} \\ . & . \\ . & . \\ C-1 & \text{if } \tau_{C-2} < y^{*} \leq \tau_{C-1} \\ C & \text{if } \tau_{C-1} < y^{*} \end{cases} \]

Using these thresholds, special kinds of correlations between ordinal categorical variables can be computed.

  • Binary with binary: tetrachoric
  • Ordinal with ordinal: polychoric
  • Ordinal with continuous: polyserial

They are estimated from the contingency tables and the estimated thresholds via a fairly involved numerical process. This matrix of correlations then plays the role of the sample matrix, \(\mathbf{S^{*}}\).
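The threshold estimates in the lavaan output below follow this logic: if \(y^{*}\) is standard normal, each threshold is just the normal quantile of the cumulative category proportion. A minimal sketch with hypothetical proportions:

```r
# Hypothetical marginal proportions for a 4-category item
props <- c(0.02, 0.08, 0.25, 0.65)

# C categories need C - 1 thresholds; each is the standard normal
# quantile of the cumulative proportion below it
tau <- qnorm(cumsum(props)[1:3])
round(tau, 3)  # -2.054 -1.282 -0.385
```

The real estimation is more complicated (thresholds and correlations are estimated jointly from pairs of items), but the univariate logic is exactly this.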

WLS discrepancy function

WLS minimizes a quite general discrepancy function:

\[ F_{WLS} = (\mathbf{s} - \hat{\sigma})^{'}\mathbf{W}^{-1}(\mathbf{s} - \hat{\sigma}) \]

Here, \(\mathbf{s}\) contains the unique (non-redundant) elements of the sample correlation matrix \(\mathbf{S^{*}}\), computed with the tetrachoric/polychoric methods above, \(\hat{\sigma}\) contains the corresponding elements of the model-implied matrix, and \(\mathbf{W}\) is a weight matrix (full WLS uses the asymptotic covariance matrix of \(\mathbf{s}\); DWLS, used below, keeps only its diagonal).

Again, a \(T\) based on this function can converge in large samples to a \(\chi^{2}\), with no assumption of normality for the observed scores, only for the latent response variables.
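As a sketch of the quadratic form, with made-up values (three unique correlations and a purely illustrative diagonal weight matrix, DWLS-style):

```r
# Unique sample correlations and their model-implied counterparts
# (hypothetical values, not from the CAHPS model)
s         <- c(0.50, 0.40, 0.30)
sigma_hat <- c(0.48, 0.42, 0.31)

# Hypothetical diagonal weight matrix, as DWLS would use
W <- diag(c(0.010, 0.010, 0.012))

# Quadratic-form discrepancy: small when the model reproduces s
F_wls <- t(s - sigma_hat) %*% solve(W) %*% (s - sigma_hat)
F_wls
```

Entries of \(\mathbf{s}\) that are estimated more precisely get smaller weights in \(\mathbf{W}\), and so their residuals count more in the discrepancy.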

To summarize:

  • The use of the “underlying variables” approach alters both how the covariance matrix is estimated based on the data and the discrepancy function.
  • Standard errors are affected as a side effect of how the discrepancy function operates
  • All other fit indices are altered as they either depend on the discrepancy function or \(T\)

Here’s our model estimated with weighted least squares:

mod1_ordinal <- cfa(mod1, data = cahps, ordered = TRUE, estimator = "DWLS")
summary(mod1_ordinal, standardized = TRUE, fit.measures = TRUE)
## lavaan 0.6-19 ended normally after 25 iterations
## 
##   Estimator                                       DWLS
##   Optimization method                           NLMINB
##   Number of model parameters                        35
## 
##                                                   Used       Total
##   Number of observations                           940        1000
## 
## Model Test User Model:
##                                                       
##   Test statistic                                26.800
##   Degrees of freedom                                17
##   P-value (Chi-square)                           0.061
## 
## Model Test Baseline Model:
## 
##   Test statistic                             18128.072
##   Degrees of freedom                                28
##   P-value                                        0.000
## 
## User Model versus Baseline Model:
## 
##   Comparative Fit Index (CFI)                    0.999
##   Tucker-Lewis Index (TLI)                       0.999
## 
## Root Mean Square Error of Approximation:
## 
##   RMSEA                                          0.025
##   90 Percent confidence interval - lower         0.000
##   90 Percent confidence interval - upper         0.042
##   P-value H_0: RMSEA <= 0.050                    0.995
##   P-value H_0: RMSEA >= 0.080                    0.000
## 
## Standardized Root Mean Square Residual:
## 
##   SRMR                                           0.030
## 
## Parameter Estimates:
## 
##   Parameterization                               Delta
##   Standard errors                             Standard
##   Information                                 Expected
##   Information saturated (h1) model        Unstructured
## 
## Latent Variables:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   NurseCom =~                                                           
##     CMS_1_num         1.000                               0.928    0.928
##     CMS_2_num         1.036    0.028   37.134    0.000    0.961    0.961
##     CMS_3_num         0.973    0.023   41.615    0.000    0.903    0.903
##   DocCom =~                                                             
##     CMS_6_num         1.000                               0.923    0.923
##     CMS_7_num         1.036    0.027   37.800    0.000    0.956    0.956
##     CMS_8_num         0.980    0.022   43.590    0.000    0.904    0.904
##   Environ =~                                                            
##     CMS_10_num        1.000                               0.750    0.750
##     CMS_11_num        0.803    0.043   18.715    0.000    0.602    0.602
## 
## Covariances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   NurseCom ~~                                                           
##     DocCom            0.657    0.019   34.119    0.000    0.767    0.767
##     Environ           0.527    0.023   22.953    0.000    0.758    0.758
##   DocCom ~~                                                             
##     Environ           0.502    0.023   21.799    0.000    0.725    0.725
## 
## Thresholds:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##     CMS_1_num|t1     -3.072    0.299  -10.290    0.000   -3.072   -3.072
##     CMS_1_num|t2     -1.868    0.081  -23.058    0.000   -1.868   -1.868
##     CMS_1_num|t3     -1.153    0.052  -21.965    0.000   -1.153   -1.153
##     CMS_2_num|t1     -2.727    0.190  -14.334    0.000   -2.727   -2.727
##     CMS_2_num|t2     -1.666    0.070  -23.818    0.000   -1.666   -1.666
##     CMS_2_num|t3     -0.722    0.045  -16.039    0.000   -0.722   -0.722
##     CMS_3_num|t1     -2.386    0.130  -18.423    0.000   -2.386   -2.386
##     CMS_3_num|t2     -1.635    0.068  -23.865    0.000   -1.635   -1.635
##     CMS_3_num|t3     -0.764    0.046  -16.779    0.000   -0.764   -0.764
##     CMS_6_num|t1     -2.727    0.190  -14.334    0.000   -2.727   -2.727
##     CMS_6_num|t2     -1.839    0.079  -23.215    0.000   -1.839   -1.839
##     CMS_6_num|t3     -1.036    0.050  -20.738    0.000   -1.036   -1.036
##     CMS_7_num|t1     -2.343    0.124  -18.915    0.000   -2.343   -2.343
##     CMS_7_num|t2     -1.498    0.063  -23.840    0.000   -1.498   -1.498
##     CMS_7_num|t3     -0.754    0.045  -16.595    0.000   -0.754   -0.754
##     CMS_8_num|t1     -2.303    0.119  -19.350    0.000   -2.303   -2.303
##     CMS_8_num|t2     -1.549    0.065  -23.895    0.000   -1.549   -1.549
##     CMS_8_num|t3     -0.678    0.045  -15.231    0.000   -0.678   -0.678
##     CMS_10_num|t1    -1.951    0.087  -22.546    0.000   -1.951   -1.951
##     CMS_10_num|t2    -1.288    0.056  -23.012    0.000   -1.288   -1.288
##     CMS_10_num|t3    -0.423    0.042  -10.009    0.000   -0.423   -0.423
##     CMS_11_num|t1    -1.746    0.074  -23.608    0.000   -1.746   -1.746
##     CMS_11_num|t2    -1.055    0.050  -20.952    0.000   -1.055   -1.055
##     CMS_11_num|t3     0.011    0.041    0.261    0.794    0.011    0.011
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##    .CMS_1_num         0.139                               0.139    0.139
##    .CMS_2_num         0.076                               0.076    0.076
##    .CMS_3_num         0.185                               0.185    0.185
##    .CMS_6_num         0.149                               0.149    0.149
##    .CMS_7_num         0.087                               0.087    0.087
##    .CMS_8_num         0.182                               0.182    0.182
##    .CMS_10_num        0.437                               0.437    0.437
##    .CMS_11_num        0.637                               0.637    0.637
##     NurseCom          0.861    0.027   31.606    0.000    1.000    1.000
##     DocCom            0.851    0.028   30.763    0.000    1.000    1.000
##     Environ           0.563    0.053   10.622    0.000    1.000    1.000

An epilogue on missing data

Because the tetrachoric and polychoric correlations are based on contingency tables, missing data on particular items within particular groups can cause problems:

Let’s see this for the CAHPS items across white and non-white patients.

Here’s missingness for White patients:

## $CMS_1_num
## 
##           1           2           3           4        <NA> 
## 0.001136364 0.028409091 0.093181818 0.865909091 0.011363636 
## 
## $CMS_2_num
## 
##           1           2           3           4        <NA> 
## 0.003409091 0.044318182 0.192045455 0.746590909 0.013636364 
## 
## $CMS_3_num
## 
##           1           2           3           4        <NA> 
## 0.007954545 0.045454545 0.168181818 0.764772727 0.013636364 
## 
## $CMS_6_num
## 
##           1           2           3           4        <NA> 
## 0.003409091 0.028409091 0.120454545 0.832954545 0.014772727 
## 
## $CMS_7_num
## 
##          1          2          3          4       <NA> 
## 0.01136364 0.05795455 0.16022727 0.75795455 0.01250000 
## 
## $CMS_8_num
## 
##          1          2          3          4       <NA> 
## 0.01363636 0.05227273 0.18409091 0.73181818 0.01818182 
## 
## $CMS_10_num
## 
##          1          2          3          4       <NA> 
## 0.02613636 0.07272727 0.23863636 0.64204545 0.02045455 
## 
## $CMS_11_num
## 
##          1          2          3          4       <NA> 
## 0.04090909 0.10454545 0.36477273 0.47045455 0.01931818

Here it is for non-White patients:

## $CMS_1_num
## 
##           2           3           4        <NA> 
## 0.033333333 0.116666667 0.841666667 0.008333333 
## 
## $CMS_2_num
## 
##           2           3           4        <NA> 
## 0.033333333 0.183333333 0.775000000 0.008333333 
## 
## $CMS_3_num
## 
##           1           2           3           4        <NA> 
## 0.008333333 0.025000000 0.200000000 0.758333333 0.008333333 
## 
## $CMS_6_num
## 
##           2           3           4        <NA> 
## 0.033333333 0.091666667 0.866666667 0.008333333 
## 
## $CMS_7_num
## 
##          2          3          4       <NA> 
## 0.05000000 0.12500000 0.80833333 0.01666667 
## 
## $CMS_8_num
## 
##          2          3          4       <NA> 
## 0.04166667 0.17500000 0.75833333 0.02500000 
## 
## $CMS_10_num
## 
##          1          2          3          4       <NA> 
## 0.03333333 0.06666667 0.19166667 0.67500000 0.03333333 
## 
## $CMS_11_num
## 
##         1         2         3         4      <NA> 
## 0.0250000 0.1000000 0.2833333 0.5666667 0.0250000
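The tables above show the practical problem: for several items (e.g., CMS_1, CMS_2, CMS_6), non-White patients never used category 1, so those items effectively have fewer categories in that group, and any contingency table involving them has an empty row. A small sketch with hypothetical responses:

```r
# Hypothetical responses from two subgroups on the same 4-category item
white    <- factor(c(1, 2, 3, 4, 4, 3, 4, 4), levels = 1:4)
nonwhite <- factor(c(2, 3, 4, 4, 3, 4, 4, 4), levels = 1:4)

table(white)     # all four categories observed
table(nonwhite)  # category 1 has a zero count, so one fewer
                 # threshold can be estimated for this group
```

When the category sets differ across groups, the thresholds (and hence the polychoric correlations) are not directly comparable, which matters for any multi-group analysis.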

Next week, we’ll connect what we did today to a different way to think about items and responses.