An analogy may help give a context for Heywood cases: ML
estimation (and related methods) is like a religious fanatic in that it so
believes the model's specification that
it will do anything, no matter how implausible, to force the model on the data
(e.g., an estimated correlation > 1.0). Note that some SEM computer programs do
not permit certain Heywood cases to appear in the solution. For example, EQS
does not allow the estimate of an error variance to be less than zero; that is,
it sets a lower bound of zero (i.e., an inequality constraint) that prevents a
negative variance estimate. However, solutions in which one or more estimates
have been constrained by the computer to prevent an illogical value may
indicate a problem (i.e., they should not be trusted). Researchers should also
attempt to determine the source of the problem instead of constraining an error
variance to be positive in a computer program for SEM and then rerunning the
analysis (Chen et al., 2001).
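To make that diagnostic step concrete, the following is a minimal sketch in Python (plain pandas, not tied to any particular SEM program; the table layout, column names, and parameter labels are hypothetical illustrations) of scanning a table of estimates for Heywood cases:

import pandas as pd

def flag_heywood(estimates: pd.DataFrame) -> pd.DataFrame:
    """Return rows whose estimates are illogical (Heywood cases).

    Assumes a hypothetical layout with columns 'parameter',
    'type' ('variance' or 'correlation'), and 'estimate'.
    """
    neg_var = (estimates["type"] == "variance") & (estimates["estimate"] < 0)
    oob_cor = (estimates["type"] == "correlation") & (estimates["estimate"].abs() > 1.0)
    return estimates[neg_var | oob_cor]

# Hypothetical estimates as an SEM program might report them
table = pd.DataFrame({
    "parameter": ["e1", "e2", "F1<->F2"],
    "type": ["variance", "variance", "correlation"],
    "estimate": [0.42, -0.07, 1.08],
})
print(flag_heywood(table))  # flags e2 (negative variance) and F1<->F2 (> 1.0)

Flagging such estimates is only the first step; as noted above, the source of the problem still needs to be diagnosed.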
The ML
method is generally both scale free and scale invariant. The former means that if a variable's
scale is linearly transformed, a parameter estimated for the transformed
variable can be algebraically converted back to the original metric. The latter
means the value of the ML fitting function in a particular sample remains the
same regardless of the scale of the observed variables (Kaplan, 2000). However,
ML estimation may lose these properties if a correlation matrix is analyzed
instead of a covariance matrix. Some special methods to correctly analyze a
correlation matrix are discussed in Chapter 7.
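Scale invariance can be checked numerically. The standard ML fitting function can be written F_ML = ln|Sigma| + tr(S Sigma^-1) - ln|S| - p, where S is the sample covariance matrix, Sigma is the model-implied covariance matrix, and p is the number of observed variables. The short sketch below (NumPy, with made-up matrices rather than output from a real model) rescales one variable by a factor of 10 and shows that F_ML does not change:

import numpy as np

def f_ml(S, Sigma):
    # ML fitting function: ln|Sigma| + tr(S Sigma^-1) - ln|S| - p
    p = S.shape[0]
    _, logdet_S = np.linalg.slogdet(S)
    _, logdet_Sigma = np.linalg.slogdet(Sigma)
    return logdet_Sigma + np.trace(S @ np.linalg.inv(Sigma)) - logdet_S - p

# Made-up sample (S) and model-implied (Sigma) covariance matrices, p = 2
S = np.array([[2.0, 0.8], [0.8, 1.5]])
Sigma = np.array([[1.9, 0.7], [0.7, 1.6]])

# Rescale the first variable by 10 (a linear change of scale)
D = np.diag([10.0, 1.0])
print(f_ml(S, Sigma))                  # same value before ...
print(f_ml(D @ S @ D, D @ Sigma @ D))  # ... and after rescaling

Rescaling both S and Sigma by the same diagonal matrix D cancels algebraically in each term of F_ML, which is why the two printed values agree.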
When
a raw data file is analyzed, standard ML estimation assumes there are no
missing values. A special form of ML estimation available for raw data files
where some observations are missing at random was described earlier (Section 3.2).
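As a sketch of how that "full information" approach works under an assumed multivariate normal model, each case contributes a log-likelihood term computed from only the variables it has actually observed, so incomplete cases are retained rather than deleted (the data, mean vector, and covariance matrix below are hypothetical):

import numpy as np
from scipy.stats import multivariate_normal

def fiml_loglik(data, mu, Sigma):
    """Casewise (full-information) ML log-likelihood with missing data.

    Each row contributes a normal density term using only the subset
    of mu and Sigma for its observed (non-NaN) variables.
    """
    total = 0.0
    for row in data:
        obs = ~np.isnan(row)
        if not obs.any():
            continue  # a fully missing row contributes nothing
        total += multivariate_normal.logpdf(
            row[obs], mean=mu[obs], cov=Sigma[np.ix_(obs, obs)]
        )
    return total

# Hypothetical raw data with values missing at random (NaN)
data = np.array([
    [1.2, 0.4, np.nan],
    [0.9, np.nan, 2.1],
    [1.1, 0.5, 1.8],
])
mu = np.array([1.0, 0.5, 2.0])
Sigma = np.eye(3)
print(fiml_loglik(data, mu, Sigma))

In an actual analysis, mu and Sigma would be the model-implied moments, and the estimator would maximize this casewise log-likelihood over the model's free parameters.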