slm.Rd
Fits a simple learning model (SLM) for probabilistic knowledge structures by minimum discrepancy maximum likelihood estimation.
slm(K, N.R, method = c("MD", "ML", "MDML"), R = as.binmat(N.R),
beta = rep(0.1, nitems), eta = rep(0.1, nitems),
g = rep(0.1, nitems),
betafix = rep(NA, nitems), etafix = rep(NA, nitems),
betaequal = NULL, etaequal = NULL,
randinit = FALSE, incradius = 0,
tol = 1e-07, maxiter = 10000, zeropad = 16,
checkK = TRUE)
getSlmPK(g, K, Ko)
# S3 method for class 'slm'
print(x, P.Kshow = FALSE, parshow = TRUE,
digits = max(3, getOption("digits") - 2), ...)
K: a state-by-problem indicator matrix representing the knowledge space. An element is one if the problem is contained in the state, and zero otherwise.
N.R: a (named) vector of absolute frequencies of response patterns.
method: MD for minimum discrepancy estimation, ML for maximum likelihood estimation, MDML for minimum discrepancy maximum likelihood estimation.
R: a person-by-problem indicator matrix of unique response patterns. By default inferred from the names of N.R.
beta, eta, g: vectors of initial values for the error, guessing, and solvability parameters.
betafix, etafix: vectors of fixed error and guessing parameter values; NA indicates a free parameter.
betaequal, etaequal: lists of vectors of problem indices; each vector represents an equivalence class: it contains the indices of problems whose error or guessing parameters are constrained to be equal. (See Examples and the illustrative call after this list.)
randinit: logical; if TRUE, initial parameter values are sampled uniformly with constraints. (See Details.)
incradius: include knowledge states whose distance from the minimum discrepant states is less than or equal to incradius.
tol: tolerance, stopping criterion for the iteration.
maxiter: the maximum number of iterations.
zeropad: the maximum number of items for which an incomplete N.R vector is completed and padded with zeros.
checkK: logical; if TRUE, K is checked for well-gradedness.
Ko: a state-by-problem indicator matrix representing the outer fringe of each knowledge state in K; typically the result of a call to getKFringe.
x: an object of class slm, typically the result of a call to slm.
P.Kshow: logical, should the estimated distribution of knowledge states be printed?
parshow: logical, should the estimates of the error, guessing, and solvability parameters be printed?
digits: a non-null value specifies the minimum number of significant digits to be printed in values.
...: additional arguments passed to other methods.
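As an illustration of the constraint arguments, a hypothetical call (not part of the package examples) might fix all guessing parameters at zero and constrain the careless error parameters of the first two problems to be equal; it assumes a five-item space such as DoignonFalmagne7 used in the Examples below:

## Hypothetical constrained fit: all eta fixed at 0, beta equal for problems 1 and 2
slm(K, N.R, method = "ML",
    etafix = rep(0, 5),            # fix all five guessing parameters at zero
    betaequal = list(c(1, 2)))     # one equivalence class: problems 1 and 2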
See Doignon and Falmagne (1999) for details on the simple learning model (SLM) for probabilistic knowledge structures. The model requires a well-graded knowledge space K.

An slm object inherits from class blim. See blim for details on the function arguments. The helper function getSlmPK returns the distribution of knowledge states P.K.
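In the SLM, the solvability parameters determine the state probabilities: the probability of a state is the product of g_q over the problems q contained in the state, times the product of 1 - g_q over the problems in its outer fringe (Doignon & Falmagne, 1999). The following minimal sketch computes this distribution from the 0/1 indicator matrices K and Ko; the package's getSlmPK may differ in implementation details:

## Sketch of the SLM state distribution, assuming 0 < g < 1:
## P(K) = prod_{q in K} g_q * prod_{q in outer fringe of K} (1 - g_q)
slmPK_sketch <- function(g, K, Ko) {
  drop(exp(K %*% log(g) + Ko %*% log(1 - g)))
}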
An object of class slm and blim. It contains all components of a blim object. In addition, it includes:

g: the vector of estimates of the solvability parameters.
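For example, assuming the component is named g as above, the solvability estimates can be read off a fitted model directly:

m <- slm(K, N.R, method = "MD")
m$g    # estimated solvability parameters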
Doignon, J.-P., & Falmagne, J.-C. (1999). Knowledge spaces. Berlin: Springer.
data(DoignonFalmagne7)
K <- DoignonFalmagne7$K # well-graded knowledge space
N.R <- DoignonFalmagne7$N.R # frequencies of response patterns
## Fit simple learning model (SLM) by different methods
slm(K, N.R, method = "MD") # minimum discrepancy estimation
#>
#> Simple learning models (SLMs)
#>
#> Number of knowledge states: 9
#> Number of response patterns: 32
#> Number of respondents: 1000
#>
#> Method: Minimum discrepancy
#> Number of iterations: 1
#> Goodness of fit (2 log likelihood ratio):
#> G2(16) = 125.19, p = 0
#>
#> Minimum discrepancy distribution (mean = 0.254)
#> 0 1 2
#> 760 226 14
#>
#> Mean number of errors (total = 0.25582)
#> careless error lucky guess
#> 0.16253301 0.09328252
#>
#> Error, guessing, and solvability parameters
#> beta eta g
#> a 0.092089 0.000001 0.79633
#> b 0.088720 0.000001 0.78900
#> c 0.045058 0.040640 0.67817
#> d 0.000001 0.040858 0.51355
#> e 0.000001 0.054722 0.53343
#>
slm(K, N.R, method = "ML") # maximum likelihood estimation by EM
#>
#> Simple learning models (SLMs)
#>
#> Number of knowledge states: 9
#> Number of response patterns: 32
#> Number of respondents: 1000
#>
#> Method: Maximum likelihood
#> Number of iterations: 751
#> Goodness of fit (2 log likelihood ratio):
#> G2(16) = 27.525, p = 0.036008
#>
#> Minimum discrepancy distribution (mean = 0.254)
#> 0 1 2
#> 760 226 14
#>
#> Mean number of errors (total = 0.43083)
#> careless error lucky guess
#> 0.41317121 0.01765381
#>
#> Error, guessing, and solvability parameters
#> beta eta g
#> a 0.1775600 0.0000010 0.87909
#> b 0.1739680 0.0000010 0.87043
#> c 0.1832340 0.0000010 0.73031
#> d 0.0054960 0.0000104 0.48765
#> e 0.0045130 0.0240918 0.47873
#>
slm(K, N.R, method = "MDML") # MDML estimation
#>
#> Simple learning models (SLMs)
#>
#> Number of knowledge states: 9
#> Number of response patterns: 32
#> Number of respondents: 1000
#>
#> Method: Minimum discrepancy maximum likelihood
#> Number of iterations: 138
#> Goodness of fit (2 log likelihood ratio):
#> G2(16) = 116.74, p = 0
#>
#> Minimum discrepancy distribution (mean = 0.254)
#> 0 1 2
#> 760 226 14
#>
#> Mean number of errors (total = 0.25522)
#> careless error lucky guess
#> 0.18088170 0.07433556
#>
#> Error, guessing, and solvability parameters
#> beta eta g
#> a 0.105502 0.000001 0.80827
#> b 0.100550 0.000001 0.79938
#> c 0.035363 0.022415 0.66652
#> d 0.000001 0.023790 0.51486
#> e 0.000001 0.058872 0.51965
#>
## Compare SLM and BLIM
m1 <- slm(K, N.R, method = "ML")
m2 <- blim(K, N.R, method = "ML")
anova(m1, m2)
#> Analysis of Deviance Table
#>
#> Model 1: m1
#> Model 2: m2
#> Resid. Df Resid. Dev Df Deviance Pr(>Chi)
#> 1 16 27.525
#> 2 13 12.623 3 14.902 0.001903 **
#> ---
#> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
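The anova table is an ordinary likelihood ratio test between the nested models; it can be reproduced from the G2 statistics and residual degrees of freedom printed above:

## Likelihood ratio test by hand, using values from the anova table
pchisq(27.525 - 12.623, df = 16 - 13, lower.tail = FALSE)  # 0.001903, as in Pr(>Chi)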