The starting point is not a topic of contention; research on CCT primarily investigates the application of different methods for the other three components. ''Note:'' The termination criterion and scoring procedure are separate in CAT, but the same in CCT because the test is terminated when a classification is made. Therefore, there are five components that must be specified to design a CAT.
An introduction to CCT is found in Thompson (2007)<ref>{{cite journal|last=Thompson
== How it works ==
Two approaches are available for the psychometric model of a CCT: [[classical test theory]] (CTT) and [[item response theory]] (IRT). Classical test theory assumes a state model because it is applied by estimating item parameters separately for samples of examinees known to fall in each category. For instance, several hundred "masters" and several hundred "non-masters" might be sampled to estimate the difficulty and discrimination of each item, but doing so requires that a distinct group of examinees in each category can be readily identified. IRT, on the other hand, assumes a trait model: the knowledge or ability measured by the test is a continuum. The classification groups must be defined more or less arbitrarily along that continuum, such as by a cutscore that demarcates masters from non-masters, but the specification of item parameters assumes a trait model.
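As an illustration only, the following sketch evaluates a single three-parameter logistic (3PL) item at several ability levels and labels each level against an arbitrary cutscore on the theta continuum; the item parameters and cutscore are hypothetical values chosen for demonstration, not drawn from any cited study.

<syntaxhighlight lang="python">
import math

def p_correct(theta, a, b, c):
    """3PL item response function: probability of a correct response
    from an examinee at ability theta (trait model)."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical item parameters: discrimination a, difficulty b, pseudo-guessing c
a, b, c = 1.2, 0.0, 0.2
cutscore = -0.5  # arbitrary point on the theta continuum separating the groups

for theta in (-2.0, -0.5, 1.0):
    group = "master" if theta >= cutscore else "non-master"
    print(f"theta = {theta:+.1f}  P(correct) = {p_correct(theta, a, b, c):.3f}  -> {group}")
</syntaxhighlight>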
There are advantages and disadvantages to each. CTT offers greater conceptual simplicity. More importantly, CTT requires a smaller sample of examinees to calibrate the item parameters eventually used in designing the CCT, which makes it useful for smaller testing programs. See Frick (1992)<ref>{{cite journal|last=Frick
== Starting point ==
== Item selection ==
In a CCT, items are selected for administration throughout the test, unlike the traditional method of administering a fixed set of items to all examinees. While this is usually done item by item, it can also be done with groups of items known as [[testlet]]s (Luecht & Nungester, 1996;<ref>{{cite journal|last1=Luecht
Methods of item selection fall into two categories: cutscore-based and estimate-based. Cutscore-based methods (also known as sequential selection) maximize the [[information]] provided by the item at the cutscore (or cutscores, if there is more than one), regardless of the examinee's ability. Estimate-based methods (also known as adaptive selection) maximize information at the current estimate of examinee ability, regardless of the ___location of the cutscore. Both work efficiently, but the efficiency depends in part on the termination criterion employed. Because the [[sequential probability ratio test]] only evaluates probabilities near the cutscore, cutscore-based item selection is more appropriate for it. Because the [[confidence interval]] termination criterion is centered on the examinee's ability estimate, estimate-based item selection is more appropriate there: the test makes a classification when the confidence interval is small enough to lie completely above or below the cutscore (see below). The confidence interval is smaller when the standard error of measurement is smaller, and the standard error of measurement is smaller when there is more information at the examinee's theta level.
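The sketch below illustrates both selection strategies and a confidence-interval termination check under simple assumptions: a small hypothetical pool of 3PL items, an arbitrary cutscore, and a 95% interval. None of the values correspond to an operational testing program; the code only demonstrates the logic described above.

<syntaxhighlight lang="python">
import math

def p3pl(theta, a, b, c):
    """3PL probability of a correct response at ability theta."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

def item_info(theta, a, b, c):
    """Fisher information of a 3PL item at theta."""
    p = p3pl(theta, a, b, c)
    return a * a * ((1.0 - p) / p) * ((p - c) / (1.0 - c)) ** 2

def select_item(pool, administered, target_theta):
    """Pick the unused item with maximum information at target_theta.
    Cutscore-based selection passes the cutscore; estimate-based
    selection passes the current ability estimate."""
    candidates = [i for i in range(len(pool)) if i not in administered]
    return max(candidates, key=lambda i: item_info(target_theta, *pool[i]))

def ci_classification(pool, administered, theta_hat, cutscore, z=1.96):
    """Confidence-interval termination: classify once the interval around
    the ability estimate lies entirely above or below the cutscore."""
    total_info = sum(item_info(theta_hat, *pool[i]) for i in administered)
    sem = 1.0 / math.sqrt(total_info)          # standard error of measurement
    lower, upper = theta_hat - z * sem, theta_hat + z * sem
    if lower > cutscore:
        return "master"
    if upper < cutscore:
        return "non-master"
    return None  # interval still straddles the cutscore: keep testing

# Hypothetical 3PL item pool: (a, b, c) for each item
pool = [(0.9, 1.5, 0.20), (1.6, 0.0, 0.20), (1.6, -1.0, 0.20), (1.1, 0.6, 0.25)]
cutscore = 0.0         # arbitrary cutscore on theta
theta_hat = -1.0       # current ability estimate for this examinee
administered = {0, 3}  # items already given

print("cutscore-based pick :", select_item(pool, administered, cutscore))   # item informative near the cutscore
print("estimate-based pick :", select_item(pool, administered, theta_hat))  # item informative near theta_hat
print("decision so far     :", ci_classification(pool, administered, theta_hat, cutscore) or "keep testing")
</syntaxhighlight>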
*Frick, T. W. (1992). Computerized adaptive mastery tests as expert systems. Journal of Educational Computing Research, 8, 187–213.
*Huang, C.-Y., Kalohn, J.C., Lin, C.-J., and Spray, J. (2000). Estimating Item Parameters from Classical Indices for Item Pool Development with a Computerized Classification Test. (Research Report 2000–4). Iowa City, IA: ACT, Inc.
*Jacobs-Cassuto, M.S. (2005). A Comparison of Adaptive Mastery Testing Using Testlets With the 3-Parameter Logistic Model. Unpublished doctoral dissertation, University of Minnesota, Minneapolis, MN.
*Jiao, H., & Lau, A. C. (2003). The Effects of Model Misfit in Computerized Classification Test. Paper presented at the annual meeting of the National Council of Educational Measurement, Chicago, IL, April 2003.
*Jiao, H., Wang, S., & Lau, C. A. (2004). An Investigation of Two Combination Procedures of SPRT for Three-category Classification Decisions in Computerized Classification Test. Paper presented at the annual meeting of the American Educational Research Association, San Antonio, April 2004.