Computerized classification test

A '''computerized classification test''' ('''CCT''') is a [[Test (student assessment)|test]] administered by [[computer]] for the purpose of [[Classification rule|classifying]] examinees. The most common CCT is a mastery test, where the test classifies examinees as "Pass" or "Fail," but the term also includes tests that classify examinees into more than two categories. While the term may generally be considered to refer to all computer-administered tests for classification, it is usually used to refer to tests that are interactively administered or of variable length, similar to [[computerized adaptive testing]] (CAT). Like CAT, variable-length CCTs can accomplish the goal of the test (accurate classification) with a fraction of the number of items used in a conventional fixed-form test.
 
A CCT requires several components:
*An item bank calibrated with a psychometric model
*A starting point
*An item selection algorithm
*A termination criterion and scoring procedure

Practical considerations in computer-based testing, including classification testing, are discussed by Parshall, Spray, Kalohn, and Davey (2006).<ref>Parshall, C. G., Spray, J. A., Kalohn, J. C., & Davey, T. (2006). Practical considerations in computer-based testing. New York: Springer.</ref> A bibliography of published CCT research is found below.
 
== How it works ==
A CCT is very similar to a CAT. Items are administered one at a time to an examinee. After the examinee responds to the item, the [[computer]] scores it and determines whether the examinee can yet be classified. If so, the test is terminated and the examinee is classified. If not, another item is administered. This process repeats until the examinee is classified or another ending point is satisfied (all items in the bank have been administered, or a maximum test length is reached).
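
This loop can be sketched in a few lines of Python. The sketch below is only an illustrative outline, not any particular published algorithm; the callables <code>select_item</code>, <code>get_scored_response</code>, and <code>classify</code> are hypothetical stand-ins for the item selection, scoring, and termination components of a real CCT.

<syntaxhighlight lang="python">
def administer_cct(item_bank, select_item, get_scored_response, classify, max_items=50):
    """Sketch of a variable-length CCT administration loop."""
    administered, scores = [], []
    while len(administered) < min(max_items, len(item_bank)):
        item = select_item(item_bank, administered, scores)   # choose the next item
        scores.append(get_scored_response(item))              # administer and score it
        administered.append(item)
        decision = classify(scores)                           # e.g. "Pass", "Fail", or None
        if decision is not None:                              # enough evidence: stop and classify
            return decision
    # Ending point reached: bank exhausted or maximum test length administered.
    return classify(scores)  # in practice a forced-decision rule would apply here
</syntaxhighlight>

In practice, <code>classify</code> would implement a termination criterion such as the sequential probability ratio test or a confidence interval around the examinee's trait estimate.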
 
== Psychometric model ==
Two approaches are available for the psychometric model of a CCT: [[classical test theory]] (CTT) and [[item response theory]] (IRT). Classical test theory assumes a state model because it is applied by estimating item parameters from a sample of examinees known to be in each category. For instance, several hundred "masters" and several hundred "nonmasters" might be sampled to estimate the difficulty and discrimination of each item, but doing so requires that a distinct set of people in each group be easily identifiable. IRT, on the other hand, assumes a trait model: the knowledge or ability measured by the test is a continuum. The classification groups must be more or less arbitrarily defined along that continuum, such as by a cutscore that demarcates masters from nonmasters, but the specification of item parameters assumes a trait model.
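
As a simple illustration of the trait-model view, the probability of a correct response under a two-parameter logistic IRT model can be computed and an examinee classified against a cutscore on theta. The model choice, cutscore, and parameter values below are generic examples, not taken from any particular test.

<syntaxhighlight lang="python">
import math

def irt_2pl_probability(theta, a, b):
    """P(correct) under the two-parameter logistic IRT model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

CUTSCORE = 0.0  # illustrative cutscore on the theta continuum

def classify_by_theta(theta_estimate):
    """Demarcate masters and nonmasters with a cutscore on the trait."""
    return "master" if theta_estimate >= CUTSCORE else "nonmaster"

# An item with discrimination a = 1.2 and difficulty b = -0.5:
print(round(irt_2pl_probability(theta=0.0, a=1.2, b=-0.5), 2))  # 0.65
print(classify_by_theta(0.3))                                    # master
</syntaxhighlight>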
 
== Starting point ==
A CCT must have a specified starting point to enable certain algorithms. If the [[sequential probability ratio test]] is used as the termination criterion, it implicitly assumes a starting ratio of 1.0 (equal probability of the examinee being a master or nonmaster). If the termination criterion is a [[confidence interval]] approach, a starting point on theta must be specified. Usually this is 0.0, the center of the [[Probability distribution|distribution]], but it could also be drawn at random from a certain distribution if the parameters of the examinee distribution are known. Previous information regarding an individual examinee, such as their score the last time they took the test (if re-taking), may also be used.
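
The options described above might be combined as in the hypothetical helper below; the defaults (a starting theta of 0.0 and a starting likelihood ratio of 1.0) follow the conventions mentioned in this section, while the function name and arguments are purely illustrative.

<syntaxhighlight lang="python">
import math
import random

def starting_theta(prior_score=None, population_mean=None, population_sd=None):
    """Choose a starting point on theta for a confidence-interval CCT."""
    if prior_score is not None:          # reuse the score from a previous attempt
        return prior_score
    if population_mean is not None and population_sd is not None:
        return random.gauss(population_mean, population_sd)  # draw from a known distribution
    return 0.0                           # default: center of the distribution

# The sequential probability ratio test implicitly starts with a ratio of 1.0,
# i.e. a log-likelihood ratio of zero (master and nonmaster equally likely).
starting_log_likelihood_ratio = math.log(1.0)  # = 0.0
</syntaxhighlight>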
 
== Item selection ==
In a CCT, items are selected for administration throughout the test, unlike the traditional method of administering a fixed set of items to all examinees. While this is usually done item by item, it can also be done in groups of items known as [[testlet]]s (Luecht & Nungester, 1998;<ref>Luecht, R. M., & Nungester, R. J. (1998). Some practical examples of computer-adaptive sequential testing. Journal of Educational Measurement, 35, 229-249.</ref> Vos & Glas, 2000<ref>Vos, H. J., & Glas, C. A. W. (2000). Testlet-based adaptive mastery testing. In van der Linden, W. J., & Glas, C. A. W. (Eds.), Computerized Adaptive Testing: Theory and Practice.</ref>).
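
This article does not prescribe a particular selection rule. One rule often used in classification testing, shown here only as an illustration, is to administer the not-yet-administered item with the greatest Fisher information at a fixed reference point such as the cutscore (or at a provisional trait estimate). The item bank format and parameter values below are hypothetical.

<syntaxhighlight lang="python">
import math

def information_2pl(theta, a, b):
    """Fisher information of a two-parameter logistic item at theta: a^2 * P * (1 - P)."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def select_next_item(item_bank, administered_ids, reference_theta):
    """Pick the unadministered item that is most informative at reference_theta."""
    candidates = [item for item in item_bank if item[0] not in administered_ids]
    return max(candidates, key=lambda item: information_2pl(reference_theta, item[1], item[2]))

# Hypothetical bank of (item_id, discrimination a, difficulty b) tuples:
bank = [("item1", 0.8, -1.0), ("item2", 1.4, 0.1), ("item3", 1.0, 1.2)]
print(select_next_item(bank, administered_ids={"item3"}, reference_theta=0.0))
# ('item2', 1.4, 0.1) -- the most informative remaining item at the cutscore
</syntaxhighlight>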

== References ==
{{Reflist}}
 
== Bibliography of CCT research ==
{{refbegin}}
*Armitage, P. (1950). Sequential analysis with more than two alternative hypotheses, and its relation to discriminant function analysis. [[Journal of the Royal Statistical Society]], 12, 137-144.