{{short description|Educational assessment and evaluation technique}}
{{orphan|date=April 2013}}
'''Linear-on-the-fly testing''', often referred to as '''LOFT''', is a method of delivering educational or professional examinations. Competing methods include traditional linear fixed-form delivery and [[computerized adaptive testing]]. LOFT is a compromise between the two: it seeks to preserve the equivalence of the item sets seen by different examinees, as in fixed-form delivery, while reducing item exposure and enhancing test security.
 
Fixed-form delivery, which most people are familiar with, entails the testing organization determining one or several fixed sets of items to be delivered together. For example, suppose the test contains 100 items and the organization wishes to publish two forms. Two forms are published, each with a fixed set of 100 items, some of which overlap between the forms to enable [[equating]], as sketched below. Every examinee who takes the test is given one of the two forms.
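
The following is a minimal sketch of the fixed-form arrangement just described, under assumed numbers: a hypothetical 180-item bank and 20 common anchor items. The identifiers and counts are illustrative only, not drawn from any real testing program.

<syntaxhighlight lang="python">
# A minimal sketch of fixed-form assembly, assuming a hypothetical 180-item bank.
# Each form contains 100 items; 20 shared "anchor" items enable equating between forms.
bank = list(range(180))           # hypothetical item identifiers
anchors = bank[:20]               # common items that appear on both forms
form_1 = anchors + bank[20:100]   # 20 anchors + 80 unique items
form_2 = anchors + bank[100:180]  # 20 anchors + 80 different unique items

assert len(form_1) == len(form_2) == 100
assert set(form_1) & set(form_2) == set(anchors)  # the overlap used for equating
</syntaxhighlight>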
 
If the exam is high-volume, meaning it is taken by a large number of examinees, the security of the examination could be in jeopardy: many of the test items would become well known in the population of examinees. To offset this, more forms are needed; with eight forms, for example, far fewer examinees see each item.
 
LOFT takes this approach to its extreme and attempts to construct a unique exam for each candidate, within the constraints of the testing program. Rather than publishing a fixed set of items, a large pool of items is delivered to the [[computer]] on which the examinee takes the exam, along with a computer program that [[pseudorandom generator|pseudo-randomly selects]] items so that every examinee receives a test that is equivalent with respect to content and [[statistical]] characteristics,<ref>{{cite journal |last=Luecht |first=R. M. |year=2005 |title=Some Useful Cost-Benefit Criteria for Evaluating Computer-based Test Delivery Models and Systems |journal=Journal of Applied Testing Technology |volume=7 |issue=2 |url=http://www.testpublishers.org/Documents/JATT2005_rev_Criteria4CBT_RMLuecht_Apr2005.pdf |access-date=2006-12-01 |url-status=dead |archive-url=https://web.archive.org/web/20060927064953/http://www.testpublishers.org/Documents/JATT2005_rev_Criteria4CBT_RMLuecht_Apr2005.pdf |archive-date=2006-09-27 }}</ref> although composed of a different set of items. This is usually done with [[item response theory]].
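
A minimal sketch of this kind of on-the-fly assembly is shown below, assuming a hypothetical item pool, content blueprint, and IRT difficulty (''b'') parameters; the function and variable names are illustrative, not taken from any operational system. Each examinee-specific seed drives a pseudo-random draw that satisfies the same content blueprint and approximately matches the same target mean difficulty, so the resulting forms differ in their items but are comparable in content and statistical characteristics.

<syntaxhighlight lang="python">
import random

# Hypothetical item pool: each item carries a content area and an IRT difficulty (b) parameter.
# The pool size, blueprint, target, and tolerance are illustrative assumptions.
pool_rng = random.Random(42)
ITEM_POOL = [
    {"id": i,
     "content": ("algebra", "geometry", "statistics")[i % 3],
     "difficulty": pool_rng.gauss(0.0, 1.0)}
    for i in range(600)
]

BLUEPRINT = {"algebra": 40, "geometry": 30, "statistics": 30}  # items required per content area


def assemble_loft_form(pool, blueprint, seed, target_mean_b=0.0, tolerance=0.15, max_tries=200):
    """Pseudo-randomly assemble one 100-item form that meets the content blueprint
    and approximately matches a target mean IRT difficulty."""
    rng = random.Random(seed)  # an examinee-specific seed drives the pseudo-random selection
    for _ in range(max_tries):
        form = []
        for area, count in blueprint.items():
            candidates = [item for item in pool if item["content"] == area]
            form.extend(rng.sample(candidates, count))
        mean_b = sum(item["difficulty"] for item in form) / len(form)
        if abs(mean_b - target_mean_b) <= tolerance:
            return form
    raise RuntimeError("no form met the statistical target; enlarge the pool or relax the tolerance")


# Two examinees receive different, but statistically comparable, forms.
form_a = assemble_loft_form(ITEM_POOL, BLUEPRINT, seed=1001)
form_b = assemble_loft_form(ITEM_POOL, BLUEPRINT, seed=1002)
shared = {item["id"] for item in form_a} & {item["id"] for item in form_b}  # typically only partial overlap
</syntaxhighlight>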
 
==References==
{{Reflist}}
 
[[Category:Psychometrics]]
[[Category:Educational assessment and evaluation]]
[[Category:Educational psychology]]
[[Category:School examinations]]
[[Category:Computer-based testing]]
[[ar:اختبار خطي سريع]]