Computer user satisfaction

{{technical|date=January 2025}}
 
'''Computer user satisfaction (CUS)''' is the systematic [[measurement]] and [[evaluation]] of how well a [[computer system]] or [[Computer application|application]] fulfills the needs and expectations of individual users. The study of computer user satisfaction examines how interactions with [[technology]] can be improved by adapting systems to users' [[Psychology|psychological]] preferences and tendencies.
 
Evaluating [[user satisfaction]] helps gauge product stability, track industry trends, and measure overall user contentment.
 
Fields such as [[user interface]] (UI) [[User interface design|design]] and [[user experience]] (UX) [[User experience design|design]] focus on the direct interactions people have with a system. While UI and UX design often rely on separate [[Methodology|methodologies]], they share the goal of making systems more intuitive, efficient, and appealing.
 
==The problem of defining computer user satisfaction==
 
== The CUS and the UIS ==
Bailey and Pearson's 39-factor Computer User Satisfaction (CUS) questionnaire and the User Information Satisfaction (UIS) instrument are both multi-quality surveys; that is, each asks respondents to rank or rate a number of separate qualities. Bailey and Pearson asked participants to judge 39 qualities, rating each on five scales: the first four captured favorability, and the fifth ranked the quality's importance. From the importance rankings, the researchers found that their [[Sampling (statistics)|sample]] of users rated as most important "[[Accuracy and precision|accuracy]], [[Reliability (statistics)|reliability]], timeliness, relevancy, and confidence". The qualities of least importance were "feelings of control, volume of output, vendor support, degree of training, and organizational position of [[Electronic data processing|EDP]] (the electronic data processing or computing department)". However, the CUS requires 39 × 5 = 195 responses.<ref>{{cite journal |last1=Bailey |first1=James E. |last2=Pearson |first2=Sammy W. |date=May 1983 |title=Development of a Tool for Measuring and Analyzing Computer User Satisfaction |journal=Management Science |volume=29 |issue=5 |pages=530–545 |doi=10.1287/mnsc.29.5.530}}</ref> Ives, Olson, and Baroudi, amongst others, argued that so many responses could result in [[Attrition (research)|attrition]]: the longer a questionnaire, the less likely respondents are to complete and return it.<ref>{{cite journal |last1=Ives |first1=Blake |last2=Olson |first2=Margrethe H. |last3=Baroudi |first3=Jack J. |date=1 October 1983 |title=The measurement of user information satisfaction |journal=Commun. ACM |volume=26 |issue=10 |pages=785–793 |doi=10.1145/358413.358430}}</ref> Attrition can reduce sample sizes and distort results, as those who return long questionnaires may have different [[Trait theory|psychological traits]] from those who do not. Ives and colleagues developed the User Information Satisfaction (UIS) instrument to address this problem. The UIS requires the respondent to rate only 13 metrics, each on two scales, yielding 26 individual responses. More recently, Islam, Mervi, and Käköla argued that measuring CUS in industry settings remains difficult because response rates are often low, and that a simpler version of the CUS measurement method is therefore needed.<ref>{{cite journal
|last1 = Islam
|first1 = A.K.M. Najmul
}}</ref>
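
For illustration, the response burden of the two instruments and a simple aggregate score can be sketched in a few lines of code. The 7-point scale range, the importance-weighted aggregation, and all names below are assumptions made for this example, not the scoring rules defined by Bailey and Pearson or by Ives, Olson, and Baroudi.

<syntaxhighlight lang="python">
# Illustrative sketch only: number of individual responses required by the
# two instruments, plus a toy importance-weighted aggregate. Scale range,
# weighting scheme, and names are assumptions for the example.
from statistics import mean

CUS_FACTORS = 39   # Bailey and Pearson's factor count
CUS_SCALES = 5     # four favorability scales plus one importance scale
UIS_METRICS = 13   # Ives, Olson, and Baroudi's metric count
UIS_SCALES = 2     # two scales per metric

print(CUS_FACTORS * CUS_SCALES)   # 195 responses per CUS questionnaire
print(UIS_METRICS * UIS_SCALES)   # 26 responses per UIS questionnaire


def weighted_satisfaction(responses):
    """Importance-weighted mean of per-factor favorability ratings.

    `responses` maps a factor name to a (ratings, importance) pair, where
    `ratings` is a list of favorability scores and `importance` is a weight.
    """
    weighted = [mean(ratings) * importance
                for ratings, importance in responses.values()]
    weights = [importance for _, importance in responses.values()]
    return sum(weighted) / sum(weights)


# Hypothetical ratings for three of the most highly ranked qualities.
example = {
    "accuracy":    ([6, 7, 6, 6], 7),
    "reliability": ([5, 6, 5, 6], 7),
    "timeliness":  ([4, 5, 4, 4], 6),
}
print(round(weighted_satisfaction(example), 2))  # 5.39
</syntaxhighlight>

The sketch only makes the paragraph's arithmetic concrete: 195 individual responses per CUS questionnaire against 26 per UIS questionnaire.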
 
==Grounding in theory==
Another difficulty with most of these surveys is their lack of a foundation in [[Psychology|psychological]] theory. Exceptions to this were the model of web site design success developed by Zhang and von Dran<ref>{{cite journal
|last1 = Zhang
|first1 = Ping
|last2 = von Dran
|date = 2000
|pages = 1253–1268
|doi = 10.1002/1097-4571(2000)9999:9999%3C::AID-ASI1039%3E3.0.CO;2-O
}}</ref> and the measure of CUS with e-portals developed by Cheung and Lee.<ref>C. M. K. Cheung and M. K. O. Lee, "The Asymmetric Effect of Website Attribute Performance on Satisfaction: An Empirical Study," ''Proceedings of the 38th Annual Hawaii International Conference on System Sciences'', Big Island, HI, USA, 2005, pp. 175c-175c, doi: 10.1109/HICSS.2005.585.</ref> Both of these models drew on Herzberg's two-factor theory of [[motivation]].<ref>{{cite book |last1=Herzberg |first1=Frederick |title=Work and the nature of man |date=1972 |publisher=Staples Press |isbn=978-0286620734 |edition=reprint |___location=London}}</ref> Consequently, their qualities were designed to measure both "satisfiers" and "hygiene factors". However, Herzberg's theory has been criticized for being too vague, particularly in its failure to distinguish between terms such as motivation, job motivation, and job satisfaction.<ref>{{cite journal
|last1 = Islam
|first1 = A.K.M. Najmul
}}</ref>
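
The two-factor structure can be illustrated with a short sketch that scores "hygiene factors" and "satisfiers" separately rather than as a single index. The item names, their grouping, and the rating scale below are hypothetical; the published models are statistical survey instruments, not programs.

<syntaxhighlight lang="python">
# Illustrative sketch only: questionnaire items are partitioned into "hygiene
# factors" and "satisfiers" and each group is averaged separately, in the
# spirit of Herzberg's two-factor distinction. The item names, grouping, and
# 1-7 ratings are hypothetical, not taken from Zhang and von Dran's or
# Cheung and Lee's models.
from statistics import mean

ITEM_GROUPS = {
    "hygiene factors": ["technical adequacy", "navigation", "privacy"],
    "satisfiers": ["enjoyment", "cognitive engagement", "credibility"],
}

ratings = {  # one hypothetical respondent, 1-7 scale
    "technical adequacy": 6, "navigation": 5, "privacy": 7,
    "enjoyment": 4, "cognitive engagement": 3, "credibility": 6,
}

for group, items in ITEM_GROUPS.items():
    # Report the two factor types separately rather than as a single index.
    print(group, round(mean(ratings[item] for item in items), 2))
</syntaxhighlight>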
 
==Future developments==
Currently, [[scholar]]s and practitioners are experimenting with other measurement methods and with further refinements to the definition of CUS. Some are replacing structured questionnaires with unstructured ones, in which the respondent is simply asked to write down or dictate everything about a system that either satisfies or dissatisfies them. One problem with this approach, however, is that it tends not to yield [[Quantitative research|quantitative]] results, making comparisons and [[Statistical inference|statistical analysis]] difficult.
 
== References ==