{{technical|date=January 2025}}
'''Computer user satisfaction''' is the systematic [[measurement]] and [[evaluation]] of how well a [[computer system]] or [[Computer application|application]] fulfills the needs and expectations of individual users. It is sometimes referred to as '''System Satisfaction''', especially when examining broader user groups or entire [[customer]] bases, and is also known simply as '''User Satisfaction''' in other contexts. These related terms can vary in scope, survey depth, [[anonymity]], and in how the findings are applied or translated into value.
 
Evaluating [[user satisfaction]] helps gauge product stability, track industry trends, and measure overall user [[contentment]]. These insights are valuable for [[Strategic management|business strategy]], [[market research]], and [[sales forecasting]], as they enable [[Organization|organizations]] to preempt dissatisfaction and protect their [[market share]] and revenue by addressing issues before they escalate.
 
Fields like [[User Interface]] (UI) [[User interface design|Design]] and [[User experience|User Experience]] (UX) [[User experience design|Design]] focus on the direct interactions people have with a system. While UI and UX often rely on separate [[Methodology|methodologies]], they share the goal of making systems more intuitive, efficient, and appealing. By emphasizing these [[design principles]] and incorporating user insights, developers can create systems that meet real-world needs and encourage people to keep using them.
 
== User Compliance ==
Using these findings, [[Product design|product designers]], [[Business analysis|business analysts]], and [[Software engineering|software engineers]] anticipate change and prevent user loss by identifying missing features, shifts in requirements, general improvements, or needed corrections. ''[[End user|End-user]] computing satisfaction'' is also [[Psychology|psychological]], in that the findings can represent subjective views rather than objective truths. For example, the previous success or failure of a product impacts next-generation products. [[Organization|Organizations]] therefore attend both to the value their products deliver and to how that value is perceived, preserving what users value.
 
This often creates a [[Positive feedback|positive feedback loop]] and a sense of agency for the user. These surveys assist in steering the system towards stable product sector positions. This is important because the effects of satisfied or dissatisfied users can be difficult to change as time goes on. Real-world examples are [[End user|end-user]] loyalty in the premium [[mobile device]] segment, the perception of dependable [[Automotive industry|automotive]] brands, or the association of lower-quality products with certain countries based on [[Stereotype|stereotypes]]. In such cases, the [[Corrective and preventive action|corrective action]] is not made on a product level; rather, it is handled in another business process via [[change management]], which aims to educate, inform, and promote the system with the users, swaying opinions that could not be altered by amending products.
 
Satisfaction measurements are often used in industry, [[manufacturing]], or other large [[Organization|organizations]] to gauge internal user satisfaction. The results can motivate internal changes to improve or correct existing [[Business process|business processes]], for example by discontinuing the use of some systems or adopting more applicable solutions. They can also feed into [[Job satisfaction|employee satisfaction]], which is important in promoting productive [[Work environment|work environments]].
 
Doll and Torkzadeh's (1988) definition of user satisfaction is ''the opinion of the user about a specific [[Application software|computer application]], which they use''. In a broader sense, the definition can be extended to user satisfaction with any computer-based [[Electronics|electronic]] appliance. The term "user" can also refer to a collective: [[Individual|individuals]], groups, and entire [[Organization|organizations]]. It is sometimes used to refer to the account or profile of an operator, as when the owner, [[Distribution (marketing)|distributor]], or [[Developer (software)|developer]] of a system refers to the "users" of a [[Network topology|network]] or system.
 
== The CUS and the UIS ==
Bailey and Pearson's (1983) 39-factor ''Computer User Satisfaction (CUS)'' questionnaire and its derivative, the ''User Information Satisfaction (UIS)'' short-form of Baroudi, Olson, and Ives, are typical of instruments that one might term 'factor-based'. They consist of lists of factors, each of which the respondent is asked to rate on one or more multiple-point scales. Bailey and Pearson's CUS asked for five ratings for each of 39 factors. The first four scales were for quality ratings, and the fifth was an importance rating. From the fifth rating of each factor, they found that their [[Sampling (statistics)|sample]] of users rated as most important: ''[[Accuracy and precision|accuracy]]'', ''[[Reliability (statistics)|reliability]]'', ''[[Modernity|timeliness]]'', ''[[Relevance|relevancy]]'', and ''[[confidence]] in the system''. The factors of least importance were found to be ''feelings of control'', ''volume of output'', ''vendor support'', ''degree of training'', and ''organizational position of [[Electronic data processing|EDP]]'' (the electronic data processing, or computing, department). However, the CUS requires 39 x 5 = 195 individual seven-point scale responses.<ref>{{cite journal |last1=Bailey |first1=James E. |last2=Pearson |first2=Sammy W. |date=May 1983 |title=Development of a Tool for Measuring and Analyzing Computer User Satisfaction |journal=Management Science |volume=29 |issue=5 |pages=530–545 |doi=10.1287/mnsc.29.5.530 }}</ref> Ives, Olson, and Baroudi (1983), amongst others, thought that so many responses could result in errors of [[Attrition (research)|attrition]],<ref>{{cite journal |last1=Ives |first1=Blake |last2=Olson |first2=Margrethe H. |last3=Baroudi |first3=Jack J. |date=1 October 1983 |title=The measurement of user information satisfaction |journal=Communications of the ACM |volume=26 |issue=10 |pages=785–793 |doi=10.1145/358413.358430 }}</ref> that is, the respondent's failure to return the questionnaire, or the respondent's increasing carelessness while filling out a long form. In [[psychometrics]], such errors not only reduce sample sizes but can also distort the results, as those who return long questionnaires, properly completed, may have different [[Trait theory|psychological traits]] from those who do not. Ives et al. thus developed the UIS, which requires the respondent to rate only 13 factors and so remains in significant use. Two seven-point scales are provided per factor (each measuring a quality), requiring 26 individual responses. However, Islam, Mervi, and Käköla (2010) argued that measuring user satisfaction in industry settings is difficult, as the response rate often remains low; a simpler version of the user satisfaction measurement instrument is thus necessary.
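The factor-and-scale structure described above can be illustrated with a minimal sketch. This example assumes a simple importance-weighted average of each factor's quality ratings; the scoring procedures of the actual CUS and UIS instruments differ in detail, and the factor names and ratings below are hypothetical.

```python
# Sketch of scoring one respondent on a factor-based satisfaction
# instrument. Each factor receives several quality ratings plus one
# importance rating, all on seven-point scales (as in the CUS).

def factor_score(quality_ratings, importance):
    """Mean quality rating for one factor, weighted by its importance."""
    mean_quality = sum(quality_ratings) / len(quality_ratings)
    return importance * mean_quality

def satisfaction_score(responses):
    """Importance-weighted mean quality across all rated factors.

    `responses` maps factor name -> (quality ratings, importance rating).
    The result lies on the same 1..7 scale as the quality ratings."""
    weighted = sum(factor_score(q, imp) for q, imp in responses.values())
    total_importance = sum(imp for _, imp in responses.values())
    return weighted / total_importance

# One hypothetical respondent rating three factors
# (four quality ratings and one importance rating per factor):
responses = {
    "accuracy":    ([6, 7, 6, 6], 7),
    "reliability": ([5, 5, 6, 5], 6),
    "timeliness":  ([3, 4, 3, 4], 4),
}
print(round(satisfaction_score(responses), 2))  # 5.25
```

Weighting by importance means that dissatisfaction with a factor the respondent considers critical (here, accuracy) depresses the overall score more than dissatisfaction with a marginal one.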
 
==The problem with the dating of factors==
An early [[criticism]] of these measures was that the factors date as [[computer technology]] evolves and changes. This suggested the need for updates and led to a sequence of other factor-based instruments. Doll and Torkzadeh (1988), for example, produced a factor-based instrument for a new type of user emerging at the time, called an "[[end user]]." They identified end-users as users who tend to interact with a [[Interface (computing)|computer interface]] only, while previously users interacted with developers and operational staff as well. McKinney, Yoon, and Zahedi (2002) developed a model and instruments for measuring web-customer satisfaction during the information phase.<ref>{{cite journal |last1=McKinney |first1=Vicki |last2=Yoon |first2=Kanghyun |last3=Zahedi |first3=Fatemeh "Mariam" |date=September 2002 |title=The Measurement of Web-Customer Satisfaction: An Expectation and Disconfirmation Approach |journal=Information Systems Research |volume=13 |issue=3 |pages=296–315 |doi=10.1287/isre.13.3.296.76 }}</ref> Cheung and Lee (2005), in their development of an instrument to measure user satisfaction with e-portals, based their instrument on that of McKinney, Yoon, and Zahedi (2002), which in turn was based primarily on instruments from prior studies.
 
==The problem of defining ''user satisfaction''==
As none of the [[instruments]] in common use rigorously define their construct of user satisfaction, some scholars, such as Cheyney, Mann, and Amoroso (1986), have called for more research on the factors that influence the success of end-user [[computing]]. However, little subsequent research has shed new light on the matter. All factor-based instruments run the risk of including factors irrelevant to the respondent while omitting some that may be highly significant to them. Needless to say, this is further exacerbated by the ongoing changes in [[information technology]].
 
In the literature, there are two terms for user satisfaction: "user satisfaction" and "user information satisfaction" (UIS), which are used interchangeably. According to Doll and Torkzadeh (1988), "user satisfaction" is defined as the opinion of the user about a specific [[computer application]] that they use. Ives et al. (1983) defined "user information satisfaction" as "the extent to which users believe the information system available to them meets their information requirements." Other terms for user information satisfaction are "system acceptance" (Igersheim, 1976), "perceived usefulness" (Larcker and Lessig, 1980), "MIS appreciation" (Swanson, 1974), and "feelings about information systems" (Maish, 1979). Ang and Koh (1997) have described user information satisfaction (UIS) as "a perceptual or subjective measure of system success." This means that user information satisfaction may differ in meaning and significance dependent on the author's definition. In other words, users who are equally satisfied with a system according to one definition and measure may not be similarly satisfied according to another, and vice versa.
 
Several studies have investigated whether or not certain factors influence UIS, such as those by Yaverbaum (1988) and Ang and Soh (1997). Yaverbaum's (1988) study found that people who use their computers irregularly tend to be more satisfied than regular users. Ang and Soh's (1997) research, on the other hand, could find no evidence that usage frequency affects UIS.
 
Mullany, Tan, and Gallupe (2006) claim that user satisfaction is chiefly influenced by prior experience with the system or an analogue. Conversely, motivation, they suggest, is based on beliefs about the future use of the system (Mullany et al., 2006).
 
User information satisfaction remains an important area of research despite the lack of a clear consensus as to how, or even if, it can be defined and gauged. Further, the presence of conflicting views can be interpreted to imply that due to the complex nature of human psychology, UIS cannot be predicted by singular variables, leaving an open problem for future study as to what method could be predictive of UIS.
The large number of studies over the past few decades, as cited in this article, attests to this continued interest despite somewhat [[Contradiction|contradictory]] results.
 
==A lack of theoretical underpinning==
Another difficulty with most of these instruments is their lack of underpinning by [[Psychology|psychological]] or managerial theory. Exceptions were the model of web site design success developed by Zhang and von Dran (2000) and the measure of user satisfaction with e-portals developed by Cheung and Lee (2005). Both of these models drew upon Herzberg's two-factor theory of [[motivation]].<ref>{{cite book |last1=Herzberg |first1=Frederick |title=Work and the nature of man |date=1972 |publisher=Staples Press |___location=London |isbn=978-0286620734 |edition=reprint }}</ref> Consequently, their factors were designed to measure both 'satisfiers' and 'hygiene factors'. However, Herzberg's theory itself is criticized for failing to distinguish adequately between the terms ''motivation'', ''job motivation'', ''job satisfaction'', and so on. Islam (2011) found that the sources of dissatisfaction differ from the sources of satisfaction: environmental factors (e.g., system quality) were more critical in causing dissatisfaction, while outcome-specific factors (e.g., perceived usefulness) were more critical in causing satisfaction.
 
==Cognitive style==
A study by Mullany (2006) showed that during the life of a [[system]], user satisfaction will on average increase over time as the users' experience with the system grows. While the study overall showed only a weak link between the gap in the users' and analysts' [[cognitive style]] (measured using the KAI scales) and user satisfaction, a more significant link was found in the regions of 85 and 652 days into the systems' usage. This link shows that a large absolute gap between user and analyst cognitive styles often yields a higher rate of user dissatisfaction than a smaller gap. Furthermore, an analyst with a more adaptive cognitive style than the user at the early and late stages (approximately days 85 and 652) of system usage tends to reduce user dissatisfaction.
 
Mullany, Tan, and Gallupe (2006) devised an instrument, the System Satisfaction Schedule (SSS), which utilizes almost exclusively user-generated factors and so avoids the problem of the dating of factors. Also aligning themselves with Herzberg, these authors argue that the perceived usefulness (or otherwise) of tools of the trade is contextually related, and so such tools are special cases of hygiene factors. They consequently define [[user satisfaction]] as the absence of user dissatisfaction and complaint, as assessed by users who have had at least some experience of using the system. In other words, satisfaction is based on memories of the past use of a system; motivation, conversely, is based on beliefs about its future use (Mullany et al., 2007, p. 464).
 
==Future developments==
Currently, some [[Scholar|scholars]] and practitioners are experimenting with other measurement methods and further refinements of the definitions of ''satisfaction'' and ''user satisfaction''. Others are replacing structured questionnaires with unstructured ones, in which the respondent is asked simply to write down or dictate all the factors of a system that either satisfy or dissatisfy them. One problem with this approach, however, is that such instruments tend not to yield quantitative results, making comparisons and [[Statistical inference|statistical analysis]] difficult. Also, if scholars cannot agree on the precise meaning of the term ''satisfaction'', respondents are highly unlikely to respond consistently to such instruments. Some newer instruments contain a mix of structured and unstructured items.
 
==References==
{{Reflist}}