== Computer User Satisfaction ==
In concept, it sets out to measure and record, at a granular level, individual operator satisfaction. Other similar or related concepts are ''System Satisfaction'' (spanning multiple users or even customers) and simply ''[[Customer satisfaction|User Satisfaction]]'', the main differences being the target audience, [[Survey data collection|survey]] depth, anonymity, how the results may be used, and how the findings are converted into value. It may be designed to measure product segment stability, industry trends, or how content users are. This is of value for [[Business model|Business Strategies]], [[Market research|Market Research]], [[Sales forecasting|Sales Forecasting]] and much more. It proactively prevents [[dissatisfaction]], which could otherwise manifest in user loss to competitors through product migration and the loss of a substantial user base and profit. Fields which deal more directly with the layer of computer systems that users interact with are [[User interface design|User Interface Design and Experience]], often referred to as UI & UX; these are [[User experience evaluation|measured]] with different tools, but are important facets of modern system design, development, and engineering.
 
== User Compliance ==
Using findings, [[Product design|Product Designers]], [[Business analysis|Business Analysts]], and [[Software engineering|Software Engineers]] anticipate change and prevent user loss by identifying missing features, shifts in requirements, general improvements, or corrections. ''[[end user|End User]] Computing Satisfaction'' is also [[Psychology|psychological]], in that the findings can sometimes represent subjective views rather than objective truths. For example, previous success or failure impacts next-generation products. Organizations emphasize value in how products, and opinions thereof, manifest, preserving what is valued and caring how this is perceived.
 
This often creates a [[Positive feedback|positive feedback loop]] and a sense of agency for the user. These surveys help steer the system towards stable product sector positions. This is important because the effects of satisfied or dissatisfied users can become difficult to change as time goes on. Real-world examples are end-user loyalty in the premium mobile device segment, opinion and perception of dependable automotive brands, or the stereotype that lower-quality products originate from certain countries. In such cases, the corrective action is not made on a product level; rather, it is handled in another business process via [[Change management|Change Management]], which aims to educate, inform and promote the system with the users, swaying opinions which could not otherwise be altered by amending the product.
 
Satisfaction measurements are often used in industry, [[manufacturing]], or other large organizations to gauge internal user satisfaction. This can be used to motivate internal changes that improve or correct existing business processes, whether by discontinuing the use of certain systems or by prompting the adoption of more applicable solutions. It can also be based on employee satisfaction, which is important for promoting a productive work environment.
 
'''Doll''' and '''Torkzadeh's''' (1988) definition of user satisfaction is ''the opinion of the user about a specific computer application, which they use''. In a broader sense, the definition of user satisfaction can be extended to satisfaction with any computer-based [[electronics|electronic]] appliance. The term can also be lifted out of the individual context: "user" may refer to a collective, from individuals to groups and across organizations. "User" is sometimes also used for the account or profile of an operator, and this is not excluded from the context, as can be seen when reference is made to the "users" of a [[Network topology|network]] or of a system by the owner of the system, or by its [[Distribution (marketing)|distributor]] or [[Developer (software)|developer]].
 
==The CUS and the UIS==
Bailey and Pearson's (1983) 39-factor ''Computer User Satisfaction'' (CUS) [[questionnaire]] and its derivative, the ''User Information Satisfaction'' (UIS) short-form of Baroudi, Olson and Ives, are typical of instruments which one might term 'factor-based'. They consist of lists of factors, each of which the respondent is asked to rate on one or more multiple-point scales. Bailey and Pearson's CUS asked for five ratings for each of 39 factors. The first four scales were for quality ratings and the fifth was an importance rating. From the fifth rating of each factor, they found that their [[Sampling (statistics)|sample]] of users rated as most important: ''accuracy'', ''reliability'', ''timeliness'', ''relevancy'' and ''confidence in the system''. The factors of least importance were found to be ''feelings of control'', ''volume of output'', ''vendor support'', ''degree of training'', and ''organisational position of EDP'' (the electronic data processing, or computing department). However, the CUS requires 39 × 5 = 195 individual seven-point scale responses.<ref>{{cite journal |last1=Bailey |first1=James E. |last2=Pearson |first2=Sammy W. |title=Development of a Tool for Measuring and Analyzing Computer User Satisfaction |journal=Management Science |date=May 1983 |volume=29 |issue=5 |pages=530–545 |doi=10.1287/mnsc.29.5.530}}</ref> Ives, Olson and Baroudi (1983), amongst others, thought that so many responses could result in errors of attrition,<ref>{{cite journal |last1=Ives |first1=Blake |last2=Olson |first2=Margrethe H. |last3=Baroudi |first3=Jack J. |title=The measurement of user information satisfaction |journal=Commun. ACM |date=1 October 1983 |volume=26 |issue=10 |pages=785–793 |doi=10.1145/358413.358430}}</ref> that is, respondents failing to return the questionnaire, or becoming increasingly careless as they fill in a long form. In [[psychometrics]], such errors not only reduce sample sizes but can also distort the results, as those who return long questionnaires properly completed may have different psychological traits from those who do not. Ives et al. thus developed the UIS, which only requires the respondent to rate 13 factors and remains in significant use. Two seven-point scales are provided per factor (each for a quality), requiring 26 individual responses. However, in a later article, Islam, Mervi, and Käköla (2010) argued that measuring user satisfaction in industry settings is difficult, as the response rate often remains low; thus, a simpler version of the user satisfaction measurement instrument is necessary.
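To illustrate how such factor-based instruments combine ratings, the following sketch computes a single importance-weighted satisfaction score from per-factor responses. It is an illustration only, not Bailey and Pearson's published scoring procedure; the function name, the weighting scheme and the example data are hypothetical.

<syntaxhighlight lang="python">
# Illustrative sketch only: aggregating factor-based satisfaction ratings
# into one importance-weighted score. It mirrors the CUS structure described
# above (four quality ratings plus one importance rating per factor), but the
# weighting scheme is an assumption, not the authors' published formula.
from statistics import mean

def weighted_satisfaction(responses):
    """responses: list of (quality_ratings, importance) pairs, where
    quality_ratings is a list of four 1-7 ratings and importance is a
    single 1-7 rating for that factor."""
    total_weight = sum(importance for _, importance in responses)
    weighted_sum = sum(mean(quality) * importance
                       for quality, importance in responses)
    return weighted_sum / total_weight  # overall score, still on the 1-7 scale

# Hypothetical example using three of the 39 factors:
example = [
    ([6, 7, 6, 6], 7),  # accuracy: high quality, high importance
    ([5, 5, 6, 5], 7),  # reliability
    ([4, 4, 3, 4], 5),  # timeliness: weaker ratings, lower importance
]
print(round(weighted_satisfaction(example), 2))  # prints 5.22
</syntaxhighlight>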
 
==The problem with the dating of factors==
An early [[criticism]] of these measures was that the factors date as [[computer technology]] evolves and changes. This suggested the need for updates and led to a sequence of other factor-based instruments. Doll and Torkzadeh (1988), for example, produced a factor-based instrument for a new type of user emerging at the time, called an [[end user]]. They identified end-users as users who tend to interact with a [[Interface (computing)|computer interface]] only, whereas previously users also interacted with developers and operational staff. McKinney, Yoon and Zahedi (2002) developed a model and instruments for measuring web-customer satisfaction during the information phase.<ref>{{cite journal |last1=McKinney |first1=Vicki |last2=Yoon |first2=Kanghyun |last3=Zahedi |first3=Fatemeh "Mariam" |title=The Measurement of Web-Customer Satisfaction: An Expectation and Disconfirmation Approach |journal=Information Systems Research |date=September 2002 |volume=13 |issue=3 |pages=296–315 |doi=10.1287/isre.13.3.296.76}}</ref> Cheung and Lee (2005), in their development of an instrument to measure user satisfaction with e-portals, based their instrument on that of McKinney, Yoon and Zahedi (2002), which in turn was based primarily on instruments from prior studies.
 
==The problem of defining ''user satisfaction''==
Mullany, Tan, and Gallupe (2006) propose a definition of user satisfaction, claiming that it is based on memories of the past use of a system. Conversely, motivation, they suggest, is based on beliefs about the future use of the system (Mullany et al., 2006).
 
The large number of studies over the past few decades, as cited in this article, shows that user information satisfaction remains an important research topic, despite somewhat [[Contradiction|contradictory]] results.
 
==A lack of theoretical underpinning==