{{Short description|How satisfied a user is with a computer program}}
{{technical|date=January 2025}}
'''Computer user satisfaction''' ('''CUS''') is the systematic [[measurement]] and [[evaluation]] of how well a [[computer system]] or [[computer application|application]] fulfills the needs and expectations of individual users. It is sometimes referred to as '''system satisfaction''', especially when examining broader user groups or entire [[customer]] bases, and in other contexts simply as '''user satisfaction'''; these related terms vary in scope, survey depth, [[anonymity]], and in how the findings are applied or translated into value. The measurement of computer user satisfaction studies how interactions with [[technology]] can be improved by adapting it to [[Psychology|psychological]] preferences and tendencies.

Evaluating [[user satisfaction]] helps gauge product stability, track industry trends, and measure overall user contentment. These insights are valuable for [[Strategic management|business strategy]], [[market research]], and [[sales forecasting]], as they enable [[Organization|organizations]] to preempt dissatisfaction and protect their [[market share]] and revenue by addressing issues before they escalate.
 
Fields like [[User Interface]] (UI) [[User interface design|Design]] and [[User experience|User Experience]] (UX) [[User experience design|Design]] focus on the direct interactions people have with a system. While UI and UX often rely on separate [[Methodology|methodologies]], they share the goal of making systems more intuitive, efficient, and appealing. By emphasizing these [[design principles]] and incorporating user insights, developers can create systems that meet real-world needs and encourage people to keep using them.
 
==The Problem of Defining Computer User Satisfaction==
In the literature, there are a variety of terms for computer user satisfaction (CUS): "user satisfaction", "user information satisfaction" (UIS), "system acceptance",<ref>{{Cite book |last=Igersheim |first=Roy H. |chapter=Managerial response to an information system |date=1976-06-07 |title=Proceedings of the June 7–10, 1976, national computer conference and exposition on – AFIPS '76 |chapter-url=https://dl.acm.org/doi/10.1145/1499799.1499918 |___location=New York, NY |publisher=Association for Computing Machinery |pages=877–882 |doi=10.1145/1499799.1499918 |isbn=978-1-4503-7917-5}}</ref> "perceived usefulness",<ref>{{Cite journal |last1=Larcker |first1=David F. |last2=Lessig |first2=V. Parker |date=1980 |title=Perceived Usefulness of Information: A Psychometric Examination |url=https://onlinelibrary.wiley.com/doi/10.1111/j.1540-5915.1980.tb01130.x |journal=Decision Sciences |language=en |volume=11 |issue=1 |pages=121–134 |doi=10.1111/j.1540-5915.1980.tb01130.x |issn=1540-5915|url-access=subscription }}</ref> "MIS appreciation",<ref>{{Cite journal |last=Swanson |first=E. Burton |date=1 October 1974 |title=Management Information Systems: Appreciation and Involvement |url=https://pubsonline.informs.org/doi/10.1287/mnsc.21.2.178 |journal=Management Science |volume=21 |issue=2 |pages=178–188 |doi=10.1287/mnsc.21.2.178 |issn=0025-1909 |via=InformsPubsOnLine|url-access=subscription }}</ref> "feelings about the information system",<ref>{{Cite journal |last=Maish |first=Alexander M. |date=March 1979 |title=A User's Behavior toward His MIS |url=https://www.jstor.org/stable/249147 |url-status=dead |journal=MIS Quarterly |volume=3 |issue=1 |pages=39–52 |doi=10.2307/249147 |jstor=249147 |issn=0276-7783 |url-access=subscription }}</ref> and "system satisfaction".<ref>{{Cite journal |last1=Khalifa |first1=Mohamed |last2=Liu |first2=Vanessa |date=2004-01-01 |title=The State of Research on Information System Satisfaction |url=https://aisel.aisnet.org/jitta/vol5/iss4/4/ |journal=Journal of Information Technology Theory and Application |volume=5 |issue=4 |issn=1532-4516}}</ref> The concept is referred to here as CUS or simply user satisfaction. Ang and Koh (1997) describe user information satisfaction as "a perceptual or subjective measure of system success."<ref>{{cite journal
|last1 = Ang
|first1 = James
|last2 = Koh
|first2 = Stella
|date = June 1997
|title = Exploring the relationships between user information satisfaction and job satisfaction
|journal = International Journal of Information Management
|volume = 17
|issue = 3
|pages = 169–177
|doi = 10.1016/S0268-4012(96)00059-X
}}</ref> This means that CUS may differ in meaning and significance depending on the author's definition. In other words, users who are satisfied with a system according to one definition and measure may not be satisfied according to another, and vice versa.
 
According to Doll and Torkzadeh, CUS is defined as ''the opinion of the user about a specific computer application that they use''. Ives and colleagues defined CUS as "the extent to which users believe the information system available to them meets their information requirements."<ref name="DollTorkzadeh1988">{{cite journal
|last1 = Doll
|first1 = William J.
|last2 = Torkzadeh
|first2 = Gholamreza
|date = June 1988
|title = The Measurement of End-User Computing Satisfaction
|journal = MIS Quarterly
|volume = 12
|issue = 2
|pages = 259–274
|doi = 10.2307/248851
|jstor = 248851
}}</ref> In a broader sense, the definition of user satisfaction can be extended to satisfaction with any computer-based [[electronics|electronic]] appliance. The term "user" can also denote a collective, from [[Individual|individuals]] to groups and entire [[Organization|organizations]], or the account or profile of an operator, as when the owner, [[Distribution (marketing)|distributor]], or [[Developer (software)|developer]] of a system refers to the "users" of a [[Network topology|network]] or of the system itself.
 
Several studies have investigated whether certain factors influence CUS. Yaverbaum's study found that people who use their computers irregularly tend to be more satisfied than regular users.<ref>{{Cite journal |last=Yaverbaum |first=Gayle J. |date=1988 |title=Critical Factors in the User Environment: An Experimental Study of Users, Organizations and Tasks |url=https://www.jstor.org/stable/248807 |journal=MIS Quarterly |publication-date=March 1988 |volume=12 |issue=1 |pages=75–88 |doi=10.2307/248807 |jstor=248807 |issn=0276-7783 |access-date=8 January 2025 |url-access=subscription }}</ref> Ang and Soh's research, on the other hand, found no evidence that the frequency of computer usage affects user information satisfaction.<ref>{{cite journal |last1=Ang |first1=James |last2=Soh |first2=Pekhooi |date=October 1997 |title=User information satisfaction, job satisfaction and computer background: An exploratory study |journal=Information & Management |volume=32 |issue=5 |pages=255–266 |doi=10.1016/S0378-7206(97)00030-X}}</ref> The large number of studies over the past few decades shows that user information satisfaction remains an important research topic despite such contradictory results.
 
Mullany, Tan, and Gallupe claim that CUS is chiefly influenced by prior experience with the system or an analogue. Conversely, motivation, they suggest, is based on beliefs about the future use of the system.<ref name=":1" />
 
== Applications ==
Using findings from CUS, [[product design]]ers, [[Business analysis|business analysts]], and [[Software engineering|software engineers]] anticipate change and prevent user loss by identifying missing features, shifts in requirements, general improvements, or corrections. ''[[End user]] computing satisfaction'' also has a [[Psychology|psychological]] dimension: previous success or failure shapes how next-generation products are perceived, and [[Organization|organizations]] care about how their products, and opinions of them, are perceived as much as about the products themselves. Acting on such findings often creates a [[Positive feedback|positive feedback loop]] and gives users a sense of agency, helping to steer the system towards a stable position in its product sector. This matters because the attitudes of satisfied or dissatisfied users can become difficult to change as time goes on; real-world examples include [[End user|end-user]] loyalty in the premium [[mobile device]] segment, the perception of dependable [[Automotive industry|automotive]] brands, and [[Stereotype|stereotypes]] about lower-quality products from certain countries. In such cases, the [[Corrective and preventive action|corrective action]] is not taken at the product level; rather, it is handled in another business process via [[change management]], which aims to educate, inform, and promote the system to its users, swaying opinions that could not otherwise be altered by amending the product.
 
Satisfaction measurements are most often employed by companies or organizations to design their products to be more appealing to consumers, identify practices that could be streamlined,<ref>{{Cite web |title=What Is a Customer Satisfaction Survey? |url=https://www.salesforce.com/service/customer-service-incident-management/customer-satisfaction-survey/#surveys-are-important |access-date=2025-01-08 |website=Salesforce |language=en}}</ref> harvest personal data to sell,<ref>{{Cite web |date=16 January 2024 |title=Privacy Policy |url=https://www.govexec.com/about/privacy-policy/ |access-date=2025-01-08 |website=Government Executive |at=Under the section "How We Collect Data," the subsection "Other Information you Choose to Provide" applies to the subsection "For Other Purposes" under the section "Who We Share Your Data With." |language=en}}</ref> and determine the highest price they can set for the least quality.<ref>{{Cite web |title=How to use Pricing Surveys |url=https://www.surveymonkey.com/market-research/resources/pricing-surveys/ |access-date=2025-01-08 |website=SurveyMonkey |language=en-US}}</ref> For example, based on satisfaction metrics, a company may decide to discontinue support for an unpopular service. Satisfaction measurements are also used internally in industry, [[manufacturing]], and other large organizations to motivate changes to existing [[Business process|business processes]], such as discontinuing a system or adopting a more suitable solution. CUS may also be extended to [[Job satisfaction|employee satisfaction]], which is important for promoting a productive [[Work environment|work environment]] and for which similar motivations arise. As an ulterior motive, CUS surveys may also serve to pacify the group being surveyed, as they give it an outlet to vent frustrations.
 
In this context, the term "user" can refer both to the person who uses a product and to the person who uses a device to access that product.<ref name="DollTorkzadeh1988" />
 
== The CUS and the UIS ==
Bailey and Pearson's 39-factor ''Computer User Satisfaction (CUS) questionnaire'' and its derivative, the ''User Information Satisfaction (UIS)'' short form of Baroudi, Olson, and Ives, are typical factor-based surveys: each presents a list of qualities that the respondent is asked to rank or rate on one or more multiple-point scales. Bailey and Pearson asked participants to judge 39 qualities on five scales each. The first four scales were favorability ratings, and the fifth was an importance rating. From the importance ratings, the researchers found that their [[Sampling (statistics)|sample]] of users rated as most important "[[Accuracy and precision|accuracy]], [[Reliability (statistics)|reliability]], [[Modernity|timeliness]], [[Relevance|relevancy]], and [[confidence]] in the system". The qualities of least importance were "feelings of control, volume of output, vendor support, degree of training, and organizational position of [[Electronic data processing|EDP]]" (the electronic data processing or computing department). However, the CUS requires 39 x 5 = 195 individual seven-point scale responses.<ref>{{cite journal |last1=Bailey |first1=James E. |last2=Pearson |first2=Sammy W. |date=May 1983 |title=Development of a Tool for Measuring and Analyzing Computer User Satisfaction |journal=Management Science |volume=29 |issue=5 |pages=530–545 |doi=10.1287/mnsc.29.5.530}}</ref> Ives, Olson, and Baroudi (1983), amongst others, thought that so many responses could result in errors of [[Attrition (research)|attrition]], since respondents' failure to return the questionnaire correlates with the length of the survey.<ref>{{cite journal |last1=Ives |first1=Blake |last2=Olson |first2=Margrethe H. |last3=Baroudi |first3=Jack J. |date=October 1983 |title=The measurement of user information satisfaction |journal=Communications of the ACM |volume=26 |issue=10 |pages=785–793 |doi=10.1145/358413.358430}}</ref> In [[psychometrics]], such errors not only reduce sample sizes but can also distort results, as those who return long questionnaires may have differing [[Trait theory|psychological traits]] from those who do not. Ives and colleagues therefore developed the UIS, which requires the respondent to rate only 13 factors on two seven-point scales each, yielding 26 individual responses. More recently, Islam, Mervi, and Käköla (2010) argued that measuring user satisfaction in industry settings is difficult because response rates often remain low, so a simpler measurement instrument is needed.<ref>{{cite journal
|last1 = Islam
|first1 = A.K.M. Najmul
|date = 2010
|journal = AMCIS 2010 Proceedings
|url = https://aisel.aisnet.org/amcis2010/287
}}</ref>
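
The arithmetic behind these factor-based instruments can be illustrated with a short sketch. It is an illustrative model only, not the actual Bailey and Pearson or UIS scoring procedure, and the helper names are invented for the example: a questionnaire is treated as a list of factors, each rated on a few seven-point scales plus an importance rating, from which the total response burden and a simple importance-weighted score are computed.

<syntaxhighlight lang="python">
from dataclasses import dataclass

@dataclass
class Factor:
    name: str
    ratings: list[int]   # favorability ratings on 1-7 scales
    importance: int      # importance rating, also on a 1-7 scale here

def total_responses(num_factors: int, scales_per_factor: int) -> int:
    """Number of individual scale responses one respondent must give."""
    return num_factors * scales_per_factor

def weighted_score(factors: list[Factor]) -> float:
    """Importance-weighted mean of the favorability ratings (illustrative only)."""
    weighted = sum(f.importance * (sum(f.ratings) / len(f.ratings)) for f in factors)
    return weighted / sum(f.importance for f in factors)

# Bailey and Pearson's CUS: 39 factors x 5 scales = 195 responses per respondent.
print(total_responses(39, 5))   # 195
# The shorter UIS: 13 factors x 2 scales = 26 responses.
print(total_responses(13, 2))   # 26

sample = [Factor("accuracy", [6, 7, 6, 7], 7),
          Factor("vendor support", [4, 5, 4, 4], 2)]
print(round(weighted_score(sample), 2))  # 6.0, pulled towards the heavily weighted "accuracy"
</syntaxhighlight>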
 
==The Problem With Dating of Metrics==
An early criticism of these measures was that surveys become outdated as [[computer technology]] evolves, which led to a sequence of newer metric-based surveys. Doll and Torkzadeh, for example, produced a metric-based survey for the "[[end user]]", whom they identified as a user who tends to interact with a [[Interface (computing)|computer interface]] only, whereas earlier users also interacted with developers and operational staff.<ref name="DollTorkzadeh1988" /> McKinney, Yoon, and Zahedi developed a model and survey for measuring web customer satisfaction.<ref>{{cite journal |last1=McKinney |first1=Vicki |last2=Yoon |first2=Kanghyun |last3=Zahedi |first3=Fatemeh "Mariam" |date=September 2002 |title=The Measurement of Web-Customer Satisfaction: An Expectation and Disconfirmation Approach |journal=Information Systems Research |volume=13 |issue=3 |pages=296–315 |doi=10.1287/isre.13.3.296.76}}</ref> Cheung and Lee, in developing an instrument to measure user satisfaction with e-portals, in turn based their instrument on that of McKinney, Yoon, and Zahedi.<ref name="CheungLee2005">{{cite book |last1=Cheung |first1=C.M.K. |last2=Lee |first2=M.K.O. |chapter=The Asymmetric Effect of Website Attribute Performance on Satisfaction: An Empirical Study |date=January 2005 |title=Proceedings of the 38th Annual Hawaii International Conference on System Sciences |pages=175–184 |doi=10.1109/HICSS.2005.585 |isbn=0-7695-2268-8}}</ref> Because none of the surveys in common use rigorously defines its construct of user satisfaction, scholars such as Cheney, Mann, and Amoroso have called for more research on the factors that influence the success of end-user [[computing]].<ref>{{cite journal |last1=Cheney |first1=Paul H. |last2=Mann |first2=Robert I. |last3=Amoroso |first3=Donald L. |date=1986 |title=Organizational Factors Affecting the Success of End-User Computing |journal=Journal of Management Information Systems |volume=3 |issue=1 |pages=65–80 |doi=10.1080/07421222.1986.11517755}}</ref> Any metric-based survey also risks including qualities irrelevant to a given respondent while omitting others that may be highly significant to them, a problem exacerbated by ongoing change in [[information technology]].
 
==Grounding in Theory==
Another difficulty with most of these surveys is their lack of a foundation in [[Psychology|psychological]] or managerial theory. Exceptions to this were the model of web site design success developed by Zhang and von Dran (2000)<ref>{{cite journal
|last1 = Zhang
|first1 = Ping
|last2 = von Dran
|first2 = Gisela M.
|date = 2000
|title = Satisfiers and dissatisfiers: A two-factor model for website design and evaluation
|journal = Journal of the American Society for Information Science
|volume = 51
|issue = 14
|pages = 1253–1268
|doi = 10.1002/1097-4571(2000)9999:9999<::AID-ASI1039>3.0.CO;2-O
}}</ref> and the measure of CUS with e-portals developed by Cheung and Lee.<ref name="CheungLee2005" /> Both of these models drew on Herzberg's two-factor theory of [[motivation]].<ref>{{cite book |last1=Herzberg |first1=Frederick |author-link=Frederick Herzberg |title=Work and the nature of man |date=1972 |publisher=Staples Press |___location=London |isbn=978-0286620734 |edition=reprint}}</ref> Consequently, their qualities were designed to measure both "satisfiers" and "hygiene factors". However, Herzberg's theory has been criticized for being too vague, particularly in its failure to distinguish adequately between terms such as ''motivation'', ''job motivation'', and ''job satisfaction''. Islam (2011) found that the sources of dissatisfaction differ from the sources of satisfaction: environmental factors (such as system quality) were more critical in causing dissatisfaction, while outcome-specific factors (such as perceived usefulness) were more critical in causing satisfaction.<ref>{{cite journal
|last1 = Islam
|first1 = A.K.M. Najmul
|date = 2011
}}</ref>
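
The two-factor design behind such instruments can be sketched as follows. This is an illustration of the general idea only, not the scoring procedure of Zhang and von Dran's or Cheung and Lee's instruments; the item names and category labels are invented for the example. Hygiene items feed a dissatisfaction index and motivator (satisfier) items feed a separate satisfaction index, rather than collapsing everything into a single score.

<syntaxhighlight lang="python">
from statistics import mean

responses = {
    # item name: (category, rating on a 1-7 scale)
    "system reliability":   ("hygiene",   2),
    "response time":        ("hygiene",   3),
    "perceived usefulness": ("motivator", 6),
    "enjoyment of use":     ("motivator", 5),
}

hygiene_ratings   = [r for cat, r in responses.values() if cat == "hygiene"]
motivator_ratings = [r for cat, r in responses.values() if cat == "motivator"]

# Low hygiene ratings indicate sources of dissatisfaction;
# high motivator ratings indicate sources of satisfaction.
dissatisfaction_index = 7 - mean(hygiene_ratings)   # higher = more dissatisfied
satisfaction_index    = mean(motivator_ratings)     # higher = more satisfied

print(f"dissatisfaction index: {dissatisfaction_index:.1f}")  # 4.5
print(f"satisfaction index:    {satisfaction_index:.1f}")     # 5.5
</syntaxhighlight>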
 
==Cognitive style==
A study by Mullany (2006) showed that, during the life of a [[system]], satisfaction from users will on average increase over time as the users' experience with the system grows.<ref name=":0">{{cite thesis |last=Mullany |first=Michael John |date=2006 |title=The Use of Analyst-User Cognitive Style Differentials to Predict Aspects of User Satisfaction with Information Systems |url=https://hdl.handle.net/10292/338 |degree=PhD |publisher=Auckland University of Technology}}</ref> The study found that users' [[cognitive style]] (their preferred approach to problem solving, measured using the KAI scales) was, on its own, only a weak predictor of actual CUS; the system developers who participated likewise showed no strong link between cognitive style and CUS. A more significant link was found between roughly 85 and 652 days into the system's usage: in this period, a large absolute gap between user and analyst cognitive styles tended to yield a higher rate of user dissatisfaction than a smaller gap, while an analyst with a more adaptive cognitive style than the user at the early and late stages of usage tended to reduce user dissatisfaction.<ref name=":0" /> This suggests that one's manner of thinking and one's attitude towards a particular product become increasingly connected over time; some researchers have hypothesized that familiarity with a system may cause users to mentally assimilate to accommodate that system.
 
Mullany, Tan, and Gallupe (2006) devised an instrument, the System Satisfaction Schedule (SSS), which utilizes factors generated by the users themselves and so avoids the problem of dating factors.<ref name=":0" /> Aligning themselves with Herzberg, these authors argue that the perceived usefulness (or otherwise) of the tools of one's trade is contextually related, and so is a special case of a hygiene factor. They consequently define user satisfaction as the absence of user dissatisfaction and complaint, as assessed by users who have had at least some experience of using the system: satisfaction is based on memories of past use of a system, whereas motivation is based on beliefs about its future use.<ref name=":1">{{cite journal |last1=Mullany |first1=Michael J. |last2=Tan |first2=Felix B. |last3=Gallupe |first3=R. Brent |date=August 2006 |title=The S-Statistic: a measure of user satisfaction based on Herzberg's theory of motivation |journal=ACIS 2006 Proceedings |url=https://aisel.aisnet.org/acis2006/86}}</ref>
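
The cognitive-style finding can be paraphrased as a small sketch. The 85 to 652 day window comes from the study summarized above, but the 20-point gap threshold and the function name are arbitrary illustrative choices, not values reported by Mullany:

<syntaxhighlight lang="python">
def dissatisfaction_risk(user_kai: int, analyst_kai: int, days_in_use: int) -> str:
    """Rough indicator: a larger absolute user-analyst cognitive-style gap
    suggests a higher risk of dissatisfaction inside the observed window."""
    gap = abs(user_kai - analyst_kai)
    if not 85 <= days_in_use <= 652:
        return "outside the window where the correlation was observed"
    return "higher risk" if gap > 20 else "lower risk"

print(dissatisfaction_risk(user_kai=110, analyst_kai=75, days_in_use=200))  # higher risk
print(dissatisfaction_risk(user_kai=95,  analyst_kai=90, days_in_use=200))  # lower risk
print(dissatisfaction_risk(user_kai=110, analyst_kai=75, days_in_use=30))   # outside the window
</syntaxhighlight>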
 
==Future developments==
Currently, some [[Scholar|scholars]] and practitioners are experimenting with other measurement methods and further refinements to the definition of CUS. Others are replacing structured questionnaires with unstructured ones, in which the respondent is asked simply to write down or dictate everything about a system that either satisfies or dissatisfies them. One problem with this approach, however, is that it tends not to yield [[Quantitative research|quantitative]] results, making comparisons and [[Statistical inference|statistical analysis]] difficult. Also, if scholars cannot agree on the precise meaning of the term ''satisfaction'', respondents will be highly unlikely to respond consistently to such instruments. Some newer instruments contain a mix of structured and unstructured items.
 
== References ==
{{Reflist}}
 
== Further reading ==
 
*{{cite journal
|last1 = Delone
|first1 = William H.
|last2 = McLean
|first2 = Ephraim R.
|date = Spring 2003
|title = The DeLone and McLean Model of Information Systems Success: A Ten-Year Update
|journal = Journal of Management Information Systems
|volume = 19
|issue = 4
|pages = 9–30
|doi = 10.1080/07421222.2003.11045748
}}
*{{cite journal
|last1 = Doll
|first1 = William J.
|last2 = Torkzadeh
|first2 = Gholamreza
|date = March 1991
|title = The Measurement of End-User Computing Satisfaction: Theoretical and Methodological Issues
|journal = MIS Quarterly
|volume = 15
|issue = 1
|pages = 5–10
|doi = 10.2307/249429
|jstor = 249429
}}
*{{cite book
|title=The Motivation to Work