Research
EUCS test–retest reliability in representational model decision support systems
Roger McHaney a,*, Ross Hightower b, Doug White c
a Department of Management, College of Business Administration, Kansas State University, Manhattan, KS 66506, USA
b College of Business Administration, University of Central Florida, P.O. Box 161991, Orlando, FL 32816-1991, USA
c University of Northern Colorado, College of Business Administration, Kepner Hall, Greeley, CO 80639, USA
Received 6 February 1998; accepted 28 January 1999
Abstract
A test–retest reliability study of an end-user computing satisfaction instrument was conducted. The instrument was distributed
to real-world representational decision support system users through a mail survey. One month later, follow-up surveys were
mailed asking the original respondents to again evaluate the same system. The data sets were compared and suggest that the
instrument is internally consistent and stable when applied to its users. © 1999 Elsevier Science B.V. All rights reserved.
Keywords: End-user computing satisfaction (EUCS); Computer simulation; Satisfaction
1. Introduction
The proliferation of information technology (IT)
and its importance in effective managerial decision
making has created a greater need for valid and
reliable evaluative instruments. While a number of
these instruments have been developed [7, 20] and
validated [13, 17, 18, 29, 37] using various techniques,
caution must be used in applying them to specific IT
areas outside those previously tested [9]. In addition,
many reliability tests were performed on relatively
small student groups; the members may differ from
their real-world counterparts [6, 19]. The present
article reports on the validity and test–retest reliability
of an end-user computing satisfaction instrument
when applied to real-world users of representational
model decision support systems (DSS) [1].
Researchers wishing to conduct studies assessing
success or failure of various information system
applications are faced with a dizzying array of choices
[3]. Delone and McLean [8] have organized over 180
articles according to six dimensions of success: system
quality, information quality, use, user satisfaction,
individual impact, and organizational impact.
Many of these studies have provided instruments,
measures, or techniques intended to measure or pre-
dict information system success. Much of the work
in this area has been in response to the lack of a widely
accepted dependent variable for the measurement
of IS success.
The identification of a satisfaction measure both
plagues and motivates IS researchers. Long ago, Keen
[23] listed issues of importance in the field of MIS and
included the identification of a dependent variable.
Information & Management 36 (1999) 109–119
*Corresponding author. Fax: +1-913-532-7479; e-mail: [email protected]
0378-7206/99/$ – see front matter © 1999 Elsevier Science B.V. All rights reserved.
PII: S0378-7206(99)00010-5
Delone and McLean more recently echoed the impor-
tance of this sentiment by stating, "...if information
systems research is to make a contribution to the world
of practice, a well-defined outcome measure (or mea-
sures) is essential...without a well-defined dependent
variable, much of I/S research is purely speculative."
Without a dependent variable, measurable by reli-
able instruments [21, 34], a meaningful comparison of
competing software packages, implementation
approaches, system attributes, and software features
becomes impossible. In spite of these problems,
progress toward the identification of a universal,
dependent IS success variable has been made. Yet, no single
standard has gained widespread acceptance in the IS
research community. Researchers have operationa-
lized dependent variables according to various criteria.
Delone and McLean suggested researchers might
eventually develop a single comprehensive instrument
to account for all dimensions of success. Until such an
instrument is developed, researchers studying a
specific instance of information technology must spend
time selecting and validating an appropriate measure.
Each potential surrogate for success must be assessed.
The level at which the output of an IS will be measured
must be set, the relevant aspects of the system must
be determined, and the researchers' beliefs regarding
selection of an appropriate surrogate for success
require consideration.
After a choice is made, the researcher must face the
possibility that the instrument may not prove ideal, so
an assessment of the appropriateness of the selected
measure must be made. This means taking extra care
to demonstrate the validity and reliability of the
instrument used in the new context.
Although a comprehensive, standard IS instrument
for success does not yet exist, several very respectable
measures are presently available and in use. Among
these is the Doll and Torkzadeh [9] instrument for
measuring end-user computing satisfaction (EUCS).
This consists of two components: ease of use, and
information product. The information product
component is operationalized through measures of
content, accuracy, format and timeliness [2]. These four
constructs, together with the ease of use variable,
comprise an instrument for end-user computing satis-
faction. This instrument is specifically designed to
work within the current end-user computing environ-
ment, consistent with current trends [30].
Prior reliability and validation tests of the EUCS
instrument include the original development of the
instrument by Doll and Torkzadeh. This study indicated
adequate reliability and validity across a variety
of applications in various industries. In a follow-up
study, Torkzadeh and Doll used the responses from
forty-one M.B.A. students familiar with various
applications to test the short-term (2 hour) and long-
term (2 week) stability of the instrument with test–
retest administrations. The results indicated that the
instrument is internally consistent and stable. Hen-
drickson et al. further extended the long-term (2
week) test–retest reliability of the instrument in a
single public institution where mainframe- and perso-
nal computer-based applications were considered.
This study investigated the instrument at two points
in time separated by two years. The initial test–retest
samples included 32 mainframe and 35 PC users. The
second test–retest sample relied on 22 mainframe and
22 PC users. Doll et al. [10] performed a confirmatory
factor analysis based on a sample of 409 respondents
from a variety of firms and applications.
Prior tests of EUCS have been encouraging but
suffer from one or more limitations. Tests have either
used student groups or groups of users within specific
firms. Students may not be good surrogates for real-
world users and results from a single organization may
not be generalizable [22]. In addition, most published
studies have concentrated on either reliability or
validity (e.g. Refs. [16, 26]), not both. This study
addresses these limitations by examining the reliabil-
ity and validity of EUCS using a single sample drawn
from a population using real-world systems in a wide
range of firms.
2. Testing instruments
2.1. Reliability of instruments
An intuitive approach to estimating the reliability of
an instrument is to assess the repeatability or consis-
tency of collected measurements. A simple method of
putting this idea into practice is to employ the test–
retest method. When using this technique, a group of
subjects is tested twice using the same measure. The
two sets of scores are correlated and the correlation
coefficient is used as an estimate of the reliability of
the measurement. This approach assumes that the
correlation between the test and the retest is due to the
underlying, unobservable true scores of the instrument,
which have remained constant. The correlation is
expected to be less than perfect due to random mea-
surement errors that may have occurred. While this
assumption is considerably optimistic, a finding of a
strong correlation does support the premise that the
instrument is stable.
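As a concrete sketch of the procedure just described, the test–retest reliability estimate is simply the Pearson correlation of the two administrations' scores. The scores below are hypothetical and for illustration only; this is not the authors' code or data.

```python
from statistics import mean, stdev

def pearson_r(x, y):
    """Pearson correlation between two paired score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

# Hypothetical 5-point item scores from eight respondents,
# collected once (test) and again later (retest).
test = [4, 5, 3, 4, 2, 5, 4, 3]
retest = [4, 4, 3, 5, 2, 5, 3, 3]

# The correlation serves as the test-retest reliability estimate.
reliability = pearson_r(test, retest)
```

A strong positive value supports stability; a value near zero would suggest the instrument does not measure a stable underlying construct.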
The test–retest technique has been widely used;
however, it does have shortcomings. If a person is
tested twice in a row using the same instrument, a bias
may emerge due to a carry-over effect. The act of
filling out the questionnaire items in the test may
influence the responses given in the retest. If the
interval of time between testings is too short, the
respondent may recall their replies and try to match
them rather than reassess the content of the questions.
This could lead to an inflation of the correlations. If the
time period between administrations is too long, the
respondent or the system under study may undergo
changes that influence the response. In addition, the
respondents will have an opportunity to think of
aspects of the system they had not considered in the
original administration of the test. To minimize these
effects, Nunnally [28] suggests waiting for an interval
between two weeks and one month. He suggests that
this period of time should be sufficient to keep
memory from being a strong factor.
Besides recall and time, another consideration is
reactivity, which can deflate test–retest correlations
and indicate falsely low reliability. This phenomenon
occurs when a person becomes sensitized to the
instrument and `learns' to respond in a way he or she
believes is expected.
2.2. Validity
The validity, or the extent to which an instrument
measures what it is intended to measure [4], may be
assessed in two ways: construct and convergent valid-
ity. Construct validity is the degree to which the
measures chosen are true constructs describing the
event of interest rather than merely artifacts of the
methodology itself. Two methods of assessing construct
validity are correlation analysis and confirmatory
factor analysis. Correlation analysis is used to indicate
the degree of association between each item and the
total score of an instrument. A significant correlation
indicates the presence of construct validity. A con-
firmatory factor analysis procedure can be performed to
provide evidence that a set of latent variables exists
and that these account for covariances among a set of
observed variables. The a priori designation of these
factor patterns is tested against sample data to pro-
vide evidence of their psychometric stability.
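The item–total correlation check described above can be sketched as follows. A corrected item–total correlation compares each item against the sum of the remaining items, so the item does not inflate its own correlation. The response matrix here is hypothetical, used purely for illustration.

```python
from statistics import mean, stdev

def pearson_r(x, y):
    """Pearson correlation between two paired score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

def corrected_item_total(responses, k):
    """Correlate item k with the total score of the remaining items."""
    item = [row[k] for row in responses]
    rest = [sum(row) - row[k] for row in responses]
    return pearson_r(item, rest)

# Hypothetical responses: one row per respondent, one column per item.
responses = [
    [4, 5, 4], [3, 3, 4], [5, 5, 5],
    [2, 3, 2], [4, 4, 5], [3, 2, 3],
]
r0 = corrected_item_total(responses, 0)
```

A significant positive correlation for each item, as reported later in Table 4, is what supports construct validity.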
3. Current study
3.1. Representational model decision support
systems
An early study by Alter breaks DSS model types
into seven general categories. These are based on the
degree to which the system outputs determine the
resulting decision. The research examines a particular
DSS type, representational models, specifically dis-
crete event computer simulation.
Computer simulation has become a popular deci-
sion support tool [11, 12, 14, 15]. Computerized
simulation dates back to the 1950s, when some of
the first computer models were developed. Simulation
was time consuming and difficult to use because of the
slow hardware and software platforms available at the
time. Problem-solving was cumbersome and costly. In
the seventies and early eighties simulation started to
be used more often in organizational decision-making
settings. The introduction of the personal computer
and the proliferation of simulation software in the late
eighties and early nineties have put computer simula-
tion into the hands of more decision makers. Not only
did the number of users increase, but so did the variety
of available simulation software packages [33]. In the
1993 OR/MS Today Simulation Software Survey,
James Swain [35] reported that practitioners of OR/
MS were familiar with simulation and also discussed
more than fifty discrete event simulation products.
Most modern simulation usage centers on its capability [24, 25, 27, 32, 36].
3.2. Research methods
This study's population of interest was users of
representational DSS, speci®cally discrete-event com-
puter simulation. A list of potential candidates was
developed, using the membership list of the Society
for Computer Simulation and a list of the recent
Winter Simulation Conference attendees. Five hun-
dred and three of these candidates were randomly
selected. First, a letter was sent to the candidates
asking for participation, then the survey package
was mailed. A reminder card was sent two weeks
later. Each candidate received two questionnaires, one
asking for a report on a successful simulation effort
and the second asking for a report on a less-than-
successful one.
One-hundred and ninety-eight responses were
returned. Of these, 123 were suitable for analysis.
Fourteen packets were returned unopened. Another
fifty-nine returns indicated that the respondent would
be unable to participate in the study. Of the 123 usable
responses, forty questionnaires were paired. In other
words, both successful and less-than-successful simu-
lations were reported by the same source. Therefore,
of the 503 packets sent out, 105 different individuals/
companies were represented in a total of 123 different
simulation projects. This makes the net response rate
105 out of 489, or 21.5%. The respondents worked
in a variety of fields, including manufacturing,
health, government, service, computer, and consult-
ing. The projects ranged from simple predictive models
to complex manufacturing systems.
As recommended by Nunnally [28], the follow-up
survey was mailed one month after an initial response
was received, to prevent carry-over and memory
effects. The follow-up survey provided information
about the system described in the initial response and
asked for several additional pieces of information. The
set of EUCS questions was included among them.
Seventy-four usable responses were received in this
follow-up survey, making the response rate 74 out
of 105, or 70%.
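The response-rate arithmetic above works out as follows; this is simply a restatement of the figures reported in the text.

```python
# Figures reported in the text.
mailed = 503
returned_unopened = 14
eligible = mailed - returned_unopened  # 489 deliverable packets

initial_respondents = 105
initial_rate = initial_respondents / eligible  # reported as 21.5%

followup_responses = 74
followup_rate = followup_responses / initial_respondents  # reported as 70%
```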
3.3. Reliability
3.3.1. Alpha
Reliability, or internal consistency, of the EUCS
instrument was assessed using Cronbach's α [5] and
found to be 0.928 for the test data and 0.938 for the
retest data. Table 1 shows these results. This study's
α compares favorably with an overall α of 0.92
in the original study [9]. The subscale αs also
report satisfactory values, ranging from 0.797 to
0.929 for the test data and from 0.702 to 0.911 for
the retest data.
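Cronbach's α can be computed from a respondent-by-item score matrix with the standard formula, as sketched below. The data here are hypothetical; this is not the study's data set.

```python
from statistics import pvariance

def cronbach_alpha(responses):
    """Cronbach's alpha for a matrix with one row per respondent
    and one column per questionnaire item."""
    k = len(responses[0])
    item_vars = [pvariance([row[i] for row in responses]) for i in range(k)]
    total_var = pvariance([sum(row) for row in responses])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Hypothetical 5-point responses from six respondents on three items.
responses = [
    [4, 5, 4], [3, 3, 4], [5, 5, 5],
    [2, 3, 2], [4, 4, 5], [3, 2, 3],
]
alpha = cronbach_alpha(responses)
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency, which is why the subscale αs reported in Table 1 are described as satisfactory.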
3.3.2. Correlation results
Correlation coefficients between the components of
the end-user computing satisfaction instrument in the
test and retest administrations were computed. For the
individual items, the correlations ranged from 0.409
to 0.701. The values improved for the subscales,
which ranged from 0.551 to 0.729.
The global measures of EUCS correlated at 0.760.
These results are shown in Table 2.
Table 1
Internal consistency: Cronbach's αs

Variable             Test data α   Retest data α
Overall instrument   0.928         0.938
Subscales
  Content            0.851         0.911
  Accuracy           0.929         0.886
  Format             0.839         0.781
  Ease of use        0.887         0.871
  Timeliness         0.797         0.702
Global score         0.866         0.871
Table 2
Correlation analysis
Variable Correlation (test–retest)
A1 0.553
A2 0.544
C1 0.586
C2 0.561
C3 0.580
C4 0.559
E1 0.701
E2 0.588
F1 0.578
F2 0.602
T1 0.503
T2 0.409
Subscales
Accuracy 0.572
Content 0.664
Ease of use 0.729
Format 0.684
Timeliness 0.551
Overall
Summary 0.760
3.3.3. Paired t-tests
The paired t-test results are shown in Table 3. The
subjects' individual responses from the test data set are
paired with the corresponding responses from the retest
data set. The resulting mean differences are reported
for the individual items, the subscales, and the global
score. Results indicate a significant difference (p < 0.05)
only in the responses for item A2, 'Satisfied with
accuracy'. The accuracy subscale also demonstrates a
difference between the test and retest data (p < 0.08).
The overall t-test was not significant (p < 0.15).
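The paired t statistic behind Table 3 can be sketched with the standard formula: the mean of the per-respondent test–retest differences divided by the standard error of those differences. The paired scores below are hypothetical.

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(x, y):
    """Paired t statistic and degrees of freedom for matched samples."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    t = mean(d) / (stdev(d) / sqrt(n))
    return t, n - 1

# Hypothetical test/retest item scores from four respondents.
t_stat, df = paired_t([2, 3, 4, 5], [1, 3, 3, 5])
```

A t value whose two-tailed probability falls below 0.05, as for item A2 in Table 3, indicates a systematic shift between the two administrations rather than random noise.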
3.4. Validity
Two methods of assessing construct validity were
used: correlation analysis and confirmatory factor
analysis. Table 4 lists the single-item correlations,
all of which are significant, ranging from 0.579 to
0.805 for the test data and from 0.647 to 0.803 for the
retest data. The subscale correlations are also signifi-
cant and range from 0.634 to 0.852 in the test data and
from 0.641 to 0.852 in the retest data. These correla-
tions support the premise that the instrument has
construct validity. Table 5 reports simple statistics
and correlations for each element of the instrument.
Table 3
Paired t-test results

Variable      Mean difference   SE      t        Probability > |t|
C1             0.013            0.099    0.136   0.89
C2            -0.054            0.105   -0.514   0.61
C3            -0.149            0.103   -1.442   0.15
C4            -0.135            0.106   -1.275   0.21
A1            -0.135            0.104   -1.297   0.20
A2            -0.230            0.113   -2.031   0.05 a
F1            -0.040            0.091   -0.445   0.66
F2            -0.135            0.099   -1.369   0.18
E1             0.040            0.108    0.378   0.71
E2            -0.054            0.120   -0.450   0.65
T1            -0.040            0.122   -0.331   0.74
T2            -0.162            0.114   -1.424   0.16
Subscales
Accuracy      -0.365            0.202   -1.803   0.08 b
Content       -0.324            0.320   -1.014   0.31
Ease of use   -0.014            0.189   -0.071   0.94
Format        -0.176            0.152   -1.156   0.25
Timeliness    -0.203            0.193   -1.048   0.30
Overall
Global        -1.081            0.737   -1.468   0.15

a Significant (p < 0.05). b Significant (p < 0.10).
Table 4
EUCS instrument reliability analysis

             Corrected item–total correlation   α of entire instrument, if item or construct is deleted
Item         Test   Retest  Combined            Test   Retest  Combined
C1           0.81   0.80    0.81                0.93   0.93    0.92
C2           0.80   0.80    0.83                0.93   0.93    0.92
C3           0.74   0.73    0.73                0.93   0.93    0.93
C4           0.77   0.68    0.72                0.93   0.93    0.93
A1           0.70   0.71    0.72                0.93   0.93    0.93
A2           0.75   0.75    0.76                0.93   0.93    0.93
E1           0.66   0.69    0.70                0.93   0.93    0.93
E2           0.66   0.65    0.80                0.93   0.93    0.92
F1           0.63   0.73    0.64                0.93   0.93    0.93
F2           0.80   0.78    0.62                0.93   0.93    0.93
T1           0.58   0.65    0.57                0.93   0.93    0.93
T2           0.71   0.69    0.69                0.93   0.93    0.93
Subscales
Content      0.85   0.85    0.86                0.83   0.83    0.82
Accuracy     0.72   0.70    0.73                0.84   0.85    0.84
Ease of use  0.63   0.64    0.60                0.85   0.86    0.86
Format       0.80   0.79    0.81                0.83   0.84    0.83
Timeliness   0.70   0.70    0.67                0.84   0.85    0.85
To further support the premise that this instrument
has construct validity, a factor analysis procedure was
performed to confirm its psychometric properties.
Doll and Torkzadeh originally proposed a five-scale
factor structure. In subsequent research, they recom-
mended a second-order factor structure with a single
factor called end-user computing satisfaction. The first-
order structure matches the original factor structure of
Table 5
EUCS instrument correlation matrices and simple statistics (sample size = 148)
(a) Item correlations
C2 0.81
C3 0.65 0.67
C4 0.60 0.69 0.54
A1 0.58 0.68 0.47 0.62
A2 0.65 0.71 0.48 0.66 0.83
E1 0.56 0.51 0.56 0.36 0.42 0.45
E2 0.53 0.50 0.53 0.29 0.42 0.44 0.78
F1 0.62 0.65 0.64 0.53 0.50 0.50 0.49 0.44
F2 0.69 0.74 0.68 0.63 0.61 0.71 0.58 0.55 0.68
T1 0.53 0.45 0.43 0.49 0.43 0.46 0.30 0.40 0.44 0.40
T2 0.59 0.59 0.50 0.76 0.59 0.54 0.40 0.32 0.48 0.49 0.61
C1 C2 C3 C4 A1 A2 E1 E2 F1 F2 T1
(b) Subscale and overall instrument correlations
Accuracy 0.728
Ease of Use 0.588 0.479
Format 0.818 0.667 0.597
Timeliness 0.695 0.583 0.418 0.545
Overall EUCS 0.941 0.823 0.745 0.868 0.768
Content Accuracy Ease of use Format Timeliness
(c) Individual items: simple statistics
Item Mean Standard deviation
C1 3.83 0.93
C2 4.00 0.95
C3 3.75 0.96
C4 3.99 0.97
A1 4.04 0.95
A2 4.11 1.02
E1 3.34 1.19
E2 3.41 1.13
F1 3.82 0.85
F2 3.96 0.94
T1 3.94 1.05
T2 4.07 0.90
(d) Subscales: simple statistics
Factor Mean Standard deviation
Content 15.57 3.29
Accuracy 8.16 1.88
Format 7.78 1.64
Ease of use 6.75 2.19
Timeliness 8.01 1.75
Overall EUCS 46.26 9.05
Content, Accuracy, Format, Ease of Use and Time-
liness. The exact form of the instrument used in this
study is illustrated in Fig. 1. The second-order model
was assessed using confirmatory factor analysis with
the SAS proc CALIS and LISREL 8 [31]. The model
contains the a priori factor structure that was tested.
Table 6 presents the goodness-of-fit indexes for this
study and compares them to the values reported by
Doll et al. [10]. The absolute indexes (GFI = 0.866,
AGFI = 0.762 and RMSR = 0.051) compare favor-
ably with the values reported by Doll et al., indicating
a good model–data fit. The χ2-statistic divided by the
degrees of freedom also indicates a reasonable fit at
3.30 [38].
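Using the χ2 and degrees-of-freedom values reported in Table 6, the normed chi-square works out as below. The acceptability cutoff of 5.0 used here is a common rule of thumb associated with Wheaton et al. [38], not a figure stated in this study.

```python
def normed_chi_square(chi2, df):
    """Chi-square statistic divided by its degrees of freedom."""
    return chi2 / df

# Values reported in Table 6 for the current study.
ratio = normed_chi_square(145.15, 44)  # reported as 3.30
acceptable = ratio < 5.0               # rough rule of thumb [38]
```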
LISREL's maximum likelihood estimates of the
standardized parameter estimates are presented in
Fig. 1. EUCS model: five first-order factors, one second-order factor.
Table 7 for the observed variables and Table 8 for the
latent variables.
Table 7 compares factor loadings, corresponding t-
values, and R2-values for this study with those
reported by Doll et al. All items have significant
loadings on their corresponding factors, indicating
good construct validity. R2-values range from 0.48 to
0.89, providing evidence of acceptable reliability for
all individual items.
Table 8 provides standard structural coefficients
and corresponding t-values as well as R2-values for
the latent variables. The standard structural coeffi-
Table 6
Goodness-of-fit indexes

                                        Current study   Doll, Xia and Torkzadeh study (1994)
χ2 (df)                                 145.15 (44)     185.81 (50)
χ2/df                                   3.30            3.72
Normed fit index (NFI)                  0.899           0.940
Goodness-of-fit index (GFI)             0.866           0.929
Adjusted goodness-of-fit index (AGFI)   0.762           0.889
Root mean square residual (RMSR)        0.051           0.035
Table 7
Standardized parameter estimates and t-values

       Current study                       Doll, Xia, and Torkzadeh study (1994)
Item   Factor loading   R2 (reliability)   Factor loading   R2 (reliability)
C1     0.855 (12.77)    0.73               0.826* a         0.68
C2     0.891 (13.66)    0.79               0.852 (20.36)    0.73
C3     0.758 (10.66)    0.58               0.725 (16.23)    0.53
C4     0.781 (11.14)    0.61               0.822 (19.32)    0.68
A1     0.883 (13.16)    0.78               0.868* a         0.76
A2     0.944 (14.61)    0.89               0.890 (20.47)    0.79
F1     0.757 (10.49)    0.57               0.780* a         0.61
F2     0.900 (13.34)    0.81               0.829 (17.89)    0.69
E1     0.915 (12.91)    0.84               0.848* a         0.72
E2     0.856 (10.66)    0.58               0.880 (16.71)    0.78
T1     0.690 (8.85)     0.48               0.720* a         0.52
T2     0.880 (11.72)    0.78               0.759 (13.10)    0.58

a An asterisk indicates a parameter fixed at 1.0 in the original solution. t-values for item factor loadings are shown in parentheses.
Table 8
Structural coefficients and t-values

             Current study                                     Doll, Xia, and Torkzadeh study (1994)
Factor       Standard structure coefficient a   R2 (reliability)   Standard structure coefficient a   R2 (reliability)
Content      0.955 (15.22)                      0.91               0.912 (17.67)                      0.68
Accuracy     0.770 (10.86)                      0.59               0.822 (16.04)                      0.73
Format       0.855 (12.70)                      0.73               0.993 (18.19)                      0.53
Ease of use  0.629 (8.30)                       0.40               0.719 (13.09)                      0.68
Timeliness   0.712 (9.74)                       0.51               0.883 (13.78)                      0.76

a t-values for factor structural coefficients are shown in parentheses.
cients indicate the validity of the latent constructs,
with values ranging from 0.629 to 0.955. The t-values
are all significant and the R2-values range from 0.40 to
0.91, indicating acceptable reliability for all factors.
4. Conclusions
Correlation analysis and confirmatory factor ana-
lysis indicate that the EUCS instrument is psychome-
trically sound and valid in test and retest
administrations. Strong values for Cronbach's α indi-
cate good internal consistency in both the test and
retest administrations. Correlation analysis between
test and retest administrations of the instrument fails to
detect any problems with reliability; however, differ-
ence testing provides mixed results. While eleven of
the individual questionnaire item responses do not
show any significant differences between administra-
tions, one does at the (p < 0.05) level. This difference
is reflected less significantly at the subscale level
(p < 0.08) and is not reflected as a significant differ-
ence in the global EUCS item (p = 0.15).
The mixed findings with respect to the difference
testing do not necessarily indicate instability of the
underlying theoretical construct, nor do they mean that
the instrument is flawed. These findings might indicate
a memory or reactivity effect. A memory effect can
occur when respondents are able to recall the answers
given in a prior test administration. Since a month
passed between the initial test and the retest, the
likelihood of memory playing a role is diminished.
Reactivity is more likely to explain the differences
discovered between several of the items. Reactivity
occurs when questionnaire respondents think
about the questions between the two administrations.
This effect is common among respondents who are not
used to answering detailed questions.
Another explanation for the mixed results may
relate to the very nature of the software systems being
evaluated. Several individual simulation systems rated
as highly successful in the initial response were rated
as unsuccessful in the follow-up response. An altered
rating might occur for several reasons: the user's
perception of the software system may have changed,
new problems may have come to the user's notice in
the intervening month, or the user may have become
aware of additional information about the operation of
the system or of similar systems that are better. It is
also possible that the system being modeled was put
into operation during the intervening month, showing
that the simulation was not as accurate a depiction as
originally believed. This argument is strengthened by
looking at the questionnaire item with significant
differences: A2, 'Are you satisfied with the accuracy of
the system?' Another possibility is that questions
included on the original questionnaire introduced a
degree of response bias. The initial survey asked
detailed questions relating to numerous aspects of the
system being rated. These items were used to test a
contingency model for computer simulation success.
The follow-up survey summarized the initial responses
and asked only some of the EUCS questions. The
initial EUCS responses may have been influenced by
the thought process used by the subjects as they
regarded each characteristic of their system in great
detail. As a result, the follow-up respondents may have
provided a shallower assessment of the system,
reflected in a slightly lower EUCS rating.
Although one item exhibited a significant difference
between the test and retest applications of the EUCS
instrument, the overall findings provide evidence in
support of its psychometric stability. In addition, other
tests suggest the instrument is internally consistent.
Support for construct and convergent validity is also
present. In conclusion, this research indicates the
EUCS instrument can be used as a surrogate measure
of success for representational DSS, especially for
discrete event computer simulation systems.
Appendix

EUCS instrument questions
C1: Does the system provide the precise information you need?
C2: Does the information content meet your needs?
C3: Does the system provide reports that seem to be just about exactly what you need?
C4: Does the system provide sufficient information?
A1: Is the system accurate?
A2: Are you satisfied with the accuracy of the system?
F1: Do you think the output is presented in a useful format?
F2: Is the information clear?
E1: Is the system user friendly?
E2: Is the system easy to use?
T1: Do you get the information you need in time?
T2: Does the system provide up-to-date information?
References

[1] S. Alter, A taxonomy of decision support systems, Sloan Management Review (1977) 37–56.
[2] J.E. Bailey, S.W. Pearson, Development of a tool for measuring and analyzing computer user satisfaction, Management Science 29(5), 1983, pp. 530–545.
[3] S. Blili, L. Raymond, S. Rivard, Impact of task uncertainty, end-user involvement, and competence on the success of end-user computing, Information and Management 33(3), 1998, pp. 137–153.
[4] T.D. Cook, D.T. Campbell, Quasi-Experimentation: Design and Analysis Issues in Field Settings, Houghton Mifflin Company, Boston, MA, 1979.
[5] L.J. Cronbach, Coefficient alpha and the internal consistency of tests, Psychometrika 16, 1951, pp. 297–334.
[6] W.H. Cunningham, W.T. Anderson, Jr., J. Murphy, Are students real people? The Journal of Business 47(3), 1974, pp. 399–409.
[7] F.D. Davis, Perceived usefulness, perceived ease of use, and user acceptance of information technology, MIS Quarterly (1989) 319–340.
[8] W.H. Delone, E.R. McLean, Information systems success: the quest for the dependent variable, Information Systems Research 3(1), 1992, pp. 60–95.
[9] W.J. Doll, G. Torkzadeh, The measurement of end-user computing satisfaction, MIS Quarterly (1988) 259–274.
[10] W.J. Doll, W. Xia, G. Torkzadeh, A confirmatory factor analysis of the end-user computing satisfaction instrument, MIS Quarterly (1994) 453–461.
[11] D.L. Eldredge, H.J. Watson, An ongoing study of the practice of simulation in industry, Simulation and Gaming 27(3), 1996, pp. 375–386.
[12] F. Ford, D. Bradbard, J. Cox, W. Ledbetter, Simulation in corporate decision making: then and now, Simulation 49(6), 1987, pp. 277–282.
[13] D.F. Galletta, A.L. Lederer, Some cautions on the measurement of user information satisfaction, Decision Sciences 20(3), 1989, pp. 419–438.
[14] A. Geoffrion, Can MS/OR evolve fast enough? Interfaces 13 (1983) 10–15.
[15] P. Gray, I. Borovits, The contrasting roles of Monte Carlo simulation and gaming in decision support systems, Simulation 47(6), 1986, pp. 233–239.
[16] S.R. Hawk, N.S. Raju, Test–retest reliability of user information satisfaction: a comment on Galletta and Lederer's paper, Decision Sciences 22(4), 1991, pp. 1165–1170.
[17] A.R. Hendrickson, K. Glorfeld, T.P. Cronan, On the repeated test–retest reliability of the end-user computing satisfaction instrument: a comment, Decision Sciences 25(4), 1994, pp. 655–667.
[18] A.R. Hendrickson, P.D. Massey, T.P. Cronan, On the test–retest reliability of perceived usefulness and perceived ease of use scales, MIS Quarterly 17(2), 1993, pp. 227–230.
[19] C.T. Hughes, M.L. Gibson, Students as surrogates for managers in a decision-making environment: an experimental study, Journal of Management Information Systems 8(2), 1991, pp. 153–166.
[20] B. Ives, M.H. Olson, User involvement and MIS success: a review of research, Management Science 30(5), 1984, pp. 586–603.
[21] S.L. Jarvenpaa, G.W. Dickson, G. DeSanctis, Methodological issues in experimental IS research: experiences and recommendations, MIS Quarterly 9(2), 1985, pp. 141–156.
[22] K.G. Jöreskog, D. Sörbom, LISREL 8: Structural Equation Modeling with the SIMPLIS Command Language, Scientific Software, Inc., Chicago, IL, 1993.
[23] P.G.W. Keen, Reference disciplines and a cumulative tradition, Proceedings of the First International Conference on Information Systems (December), 1980, pp. 9–18.
[24] A.M. Law, W.D. Kelton, Simulation Modeling and Analysis, second edn., McGraw-Hill, New York, 1993.
[25] L. Lin, J. Cochran, J. Sarkis, A metamodel-based decision support system for shop floor production control, Computers in Industry 18, 1992, pp. 155–168.
[26] R.W. McHaney, T.P. Cronan, Computer simulation success: on the use of the end-user computing satisfaction instrument, Decision Sciences 29(2), 1998, pp. 525–534.
[27] J.G. Moser, Integration of artificial intelligence and simulation in a comprehensive decision-support system, Simulation 47(6), 1986, pp. 223–229.
[28] J.C. Nunnally, Psychometric Theory, second edn., McGraw-Hill, New York, 1978.
[29] S.C. Palvia, N.L. Chervany, An experimental investigation of factors influencing predicted success in DSS implementation, Information and Management 29(1), 1995, pp. 43–54.
[30] J.F. Rockart, L.S. Flannery, The management of end-user computing, Communications of the ACM 26(10), 1983, pp. 776–784.
[31] SAS Institute, SAS User's Guide: Volume 1, ACECLUS-FREQ, Version 6, fourth edn., SAS Institute, Inc., Cary, NC, 1994.
[32] T.J. Schriber, An Introduction to Simulation Using GPSS/H, John Wiley & Sons, New York, 1992.
[33] Society for Computer Simulation, Directory of vendors of simulators, specific components, and related services, Simulation (1989) 259–275.
[34] D.W. Straub, Validating instruments in MIS research, MIS Quarterly 13(2), 1989, pp. 147–166.
[35] J. Swain, Flexible tools for modeling, OR/MS Today (1993) 62–78.
[36] A. Thesen, L. Travis, Simulation for Decision Making, West Publishing Company, St. Paul, MN, 1992.
[37] G. Torkzadeh, W.J. Doll, Test–retest reliability of the end-user computing satisfaction instrument, Decision Sciences 22(1), 1991, pp. 26–33.
[38] B.B. Wheaton, B. Muthén, D.F. Alwin, G.F. Summers, Assessing reliability and stability in panel models, in: D.R. Heise (Ed.), Sociological Methodology, Jossey-Bass, San Francisco, CA, 1977.
Roger McHaney has current research interests in automated
guided vehicle system simulation, innovative uses for simulation
languages, and simulation success. Dr. McHaney holds a Ph.D.
from the University of Arkansas where he specialized in computer
information systems and quantitative analysis. He is currently an
Assistant Professor at Kansas State University. Dr. McHaney is
author of the textbook, Computer Simulation: A Practical
Perspective and has published simulation-related research in
journals such as Decision Sciences, Decision Support Systems,
International Journal of Production Research and Simulation &
Gaming.
Ross Hightower is an Assistant Professor of Management
Information Systems at the College of Business Administration,
University of Central Florida. His primary research interest is
computer-mediated communication, and information exchange in
groups. His work has appeared in journals such as Information
Systems Research, Decision Sciences, Information and Manage-
ment, and Computers in Human Behavior. He received his
doctorate in business administration from Georgia State University.
Doug White is an Assistant Professor at Western Michigan
University. He has published in Simulation and Gaming, IEEE:
Simulation Digest, Computers and Composition, and others. Dr.
White has worked for the Federal Reserve System and Oak Ridge
National Laboratories. Dr. White currently teaches computer
programming and acts as a networking consultant.