
Computers in Human Behavior 22 (2006) 427–447

Adding contextual specificity to the technology acceptance model

Daniel J. McFarland, Diane Hamilton

Management Information Systems, College of Business, Rowan University, 201 Mullica Hill Road, Glassboro, NJ 08028, USA

Available online 11 November 2004

Abstract

This paper examines the influence of contextual specificity when describing technology acceptance. Social cognitive theory provides the basis for adding several independent variables (computer anxiety, prior experience, others' use, organizational support, task structure, and system quality) and one intervening variable (computer-efficacy) to the technology acceptance model (TAM). This extended model was tested using a mail survey, and the results were analyzed using partial least squares. The results show that system usage is strongly influenced by computer anxiety, prior experience, others' use, organizational support, task structure, system quality, and perceived usefulness. In addition, perceived usefulness is the strongest mediator in determining system usage.

© 2004 Elsevier Ltd. All rights reserved.

Keywords: Technology acceptance; Self-efficacy; Computer efficacy; System usage; Partial least squares

1. Introduction

Researchers and practitioners alike strive to understand individuals' unwillingness to accept systems that appear to promise substantial benefits. Davis, Bagozzi, and Warshaw (1989, p. 587) conclude that "understanding why people accept or reject computers has proven to be one of the most challenging issues in IS research." This lack of understanding continues despite recent improvements in application usability and ease of use (Hasan, 2003). With employees seemingly accepting and rejecting systems unsystematically, many organizations are failing to achieve the benefits promised to them by software manufacturers.

The technology acceptance model (TAM) is one of the most widely used models for describing IT usage behaviors (Igbaria, Guimaraes, & Davis, 1995). The TAM asserts that IT behaviors are based largely on users' perceptions of a system's ease of use and usefulness. While the model "has been empirically proven to have high validity" (Chau, 1996, p. 187), it "only supplies general information on users' opinions about a system" (Mathieson, 1991, p. 173). Similarly, user evaluation measures, such as perceived ease of use and perceived usefulness, encompass many different user meanings and theoretical constructs (Chau, 1996; Moore & Benbasat, 1991; Segars & Grover, 1994). Goodhue (1995, p. 1828) concludes that "there are so many different underlying constructs, it is probably not possible to develop a single general theoretical basis for user evaluations." Cognitive psychologists support arguments opposing the mental averaging of an activity domain: "Combining diverse attributes into a single index creates confusions about what is actually being measured and how much weight is given to particular attributes in the forced summary judgment" (Bandura, 1997, p. 11).

2. Technology acceptance models

Igbaria et al. (1995) conclude that the TAM is one of the simplest, easiest to use, and most powerful computer usage models. Similarly, Chau (1996) described the TAM as one of the most influential of the over 20 computer usage models that Saga and Zmud (1994) reviewed.

The theoretical foundation for the TAM is Fishbein and Ajzen's (1975) theory of reasoned action (TRA). "The TAM adapted the generic TRA model to the particular domain of user acceptance of computer technology, replacing the TRA's attitudinal determinants, derived separately for each behavior, with a set of two variables: perceived ease of use (PEOU) and perceived usefulness (PERUSE)" (Igbaria et al., 1995, p. 88). PEOU is defined as "the degree to which a person believes using a particular system would be free of effort" and PERUSE is "the degree to which a person believes that using a particular system would enhance his or her job performance" (Davis, 1989, p. 320). Fig. 1 shows a common operationalization of the TAM (Igbaria et al., 1995).

[Fig. 1. Simplified technology acceptance model: external variables feed perceived ease of use and perceived usefulness, which in turn determine system usage.]

The TAM is also based on "the cost-benefit paradigm from behavioral decision theory" (Davis, 1989, p. 321). In general, the cost-benefit paradigm posits that human behavior is based on a person's cognitive tradeoff between the effort required to perform an action and the consequences of completing the action (Jarvenpaa, 1989). Within MIS, the TAM asserts that a person will use an application if the performance benefits outweigh the effort of using the application (Davis, 1989). Davis (1989) assesses performance benefits by measuring a person's anticipated consequences of using the system (a.k.a. PERUSE), and effort by assessing a person's belief that using an application is free of effort (a.k.a. PEOU). Several empirical studies demonstrate the efficiency, effectiveness, and validity of the TAM and the superiority of the TAM to the TRA (Adams, Nelson, & Todd, 1992; Chau, 1996; Davis, 1986; Davis et al., 1989; Hendrickson, Glorfeld, & Cronan, 1994; Hubona & Cheney, 1994; Igbaria et al., 1995; Mathieson, 1991; Segars & Grover, 1994).
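Read as structural equations, the simplified TAM of Fig. 1 can be written as the following system (a sketch of the common operationalization, not estimates from this study; X stands for the external variables):

```latex
\begin{aligned}
\mathrm{PEOU}   &= \gamma_{1} X + \varepsilon_{1},\\
\mathrm{PERUSE} &= \gamma_{2} X + \beta_{1}\,\mathrm{PEOU} + \varepsilon_{2},\\
\mathrm{Usage}  &= \beta_{2}\,\mathrm{PEOU} + \beta_{3}\,\mathrm{PERUSE} + \varepsilon_{3}.
\end{aligned}
```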

Although the TAM "has been empirically proven to have high validity" (Chau, 1996, p. 187), the model explains only a fraction of the observed IT usage variance. Davis et al.'s (1989) study showed one of the highest explained variances: these researchers were able to explain between 45% and 57% of the variance associated with volunteer students' intentions to use a non-required word-processing application. However, studies with similar objectives that investigated usage in the field explained much less variance; the percent variance explained in field studies ranges between 4% (Adams et al., 1992) and 45% (Igbaria et al., 1995). Consider PEOU: it is unclear whether these evaluations are based on the ease with which users interact with the system or on the ease with which users interact with the tasks themselves. For example, when attempting to solve a challenging statistical problem, people may not consider the system's usability if they believe that they do not possess the requisite statistical expertise. Consequently, without regard to system differences, all statistical applications may receive equally poor PEOU evaluations. Similarly, PERUSE measures may evaluate task usefulness rather than technological usefulness. To illustrate, consider a system designed to monitor critically ill patients in an emergency room. Without considering the system's effectiveness, people may evaluate it as being very useful.

3. Self-efficacy and computer-efficacy

A promising addition to behavioral research is the concept of self-efficacy (Compeau & Higgins, 1995; Gist & Mitchell, 1992; Hasan, 2003). "Self-efficacy refers to the judgments an individual makes about his or her capability to mobilize the motivation, cognitive resources, and course of action needed to orchestrate future performance on a specific task" (Martocchio & Dulebohn, 1994, p. 358). This definition emphasizes three critical characteristics of self-efficacy. First, self-efficacy is one's belief in his or her capability to produce an outcome rather than an assessment regarding the impacts of the outcome. Next, self-efficacy's focus is on overall results rather than component-level skills. Finally, self-efficacy is a judgment of 'what one can do' in the future rather than an assessment of 'what one has done' in the past. Empirical studies have shown self-efficacy to be a distinctive, valid, and significant construct (Bandura, 1997; Gist & Mitchell, 1992; Igbaria & Iivari, 1995).

To predict human behavior, people's belief that their actions contribute to success and a self-assessment of their capabilities to accomplish the activity should both be considered (Bandura, 1997). Although these beliefs are separate, they are not independent of one another. "Beliefs that outcomes are determined by one's own behavior can be either demoralizing or empowering, depending on whether or not one believes one can produce the required behavior" (Bandura, 1997, p. 20). In combining these beliefs, people develop self-images of future success or failure states (Markus & Nurius, 1986). These self-conceptualizations of the future serve to guide and motivate behavior. As a result, people's beliefs in their causative capabilities "influence the way they think, feel, motivate themselves and act" (Bandura, 1995, p. 2). "Given the importance of self-efficacy for predicting and improving work performance and behavior" (Igbaria & Iivari, 1995, p. 588), several investigators argue the need for further research to examine the role of self-efficacy in computing behavior (Gist & Mitchell, 1992; Gist, Schwoerer, & Rosen, 1989; Igbaria & Iivari, 1995).

In borrowing self-efficacy from cognitive psychology, MIS researchers have defined computer-efficacy as one's general belief that he or she is capable of putting computer technologies to use (Compeau & Higgins, 1995; Venkatesh & Davis, 1996). Fig. 2 graphically depicts Compeau and Higgins' computer-efficacy model. Empirical studies show computer-efficacy influencing technology adoption (Burkhardt & Brass, 1990; Igbaria & Iivari, 1995), system usage (Compeau & Higgins, 1995; Igbaria & Iivari, 1995), system ease of use perceptions (Venkatesh & Davis, 1996), affective states (Compeau & Higgins, 1995; Igbaria & Iivari, 1995), and computer training (Gist, Schwoerer, & Rosen, 1989; Hill, Smith, & Mann, 1987; Webster & Martocchio, 1992).

[Fig. 2. Compeau and Higgins' computer-efficacy model: encouragement by others, others' use, and support shape computer-efficacy and outcome expectations, which in turn influence the user's affective state, anxiety, and system usage.]

Davis (1989) theorized that computer-efficacy was distinct from PERUSE and PEOU, and subsequent empirical research has demonstrated this distinctiveness (Igbaria & Iivari, 1995; Venkatesh & Davis, 1996). Additionally, research findings have demonstrated computer-efficacy's significance in explaining computing behavior (Fenech, 1998; Igbaria & Iivari, 1995; Venkatesh & Davis, 1996). Computer-efficacy is defined as one's belief that he or she is capable of using a computer to complete a task, without regard to the task's difficulty or consequences. For example, although it would neither improve my job performance nor be an easy task, I am quite confident in my ability to use C++ to develop a simulation depicting my monthly earning and spending habits.

Although the results of existing computer-efficacy studies are encouraging, like the TAM, computer-efficacy has been defined as a general construct, that is, one's self-assessment of his or her "abilities to use information and computer technologies in general" (Venkatesh & Davis, 1996, p. 452). Insofar as computer-efficacy is "system-independent" (Venkatesh & Davis, 1996, p. 473), these models suggest that if a person is confident using one application to complete a particular task, he or she will be confident using any application to complete any task. However, Goodhue (1995) concludes that the specific task and system quality significantly affect users' evaluations of systems. Similarly, while investigators have suggested that system quality may impact computer-efficacy, they did not investigate or explore the potential relationship (e.g., Compeau & Higgins, 1995; Venkatesh & Davis, 1996). Empirical evidence suggests that general indices of efficacy "bear little to no relation either to efficacy beliefs related to particular activity domains or to behavior... When global efficacy beliefs are related to performance, evidence suggests that particularized efficacy beliefs account for the relation. Global beliefs lose their predictiveness when the influence of particular efficacy beliefs is removed" (Bandura, 1997, p. 42).

Additionally, social cognitive theory posits that behavior is selective for a particular individual and the current environment (Bandura, 1986). As a result, it is problematic to generalize behavioral responses. To illustrate, consider a general efficacy measure for sports. If one considers him- or herself to be a talented golfer and a terrible kick-boxer, it is problematic to define a single measure of 'sports efficacy' or to assume that, in playing one sport well, an individual possesses the propensity to play all sports well. Within the context of computers, a business systems analyst may have very high efficacy with regard to defining and developing a complicated business program using a cryptic programming language. However, the same individual may have very low efficacy regarding his or her ability to control a nuclear power plant using a system with a friendly graphical user interface. As a result, when defining computer-efficacy, one must consider contextually relevant characteristics (e.g., the task and technology) and characteristics of the individual (e.g., past experiences).


4. Research objective

The objective of this study is to examine the influence of contextual variables on end-user IT acceptance behaviors. Self-efficacy theory and social cognitive theory describe how contextual variables affect human attitude and behavior. In particular, before acting, an individual considers enactive experiences (i.e., experience dealing with this situation before), vicarious experiences (i.e., watching others deal with this situation), and social persuasion (i.e., support and encouragement received). Furthermore, a triadic relationship exists among the individual's affective state, the environmental characteristics, and the individual's behavior (Bandura, 1986). MIS researchers have adapted each of these constructs as follows: prior experience measures enactive experience (Igbaria, Parasuraman, & Baroudi, 1996), others' use measures vicarious experience (Compeau & Higgins, 1995), organizational support measures social persuasion (Igbaria, Zinatelli, Cragg, & Cavaye, 1997), computer anxiety measures affective state (Compeau & Higgins, 1995), and the computing environment is defined in terms of the task structure and the system quality (Goodhue, 1995). Fig. 3 illustrates the model considered in this study. Table 1 defines the study variables, describes their operationalization, and indicates the number of survey items (questions) used to measure each.

[Fig. 3. Research model: others' use, system quality, organizational support, prior experience, anxiety, and task structure are modeled as determinants of computer-efficacy, perceived ease of use, perceived usefulness, and system usage.]

We consider several sets of hypotheses in evaluating our model, as stated below. The hypotheses assess the extent to which the exogenous variables influence the endogenous variables and the extent to which the endogenous variables influence each other.

H1. Task structure will have a positive, direct relationship with (a) computer-efficacy, (b) perceived ease of use evaluations, (c) perceived usefulness evaluations, and (d) system usage.

H2. Prior experience will have a positive, direct relationship with (a) computer-efficacy, (b) perceived ease of use evaluations, (c) perceived usefulness evaluations, and (d) system usage.

H3. Others' use will have a positive, direct relationship with (a) computer-efficacy, (b) perceived ease of use evaluations, (c) perceived usefulness evaluations, and (d) system usage.

H4. Organizational support will have a positive, direct relationship with (a) computer-efficacy, (b) perceived ease of use evaluations, (c) perceived usefulness evaluations, and (d) system usage.

H5. Anxiety will have a negative, direct relationship with (a) computer-efficacy, (b) perceived ease of use evaluations, (c) perceived usefulness evaluations, and (d) system usage.


H6. System quality will have a positive, direct relationship with (a) computer-efficacy, (b) perceived ease of use evaluations, (c) perceived usefulness evaluations, and (d) system usage.

H7. Computer-efficacy will have a positive, direct relationship with (a) system usage, (b) perceived usefulness, and (c) perceived ease of use.

H8. Perceived ease of use evaluations will have a positive, direct relationship with (a) perceived usefulness evaluations and (b) system usage.

H9. Perceived usefulness evaluations will have a positive, direct relationship with system usage.

5. Research methodology

A mail survey was used to gather the data for this study. The survey included 41 statements such as:

- using a computer improves, or would improve, my overall job performance;
- my experience using computers at work has been successful;
- my immediate supervisor uses computers at work extensively;
- my boss supports and encourages me to use a computer; and
- for fear of making a mistake I cannot correct, I hesitate using computers at work.

Table 1
Variable definitions, number of survey items, and theoretical support

Independent variables
- Task structure: five items measuring the extent to which the task is non-routine and varied (Goodhue & Thompson, 1995; Goodhue, 1995; Igbaria, 1998).
- Prior experience: two items measuring the individual's past experience (Igbaria & Iivari, 1995; Igbaria et al., 1995; Igbaria et al., 1996; Taylor & Todd, 1995; Venkatesh & Davis, 1996).
- Others' use: three items assessing the degree to which the individual observed others using a computer (Compeau & Higgins, 1995; Igbaria et al., 1996).
- Computer anxiety: five items measuring an individual's uneasiness or apprehension toward computers (Compeau, 1992; Compeau & Higgins, 1995; Igbaria & Chakrabarti, 1990; Igbaria & Iivari, 1995; Miura, 1987; Raub, 1981; Staples et al., 1998).
- Organizational support: four items assessing management encouragement and resource support (Compeau & Higgins, 1995; Igbaria, 1990; Igbaria & Iivari, 1995; Igbaria et al., 1995; Igbaria et al., 1996; Igbaria et al., 1997; Thompson et al., 1991).
- System quality: five items assessing system functionality, performance, and interactivity (Goodhue & Thompson, 1995; Goodhue, 1995; Igbaria et al., 1995; Igbaria et al., 1990; Lucas, 1975; Lucas, 1978; Venkatesh & Davis, 1996).

Mediating variables
- Perceived usefulness (PERUSE): four items measuring the degree to which a person believes system use will enhance job performance (Adams et al., 1992; Davis, 1989; Davis et al., 1989; Igbaria et al., 1995; Igbaria et al., 1996; Igbaria et al., 1997; Jackson et al., 1997; Langford & Reeves, 1998; Venkatesh & Davis, 1996).
- Perceived ease of use (PEOU): four items measuring the degree to which a person believes system use will be free of effort (Adams et al., 1992; Davis, 1989; Davis et al., 1989; Igbaria et al., 1995; Igbaria et al., 1997; Jackson et al., 1997; Langford & Reeves, 1998; Venkatesh & Davis, 1996).
- Computer-efficacy: seven items assessing the extent to which an individual feels confident in using a computer (Compeau & Higgins, 1995; Fenech, 1998; Igbaria & Iivari, 1995; Langford & Reeves, 1998; Vandenbosch & Higgins, 1995; Venkatesh & Davis, 1996).

Dependent variable
- System usage: two self-reported system usage items, frequency of use and duration of use (Adams et al., 1992; Blair & Burton, 1987; Cheney & Dickson, 1982; Compeau & Higgins, 1995; DeLone, 1988; Igbaria & Iivari, 1995; Igbaria et al., 1996; Igbaria et al., 1989; Igbaria et al., 1997; Lee, 1986).


Respondents rated each statement on a seven-point Likert scale anchored at Strongly Agree and Strongly Disagree. These statements were adapted from prior studies (as listed in Table 1) to measure each of the contextual variables included in the model. The questionnaire was mailed to end-users with a cover letter that briefly described the study. Stamped, self-addressed return envelopes were provided for the convenience of the respondents. We sought to survey end-users from mid-size to large, for-profit organizations that are served by internal IS staff members. This was accomplished by targeting organizations with between 25 and 100 internal IS members. Since, for the purposes of this study, system usage was viewed as voluntary, we targeted individuals in professional and managerial roles. Additionally, end-users were selected from diverse industries throughout the US. The list of end-user names was purchased from Applied Computer Research, an organization specializing in contact list development and management. A pretest of the questionnaire was conducted with people from both academia and industry. Each respondent was asked to complete the questionnaire and provide feedback regarding the process and measures. Additionally, interviews were conducted to ensure that questionnaire responses were consistent with the underlying constructs.

Of the 700 surveys mailed, 114 were completed and returned, representing a response rate of 16%. Six surveys were incomplete and therefore not used. As a result, 108 completed surveys were analyzed, for an effective response rate of 15%. Demographic information was also collected from each respondent regarding his or her organizational level, functional area, educational level, gender, and age. Measures for each variable in the proposed model were obtained using the questionnaires.

5.1. Tests for model consistency and validity

The statistical analysis consisted of two stages. The first stage assessed the reliability of the measures used to operationalize the variables in this study; this involved assessing the contribution and reliability of multiple indicators for the latent and manifest variables. The second stage tested the proposed conceptual model; this involved assessing the contribution and reliability of the latent variable and manifest variable path coefficients.

The research model, as seen earlier in Fig. 3, consists of several latent variables (constructs), such as organizational support, computer-efficacy, perceived ease of use, and system usage. Latent variables are theoretical constructs that are not directly observable. In turn, latent variables are measured through a set of manifest (indicator) variables. Unlike latent variables, manifest variables are directly observable. Consequently, manifest variables provide a means by which the latent variables can be assessed. This study employed latent variable partial least squares (LVPLS) analysis (Lohmoeller, 1984). LVPLS simultaneously assesses the extent to which the manifest variables measure the latent variables (the outer, or measurement, model) and the extent to which the latent variable relationships match those proposed in the research model (the inner, or structural, model). The composite reliability assesses the internal consistency of the latent variables, and according to the procedure suggested by Nunnally (1978), our instruments demonstrate a strong to moderate level of reliability. Another assessment of reliability, proposed by Fornell and Larcker (1981), is the average variance extracted. This measure also showed strong to moderate levels of reliability. Therefore, we concluded that all latent variable measures exhibited sufficient levels of internal reliability. Convergent validity of the manifest variables was assessed by analyzing the factor loading scores for each; all individual loadings were in the range of 0.4–0.95. Hair, Anderson, Tatham, and Black (1992) suggest that individual item loadings greater than 0.3 are significant. Therefore, all items demonstrated convergent validity.
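As a sketch of these two reliability checks, composite reliability and average variance extracted can be computed directly from each construct's standardized item loadings (formulas per Fornell & Larcker, 1981; the loadings below are illustrative, not this study's data):

```python
import numpy as np

def composite_reliability(loadings):
    # Fornell-Larcker composite reliability: (sum of loadings)^2 divided
    # by itself plus the summed indicator error variances (1 - loading^2).
    lam = np.asarray(loadings, dtype=float)
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1.0 - lam ** 2).sum())

def average_variance_extracted(loadings):
    # AVE: mean squared standardized loading for the construct's items.
    lam = np.asarray(loadings, dtype=float)
    return np.mean(lam ** 2)

peou = [0.82, 0.88, 0.90, 0.82]  # illustrative four-item construct
print(f"CR  = {composite_reliability(peou):.2f}")        # ~0.92
print(f"AVE = {average_variance_extracted(peou):.2f}")   # ~0.73
```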

Since this study employed a single data collection method, we tested for common method variance. Podsakoff and Organ (1986) suggest conducting an unconstrained, single-factor analysis for models that intend to measure multiple constructs; dominance of one factor would suggest that items were related due to common method variance. We conducted an unconstrained, single-factor analysis: the total variance explained by the 11 retained factors (using a minimum eigenvalue of one) was 75%, and the first factor accounted for 26% of the variance. Since multiple factors were retained and the first factor did not dominate the variance, common method variance was not detected.
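A sketch of that check, using principal components as the unconstrained extraction (scikit-learn's PCA stands in here for whatever factor analysis routine was actually used, which the paper does not name; the data matrix is simulated):

```python
import numpy as np
from sklearn.decomposition import PCA

def harman_single_factor_check(X):
    # Standardize items, extract unrotated components, retain those with
    # eigenvalue > 1, and report the first component's variance share.
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    pca = PCA().fit(Z)
    retained = pca.explained_variance_ > 1.0
    return (retained.sum(),
            pca.explained_variance_ratio_[0],
            pca.explained_variance_ratio_[retained].sum())

rng = np.random.default_rng(0)
X = rng.normal(size=(108, 41))      # 108 respondents x 41 survey items
k, first, total = harman_single_factor_check(X)
print(f"{k} factors retained; first explains {first:.0%}; "
      f"retained factors explain {total:.0%}")
```

Common method variance would be suspected if a single retained factor accounted for the bulk of the variance.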

The discriminant validity of the analysis was assessed by comparing the intercorrelations among the latent variables with the correlations the measures have with their respective constructs (Fornell, Tellis, & Zinkhan, 1982). Discriminant validity is demonstrated when the measures are more strongly related to their own construct than to the other latent variables in the model. Discriminant validity was demonstrated for all constructs except perceived ease of use; this construct loaded more highly with the anxiety construct than with itself. However, the violation was mild, and according to Chin (1998) an item should be dropped due to insufficient loading only if it is determined that the violation is a result of method variance or some other concept. Several other researchers have found similar discriminant validity issues (e.g., Igbaria & Iivari, 1995; Staples et al., 1998); in situations where the violations were minor and the internal reliability measures were adequate, those authors disregarded the violations. As a result, since we were unable to detect common method variance, since our internal reliability measures were adequate, and since the cross-loading was mild, all items were retained.
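A sketch of this comparison: correlate every item with every latent variable score and flag items that correlate more strongly with a foreign construct than with their own (the pandas usage below is a hypothetical illustration, not the study's exact procedure):

```python
import pandas as pd

def cross_loadings(items: pd.DataFrame, scores: pd.DataFrame) -> pd.DataFrame:
    # Rows: items; columns: constructs; cells: item-construct correlations.
    return pd.DataFrame({c: items.corrwith(scores[c]) for c in scores.columns})

def discriminant_violations(loads: pd.DataFrame, own: dict) -> list:
    # 'own' maps each item name to its intended construct; an item violates
    # discriminant validity if its largest loading is on another construct.
    return [item for item, construct in own.items()
            if loads.loc[item].idxmax() != construct]
```

An empty violation list corresponds to full discriminant validity; here, the PEOU items would appear in the list because of their cross-loading on anxiety.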

The PLS procedure simultaneously calculates factor loadings for the items and the path coefficients for the variables. The item loadings are used to assess the strength and reliability of the measures, as previously explained, and the path coefficient values and loadings are used to evaluate the theoretical relationships posed in the conceptual model (Igbaria et al., 1997). The exogenous variable path coefficients represent the total effect that the variable has on the endogenous variable. This total effect consists of a direct and an indirect effect on the endogenous variable. "An indirect effect represents those effects interpreted by the intervening variables; it is the product of the path coefficients along an indirect route from cause to effect via tracing arrows in the headed direction only. When more than one indirect path exists, the total indirect effect is their sum" (Igbaria et al., 1995, p. 99).
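A sketch of this decomposition: with path coefficients stored on the directed edges of the model, each route's effect is the product of its coefficients, and effects sum across routes (the graph and numbers below are illustrative, not this study's estimates):

```python
def total_effect(paths, source, target):
    # Sum of products of path coefficients over all directed routes
    # from source to target; the direct route, if any, is included.
    def routes(node, weight):
        if node == target:
            yield weight
        for succ, coef in paths.get(node, {}).items():
            yield from routes(succ, weight * coef)
    return sum(routes(source, 1.0))

# Illustrative path coefficients over a small acyclic model.
paths = {
    "experience": {"efficacy": 0.30, "usage": 0.34},
    "efficacy":   {"peou": 0.11, "peruse": 0.07},
    "peou":       {"peruse": 0.28},
    "peruse":     {"usage": 0.10},
}
print(f"total effect of experience on usage: "
      f"{total_effect(paths, 'experience', 'usage'):.3f}")
```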

While the parsimonious data assumptions of PLS provide methodological conveniences, they also limit the ability to assess the statistical significance of the conceptual model's path coefficients (Meznar & Nigh, 1995). As a result, the nonparametric jackknifing technique (Fenwick, 1979) was used in conjunction with t-statistics to determine the statistical significance of the path coefficients. This practice is consistent with prior studies using PLS (e.g., Igbaria et al., 1995; Igbaria et al., 1997; Meznar & Nigh, 1995; Staples et al., 1998). The jackknife method repeatedly analyzes the statistic in question using a resampling methodology. Rather than making a priori variability assumptions (as is done in traditional parametric t- and z-tests), jackknifing uses large numbers of computations to explore the empirical variability of a statistic. To test the hypotheses, t-statistics were calculated using the PLS path coefficients (direct and indirect effects) and the jackknifed path coefficient bias estimates.
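A sketch of the leave-one-out jackknife used to attach a t-statistic to a path coefficient (the estimator below stands in for a full PLS re-estimation; for brevity it is a simple regression slope on simulated data):

```python
import numpy as np

def jackknife_t(data, estimator):
    # Leave-one-out jackknife: pseudovalues, bias-corrected estimate,
    # standard error, and the resulting t-statistic.
    n = len(data)
    theta_full = estimator(data)
    loo = np.array([estimator(np.delete(data, i, axis=0)) for i in range(n)])
    pseudo = n * theta_full - (n - 1) * loo
    est = pseudo.mean()
    se = pseudo.std(ddof=1) / np.sqrt(n)
    return est, se, est / se

rng = np.random.default_rng(1)
x = rng.normal(size=108)                      # matches the sample size here
y = 0.4 * x + rng.normal(size=108)
data = np.column_stack([x, y])
slope = lambda d: np.polyfit(d[:, 0], d[:, 1], 1)[0]
print("estimate %.3f, se %.3f, t %.2f" % jackknife_t(data, slope))
```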

6. Results of the hypothesis testing

This study investigates the following six exogenous variables:

- prior experience,
- others' use,
- anxiety,
- system quality,
- task structure, and
- organizational support,

and their influence on the following four endogenous variables:

- system usage,
- perceived ease of use,
- perceived usefulness, and
- computer-efficacy.

See Fig. 4 for the path diagram that depicts the structural equations associated with this study.

[Fig. 4. Path diagram for structural equations: each latent variable in the research model is measured by its own block of questionnaire items (Q1–Q44).]

The PLS procedure is a distribution-free procedure that separates the overall model into two sub-models. The measurement model assesses how well the manifest variables describe the exogenous latent variables. The remaining model is the structural model, which assesses how well the exogenous latent variables describe the endogenous latent variables. These models are first evaluated independently, then together as a total fit of the overall conceptual model. The structural model is depicted as a path diagram, with the paths representing relationships between the exogenous and endogenous variables. Fig. 5 depicts the path diagram with the manifest variable loading scores and the structural model standardized regression coefficients for the instrument.

[Fig. 5. Path diagram with manifest variable loading scores and structural model standardized regression coefficients.]

The results of the multivariate test of the structural model provide the standardized regression coefficients (i.e., the path loadings) for the conceptual model. The path loadings represent the direct effect that the exogenous variables have on the endogenous variables. An indirect effect exists when an exogenous variable influences an intervening variable, which in turn influences the endogenous variable. An indirect effect is calculated as the product of the path loadings for all coefficients on the indirect path. If multiple indirect paths exist, the total indirect effect is the sum of the individual indirect effects. The total effect is the sum of the direct effect and the total indirect effect. The statistical significance of the path loadings was determined using t-statistics and the nonparametric jackknifing procedure (Alwin & Hauser, 1975). Complete results of the hypothesis testing are shown in Table 2, according to the type of statistical test(s) conducted.

6.1. Communality coefficient

The structural model deals with latent variables that are not directly observable; the measurement model provides the means by which these latent variables can be observed. Specifically, the measurement model represents the relationships and loadings of the questionnaire items onto their respective latent variables. As such, the assessment of the measurement model focuses on the degree to which the indicator variables load with their respective constructs. Individual manifest variable loadings are used to assess latent variable reliabilities and discriminant validity. However, when these loadings are analyzed in total, they provide an assessment of the overall fit of the measurement model. The communality coefficient provides an overall assessment of how well the manifest variables describe their respective latent variables. Falk and Miller (1992) suggest that values less than 0.30 are too low to be acceptable. For this study, the communality coefficient was 0.58. Since this measure exceeded Falk and Miller's recommendation, we conclude that the measurement model provides adequate reliability.
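On a common reading of Falk and Miller (1992), the communality coefficient is the mean squared standardized loading taken over all p manifest variables, with λᵢ the loading of manifest variable i on its construct (a sketch; their exact weighting should be checked against the original):

```latex
\bar{h}^{2} \;=\; \frac{1}{p}\sum_{i=1}^{p}\lambda_{i}^{2}
```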

6.2. Root mean square of covariance

Since the PLS procedure simultaneously determines the structural and measurement model loadings, we seek to assess the reliability of the combined model. Falk and Miller (1992) suggest using the root mean square of the covariance between the manifest variable residuals and the latent variable residuals to evaluate the overall fit of the conceptual model and its underlying measurement model. This statistic, RMS Cov(E,U), represents the correlation between the variance of the manifest and latent variables that is not accounted for by the model relationships. An RMS Cov(E,U) equal to zero would indicate that the model perfectly described the relationships between the manifest and latent variables. Falk and Miller (1992) suggest that RMS Cov(E,U) values of 0.02 represent superior models and values above 0.20 are evidence of an inadequate model. The RMS Cov(E,U) value for this study was 0.07, indicating that the data adequately support the model.
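Formally, one plausible reading of this index (a sketch; Falk and Miller's exact definition should be consulted), with eᵢ the residuals of the m manifest variables and uⱼ the residuals of the k latent variables, is:

```latex
\mathrm{RMS\ Cov}(E,U) \;=\; \sqrt{\frac{1}{mk}\sum_{i=1}^{m}\sum_{j=1}^{k}\operatorname{Cov}(e_{i},u_{j})^{2}}
```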

Table 2
Results of hypothesis testing (standardized regression coefficients)

H1 – Task structure will have a positive, direct relationship with:
(a) computer-efficacy, 0.01, not significant;
(b) ease of use perceptions, 0.03, p = 0.01;
(c) usefulness perceptions, -0.14, p = 0.001*;
(d) system usage, 0.12, p = 0.001.

H2 – Prior experience will have a positive, direct relationship with:
(a) computer-efficacy, 0.30, p = 0.001;
(b) ease of use perceptions, 0.09, p = 0.001;
(c) usefulness perceptions, 0.50, p = 0.001;
(d) system usage, 0.34, p = 0.001.

H3 – Others' use will have a positive, direct relationship with:
(a) computer-efficacy, -0.03, p = 0.05*;
(b) ease of use perceptions, 0.02, not significant;
(c) usefulness perceptions, 0.12, p = 0.001;
(d) system usage, 0.38, p = 0.001.

H4 – Organizational support will have a positive, direct relationship with:
(a) computer-efficacy, -0.04, p = 0.01*;
(b) ease of use perceptions, -0.07, p = 0.001*;
(c) usefulness perceptions, 0.08, p = 0.001;
(d) system usage, 0.11, p = 0.001.

H5 – Anxiety will have a negative, direct relationship with:
(a) computer-efficacy, -0.30, p = 0.001;
(b) ease of use perceptions, -0.63, p = 0.001;
(c) usefulness perceptions, -0.26, p = 0.001;
(d) system usage, -0.11, p = 0.001.

H6 – System quality will have a positive, direct relationship with:
(a) computer-efficacy, 0.10, p = 0.001;
(b) ease of use perceptions, 0.15, p = 0.001;
(c) usefulness perceptions, 0.15, p = 0.001;
(d) system usage, -0.26, p = 0.001*.

H7 – Computer-efficacy will have a positive, direct relationship with:
(a) ease of use perceptions, 0.11, p = 0.001;
(b) usefulness perceptions, 0.07, p = 0.001;
(c) system usage, 0.02, not significant.

H8 – Perceived ease of use evaluations will have a positive, direct relationship with:
(a) usefulness perceptions, 0.28, p = 0.001;
(b) system usage, -0.03, not significant.

H9 – Perceived usefulness evaluations will have a positive, direct relationship with system usage: 0.10, p = 0.001.

* Significant, but the effect was in the direction opposite from what was expected.

As shown in Table 2, we found substantial support that contextual variables do indeed directly affect IT acceptance. Specifically, system usage was directly and significantly affected by task structure, prior experience, others' use, organizational support, anxiety, and system quality.

However, a few variables impacted the endogenous variables in directions opposite from what was expected. These variables are listed below along with possible reasons for the opposite effects:

High task structure was found to reduce system usefulness perceptions. Perhaps the respondents felt that computers are more useful for less structured tasks, since less structured tasks are more difficult to solve.

Others' use was found to lower computer-efficacy. Perhaps the frequency of peer observations has an inverse relationship with one's confidence in using a system; that is, if a person is not confident in his or her ability to use a system, he or she may spend more time observing others using it before trying it.

Similarly, organizational support was found to lower computer-efficacy and perceived ease of use assessments. Respondents may believe that organizations provide more support for those systems that are more difficult to use.

Lastly, high system quality had a negative effect on system usage. Since respondents indicated that high-quality systems improve efficacy, ease of use, and usefulness, one would expect that they would use a high-quality system more often. Since this result was not found, perhaps the respondents felt that the systems they use most often have poor quality.

7. Discussion of results

With the hope of maintaining or improving competitiveness, organizations are investing significantly in information technology. Unfortunately, these investments do not guarantee that people will actually use the systems. In fact, studies show that people sometimes choose not to use potentially beneficial systems. Beyond representing a large lost investment, the unrealized potential of a system can be monumental. In extreme cases, unused information systems may impact the viability of the organization, if the particular information system is deemed a necessity. These factors help explain why information technology acceptance is one of the top concerns for IS managers. Unfortunately, while IT acceptance has attracted the attention of researchers, the topic continues to be one of the most challenging and least understood areas of MIS research.

The TAM is one of the most powerful and influential IT acceptance models. However, researchers suggest that the model may be too general. In addition, investigators suggest that the TAM does not fully consider or appreciate the impacts of contextual variables. Self-efficacy in particular has been investigated as a potential extension to the TAM. However, similar to the generalization concerns surrounding the TAM, and based on a review of social cognitive theory, the MIS operationalization of self-efficacy (computer-efficacy) may be overly generalized. This concern is in keeping with Bandura's (1986) suggestion that efficacy assessments must be particularized for a specific situation. Furthermore, studies have shown that generalized efficacy measures bear little or no relation to particularized measures.

The objective of this study was to add contextual variables to the TAM. The independent variables considered in this study were chosen based on the antecedents of self-efficacy and social cognitive theory. Specifically, Bandura (1986) suggests that self-efficacy is formed through enactive experiences, vicarious experiences, social persuasion, and affect. The operationalization of these constructs was based on prior MIS studies: enactive experience was assessed by measuring prior experience, vicarious experience by measuring others' use, social persuasion by measuring organizational support, and affect by measuring anxiety. To capture the environmental characteristics suggested by social cognitive theory, we measured task structure and system quality.

The mediating variables and the dependent variable considered in this study were chosen based on the technology acceptance model with a computer-efficacy extension. As a result, three mediators were measured, namely computer-efficacy, perceived ease of use, and perceived usefulness. Consistent with other TAM studies, a single dependent variable, system usage, was measured. The operationalization of these constructs was based on prior information systems studies.

A field research design was employed, with data gathered by means of mailed questionnaires. The mailing list was purchased from a market research organization, Applied Computer Research, Incorporated (Phoenix, AZ). The hypotheses generated from the conceptual model were tested using partial least squares. The tests of the hypotheses show that five of the contextual variables (computer anxiety, prior experience, others' use, organizational support, and system quality) significantly affect computer-efficacy; the remaining contextual variable, task structure, did not. Furthermore, the contextual variables directly affected system usage.

These findings suggest that while the TAM is valid, substantially stronger results may be obtained if researchers particularize their research instruments. Furthermore, in support of social cognitive theory, the influence of contextual variables should not be overlooked or trivialized.

8. Research contribution

Notwithstanding the merits of the TAM, the findings of this study support providing greater specificity when analyzing computing behaviors. Accordingly, we proposed that particularization of the TAM would provide better overall results. These findings are consistent with social cognitive theory and the theory of reasoned action.

Prior studies report that the explained variance of IT usage has been somewhat inconsistent and not necessarily strong. The explained variance of system usage in our study (28%) was consistent with the findings of prior studies.

Another important finding of this study is the significance of the contextual variables. We found that contextual variables significantly impacted the mediating variables as well as the dependent variable, system usage. While these findings are consistent with social cognitive theory, they conflict with the TAM and the TRA, which posit that contextual variables influence behavior only indirectly, through mediating variables. As a result, the findings of this study suggest that researchers should not indiscriminately disregard or trivialize the role of contextual variables as they relate to IT behaviors. As suggested by social cognitive theory, it appears that one's behavior is a function of his or her characteristics and experiences as they relate to the specific situation.

9. Research limitations

Although this study found many significant relationships between the latent variables, causality should not be implied (Falk & Miller, 1992). The findings of this study are appropriate for predictions and/or confirmation of theoretical constructs. The results therefore justify the conjecture that self-efficacy theory is applicable to MIS research. Furthermore, it is reasonable to predict, based on social cognitive theory and these empirical results, that particularized instruments will provide more insights into computing attitudes, perceptions, and behaviors. However, it is not appropriate or justified to suggest that the presence or absence of one construct will cause a change in another construct.

Another limitation of the methodology employed in this study is that, due to their nature, we were unable to manipulate the independent variables. Since experimental manipulation was not possible, we were limited in the extent to which we could control the study (Kerlinger, 1986). As a result, we can conclude only with less certainty that a true relationship exists between variables, because of our inability to rule out the possibility that an observed relationship was the result of one or more unmeasured variables. Additionally, the present study used a single data collection procedure. If multiple data collection methods had been employed, the overall convergent and discriminant validities could have been improved by using a multitrait-multimethod analysis (Campbell & Fiske, 1959).

Self-reported instruments were used to measure the research variables. As a result, the items may suffer from several types of response bias, such as the halo effect, when responses are influenced by the respondent's overall impression of the object (e.g., "I like computers, so I will respond positively to all questions"), and the error of central tendency, when the respondent avoids the extremes of the response scale (Kerlinger, 1986).

In regard to the data analysis method, PLS requires multiple indicators for each latent variable (Chin, Marcolin, & Newsted, 1996; Falk & Miller, 1992; Lohmoeller, 1984). While there is no empirical evidence to determine the ideal number of indicators, two may be too few. Since the PLS procedure gives preference to fitting the structural model at the expense of the measurement model (Falk & Miller, 1992), a larger number of manifest variables allows the procedure to account for a greater portion of the structural model's variance. In the present study, two latent variables had fewer than four manifest variables, namely prior experience and system usage. As a result, additional indicators might have improved the data analysis.

10. Suggestions for future research

Beyond addressing the limitations of this study, there are several research areas and/or procedures that would confirm or enhance its findings.

While the nature of this research does not allow for manipulation of the independent variables, varying the research design can increase control. For example, multiple instruments can be administered to the same people. By matching the data sets, this design can significantly improve control by factoring out the influences of several significant independent variables and other unmeasured extraneous variables.

Additionally, the realm of potentially significant contextual variables is large. Researchers can investigate a host of other contextual variables such as math anxiety, math aptitude, reading aptitude, organizational structure, management style, prior task training, and prior training on the specific technology. While contextual variable selection should be guided by theory, in light of the findings of this study, one could also consider the influences of the specific situation in selecting appropriate and potentially significant variables.

Several constructs could be defined at a greater level of detail. For instance, the prior experience and others' use constructs were limited to the use of a computer. Future studies could measure experience performing a specific task and/or experience using a specific application.

Similarly, in the present study we asked the respondents to describe the types of tasks they typically addressed when using information technology. Future studies could describe and investigate the role of a particular task. Based on the findings of this study, we suspect that these further particularizations would improve the overall results.

Finally, in consideration of the findings of this study, the roles of the various contextual variables could differ by application and situation. As a result, the search for the optimal set of contextual and mediating variables may be fruitless; it could all depend on the unique mix of individuals, tasks, applications, and organizations. However, finding a theory that would help one choose and weigh contextual variables based on the particulars of a situation would provide substantial value and insight for IS researchers and IS practice.

References

Adams, D. A., Nelson, R. R., & Todd, P. A. (1992). Perceived usefulness, ease of use, and usage of information technology: A replication. MIS Quarterly, 16(2), 227–247.
Alwin, D. F., & Hauser, R. M. (1975). The decomposition of effects in path analysis. American Sociological Review, 40, 37–47.
Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Englewood Cliffs, NJ: Prentice-Hall.
Bandura, A. (1995). Exercise of personal and collective efficacy in changing societies. In A. Bandura (Ed.), Self-efficacy in changing societies (pp. 1–45). New York, NY: Cambridge University Press.
Bandura, A. (1997). Self-efficacy: The exercise of control. New York, NY: W.H. Freeman & Company.
Blair, E., & Burton, S. (1987). Cognitive processes used by survey respondents to answer behavioral frequency questions. Journal of Consumer Research, 14, 280–288.
Burkhardt, M. E., & Brass, D. J. (1990). Changing patterns or patterns of change: The effects of a change in technology on social network structure and power. Administrative Science Quarterly, 35(1), 104–127.
Campbell, D. T., & Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56, 81–105.
Chau, P. Y. K. (1996). An empirical assessment of a modified technology acceptance model. Journal of Management Information Systems, 13(2), 185–204.
Cheney, P. H., & Dickson, G. B. (1982). Organizational characteristics and information systems success: An exploratory investigation. Academy of Management Journal, 25(1), 170–184.
Chin, W. W. (1998). Issues and opinion on structural equation modeling. MIS Quarterly, 22(1), 7–16.
Chin, W. W., Marcolin, B. L., & Newsted, P. R. (1996). A partial least squares latent variable modeling approach for measuring interaction effects: Results from a Monte Carlo simulation study and voice mail emotion/adoption study. In Proceedings of the 17th international conference on information systems (pp. 21–41). Cleveland, OH.
Compeau, D. R. (1992). Individual reactions to computing technology: A social cognitive theory perspective. Doctoral dissertation, The University of Western Ontario.
Compeau, D. R., & Higgins, C. A. (1995). Computer self-efficacy: Development of a measure and initial test. MIS Quarterly, 19(2), 189–211.
Davis, F. D. (1986). A technology acceptance model for empirically testing new end-user information systems: Theory and results. Doctoral dissertation, MIT Sloan School of Management.
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340.
Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of computer technology: A comparison of two theoretical models. Management Science, 35(8), 982–1003.
DeLone, W. H. (1988). Determinants of success for computer usage in small business. MIS Quarterly, 12(1), 51–61.
Falk, R. F., & Miller, N. B. (1992). A primer for soft modeling. Akron, OH: The University of Akron.
Fenech, T. (1998). Using perceived ease of use and perceived usefulness to predict acceptance of the World Wide Web. Computer Networks & ISDN Systems, 30(1–7), 629–630.
Fenwick, I. (1979). Techniques in market measurement: The jackknife. Journal of Marketing Research, 16(3), 410–414.
Fishbein, M., & Ajzen, I. (1975). Belief, attitude, intention and behavior: An introduction to theory and research. Reading, MA: Addison-Wesley.
Fornell, C., & Larcker, D. F. (1981). Structural equation models with unobservable variables and measurement error: Algebra and statistics. Journal of Marketing Research, 18(3), 382–387.
Fornell, C. R., Tellis, G. L., & Zinkhan, G. M. (1982). Validity assessment: A structural equation approach using partial least squares. In American Marketing Association educators' proceedings (pp. 405–409). Chicago, IL.
Gist, M. E., & Mitchell, T. R. (1992). Self-efficacy: A theoretical analysis of its determinants and malleability. Academy of Management Review, 17(2), 183–211.
Gist, M. E., Schwoerer, C., & Rosen, B. (1989). Effects of alternative training methods on self-efficacy and performance in computer software training. Journal of Applied Psychology, 74, 884–891.
Goodhue, D. L. (1995). Understanding user evaluations of information systems. Management Science, 41(12), 1827–1844.
Goodhue, D. L., & Thompson, R. L. (1995). Task-technology fit and individual performance. MIS Quarterly, 19(2), 213–236.
Hair, J. F., Anderson, R. E., Tatham, R. L., & Black, W. C. (1992). Multivariate data analysis with readings (3rd ed.). New York, NY: Macmillan.
Hasan, B. (2003). The influence of specific computer experiences on computer self-efficacy beliefs. Computers in Human Behavior, 19(4), 443–450.
Hendrickson, A. R., Glorfeld, K., & Cronan, T. P. (1994). On the repeated test-retest reliability of the end-user computing satisfaction instrument: A comment. Decision Sciences, 25(4), 655–667.
Hill, T., Smith, N. D., & Mann, M. F. (1987). Role of efficacy expectations in predicting the decision to use advanced technologies: The case of computers. Journal of Applied Psychology, 72(2), 307–313.
Hubona, G. S., & Cheney, P. H. (1994). System effectiveness of knowledge-based technology: The relationship of user performance and attitudinal measures. In Proceedings of the 27th annual Hawaii international conference on system sciences (pp. 532–541). Hawaii.
Igbaria, M. (1990). End-user computing effectiveness: A structural equation model. Omega International Journal of Management Science, 18(6), 637–652.
Igbaria, M. (1998). Personal communication.
Igbaria, M., & Chakrabarti, A. (1990). Computer anxiety and attitudes towards microcomputer use. Behavior and Information Technology, 9(3), 229–241.
Igbaria, M., & Iivari, J. (1995). The effects of self-efficacy on computer usage. Omega International Journal of Management Science, 23(6), 587–605.
Igbaria, M., Guimaraes, T., & Davis, G. B. (1995). Testing the determinants of microcomputer usage via a structural model. Journal of Management Information Systems, 11(4), 87–114.
Igbaria, M., Parasuraman, S., & Baroudi, J. J. (1996). A motivational model of microcomputer usage. Journal of Management Information Systems, 13(1), 127–143.
Igbaria, M., Parasuraman, S., & Pavri, F. (1990). A path analytic study of the determinants of microcomputer usage. Journal of Management Systems, 2(2), 1–14.
Igbaria, M., Pavri, F., & Huff, S. (1989). Microcomputer applications: An empirical look at usage. Information and Management, 16(4), 187–196.
Igbaria, M., Zinatelli, N., Cragg, P., & Cavaye, A. L. M. (1997). Personal computing acceptance factors in small firms: A structural equation model. MIS Quarterly, 21(3), 279–305.
Jackson, C. M., Chow, S., & Leitch, R. A. (1997). Toward an understanding of the behavioral intention to use an information system. Decision Sciences, 28(2), 357–389.
Jarvenpaa, S. L. (1989). The effect of task demands and graphical format on information processing strategies. Management Science, 35(3), 285–303.
Kerlinger, F. N. (1986). Foundations of behavioral research (3rd ed.). New York, NY: Harcourt Brace Jovanovich College Publishers.
Langford, M., & Reeves, T. E. (1998). The relationships between computer self-efficacy and personal characteristics of the beginning information systems student. Journal of Computer Information Systems, 38(4), 41–45.
Lee, D. S. (1986). Usage patterns and sources of assistance for personal computer users. MIS Quarterly, 10(4), 313–325.
Lohmoeller, J. B. (1984). LVPLS 1.6 program manual: Latent variable path analysis with partial least-squares estimation. Cologne, Germany: Universitaet zu Koeln, Zentralarchiv fuer Empirische Sozialforschung.
Lucas, H. C. (1975). Performance and the use of an information system. Management Science, 21(8), 908–919.
Lucas, H. C. (1978). Empirical evidence for a descriptive model of implementation. MIS Quarterly, 2(2), 27–41.
Markus, H., & Nurius, P. (1986). Possible selves. American Psychologist, 41, 954–969.
Martocchio, J. J., & Dulebohn, J. (1994). Performance feedback effects in training: The role of perceived controllability. Personnel Psychology, 47(2), 357–373.
Mathieson, K. (1991). Predicting user intentions: Comparing the technology acceptance model with the theory of planned behavior. Information Systems Research, 2(3), 173–191.
Meznar, M. B., & Nigh, D. (1995). Buffer or bridge? Environmental and organizational determinants of public affairs activities in American firms. Academy of Management Journal, 38(4), 975–997.
Miura, I. T. (1987). The relationship of computer self-efficacy expectations to computer interest and course enrollment in college. Sex Roles, 16, 303–311.
Moore, G. C., & Benbasat, I. (1991). Development of an instrument to measure the perceptions of adopting an information technology innovation. Information Systems Research, 2(3), 192–222.
Nunnally, J. (1978). Psychometric theory (2nd ed.). New York, NY: McGraw-Hill.
Podsakoff, P. M., & Organ, D. W. (1986). Self-reports in organizational research: Problems and prospects. Journal of Management, 12(4), 531–544.
Raub, A. C. (1981). Correlates of computer anxiety in college students. Doctoral dissertation, University of Pennsylvania, Philadelphia, PA.
Saga, V., & Zmud, R. (1994). The nature and determinants of IT acceptance, routinization, and infusion. IFIP Transactions A (Diffusion, Transfer, and Implementation of Information Technology), 45, 67–86.
Segars, A. H., & Grover, V. (1994). Re-examining perceived ease of use and usefulness: A confirmatory factor analysis. MIS Quarterly, 17(4), 517–525.
Staples, D. S., Hulland, J. S., & Higgins, C. A. (1998). A self-efficacy theory explanation for the management of remote workers in virtual organizations. Journal of Computer-Mediated Communication, 3(4), 1–36.
Taylor, S., & Todd, P. A. (1995). Assessing IT usage: The role of prior experience. MIS Quarterly, 19(4), 561–570.
Thompson, R. L., Higgins, C. A., & Howell, J. M. (1991). Personal computing: Toward a conceptual model of utilization. MIS Quarterly, 15(1), 125–143.
Vandenbosch, B., & Higgins, C. A. (1995). Executive support systems and learning: A model and empirical test. Journal of Management Information Systems, 12(2), 99–130.
Venkatesh, V., & Davis, F. D. (1996). A model of the antecedents of perceived ease of use: Development and test. Decision Sciences, 27(3), 451–481.
Webster, J., & Martocchio, J. J. (1992). Microcomputer playfulness: Development of a measure with workplace implications. MIS Quarterly, 16(2), 201–226.