ORGANIZATIONAL INTERVENTION EFFECTIVENESS ON END USER
INFORMATION TECHNOLOGY ACCEPTANCE
by
Brian D. Otte
JOHN C. HANNON, DBA, Faculty Mentor and Chair
HENRY J. LINDBORG, PhD, Committee Member
BERNARD J. SHARUM, PhD, Committee Member
Raja K. Iyer, PhD, Dean, School of Business and Technology
A Dissertation Presented in Partial Fulfillment
of the Requirements for the Degree
Doctor of Philosophy
Capella University
Add Month Year (of conference approval)
© Brian D. Otte, 2010
Abstract
Information technology (IT) enables productivity, defines part of the culture for a social
group, and enables both IT strategy and business strategy. IT end users who choose not to
accept and utilize the available IT within an organization fail to take advantage of the
productivity that technology offers. Businesses have multiple organizational interventions
that address end user IT acceptance. Multiple field experiments will be conducted to
measure end user perception changes in IT ease of use and IT usefulness enabled through
training and incentivizing IT system utilization. A survey that contains scales on
perceived ease of use, perceived usefulness, and communication perceptions will capture
the training and incentive effectiveness. Training and incentive effectiveness at adjusting
end user IT acceptance will be determined by comparing pretreatment and posttreatment
survey results. Organizations may look to the results of this research effort when
selecting specific preimplementation or postimplementation organizational interventions.
Measuring the effectiveness of two specific interventions in a practitioner environment
lays the groundwork for additional research surrounding organizational intervention
effectiveness. The intervention effectiveness informs the practitioner concerning
intervention selection and timing when implementing IT within a business environment.
Dedication
To be added.
Acknowledgments
I would like to thank Barb Elwert for detailed editorial suggestions.
Table of Contents
Acknowledgments
List of Tables
List of Figures
CHAPTER 1. INTRODUCTION
Introduction to the Problem
Background of the Study
Statement of the Problem
Purpose of the Study
Rationale
Research Questions and Hypotheses
Significance of the Study
Definitions of Terms
Assumptions and Limitations
Nature of the Study
Summary
Organization of the Remainder of the Study
CHAPTER 2. LITERATURE REVIEW
Introduction
Business Strategy
Institutional Culture
Interventions and IT System Implementation
Preimplementation Interventions
Postimplementation Interventions
End User IT Acceptance
Summary
CHAPTER 3. METHODOLOGY
Introduction
Research Design
Sample
Intervention Effectiveness
Access to Site
Setting
Instrumentation and Measures
Data Collection
Organizational Intervention Training: Phase 1
Organizational Intervention Incentives: Phase 2
Data Analysis
Validity and Reliability
Limitations
Ethical Considerations
REFERENCES
APPENDIX A. INTERVENTIONS FOR PRE-IT SYSTEM IMPLEMENTATION
APPENDIX B. INTERVENTIONS FOR POST-IT SYSTEM IMPLEMENTATION
APPENDIX C. TECHNOLOGY ACCEPTANCE MODEL
APPENDIX D. TECHNOLOGY ACCEPTANCE MODEL 3
APPENDIX E. SURVEY INSTRUMENT
APPENDIX F. SURVEY INSTRUMENT SCALES
List of Tables
Table 1. Perceived Usefulness Determinants Presented in TAM-3
Table 2. Perceived Ease of Use Determinants Presented in TAM-3
Table 3. Facility Name and Employee E-Mail Status
Table 4. Phase 1: Training
Table 5. Phase 2: Incentives
Table 6. Survey Instrument Measurement Scales
Table A1. Determinants of Perceived Usefulness
Table A2. Determinants of Perceived Ease of Use
Table B1. Determinants of Perceived Usefulness
Table B2. Determinants of Perceived Ease of Use
Table F1. Perceived Usefulness Scales
Table F2. Perceived Ease of Use Scales
List of Figures
Figure 1. SDLC phases with implementation phase highlighted.
Figure 2. Conceptual framework used during literature review.
Figure 3. Strategy relationships enabled through organizational interventions.
Figure 4. Project phases, facility sequencing, and experimental framework.
Figure C1. Technology acceptance model.
Figure D1. TAM-3.
CHAPTER 1. INTRODUCTION
Introduction to the Problem
Businesses worldwide increased their percentage of revenue spent on information
technology (IT) from 3.5% in 2000 to 3.54% in 2001 and 3.57% in 2002 (David, Schuff,
& St. Louis, 2002). The increasing investments in IT have fostered short-term
productivity growth by introducing IT and long-term productivity growth through
strategic organizational change (Brynjolfsson & Hitt, 2003). As Lucas (1999) pointed
out, although IT provides tools to increase data management productivity and efficiency,
productivity can be realized only when the deployed IT actually is used. Increases in
efficiency and productivity expected from deploying an IT system cannot be realized if
end users choose not to use the IT system. As Mathieson (1991) asserted, voluntary IT
systems are particularly prone to this problem. Organizations that implement IT, even if
the user community views the IT system as effective, will not realize full IT system
benefits if the user community does not use the IT system (Bierstaker, Brody, & Pacini,
2006).
An unaccepted or underused IT system presents organizational leadership with the
problem of how to modify end user community perceptions to foster IT system
acceptance. Schein (2004) asserted that an organization might use the IT
implementation process itself as a mechanism to enable cultural change.
Organizational interventions during IT system implementation
enable efficiency and increase productivity through IT system use (Lucas, 1999) while
enabling cultural changes within the organization (Schein, 2004). Venkatesh and Bala
(2008) postulated that introducing organizational interventions designed to address
specific end user IT acceptance determinants (see Appendices A & B) can increase IT
system use. Increased end user IT acceptance facilitates an increase in productivity while
achieving beneficial cultural changes by fostering IT system use.
Hoffer, George, and Valacich (2008) described multiple implementation
methodologies (IMs) that organizations can employ to install and replace information
systems (ISs). One such commonly used IM is the systems development life cycle
(SDLC; see Figure 1). The SDLC is characterized by multiple stages in which an IT
system phases into or out of an organization (Hoffer et al., 2008). Ward and Peppard
(2002) explained that organizational leadership choices, computing system, culture, and
business process must align with an overarching business strategy. A business strategy
aligns actions with the desired results and requires that organizational leadership be
mindful of the organizational environment.
When organizational leadership analyze user expectations, project commitment,
and change commitment, they may determine that end user IT acceptance is less
than acceptable. Such organization-wide issues call for enterprise-wide activities that
address the identified shortcomings in end user IT acceptance throughout the IT system
implementation phase. Quantitative evidence of organizational intervention
effectiveness before and after the IT system implementation phase gives organizational
leadership insight when selecting the most suitable organizational interventions. Because
the chosen IM depends on the implementation purpose and function, organizations need
multiple SDLC-based organizational interventions to best align end user IT acceptance
(Venkatesh & Bala, 2008). This study will focus attention on the SDLC implementation
phase by empirically testing two organizational interventions.
Figure 1. SDLC phases with implementation phase highlighted.
Gulliksen et al. (2003); Martinko, Henry, and Zmud (1996); McAllister (2006);
and Schenk, Vitalari, and Davis (1998) attested to the importance of involving end
users early in the SDLC phases and keeping them involved throughout the SDLC
process. Early end user involvement in the SDLC phases enables
the system implementers and end users to form a common understanding surrounding IT
that controls implementation cost and time (McAllister, 2006). Other factors critical to
successful IT implementations are user expectations (Ginzberg, 1981a) and change
commitment (Ginzberg, 1981b). Training and incentives acting as organizational
interventions involve and inform the end user community of the importance of the IT
system while informing organizational leadership about end user IT acceptance.
Organizational interventions before system implementation inform the end user
community and prepare it for changes that the IT system will impart to the enterprise
(Venkatesh & Bala, 2008). Organizational interventions after implementation address
unanticipated results and offer the organization the opportunity to address end user
acceptance results that do not meet organizational leadership expectations.
Training is one frequently used intervention to introduce new information to a
community, but Kang and Santhanam (2003) indicated that more than preimplementation
training is required. For example, an organization that has completed the implementation
of an IT system and is faced with end user acceptance issues cannot offer
preimplementation training. Organizational leadership who determine that end user IT
acceptance is less than acceptable need enterprise-wide actions that address acceptance
throughout system implementation. Businesses need additional options beyond
interventions applicable only during specific SDLC phases that address end user IT
acceptance.
Beyond training alone, Venkatesh and Bala (2008) suggested multiple
interventions that adjust end user IT acceptance and increase system usage.
Though the topic has received little attention, Venkatesh and Bala called for research that
determines the effectiveness of and optimal timing for interventions. Two organizational
interventions that are the focus of this study are training the end user community about
the IT system and offering incentives to use the IT system. Training informs the end user
community about the IT system within the organizational context, and incentives require
actual IT system use over time. Both training and incentives involve the end user
community early and throughout the IT implementation phase. Introducing training and
utilizing incentives as well as measuring the resulting effect on end user IT acceptance
through perceived ease of use and perceived usefulness variables may address the void in
the literature.
Background of the Study
Royce (1998) and Ward and Peppard (2002) suggested that the metrics that gauge
IT implementation success include on-time system delivery and within-budget
installation. However, truly successful IT system implementations involve more than on-
time delivery and within-budget installation. End user IT acceptance and actual IT system
use are vital because obtaining results from an unused resource is difficult (Mathieson,
1991). Organizations invest in IT to increase productivity and enable within-business
change (Brynjolfsson & Hitt, 2003), yet the pressure to deliver IT projects on time and
installed within budget has led to such temporary measures as implementation
outsourcing and third-party contracting (Ward & Peppard, 2002). The basic premise for
this study is that truly assessing IT implementation success requires metrics beyond time
and budget, metrics that include end user IT acceptance.
Determining IT implementation success via time and budget metrics alone does
not consider other IT implementation benefits. IT implementations are approved and
funded based upon realizable increased organizational efficiency (Ward & Peppard,
2002); productivity gains (Brynjolfsson & Hitt, 2003); and their ability to enable positive
organizational/cultural changes (Cameron & Quinn, 2006; Schein, 2004). Taylor-
Cummings (1998) indicated that positive social change requires a positive socialization
environment that reinforces alignment between organizational and work group arrangements,
which then results in improved IT system integration and group effectiveness.
IT system implementations that concentrate on end user community acceptance
and use attend to the end user community and foster the reinforcing environment that
Taylor-Cummings (1998) advocated. Why? Businesses require actual IT system use to
achieve desired results from the IT installation, and as Mathieson (1991) contended,
obtaining results from an unused resource is difficult. Metrics that focus on time and
budget need to be augmented with metrics that focus the IT system implementation on
end user IT acceptance. Forging the path toward this area of inquiry included research
by Davis, Bagozzi, and Warshaw (1989) on end user IT acceptance that was extended by
Venkatesh and Bala (2008), who looked at the interaction of preimplementation and
postimplementation organizational interventions and end user IT acceptance. These
foundational studies pointed to increasing the role of end users in the IT implementation
process.
Sharma and Yetton (2003, 2007) suggested an increased role for end users during
the implementation process. Mathieson (1991) argued that the implementation team's IT
acceptance level is relevant to the IT acceptance levels of the broader institutional users,
signifying that if the system implementers themselves do not accept the system, the
organization can hardly expect the end users to accept it. If the organizational
leadership determine that IT system acceptance is unacceptable, they can implement
initiatives within an organization that influence IT system acceptance (Cohen, 2005). A
need for interventions might occur anywhere within the SDLC, including before and after
system implementation.
The extant social group culture informs administration of the levers and
implementation modes that align with the current organization (Cameron & Quinn,
2006), yet the intervention that an organization uses is its choice, derived from its own
metrics and informed by research. A business interested in productivity and efficiency
might look to research that details organizational intervention effectiveness within
specific phases of the SDLC to make its decisions.
Collaborative IT systems, in which system use is not mandated, benefit from
increases in end user acceptance and actual system use because, as more work is
performed through the IT system, organization-wide use intrinsically enforces business
rules and processes (Kang & Santhanam, 2003). End users who do not
accept or use a collaborative system add little interaction value with other end users; they
do not increase adoption rates, and they minimize any adaptation that is required (Kang
& Santhanam, 2003). Restated simply, the benefits of an IT system that is not used cannot
be realized (Mathieson, 1991). Lack of end user IT acceptance, especially in a collaborative
environment, means that the system becomes increasingly irrelevant to the users in the
performance of their jobs.
The technology acceptance model (TAM; see Appendix C) indicates that
perceived ease of use and perceived usefulness of technology predict actual system
utilization (Davis, 1986, 1989; Venkatesh & Morris, 2000). Venkatesh and Bala (2008)
extended the TAM with the TAM-3 by identifying and linking perceived ease of use and
perceived usefulness to specific organizational interventions (see Appendix D). However,
TAM-3 literature has provided little evidence on the effectiveness of organizational
interventions.
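
To make the TAM prediction concrete, consider a brief computational sketch. The following Python fragment is illustrative only; the respondents, item scores, and variable names are invented for this example rather than drawn from any study data. It shows the common operationalization in which each TAM construct is computed as the mean of several 7-point Likert items, with higher composite scores corresponding to greater predicted system use.

    # Illustrative sketch with invented data: each TAM construct is
    # operationalized as the mean of several 7-point Likert items.
    respondents = [
        {"peou_items": [6, 5, 6, 7], "pu_items": [5, 5, 6, 4], "used_system": True},
        {"peou_items": [2, 3, 2, 2], "pu_items": [3, 2, 2, 3], "used_system": False},
        {"peou_items": [5, 6, 5, 5], "pu_items": [6, 7, 6, 6], "used_system": True},
    ]

    def composite(items):
        # Average a respondent's Likert items into one construct score.
        return sum(items) / len(items)

    for r in respondents:
        peou = composite(r["peou_items"])
        pu = composite(r["pu_items"])
        # The TAM posits that higher PEOU and PU predict actual system use.
        print(f"PEOU={peou:.2f}  PU={pu:.2f}  used system: {r['used_system']}")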
Statement of the Problem
The problem that this study will address is the lack of empirical evidence
indicating the effectiveness of TAM-3-based organizational interventions at adjusting end
user IT acceptance during specific SDLC phases. At the core of the TAM-3 is the TAM.
Davis (1986) introduced the TAM to enhance the understanding of end user IT
acceptance. Davis et al. (1989) found that the TAM can predict and explain IT use by
measuring the variables of perceived ease of use and perceived usefulness, which can
then predict end user IT acceptance (Davis, 1986). The benefit of increased end user IT
acceptance is that it enables greater technology utilization (Adams, Nelson, & Todd,
1992) and increased IT integration within the implementing organization (Kang &
Santhanam, 2003). The TAM has received much research attention, and Venkatesh and
Bala (2008) extended the TAM by introducing the TAM-3.
Venkatesh and Bala (2008) extended the TAM by defining and testing the
determinants of perceived ease of use and perceived usefulness through the TAM-3.
Their study consisted of a literature review and four longitudinal field studies structured
to identify antecedent factors that adjusted the TAM’s perceived ease of use and
perceived usefulness variables. However, the effectiveness of different TAM-3
organizational interventions during specific SDLC phases has received little attention.
The problem is that empirical evidence supporting use of the TAM-3 in specific
SDLC phases has been lacking. By measuring different organizational interventions and
varying the timing of a TAM-3 organizational intervention, this research will add to the
body of knowledge that TAM-3 practitioners can use to select the best intervention, and
the best timing for it, within the SDLC implementation phase to address end user IT
acceptance.
Purpose of the Study
This study addresses the lack of empirical evidence for TAM-3 practitioners
detailing TAM-3 organizational intervention effectiveness. The study has four purposes:
1. Determine within the systems installation environment the effectiveness of
different IT-based TAM-3 organizational interventions.
2. Determine whether one TAM-3 intervention (i.e., IT training) is more
effective than another IT TAM-3 intervention (i.e., incentives).
3. Determine whether perceived ease of use and perceived usefulness vary based
upon the timing of IT training.
4. Add to the body of TAM-3 knowledge about IT organizational interventions
as they relate to perceived ease of use and perceived usefulness.
Rationale
Multiple organizational interventions can adjust end user perceptions toward IT
acceptance. This study will inform the organizational intervention selection process. An
organizational intervention may occur at any time. This study will inform the timing for
an organizational intervention. Multiple phases exist within the SDLC, so it is the intent
of this study to indicate whether one of two organizational interventions, namely, training
or incentives, has a greater effect than the other.
Beyond the empirical evidence that organizational leadership and IT system
implementers will receive from this research, past experiences, such as bad system
implementations, can influence future expectations from end users (Ramlall, 2004).
Increasing practitioner understanding of the effectiveness of an IT implementation can
improve the intervention selection process. Understanding intervention effectiveness
allows businesses to focus on specific interventions that have been shown to be effective,
which then facilitates user acceptance of the current implementation and may benefit
future implementations.
Finally, one option available to organizational leadership is to manage end user IT
acceptance through mandates because top-management support is critical to IS success
(Sabherwal, Jeyaraj, & Chowa, 2006). Mandates are an option for organizational
leadership that reflect management support (Sharma & Yetton, 2003; Venkatesh & Bala,
2008). Taylor-Cummings (1998) pointed out that end user compliance with mandates
does not increase organizational support for an IS from its members. Overt coercion such
as mandates may facilitate short-term system utilization, but individual usage will vary
(Martins & Kellermanns, 2004). End users who voluntarily choose to use a collaborative
system exploit the efficiencies offered through its use and embrace the changes in process
and culture that the system enables (Kang & Santhanam, 2003).
Research Questions and Hypotheses
The study will be guided by eight research questions and hypotheses:
1. Is an end user’s IT system ease of use perception dependent on the training
timing and IT system availability?
H01: IT system perceived ease of use is not dependent on the training timing and
IT system availability.
Ha1: IT system perceived ease of use is dependent on the training timing and IT
system availability.
2. Is an end user’s IT system usefulness perception dependent on the training
timing and IT system availability?
H02: IT system perceived usefulness is not dependent on the training timing and
IT system availability.
Ha2: IT system perceived usefulness is dependent on the training timing and IT
system availability.
3. Does training increase end user IT system ease of use perception?
H03: Training does not increase IT system perceived ease of use.
Ha3: Training increases IT system perceived ease of use.
4. Does training increase end user IT system usefulness perception?
H04: Training does not increase IT system perceived usefulness.
Ha4: Training increases IT system perceived usefulness.
5. Do incentives increase end user IT system ease of use perception?
H05: Incentives do not increase IT system perceived ease of use.
Ha5: Incentives increase IT system perceived ease of use.
6. Do incentives increase end user IT system usefulness perception?
H06: Incentives do not increase IT system perceived usefulness.
Ha6: Incentives increase IT system perceived usefulness.
7. Does training increase end user IT system perceived ease of use more than
incentives?
H07: Training does not increase IT system perceived ease of use more than
incentives.
Ha7: Training increases IT system perceived ease of use more than incentives.
8. Does training increase end user IT system perceived usefulness more than
incentives?
H08: Training does not increase IT system perceived usefulness more than
incentives.
Ha8: Training increases IT system perceived usefulness more than incentives.
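
To illustrate how hypotheses such as H03 through H08 might be evaluated, the following Python sketch applies a paired, one-tailed t test to hypothetical pretreatment and posttreatment composite perceived ease of use scores. The data values, sample size, significance level, and the use of scipy are assumptions of this sketch, not elements of the study's analysis plan, which chapter 3 details.

    from scipy import stats

    # Hypothetical composite perceived ease of use scores (1-7 scale) for
    # the same eight end users before and after the training intervention.
    pre = [3.2, 4.0, 2.8, 3.5, 4.1, 3.0, 3.7, 2.9]
    post = [4.1, 4.3, 3.5, 3.9, 4.0, 3.8, 4.2, 3.6]

    # Paired, one-tailed t test of H03 (no increase) against Ha3 (increase).
    t_stat, p_value = stats.ttest_rel(post, pre, alternative="greater")
    print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
    if p_value < 0.05:
        print("Reject H03: posttreatment perceived ease of use is higher.")
    else:
        print("Fail to reject H03.")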
Significance of the Study
This study will provide evidence for practitioners interested in the effectiveness of
two interventions within an organization. The study also will inform the practitioners
about the optimal timing to deliver training to harness the maximum intervention effect.
A system implementer may gain an additional implementation tool that enables
successful IT system implementation by involving the end user community. The end user
community will then gain from increased collaborative system use and related
productivity and efficiency because organizational value increases as each member
increases system use. Additional IT system utilization increases end user information
sharing and enables the end user community to exploit additional IT system features.
Increasing system utilization by getting users to engage the system increases productivity,
enables cultural changes, and expands these benefits for the organization, subsequently
enabling strategic benefits beyond using the IT system.
The study context surrounds the IT system implementation process and is
significant because the study creates knowledge within the IT discipline for TAM-3
practitioners and scholars. The results of this study also will provide business
management with mechanisms that facilitate business strategies and cultural changes
within the organizational environment. Scholars may benefit from linking theory to
practice by testing and measuring the results previously postulated by Venkatesh and
Bala (2008). Practitioners who are implementing IT, as well as business executives,
administrators, and management, may benefit from the quantitative results that indicate
the effectiveness of specific interventions. The results may inform the decision makers
who are responsible for constructing and selecting specific interventions.
Definitions of Terms
Incentive. “A thing that motivates or encourages one to do something” (Oxford
American College Dictionary, 2002, p. 676).
Perceived ease of use. “The degree to which the prospective user expects the
target system to be free of effort” (Davis et al., 1989, p. 985).
Perceived usefulness. “The prospective user’s subjective probability that using a
specific application system will increase his or her job performance within an
organizational context” (Davis et al., 1989, p. 985).
Systems development life cycle (SDLC). “A common methodology for system
development in many organizations; it features several phases that mark the progress of
the system analysis and design effort” (Hoffer et al., 2008, p. 9).
Technology acceptance model (TAM). Davis sought to apply the TAM to predict
IT system user acceptance through two variables, namely, perceived ease of use and
perceived usefulness (as cited in Davis et al., 1989). Davis (1986) adapted the theory of
reasoned action introduced by Fishbein and Ajzen in 1975.
Technology acceptance model 3 (TAM-3). Venkatesh and Bala (2008) extended
TAM research detailing perceived ease of use determinants and perceived usefulness
determinants, and they linked the identified determinants to specific organizational
interventions believed effective in adjusting perceived ease of use and perceived
usefulness variables (see Table 1 & Table 2). TAM-3 guides the organizational
intervention design in this study that will address end user IT acceptance.
Assumptions and Limitations
This study will measure the effectiveness of two interventions captured through
the variables of perceived ease of use and perceived usefulness. The TAM indicates that
perceived ease of use and perceived usefulness of IT predict actual system usage (Davis,
1986, 1989; Venkatesh & Morris, 2000). This study will draw upon previously published
research indicating that perceived ease of use and perceived usefulness are antecedents of
IT end user acceptance and actual IT system usage (Venkatesh & Bala, 2008). Additional
variables may be involved.
Table 1. Perceived Usefulness Determinants Presented in TAM-3
Perceived ease of use: The degree to which the end user “expects the target system to be free of effort” (Davis et al., 1989, p. 985).
Subjective norm: The degree to which the end user perceives “that most people who are important to him think he should or should not perform the behavior in question” (Fishbein & Ajzen, 1975, p. 302).
Image: The degree to which “use of an innovation is perceived to enhance one’s image or status in one’s social system” (Moore & Benbasat, 1992, p. 195).
Job relevance: “An individual’s perception regarding the degree to which the target system is applicable to his or her job” (Venkatesh & Davis, 2000, p. 191).
Output quality: “People will take into consideration how well the system performs those tasks, which we refer to as perceptions of output quality” (Venkatesh & Davis, 2000, p. 191).
Result demonstrability: “The degree to which an individual believes that the results of using a system are tangible, observable, and communicable” (Venkatesh & Bala, 2008, p. 277).
Note. Adapted from “Technology Acceptance Model 3 and a Research Agenda on Interventions,” by V. Venkatesh and H. Bala, 2008, Decision Sciences, 39(2), p. 277.
Table 2. Perceived Ease of Use Determinants Presented in TAM-3
Computer anxiety: “The fear or apprehension felt by an individual when using computers, or when considering the possibility of computer utilization” (Maurer & Simonson, 1984, p. 321).
Computer playfulness: “The degree of cognitive spontaneity in microcomputer interactions” (Webster & Martocchio, 1992, p. 204).
Computer self-efficacy: “An individual's perception of his or her ability to use a computer in the accomplishment of a job task” (Compeau & Higgins, 1995, p. 193).
Objective usability: A “comparison of systems based on the actual level (rather than perceptions) of effort required to complete specific tasks” (Venkatesh, 2000, pp. 350-351).
Perceived enjoyment: “The activity of using a specific system is perceived to be enjoyable in its own right, aside from any performance consequences resulting from system use” (Venkatesh, 2000, p. 351).
Perception of external control: “Perceptions of external control are related to individuals’ control beliefs regarding the availability of organizational resources and support structure to facilitate the use of a system” (Venkatesh & Bala, 2008, p. 278).
Note. Adapted from “Technology Acceptance Model 3 and a Research Agenda on Interventions,” by V. Venkatesh and H. Bala, 2008, Decision Sciences, 39(2), p. 279.
Nature of the Study
This exploratory, quantitative study will use a field experiment to determine
intervention effectiveness on an end user community surrounding an IT system
implementation. End user IT system acceptance will be determined by measuring the
perceived ease of use and perceived usefulness variables, which predict end user
technology acceptance as detailed in the TAM (Davis, 1986, 1989; Venkatesh & Bala,
2008).
Summary
This chapter introduced a problem that exists in a business environment, where
either the end user community underutilizes an available IT system or the business seeks
to maximize end user community productivity through increased IT usage. The context of
this problem surrounds the implementation phase within the SDLC; the chapter presented
two organizational interventions that Venkatesh and Bala (2008) posited address end user
acceptance. Organizations seeking to increase end user IT system acceptance may look to
specific organizational interventions that facilitate increased IT acceptance by the end
user community. TAM-3-based organizational interventions theorized to increase end
user IT acceptance have received little research attention, even though the research
results may directly relate to the practitioner conducting IT system implementation and
business management within any organization.
Organization of the Remainder of the Study
The comprehensive literature review in chapter 2 frames and contextualizes the
study. Chapter 3 details the methodology and the rationale for using the field experiment
methodology. Chapter 4 details the results of the study, followed by a discussion of the
results, implications, and recommendations in chapter 5.
CHAPTER 2. LITERATURE REVIEW
Introduction
The literature review includes a discussion of research on business strategy,
derived from institutional executives, as well as institutional culture, which defines
potential organizational interventions that address end user IT acceptance. Figure 2
depicts the narrowing focus of the literature review from the business strategy outer ring
to the central ring, which is end user IT acceptance. The literature review provides an
understanding of the influences that guide decision makers when implementing IT and
formulating actions that enable successful IT implementation. The literature review
indicated that the business strategy and culture of the participating organization favored
specific interventions. The literature review yields two organizational interventions
that adjust end user IT acceptance during IT system implementation.
Business Strategy
Ward and Peppard (2002) conceptualized three IT eras, each with different
objectives within the computing discipline that include data processing, management ISs,
and strategic ISs. Ward and Peppard asserted that the computing discipline functionality
has changed, suggesting that IT offers businesses mechanisms that address
competitive forces when they strategically select, implement, and use IT. How IT is
pressed into service within an organization and the reasons for selecting IT have changed
from simply using IT to increase efficiencies to increasing management effectiveness to
improve competitiveness (Ward & Peppard, 2002). Ward and Peppard also predicted a
fourth IT era, where focused IT investments enable a business strategy through the
organizational change that the IT investments create through the IT implementation
process and actual IT system utilization. The business strategically aligns the changes
inherent in implementing and using IT with the institutional strategy. Through continual
strategic IT investments and an eye on the business strategy, the institution gains a
competitive advantage that delivers value.
Figure 2. Conceptual framework used during literature review.
IT is a broad discipline that encompasses networks, hardware, software, and data
(O’Brien & Marakas, 2008). The IT system central to this study is e-mail, delivered from
the manufacturer and bundled as a collaboration application suite that includes
calendaring, contact manager, task manager, note manager, personal journal manager,
and e-mail application. The literature review focused on the e-mail application. End users
use software to process data (O’Brien & Marakas, 2008). Application software enables
computer end users to get data into a computer, facilitates data retrieval from the
computer, stores the data for later retrieval, and facilitates data movement from one
individual to others who can use the data. The e-mail application facilitates
communication through a computer and is among the most fundamental and ubiquitous
application systems within a business context.
Businesses generally improve efficiency by implementing IT, but how to calculate
the actual return on investment is less clearly understood (Ward & Peppard, 2002). Much
of the literature has focused on the difficulty associated with implementing complex
systems within the organization. Depending on the IT system customization level, the
organization might expend significant implementation effort and require business process
redesigns (Hoffer et al., 2008). The customization occurs within the IT system software
and also requires changes within the business and social group concerning procedures
and processes (Ward & Peppard, 2002).
During IT system implementations, complex systems like Enterprise Resource
Planning (ERP) may receive customization, the business process may change, and the
employees may need to adapt their interactions within the social group and business. ERP
implementations are so large and complex that significant customization and budgeting
occur, and organizations typically augment their staff during implementations with
consultant experts (Lucas, 1999). Basic systems that require minimal customization may
not require an outside consultant or additional budgeting for implementation, yet they
offer organizational leadership mechanisms that enable change.
Implementing an IT system within an organization has various complexities that
influence the cost associated with implementing the system. Implementing an IT system
has some degree of cost and benefit, but the IT implementation also facilitates
organizational changes (Ward & Peppard, 2002). The changes that occur by
implementing an IT system offer business leadership the ability to facilitate strategic
change. The end user community may choose not to accept the IT system because its
members do not understand the system and may not understand how their individual
actions disrupt the organization (Kang & Santhanam, 2003). Teo and Ang (2001) found
that “people at all levels must understand and accept the change process” (p. 462). The
strategic changes desired by introducing the IT system within the business cannot occur if
the end users do not accept and use the installed system.
Decision makers may deliberately use IT system implementation to enable
change within their organizations (Glen, 2003; Lucas, 1999; Schein, 2004, 2009; Senge,
2006; Ward & Peppard, 2002). Business executives may confront barriers to their
strategic plans during IT system implementation. Teo and Ang (2001) separated IS
planning into three implementation phases and indicated that the greatest problems
surrounding IS strategy involve top management support, involvement, and commitment.
A barrier exists for successfully implementing an IT system if organizational leaders
refuse to embrace and accept such an implementation. Furthermore, any business process
redesign that accompanies IT system implementation adds an additional barrier if the
organizational leaders are not supportive, involved, or committed to such an
implementation (Teo & Ang, 2001).
A benefit exists for organizations to use system implementers who are familiar
with the system implementation process (Lucas, 1999). Businesses use metrics that indicate
IT system implementation success (Freedman, 2003). Time and budget are metrics used
during system implementation (Taylor-Cummings, 1998), but these metrics fail to
capture other quantifiable metrics available during system implementation. Aligning IT
system changes, organizational changes, and social group processes during IT
system implementation enables a business strategy (Ward & Peppard, 2002). Yet the
metrics upon which the system implementation experts are measured are on-time system
delivery and within-budget installation (Taylor-Cummings, 1998), both of which indicate
system implementation success (Freedman, 2003). Research has supported including end
users throughout the IT system implementation process (David et al., 2002; Freedman,
2003; Martinko et al., 1996; Sabherwal et al., 2006; Sharma & Yetton, 2003), yet end
user involvement is not captured through on-time system delivery and within-budget
installation metrics.
Other hurdles exist for IT system implementers, including incorrect job
perceptions or job idealizations that result in misaligned organizational changes arising from
misaligned interactions between users and system implementers (Lyytinen & Hirschheim,
1987). Strategic IT use involves aligning continued IT use with the business (Ward &
Peppard, 2002). Metrics such as on-time system delivery and within-budget installation
marginalize concerns such as individual acceptance issues and training during system
implementation for the user community. On-time system delivery and within-budget
installation overlook the end user community, not capturing end user metrics such as the
end user perceptions surrounding the technology ease of use or usefulness while
implementing a system, even with overwhelming evidence indicating early and
continuous end user involvement (Gulliksen et al., 2003; Martinko et al., 1996;
McAllister, 2006; Schenk et al., 1998).
Viewed from the end user perspective, the many small problems encountered
with IT appear to belong to the IT system itself rather than being separate
problems. Ward and Peppard (2002) expanded on system implementation failures within
the user domain, indicating that many system implementations result in progressive
system use decreases because of systemic inadequacies in end user training. Ward and
Peppard posited that organizations receive additional value through increased
productivity as each individual increases system use, while simultaneously enabling
change.
Factors crucial during IT system implementation are system modification to fit
the organization (Kang & Santhanam, 2003); business processes adaptation (Ward &
Peppard, 2002); and installed system end user acceptance and utilization (Adams et al.,
1992). On-time system delivery and within-budget installation do not capture the metrics
associated with end user involvement, even though they are vital to IT system
implementation. Metrics such as on-time system delivery and within-budget installation
only measure and reward successful timing and budgets, not the desired business and
strategic changes that result from actual IT system utilization.
A lack of individual end user acceptance threatens the business changes and
strategic changes enabled through implementation of the IT system. As more
organization members accept and engage the IT system, the institutional body of
knowledge increases, but with limited or no actual IT system utilization, there is no body
of knowledge or common understanding of how the implemented IT system fits within
the business (Ward & Peppard, 2002). IT implementations are successful if they meet
implementation metrics within an environment, but organizationally, the system
implementation is a failure if end users within the organization fail to understand the
relationship between organizational procedures and the installed system (Ward &
Peppard, 2002). A lack of system use by the end user community limits the results sought
by a strategic IT system implementation.
Bartlett, Ghoshal, and Birkinshaw (2004) noted that “the influence of culture can
also be seen in organizational processes such as the nature of policies and procedures,
planning and control, information processing and communication, and decision-making”
(p. 168). The business strategy manifests within the social group culture as policies and
procedures. Implementing an IT system within an organization initiates change, just as
planning an intervention initiates change. Either using or not using an IT system reflects
the organizational culture. If business management choose an IT system to enable a
competitive or a business advantage, as Ward and Peppard (2002) suggested, system
utilization is required because without actual system utilization, the IT can add little
value.
When the business characteristics change or when the current business strategy is
no longer the best fit for the business, the institution can adopt a new business strategy.
The cultural maturities of an organization inform organizational leadership about the
malleability of culture and firm performance. Adaptable cultures exist in high-performing
organizations, whereas strong-cultured organizations can exist in low-performing firms
(Heskett, Sasser, & Schlesinger, 1997). Ward and Peppard (2002) argued that the business IT
strategy extends beyond determining what technology can do for the organization and
that the business strategy includes more than aligning IT and business objectives. Ward
and Peppard suggested that strategic IT use includes aligning business and IT, as well as
defining, communicating, and understanding the reasons specific IT deployment choices
and methods are used. Delivering value through IT requires coupling the business
strategy with strategic changes that an IT implementation enables, delivering value
through strategic IT selection, implementation, and utilization (Lucas, 1999; Schein,
2009; Ward & Peppard, 2002).
Institutional Culture
Implicit in an intervention is that change will occur in a measurable way. The
desired effect is that the change enables movement from a less desirable state toward a
more desirable state. How organizational leadership present and implement change
depends on the existing culture. This means that when organizational leadership select an
intervention, they reflect on the people involved and where the organization has been,
and they also consider the current culture while determining a new, directed vision of
where the organization will move.
The business strategy that the institution embraces defines and influences the
culture because different business strategies employ different mechanisms that influence
behavior. Ward and Peppard (2002) detailed three general business strategies: low cost,
differentiation, and niche focused. An institution that embraces a differentiation business
strategy focuses on people and their creativity or market orientation rather than on
management controls, and it also utilizes incentive schemes that are not production based
(Ward & Peppard, 2002). Within a differentiation business strategy, an incentive
reinforces preferred activities, such as creativity, or deemphasizes other activities that are
not preferred, such as production quotas. Conversely, within a low-cost business strategy,
an incentive reinforces production quotas as a preferred activity and deemphasizes
creativity. An organizational executive who is contemplating business strategy changes
or cultural changes must consider the current business strategy and current culture (Ward
& Peppard, 2002) while favoring change activities that achieve an alignment between
business strategy and institutional culture.
The message that makes change possible requires communication, but within an
organization, barriers or boundaries can impede communication (Schein, 2009).
Executives might communicate desired change by imposing rules and regulations on IT
system utilization. Mandating IT system utilization is an option, but Schein (2009)
indicated that before change can occur, the driving force for change must be greater than
the restraining force against change and the organizational intervention selected must fit
the organizational culture. Ward and Peppard (2002) noted that a business strategy that
pursues a low-cost strategy seeks structure and conformity, and in such an environment,
mandates are culturally agreeable. Conversely, an organization whose business strategy
embraces differentiation or a niche-focused strategy may encourage workforce creativity
and may find that mandates are not aligned with their culture and business strategy.
Administration might select mandates as the best choice for their business
environment. Mandating IT system utilization, where IT system use is up to end user
discretion, is problematic and points to management issues. Baum and Kling (2004)
stressed that mandating IT system utilization not based upon real and meaningful values
has implications for corporate governance effectiveness. Administration need additional
methods to enable cultural change within their environment beyond mandates. Business
executives who attempt to achieve strategic alignment between culture and strategy need
mechanisms to change the culture by instilling meaning and meaningful values in its
members.
The business strategy and the organizational culture are executive responsibilities.
Organizational leadership create the business strategy, and the business strategy informs
the business culture (Baum & Kling, 2004; Cameron & Quinn, 2006; Schein, 2004,
2009). Before implementing any change, organizational management must understand
what will work within their environment. The leaders need to know their constituents’
thoughts and feelings (Kouzes & Posner, 2007). Change through strategic IT utilization
allows administration to shape change through IT implementation within the business
(Ward & Peppard, 2002). Strategic change is possible if organizational leadership are
aware of the current culture, understand what change means to the constituents who
comprise the social group, and maintain a clear view of the resulting culture and business
strategy.
Different mechanisms that enable cultural change are available for different
organizational maturities (Schein, 2004). If a misalignment exists between culture and the
organization’s mission and goals, a cultural change process could be implemented
(Knowles, Holton, & Swanson, 2005). As an example, Schein (2004) cited technological
seduction as a cultural change mechanism, explaining that the organization introduces
technology and develops the organization through an educational intervention. Cultural
change mechanisms are effective for midlife and early growth organizations, but not for
stabilized cultures (Schein, 2004). Multiple cultural change mechanisms exist, even
though not all of them are effective in a given organization. Schein indicated that culture ranges from malleable to
fixed, depending on the organization’s evolutionary stage.
Using IT to enable change at an organization that is mature or in decline may
result in a reaction from the end user community that is not the desired effect. Schein
(2004) described the results from introducing IT on silo cultures where the implemented
system met end user resistance, subversion, and refusal to engage the system. Positive
cultural change or any other desired change is unlikely if the end user community meets
the change with resistance, subversion, and refusal. Organizational leadership faced with
these issues need methods to mitigate the issues and create a positive environment for the
envisioned change.
IT enables a business strategy when the end user community uses the IT system
because the technology emplaces a common frame of reference throughout the
organization (Schein, 2004). Low acceptance or low utilization by the end user
community is problematic for the business if organizational leadership have strategically
chosen IT implementation to enable change. Obtaining results from an unused resource is
difficult (Mathieson, 1991). Cultural changes occur only when the end user community
adopts the common frame of reference and assumptions that using IT provides (Schein, 2004). Action is
required if end user acceptance and system utilization are less than desired, suggesting
that the resulting cultural change envisioned through IT system implementation is in
jeopardy.
Senge (2006) developed the learning organization concept, which includes
systems thinking, personal mastery, mental models, a shared vision, and team learning,
all suggesting business strategy and cultural alignment. Senge described systems thinking
in an organization as “bound by invisible fabrics of interrelated actions” (p. 7); personal
mastery that implies “dominance over people or things” (p. 7); and mental models of
“deeply ingrained assumptions, generalizations, or even pictures or images that influence
how we understand the world and how we take action” (p. 8). Senge and Schein (2004)
viewed culture from similar perspectives. Senge believed that organizational learning
occurs through teams, not individuals. This view holds that although individuals
comprise the team and each individual retains knowledge, the institutional knowledge
itself constitutes part of the culture and resides within the individual team members.
IT system implementation offers an organization the ability to incorporate a
“managed change program” (Schein, 2009, p. 152) that enables cultural change. Central
to enabling change and successful IT system implementation is the necessity for the end
user community to accept and utilize the implemented IT to enable the envisioned
changes and strategy. Schein (2004) indicated that educational interventions enable
cultural change. Team learning (Senge, 2006) is one organizational intervention that may
address end user IT acceptance. Education communicates information about the IT
system. Training provided to the end user community addresses perceived ease of use and
perceived usefulness determinants (Venkatesh & Bala, 2008).
Organizations with adaptable cultures outperform organizations with strong
cultures (Heskett et al., 1997), but confronted with either culture, administration must
select and implement an intervention appropriate for the culture by weighing the business
strategy and understanding possible changes within the social group culture and the
desired effects of introducing the IT within the environment.
Cultural Interventions
Within a business context, Daft (2004) argued that some resistance to change is
good. Daft suggested using strategies that overcome resistance, including training and
communication, participation and involvement, creation of a psychological safe
environment, and forcing and coercion. Open communication is required for people to
understand the reasons for change, and coping, participation, employee involvement, and
a psychologically safe environment imply that a culture of trust exists between employees
and leadership (Daft, 2004).
Daft (2004) noted that even though force and coercion have been used
successfully to adjust business processes, these strategies to overcome resistance may
have unforeseen and deleterious effects, such as negative employee interpretations and sabotage;
consequently, they are not advised. Open communication, participation, and employee
involvement, while creating a psychologically safe environment, are positive intervention
mechanisms. Force and coercion are not intervention mechanisms considered useful at
the target association.
Organizational Interventions
The interventions that are selected create a psychologically safe environment
while ensuring employee involvement and participation. Training and incentives foster
open communication. The training intervention design introduces the IT system, informs
the user community about possible uses of the IT system, and communicates how the IT
system addresses the strategic needs of the organization. The participating association
maintains a culture that is familiar with training.
Another intervention that fosters open communication and addresses employee
involvement and participation while creating a psychologically safe environment utilizes
incentives to engage the installed IT system. The culture of the participating association
is familiar with competition and rewards as an intervention mechanism. Determining
system implementation success and linking system implementation success to the two
selected interventions occurs next.
Interventions and IT System Implementation
During IT system implementation, the system implementers integrate the IT
within the environment. Metrics used to determine successful system implementation
include on-time system delivery and within-budget installation (Royce, 1998), which have
led to increased pressure to deliver IT on time and within budget (Ward & Peppard,
2002). On-time system delivery and within-budget installation metrics do not capture
the end user community's acceptance of, or intention to utilize, the IT system.
IT implementation failures are normally attributable to unresolved organizational
or cultural issues (Ward & Peppard, 2002). Implementing an IT system within an
environment merges the organizational processes, the IT, and the application system end
users within the environment. The implementation processes and changes required
suggest that the metrics used to determine implementation success should include the
end user community.
in IT system development is limited user involvement. Applying interventions and then
measuring end user IT acceptance levels offers the organization a mechanism that
measures and adjusts end user IT acceptance. The process keeps the end user involved
throughout the SDLC and results in quantifiable information that informs the
implementation process.
End user IT acceptance, increased technology utilization, and increased IT
integration within the implementing organization result from technology being perceived
as aligned with the business needs. Venkatesh and Bala (2008) suggested specific
organizational interventions in the TAM-3, including training and incentives, which they
posited adjust end user IT acceptance and increase IT system use. Research has not
indicated whether one TAM-3 intervention is more effective than another at addressing
end user IT acceptance. This study will address that void by measuring two TAM-3
interventions and comparing their effectiveness in addressing end user IT acceptance.
Segmenting of Organizational Interventions Within the SDLC
Interventions can occur whenever organizational leadership determine that they
are required, including events surrounding the SDLC implementation phase. The SDLC
has five phases that cycle from one to the next. The context of this research
effort surrounds the implementation phase in the SDLC, including before, during, and
after the implementation phase.
Research consistently has called for the involvement of end users in multiple
functions and perspectives during all SDLC phases (Adams et al., 1992; David et al.,
2002; Sharma & Yetton, 2003; Venkatesh & Bala, 2008). The five basic SDLC phases
include planning, analysis, design, implementation, and maintenance (Hoffer et al.,
2008). Specific interventions have an affinity for either preimplementation or
postimplementation based upon administrative considerations or current implementation
status (Venkatesh & Bala, 2008). The choice ultimately rests with the business
executive, but measuring one intervention against another informs the
decision process.
Throughout the SDLC, different interventions have different effects on the end
users’ IT system perceptions because some interventions are less effective during specific
SDLC phases. As an example, introducing an IT system through a preview or a hands-on, live demonstration does not make sense postimplementation because the end users already have had extensive exposure to the IT system. Grouping interventions as preimplementation or postimplementation
(Venkatesh & Bala, 2008) separates the interventions within the SDLC and enables
specific IT system implementation milestones while giving the end user community a real
stake in the system implementation process.
Venkatesh and Bala (2008) listed organizational interventions associated with
preimplementation, including design characteristics, user participation, management
support, and incentive alignment. Interventions associated with postimplementation include training, organizational support, and peer support. The TAM-3 links perceived usefulness determinants with subjective norm, image, job relevance, output quality, and result demonstrability, while linking perceived ease of use determinants with computer self-efficacy, perceptions of external control, computer anxiety, computer playfulness, perceived enjoyment, and objective usability (Venkatesh & Bala, 2008).
The TAM-3 extended the TAM. Although the TAM is widely accepted within the
technology adoption field (Bagozzi, 2007), it is not without its detractors. As Bagozzi
(2007) indicated, most TAM research has extended perceived ease of use or perceived
usefulness antecedents, but it has not detailed how perceived ease of use or perceived
usefulness produce their effects. Venkatesh and Bala (2008) indicated that the organizational interventions act on perceived ease of use and perceived usefulness. These determinants are used to construct interventions that attempt to adjust end user perceptions about an IT
system. Within this research effort, the changes in perceived ease of use and perceived
usefulness indicate the intervention effectiveness.
Bagozzi (2007) called for research on specific linkages detailing how perceived
ease of use or perceived usefulness produce effects. Designing interventions around the determinants of perceived ease of use and perceived usefulness, as identified by Venkatesh and Bala (2008), begins to address the questions Bagozzi presented and to inform how perceived ease of use and perceived usefulness produce their effects. Intervention effectiveness speaks to how specific organizational
interventions produce their effects, which was the larger dialogue posited by Bagozzi.
Preimplementation Interventions
Venkatesh and Bala (2008) indicated that training occurs as either a
preimplementation or a postimplementation intervention. Several factors justify using training as a preimplementation intervention. The target association comprises end users with computer proficiency levels ranging from inexperienced to expert.
Preimplementation is appropriate to adjust perceptions before the end users have a chance
to form their own perceptions within the organizational context (Martinko et al., 1996).
The organizational training focuses on training adults and presenting the application
system functional aspects, including how the existing culture will change. The training
communicates how each person plays a part in those changes. After the training, the end
users gain application system access and begin using the new system. Adult learners want
to know “how the learning will be conducted, what will be learned, and why it will be
valuable” (Knowles et al., 2005, p. 201). The preceding factors indicate that preimplementation is the appropriate time for training.
Training Focus and Design
Daft (2004) listed training and communication, participation and involvement,
and a psychologically safe environment as ways to overcome resistance. The training
design in this research effort includes addressing computer functionality for the end user community and communicating the envisioned business process changes, as well as the cultural changes that result from implementing the IT system. The training also includes
informing the end user community why the changes are occurring and the rationale for
the choices. Within a training or classroom environment, Brookfield (2006) advocated
communicating the rationale for the decisions so that the learners understand that the
decisions are based upon experience. Conveying information to the end user community
about the IT system implementation and the strategic rationale for installing the system
informs the end user community why the changes are occurring, thus relaying the
business need for the changes.
Knowles et al. (2005) espoused a view of teaching as “the management of
procedures that will assure specified behavioral changes” (p. 84). Through training, the
organization conveys to the end user community how the new system enables changes
and how the changes are dependent on end user acceptance and system utilization. Within
the target association, adults comprise the end user community, and adults are motivated
to learn when they can link the learning tasks to problems in their lives (Knowles et al.,
2005). Tailoring the instruction design with the recommendations from Daft (2004) and
Knowles et al. relays the training information to the learner in a way that overcomes
resistance to change in the business context and presents the adults in the organization
with training that relates to their daily lives by contextualizing the IT system in their task
performance.
Using training to overcome change resistance (Daft, 2004) informs the end user
community and orients the learners, thus enabling an examination of biases, habits, and
new approaches (Knowles et al., 2005). Adult learning is motivated through the
perception that the learning will help adults deal with real-life situations by using the IT
system (Knowles et al., 2005). Training as a preimplementation intervention paves the
way for implementing an IT system and informs those involved about the business
changes that accompany the implementation.
Training Within the SDLC
Hoffer et al. (2008) described an SDLC as having five phases: planning, analysis,
design, implementation, and maintenance. Training is considered as supporting the IT
system implementation (Hoffer et al., 2008). Training enables the end user community to
understand the business purpose and cultural changes (Schein, 2004) facilitated through
system implementation. Training is a form of communication (Kouzes & Posner, 2007;
Senge, 2006) that informs and relates how the IT system enables the end user community
to accomplish tasks (Knowles et al., 2005).
When organizational support such as training is lacking, end users must look for answers from available resources, such as coworkers who may know how to complete a process within a system (Hoffer et al., 2008). End users who are not
informed harbor negative IT system perceptions; those who entertain incorrect perspectives about the appropriate method to complete a task may communicate information that perpetuates incorrect mental models (Senge, 2007). Presenting the IT system accurately is important during preimplementation so that end users hold accurate IT system perceptions and minimize initial resistance (Venkatesh & Bala, 2008). For these reasons, training serves here as a preimplementation intervention.
Training Components
As an intervention, the training design requires that the training target what the
organization wants to change. The IT system central to this study is an e-mail application delivered by the manufacturer as part of a collaboration suite that includes calendaring, a contact manager, a task manager, a note manager, and a personal journal. Kang and Santhanam (2003) found that during a collaboration system implementation, task-focused training left end users without the information they needed to engage the system; end users required information beyond system task functionality. Knowles et al. (2005) commented that because adults perceive
learning as life centered, they orient learning to their own lives and experiences. This
finding indicated that task-oriented information by itself is inadequate. Training for adult
learners presents the end users with information about how their system utilization fits
into their lives and within their social group and the organization. The training includes
conveying the business rationale with the reasons supporting any process changes (Kang
& Santhanam, 2003).
Adult education, or andragogy, has a different approach from educating children,
as Knowles et al. (2005) indicated. Adults resist situations in which the will of others is imposed on them. Rather than lecture-based training, the training uses a dialogue that engages the end user community. The training is designed to link the IT benefits to the end user interaction.
Kouzes and Posner (2007) showed that an above-average investment in training benefits organizations through higher employee involvement, more commitment, and increased customer service levels. The investment in employees fosters increased end
user community involvement. Leaders are responsible for creating an environment where
all organizational members are empowered to voice their views and take initiatives
(Kouzes & Posner, 2007). Involving end users during the SDLC extends beyond
performing tasks during development and system implementation; it also includes full
end user involvement in the project, including participation and involvement (Sabherwal
et al., 2006). Freedman (2003) found that a lack of end user involvement during the IT
implementation project “is the key cause of resistance, sniping, rumor, and sabotage”
(p. 106). Training facilitates a psychologically safe environment where difficult topics, such as change and what the change means for the individual and the organization, receive adequate attention. Training gives the end user community a voice in the
implementation process.
Delivering information on functional tasks while fostering an environment where
the adult end user community relates the IT system to their daily lives informs the end
user community about the stakes involved and gives the adult learners a stake in the
process. Adult learners maintain shared experiences that enable learning (Knowles et al.,
2005). An open dialogue environment gives the training attendees the ability to ask
questions that lead to a better IT system understanding, more participation in the
implementation process, and an increase in their stake in the implementation success.
Kouzes and Posner (2007) asserted that people “want to know what they do matters” (p. 134) and that what they do has an impact on others in their lives. The end users
should feel free to participate, ask questions, and receive realistic answers. Training
enables the end users to contextualize their role with respect to other end users, the social
group, the organization, and the IT system. Through training, the end users gain a real
stake in the IT system.
Communication is not a one-way operation. Glen (2003) stated, “Effective
communication occurs when a thought of one person is translated into words, expressed,
heard, and translated back into an identical thought in the mind of another” (p. 35). After
communication, the next logical step is taking action based upon the communicated
message. Follow-through processes must accompany any communication of change,
which end users can witness through evidence of money and materials (Kouzes & Posner,
2007). The training orients the end user community to IT system tasks and communicates the process changes that the IT system implementation brings to the environment. The end user community can then reconcile the communicated changes with the observed institutional support for the IT system implementation within the organization. The
learners thus receive the information they need to contextualize the information,
understand what it means for them within their job, and perceive the institutional
commitment from the institutional hierarchy within their organization.
Organizations might use IT implementation as a mechanism that enables change
within the environment (Schein, 2004). Surprisingly, even as organizations invest more in
IT systems (David et al., 2002), organizational members are unable to explain the
business benefits that the IT system offers (Faguet, 2003). If the institutional power structure chooses to implement the new IT system without understanding why it is implementing it, communicating the reasons for implementation cogently to the end user
community is unlikely. Leaders should communicate with their employees (Kouzes &
Posner, 2007). From a strategic perspective, leaders must know their environment and
create a dialogue with members by connecting with information sources (Kouzes &
Posner, 2007). Through communication, organizational leadership builds culture, which
then informs and conveys the existing business culture (Baum & Kling, 2004).
“Communication plays a major role in promoting a transparent culture, and doing
it frequently is important” (Baum & Kling, 2004, p. 20). Similar to resistance to change
within organizations, Brookfield (2006) discussed resistance to learning, suggesting that
teachers determine resistance sources. Resistance to change in the organizational context
(Baum & Kling, 2004) or the training context (Brookfield, 2006) requires action to
reduce the effect of the sources of resistance. Protecting desirable traits within the
corporate culture means that organizational leaders can neutralize existing cultural threats
(Baum & Kling, 2004). Communication that details organizational leadership vision
informs the end user community about the changes required during IT system
implementation and addresses change resistance.
Postimplementation Intervention
Venkatesh and Bala (2008) suggested that different interventions are applicable
during preimplementation and postimplementation. Venkatesh and Bala summarized organizational interventions that address design characteristics, user participation, management support, incentive alignment, training, organizational support, and peer support. The postimplementation intervention selected here utilizes incentives.
Incentives
Change that aligns the business process and the organizational culture requires coordination. Coordination requires communication and motivation (D. E. Campbell,
2006). One way to motivate individuals is by offering incentives. Incentives have such a
profound impact on motivation that Martins and Kellermanns (2004), who investigated
acceptance of a web-based application, found that just the perception of incentives was a
motivating factor. Not all incentives are monetary; some incentives include public
recognition of an accomplishment. Ramlall (2004) referenced Champagne and McAfee,
who indicated that praise and awards can address employees’ esteem and their need for
psychological security. Lindborg (1997) indicated that within the team environment, it is
important to recognize the contributions of individual team members. Utilizing reward and praise incentives influences employees’ actions in a positive way.
Other literature on incentive effectiveness has been conflicting. Todd and
Benbasat (1999) analyzed strategy selection choices and incentives, finding that incentivized employees were not influenced, suggesting that offering incentives alone is not sufficient. Just as training requires communication to convey the
message, incentives must use communication so that the incentive intent is understood
(D. E. Campbell, 2006). This means that the end user community must understand the purpose of the incentives so that they can align their actions accordingly. Communication in
conjunction with incentives fosters coordinated action and creates motivation (D. E.
Campbell, 2006). The information communicated to the end user community guides the
choices surrounding desirable actions, but the end user community must have the
capability to choose from and possess knowledge about the available options surrounding
the incentivized action (Todd & Benbasat, 1999). Training enables the employees by
informing and presenting options that give the employee task choices to enable change.
Research on incentives has indicated that rewards can be used to increase and
decrease interest in tasks (Eisenberger & Cameron, 1996). Eisenberger and Cameron
(1996) found that task interest increases with quality-dependent rewards and that completion-dependent rewards and performance-independent rewards are unreliable. Increasing the end users’ interest in IT system use is the goal of the incentive, suggesting that quality-dependent rewards are prudent.
Business goals, the recognition and rewards that the business embraces, and the methods businesses use to measure employee performance are reflected in the business culture (Baum & Kling, 2004). Designing incentives that overcome resistance requires
careful consideration (Cameron & Quinn, 2006), and the incentives used should be
central to reinforcing the culture selected (Kouzes & Posner, 2007). Communication to
the end user community about the new culture and the incentives ensures that the new
methods and the rewards for embracing the new methods are common knowledge within
the environment.
Eisenhardt (1985) merged organizational literature and agency theory, explicitly
adding reward as a control mechanism for organizations. Eisenhardt found that when
organizations apply rewards as a form of control, the rewards are more effective as the
task becomes simpler and the measurement is easier and cheaper. Conversely, Eisenhardt
also noted that reward effectiveness decreases as the task becomes more complex and
measurement becomes increasingly difficult with associated higher costs. Reward effectiveness is thus optimal for simple tasks that are easy and inexpensive to measure.
The intent of this study is to determine incentive effectiveness at addressing end
user IT acceptance, as measured through perceived ease of use and perceived usefulness
variables. Davis (1989) theorized in the TAM that end user IT system use is predicted by
two variables, namely, perceived ease of use and perceived usefulness. Incentives offer
organizations a venue to both communicate the desired changes and reward desirable
actions, which reinforces the new culture (Cameron & Quinn, 2006). Eisenhardt (1985)
suggested designing an incentive system that is easy to observe and rewards employees
based upon behaviors. A postimplementation incentive requiring actual e-mail system
interaction by the end user community includes the incentive design suggestions from
Eisenhardt: requires simple behavior, is easy to observe, and can be measured through
low cost. The incentive will offer a voluntary multipart quiz delivered through e-mail and
responded through e-mail over multiple days. The users who voluntarily choose to take
part in the multipart quiz will engage the IT system and interact with the system to answer the questions. The reward for taking part in the research is a chance to win a $50.00 gift card.
Incentive Design Informed by Prior Research
Incentives research has spanned different disciplines, including management for
control (Eisenhardt, 1985); management support (Sharma & Yetton, 2003); and IS
management used as coordination mechanisms (Sherif, Zmud, & Browne, 2006).
Applying incentives within these management support activities will enable IT system acceptance and cultural changes in the target environment. Schein (2009) indicated that determining rewards and punishments is difficult. This research effort will use a monetary reward as an incentive to enable IT system utilization. The incentive enables IT system use because it involves an e-mail-delivered quiz and an e-mail-delivered response.
The incentive is a game designed with content that informs the end user community about
e-mail security and e-mail features while requiring interaction with the e-mail system.
End User IT Acceptance
Davis (1986) introduced the TAM, which adapts the theory of reasoned action to
explain computer usage behavior (Davis et al., 1989). New models have extended the
TAM to include the TAM-2 (Venkatesh & Davis, 2000) and the TAM-3 (Venkatesh &
Bala, 2008). TAM constructs are perceived ease of use and perceived usefulness (Davis
et al., 1989). Determinants of perceived ease of use and perceived usefulness detailed in
the TAM-3 are the basis for selecting the two organizational interventions employed in
this study.
Venkatesh and Bala (2008) presented a view of end user IT system acceptance,
including specific attributes associated with the IT system implementation that influence
end user acceptance. The organizational interventions, highlighted in Venkatesh and
Bala, are designed to modify end user IT acceptance. The effectiveness of these interventions at addressing end user IT acceptance is unknown. The two interventions used in this
research are training and incentives. Training is a preimplementation organizational
intervention and incentives occur postimplementation. This study will measure changes
in perceived ease of use and perceived usefulness to indicate the effectiveness of the
intervention. Measuring different interventions effectiveness originated from suggestions
by Venkatesh and Bala (2008).
End User IT Acceptance Cost and Benefit
Dholakia, Bagozzi, and Gopinath (2007) suggested that experiences derived from
past system implementations are easier to envision than new implementation plans.
Dholakia et al. suggested that past failed IT system implementations influence current or
future implementations and also limit or interfere with decisions about current or future
implementations. Integrating interventions throughout the IT system implementation
enables mechanisms to measure end user acceptance throughout the IT system
implementation process. The measures, mechanisms, and metrics quantify end user acceptance and give organizational leadership the tools to monitor and react to end user IT acceptance.
Mandates
Mandates, along with negotiation, persuasion, motivation, and support, are an option for organizational leadership to enable IT system utilization (Sharma & Yetton, 2003). Venkatesh and Davis (2000) indicated that over time, mandates become less effective. Carlson (2000) believed that top-down mandates create workforce resistance.
Venkatesh and Davis suggested that alternatives to mandates are more effective as work
environments move toward worker empowerment. Maintaining good morale and creating
a new culture through voluntary IT system usage are incongruent with institutional
mandates. Even though mandates are an option available to businesses, research has
indicated that mandates are not preferred and do not enable a positive working
environment when addressing end user IT acceptance.
McAllister (2006) stated, “A system delivered on time by developers but that fails
to do what is needed by the users is a failed system, providing no value to the business”
(p. 156). End users who are given access to an IT system that they do not accept may
resist the new IT, but through increasing end user involvement during the
implementation, the resistance can be addressed (O’Brien & Marakas, 2008). User
resistance to the IT system limits the IT system’s effectiveness within the organization.
Top challenges involved in developing and implementing systems include getting employees to use the system and making it easy to use (O’Brien & Marakas, 2008). Once the
organization realizes that it has issues with end user IT acceptance, it must take action to
adjust end user IT perceptions.
Summary
This literature review presented a path through the existing literature on
management strategy, business strategy, and IT strategy. An IT system implementation
that is introduced to enable change requires IT system utilization by the end user
community (see Figure 3).
Figure 3. Strategy relationships enabled through organizational interventions.
End user IT acceptance is important and relevant in determining actual IT system
utilization (Joshi, 2005). Organizational interventions occur throughout the IT system
implementation phase. Determining intervention effectiveness and determining which
interventions are more effective at addressing end user IT acceptance informs the
practitioner when selecting and implementing organizational interventions. Chapter 3
details the methodology and the rationale for using the field experiment methodology.
CHAPTER 3. METHODOLOGY
Introduction
Siponen and Oinas-Kukkonen (2007) extended the framework presented by
Järvinen (2000) that maps the research design to the research question within IS research.
This study will use the framework detailed by Siponen and Oinas-Kukkonen with an
approach that centers on artifact utility. This study treats organizational interventions as the artifact; the artifact evaluation is the intervention effectiveness at positively adjusting end user IT acceptance, as measured through surveys that capture end user perceptions of perceived ease of use and perceived usefulness. The
TAM indicates that perceived ease of use and perceived usefulness of an IT system
predict actual system utilization (Davis, 1986, 1989; Venkatesh & Morris, 2000).
Venkatesh and Bala (2008) extended the TAM with the TAM-3, which identifies and
links the determinants of perceived ease of use and perceived usefulness with specific
organizational interventions. TAM-3 literature has provided little evidence of the
effectiveness of organizational interventions on end user IT system acceptance. This
study seeks to determine the organizational intervention effectiveness on end user IT
acceptance.
Järvinen (2008) identified the links between IS research questions and research
methods. Yin (2009) indicated that an exploratory experiment and an exploratory survey
are both appropriate for testing, noting that survey methods are beneficial when capturing
the “prevalence of a phenomenon” (p. 9). Surveys and field experiments are theory-
testing devices (Järvinen, 2008), and this study is interested in determining the
intervention effectiveness on end user IT acceptance. The intervention effectiveness will be measured using a survey instrument.
Research Design
The research design for this study is a field experiment. A pre- and posttreatment
survey will measure the treatment effectiveness. D. T. Campbell and Stanley (1963) presented multiple experimental and quasi-experimental designs. This
research effort adopts the conventions presented by D. T. Campbell and Stanley for
depicting an experiment: X for exposure to treatment, O for observation through
measurement, and R for random assignment.
The target association in this research is comprised of eight separate facilities.
Two experiment phases will occur (see Figure 4). Phase 1 involves conducting one
experiment at each facility before IT system implementation. The Phase 1 experiments
will be conducted sequentially at each of the eight facilities. Phase 2 will involve one
association-wide experiment occurring postimplementation. An experiment determines
the treatment effect by manipulating variables and measuring the effect on other variables
(D. T. Campbell & Stanley, 1963).
R is an individual random assignment predetermined using specific methods
described in the Sample section. During Phase 1, the X is training. End users in the
treatment group will receive training designed to focus on addressing perceived ease of
use and perceived usefulness of an e-mail system, which is the target IT system. During
Phase 2, the X offers an incentive to utilize the IT system. End users in the treatment
group will take part in a 5-part quiz that will be delivered through e-mail each day, with a
chance to win a prize for correctly answering the most questions. The prize is either a $50
gift card to a national submarine sandwich chain or a national coffeehouse chain. End users who return the top-scoring quizzes will be entered into a drawing for the gift card. One winner will receive the incentive gift card.
Figure 4. Project phases, facility sequencing, and experimental framework. Each Phase 1 facility experiment (field test sites and Facilities A through E, sequenced over time) and the organization-wide Phase 2 experiment follow the design R O1 X O2 with control R O3 O4.
Creswell (2009) recommended a process for designing quantitative research accomplished through survey and experimental research. The experiment captures the
treatment effect on end user IT acceptance, and the survey captures comparable
information across the derived sample (Cooper & Schindler, 2008). Based upon the data
collected, the measured difference between the treatment group and the control group is
the treatment effect.
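Stated in the notation above (a sketch of the logic only; the Data Analysis section specifies the actual statistical tests), the treatment effect is the difference in gains between the groups:

treatment effect = (O2 - O1) - (O4 - O3)

where O1 and O2 are the treatment group's pretreatment and posttreatment measurements and O3 and O4 are the control group's.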
A survey instrument can employ multiple modes, including the Internet and paper, to deliver the questions (Dillman, Smyth, & Christian, 2009). In this study, two data
collection modes will be used because the treatment will enable computer access within
the organization. Before the first treatment, not all individuals at the target association
will have access to computers; some individuals will have minimal computer experience.
After the new IT system is introduced, all members will have access to computers and
will have received preliminary training on basic computer functions, gaining access and
the skills required to take a computer survey.
Sample
A true experiment requires random assignment of the participants to a control
group or a treatment group (Creswell, 2009). Before randomly assigning individuals to
either the control group or the treatment group, the organizational landscape will be
detailed. Each participant in this study is assigned to the one of eight facilities where the participant primarily works. All eight facilities, when combined, comprise the target
organization. Table 3 presents the facility initials, the research name, the number of
assigned employees, and the number of organizationally provided e-mail assignments at
each facility. Two field experiment phases will occur to capture information. During
Phase 1, each facility will participate in its own experiment. The sample frame will be
one facility. During Phase 2, the association as a whole will receive one experiment, with
all facilities combined as the sample frame.
Table 3. Facility Name and Employee E-Mail Status

Real name   Research facility name           Employees   Association-provided e-mail   Some e-mail address
CLC         Field Test A                          60                  3                        20
ACY*        Field Test B* or Facility (A)         39                  5                        33
CORP*       Field Test C* or Facility (A)         22                  7                        22
NABS*       Field Test D* or Facility (A)          8                  1                         3
FW          Facility B                           384                 17                       325
HVY         Facility C                           120                 20                        80
NMY         Facility D                           358                 19                       250
APY         Facility E                           629                 31                       434

Note. * Facility scheduled as field test site if required. If not required, the remaining field test facilities will be combined as one facility and the data used as research data.
During Phase 1, the participants in the sample will be from the target population
of all employees at the target facility. Each employee assigned to the target facility will
have an opportunity to participate in the study. During Phase 2, the participants in the
sample will be from the target population of all employees at the participating
organization. All employees will have an opportunity to participate in each phase of the
study. The whole association employee total is approximately 1,600. Random assignment
to the treatment group or the control group will be based upon the suggestions from
Norušis (2008), who supported assigning random numbers to the individuals, sorting the
random numbers, and assigning the top half to one group and the bottom half to another
group. A sort from smallest to largest on the random number assigned to each employee
results in a random employee arrangement. The top half of the list becomes Group 0, and the bottom half of the list becomes Group 1. Kerlinger and Lee (2000) suggested
tossing a coin when assigning individuals to groups. A coin toss assigns the group
attributed with a zero to either treatment or control. The group attributed with a 1 fills the
other role.
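A minimal sketch of this assignment procedure in Python follows. It is illustrative only: the study performs these steps with its own random-number assignments, and the function and variable names here are assumptions of the sketch.

import random

def assign_groups(employee_ids, seed=None):
    # Assign each employee a random number, sort smallest to largest,
    # and split the sorted list in half (Norusis, 2008).
    rng = random.Random(seed)
    numbered = sorted((rng.random(), emp) for emp in employee_ids)
    half = len(numbered) // 2
    group_0 = [emp for _, emp in numbered[:half]]   # top half: Group 0
    group_1 = [emp for _, emp in numbered[half:]]   # bottom half: Group 1
    # A coin toss (Kerlinger & Lee, 2000) decides which group is treatment.
    if rng.random() < 0.5:
        return {"treatment": group_0, "control": group_1}
    return {"treatment": group_1, "control": group_0}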
During Phase 1, all facility employees will have a nonzero chance of assignment
to the control group or the treatment group. Random assignment to the treatment group or
the control group will be determined before the research begins. An announcement
mailed through the post office to all employees at the target organization will introduce
the e-mail access project to them. The mailings will occur at one facility at a time in
sequence. The same events will occur at each facility, but the intervention timing will
change relative to IT system implementation. In addition to announcing the e-mail access
project, the message will convey that mandatory training, for which each user must sign up, must occur before e-mail system access. The research consists of a pre- and
posttreatment survey. Each end user will be given the same survey at two points in time.
Those members in the treatment group receive training between the surveys, and those in
the control group do not receive training.
During Phase 2, all organization employees have a nonzero chance for assignment
to the control group or the treatment group. Random assignment to either group will be
determined before Phase 2 research begins. The research consists of a pre- and
posttreatment survey. Each end user will be given the same survey at two points in time,
with those members in the treatment group receiving e-mailed quizzes for 1 week, and if
they are among those who successfully answer the most questions correctly, also being
eligible for a reward incentive. Those in the control group will not receive the e-mailed
quizzes and will not be eligible for a reward incentive.
Intervention Effectiveness
An experiment methodology will be used to understand the effectiveness of an
intervention. The experiment will measure the observed difference between a treatment
group that received training or incentives and a control group that did not receive the
treatment. During Phase 1, the sample frame will be one facility. During Phase 2, the
sample frame will be the whole organization. The sample methods or the selection
criteria for participants will be different, depending on the research phase.
During Phase 1, the training lead time relative to IT system implementation will
be altered to determine whether there is an optimal effective training lead time. System
access time is controlled by username and password assignment by the organization. The
facilities are numbered by e-mail rollout order, that is, the first facility is Facility A, the
second facility is Facility B, and so on. The resulting facility research names are A, B, C,
D, and E. Facility A is the first rollout facility, and Facility E is the last rollout facility.
Access to Site
Candidate organizations interested in participating in this study were identified by sending letters to geographically local organizations meeting specific researcher-defined attributes. Initial contact with 259 candidate sites occurred through
mailing a one-page letter to the chief executive officer, the human resources officer, any
training or development personnel, and key IT individuals identified at businesses that
employ more than 250 people. The starting point was the Wisconsin city where the researcher lives. Counties were then selected by drawing successively larger circles around that origin on a Wisconsin county map. If a circle touched a county, that county was included in the weekly mailing.
With the target county list, institutions meeting the 250-employee research requirement were identified by combining two methods. The Wisconsin Department of Workforce Development (2007) detailed the largest employers in each county in Wisconsin. This list details the employer name, the industry type, and the employee size range. The selection criteria included organizations that employed more than 250 employees. This produced, for each targeted county, a list of institutions that employ
more than 250 employees. With this institution list, a search in the LexisNexis database
using the company name derived from the Wisconsin Department of Workforce
Development revealed the mailing addresses, executive names, and other personnel
positions targeted. Organizations with multiple locations had their company headquarters
selected to limit the selection.
This methodology resulted in a list that omitted some local companies known to the researcher that fit the selection criteria. Therefore, an additional step bolstered the results. Consulting the information published by the Wisconsin Department
of Workforce Development Office of Economic Advisors (2008), this researcher selected
targeted county profiles that revealed the 10 largest towns in each county. Using these
town names, a LexisNexis database search for all employers that employ 250 or more
employees resulted in a satisfactory institution list when coupled with the first method.
A letter mailed through the post office facilitated initial contact. Stage 1 included
163 initial letters. Stage 2 included 273 initial letters. Stage 3 included 173 letters. And
Stage 4 included 36 letters. A total of 645 letters were mailed to 259 companies in 14
counties. Letter mailing lasted 4 weeks. On average, each company contacted received
nearly 2.5 letters of invitation. However, the post office returned 18 letters as undeliverable. Ten institutions responded and indicated interest in participating
in this study. After a phone interview with each interested responding organization, an
ideal candidate emerged that fit the research requirements.
Setting
The target organization is a nonprofit that provides various hobby and sporting
activities for individuals and families. The target organization has eight separate facilities
located in southeastern Wisconsin. The facilities comprise one organization, which itself
is a subsidiary of a nationwide entity, the Young Men’s Christian Association (YMCA).
The target organization has approximately 1,600 staff. During the data collection phase,
the organization will implement e-mail at each facility. Before implementing e-mail, the organization must rely on expensive post office mailings and announcements on community bulletin boards; it cannot use e-mail as the primary communication vehicle because few staff members have an institutionally provided e-mail service.
Instrumentation and Measures
Venkatesh and Bala (2008) validated the scales that measure TAM constructs,
which include perceived ease of use and perceived usefulness. Both Venkatesh and Bala
gave this researcher permission to use the survey scales used in their research. Beyond
perceived ease of use and perceived usefulness information, the survey also captures
demographic and communication metrics that this researcher incorporated into the survey
instrument. The scales presented by Venkatesh and Bala are not changed, but the
instrument has demographic information merged and communication scales added, which
requires a field test for the instrument at the first few smaller facilities. A scale internal
consistency assessment will occur, and the derived Cronbach’s alpha will be reported.
During the field tests, if the survey instrument requires changes, the researcher will make
the changes before administering the survey beyond the field test environment. Because
this instrument requires validation and a field test, the scale internal consistency findings
will be reported within the Results section.
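To illustrate the internal consistency statistic that will be reported, a minimal Python sketch of Cronbach's alpha follows. The computation itself will be performed with statistical software; the function name and array layout here are assumptions of the sketch.

import numpy as np

def cronbach_alpha(item_scores):
    # item_scores: respondents-by-items array of responses for one scale.
    # alpha = k / (k - 1) * (1 - sum of item variances / variance of totals)
    items = np.asarray(item_scores, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)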
The changes in perceived ease of use and perceived usefulness from pretreatment
and posttreatment surveys will be measured. The same survey administered pretreatment
and posttreatment will capture perceived communication levels and demographic
information. The demographic information includes age, gender, education, years using
computers, and self-reported proficiency with computers. The survey instrument will be administered on paper or through an online medium. Perceived ease of use and perceived
usefulness are fundamental to the TAM and will help to predict and explain IT use (Davis
et al., 1989).
In both Phase 1 and Phase 2, participants in the treatment group and the control group will receive the same instrument twice. The survey will be delivered pretreatment to both groups, the treatment group will then receive the treatment while the control group will not, and the same instrument will then be delivered again to both groups.
The TAM, which was developed to predict actual system use, suggests that
perceived ease of use and perceived usefulness determine end user behavioral intention to
utilize an IT system (Venkatesh & Bala, 2008). Perceived ease of use and perceived
usefulness changes will be measured by comparing before and after surveys using
previously validated scales that measure perceived ease of use and perceived usefulness
variables from Venkatesh and Bala (2008). The scales validated by Venkatesh and Bala
are included in the survey instrument. Fowler (2009) indicated that a survey instrument
requires validation through a field pretest before implementation of the actual data
collection.
The target organization has no widely available, organizationally supplied e-mail but is interested in the effect of organizationally supplied e-mail on enabling communication within each facility, between facilities, and across the overall association. An additional
scale that measures end user communication perception before and after system
implementation will be added to the survey. Capturing information surrounding
communication aids in determining the changes in communication perceptions within the
organizational context and highlights any communication enhancements attributable to IT
system implementation.
In Phase 1, the first facility scheduled is relatively small and will serve as the field
test location for the survey instrument. The next facilities also are available as potential
field test locations if required. If no changes to the instrument are required, and if the
internal consistency of the instrument is higher than .7, they will participate with Facility
A, and their data will be used within the research. The development of the survey
instrument scales and their reported reliability and validity are discussed (Creswell,
2009).
Data Collection
Data collection will occur through various methods. The first survey, which will
be a paper copy, will be delivered via the mail service. The rationale for a manual survey and data entry is that some respondents have limited access to computers. The
surveys will be collected, hand entered into SPSS, and verified that the data have been
entered correctly. The posttest survey will be administered electronically because the
treatment enables computer access to all end users. This survey will utilize a survey service such as SurveyMonkey, Zoomerang, or another survey provider that the YMCA
prefers. Data analysis will be conducted using SPSS.
Organizational Intervention Training: Phase 1
The treatment in Phase 1 is training. Two training modes give the end users a
choice of preferred training mode. In Phase 1, all individuals at one facility will receive a
random assignment to either the treatment group or the control group. During Phase 1, an
instrument will measure perceived ease of use and perceived usefulness. The instrument
also will measure end user perspectives of organizational communication levels and
demographic information.
The treatment group will select one of two training modes, each of which has the
same information but a different delivery mechanism. One mode will be face-to-face
training delivered by an instructor in a classroom setting; the other mode will include
training recordings, including video and audio presentations, that will be available online.
The material covered in either training mode is the same, but one is a recording and the
other is instructor led. The control group will receive no training. Each individual has an
equal chance of being assigned to either the treatment group or the control group (see
Table 4). This sampling method is simple random sampling (Trochim & Donnelly, 2008).
The training design addresses two TAM constructs: perceived ease of use and
perceived usefulness. In a true experiment, measurement of the control group and the treatment group is the same, but the treatment group receives something that the control group does not; therefore, if everything else is controlled, the only difference between the groups is the treatment (D. T. Campbell & Stanley, 1963). In Phase 1, an
independent variable is the variation in training time relative to IT system
implementation. Measuring the effect of training lead time occurs by measuring the
change in perceived ease of use and perceived usefulness variables because the training
lead time varies per facility.
Table 4. Phase 1: Training

End user community (facility wide) completes survey (O1 treatment and O3 control)

(R) End users randomly assigned to treatment group or control group

Treatment (X): Training derived from focusing on perceived ease of use and perceived usefulness; each end user selects a preferred training method (online recorded presentation or instructor-facilitated classroom presentation)

Independent variable (training lead time):
Less than 14 days but greater than 7 days before implementation
Less than 7 days before implementation
Same day as implementation

Control group: No training

End user community (facility wide) completes survey (O2 treatment and O4 control)
Organizational Intervention Incentives: Phase 2
During Phase 2, postimplementation, all employees will have e-mail system access. Incentives will be the intervention in Phase 2. In this phase, all individuals in the organization will be randomly assigned to either the treatment group or the control group.
During Phase 2, the same instrument used in Phase 1 will be employed to again measure
end user IT system perceived ease of use and perceived usefulness. The instrument also
will capture demographic information and measure end user conceptions of existing
organizational communication levels.
The treatment group will receive a daily quiz through e-mail. The e-mail system
facilitates delivery and completed quiz return, requiring interaction with the system for
quiz delivery and response. Those employees who return the treatment quizzes with the most correct answers will receive a chance to win a $50.00 gift card to either a national submarine sandwich shop or a national coffeehouse chain (see Table 5).
Table 5. Phase 2: Incentives

End user community (organization wide) completes survey (O1 treatment and O3 control)

(R) End users randomly assigned to treatment group or control group

Treatment (X): End user community participates in a 5-day, once-daily online quiz on computer security and e-mail use

Independent variable (incentive to utilize system): Chance to win a $50.00 gift card to a national submarine shop or a national coffeehouse

Control group: No incentives to utilize system

End user community (organization wide) completes survey (O2 treatment and O4 control)
Data Analysis
Creswell (2009) guided the researcher in the development of plans for the data
analysis. Creswell suggested reporting descriptive statistics about the survey response
rates, indicating that graphs are appropriate. Creswell also suggested determining whether
response bias exists. The survey instrument uses scales from previously published
research. The combined instrument has not been validated as a whole, and Fowler (2009) noted that such an instrument requires field testing. The scale internal consistency for this instrument will be
determined using Cronbach’s alpha (Creswell, 2009). The survey instrument contains two
sections. The first part collects demographic information, and the second part collects
scales on perceived ease of use, perceived usefulness, and communication perceptions on
a 5-point Likert scale (see Appendix E). Creswell offered suggestions on statistical tests
that depend on data normality, and the statistical choices used depend on the results
received.
The researcher will use SPSS to process the data. A preimplementation intervention trains the end user community, with the lead time between training and actual IT system implementation varied across facilities. A postimplementation intervention incentivizes the end user
community to use the implemented system. Measuring the results occurs through surveys
administered pre- and posttreatment.
Phase 1 and Phase 2 data will determine the amount of change that is attributable
to treatment. The instrument will capture changes in perceived ease of use, perceived
usefulness, and communication perceptions between pretreatment and posttreatment that
will indicate the treatment effect. A data analysis through descriptive means will occur,
and depending on the data distribution, either parametric analysis or nonparametric
analysis will occur. Statistical resources will aid the specific method used depending on
the nature of the data (Cooper & Schindler, 2008; Creswell, 2009; Kerlinger & Lee,
2000; Norušis, 2008).
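As a sketch of how that distribution-dependent branch might look, the following Python fragment checks change scores for normality and selects a parametric or nonparametric comparison accordingly. It is illustrative only; the actual tests will follow the cited statistical resources and will be run in SPSS, and the specific test pairing shown is an assumption of the sketch.

from scipy import stats

def is_normal(sample, alpha=0.05):
    # Shapiro-Wilk test; a p value above alpha is consistent with normality.
    statistic, p_value = stats.shapiro(sample)
    return p_value > alpha

def compare_groups(treatment_change, control_change):
    # Parametric t test if both groups look normal; otherwise Mann-Whitney U.
    if is_normal(treatment_change) and is_normal(control_change):
        return stats.ttest_ind(treatment_change, control_change)
    return stats.mannwhitneyu(treatment_change, control_change)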
A nonparametric test for Research Questions 1 and 2 will test the hypotheses
using chi-square (Norušis, 2008). The data used to test these hypotheses will be from the
treatment group. The test variable list includes Questions 17 to 20 for Research Question
1 and Questions 13 to 16 for Research Question 2 (see Table 6). The grouping variables
used within SPSS are the different training groups whose training timing varies with
system implementation.
Table 6. Survey Instrument Measurement Scales
Question(s)       Measurement scale type
1, 7, 8, 9, 28    Scale
2                 Nominal
3-6; 10-27        Ordinal
1-12              Demographic
3-4               Computer skill perceptions
9-11              Educational attributes
13-16             Perceived usefulness scale (PU1, PU2, PU3, PU4)
17-20             Perceived ease of use scale (PEOU1, PEOU2, PEOU3, PEOU4)
21, 23, 25, 27    E-mail communication enable scale (EC1, EC2, EC3, EC4)
22, 24, 26        Association awareness (AA1, AA2, AA3)
28                Self-reported system utilization (USE1)
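A minimal Python illustration of the chi-square procedure described above follows. The study will run the equivalent test in SPSS; the function name and contingency-table layout are assumptions of this sketch.

from scipy import stats

def chi_square_by_training_group(contingency_table):
    # Rows: training-timing groups; columns: counts of responses in each
    # answer category for one survey item. Tests whether responses are
    # independent of training group.
    chi2, p_value, dof, expected = stats.chi2_contingency(contingency_table)
    return chi2, p_value, dof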
A parametric test for Research Questions 3 to 6 will test the hypotheses using one-way ANOVA (Norušis, 2008). The dependent list for Research Questions 3 and 5
includes Questions 17 to 20, and the dependent list for Research Questions 4 and 6
includes Questions 13 to 16. Research Questions 3 and 4 are interested in comparing the
amount of change end users experience from training. For Phase 1, two groups comprise
the training, that is, participants in the treatment group or those in the control group. For
Phase 2, the incentive phase has two groups, namely, those in the treatment group or
those in the control group. The result will be four total groupings, two from Phase 1 and
two from Phase 2. A value comparison derived from this procedure using ANOVA will
answer Research Questions 3 to 6. Post hoc analysis using the Bonferroni test (Norušis,
2008) will address Research Questions 7 and 8. The procedures used to test these
hypotheses will follow and reference Norušis as a guide.
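The following Python fragment sketches the logic of the one-way ANOVA followed by Bonferroni-corrected pairwise comparisons. The actual analysis will be run in SPSS per Norušis (2008); the names and the use of pairwise t tests for the post hoc step are assumptions of this sketch.

from itertools import combinations
from scipy import stats

def anova_with_bonferroni(groups, alpha=0.05):
    # groups: list of arrays of change scores, one per experimental grouping.
    f_statistic, p_value = stats.f_oneway(*groups)
    pairs = list(combinations(range(len(groups)), 2))
    corrected_alpha = alpha / len(pairs)  # Bonferroni correction
    post_hoc = []
    for i, j in pairs:
        t_statistic, pair_p = stats.ttest_ind(groups[i], groups[j])
        post_hoc.append((i, j, pair_p, pair_p < corrected_alpha))
    return f_statistic, p_value, post_hoc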
Validity and Reliability
Reliability deals with consistency and dependability, and validity refers to the standards used to judge research quality (Trochim & Donnelly, 2008). The research design
will embrace a positivistic research paradigm (Gephart, 1999). Validity and reliability
have threats associated with them, which the research design will attempt to address.
Taken together, validity and reliability define the quality and accuracy of the procedure
used in the research. The research method used is a true experiment. Cooper and
Schindler (2008) discussed validity in experimentation and indicated that there are two
major validity varieties: internal and external. An experimental design addresses internal
validity (Trochim & Donnelly, 2008). Controlling internal and external validity increases
the research value.
Internal Validity and Research Design
Creswell (2009) categorized internal validity threats as threats that involve
participants, researcher manipulations, and procedures used in the experiment. Threats to
internal validity are history, maturation, testing, instrumentation, regression, selection, mortality, and the interaction of selection and maturation (Cooper & Schindler, 2008; D. T. Campbell & Stanley, 1963). Russ-Eft and Hoover (2005) suggested three methods to address internal validity: individual random assignment, management of confounding factors, and use of multiple measurement methods. However, each method has a cost
associated with addressing the threat. Cooper and Schindler (2008) suggested seeking
internal validity and then external validity to balance the ideal scientific method with
what the target environment allows.
A true experimental design, such as presented in the research of D. T. Campbell
and Stanley (1963), allows the researcher to control for internal validity threats. The
research method uses a pretest-posttest control group design, which D. T. Campbell and Stanley identified as a true experimental design. Individual random assignment to the
treatment and control group then occurs. Russ-Eft and Hoover (2005) suggested
identifying, measuring, and controlling confounding factors. Reporting any relevant
confounding factors identified during the research effort informs future research.
Utilizing one instrument means that multiple measurement methods (Russ-Eft & Hoover,
2005) are not employed, which is a threat to internal validity.
There are costs associated with exercising too much control to increase internal
validity, including loss of external validity (Cone & Foster, 2006; Russ-Eft & Hoover,
2005). The ideal research design is strong in internal and external validity (D. T.
Campbell & Stanley, 1963). Cone and Foster (2006) offered the researcher advice when
they stated, “Remember that no research is perfect” (p. 280).
Data collection will occur through one instrument, but survey delivery will occur
through paper or an online survey delivery service. Using the same survey instrument to
measure the treatment effectiveness fails to offer converging evidence (Russ-Eft &
Hoover, 2005), which is a threat to internal validity. However, the survey instrument uses
scales to measure the variables, which address the same question in different forms
(Fowler, 2009). Checking each scale for internal consistency post hoc addresses
instrument internal validity.
External Validity
Generalizability refers to the ability to extend the research to other locations and
other venues. D. T. Campbell and Stanley (1963) detailed four threats to external validity
in an experiment, which are interaction of testing and X, interaction of selection and X,
reactive arrangements, and multiple X interference. Kerlinger and Lee (2000) asserted,
“The fact that one is participating in an experimental study may alter one’s normal
behavior” (pp. 477-478). Random assignment places the participants in either the
treatment group or the control group. The experiment will not overwhelm the end user
community, will sample the entire end user community, and will avoid obtrusive
measures (Russ-Eft & Hoover, 2005) by focusing the research on the e-mail system
installation process. All end users have an equal chance of assignment to either the
control group or the treatment group.
Creswell (2009) detailed three threats to external validity, namely, selection
interaction, setting interaction, and history interaction. D. T. Campbell and Stanley
(1963) called for greater external validity by maximizing experimental similarity.
Addressing the external validity threats that Creswell presented requires conducting
multiple experiments at different settings and different times, which will be accomplished
during Phase 1.
Generalizability to other places and other contexts requires framing the current context and describing the current organization, thus enabling the reader to determine
69
similar generalizability to other places and contexts. Trochim and Donnelly (2008)
discussed two methods to address threats to external validity: sampling model and
proximal similarity. Trochim and Donnelly noted that through proximal similarity, the
researcher presents similarity gradients with different contexts in terms of similarities.
Trochim and Donnelly suggested using proximal similarity by “describing the ways your
contexts differ from others by providing data about the degree of similarity between
various groups of people, places, and even times” (p. 36). Through proximal similarity,
detailing organizational traits present in the current study allows the reader to determine
other similar and comparable contexts that have similar traits. This presents the reader an
opportunity to find similar attributes within the research context that are applicable to
other contexts, external to the current research effort. Trochim and Donnelly suggested
conducting the study “in a variety of places, with different people, and at different times”
(p. 36). The Phase 1 experiment will occur across different facilities, at different times,
and with different individuals. Cooper and Schindler (2008) suggested, “Secure as much
external validity as is compatible with the internal validity requirements by making
experimental conditions as similar as possible to conditions under which the results will
apply” (p. 256).
Threats to Reliability
Threats to reliability arise from the survey questions, the survey instrument, and the
research methods; this section identifies those threats and the steps taken to address them.
Fowler (2009) stated, “Reducing measurement error through better question design is one
of the least costly ways to improve survey estimates” (p. 112). An instrument field test
will determine internal consistencies. Each question either captures a demographic
attribute or measures communication perceptions, perceived ease of use, or perceived
usefulness. Fowler reminded the researcher to be cognizant of the educational and
cultural backgrounds of the population. Question design focuses on asking the respondent
one thing at a time and keeping questions simple (Fowler, 2009).
Cooper and Schindler (2008) advocated survey instrument internal validity to add
to overall research reliability. The perceived ease of use and perceived usefulness survey
questions exist in previously published research. Venkatesh and Bala (2008) reported
internal consistencies of .92 for perceived ease of use and .93 for perceived usefulness
(see Appendix F). Cameron and Quinn (2006) cited similar results.
Reliability estimation establishes the degree of confidence in the reliability values
obtained (Trochim & Donnelly, 2008). Perceived ease of use and perceived usefulness
scales exist within the survey, and the rollout plan incorporates a research field test at the
first facilities. Hinkin (2005) stressed the importance of reporting internal consistency
reliability. Evidence has not supported one specific value for reliability, but researchers
have commonly applied a .7 cutoff (Kerlinger & Lee, 2000); the cutoff used here is .7.
During the field test, the internal consistency reliabilities of the survey instrument scales
will be reported.
The survey instrument is reliable if it consistently measures something precisely,
but Kerlinger and Lee (2000) indicated that an instrument that does not measure what it is
supposed to measure is not valid. The survey focus is to determine end user community
perceptions of the IT system’s ease of use and usefulness. According to Venkatesh and
Bala (2008), the scales, the same scales used here, have high internal consistency.
Phase 1 will be conducted preimplementation to measure the effectiveness of
training at addressing end user IT acceptance, assessed through perceived ease of use and
perceived usefulness. Phase 2 will be conducted postimplementation with the whole
organization to measure the effectiveness of incentives at enabling IT system utilization,
again assessed through perceived ease of use and perceived usefulness. Gephart (1999)
indicated that reliability is a criterion for assessing positivist research. Crow, Davis, and
Maxfield (1960) discussed experimental conclusions and stated that the “reliability of
experimental conclusions can also be increased by refining the experimental technique”
(p. 110). An instrument gathers pretreatment and posttreatment end user perceptions of
end user IT acceptance. Holton and Burnett (2005) discussed reliable measures, noting,
“[a] reliable measure is one that yields consistent results” (p. 35). Because the first
experiment will iterate five times, any refinements to the experimental technique will be
documented in the methodology section, and anomalies found in the data that require
explanation will receive attention in the results section.
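
To make the pretreatment-versus-posttreatment comparison concrete, the sketch below contrasts treatment and control change scores with an independent-samples t test. This is one plausible analysis under the stated design, not the study's prescribed procedure, and the scale means shown are invented for illustration.

from scipy import stats

# Invented pre/post perceived-usefulness scale means per respondent.
treatment_pre  = [3.0, 2.8, 3.2, 3.1, 2.9]
treatment_post = [3.9, 3.5, 4.0, 3.8, 3.6]
control_pre    = [3.1, 2.9, 3.0, 3.2, 3.0]
control_post   = [3.2, 3.0, 3.1, 3.1, 3.0]

# Change scores capture each individual's pretreatment-to-posttreatment shift.
treatment_gain = [post - pre for pre, post in zip(treatment_pre, treatment_post)]
control_gain   = [post - pre for pre, post in zip(control_pre, control_post)]

# Compare the gains of the two randomly assigned groups.
t_stat, p_value = stats.ttest_ind(treatment_gain, control_gain)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")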
Limitations
Two phases will comprise the research effort. During Phase 1, each facility will
receive training as an intervention delivered within an experiment. After the posttest data
collection at a facility is complete, the control group at that facility will receive the
training previously offered to the treatment group so that all members receive training on
the e-mail application. Also during Phase 1, multiple training modes and methods are
available to members of the treatment group so that they can choose their preferred
training method. The training presented to respondents includes handouts, with delivery
through different venues such as self-paced, online, and face-to-face sessions. The
training design addresses e-mail ease of use and e-mail usefulness. During Phase 2,
incentive effectiveness at adjusting end user IT acceptance, captured through the
perceived ease of use and perceived usefulness variables, will be measured after all
individuals have completed Phase 1 training. Incentives have different effects on different
populations, and the incentive selected may not appeal to all people, or may not appeal to
all people equally.
Ethical Considerations
There are minimal potential risks involved in participating in this study. The
survey captures demographic information, which presents a small risk that specific survey
responses could be linked to individual respondents. The researcher will keep the paper
surveys in a secure location external to the organization; after entering the data into the
computer, the researcher will keep the paper surveys in a locked, fireproof cabinet. A
biometrically secured laptop will maintain data file security. Endpoint security software
runs on the laptop, and both the operating system and the endpoint security software
receive regular updates. A locked, fireproof cabinet also houses the data file backups.
After publishing the results, the researcher will destroy all paper surveys, data files, and
backup data.
The survey captures demographic information, communication perceptions within
the organization, IT perceived ease of use, and IT perceived usefulness. These measures
carry no inherent concerns or risks for respondents answering the survey questions.
Maintaining respondent confidentiality and anonymity is highly valued, and data
reporting will occur through aggregated results only.
Utilizing the Belmont Report as a guide will help the researcher to address and
remediate ethical concerns. The Belmont Report is concerned with respect for persons,
beneficence, and justice (National Institutes of Health, 1979). The principles within the
Belmont Report will guide the research by focusing on the participants’ well-being
throughout any activities.
All participants will receive an informed consent form, which they must sign
before taking part in any research. It provides information about the study and the
voluntary nature of participation. A locked, fireproof cabinet will hold the informed
consent forms. There are no foreseeable risks inherent in participating in this research for
the participants, the facilities, or the target organization, and the individuals, the facilities,
and the target organization all stand to benefit from the results derived from this research.
REFERENCES
Adams, D. A., Nelson, R. R., & Todd, P. A. (1992). Perceived usefulness, ease of use, and usage of information technology: A replication. MIS Quarterly, 16(2), 227-247.
Bagozzi, R. P. (2007). The legacy of the technology acceptance model and a proposal for a paradigm shift. Journal of the Association for Information Systems, 8(4), 244-254.
Bartlett, C. A., Ghoshal, S., & Birkinshaw, J. (2004). Transnational management: Texts, cases, and readings in cross-border management (4th ed.). New York, NY: McGraw-Hill Irwin.
Baum, H., & Kling, T. (2004). The transparent leader: How to build a great company through straight talk, openness, and accountability. New York, NY: HarperCollins.
Bierstaker, J. L., Brody, R. G., & Pacini, C. (2006). Accountants’ perceptions regarding fraud detection and prevention methods. Managerial Auditing Journal, 21(5), 520.
Brookfield, S. D. (2006). The skillful teacher: On technique, trust, and responsiveness in the classroom (2nd ed.). San Francisco, CA: Jossey-Bass.
Brynjolfsson, E., & Hitt, L. M. (2003). Computing productivity: Firm-level evidence. Review of Economics & Statistics, 85(4), 793-808.
Cameron, K. S., & Quinn, R. E. (2006). Diagnosing and changing organizational culture: Based on the competing values framework (Rev. ed.). San Francisco, CA: Jossey-Bass.
Campbell, D. E. (2006). Incentives: Motivation and the economics of information (2nd ed.). New York, NY: Cambridge University Press.
Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research on teaching. In N. L. Gage (Ed.), Handbook of research on teaching (pp. 171-246). Chicago, IL: Rand McNally.
Carlson, P. A. (2000). Information technology and the emergence of a worker-centered organization. ACM Journal of Computer Documentation, 24(4), 204-212. doi:10.1145/353927.353930
Cohen, D. S. (2005). Why change is an affair of the heart: In the drama of change, emotions, not logic, impel people to cast off the old and embrace the new. Chief Information Officer, 19(5), 1.
Compeau, D. R., & Higgins, C. A. (1995). Computer self-efficacy: Development of a measure and initial test. MIS Quarterly, 19(2), 189-211.
Cone, J. D., & Foster, S. L. (2006). Dissertations and theses from start to finish: Psychology and related fields (2nd ed.). Washington, DC: American Psychological Association.
Cooper, D. R., & Schindler, P. S. (2008). Business research methods (10th ed.). New York, NY: Irwin/McGraw-Hill.
Creswell, J. W. (2009). Research design: Qualitative, quantitative, and mixed methods approaches (3rd ed.). Thousand Oaks, CA: Sage.
Crow, E. L., Davis, F. A., & Maxfield, M. W. (1960). Statistics manual: With examples taken from ordnance development. New York, NY: Dover.
Daft, R. L. (2004). Organization theory and design (8th ed.). Mason, OH: South-Western.
David, J. S., Schuff, D., & St. Louis, R. (2002). Managing your total IT cost of ownership. Communications of the ACM, 45(1), 101-106. doi:10.1145/502269.502273
Davis, F. D. (1986). A technology acceptance model for empirically testing new end-user information systems: Theory and results (Doctoral dissertation). Retrieved from http://hdl.handle.net/1721.1/15192
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319-340.
Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of computer technology: A comparison of two theoretical models. Management Science, 35(8), 982-1003.
Dholakia, U. M., Bagozzi, R. P., & Gopinath, M. (2007). How formulating implementation plans and remembering past actions facilitate the enactment of effortful decisions. Journal of Behavioral Decision Making, 20(4), 343-364. doi:10.1002/bdm.562
Dillman, D. A., Smyth, J. D., & Christian, L. M. (2009). Internet, mail and mixed-mode surveys: The tailored design method (3rd ed.). Hoboken, NJ: John Wiley & Sons.
Eisenberger, R., & Cameron, J. (1996). Detrimental effects of reward: Reality or myth? American Psychologist, 51(11), 1153-1166. doi:10.1037/0003-066x.51.11.1153
Eisenhardt, K. M. (1985). Control: Organizational and economic approaches. Management Science, 31(2), 134-149.
Faguet, D. (2003). Practical financial management: A guide for today’s manager. Hoboken, NJ: John Wiley & Sons.
Fishbein, M., & Ajzen, I. (1975). Belief, attitude, intention and behavior: An introduction to theory and research. Boston, MA: Addison-Wesley.
Fowler, F. J. (2009). Survey research methods (Vol. 1., 4th ed.). Los Angeles, CA: Sage.
Freedman, R. (2003). Building the IT consulting practice. San Francisco, CA: Jossey-Bass.
Gephart, R. (1999). Paradigms and research methods. Retrieved from http://division.aomonline.org/rm/1999_RMD_Forum_Paradigms_and_Research_Methods.htm
Ginzberg, M. J. (1981a). Early diagnosis of MIS implementation failure: Promising results and unanswered questions. Management Science, 27(4), 459-478.
Ginzberg, M. J. (1981b). Key recurrent issues in the MIS implementation process. MIS Quarterly, 5(2), 47-59.
Glen, P. (2003). Leading geeks: How to manage and lead people who deliver technology. San Francisco, CA: Jossey-Bass.
Gulliksen, J., Göransson, B., Boivie, I., Blomkvist, S., Persson, J., & Cajander, Å. (2003). Key principles for user-centred systems design. Behaviour & Information Technology, 22(6), 397-409.
Heskett, J. L., Sasser, W. E., & Schlesinger, L. A. (1997). The service profit chain: How leading companies link profit and growth to loyalty, satisfaction, and value. New York, NY: Free Press.
Hinkin, T. R. (2005). Scale development principles and practices. In R. A. Swanson & E. F. Holton III (Eds.), Research in organizations: Foundations and methods of inquiry (pp. 161-179). San Francisco, CA: Berrett-Koehler.
Hoffer, J. A., George, J. F., & Valacich, J. S. (2008). Modern systems analysis and design (5th ed.). Upper Saddle River, NJ: Pearson Prentice Hall.
Holton, E. F., & Burnett, M. F. (2005). The basics of quantitative research. In R. A. Swanson & E. F. Holton III (Eds.), Research in organizations: Foundations and methods of inquiry (pp. 29-44). San Francisco, CA: Berrett-Koehler.
Järvinen, P. (2000). Research questions guiding selection of an appropriate research method. In H. R. Hansen, M. Bichler, & H. Mahrer (Eds.), Proceedings of the European Conference on Information Systems 2000, 3-5 July (pp. 124-131). Vienna, Austria: Vienna University of Economics and Business Administration.
Järvinen, P. (2008). Mapping research questions to research methods. In D. Avison, G. Kasper, B. Pernici, I. Ramos, & D. Roode (Eds.), Advances in information systems research, education & practice (pp. 29-41). Boston, MA: Springer. doi:10.1007/978-0-387-09682-7-9_3
Joshi, K. (2005). Understanding user resistance and acceptance during the implementation of an order management system: A case study using the equity implementation model. Journal of Information Technology Case and Application Research, 7(1), 6-20. Retrieved from ABI/INFORM Global database. (Document ID: 874990641)
Kang, D., & Santhanam, R. (2003). A longitudinal field study of training practices in a collaborative application environment. Journal of Management Information Systems, 20(3), 257-281.
Kerlinger, F. N., & Lee, H. B. (2000). Foundations of behavioral research (4th ed.). London, England: Thomson Learning.
Knowles, M. S., Holton, E. F., & Swanson, R. A. (2005). The adult learner: The definitive classic in adult education and human resource development (6th ed.). Burlington, MA: Elsevier.
Kouzes, J. M., & Posner, B. Z. (2007). The leadership challenge (4th ed.). San Francisco, CA: Jossey-Bass.
Lindborg, H. J. (1997). The basics of cross-functional teams. New York, NY: Quality Resources.
Lucas, H. C. (1999). Information technology and the productivity paradox: Assessing the value of investing in IT. New York, NY: Oxford University Press.
Lyytinen, K., & Hirschheim, R. (1987). Information systems failures: A survey and classification of the empirical literature. In Oxford surveys in information technology (pp. 257-309). Oxford, England: Oxford University Press.
Martinko, M. J., Henry, J. W., & Zmud, R. W. (1996). An attributional explanation of individual resistance to the introduction of information technologies in the workplace. Behaviour & Information Technology, 15(5), 313-330.
Martins, L. L., & Kellermanns, F. W. (2004). A model of business school students’ acceptance of a web-based course management system. Academy of Management Learning & Education, 3(1), 7-26.
Mathieson, K. (1991). Predicting user intentions: Comparing the technology acceptance model with the theory of planned behavior. Information Systems Research, 2(3), 173-191.
Maurer, M. M., & Simonson, M. R. (1984). Development and validation of a measure of computer anxiety. Paper presented at the annual meeting of the Association for Educational Communication and Technology, Dallas, TX. Retrieved from http://eric.ed.gov/ERICWebPortal/detail?accno=ED243428
McAllister, C. A. (2006). Requirements determination of information systems: User and developer perceptions of factors contributing to misunderstandings. (Doctoral dissertation). Retrieved from ProQuest Dissertation & Theses database. (AAT 3226800)
Moore, G. C., & Benbasat, I. (1991). Development of an instrument to measure the perceptions of adopting an information technology innovation. Information Systems Research, 2, 192-222.
National Institutes of Health. (1979). The Belmont report: Ethical principles and guidelines for the protection of human subjects of research. Retrieved from http://ohsr.od.nih.gov/guidelines/belmont.html
Norušis, M. J. (2008). SPSS 16.0 guide to data analysis. Upper Saddle River, NJ: Prentice Hall.
O’Brien, J. A., & Marakas, G. M. (2008). Introduction to information systems (14th ed.). New York, NY: McGraw-Hill/Irwin.
Oxford American college dictionary. (2002). New York, NY: G. P. Putnam & Sons.
Ramlall, S. (2004). A review of employee motivation theories and their implications for employee retention within organizations. Journal of American Academy of Business, Cambridge, 5(1/2), 52.
Royce, W. (1998). Software project management: A unified framework. Reading, MA: Addison Wesley Longman.
Russ-Eft, D., & Hoover, A. L. (2005). Experimental and quasi-experimental designs. In R. A. Swanson & E. F. Holton III (Eds.), Research in organizations: Foundations and methods of inquiry (pp. 75-95). San Francisco, CA: Berrett-Koehler.
Sabherwal, R., Jeyaraj, A., & Chowa, C. (2006). Information system success: Individual and organizational determinants. Management Science, 52(12), 1849-1864.
Schein, E. H. (2004). Organizational culture and leadership (3rd ed.). San Francisco, CA: Jossey-Bass.
Schein, E. H. (2009). The corporate culture survival guide (Rev. ed.). San Francisco, CA: Jossey-Bass.
Schenk, K. D., Vitalari, N. P., & Davis, K. S. (1998). Differences between novice and expert systems analysts: What do we know and what do we do? Journal of Management Information Systems, 15(1), 9-50.
Senge, P. M. (2006). The fifth discipline: The art & practice of the learning organization. New York, NY: Currency Doubleday.
Sharma, R., & Yetton, P. (2003). The contingent effects of management support and task interdependence on successful information systems implementation. MIS Quarterly, 27(4), 533-555.
Sharma, R., & Yetton, P. (2007). The contingent effects of training, technical complexity, and task interdependence on successful information systems implementation. MIS Quarterly, 31(2), 219-238.
Sherif, K., Zmud, R. W., & Browne, G. J. (2006). Managing peer-to-peer conflicts in disruptive information technology innovations: The case of software reuse. MIS Quarterly, 30(2), 339-356.
Siponen, M. T., & Oinas-Kukkonen, H. (2007). A review of information security issues and respective research contributions. SIGMIS Database, 38(1), 60-80. doi:10.1145/1216218.1216224
Taylor-Cummings, A. (1998). Bridging the user-IS gap: A study of major information systems projects. Journal of Information Technology, 13(1), 29-54.
Teo, T. S. H., & Ang, J. S. K. (2001). An examination of major IS planning problems. International Journal of Information Management, 21(6), 457-470. doi:10.1016/s0268-4012(01)00036-6
Todd, P., & Benbasat, I. (1999). Evaluating the impact of DSS, cognitive effort, and incentives on strategy selection. Information Systems Research, 10(4), 356-374.
Trochim, W. M. K., & Donnelly, J. P. (2008). The research methods knowledge base (3rd ed.). Mason, OH: Atomic Dog.
Venkatesh, V. (2000). Determinants of perceived ease of use: Integrating control, intrinsic motivation, and emotion into the technology acceptance model. Information Systems Research, 11(4), 342.
Venkatesh, V., & Bala, H. (2008). Technology acceptance model 3 and a research agenda on interventions. Decision Sciences, 39(2), 273-315. doi:10.1111/j.1540-5915.2008.00192.x
Venkatesh, V., & Davis, F. D. (2000). A theoretical extension of the technology acceptance model: Four longitudinal field studies. Management Science, 46(2), 186.
Venkatesh, V., & Morris, M. G. (2000). Why don’t men ever stop to ask for directions? Gender, social influence, and their role in technology acceptance and usage behavior. MIS Quarterly, 24(1), 115.
Ward, J., & Peppard, J. (2002). Strategic planning for information systems (3rd ed.). New York, NY: John Wiley & Sons.
Webster, J., & Martocchio, J. J. (1992). Microcomputer playfulness: Development of a measure with workplace implications. MIS Quarterly, 16(2), 201-226.
Wisconsin Department of Workforce Development. (2007). Private-sector and public-sector establishments. Retrieved from http://dwd.wisconsin.gov/oea/largest_employers/largest_employers_march_2007_all_ownership.xls
Wisconsin Department of Workforce Development, Office of Economic Advisors. (2008). County workforce profiles in Wisconsin 2008. Retrieved from http://dwd.wisconsin.gov/oea/county_profiles/current.htm
Yin, R. K. (2009). Case study research: Design and methods (4th ed.). Thousand Oaks, CA: Sage.
APPENDIX A. INTERVENTIONS FOR PRE-IT SYSTEM IMPLEMENTATION
Table A1. Determinants of Perceived Usefulness

Interventions (table columns): design characteristics, user participation, management support, incentive alignment.

Subjective norm: X* X X
Image: X X
Job relevance: X X X X
Output quality: X X X X
Result demonstrability: X X X X

*X indicates a particular intervention can potentially influence a particular determinant of perceived usefulness or perceived ease of use.
Note. Adapted from “Technology Acceptance Model 3 and a Research Agenda on Interventions,” by V. Venkatesh and H. Bala, 2008, Decision Sciences, 39(2), p. 293.
Table A2. Determinants of Perceived Ease of Use

Interventions (table columns): design characteristics, user participation, management support, incentive alignment.

Computer self-efficacy: (none)
Perceptions of external control: X* X
Computer anxiety: X
Computer playfulness: X
Perceived enjoyment: X X X
Objective usability: X X

*X indicates a particular intervention can potentially influence a particular determinant of perceived usefulness or perceived ease of use.
Note. Adapted from “Technology Acceptance Model 3 and a Research Agenda on Interventions,” by V. Venkatesh and H. Bala, 2008, Decision Sciences, 39(2), p. 293.
APPENDIX B. INTERVENTIONS FOR POST-IT SYSTEM IMPLEMENTATION
Table B1. Determinants of Perceived Usefulness

Interventions (table columns): training, organizational support, peer support.

Subjective norm: X*
Image: X
Job relevance: X X X
Output quality: X X X
Result demonstrability: X X X

*X indicates a particular intervention can potentially influence a particular determinant of perceived usefulness or perceived ease of use.
Note. Adapted from “Technology Acceptance Model 3 and a Research Agenda on Interventions,” by V. Venkatesh and H. Bala, 2008, Decision Sciences, 39(2), p. 293.
Table B2. Determinants of Perceived Ease of Use

Interventions (table columns): training, organizational support, peer support.

Computer self-efficacy: X*
Perceptions of external control: X X
Computer anxiety: X X
Computer playfulness: X
Perceived enjoyment: X
Objective usability: X

*X indicates a particular intervention can potentially influence a particular determinant of perceived usefulness or perceived ease of use.
Note. Adapted from “Technology Acceptance Model 3 and a Research Agenda on Interventions,” by V. Venkatesh and H. Bala, 2008, Decision Sciences, 39(2), p. 293.
APPENDIX C. TECHNOLOGY ACCEPTANCE MODEL

Figure C1. Technology acceptance model. [Figure: external variables feed perceived usefulness and perceived ease of use, which drive behavioral intention and, in turn, use behavior.] Note. Adapted from “Technology Acceptance Model 3 and a Research Agenda on Interventions,” by V. Venkatesh and H. Bala, 2008, Decision Sciences, 39(2), p. 276.
APPENDIX D. TECHNOLOGY ACCEPTANCE MODEL 3
Figure D1. TAM-3. Note. Adapted from “Technology Acceptance Model 3 and a Research Agenda on Interventions,” by V. Venkatesh and H. Bala, 2008, Decision Sciences, 39(2), p. 280.
APPENDIX E. SURVEY INSTRUMENT
Thank you for taking the time to respond to this survey. Your responses are important for understanding how effectively the events that the association organizes address end user IT acceptance. Your help is greatly appreciated.
SURVEY INSTRUCTIONS:
• Do not write any personally identifiable information on the survey, as your answers are anonymous and confidential.
• There are no right or wrong answers on this survey. This survey explores your perceptions of computer applications. Based on your experience with computer applications, choose the appropriate response.
• Be sure to mark your answers carefully. Most questions ask you to circle the one value that best matches how you feel about the item. As an example, if you were asked how much you agree with the statement, “Winters are cold in Wisconsin,” and you feel you agree, you would circle “Agree,” like this:

A. Winters are cold in Wisconsin.
(1) Strongly disagree  (2) Disagree  (3) Neutral  (4) Agree  (5) Strongly agree
• Note that the scale descriptions change throughout the survey. For example, the sample item asked whether you agree or disagree, whereas another question may ask how useful something is. Special instructions will highlight these changes, so please read the instructions.
• This survey should take less than 7 minutes to complete and is voluntary.
• Thank you for your time and cooperation.
Section 1: Demographics. This information will allow comparisons among different groups of people. Your responses are anonymous and confidential. Below each question, mark or write the answer that best describes you.
1. What was your age (in years) on your last birthday?
2. Are you (circle one): Male  Female

3. Describe your computer skills (circle one): Extremely useful / Of considerable use / Of use / Not very useful / Of no use

4. Describe how people who know you would rate your computer skills (circle one): Very satisfactory / Satisfactory / Borderline / Unsatisfactory / Very unsatisfactory

5. Describe your proficiency with computers (circle one): Very satisfactory / Satisfactory / Borderline / Unsatisfactory / Very unsatisfactory

6. Describe your comfort level with computers (circle one): Very good / Good / Borderline / Poor / Very poor

7. Roughly, how many years have you been using computers?

8. As of January 1, 2011, how many years have you worked at the YMCA of the Fox Cities? (Estimates are okay.)

9. Total years of formal education received. (Only count complete years.)

10. Select the highest degree completed (circle one): Did not graduate / Grade school / High school / Associate degree / Bachelor’s degree / Master’s or higher degree

11. Select the highest degree started (circle one): Grade school / High school / Associate degree / Bachelor’s degree / Master’s degree / PhD

12. Indicate the job level that best describes you (circle one): Employee / Team leader or supervisor / Mid-level manager / Manager / Executive / Not employed
Section 2: Using your perceptions of e-mail, answer the following questions. Circle the one response that best describes your answer. Items 13 through 27 use the scale (1) Strongly disagree, (2) Disagree, (3) Neutral, (4) Agree, (5) Strongly agree.

13. Using e-mail improves my performance in my job.
14. Using e-mail in my job increases my productivity.
15. Using e-mail enhances my effectiveness in my job.
16. I find e-mail to be useful in my job.
17. My interaction with e-mail is clear and understandable.
18. Interacting with e-mail does not require a lot of my mental effort.
19. I find e-mail to be easy to use.
20. I find it easy to get e-mail to do what I want it to do.
21. E-mail increases my ability to communicate.
22. I know what is going on within the association.
23. I believe that e-mail enables communication within the organization.
24. I know what is going on at my branch facility.
25. I believe that e-mail enables communication at my branch facility.
26. I know what is going on within my department.
27. I believe that e-mail enables communication within my department.
28. On average, how much time do you spend using e-mail each day?
APPENDIX F. SURVEY INSTRUMENT SCALES
Scales used with permission from Venkatesh and Bala.
Table F1. Perceived Usefulness Scales
Using the system improves my performance in my job.
Using the system in my job increases my productivity.
Using the system enhances my effectiveness in my job.
I find the system to be useful in my job.
Table F2. Perceived Ease of Use Scales
My interaction with the system is clear and understandable.
Interacting with the system does not require a lot of my mental effort.
I find the system to be easy to use.
I find it easy to get the system to do what I want it to do.
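
For illustration, the sketch below averages the four perceived usefulness items and the four perceived ease of use items into scale scores, keyed here to survey items 13-16 and 17-20; the response values and the dictionary layout are hypothetical, not actual data.

def scale_score(responses, item_numbers):
    """Mean of the 1-5 Likert responses for the items composing one scale."""
    values = [responses[item] for item in item_numbers]
    return sum(values) / len(values)

# Hypothetical single-respondent answers keyed by survey item number.
responses = {13: 4, 14: 4, 15: 5, 16: 4,   # perceived usefulness items
             17: 3, 18: 4, 19: 4, 20: 3}   # perceived ease of use items

pu = scale_score(responses, [13, 14, 15, 16])
peou = scale_score(responses, [17, 18, 19, 20])
print(f"Perceived usefulness = {pu:.2f}, perceived ease of use = {peou:.2f}")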