Computers in Human Behavior 22 (2006) 364–380
www.elsevier.com/locate/comphumbeh
Different experiences, different effects: a longitudinal study of learning a computer
program in a network environment
Zheng Yan *
Department of Educational and Counseling Psychology, School of Education, University at Albany,
Albany, NY 12222, USA
Available online 11 November 2004
Abstract
Students' previous computer experience has been widely considered an important factor affecting subsequent computer performance. However, little research has been done to examine the contributions of different types of computer experience to computer performance at different time points. The present study compared the effects of four types of computer experience on 30 graduate students' learning of a statistical program over one semester. Among the four types of computer experience, students' earlier experience of using computer network systems was found to affect their initial performance of learning the statistics program, but the experience of using statistical programs, the experience of using email programs, and the length of using computers did not. These findings suggest complex relationships between students' computer experience and their computer performance and have implications for both learning and teaching computer programs and understanding the transfer of learning.
© 2004 Elsevier Ltd. All rights reserved.
Keywords: Computer experience; Computer performance; Computer network; Longitudinal research;
Multilevel growth modeling
0747-5632/$ - see front matter © 2004 Elsevier Ltd. All rights reserved.
doi:10.1016/j.chb.2004.09.005
* Tel.: + 1 518 442 5060; fax: + 1 518 442 4953.
E-mail address: [email protected].
1. Introduction
How is students' prior experience of using computers associated with their subsequent computer-based performance? Will a student with six years of computer experience learn a new computer program faster or better than another student with only one year of experience? Is it true that the richer the computer experience that students have, the better the computer performance they demonstrate? The extensive computer experience research (e.g., Chua, Chen, & Wong, 1999; Rozell & Gardner, 1999, 2000; Smith, Caputi, Crittenden, & Jayasuriya, 1999) indicates that these questions are far more complex than what might generally be thought. Researchers have found, for instance, that computer performance is influenced by both direct computer experience, such as previous hands-on usage of different computer programs, and indirect experience, such as simply observing other people's computer-based activities (Anderson & Reed, 1998; Jones & Clark, 1995; Smith et al., 1999), and by both positive and negative experience (Reed, Oughton, Ayersman, Ervin Jr., & Giessler, 2000; Rosen & Weil, 1995; Weil & Rosen, 1995). The present study, building on the existing research, further examined the complexity of different types of computer experience influencing computer performance at different time points.
1.1. Different types of computer experience
Four indicators have widely been used to examine general computer experience: length of time using computers, frequency of using computers, computer ownership, and computer courses taken (Karsten & Roth, 1998; Mitra, 1998; Nichols, 1992; Potosky & Bobko, 1998; Smith et al., 1999; Taylor & Mounfield, 1994). Taylor and Mounfield (1994) found, for instance, that both computer ownership and high-school computer courses influenced the introductory programming course scores of 656 college students. However, these commonly used indicators of general computer experience have limitations. For example, frequency of using computers might be a poor indicator since some experienced users may not need to spend much time on computers, and owning a computer may no longer be a valid indicator since computer ownership is becoming increasingly common. In contrast, researchers found that students' prior experience of using specific computer programs such as Word or Netscape affects their computer performance with specific tasks such as word processing or Internet navigation (Born & Cummings, 1994; Dusick, 1998; Karsten & Roth, 1998; Kay, 1993; Kirkman, 1993; Mitra, 1998; Reed, Ayersman, & Liu, 1996, 2000; Reed & Giessler, 1995; Schumacher & Morahan-Martin, 2001). Reed and his collaborators (Reed et al., 2000; Reed & Giessler, 1995), for instance, found that students with much experience of using programming languages and authoring tools took more linear navigating steps in using a hypermedia program, whereas those with much experience of using word processing, spreadsheets, and databases took more nonlinear navigating steps.
In addition to general and specific computer experience, empirical evidence has suggested that students' computer experience differs not only in quantity (e.g., five years of experience vs. 10 years of experience) but also in quality (e.g., simple experience vs. advanced experience) (Busch, 1995; Cassidy & Eachus, 2002; Potosky & Bobko, 1998; Schumacher & Morahan-Martin, 2001; Torkzadeh & Koufteros, 1994). For example, researchers found that gender differences existed only in advanced-level performance but not in beginning-level performance of both using word processing and spreadsheet programs (Busch, 1995) and navigating the Internet (Schumacher & Morahan-Martin, 2001). Gender differences were also reported in students' computer self-efficacy for advanced skills but not for beginning skills (Busch, 1995; Torkzadeh & Koufteros, 1994).
With the rapid spread of Internet use, researchers started to differentiate the experience of using personal computers (PCs) from that of using networked computers (NCs) (Anderson & Reed, 1998; Born & Cummings, 1994; Cassidy & Eachus, 2002; Dusick, 1998; Karsten & Roth, 1998; Rosen, Sears, & Weil, 1993; Schumacher & Morahan-Martin, 2001). Rosen et al. (1993), for instance, surveyed 204 college students on their personal computer experience (e.g., using word processing to do homework) and networked computer experience (e.g., using different computer network systems on campus). They found these two types of computer experience had different influences on students' computer phobia. Students who used network systems showed less computer phobia than those who did not.
Although there is a wide variety of computer experience (e.g., general vs. specific, simple vs. complex, and PC-based vs. NC-based), little is known about the unique contributions of each type of computer experience to subsequent computer performance. Although researchers have demonstrated how one type of computer experience impacts computer use, little is known about whether one type of computer experience is more important than another. Thus, studies comparing the effects of different types of computer experience are needed.
1.2. Effects of computer experience at different times
The complex relationship between computer experience and computer performance is manifested not only by different types of computer experience but also by effects of computer experience at different time points. Certain types of computer experience might help reduce students' initial learning difficulties but might not affect their subsequent learning performance. Conversely, other types of experience might affect the subsequent performance but not the initial one, while others might affect both or neither. Studying the effects at different times requires using a longitudinal research method. In general, a longitudinal study has three fundamental features: (a) asking questions about change of variables over time (e.g., initial performance and later improvement), (b) collecting data at multiple points in time (e.g., administering a survey three times a year), and (c) analyzing data with appropriate methods (e.g., repeated ANOVA, regression analysis, time series analysis, and, more recently, multilevel growth modeling) (Bryk & Raudenbush, 2002; Diggle, Liang, & Zeger, 1994; Goldstein, 1995; Singer & Willett, 2003).
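The second of these features, collecting data at multiple points in time, implies a particular data layout before any growth model can be fitted: one row per person per measurement occasion ("person-period" or long format). The sketch below is purely illustrative, with hypothetical student IDs and scores rather than the study's data.

```python
# Illustrative sketch (not the study's data): longitudinal records are usually
# converted from "wide" format (one row per person, one column per occasion)
# to "person-period" long format (one row per person per occasion) before
# fitting a growth model.

def to_person_period(wide_rows):
    """Convert wide records {id: [score_t0, ..., score_tn]} into long format."""
    long_rows = []
    for student_id, scores in wide_rows.items():
        for occasion, score in enumerate(scores):
            long_rows.append({"id": student_id, "time": occasion, "score": score})
    return long_rows

# Hypothetical example: two students measured at three time points.
wide = {1: [10, 8, 6], 2: [5, 5, 4]}
long = to_person_period(wide)
print(long[0])  # {'id': 1, 'time': 0, 'score': 10}
```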
In contrast to the large amount of research examining different types of computer experience, longitudinal studies on computer experience are extremely limited. Only three published longitudinal studies relevant to computer experience research were located (Rosen et al., 1993; Rozell & Gardner, 1999, 2000). Rozell and Gardner (1999), for instance, studied various factors affecting computer-related performance in a computer-training course with 75 manufacturing workers. They found that participants' past computer experience influenced their computer performance through their attitudes toward computers as the mediating variable. Although they collected longitudinal data at six time points, none of the major research questions focused on longitudinal changes, nor did the regression analysis report estimated regression parameters such as intercepts of change and slopes of change. Following their 1999 study, Rozell and Gardner (2000) examined a large longitudinal data set of 600 undergraduate students and found that multiple cognitive, motivational, and affective processes contributed to students' computer performance. Particularly, students' computer experience was found to significantly influence their initial performance on a computer-related task. However, none of the variables that were measured three times was treated as a time-sensitive variable. Furthermore, three separate regression analyses at three time points, rather than one coherent regression analysis across the three time points, were performed. By doing so, again, the research questions of the study did not directly involve change of variable relationships over time, nor did the primary data analyses estimate regression parameters of time-varying variables. Thus, only the immediate effects of computer experience rather than the longitudinal effects of computer experience were estimated.
It is clear that comparative studies and longitudinal studies of computer experience are needed to advance the current knowledge of computer experience and to guide the daily practice of learning, teaching, and training how to use computers. Furthermore, in essence, the relationship between initial computer experience and subsequent computer performance concerns psychological processes and mechanisms of transfer of learning. Thus, existing theories and research in transfer (e.g., Detterman & Sternberg, 1993; Greeno, Collins, & Resnick, 1996; Mayer & Wittrock, 1996; Robins, 1996; Singley & Anderson, 1989a, 1989b) provide a theoretical foundation for better understanding of how and why previous computer experience affects subsequent computer performance. For instance, Mayer and Wittrock (1996) examined four major kinds of transfer: general transfer of general skills (e.g., by learning Latin to improve minds), specific transfer of specific skills (e.g., by training highly specialized skills), specific transfer of general skills (e.g., by teaching for understanding), and metacognitive control of general and specific skills (e.g., by training self-regulation strategies). This typology of transfer of learning can inform and be informed by empirical investigations of the specific experience-performance relationship in the domain of using computer programs.
Guided by existing theories of transfer of learning, the present study focused on how different computer experience variables influenced students' performance in using SAS, a widely used statistical software program, over one semester. It compared four kinds of computer experience: the length of time using computers, experience of
using statistical programs, experience of using email programs, and experience of using network systems, to examine the unique contribution of each factor to both the initial status and rate of improvement of computer-based performance in a network system environment. Moreover, it collected four-wave longitudinal data based on students' performance over four statistical projects and analyzed the data with multilevel growth modeling, one of the latest longitudinal data analysis methods (Singer & Willett, 2003). Specifically, the study addressed four research questions: (a) Did students' previous experience of using network systems affect both the initial status and rate of improvement in using SAS over time? (b) Did students' previous experience of using email programs affect both the initial status and rate of improvement in using SAS over time? (c) Did students' previous experience of using statistical programs affect both the initial status and rate of improvement in using SAS? (d) Did the length of students' previous experience of using computers affect both the initial status and rate of improvement in using SAS over time?
2. Method
2.1. Participants
Thirty students who enrolled in an introductory research methodology course at a research university graduate school in the Northeast participated in the study. Among them were 9 males and 21 females, with 63% being master's students and 37% doctoral students. These students were very diverse in their educational training, professional background, statistical knowledge, and computer experience, and none of them had used the SAS program prior to the study. Their ages ranged from about 25 years to about 45 years.
2.2. Predictor variables
Four predictor variables of student computer experience, NETWORK, EMAIL, STATISTICS, and YEAR, were obtained from the results of a short computer experience questionnaire that each participant completed at the beginning of the study. The variable NETWORK concerns students' previous experience of using computer network systems. It was based on students' responses to the question, "Have you used a computer network system at school or at work before?" The variable EMAIL has to do with students' previous experience of using an email program at home, based on students' responses to the question, "Do you check your email at home?" The variable STATISTICS involves students' previous experience of using statistics programs, and resulted from students' responses to the question, "Have you used any statistics programs before?" These three variables were coded as categorical according to students' dichotomized answers (Yes = 1, No = 0). The fourth variable, YEAR, concerns the number of years of using computers. This continuous variable was calculated according to students' responses to the question, "In what year did you start using a computer?"
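The coding scheme above can be sketched as follows. This is a minimal illustration, not the study's actual instrument: the answer values and the reference year used to compute YEAR are hypothetical.

```python
# A minimal sketch of coding the four predictors from questionnaire answers:
# three dichotomous variables (Yes = 1, No = 0) and one continuous variable
# derived from the reported start year. Field names and the reference year
# are hypothetical.

def code_predictors(answers, current_year):
    """Code NETWORK, EMAIL, and STATISTICS dichotomously and YEAR continuously."""
    yes_no = {"Yes": 1, "No": 0}
    return {
        "NETWORK": yes_no[answers["used_network"]],
        "EMAIL": yes_no[answers["email_at_home"]],
        "STATISTICS": yes_no[answers["used_stats_program"]],
        # YEAR: number of years of computer use, from the reported start year.
        "YEAR": current_year - answers["start_year"],
    }

coded = code_predictors(
    {"used_network": "Yes", "email_at_home": "No",
     "used_stats_program": "Yes", "start_year": 1990},
    current_year=2004,
)
print(coded)  # {'NETWORK': 1, 'EMAIL': 0, 'STATISTICS': 1, 'YEAR': 14}
```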
2.3. The outcome variable
The outcome variable of student computer performance in this longitudinal study is HELP, the number of times that students asked for help during the process of completing each SAS project, based on codings of the transcribed videotape data by two researchers (Cohen's κ = 0.94). This variable has three important features.
First, the task used in the study was based on a total of four homework assignments required in the one-semester course, with an interval of about one month between each assignment. Each homework assignment included two major parts: run a SAS procedure and obtain computational results, and do a statistical analysis and report statistical results. The present study focused on the computational part rather than the statistical one, examining how students learn statistical programs rather than statistical concepts.
Second, the SAS program was installed on a local network system rather than as a stand-alone program on a single computer. Within the network, it normally takes six basic steps to finish a SAS project: creating a DAT file, creating a SAS file, creating a COM file, executing the COM file, viewing the LOG file, and viewing the LIS file. The present study focuses on how each student proceeded with this six-step basic operational procedure of using SAS instead of on various statistical analysis procedures (e.g., conducting a t-test or a χ²-test). This six-step basic procedure has to be followed in all four SAS projects in order to obtain the SAS results on the network system. Consequently, focusing on the same basic procedure across different SAS projects made it possible to effectively analyze students' learning of the procedure while reducing the potential practice effect due to repeated measures over time, a methodological challenge often confronted by conventional longitudinal studies (Diggle et al., 1994; Goldstein, 1995; Singer & Willett, 2003). It is often a challenging task for many students to follow this basic procedure since they need to know the internal structure of the SAS program and its network system environment in order to navigate between the SAS program and the network system.
Third, students in the study did not work alone under strictly controlled experimental laboratory conditions. Instead, they worked on their own SAS projects with appropriate help from one graduate teaching assistant whenever they had questions. To maximize the opportunity of observing authentic performance while helping students learn SAS, the help that this teaching assistant provided was carefully controlled according to two basic rules: (a) the teaching assistant should only answer the questions the students asked during their work with SAS, and (b) the teaching assistant should not provide extra intervention or initiate lengthy instruction. As a result, the study used the number of helps that students received during the project, rather than other conventional indicators (e.g., performance levels or correct rates), to indicate students' performance on the four homework assignments. A higher number of helps received from the teaching assistant in any given project indicates a lower level of performance of using the SAS basic procedure, whereas a lower number of helps indicates a higher level of skill with SAS. The selection of the outcome variable aligns with the Vygotskian approach to learning and development (Rogoff, 1990; Vygotsky, 1978), in which improvement of a student's skill of using SAS over time can be considered as a process of moving from a dependent learner who needs a lot of assistance to an independent learner who needs less or no assistance.
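The inter-rater agreement for the HELP codings was reported above as Cohen's κ = 0.94. As a generic illustration of how that statistic is computed for two coders, here is a short sketch; the ratings in the example are hypothetical, not the study's transcripts.

```python
# Generic sketch of Cohen's kappa for two raters' codings of the same items:
# kappa = (observed agreement - chance agreement) / (1 - chance agreement).
# The example ratings below are hypothetical.

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length lists of category labels."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of items on which the raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal category proportions.
    categories = set(rater_a) | set(rater_b)
    expected = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

a = [1, 1, 0, 0]
b = [1, 0, 0, 0]
print(cohens_kappa(a, b))  # 0.5
```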
2.4. Procedure
After receiving an introduction to the use of the SAS program with one step-by-step demonstration at the beginning of the course, participants came to a research laboratory at their preferred time. They were asked to complete the computer experience questionnaire and then worked on the SAS project with the teaching assistant in a one-to-one interactive context. One Dell 486 PC was set up in the observation room. It was connected to a local area network, functioning as a workstation of the network system. Each SAS session lasted about one hour. The four assignments were distributed to the class at intervals of approximately one month. All the SAS sessions were videotaped for further data analysis.
2.5. Data analysis
The primary data analysis method used in the study was multilevel growth modeling (Singer & Willett, 2003). This method focuses on analyzing longitudinal data, specifically examining how predictors affect both the initial status and rate of change of individual growth trajectories. The basic form of multilevel growth models includes the level-1 model and the level-2 model. The level-1 model fits linear regression lines to each individual's observed growth trend over time in order to describe within-individual changes over time. The level-2 model uncovers how the intercept and slope of the average fitted line are systematically associated with various predictors in order to explain between-individual changes over time. The PROC MIXED procedure in SAS (Singer, 1998) was used to fit both the level-1 and level-2 models simultaneously to examine what factors influence the process of learning SAS.
The model fitting sequence follows the logic from specific computer experience to general computer experience to introduce the four computer experience predictors: (a) since SAS is a network application program, previous specific experience of using network systems should matter; thus, NETWORK is the first predictor to be included in the model; (b) since an email program is also a network application program, previous specific experience of using email programs should influence students' learning of SAS; thus, EMAIL is the second predictor to be added to the model; (c) since SAS is a statistical program, previous specific experience of using statistical packages should be relevant; thus, STATISTICS is the third predictor to be included in the model; (d) length of time using computers (YEAR), one of the most commonly used indicators of general computer experience, was the last predictor to be added to the model.
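PROC MIXED estimates the level-1 and level-2 models simultaneously by restricted maximum likelihood, which a short example cannot reproduce. A rough two-stage "slopes-as-outcomes" approximation conveys the intuition, though: first fit an ordinary least-squares line to each student's helps over the four projects, then regress the fitted intercepts (initial status) on a predictor such as NETWORK. The data below are synthetic and noiseless, purely to illustrate the mechanics; this is not equivalent to the study's REML fit.

```python
# Two-stage approximation to a multilevel growth model (illustrative only):
# Stage 1: per-student OLS lines over the four projects.
# Stage 2: regress the fitted intercepts on NETWORK.
# Synthetic, noiseless data; not the study's data or estimates.

def ols(x, y):
    """Simple OLS fit for paired lists; returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum(
        (xi - mx) ** 2 for xi in x
    )
    return my - slope * mx, slope

projects = [0, 1, 2, 3]  # PROJECT coded 0..3 so the intercept is initial status
students = {  # student -> (NETWORK flag, helps over four projects)
    "s1": (1, [8, 7, 6, 5]),
    "s2": (0, [13, 12, 11, 10]),
    "s3": (1, [8, 7, 6, 5]),
    "s4": (0, [13, 12, 11, 10]),
}

# Stage 1: per-student intercepts (estimated initial status).
network = [flag for flag, _ in students.values()]
intercepts = [ols(projects, helps)[0] for _, helps in students.values()]

# Stage 2: regress initial status on NETWORK.
gamma00, gamma01 = ols(network, intercepts)
print(gamma00, gamma01)  # 13.0 -5.0
```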
3. Results
As shown in Table 1, the outcome variable (HELP), the number of helps received by each of these 30 students, varies over the four SAS projects. The data set also
Table 1
The four-wave longitudinal data with four predictors (N = 30)
ID Number of helps received (Help) Predictors
Project 1 Project 2 Project 3 Project 4 NETWORK EMAIL STATISTICS YEAR
1 1 2 3 1 1 1 0 14
2 10 29 26 4 1 1 1 8
3 10 6 6 1 0 1 0 9
4 15 14 12 13 0 1 1 11
5 10 20 24 13 0 1 0 10
6 5 8 6 8 1 1 0 3
7 7 10 8 7 1 1 0 3
8 10 6 8 2 1 1 1 12
9 7 1 1 0 1 1 0 17
10 1 3 4 0 1 1 1 9
11 5 10 6 14 0 0 1 6
12 4 1 5 0 0 0 1 2
13 8 10 9 3 1 1 0 10
14 2 10 3 0 1 1 0 17
15 12 4 7 17 1 1 0 11
16 11 7 8 9 0 0 0 11
17 7 3 25 5 1 1 0 9
18 3 0 0 0 1 1 0 14
19 1 7 7 0 1 1 0 29
20 12 8 10 5 1 1 0 10
21 31 27 33 29 0 0 0 4
22 6 4 2 2 1 1 0 12
23 9 4 5 10 1 1 1 7
24 8 9 24 21 0 0 0 9
25 6 0 4 7 1 0 0 8
26 2 6 – 5 0 1 1 15
27 11 4 13 6 0 0 0 10
28 38 5 12 0 0 1 0 2
29 18 3 1 2 0 0 1 20
30 16 9 12 13 0 1 0 15
Mean 9.5 7.7 9.8 6.2 11
SD 8.2 7 8.5 7 5.7
Note: The data of Subject 26 in Project 3 are missing due to a technical failure in the original videotape.
includes four predictor variables, previous experience of using computer network
systems (NETWORK), previous experience of using email (EMAIL), previous expe-
rience of using statistics programs (STATISTICS), and the number of years of using
computers (YEAR).
The relationship between the outcome variable and predictor variables in the data set can be further hypothesized as follows: (a) the number of helps that each student received during one project (HELP_ti) is a function of the learning experience that students gained through the project (PROJECT_ti); (b) both the number of helps each student received in the first project (π_0i) and the rate of change in the number of helps received over four projects (π_1i) are a function of four variables, NETWORK_i, EMAIL_i, STATISTICS_i, and YEAR_i. These two hypotheses can be presented with both the level-1 growth model and the level-2 growth model below.
The level-1 hypothesized model:

HELP_ti = π_0i + π_1i × PROJECT_ti + ε_ti    (1)

The level-2 hypothesized model:

π_0i = γ_00 + γ_01 × NETWORK_i + γ_02 × EMAIL_i + γ_03 × STATISTICS_i + γ_04 × YEAR_i + ζ_0i    (2)

π_1i = γ_10 + γ_11 × NETWORK_i + γ_12 × EMAIL_i + γ_13 × STATISTICS_i + γ_14 × YEAR_i + ζ_1i    (3)
Table 2 summarizes the model fit and parameter estimates of a series of six multilevel growth models. As shown in Table 2, Model 1 is the baseline model, estimating the intercept of initial status, the population average true initial status (γ_00 = 10.30, p < .001), and the intercept of rate of change, the population average rate of true change (γ_10 = −0.827, p > .05). Model 2 adds the predictor NETWORK to the baseline model, finding a significant effect of NETWORK on initial status (γ_01 = −6.529, p < .05) but no significant effect on rate of change (γ_11 = 0.618, p > .05). Model 3 is a reduced model that estimates the single effect of NETWORK on initial status (γ_01 = −5.136, p < .05), while removing the insignificant parameter, the rate of change. Model 4 includes the predictor EMAIL, finding no significant effects of EMAIL on either initial status (γ_02 = 2.765, p > .05) or rate of change (γ_12 = −0.820, p > .05). Model 5 adds the predictor STATISTICS to Model 3, finding no significant effects of STATISTICS on either initial status (γ_03 = −1.458, p > .05) or rate of change (γ_13 = −0.279, p > .05). Finally, Model 6 includes the predictor YEAR, again finding no significant effects of YEAR on either initial status (γ_04 = −0.336, p > .05) or rate of change (γ_14 = 0.006, p > .05).
On the basis of estimates of both fixed and random effects of the six fitted growth models shown in Table 2, Model 3 is considered the final model for two major reasons. First, according to the estimates of fixed effects for both initial status and rate of change, Model 3 is the most parsimonious among the six models since it includes the only significant predictor parameter, the initial-status effect of NETWORK (γ_01 = −5.136, p < .05), in the model. Second, according to the estimates of random effects of the six models, Model 3 is among the most effective in explaining the level-2 variation in both initial status (24.43, the second lowest value among the six models) and rate of change (1.202, the lowest value among the six models). Thus, the level-1 fitted model is:

HELP_ti = π_0i + π_1i × PROJECT_ti    (4)

The level-2 fitted model:

π_0i = γ_00 + γ_01 × NETWORK_i    (5)
Table 2
Estimates of fixed and random effects and goodness-of-fit statistics from a series of fitted multilevel growth models in which variables of NETWORK, EMAIL, STATISTICS, and YEAR predict the average number of helps received at the 1st project and the rate of change in the number of helps received between the 1st project and the 4th project (N = 30)

Parameter estimate (standard error)
                Model 1    Model 2    Model 3    Model 4    Model 5    Model 6
Fixed effects
Initial status
  Intercept     10.30***   14.00***   13.21***   11.45***   13.92***   16.43***
                (1.583)    (2.263)    (1.891)    (2.898)    (2.218)    (3.250)
  Network                  −6.529*    −5.136*    −5.610*    −5.441*    −4.548*
                           (3.007)    (2.074)    (2.499)    (2.107)    (2.021)
  EMAIL                                          2.765
                                                 (3.685)
  STATISTICS                                                −1.458
                                                            (3.296)
  YEAR                                                                 −0.336
                                                                       (0.264)
Rate of change
  Intercept     −0.827†    −1.177†    −0.827†    −0.225     −0.743     −0.889
                (0.473)    (0.727)    (0.474)    (0.924)    (0.575)    (1.029)
  Network                  0.618
                           (0.966)
  EMAIL                                          −0.820
                                                 (1.079)
  STATISTICS                                                −0.279
                                                            (1.050)
  YEAR                                                                 0.006
                                                                       (0.086)
Random effects
Level-1
  Residual      27.64***   27.64***   27.64***   27.64***   27.64***   27.64***
                (5.046)    (5.046)    (5.046)    (5.046)    (5.046)    (5.046)
Level-2
  Initial status 33.68     25.14      24.43      24.93      26.11      23.43
                (21.13)    (19.34)    (19.01)    (19.60)    (19.70)    (39.04)
  Rate of change 1.202     1.342      1.202      1.301      1.425      1.441
                (2.035)    (2.095)    (2.035)    (2.085)    (2.114)    (2.118)
Goodness-of-fit
  Deviance      790.9      779.7      781.9      775.3      775.6      783.4
  AIC           −399.4     −393.9     −395.0     −391.7     −391.8     −395.7
  BIC           −404.9     −399.4     −400.5     −397.2     −397.3     −401.2

Note. Since the SAS PROC MIXED procedure uses restricted maximum likelihood estimation (REML), the three estimates of goodness of fit, Deviance, AIC, and BIC, are included in this table but not used for examining the model fit (see Singer & Willett, 2003, pp. 116–122).
† p < .10. * p < .05. ** p < .01. *** p < .001.
π_1i = γ_10    (6)

Combining these two fitted models produces a composite model (see Fig. 1):

HELP_ti = γ_00 + γ_01 × NETWORK_i + π_1i × PROJECT_ti    (7)

Thus, with the obtained parameter estimates, the final fitted model is:

HELP_ti = 13.21 − 5.136 × NETWORK_i − 0.827 × PROJECT_ti    (8)

This final fitted model (Eq. (8)) indicates the major findings of the study: (a) there are no significant effects of three predictor variables, EMAIL, STATISTICS, and YEAR, on either the initial status or the rate of change of the process of learning SAS. That is, students' previous experience of using email programs and statistical programs and the length of time using computers do not influence their initial performance and subsequent progress over time in using SAS; (b) there is a significant effect of previous experience of using computer network systems, NETWORK, on the initial status of the number of helps students received. Those who had experience of using network systems tended to need much less help at the beginning of the project, whereas those who had little experience of using network systems tended to demand more helps at the beginning of the project. To put it quantitatively, those students
Fig. 1. The relationship between the network system and the SAS program with five basic commands (DIR, COPY, EDIT, COM, and SAS) and five basic files (.DAT, .SAS, .COM, .LOG, and .LIS).
Fig. 2. Effects of previous experience using computer network systems on the number of helps that students received over the four SAS projects.
with no previous experience using network systems received, on average, more than five more instances of help at the beginning of Project 1 than those with previous network experience. Fig. 2 illustrates this effect.
As shown in Fig. 2, the effect of students' previous experience of using network systems on the number of helps appears as the visible vertical distance at the first project between the two fitted average growth trajectories for students without previous experience of using network systems versus students with previous experience of using network systems. However, there is no significant effect of NETWORK on the rate of change in the number of helps, resulting in an equal gap over the four projects between the two parallel fitted average growth trajectories.
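The final fitted model can be used to compute expected numbers of helps directly. The sketch below assumes PROJECT is coded 0–3 so that the intercept represents initial status at Project 1, a coding assumption consistent with the interpretation in the text though not stated explicitly in this excerpt.

```python
# Expected helps from the final fitted model (Eq. (8)),
# assuming PROJECT is coded 0 at the first project.

def expected_helps(network, project):
    """Eq. (8): HELP = 13.21 - 5.136*NETWORK - 0.827*PROJECT."""
    return 13.21 - 5.136 * network - 0.827 * project

no_net = expected_helps(network=0, project=0)    # initial status, no experience
with_net = expected_helps(network=1, project=0)  # initial status, experienced
print(round(no_net - with_net, 3))  # 5.136: "more than five more" helps
```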
4. Discussion
This study examines the effects of four computer experience predictors. The results of the study suggest that students' previous experience of using computer networks, rather than their experience of using email programs or statistical programs or the number of years of using computers, significantly affects the number of helps needed in order to complete their first SAS project. Why does only the experience of using computer networks, but not the other three computer experience predictors, have a significant impact? Why does the experience of using computer networks affect only the number of initial helps needed in the first project but not the change in the number of helps over the four projects? This section will focus on potential explanations of these two important questions.
There are at least four possible reasons to explain why only students' experience of using networks was related to their subsequent computer performance. First, since the statistical program SAS in this study was running on a network system, the task of using SAS demands not only skills in using SAS as a computer program but also knowledge of the larger network environment where SAS is situated. Thus, students' specific experience of using network systems in the past (e.g., knowing how to log into and exit from the system, being able to navigate within the system environment, and/or understanding the basic architecture of computer networks) can be positively transferred to help students complete the basic SAS procedure. A lack of basic knowledge of the network system was a particularly important reason why some of the students needed extensive help from the teaching assistant in the study. In other words, the specific transfer of specific skills took place due to identical elements shared by the two tasks of using network systems, according to the typology of transfer by Mayer and Wittrock (1996). This finding is consistent with the existing literature in which knowledge of the network system improves students' computer performance (Anderson & Reed, 1998; Cassidy & Eachus, 2002; Dusick, 1998; Karsten & Roth, 1998; Rosen et al., 1993; Schumacher & Morahan-Martin, 2001).
Second, like the SAS program used in the study, email programs are network
applications. Presumably, students' experience of using email programs should
have substantial positive influence on their performance in learning SAS. But the
findings of this study do not support this reasonable presumption. One of the
explanations is that using email programs provided students with program-specific
experience (e.g., reading messages and sending out responses) and simple network-
ing experience (e.g., dialing up an Internet service provider or clicking an icon toopen an email program) rather than more advanced networking ones (e.g., navigat-
ing across different network programs and dealing with multiple files and folders).
In the study, completing a SAS project demands relatively sophisticated knowledge about the local computer network system (e.g., properly using at least five key system commands and correctly handling five types of files: the .DAT file, the .SAS file, the .COM file, the .LOG file, and the .LIS file). Thus, as suggested by the existing literature (Busch, 1995; Cassidy & Eachus, 2002; Potosky & Bobko, 1998; Schumacher & Morahan-Martin, 2001; Torkzadeh & Koufteros, 1994), students' simple computer experience (e.g., using email programs) does not always support
their completion of complex tasks (e.g., using a sophisticated statistical program
in a large network system). In this case, the specific transfer of specific skills did
not occur due to critical differences between two seemingly similar tasks of using
network applications, based on the typology of transfer by Mayer and Wittrock
(1996).
Third, the extensive computer experience literature suggests that users' experience with specific computer programs influences subsequent specific computer performance that matches their experience (Dusick, 1998; Karsten & Roth, 1998; Kay, 1993; Kirkman, 1993; Mitra, 1998; Reed et al., 1996; Reed et al., 2000; Reed & Giessler, 1995; Schumacher & Morahan-Martin, 2001). SAS is a popular statistical program. Logically, what one previously learned with other statistical programs (e.g., SPSS or STATA) should transfer to the subsequent process of using SAS (e.g., producing the data file and running a statistical procedure). Surprisingly, however, the findings of this study indicate that students' previous experience of using statistical programs does not have a significant effect on learning another statistical program. Why did this seemingly very relevant experience not contribute to
students' learning or, more specifically, not decrease the number of helps needed to finish the SAS projects? Careful examination of the instructional strategies used in the classroom prior to the study provides important insights. The present study was situated in the natural progression of a graduate-level methodology course. Before the study, as routine teaching practice, the instructor gave the whole class a detailed demonstration of the step-by-step procedure for running basic statistical tests on SAS (e.g., conducting a t-test or ANOVA). Furthermore, the instructor provided students with detailed SAS procedures for each of the four SAS projects. All these instructional strategies focused on the SAS statistical program rather than on the SAS system environment. Thus, these strategies probably reduced students' need for assistance with the SAS statistical program substantially, but the challenge of navigating the network system remained. In other words, prior experience using statistical programs probably did influence the learning of the new SAS program. This influence, however, was suppressed by the strong instructional scaffolding that was specifically provided to help students learn the SAS program, consistent with Anderson and Reed's (1998) research. Here, according to the typology of transfer by Mayer and Wittrock (1996), the specific transfer of specific skills was not observed because of substantial differences in the contexts in which the two similar tasks were involved.
Fourth, there is essentially no consensus in the literature on the effects of general computer experience such as length of time using computers or computer ownership (e.g., Anderson & Reed, 1998; Karsten & Roth, 1998; Kirkman, 1993; Mitra, 1998; Nichols, 1992; Potosky & Bobko, 1998; Rosen et al., 1993; Smith et al., 1999). Thus, it is not surprising that the length of time using computers did not affect the SAS learning process. One specific fact stands out. As shown in Table 1, the average number of years of experience with computers for the students participating in the study was 11 years (SD = 5.7). However, only 56% of the students had experience with computer networks. Many students had extensive experience with personal computers (PCs) but not with networked computers (NCs). Thus, a lack of basic knowledge of the network system could account for why students who had experience with PCs still needed extensive help with SAS as an application running on NCs. According to Mayer and Wittrock's (1996) typology of transfer, the specific transfer of general skills did not occur because of critical differences between the general experience of using PCs and the specific performance of using NCs.

To explain why the experience of using networks affects only the parameter of initial status but not the parameter of rate of change, one can consider the effects of both computer-experience factors and non-computer-experience factors on computer performance.
First, the findings of the study suggest that students without network experience tended, on average, to ask for help in the first SAS project five times more than those with network experience. Having relevant computer experience could give students a head start and ease their initial learning challenges. Whether or not one has computer experience makes a substantial difference for completing relevant computer tasks, but this influence most likely takes place at the beginning. This finding reveals not only how much computer experience affects computer performance but also how long this effect lasts. The transfer literature has documented a wide variety of types of transfer of learning, such as positive transfer versus negative transfer, weak transfer versus strong transfer, specific transfer versus general transfer, and near transfer versus far transfer (Detterman & Sternberg, 1993; Greeno et al., 1996; Mayer & Wittrock, 1996; Robins, 1996; Singley & Anderson, 1989a). The findings of this longitudinal study suggest another important aspect of transfer, that is, initial transfer versus late transfer, or short-term transfer versus long-term transfer, which deserves further investigation.
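The distinction between an initial-status effect and a rate-of-change effect can be made concrete with a small numerical sketch. The parameter values below are hypothetical, chosen only to mirror the reported pattern (roughly five more helps in the first project for students without network experience, and no significant difference in rate of change):

```python
# Illustrative sketch only: parameter values are hypothetical, NOT the
# study's actual estimates. A linear growth model in which prior network
# experience shifts initial status (intercept) but not the rate of change
# (slope) yields parallel average trajectories over the four projects.

def predicted_helps(project, network_experience,
                    beta_00=8.0,   # average initial helps, no network experience (assumed)
                    beta_01=-5.0,  # intercept shift for network experience (assumed)
                    beta_10=-1.0,  # average per-project decline in helps (assumed)
                    beta_11=0.0):  # slope shift for network experience (non-significant)
    """Fitted average number of helps for project 0..3 under a linear growth model."""
    intercept = beta_00 + beta_01 * network_experience
    slope = beta_10 + beta_11 * network_experience
    return intercept + slope * project

# Because beta_11 = 0, the gap between students without (0) and with (1)
# network experience is constant across all four projects.
gaps = [predicted_helps(p, 0) - predicted_helps(p, 1) for p in range(4)]
print(gaps)  # [5.0, 5.0, 5.0, 5.0]
```

The constant gap is exactly what "initial transfer" without "late transfer" looks like in the fitted trajectories: experience moves the starting point but leaves the learning rate untouched.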
Second, the findings of the study suggest that students without network experience did not ask for help in the later three SAS projects significantly more than those with network experience. In other words, having relevant computer experience alone gives students a head start but does not necessarily speed up or slow down students' entire learning process. In fact, the extensive computer experience literature has indicated that (a) computer experience often affects computer performance through complex interactions with various mediating variables such as computer attitudes or computer anxiety (e.g., Anderson & Reed, 1998) and (b) various non-computer-experience variables such as learning style and self-efficacy significantly influence computer performance (e.g., Reed et al., 2000; Rozell & Gardner, 2000). In their seminal chapter of the Handbook of Educational Psychology, Greeno et al. (1996) took behavioral, cognitive, and situative perspectives to examine three major types of transfer, that is, task-based transfer due to identical elements existing between source tasks and target tasks, learner-based transfer due to similar cognitive schemes constructed by learners, and context-based transfer due to parallel constraints and affordances involved in contexts. The present study mainly focused on four variables of computer experience that are primarily task-based. Systematic multivariable research is needed to compare the effects of task-based, learner-based, and context-based variables in order to further understand the complex relationship between computer experience and computer performance.
Further follow-up research is needed to replicate, improve, and extend the present study by including more continuous predictor variables of computer experience, different outcome variables of computer performance (such as correction rate and the types of questions asked), and various non-computer-experience variables (such as learner-based and context-based variables) in order to further understand the complex effects of computer experience on computer performance. However, the new empirical evidence provided by the present study reveals the complexity of the relationships between students' computer experience and their computer performance, providing useful implications for improving the daily practice of teaching and learning computer programs and advancing the current research on transfer of learning. Since students' experience of using network systems significantly affects their performance when learning new network system programs, further effort should be made to train computer learners to deal with NCs rather than relying on PCs. Since computer experience primarily serves as the initial base for learning new computer programs, instead of overemphasizing students' previous experience, further effort should be made to develop students' intrinsic motivation, computer attitudes, cognitive styles, problem-solving strategies, and other non-experience factors in order to promote both short-term transfer and long-term transfer of learning and to help students learn computers better, faster, and more enjoyably.
References
Anderson, D. K., & Reed, W. M. (1998). The effects of Internet instruction, prior computer experience,
and learning style on teachers� Internet attitude and knowledge. Journal of Educational Computing
Research, 19(3), 227–246.
Born, R. G., & Cummings, C. W. (1994). An assessment model for computer experience, skills, and
attitudes of undergraduate business students. Journal of Computer Information Systems (Fall), 41–53.
Bryk, A. S., & Raudenbush, S. W. (2002). Hierarchical linear models: Applications and data analysis
methods (second ed.). Thousand Oaks, CA: Sage.
Busch, T. (1995). Gender differences in self-efficacy and attitudes toward computers. Journal of
Educational Computing Research, 12(2), 147–158.
Cassidy, S., & Eachus, P. (2002). Developing the computer user self-efficacy (CUSE) scale: Investigating
the relationship between computer self-efficacy, gender and experience with computers. Journal of
Educational Computing Research, 26(2), 133–153.
Chua, S. L., Chen, D. T., & Wong, A. F. L. (1999). Computer anxiety and its correlates: A meta-analysis.
Computers in Human Behavior, 15, 609–623.
Diggle, P. J., Liang, K.-Y., & Zeger, S. L. (1994). Analysis of longitudinal data. New York: Oxford
University Press.
Detterman, D. K., & Sternberg, R. J. (Eds.). (1993). Transfer on trial: Intelligence, cognition, and
instruction. Norwood, NJ: Ablex.
Dusick, D. M. (1998). What social cognitive factors influence faculty members� use of computers for
teaching? A literature review. Journal of Research on Computing in Education, 31(2), 123–138.
Goldstein, H. (1995). Multilevel statistical models (second ed.). New York: Halstead Press.
Greeno, J. G., Collins, A. M., & Resnick, L. B. (1996). Cognition and learning. In D. C. Berliner & R. C.
Calfee (Eds.), Handbook of educational psychology (pp. 15–46). New York: Macmillan.
Jones, T., & Clark, V. A. (1995). Diversity as a determinant of attitude: A possible explanation of the
apparent advantage of single-sex settings. Journal of Educational Computing Research, 12, 51–64.
Karsten, R., & Roth, R. M. (1998). The relationship of computer experience and computer self-efficacy to
performance in introductory computer literacy courses. Journal of Research on Computing in
Education, 31(1), 14–25.
Kay, R. H. (1993). An exploration of theoretical and practical foundations for assessing attitudes toward
computers: The computer attitude measure CAM. Computers in Human Behavior, 9, 371–386.
Kirkman, C. (1993). Computer experience and attitudes of 12-year-old students: Implications for the UK
national curriculum. Journal of Computer Assisted Learning, 9, 51–62.
Mayer, R. E., & Wittrock, M. C. (1996). Problem-solving transfer. In D. C. Berliner & R. C. Calfee (Eds.),
Handbook of educational psychology (pp. 47–62). New York: Macmillan.
Mitra, A. (1998). Categories of computer use and their relationships with attitudes toward computers.
Journal of Research on Computing in Education, 30(3), 281–296.
Nichols, L. M. (1992). The influence of student computer-ownership and in-home use on achievement in
an elementary school computer programming curriculum. Journal of Educational Computing Research,
8(4), 407–421.
Potosky, D., & Bobko, P. (1998). The computer understanding and experience scale: A self-report measure
of computer experience. Computers in Human Behavior, 14(2), 337–348.
Reed, W. M., Ayersman, D. J., & Liu, M. (1996). The effects of students' computer-based prior experiences
and instructional exposures on the application of hypermedia-related mental models. Journal of
Educational Computing Research, 14(2), 185–207.
Reed, W. M., Oughton, J. M., Ayersman, D. J., Ervin Jr., J. R., & Giessler, S. F. (2000). Computer
experience, learning style, and hypermedia navigation. Computers in Human Behavior, 16, 609–628.
Reed, W. M., & Giessler, S. F. (1995). Prior computer-related experiences and hypermedia metacognition.
Computers in Human Behavior, 11(3–4), 581–600.
Robins, A. (1996). Transfer in cognition. Connection Science: Journal of Neural Computing, Artificial
Intelligence and Cognitive Research, 8(2), 185–203.
Rogoff, B. (1990). Apprenticeship in thinking: Cognitive development in social context. New York: Oxford
University Press.
Rosen, L. D., Sears, D. C., & Weil, M. M. (1993). Treating technophobia: A longitudinal evaluation of the
computerphobia reduction program. Computers in Human Behavior, 9, 27–50.
Rosen, L. D., & Weil, M. M. (1995). Computer anxiety: A cross-cultural comparison of university
students in ten countries. Computers in Human Behavior, 11(1), 45–64.
Rozell, E. J., & Gardner III, W. L. (1999). Computer-related success and failure: A longitudinal field study
of the factors influencing computer-related performance. Computers in Human Behavior, 15, 1–10.
Rozell, E. J., & Gardner III, W. L. (2000). Cognitive, motivation, and affective processes associated with
computer-related performance: A path analysis. Computers in Human Behavior, 16, 199–222.
Schumacher, P., & Morahan-Martin, J. (2001). Gender, Internet, and computer attitudes and experience.
Computers in Human Behavior, 17(1), 95–110.
Singley, M. K., & Anderson, J. R. (1989a). The transfer of cognitive skill. Cambridge, MA: Harvard
University Press.
Singer, J. D. (1998). Using SAS PROC MIXED to fit multilevel models, hierarchical models, and
individual growth models. Journal of Educational and Behavioral Statistics, 23(4), 323–355.
Singer, J. D., & Willett, J. B. (2003). Applied longitudinal analysis: Modeling change and event occurrence.
New York: Oxford University Press.
Smith, B., Caputi, P., Crittenden, N., & Jayasuriya, R. (1999). A review of the construct of computer
experience. Computers in Human Behavior(15), 227–242.
Taylor, H. G., & Mounfield, L. C. (1994). Exploration of the relationship between prior computer
experience and gender on success in college computer science. Journal of Educational Computing
Research, 11(4), 291–306.
Torkzadeh, G., & Koufteros, X. (1994). Factorial validity of a computer self-efficacy scale and the impact
of computer training. Educational and Psychological Measurement, 54(3), 813–821.
Weil, M. M., & Rosen, L. D. (1995). The psychological impact of technology from a global perspective: A
study of technological sophistication and technophobia in university students from twenty-three
countries. Computers in Human Behavior, 11(1), 95–133.
Vygotsky, L. S. (1978). Mind in society (M. Cole, V. John-Steiner, S. Scribner, & E. Souberman, Eds. & Trans.). Cambridge, MA: Harvard University Press. (Original work published 1935).
Zheng Yan received his Ed.D. from Harvard University and currently is an assistant professor of educational psychology in the School of Education at the State University of New York at Albany. His research interests include the psychology of computer skill acquisition, the Internet and child development, the psychology of e-learning, and longitudinal research methodology.