Cianca, S. (2010). Quality WebQuests: Scaffolding pre-service teachers' WebQuest construction. Teacher Education Quarterly, Special Online Edition. Retrieved from http://teqjournal.org/cianca.html
Quality WebQuests: Scaffolding Pre-Service Teachers' WebQuest Construction
By Sherri Cianca
Abstract
Though public schools in North America are connected to the Internet, teachers in these schools usually use technology for routine, low-level tasks. WebQuest development holds promise for promoting an innovative, transformative use of technology if the WebQuest focuses on high-level critical thinking. To date, poor WebQuests dominate the Internet, and teachers lack support in their attempts to create good ones. This study compares two instructional models of support. In comparing these models, the study reflects on the characteristics of quality WebQuests and on scaffolds to bolster pre-service teachers in their development of quality WebQuests. The results suggest that a superior model of instruction includes in-class scaffolds and a high-quality exemplar.
One hundred percent of the public schools in the United States are connected to the World Wide Web (NCES, 2005). As impressive as this statistic sounds, a closer examination reveals that most teachers use technology for trivial, routine tasks that simply mirror existing, traditional methods (Russell, Bebell, O'Dwyer, & O'Connor, 2003; Ertmer, 2005). According to March (2000b), the most common educational use of the Internet involves teacher-made WebQuests, which are intended to guide students' navigation of websites.
Though the intention may be educational, teachers who create WebQuests often
misunderstand or overlook the principles and function of true WebQuests (Vidoni & Maddux,
2002). Zheng, Stucky, McAlack, Menchana, and Stoddart (2005) state that many sites that claim to be WebQuests are not true WebQuests at all but, rather, are little more than URL
worksheets where students simply fill in the blanks with information they found either on the
Web or in a book. Unfortunately, URL worksheets fail to challenge learners to transform
information into products that demonstrate in-depth understanding (Jonassen, Howland, Moore,
& Marra, 2003).
Some researchers blame teacher education programs (Doering, Hughes, & Huffman,
2003). They say teacher educators fail to train teachers in a meaningful use of technology that
promotes significant student learning (Russell et al., 2003; Strudler, Archambault, Bendixen, Anderson, & Weiss, 2003). It may be reasonable, then, to target WebQuests as a means for training
teachers to use technology in a meaningful way that fosters significant student learning.
This study addresses the problem of teachers' development of low-level WebQuests, and it attempts to offer scaffolds to teachers and pre-service teachers as they strive to create quality
WebQuests. With these goals in mind, the project studied the influence of two models of
instruction and the scaffolds found in each model. This paper reviews the attributes of a quality
WebQuest, describes instructional scaffolds, and suggests a model-specific scaffold for the
creation of good WebQuests.
Research Questions
The study examined two models of instruction: Model C with a control group and Model E with an experimental group. Model C involved written scaffolds in the form of instructor feedback and peer and self-evaluation; Model E included in-class scaffolds and a sample WebQuest illustrating an exemplary product. Answers were sought for the following research questions:
1. Which model of instruction engendered the highest quality of prospective
teachers’ WebQuests: instruction Model C with written feedback and peer and
self-evaluation or instruction Model E with in-class scaffolds and an example of
an exemplary product?
2. Which scaffolds hold the greatest potential for promoting pre-service teachers’
development of quality WebQuests: Written feedback? Peer Evaluation? Self-
Evaluation? In-class scaffolds? An example of an exemplary product?
3. How closely does the quality of participants’ WebQuests match the quality of the
participants’ other course assignments?
4. Which sections of the prospective teachers’ WebQuests are the strongest? Which
sections are the weakest? (See below for a description of the sections of a
WebQuest.)
Literature Review
Review of the literature begins with a discussion of WebQuests: how and why
WebQuests originated and the components of a WebQuest. To meet the function of true
WebQuests, this educational tool needs to challenge students to use and improve their critical
thinking skills (Vidoni & Maddux, 2002) and engage in deep understanding (Zheng et al., 2005; Jonassen et al., 2003). With this emphasis in mind, this review explores the literature on critical
thinking and suggests how critical thinking can become a part of WebQuest construction. The
final topic addressed is constructivism, as I agree with the contention that WebQuests should
involve constructivist-based practices where students construct their own understanding in an
environment that challenges them to analyze, compare, and classify information and then to
debate and collaborate with their peers in decision-making (Strickland, 2005).
WebQuests
One of the major reasons why Bernie Dodge (1997) and Tom March (2008) created
WebQuests was to address the need for quality Internet use: quality use includes a structured
format that leads to collaboration among students as they interact with, analyze, and synthesize
research-based, subject-specific information from sites in good working order. Because
WebQuests list the sites to be explored, students spend their time using information, as opposed
to spending hours reading unrelated, unsupported, faulty, or weak information on sites learners
find on their own. As a result, students are less distracted from the primary learning task
(Hassanien, 2006). When developed with deep learning as the goal and critical thinking as the process, WebQuests facilitate students' construction and application of new and relevant knowledge (Zheng et al., 2005). A quality WebQuest challenges students to investigate age-appropriate, real-life issues—a context often lacking in traditional lessons and textbooks (March, 2000a). Researching real-world information, interacting with that information, and analyzing issues from various perspectives motivate students to get involved with issues that matter. This capacity to improve students' level of learning has made WebQuests the most popular educational use of the Web (March, 2000b).
WebQuest Components/Sections. As learners analyze, synthesize, evaluate, transform, and generalize information, the parts of a WebQuest work together to support a learner's thinking (Abu-Elwan, 2007; Simina & Hamel, 2005). According to Dodge (1997), a WebQuest consists of six critical components. A synthesis of the literature leads to the following description of each component. For a closer match with this study, the author added a problem statement and points of view to the descriptions, along with a bibliography section and teacher pages. How each section might appear in a WebQuest can be seen in Sample WebQuest: Vancouver 2010. To compare the following description with one found online, see Brooks and Byles (2009). (A schematic sketch of these sections as a data structure follows the list.)
1. Introduction: Statement of a problem. The introduction sets the topic in an authentic environment. It motivates involvement, suggests possible viewpoints, and challenges learners to solve a real-world problem.
2. Task: The task presents the assignment in an interesting, accessible manner. It
gives focus to the inquiry. To scaffold learners’ gradual increase in self-reliance,
the overall task is broken up into sub-tasks.
3. Process: The process includes the roles students will assume, the step-by-step
procedures students will follow to complete the task, and the organizational
framework for research, synthesis, and final presentation of findings and
decisions. This section describes students’ roles (one for each member of the
three or four person team). Each role assumes a different perspective on the
problem. All students research common sites to build the same background
knowledge. Next, students become experts from the perspective of their assumed
roles. Finally, team members meet to collaboratively synthesize their findings. To
lend support for high-order thinking, the process gives checkpoints throughout,
allowing for differing degrees of consultation and coaching.
4. Resources: Resources are embedded in the process section (see above).
Hyperlinks are given for the sites to explore to complete the task: informational
sites and multimodal resources. All team members explore a first set of links to
gain common background knowledge. Subsequent sets of links are categorized
by roles, and individuals explore only that set of links listed for that individual’s
role. Links enable each team member to gain both a unique perspective of the
problem and possible solutions for the problem.
5. Evaluation: The evaluation section contains the criteria to be used by both the
teacher and students to guide, support, and evaluate students’ progress.
6. Conclusion: The conclusion brings closure, calling on students to reflect on what
was learned and to discuss possible extensions and applications into other
domains.
7. Bibliography: The bibliography contains all references for other WebQuests
consulted, as well as the bibliographic information for other resources.
8. Teacher Pages: These pages can contain information on the standards addressed by the WebQuest, a lesson plan, suggestions, additional websites, prerequisites, materials, classroom management tips, and the like.
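The eight sections above amount to a fixed document architecture. As an illustration only (not part of the original article), the sketch below models those sections as a simple data structure; all class and field names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Role:
    """One team role; each role views the problem from a distinct perspective."""
    name: str
    perspective: str
    links: list[str] = field(default_factory=list)  # role-specific resources

@dataclass
class WebQuest:
    """Skeleton mirroring the eight sections described above (hypothetical)."""
    introduction: str          # problem statement set in an authentic context
    task: str                  # the assignment, broken into sub-tasks
    process: list[str]         # step-by-step procedures and checkpoints
    roles: list[Role]          # one role per team member (3-4 per team)
    shared_links: list[str]    # first set of links: common background knowledge
    evaluation: str            # criteria used by teacher and students
    conclusion: str            # reflection and extensions to other domains
    bibliography: list[str]    # references and consulted WebQuests
    teacher_pages: str = ""    # standards, lesson plan, management tips
```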
WebQuest and Cognitive Theories of Learning
When developed as intended, WebQuests challenge students to solve problems, think
deeply, and better make sense of the world. To the cognitive theorist, to learn is to better
understand the world and to change our intended behavior as a result of that understanding
(Woolfolk, 2001). WebQuests prompt students to build background knowledge and
understanding of a situation, to formulate a response or solution to a problem, and then to
actively address that problem using a real-world context.
Anderson and Krathwohl (2001), who revised Bloom’s (1956) original taxonomy,
categorized the cognitive domain into six levels: remember, understand, apply, analyze, evaluate,
and create. Good WebQuests pose a problem and then lead students to consider various perspectives, ramifications, and solutions to that problem. In the process, a good WebQuest challenges students to function at the highest levels of the cognitive domain: 1) students analyze the information on websites to determine which information is pertinent to their study, 2) they evaluate the strength, worth, and relevance of information to their own study and their own perspective, and 3) students work together to synthesize their findings and create a valid argument for addressing the problem.
The criteria for WebQuest quality are closely aligned with at least two dimensions of critical thought: a) generating solutions to problems and b) developing one's perspective in a fair-minded way that explores, analyzes, and evaluates alternative beliefs, arguments, and points of view (Paul, Binker, Jensen, & Kreklau, 1997). Developing WebQuests with these dimensions requires pre-service teachers to think in these dimensions themselves; as Chambers (1988) states, to teach for critical thinking, teachers need to engage in critical thinking themselves.
WebQuest and Constructivist Learning Design
To the constructivist, people construct their own understanding of the world, and they do so through experience and reflection (Kolb, 1984). Constructivist learning design calls for the establishment of situations that 1) arrange for student explanations, 2) group students for collaborative engagement, 3) are ripe with questions to keep students thinking, 4) provide opportunities for students to reflect on and then exhibit their understanding (Gagnon & Collay, 2006), and 5) engage students in authentic situations. According to Vygotsky (1986), scaffolding helps the learner build depths of understanding. During their engagement in WebQuests, students are grouped for interaction, are called on to explain their findings and thought processes, are driven by a quest to find answers to questions and solutions to problems, and, through it all, are scaffolded with particular sites to explore and with periodic checks to guide their progress. WebQuests require students to reflect on many perspectives. After discussing and debating their findings, students come to conclusions based on solid evidence, and then they disseminate their findings
and decisions to others. WebQuest learners become involved in authentic situations as they assume the roles of adults, tackle problems adults attempt to solve, and use the resources adults use to solve problems. The products students create mirror the products adults create as
advocates of change. As a result, learners view their learning as more relevant, more connected
to life (March, 2008).
Methodology
This study is an action research project; as such, it aims to gather relevant, practical knowledge that can be applied to the researcher's own classroom teaching (Borg, Gall,
& Gall, 1993). With the goal of improving my own teaching, I proposed two models of
instruction and I sought to determine the impact those models would have on pre-service
teachers’ WebQuest construction. I collected and analyzed data using qualitative methods, and I
used content analysis to identify the levels of quality attained by participants’ WebQuests.
Setting and Participants
This study, conducted in a Western New York university, spanned a two-year (four-
semester) period where I taught twelve sections of the same methods course. Each semester, I
taught one section of the course to undergraduate students and two sections of the course to
graduate students. The undergraduates were in the last semester of a four-year concurrent
program; the graduates were in the last semester of a one-year master’s program.
Though the study spanned two years (four semesters), it compares only the last year's WebQuests (the last two semesters). I labeled the fall-semester participants the control group and the spring-semester participants the experimental group.
Of the 86 participants, 48% (n = 41) were in the control group and 52% (n = 45) were in the experimental group. Undergraduates made up 28% (n = 24) of the participants, split evenly (n = 12 each) between the control group and the experimental group. Master's students made up the remaining 72% (n = 62), with 47% (n = 29) in the control group and 53% (n = 33) in the experimental group. Each semester, graduate students were in two different classes: I labeled these Subgroup A Grads and Subgroup B Grads. Each semester, the Subgroup A Grads met Monday nights and the Subgroup B Grads met Wednesday nights; the undergraduates met Tuesday mornings. The average age of the undergraduates was 23, whereas the average age of the graduates was 32. The average age of participants in the control group (29 years old) was essentially the same as the average age of the participants in
the experimental group (30 years old).

Figure 1
Study Participants

Group                    Undergrads   Subgroup A Grads   Subgroup B Grads   Total
Control (Model C)        n = 12       n = 12             n = 17             48% (n = 41)
Experimental (Model E)   n = 12       n = 12             n = 21             52% (n = 45)
To test for homogeneity between the control group and the experimental group, I compared participants' scores on other course assignments: the marks participants earned for other assignments completed in this methods course. I conducted an independent-samples t-test and found no significant difference between the control group and the experimental group.[i] This suggests that the two groups were evenly matched in this area as well.
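As an illustration of the homogeneity check just described (not the study's actual computation), the following sketch runs an independent-samples t-test in SciPy on made-up marks; the eta-squared formula mirrors the effect size reported in endnote [i].

```python
# Hypothetical illustration: independent-samples t-test on "other course
# assignment" marks, mirroring the homogeneity check in endnote [i].
# The two score lists are invented stand-ins, not the study's data.
import numpy as np
from scipy import stats

control = np.array([88, 95, 79, 93, 90, 97, 85, 92])       # made-up marks
experimental = np.array([91, 96, 84, 94, 89, 98, 90, 95])  # made-up marks

t, p = stats.ttest_ind(control, experimental)  # two-tailed by default

# Eta squared = t^2 / (t^2 + df), the effect-size measure cited in endnote [i].
df = len(control) + len(experimental) - 2
eta_sq = t**2 / (t**2 + df)

print(f"t({df}) = {t:.2f}, p = {p:.3f}, eta squared = {eta_sq:.4f}")
```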
WebQuest Assignment and Instruction Models
Instruction for both models. I used instruction Model C with the control group and instruction Model E with the experimental group. I introduced both groups to WebQuests in the same way: with both I used an interactive mini-lesson based on Dodge's (2009) model.
To begin the introduction to WebQuests, I broke students into home groups of four
students each. Each student in a home group chose one of the following roles:
1) Technology Expert. This role focused on the number of websites, whether all sites were up and running, and whether the sites had colorful pictures, animation, videos, and sound bites.
2) Scholar. This role was concerned that the WebQuest involves students in high-
level thinking such as analysis, synthesis, evaluation, and creative expression.
3) Sociologist. This role cared about group work, collaborative engagement,
discussion, and consensus.
4) Business Manager. This role focused on the smooth functioning of the WebQuest, whether the time spent was worth the benefit derived, and the organizational framework.
After choosing their roles, home group members broke into specialty groups: a group of all technology experts, a group of scholars, a group of sociologists, and a group of business managers. Specialty groups analyzed five different WebQuests from the perspective of their
occupational role and the concerns of that role. For the WebQuests that participants analyzed,
see Dodge (2009). Specialty groups discussed the features of each WebQuest according to that
role’s specific criteria for quality. They discussed, debated, and finally decided on how they
would categorize the WebQuests from the best to the worst. Members of each specialty group
then returned to their home groups. Home group members discussed and debated which
WebQuests were the best and which were the worst, and why. After much discussion, home
groups reached a consensus and categorized the WebQuests from best to worst. Next, home
groups listed their choices on the white board and members from each home group defended
why they put the WebQuest in the position they did. Groups hammered out their reasons until, finally, a consensus was reached: one WebQuest emerged as the best and another as the worst.
To conclude, I gave students the task of listing the attributes of a quality WebQuest. Groups recorded the attributes; then, going around the room, groups read each listed attribute aloud. At this point, I passed out a copy of the WebQuest Rating Scale (see Appendix A) to
each student, and we compared the criteria on the rating scale to the criteria they listed on the
white boards.
Throughout, the process itself was deemed important: it was through this process that participants in both Model C and Model E learned the composition of WebQuests and began to understand the attributes of a quality WebQuest.
Instruction for Model C. After the above introduction to WebQuests, participants in the control group completed the following out-of-class activities (Model C):
1. Participants worked independently out of class to create a WebQuest. They had an open invitation to meet privately with the instructor, but primarily they worked on their own, using the sample WebQuests (see above) and the WebQuest Rating Scale (see Appendix A) as their guide.
2. Each participant conducted a self-evaluation using the WebQuest Rating Scale, made revisions to his or her WebQuest, and sent a copy to the three peers assigned to work together.
3. Each participant received three peer evaluations based on the WebQuest Rating Scale. I formed participants into groups of four; each group member emailed the other members a copy of his or her WebQuest, and group members rated one another's WebQuests using the scale.
4. A half hour of class time was dedicated to discussion and in-class questions and answers, especially over common problem areas. Prospective teachers then revised their WebQuests.
5. Each participant completed and submitted a second self-evaluation using the
WebQuest Rating Scale.
6. Participants submitted rough drafts to the instructor and then received feedback: I sent each participant detailed, in-depth comments on each section of his or her WebQuest.
7. Participants made final revisions and submitted their final WebQuests.
Instruction for Model E. The next semester, I changed my instruction. Model E grew out of my discontent with Model C: I struggled with the lack of critical feedback participants received from their peers, and I struggled with my own cost in time and labor to give feedback on rough drafts. Participants' apparent need for such extensive feedback suggested that most lacked sufficient knowledge to construct quality WebQuests without ongoing support. Model E emerged as an alternative model of instruction, designed to support pre-service teachers while reducing demands on the instructor.
I weighed the benefits of the WebQuest assignment against existing in-class activities and
determined that WebQuest creation was worthy of increased in-class focus. I replaced the Model
C out-of-class activities with the following in-class activities:
1. Participants received an exemplar, or prototype, to benchmark top-level proficiency. The exemplar was much like the sample referenced above (Sample WebQuest: Vancouver 2010). My expectations differ somewhat from those embodied in the sample WebQuests participants analyzed during the introductory activity (see above); a sample like the one referenced is more in line with my expectations. I include this sample rather than my own so that the reader might see the quality of WebQuest a pre-service teacher is capable of creating.
2. Participants worked on their WebQuests in class: I dedicated two additional in-
class hours to WebQuest creation. The two hours were spread over two class
periods.
3. As the instructor, I circulated to give prompts, ask questions to scaffold and guide, and offer words of encouragement. I gave participants the "go ahead" or gave assistance as they completed the following checkpoints (a link-check sketch follows this list):
Introduction is problem-based.
Knows how to create a PowerPoint and how to hyperlink.
Task relates to the problem posed in the introduction.
Process describes students' roles, includes the function of each role in the real world, and gives a perspective for each role.
Resources: the first set of hyperlinks is conducive to building students' background knowledge of the topic.
4. Participants worked out-of-class to complete their final WebQuests.
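One of the checkpoints above, confirming that resource hyperlinks are up and running, is mechanical enough to automate. Here is a minimal sketch using only the Python standard library; the URL shown is a placeholder, not one from the study.

```python
# Hypothetical helper: verify that a WebQuest's resource links respond.
# Uses only the standard library; the URL below is a placeholder.
from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError

def check_links(urls, timeout=10):
    """Return a {url: status} map, where status is an HTTP code or an error."""
    results = {}
    for url in urls:
        req = Request(url, method="HEAD")  # HEAD avoids downloading the page
        try:
            with urlopen(req, timeout=timeout) as resp:
                results[url] = resp.status
        except HTTPError as err:
            results[url] = err.code          # e.g., 404 for a dead link
        except URLError as err:
            results[url] = f"unreachable: {err.reason}"
    return results

if __name__ == "__main__":
    for url, status in check_links(["https://example.com/"]).items():
        print(url, "->", status)
```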
An analysis of participants’ WebQuests would reveal the differences between the WebQuests
created under instruction Model E and those created under instruction Model C.
Data Collection and Data Analysis
Data were collected throughout the study in the form of observation reports, anecdotal
records, final WebQuest submissions, and checks along the way (rough draft for Model C and
five checkpoints for Model E). The primary data reported in this paper relates to the quality of
participants’ final WebQuests. As is expounded on below, to determine WebQuest quality, I
began by identifying the criteria found in assessment tools developed by others (Bellofatto, Bohl,
Casey, Krill & Dodge, 2001; eMints National center, 2006; Hassanien, 2006). From these
samples, I adopted the criteria they listed as important, added other criteria to further clarify my
expectations, and developed what I considered a valid and reliable evaluation tool.
Evaluation Tool. I developed a WebQuest Rating Scale to guide participants’ WebQuest
construction and to measure WebQuest quality. To generate the rating scale, I perused and
synthesized elements from others' WebQuest assessment rubrics (Bellofatto et al., 2001; eMINTS National Center, 2006; Hassanien, 2006). Bellofatto and colleagues' (2001) rubric includes descriptors for each section of the WebQuest: for example, visual appeal, mechanical aspects, relevance and social importance, relationship to standards, clarity, assignment of roles, timeliness of links, and evaluation criteria. eMINTS National Center (2006) includes an emphasis on graphic elements, spelling and grammar, compelling questions, clarity of tasks and process, group work, and quality of resources. I tested all criteria by evaluating two hundred examples of WebQuests found on the Internet and then refined the rating scale to fit what I felt was critical for a quality WebQuest, retaining some criteria, rewording other criteria, and adding some criteria of my own (see Appendix A).
I developed an additional assessment scheme: the WebQuest Scoring Rubric. The rubric
served as an efficient, effective, holistic means to evaluate and describe ranges of performance.
To compose the rubric, I took key criteria from the WebQuest Rating Scale (See Appendix B:
WebQuest Scoring Rubric).
I used the WebQuest Scoring Rubric to assess each participant's WebQuest on five different occasions. To triangulate my data and improve qualitative reliability, two other university professors who are knowledgeable about WebQuest quality conducted a separate review of participants' WebQuests.
In the midst of their analysis, one of the outside markers requested samples for the first
three sections of the rubric. In response, I compiled exemplars for various levels of proficiency
for the more subjective criteria (see Appendix C). The outside markers and I then completed
separate analyses using the WebQuest Scoring Rubric and accompanying exemplars.
I found the differences among the three markers insignificant: p = .663 (between-groups analysis of variance). Though my mean scores for all sections of the WebQuests were higher than those of either outside evaluator, the overall totals and the categorization into levels of quality were close enough to render the difference among markers insignificant, thus establishing high inter-rater reliability.
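As a rough outline of the inter-rater check described above (not a reproduction of the study's data), a one-way ANOVA over the three markers' totals can be run as follows; all scores are invented placeholders.

```python
# Hypothetical illustration of the inter-rater check: a one-way ANOVA over
# three markers' totals for the same WebQuests. All scores are invented.
from scipy import stats

marker_author = [18, 17, 14, 19, 16, 15]   # author's rubric totals (made up)
marker_two    = [17, 16, 13, 18, 15, 14]   # outside marker 1 (made up)
marker_three  = [17, 17, 13, 18, 14, 15]   # outside marker 2 (made up)

f, p = stats.f_oneway(marker_author, marker_two, marker_three)
print(f"F = {f:.2f}, p = {p:.3f}")  # a large p suggests the markers agree
```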
Results
Through this research study, I sought answers to four questions. In this section, I address
each of those four questions.
Research Question One
The following research question set the parameters of the study:
Which model of instruction engendered the highest quality of prospective
teachers’ WebQuests: instruction Model C with written feedback and peer and
self-evaluation or instruction Model E with in-class scaffolds and an exemplary
product sample?
To determine an answer to this question, I called on two outside markers to join me as I rated
participants’ WebQuests using the WebQuest Scoring Rubric. Working independently, we rated
the WebQuests on their introduction, task, process, resources, and overall content.
Compiling and comparing the data, I found the control group's WebQuests were of lower quality than the experimental group's WebQuests (as determined by a two-way between-groups ANOVA); see the graph in Figure 2. As Figure 2 suggests, both the control group and the experimental group show very small differences among the three subgroups: Subgroup A graduates, Subgroup B graduates, and undergraduates.
Figure 2
SubGroups' WebQuest Quality (graph not reproduced here)
An analysis of the graphs in Figure 2 showed a very small statistical difference among the three subgroups. In contrast, the difference between the control group and the experimental group as a whole was statistically significant (see Table 1).[ii] To determine the statistical variance between groups, I ran an ANOVA.
Table 1
Control Group and Experimental Group's WebQuest Mean Scores

Group          N    WebQuest Mean
Control        41   2.68 (67%)
Experimental   45   3.44 (86%)
The WebQuest mean for the control group was 2.68 (67%), while the WebQuest mean for the experimental group was 3.44 (86%). These data suggest that the WebQuests created by the experimental group under instruction Model E were of higher quality than the WebQuests created by the control group under instruction Model C.
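For readers who want to see the shape of the analysis, the following sketch re-creates a two-way between-groups ANOVA like the one reported in endnote [ii], using synthetic rubric scores on the 0-4 scale (where 2.68/4 = 67% and 3.44/4 = 86%). Because the data are invented, the output will not match the study's statistics.

```python
# Hypothetical re-creation of a two-way between-groups ANOVA like endnote [ii]:
# factors are instruction model (C vs. E) and subgroup; scores are synthetic
# rubric means on the 0-4 scale (2.68/4 = 67%, 3.44/4 = 86%).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
rows = []
for model, mean, ns in [("C", 2.68, (12, 12, 17)), ("E", 3.44, (12, 12, 21))]:
    for subgroup, n in zip(["Undergrad", "GradA", "GradB"], ns):
        for score in rng.normal(mean, 0.4, n):
            rows.append({"model": model, "subgroup": subgroup,
                         "score": float(np.clip(score, 0, 4))})
df = pd.DataFrame(rows)

fit = smf.ols("score ~ C(model) * C(subgroup)", data=df).fit()
print(anova_lm(fit, typ=2))  # main effects and the model x subgroup interaction
```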
Research Question Two
The second research question asked the following:
Which instructional scaffolds hold the greatest potential for promoting pre-
service teachers’ development of quality WebQuests: Written feedback? Peer
Evaluation? Self-Evaluation? In-class scaffolds? A sample of an exemplary
product?
To determine which scaffold made the biggest difference in WebQuest quality, I began by separating the variables for Model E. The first variable, in-class scaffolds, could be isolated into three categories based on participants' in-class attendance: those absent both class periods (received no in-class scaffolds), those present one class period (received a moderate amount of in-class scaffolds), and those present two class periods (received all the in-class scaffolds they sought).
Attendance records show that 2 students were absent both periods of in-class instruction, 9 students received one hour of instruction, and 34 students were present the entire two hours. These numbers are too small and too uneven to establish any significant correlation between WebQuest quality and time in class. Even so, when I looked at WebQuest quality among these attendance subgroups, there was no significant difference between the mean qualities of their
WebQuests. The worst WebQuest came from a participant who was absent both class periods,
and one of the best WebQuests came from the other student who was absent both class periods.
The differences in quality between the WebQuests created after one hour of in-class time and
two hours of class time were insignificant, though the data may not be reliable since I spent extra
time the second hour with those participants who were absent the first hour to help them catch
up.
Since all prospective teachers in the experimental group received the prototype and no participants in the control group did, this variable might be considered responsible for the increase in WebQuest quality between Models C and E. Additional research into this variable is needed before any conclusive statements can be made concerning the effect of a prototype.
To conclude, individual interventions resisted analysis, so I could not determine which scaffold had the greatest effect on WebQuest quality. Consequently, for this study, the scaffolds are considered as sets rather than individually.
Research Question Three
The third research question asked the following:
How closely does the quality of prospective teachers’ WebQuests match the
quality of participants’ other course assignments?
First, I tallied the scores on other course assignments and found the difference in quality between the two groups minimal and insignificant (p = .309). For the control group, the mean for "other course assignments" was 92%; for the experimental group, it was 91%. These percentages suggest that "other course assignment" quality was essentially the same for both groups.
Next, I compared the mean for the WebQuest assignment with the mean for "other course assignments" and found a significant difference (p < .005).[iii]
Then I compared the differences for each group. The control group showed a significantly larger difference between WebQuest quality and the quality of "other course assignments." Though the experimental group also showed a large difference, that difference was not as pronounced as the control group's.[iv] For the control group, the mean for the WebQuest was 67% and the mean for "other course assignments" was 92%, a difference of 25 percentage points; for the experimental group, the mean for the WebQuests was 86% and the mean for "other course assignments" was 91%, a difference of 5 percentage points. See Table 2 below.
According to the means, the WebQuest quality for both groups of participants was lower
than the level of quality achieved in other course assignments. However, WebQuests created by
the experimental group were higher in quality than the control group WebQuests and closer to
the quality of work participants produced in other course assignments.
Table 2
Comparison of Mean Scores for Control Group and Experimental Group

Group          N    WebQuest Mean   "Other Course Assignment" Mean   Mean Difference
Control        41   67%             92%                              25 points
Experimental   45   86%             91%                              5 points
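Endnote [iii] describes a one-way between-groups MANOVA with WebQuest quality and other-assignment quality as the combined dependent variables. The sketch below outlines that design with synthetic percentages centered on the Table 2 means; it is illustrative only and will not reproduce the study's statistics.

```python
# Hypothetical outline of a MANOVA like endnote [iii]: group (control vs.
# experimental) predicting two dependent variables jointly. Data are synthetic
# percentages centered on the means in Table 2 (67/92 and 86/91).
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(1)
control = pd.DataFrame({
    "group": "control",
    "webquest": rng.normal(67, 8, 41),   # made-up scores around Table 2 means
    "other": rng.normal(92, 4, 41),
})
experimental = pd.DataFrame({
    "group": "experimental",
    "webquest": rng.normal(86, 8, 45),
    "other": rng.normal(91, 4, 45),
})
df = pd.concat([control, experimental], ignore_index=True)

fit = MANOVA.from_formula("webquest + other ~ group", data=df)
print(fit.mv_test())  # reports Wilks' lambda, as in endnote [iii]
```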
Research Question Four
The fourth research question asked the following:
Which sections of the prospective teachers' WebQuests are the strongest? Which sections are the weakest?
See Table 3 for a comparison of WebQuest mean quality between groups (control and
experimental) and sections (introduction, task, process, resources, and content).
Table 3
Comparison of the Mean Scores for WebQuest Quality across Sections

Group          Introduction   Task   Process   Resources   Content
Control        62%            63%    57%       78%         73%
Experimental   83%            84%    84%       90%         89%
For both groups, the strongest section was the resources section. The weakest section differed by group: for the control group, the weakest section was the process section; for the experimental group, the introduction, task, and process sections were nearly equal as the weakest.
Discussion and Summary
The results of this study suggest that when this instructor chose between an instruction model with in-class scaffolds and an exemplary product sample (Model E) and an instruction model with written feedback and peer and self-evaluation (Model C), the former resulted in higher quality WebQuests.
Though it evades statistical verification, anecdotal records suggest that the timing and the
interactive nature of feedback given for Model C may have been a factor. That is, Model C
participants received extensive written feedback, but that feedback was given near the end of the
project; and, except for five participants who came to my office for help, the submission of the
rough draft was the first time I became aware of pre-service teachers’ progress. In contrast,
Model E participants developed WebQuests in the midst of a community with the availability of
immediate verbal feedback. Model E gave me the opportunity to interact with participants,
giving prompts, redirecting with questions, offering encouragement, answering queries, leading
whole-class mini-lessons, and facilitating discussions when participants dealt with similar
problems or issues. In place of written peer assessment, I encouraged participants to ask for and give oral feedback to one another. When participants in the control group came to an impasse, I was unaware of their struggles; when participants in the experimental group came to an impasse, I and their peers were available to help them refocus.
When the control group asked for a prototype, I resisted, and a few of the pre-service teachers expressed frustration over their uncertainty about expectations. I was reluctant to give prospective teachers a prototype that exemplified perfect completion of the assignment. I assumed such a model would rob pre-service teachers, especially those in the master's program, of high-level thinking and would, in a sense, be tantamount to doing students' work for them. By the time I developed Model E, I had reconsidered. I began to equate the do-it-on-your-own attitude with abandoning students who need direction. Rather than a mental crutch that thwarts participants' high-level thinking, I began to view prototypes as a means to clarify and specify expectations and to help participants internalize key elements of exemplar quality. I am now convinced that giving a prototype makes expectations clear and that such a scaffold aligns teaching and learning. As Loughran and colleagues advocate, teacher training is best when
taught using strategies teachers would use as they instruct their own students (Loughran, Hamilton, LaBoskey, & Russell, 2004).
Some evidence suggests the scaffolds characterizing Model E may not be necessary for all pre-service teachers. Not all participants in the control group needed alternative scaffolds: twenty-two percent of the control group created very high quality WebQuests, earning 95% to 100%, without exposure to the additional scaffolds found in Model E. As well, a few participants in the experimental group reacted negatively to spending in-class time working on WebQuests. Though all but two prospective teachers expressed a positive attitude, two undergraduate participants stated that they would rather work on the assignment at home; one said working in class was distracting. Nonetheless, class time was used effectively, and the effort devoted to in-class scaffolding of participants' progress may have been a factor in the meaningful use of technology. Continuing research is needed to determine whether this supposition is well founded.
An analysis of anecdotal records indicates participants needed scaffolds the most in four areas: a) determining a real-life problem-solving situation, b) becoming aware of alternative
perspectives for a problem, c) determining roles, and d) working with PowerPoint and
hyperlinks. The first two struggles were most frequently resolved by suggesting participants
conduct further research to gain a better understanding of their topic. The third struggle was best
resolved by using prompts to get participants brainstorming possible occupations and/or persons
involved or interested in the problem. Close to one-fourth of the pre-service teachers needed
help with PowerPoint, and more than one-third requested assistance inserting hyperlinks.
Knowledgeable peers helped the most with this fourth struggle.
The WebQuest Rating Scale provided valuable criteria for separating the assignment into
straightforward, manageable components—though I plan to revise the rating scale to better
emphasize the need for alternative perspectives. The WebQuest Scoring Rubric was valuable in
many ways: in its economy, its attention to the authenticity of problem-based learning, and its
support for alternative perspectives. Next time I teach the course, I plan to give pre-service
teachers a copy of this rubric, along with exemplars for the rubric (Appendix C).
I plan to continue my research on this topic this coming year. In that study, I will
separate the various scaffolds and provide a forum for participants to articulate which scaffolds
they found most beneficial.
Concluding Remarks
We teachers must analyze our present practices to seek ways to improve instruction (Loughran et al., 2004). In this age of technology, it is essential that educational uses of technology be meaningful and promote significant student learning. Just as I have learned from and been inspired by other instructors' research findings, I hope the sharing of my research will resonate with other teacher educators. May they too investigate ways to scaffold pre-service teachers' significant uses of technology. As well, I hope this article will inspire teachers, both pre-service and in-service, to view the Internet as an impetus for high-level, critical thinking, especially in the creation of dynamic, real-world, problem-solving WebQuests.
References
Abu-Elwan, R. (2007). The use of WebQuest to enhance the mathematical problem-posing skills of pre-service teachers. The International Journal for Technology in Mathematics Education, 14(1), 31-39.
Anderson, L. W. & Krathwohl, D.R. (Eds.). (2001). A taxonomy for learning, teaching, and
assessing: A revision of Bloom’s taxonomy of educational objectives. New York:
Longman.
Bellofatto, L., Bohl, N., Casey, M., Krill, M., & Dodge, B. (2001). A rubric for evaluating
WebQuests. Retrieved from http://WebQuest.sdsu.edu/WebQuestrubric.html
Bloom, B. (1956). Taxonomy of educational objectives, Handbook I: The cognitive domain.
New York: David McKay Co. Inc.
Borg, W., Gall, J., & Gall, M. (1993). Applying educational research: A practical guide. New
York: Longman.
Brooks, S. & Byles, B. (2009). Internet4Classrooms. Retrieved from
http://www.internet4classrooms.com/using_quest.htm#definition
Chambers, J. (1988). Teaching thinking through the curriculum—where else? Educational
Leadership, 45, 4-6.
Dodge, B. (2009). A WebQuest about WebQuests. Retrieved from San Diego State University: http://webquest.sdsu.edu/webquestwebquest-es.html
Dodge, B. (1997). Some thoughts about WebQuests. Retrieved from San Diego State University:
http://WebQuest.sdsu.edu/about_WebQuests.html
Doering, A., Hughes, J., & Huffman, D. (2003). Preservice teachers: Are we thinking with
technology? Journal of Research on Technology in Education, 35(3), 342-361.
eMINTS National Center. (2006). Professional development for teachers by teachers from the University of Missouri: Rubric/scoring guide. Retrieved from http://www.emints.org/WebQuest/rubric.shtml
Ertmer, P.A. (2005). Teacher pedagogical beliefs: The final frontier in our quest for technology
integration? Educational Technology Research and Development, 53(4), 25-39.
Gagnon, G., & Collay, M. (2006). Constructivist learning design for classroom teaching.
Thousand Oaks: Corwin Press.
Hassanien, A. (2006). An evaluation of the WebQuest as a computer-based learning tool.
Research in Post-Compulsory Education, 11(2), 235-250.
Jonassen, D. H., Howland, J., Moore, J., & Marra, R. M. (2003). Learning to solve problems with technology: A constructivist perspective (2nd ed.). Upper Saddle River, NJ: Merrill Prentice Hall.
Kolb, D. A. (1984). Experiential Learning: Experience as the source of learning and
development. New Jersey: Prentice Hall.
Loughran, J., Hamilton, M. L., LaBoskey, V., & Russell, T. (Eds.). (2004). International handbook of self-study of teaching and teacher education practices. Dordrecht, Netherlands: Springer.
March, T. (2008). Assessing best WebQuests. Retrieved from
http://bestWebQuests.com/bWebQuest/matrix.asp
March, T. (2000a). The 3 R’s of WebQuests: Let’s keep them real, rich, and relevant.
Multimedia Schools, Nov/Dec. Retrieved from
http://www.infotoday.com/MMSchools/nov00/march.htm
March, T. (2000b). WebQuests 101: Tips on choosing and assessing WebQuests. MultiMedia
Schools, 7(5), 55-58.
National Center for Educational Statistics [NCES]. (2005). Internet access in US public schools
and classrooms: 1994-2003. Washington, DC: US Department of Education. Retrieved
from http://nces.ed.gov/programs/digest/d05/
Paul, R., Binker, J., Jensen, K., & Kreklau, H. (1997). Critical thinking handbook. Rohnert Park, CA: Foundation for Critical Thinking.
Russell, M., Bebell, D., O'Dwyer, L. M., & O'Connor, K. (2003). Examining teacher technology use: Implications for preservice and inservice teacher preparation. Journal of Teacher Education, 54(4), 297-310.
Simina, V. & Hamel, M. (2005). CASLA through a social constructivist perspective: WebQuest
in a project-driven language learning. ReCALL, 17(2), 217-228.
Strickland, J. (2005). Using WebQuest to teach content: Comparing instructional strategies.
Retrieved from http://www.citejournal.org/vol5/iss2/socialstudies/article1.cfm
Strudler, N., Archambault, L., Bendixen, L., Anderson, D., & Weiss, R. (2003). Project THREAD: Technology helping restructure educational access and delivery. Educational Technology Research and Development, 51(1), 57-72.
Vidoni, K., & Maddux, C. (2002). WebQuests: Can they be used to improve critical thinking skills in students? Computers in the Schools, 19(1/2), 101-117.
Vygotsky, L. (1986). Thought and language (A. Kozulin, Ed., Rev. ed.). Cambridge, MA: MIT Press.
Woolfolk, A. (2001). Educational psychology (8th ed.). Boston: Allyn and Bacon.
Zheng, R., Stucky, B., McAlack, M., Menchana, M., & Stoddart, S. (2005). WebQuest learning as perceived by higher-education learners. TechTrends, 49(4), 41-49.
[i] Marks for the control group (M = 90.84, SD = 8.227) and the experimental group (M = 92.29, SD = 7.291): t(84) = -.86, p = .39 (two-tailed). The magnitude of the difference in means (mean difference = -1.15, 95% CI: -4.8 to 1.9) was very small (eta squared = 0.0087).
[ii] F(1, 80) = 20.2, p < .0005. The effect size (partial eta squared = .2) suggests a large effect between groups: the experimental group's WebQuests were significantly higher in quality than those of the control group.
[iii] To investigate statistical differences in quality between WebQuests and other course assignments, I performed a one-way between-groups multivariate analysis of variance (MANOVA). The multivariate tests showed a statistically significant difference between WebQuest quality and other course assignment quality on the combined dependent variables, F(2, 83) = 15.6, p < .0005; Wilks' lambda = .73; partial eta squared = .27.
[iv] To compare the variances of each group separately, I conducted a one-way repeated measures ANOVA with a Bonferroni adjustment. The control group showed a significantly large difference between WebQuest quality and the quality of other course assignments: F(1, 40) = 61.4, p < .0005; Wilks' lambda = .39; partial eta squared = .606. The experimental group also showed a large difference, though not as large as the control group's: F(1, 44) = 6.48, p = .015; Wilks' lambda = .87; partial eta squared = .13.
Appendix A
WebQuest Rating Scale

Performance levels: 0—Absent, 1—Poor, 2—Fair, 3—Good, 4—Excellent. Each criterion is rated 0-4; section scores are totaled as indicated.

INTRODUCTION (score: ___/4)
1. The introduction describes a compelling real-life problem or issue that inspires students to make a positive change in the world.

TASK (score: ___/4)
2. Task can be referenced to state/provincial standards.
3. To answer the essential problem/issue requires students to function at one of the top three levels of Bloom's taxonomy.
4. The learner is encouraged to invent his or her own solution.

PROCESS (score: ___/4)
Clarity
5. Every step in the process is clearly stated.
6. Most students would know exactly where they are at each step of the process and know what to do next.
7. Activities are designed to transform students' thinking from basic knowledge to the construction of new meaning through high-level thinking.
Richness
8. Different roles are assigned to help students understand different perspectives and to share responsibility in accomplishing the task.
Process One
9. Contains hyperlinks for all team members to build background knowledge related to the problem.
Process Two
10. Roles are relevant to the problem posed in the introduction.
11. Each role has a number of role-specific hyperlinks (at least 5).

RESOURCES (score: ___/20)
Annotated
12. Resources are annotated.
Relevance
13. There is a clear and meaningful connection between all resources and the information needed for students to accomplish the task.
14. Checkpoints along the way scaffold and guide students' progress.
Quality
15. Varied resources provide enough meaningful information for students to think deeply and from different perspectives.
Mechanical Aspects
16. All websites are up and running.
Navigation and Flow
17. Clear navigation and flow to all websites.
18. It is clear what to do when you get to the site.

CONCLUSION (score: ___/4)
19. Gives an overview or review of key ideas.
20. Challenges students to transfer their learning to other topics and issues and/or challenges students to improve a situation studied.

EVALUATION (score: ___/8)
21. The assessment tool clearly measures students' acquisition of knowledge, concepts, and/or skills.
22. Criteria for success are found on a rubric, checklist, rating scale, or other student-friendly assessment form.

CONTENT: Conceptual Knowledge and Skills (score: ___/12)
23. The WebQuest is conducive to substantially increasing students' conceptual knowledge and/or skills for the topic studied.

OVERALL
Appropriateness (score: ___/4)
24. Is age/grade appropriate.
World Changing (score: ___/10)
25. Throughout, the WebQuest inspires students to make a positive change in the world.
Focus (score: ___/10)
26. The focus is on using information, not looking for it, and on supporting learners' thinking at the levels of analysis, synthesis, and evaluation.

TOTAL: ___/80
Appendix B
WebQuest Scoring Rubric

Introduction (Problem Statement)
Criterion: Describes a compelling real-life problem that inspires students to make a positive change in the world. More than one perspective is considered.
Level 1: There is an introduction, but it fails to include a problem.
Level 2: The introduction includes an unrealistic or weak real-life problem and fails to explain why a situation is a problem.
Level 3: The introduction includes a real-life problem but fails to include why the situation is a problem. The introduction presents only one perspective on the problem.
Level 4: The introduction includes a compelling real-life problem. It gives an overview of why this is a problem or suggests how dealing with the problem could bring about positive change. The introduction suggests more than one perspective.

Task
Criterion: The products become a means for students to disseminate their findings and decisions regarding the problem and the promotion of positive change. The task requires students to function at a high level of Bloom's taxonomy.
Level 1: The task appears unrelated to the problem, or the task is not one found in real life. The task fails to call on students to disseminate findings, decisions, or reasons for decisions.
Level 2: The task is weak in relation to the problem or weak in relation to a real-life task. The task calls on students to disseminate information at the lowest level of Bloom's taxonomy: knowledge.
Level 3: The task is related to the problem and is a real-life task that promotes positive change. The task calls on students to function at the comprehension and/or application level of Bloom's taxonomy.
Level 4: The task is related to the problem and is a real-life task that promotes positive change. The task calls on students to function at the analysis, synthesis, and/or evaluation level of Bloom's taxonomy.

Process
Criterion: Organizational framework and activities are from various perspectives (roles) and are designed to transform students' thinking from basic knowledge to new meaning through high-level thinking.
Level 1: Organizational framework and activities fail to be conducive to transforming students' thinking. No specific roles, or roles are unrelated and/or irrelevant to the problem.
Level 2: Organization, roles, and activities promote only basic knowledge. Roles assume only one problem-related perspective.
Level 3: Organization, roles, and activities promote construction of new meaning at the comprehension and/or application level. Roles assume two problem-related perspectives.
Level 4: Organization, roles, and activities clearly promote the construction of new meaning through high-level thinking. Roles assume at least three problem-related perspectives.

Resources
Criterion: Varied and numerous resources provide enough information for students to think deeply and from different perspectives.
Level 1: There are no web-based links, links are unrelated to the topic, or links lack meaningful information.
Level 2: Resources offer one perspective. Meaningful information can be gleaned from at least six resources.
Level 3: Resources offer two perspectives. Meaningful information can be gleaned from at least nine resources.
Level 4: Resources provide highly meaningful information from at least three perspectives and at least twelve resources.

Overall Content
Criterion: Taken as a whole, the WebQuest is highly conducive to increasing students' conceptual knowledge and/or skills for the topic.
Level 1: Taken as a whole, the WebQuest is not conducive to building students' conceptual knowledge and/or skills for the topic under study.
Level 2: Taken as a whole, the WebQuest is somewhat conducive to building students' conceptual knowledge and/or skills for the topic under study.
Level 3: Taken as a whole, the WebQuest is conducive to building students' conceptual knowledge and/or skills for the topic under study.
Level 4: Taken as a whole, the WebQuest is highly conducive to building students' conceptual knowledge and/or skills for the topic under study.
Appendix C
Exemplars for WebQuest Quality
Topic: the Great Lakes
Introduction: The introduction describes a compelling real-life problem or issue that inspires
students to make a positive change in the world.
Level 1: There is an introduction, but it fails to include a problem.
A variety of fish live in the Great Lakes. You will learn about fish and fishing in the Great
Lakes and will report your findings to the government.
Level 2: The introduction includes an unrealistic or weak problem or fails to explain why a situation is a problem.
You want to be a fisherman on the Great Lakes. To decide what type of fisherman to be, you
need to find out more about the kind of fishing that takes place on the Great Lakes. Find out
which type of fish sell for the most money and where you can sell your fish.
Level 3: The introduction includes a real-life problem, but fails to include why the situation is a
problem. The introduction suggests only one perspective in relation to the problem.
Since the beginning of industrialization, factories and citizens have used the Great Lakes as dumping grounds for rubbish: raw sewage, animal carcasses, excess chemicals, and everything in between. This rubbish is killing fish. You have been appointed to help clean up the problem resulting from these deposits.
Level 4: The introduction includes a real-life problem. It gives an overview of why this is a
problem or suggests how dealing with the problem could bring about positive change. The
introduction suggests more than one perspective.
Due to extensive pollution, your local beach on one of the Great Lakes has banned
swimming. Pollution also affects lake inhabitants. Fish, for example, are being found
floating, dead, in the lakes. This has not stopped fishermen who remain out on the lake
baiting, catching, and eating these contaminated fish. Talk of cleaning up the lake and
stopping factories from depositing pollutants in the lake makes many people in your town
fear that these actions will cause factories to move, and town people will lose their jobs.
Task: The products become a means for students to disseminate their findings and decisions
regarding the problem or issue. The task requires students to function at a high level of Bloom’s
taxonomy: analysis, synthesis, or evaluation.
Level 1: The task appears unrelated to the problem or the task is not one found in real life. The
task fails to call on students to disseminate findings, decisions, and reasons for those decisions.
Write a report on the fish found in the Great Lakes. In the report, include a picture of each
type of fish and write comments for each species of fish.
Level 2: The task is weak in relation to the problem or weak in relation to a real-life task. The
task calls on students to disseminate information at the lowest level of Bloom’s Taxonomy:
knowledge.
You and your team will research the topic of pollution on the Great Lakes. You will record
information about the causes of pollution, record information on the animals and plants that
have been affected by pollution, and list possible ways to stop lake pollution. You will gather
pictures and replicas of flora and fauna that have become extinct and endangered due to lake
pollution. You will put your information together on a poster to share with your class.
Level 3: The task is related to the problem and is a real-life task that promotes positive change.
The task calls on students to function at the comprehension and/or application level of Bloom’s
taxonomy.
Your job is to research and discuss the causes and effects of pollution on the survival of
Great Lakes flora and fauna. Record this information on posters. When presented to the
class, give examples to validate causes and effects. Predict what will happen to the Great
Lakes in a year, five years, and ten years if measures are not taken to prevent further
pollution. Include suggestions on how to deal with the problem.
Level 4: The task is related to the problem and is a real-life task that promotes positive change.
The task calls on students to function at the analysis, synthesis, and/or evaluation level of
Bloom’s taxonomy.
The Great Lakes Environmental and Public Works Committee commissioned your team to
deal with the problem of pollution on the Great Lakes. The team will research the issue and
possible solutions to the problem. The team, made up of representatives from factories and
other interest groups, will meet to discuss and then formulate a plan that is acceptable to all
parties. The plan will include strategies for cleaning up pollution and stopping further
dumping of pollutants into the lakes. The plan must consider the needs of factories located on the Great Lakes and of factory employees. Create a poster presentation and write a newspaper
article that gives particulars of the problem and presents the plan for dealing with all issues.
Process: Organizational framework and activities are from various perspectives (roles) and are
designed to transform students’ thinking from basic knowledge to new meaning through high-
level thinking.
Level 1: . . . No specific roles, or roles are unrelated and/or irrelevant to the problem.
Each person in the group will learn about one of the Great Lakes. (no specific roles)
Reporter—you will research and report what the group discovers.
Journalist—you will write an article on pollution in the Great Lakes.
Weather Planner—you will determine what to wear when visiting the Great Lakes.
Level 2: . . . Roles assume only one problem-related perspective.
Scientist—research Great Lakes flora and fauna and the effects of lake pollution on survival.
Biologist—learn about how pollution harms animals in and around the Great Lakes.
Research Scientist—study pollution and the damage it does to living things.
Marine Biologist—report on endangered and extinct species of marine life in the Great Lakes and the part lake pollution plays in endangerment and extinction.
Level 3: . . . Roles assume two problem-related perspectives.
Scientist—(same as above)
Factory Representative—determine types of factories found on the Great Lakes, number of
people employed by these factories, and why factories need to put chemicals into the
lakes.
Marine Biologist: report on endangered and extinct species of marine life in the Great Lakes
and the part lake pollution plays in endangerment and extinction.
Level 4: . . . Roles assume at least three problem-related perspectives. Examples:
Scientist—(see above)
Factory Representative—(see above)
Lawyer—study laws protecting factory owners, homeowners, fishermen, and others whose
livelihood and wellbeing are affected by pollution on the Great Lakes.
Environmentalist—study the cleanup and prevention of lake pollution.
Medical Health Officer—study lake pollution and its effect on public health.
Farmer—study farmers’ need for phosphorus fertilizer and the result to crops if phosphorus
fertilizers are not used. (Farm run-off pollutes the Great Lakes.)