First Critical Review of Research of Digital game-based learning: Impacts of instructions
and feedback on motivation and learning effectiveness
Jeremy T. Bond
EDU 800 - Central Michigan University
Problem (about 1 page)
1. Identify the clarity with which this article states a specific problem to be explored.
Though the problem to be explored is most clearly stated in the article’s abstract,
the authors do eventually arrive at a reasonably clear statement of the problem within
the article’s introduction. Specifically, amidst a plethora of literature relating to
digital game-based learning (DGBL), the authors seek “to identify the conditions under
which DGBL is most effective, by analyzing the effects of two different types of
instructions (learning instruction vs. entertainment)” (Erhel & Jamet, 2013, p. 158).
Though their purpose is most explicitly offered in the abstract, a reading of section 1.4
of the introduction makes clear that a great deal of research has occurred around DGBL,
producing contradictory results in some cases, as other studies looked across issues of
motivational impact, comparative benefit, and bearing on learning effectiveness. It is
here, in section 1.4, that the authors’ preoccupation with the documented impact of
instruction type on reading effectiveness and the attainment of deep learning, as opposed
to surface or rote learning, emerges; the desire to explore the impact of differing types
of instructions on the experience of DGBL is made most clear as the focus of the research.
2. Comment on the need for this study and its educational significance as it relates to this problem.
Although a significant number of studies have examined DGBL, the focus of the
vast majority differs notably from that of this effort. In fact, most of the studies
included herein compared DGBL itself with non-game-based educational media, and the
varying impact of each on motivation, interest, effectiveness, etc. These purposes, while
interesting, are not those explored in Erhel and Jamet’s (2013) work, which, instead of
comparing DGBL with some other learning practice, confines its scope to DGBL and looks
specifically at the impact the type of instruction a learner receives prior to engaging
in DGBL has on the outcomes achieved, and secondarily at whether the addition of
“Knowledge of Correct Response (KCR)” feedback “can influence the types of learning
strategies induced by the instructions” (Erhel & Jamet, 2013).
By identifying the mixed results yielded in other studies, noting “…digital
learning games are of debatable educational worth…” but also accepting these as having
“benefits in terms of motivation and engagement,” Erhel and Jamet (2013) build a case
for their research. The mix of results identified by their meta-analyses further
demonstrates the existence of inconsistencies in studies ostensibly researching the same
thing – the impacts of DGBL. Finally, it is also indicated that “no one has so far
subjected the games’ instructions to scientific scrutiny, even though they are a
fundamental feature” (Erhel & Jamet, 2013). This single conclusive point alone
establishes a need for the study, earlier justification notwithstanding.
The educational ramifications could be of considerable value if, as they hoped,
the researchers were able to establish that a certain type of instruction, given prior to
engagement with a digital game environment, yielded reliably better learning performance
and the pursuit of mastery goals, as opposed to performance goals. From a pedagogical
perspective, then, teachers could adopt the method of issuing a particular type of
instruction prior to game-play and be better assured of deeper learning among students.
3. Comment on whether the problem is “researchable.” That is, can it be investigated through the collection and analysis of data?
The stated goal, to “ascertain whether the effects of instructions given during the
reading phase that have been observed for text-based learning would also manifest
themselves during DGBL” (Erhel & Jamet, 2013), is, in this author’s opinion, highly
“researchable.” The intent to have two groups pursue a digital game, one as a learning
exercise and the other as play, and then to determine what, if any, difference in
motivation and learning depth could be observed and measured, is a workable model.
Specifically, the assessment of learning measured by quiz performance could indicate that
one type of instruction leads to the pursuit of mastery goals, and therefore to deeper
learning and better performance. If the exercise is properly conducted, useful research
could indeed emerge from these endeavors. In particular, if introducing KCR (knowledge of
correct response) feedback were shown to further advance performance, this phenomenon
would further suggest the “researchability” of these matters.
Theoretical Perspective and Literature Review (about 3 pages)
4. Critique the author’s conceptual framework.
In more than one location, the reader is told that a value-added approach,
which in this case “consists in testing the effects of learning quality of adding
features to an educational game” (Erhel & Jamet, 2013), is the model to be applied in the
research. In their (2013) meta-analytic review, Wouters and van Oostendorp established
that “players require instructional support to engage in cognitive processes such as
selecting and actively organizing/integrating new information” by showing that the
presence of instruction improved learning outcomes. The research further “revealed that
the learning effect was largest when learning of skills was involved” (Wouters & van
Oostendorp, 2013). Though much of what is written of a value-added approach appears
to focus on the impact of the teacher, it is at its core a framework which seeks to test
the impact any single additional factor (e.g., type of instruction) has on learner
outcomes in a particular setting. The execution of the model, demonstrated in the
division of the participants into two groups, is well supported in prior research on
DGBL, but even more so in the study of reading. In fact, Erhel and Jamet (2013) adapted,
more or less entirely, their first experiment from van den Broek, Lorch, Linderholm, and
Gustafson (2001), which “asked individuals to read texts… in either a study condition…
or an entertainment condition.” In summary, the researchers’ framework is appropriate
and well supported for their endeavors.
5. How effectively does the author tie the study to relevant theory and prior research? Are all cited references relevant to the problem under investigation?
The authors’ review of literature is considerable and informative, first providing
elements of working definitions for the “new medium” of Digital Game-Based Learning
(DGBL) before moving into other areas. With an underlying meaning for DGBL in place, the
authors discuss three areas of note. First, an analysis of research relating to DGBL’s
motivational benefits is offered, followed by a review of comparative media studies, and
finally a look at whether instructions can augment DGBL’s impact. In each arena, the
authors demonstrate a degree of preoccupation with informing novices in the learning
space, sharing information on the distinction between mastery and performance goals,
deep and surface learning, and, to a lesser extent, intrinsic and extrinsic motivation.
The relevance of this background material is questionable, despite its possible value to
less experienced readers.
The authors give the impression of being focused more on distinguishing their
own work from previously completed related research, by addressing the impact of a
condition not addressed by prior efforts. A desire for distinction aside, the wide review
of DGBL research does serve to establish DGBL’s pervasiveness in education, as well as
the interest in it expressed by study in the scholarly community. In this fashion, Erhel
and Jamet demonstrate a necessary extension of research in the field of DGBL, though in
doing so certain less relevant work is cited, including the Liu, Horton, Olmanson, and
Toprac (2011) study correlating intrinsic motivation and learning scores. Many other
studies (sixty-eight) are also reviewed for their comparative purpose, but ultimately
serve as little more than a process of elimination, further distinguishing the uniqueness
of the authors’ research.
Only in the final section of the literature review does one find cited research
exploring themes directly related to the conditions which Erhel and Jamet seek to test.
This research, however, is not related to DGBL or game-play at all, but rather to matters
of reading, literacy, and reading-specific outcomes. As noted earlier, the work of van
den Broek, Lorch, Linderholm, and Gustafson (2001) is the closest representation offered.
Perhaps this meandering literature review serves, then, to provide sufficient evidence of
the unique condition of the research at hand, but it undeniably sets up a primary
condition to be tested in their first experiment: whether, in fact, the impact of
instruction type on reading is similar to that in DGBL.
6. Does the literature review conclude with a brief summary of the literature and its implications for the problem investigated?
The review of literature lacks a summative conclusion. Instead, it offers an
ending which could lead some to think the experimental goal is to support the notion
that relevant experiments in reading instruction are related to DGBL, and furthermore
that play instruction results in less meaningful learning, when in fact the authors’
goals are more specifically to confirm the relationship and determine which type of
instruction has a more positive impact on DGBL. Though the points presented in the
literature review’s conclusion bear a relationship to the study’s interests, they do not
entirely represent either the literature review offered to that point or the researchers’
full intentions.
7. Evaluate the clarity and appropriateness of the research questions or hypotheses.
As the authors executed two experiments, they reasonably have two related
hypotheses. In the first experiment, the stated premise was “…to ascertain whether the
effects of instructions given during the reading phase that have been observed for text-
based learning would also manifest themselves during DGBL” (Erhel & Jamet, 2013).
In the second experiment, they seek to test the impact of feedback on how learners carry
out tasks. Specifically, Erhel and Jamet (2013) “set out to determine whether the presence
of KCR feedback in DGBL quizzes can influence the type of learning strategies induced
by the instructions.” In each case, care is taken to limit the design to a single altered
condition: the second experiment maintains the independent variable already tested in
the first, adding as its own independent variable the presence of correct-response
feedback for both the play and learning groups.
Their intended ends are very clear, and, furthermore, the limited scope of the
conditions to be tested speaks not only to appropriateness, but to focus, clarity, and
discipline as well. By testing a single meaningful difference, the experiments’ observable
outcomes can reasonably be attributed to the impact of the test variable.
Research Design and Analysis (about 3 pages)
8. Critique the appropriateness and adequacy of the study’s design in relation to the research questions or hypotheses.
The research design will be discussed here as a single design, though differences
between the two experiments will also be noted. The design is comparative in nature: a
group of college-aged (18-26 years) persons, of an approximately equal mix of genders,
participated in a series of interactions. The subjects completed assessments based upon
their interactions with a digital learning environment known as ASTRA, while divided
into two smaller groups. Each group was provided with a different set of instructions
prior to engaging in the ASTRA interactions. In this value-added approach, the
researchers hoped, in the first experiment, to determine which form of pre-instruction
had more impact on learning and motivational measures in DGBL. In the second experiment,
KCR (Knowledge of Correct Response) feedback was added to both conditions. Overall,
participants had between two and three years of university experience, at various
institutions in Rennes, France. The mix of genders among participants, in both
experiments, included more women than men, though not by a large margin. The ratio of
women to men is representative of the increase in female college entrance rates relative
to those of males, a trend on the rise since 1991 (Mather & Adams, 2007).
The study’s design is reasonable and appropriate relative to the hypotheses to be
tested. The study size is not adequate, however: the group of test subjects is modestly
sized, selected from a relatively small geographical area, and drawn from a narrow range
of ages. The stated hypotheses make no reference to testing the conditions in question
within a specific age group or a finite geographical area. If more specificity were
present in the hypotheses, one might determine that the desire of the research is not
simply to determine “the conditions under which DGBL is most effective, by analyzing the
effects of two different types of instructions,” but rather to do so only among French
college students.
9. Critique the adequacy of the study’s sampling methods (e.g., choice of participants) and their implications for generalizability.
The research design is adequate for testing the hypotheses shared earlier;
however, a degree of specificity is lacking from the hypotheses, which creates at least
two notable issues. First, the outcomes posited in the hypotheses make no mention of the
age or level of the participants used in the study (i.e., one does not discover until the
experiment parameters are shared that the intent is to measure “effects of instructions
given during the reading phase” (Erhel & Jamet, 2013) on 18- to 26-year-old
college-attending subjects enrolled at schools in a particular city in France). Moreover,
while it is explicitly stated that those enrolled in “medical or allied health programs”
were excluded (Erhel & Jamet, 2013), the programs of study of other participants are
either not known or omitted. The goal in excluding such individuals, of course, was to
pre-exclude those with a high likelihood of prior knowledge: those with existing
knowledge of the age-related conditions presented within the ASTRA sequence.
Sampling was done by recruitment “from a pool of students from several
universities,” but nothing else is offered regarding the method of solicitation,
compensation, or other matters which may have resulted in over- or under-represented
groups within the experimental population. The generalizability of the findings is
limited by the constraints of the test groups. While the study’s findings may be
applicable to college-aged learners in France, and perhaps elsewhere, the conclusions
could not automatically be extrapolated to younger or older learners. Furthermore, while
the amount of prior knowledge of the ASTRA content is attended to by the researchers,
the amount of prior experience test subjects have had with DGBL, educational video, and
other related learning settings is not. This oversight creates another potential question
as to whether findings were ultimately impacted not by the tested condition, but by the
savvy (or lack thereof) of the test subjects.
10. Critique the adequacy of the study’s procedures and materials (e.g., interventions, interview protocols, data collection procedures).
The material choice for the study is, in a word, odd. ASTRA is not a “digital
game” by the authors’ own accepted definition of such: “the ‘coming together’ of serious
learning and interactive entertainment” and “an entertainment medium designed to bring
about cognitive changes in its players” (Erhel & Jamet, 2013). ASTRA has elements of a
digital simulation as well as interactive qualities, but it lacks the characteristics
inherent to something which is played. The necessary characteristics of a game include
“intense interest and competitiveness…carried out by its own specific and often unspoken
rules” and must include “a win and a lost state” (Bycer, 2013). ASTRA is more of a
multimedia learning object or unit than truly a digital game. The challenges with the
selection of ASTRA are acknowledged by the authors as well. In the conclusion of the
article, it is noted that the ASTRA environment “involved relatively little
interactivity” (Erhel & Jamet, 2013). So significant is this concern that the authors
opine “it would be well worth replicating our study with more immersive and interactive
material…” (Erhel & Jamet, 2013).
Offering an informed critique of the other experiment materials is difficult.
Apart from a single sample paraphrase question meant to measure memorization, the
content of the questionnaires and quizzes is not provided. The numbers and types of
questions, however, would suggest that these were instruments capable of measuring the
learning of the research subjects relative to their interaction with ASTRA.
Data collection was achieved exclusively through the scoring of the various
assessment instruments given to and completed by participants. This approach, while
reasonable and adequate for a comparison study, does run counter to the data collection
methods observable in studies from the review of literature, which included think-aloud
protocols and other qualitative data-gathering techniques.
11. Critique the appropriateness and quality (e.g., reliability, validity) of the measures used.
Outcomes of the first experiment indicate that students provided learning
instructions fared better on both paraphrase-type and inference-type questions than did
participants given entertainment instructions. However, the margin between performance
levels on paraphrase-type questions was not significant. This finding was contrary to
the researchers’ expectations; however, the significant difference in performance
between the two groups on inference questions is consistent with anticipated outcomes.
To briefly summarize results in the category of motivational factors, it is sufficient
to say no significant differences were observed.
The use of ANOVA is appropriate in this study, as two groups are being
compared. I am less sure of my ability to evaluate the application of Levene’s test in
this case, as determining whether the variances found in both groups are essentially the
same is presently beyond my ability. That said, in general a ‘good’ outcome for Levene’s
test is a non-significant result (p > .05), indicating that the equal-variance assumption
underlying ANOVA is not violated.
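The relationship between these two tests can be illustrated with a small sketch. The quiz scores below are invented for illustration only (they are not the study’s data), and the Levene statistic is computed here in its median-based (Brown-Forsythe) form, which is one common variant; this is an assumption about procedure, not a reproduction of the authors’ analysis.

```python
# Illustrative sketch (invented scores, NOT the study's data): a one-way
# ANOVA F statistic computed by hand for two instruction groups, plus a
# median-based (Brown-Forsythe) Levene statistic for the equal-variance check.
from statistics import mean, median

learning = [14, 16, 15, 17, 13, 16, 15, 14]       # hypothetical quiz scores
entertainment = [12, 13, 11, 14, 12, 13, 10, 12]  # hypothetical quiz scores

def one_way_f(*groups):
    """F = mean square between groups / mean square within groups."""
    grand = mean([x for g in groups for x in g])
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total observations
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def levene_w(*groups):
    """Levene's test is a one-way ANOVA on each score's absolute
    deviation from its group median (Brown-Forsythe variant)."""
    devs = [[abs(x - median(g)) for x in g] for g in groups]
    return one_way_f(*devs)

f = one_way_f(learning, entertainment)
w = levene_w(learning, entertainment)

# With df = (1, 14), the .05 critical value of F is roughly 4.60: an F above
# it means the group means differ significantly, while a Levene W below it
# means the equal-variance assumption is NOT rejected (the desirable outcome).
print(f"ANOVA F = {f:.2f}, Levene W = {w:.2f} (critical value ~ 4.60)")
```

With these invented numbers, the mean difference between groups is large relative to their similar spreads, so the ANOVA F lands well above the critical value while the Levene W lands well below it, which is the pattern one would want before trusting the ANOVA comparison.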
With respect to the second experiment in this study, no statistical significance
was achieved on any outcome measure. However, the instruments themselves, in both
experiments, do appear to be valid in that they are designed to assess the desired
measures (e.g., recall, motivation, etc.). The reliability of the instruments (the
motivation questionnaire and the paraphrase- and inference-type quizzes) may be
questionable, however, as the second experiment, following the introduction of KCR
feedback, produced results which in some cases further contradicted the researchers’
expectations, and which in some respects stood counter to the results produced in the
first experiment. Since KCR feedback was introduced to both groups, the same group
should ultimately have performed better in the same areas, but did not.
12. Omitted by direction received from Dr. DeSchryver.
C. Interpretation and Implications of Results (about 3 pages)
13. Critique the author’s discussion of the methodological and/or conceptual limitations of the results.
In the general discussion, the authors initially highlight those areas which were
found to support their hypotheses, noting “better comprehension performances with
learning instruction condition in experiment 1,” and likewise “with the combination of
entertainment instruction and KCR feedback in experiment 2” (Erhel & Jamet, 2013).
The authors acknowledge that the change in performance in the second experiment could
be attributed to the addition of KCR, but they further entertain the idea that this
accounts for only some of the results. Critique of the methods is warranted, given the
narrow sample scope, the observable reliability issues with certain instruments, and the
unaccounted-for outcomes in the tested hypotheses and related conditions. The authors
admit challenges, noting “…this quizzes performed with ASTRA…We can assume KCR did not
play its full role here…” and further positing, “we would need to replicate experiment 2
with more taxing quizzes” (Erhel & Jamet, 2013).
14. How consistent and comprehensive are the author’s conclusions with the reported results?
The authors are reasonable in their representation and presentation of results,
sharing a degree of surprise at the inconsistencies in the outcomes tested, but also
noting unaccounted-for differences in subjects’ responses to related factors, e.g., fear
of failure. Their conclusions include suggestions for further research, e.g., replicating
the second experiment, and encouragement for others to continue researching the role
instruction plays in a gamer’s ability to engage in deep learning as part of
entertainment-based DGBL.
15. How well did the author relate the results to the study’s theoretical base?
Relationships drawn back to the study’s theoretical basis are present, but are
somewhat limited to the prior research supporting the originality of the current study.
Primary focus is given to supporting the uniqueness of testing the impact of instruction
type on DGBL outcomes, as well as the introduction of KCR feedback.
16. In your view, what is the significance of the study, and what are its primary implications for theory, future research, and practice?
From my perspective, these researchers had interesting ideas and goals which
demonstrate merit. If we expect those engaging in game and game-like activities to
learn, and to learn deeply, from that activity, it is reasonable to assume that the way
in which we instruct the gamer can play a part in the depth and type of learning, as
well as in the gamer’s ability to demonstrate it. While I concur with the authors’
suggestions for future research, I would find this study, and future studies like it,
more impactful if the underlying digital game were considerably more game-like. ASTRA,
as I noted earlier, is a simulation, a multimedia learning object, not a game.
Furthermore, the assessment itself was not presented in the manner of a game, but rather
in very typical traditional formats.
The study is significant, perhaps in its originality, as the researchers obviously
hoped it would be, but also in its success in demonstrating that instruction type does
indeed seem to play some role in a learner’s ability to learn and to demonstrate
learning. Though a number of outcomes were left unexplained by the research at hand,
differences that could only reasonably be attributed to the varying instruction types
and the introduction of Knowledge of Correct Response feedback were observed. The
primary implications, in my opinion, include the importance of pre-game instructions for
DGBL experiences, further support for the value of feedback to learners, and a call for
continued research in these areas.
Works Cited
Bycer, J. (2013, January 22). What defines a game: Meaning vs. action. Retrieved from
Gamasutra:
http://www.gamasutra.com/blogs/JoshBycer/20130122/185251/What_Defines_a_
Game_Meaning_Vs_Action.php
Erhel, S., & Jamet, E. (2013). Digital game-based learning: Impact of instructions and
feedback on motivation and learning effectiveness. Computers & Education, 156-
167.
Mather, M., & Adams, D. (2007, February). The Crossover in Female-Male College
Enrollment Rates. Retrieved from Population Reference Bureau: Inform,
Empower, Advance:
http://www.prb.org/Publications/Articles/2007/CrossoverinFemaleMaleCollegeEn
rollmentRates.aspx
Wouters, P., & van Oostendorp, H. (2013). A meta-analytic review of the role of
instructional support in game-based learning. Computers & Education, 412-425.