ORIGINAL PAPER
The Effects of Describing Antecedent Stimuli and Performance Criteria in Task Analysis Instruction for Graphing

Bryan C. Tyner1,2 • Daniel M. Fienup1,2

1 Department of Psychology, Queens College, CUNY, Flushing, New York, NY 11367, USA
2 Department of Psychology, The Graduate Center, CUNY, New York, NY, USA

Published online: 10 December 2015
© Springer Science+Business Media New York 2015
J Behav Educ (2016) 25:379–392. DOI 10.1007/s10864-015-9242-z
Abstract Task analyses are ubiquitous to applied behavior analysis interventions,
yet little is known about the factors that make them effective. Numerous task
analyses have been published in behavior analytic journals for constructing single-
subject design graphs; however, learner outcomes using these task analyses may fall
short of what could be considered socially significant by educators and the behavior
analytic community. To investigate ways to enhance task analysis instruction,
graphing performance was compared between groups receiving either a task analysis that simply described the necessary responses or the same task analysis supplemented with descriptions of relevant antecedent stimuli and performance criteria, or the consequences of correctly performing each step. Participants using the supplemented task analysis demonstrated more accurate graphing behavior compared
with those using the task analysis without these descriptions. Implications of
enhancing task analysis effectiveness by linking instructions to the three-term
contingency are discussed.
Keywords Task analysis · Graphing instruction · Computer-based instruction · College students
Introduction
Task analysis (TA) is widely used in applied behavior analysis (ABA) for teaching
the completion of behavior chains. Task analysis refers to the process of "breaking
down a complex skill into smaller, teachable units, the product of which is a series
of sequentially ordered steps’’ (Cooper et al. 2007, p. 437), as well as the permanent
product of that process used for instruction purposes. Task analysis is an evidence-
based practice (Wong et al. 2010) and has been used for teaching a wide range of
skills including vocational skills to learners with intellectual disabilities (Cuvo et al.
1978), reading interventions to middle school teachers (Browder et al. 2007), and
graphing methods to college students and behavior analysts (Lo and Konrad 2007;
Dixon et al. 2009). Although many studies demonstrate the effectiveness of existing
TAs and of interventions using TAs in their procedures, very little research has
evaluated how to develop effective TAs for such uses. In 1967, Annett and Duncan
wrote that identifying ‘‘what to describe and on what level of detail’’ (p. 1) are two
central challenges when developing TA for use in industry. To the best of our
knowledge, there exists little empirical support for solutions to this problem.
A primary resource for learning about TA development is Cooper et al.’s (2007)
Applied Behavior Analysis textbook, which recommends three methods for
constructing and validating TA (also reported in Snell and Brown 2006): (1)
observe competent individuals engage in the behavior, (2) consult expert performers
of the task, and (3) perform the behavior chain yourself. These strategies provide
guidance to practitioners developing TA, but do not answer questions about the
level of detail and range of stimuli that should be described in TA instruction. Crist
et al. (1984) parametrically evaluated (i.e., manipulated along a continuum) the
number of responses described per step in three TAs of vocational tasks for
individuals with intellectual disabilities. Participants received a 28-step TA with
either one response at a time (28 total steps), two responses at a time (14 total steps),
or four responses at a time (7 total steps). Researchers found that participants made
more errors when more responses were combined (i.e., the 7-step condition). More
recently, Graff and Karsten (2012) found that minimizing the jargon used in TA
instruction and supplementing text with pictures and examples produced greater
learner accuracy compared with TA instruction without these revisions. These
studies indicate that manipulating some parameters of TA instruction may affect
learner performance; however, which components of TA instruction are necessary
and sufficient for optimal learner performance is unknown. Data demonstrating
relations between TA components and learner performance may inform the
development of instruction materials used in research and applied settings.
A common use of TA is for teaching ABA students and practitioners to create
single-subject design graphs. Graphing data facilitate the identification of behavior–
environment relations, and the ability to construct single-subject graphs is a requirement of the Behavior Analyst Certification Board task list (BACB 2012);
however, graphing software is complex, and learning to use it may be difficult for
some individuals. For these reasons, a number of TAs for creating graphs in
Microsoft Excel have been published in ABA journals (e.g., Carr 2008; Carr and
Burkholder 1998; Dixon et al. 2009; Lo and Konrad 2007; Lo and Starling 2009;
Pritchard 2008; Reed 2009; Touchette et al. 1985). Each of these TAs is a sequenced
list of responses in the behavior chain and often contains pictures of relevant
stimuli. Despite their similarities, each published TA varies in the number of steps,
number of pictures, sequence of responses, and level of detail. For example, Lo and
Konrad’s (2007) TA consisted of 110 steps including 66 screenshots of the software,
while Pritchard's (2008) consisted of 33 steps and no pictures. The majority of empirical
research on graphing TA demonstrates that presenting a particular TA positively
affected participants’ graphing performance, establishing the general efficacy of TA
graphing instruction. However, no research has evaluated the effects of manipu-
lating qualities of TA instruction. Without data to guide instruction design, many
parameters of TA instruction, such as which qualities of the task and the level of
detail with which to describe them, are based on subjective opinion and trial and
error.
While TA instruction typically includes descriptions of discrete responses,
adding descriptions of relevant antecedent stimuli may enhance learner performance
(Cooper et al. 2007); however, the effectiveness of these descriptions has not been
researched. Some research demonstrates that describing performance criteria, or the
consequences of correct responses, improves responding in academic tasks
(Johnston and O’Neill 1973) and may also improve TA instruction. For example,
graphing TA may be enhanced by describing the changes to the graph the learner
should observe when correctly completing instructions to manipulate menu options
and buttons. Describing relevant antecedent stimuli, the required responses, and the
consequences of correctly emitting those responses may enhance instructional
effectiveness and improve learner outcomes by conceptually linking TA instruction
to the three-term contingency.
In spreadsheet and graphing software environments, relevant antecedents may
include graph elements not formatted according to publication standards, such as
colored data points, and descriptions of the physical characteristics of the software
components that one must manipulate in order to reformat them. Performance
criteria include descriptions of what a graph should look like once a step or series of
steps is completed. For example, after completing several steps to modify the
features of the data path, the user should see a black data path with black data
points. As a first step toward understanding components of TA that may enhance
instructional outcomes, this study compared the effects of TA with and without
descriptions of relevant antecedents and performance criteria on the accuracy and
speed of constructing a reversal design graph. It is hypothesized that graphing
accuracy will be higher when using TA instruction supplemented with these details.
Method
Participants and Setting
Sixteen undergraduate students enrolled in an introductory psychology course participated and earned research credit toward the course's research requirement. The researcher recruited the participants using an online research recruitment system. A power analysis was conducted with GraphPad™ StatMate™ statistical software (designed for calculating power analyses) using the mean, standard deviation, and sample sizes of participant accuracy data reported in a comparison of video- and text-based TA instruction for creating a multiple baseline graph (Tyner and Fienup 2015). This analysis estimated that six participants were necessary per group for a minimum of 80 % power with α = .05. In total, eight participants were randomly assigned to each instructional group.
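As an illustration of the kind of a priori estimate described above, the following sketch reproduces a comparable power analysis with an open-source tool rather than StatMate; the group means and standard deviations shown are placeholders, not the Tyner and Fienup (2015) values.

# Sketch: a priori power analysis for a two-group comparison, analogous to the
# GraphPad StatMate analysis described above. The means/SDs below are hypothetical
# placeholders, not values from Tyner and Fienup (2015).
from statsmodels.stats.power import TTestIndPower

m_video, sd_video = 80.0, 15.0   # hypothetical group 1 mean/SD (percentage correct)
m_text, sd_text = 55.0, 15.0     # hypothetical group 2 mean/SD

pooled_sd = ((sd_video ** 2 + sd_text ** 2) / 2) ** 0.5
effect_size = abs(m_video - m_text) / pooled_sd   # Cohen's d

# Solve for the per-group sample size needed for 80 % power at alpha = .05.
n_per_group = TTestIndPower().solve_power(effect_size=effect_size,
                                          power=0.80, alpha=0.05,
                                          alternative='two-sided')
print(f"d = {effect_size:.2f}, n per group = {n_per_group:.1f}")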
Instruction took place in a research laboratory containing three computer workstations, each equipped with a Windows®-based computer including a 48.26-cm monitor, keyboard, and mouse, with Microsoft Excel 2007 installed.
Materials
Materials included a sign-in sheet, two versions of a task analysis for constructing a
graph, a tutorial program to present the TAs, and a social validity and demographics
questionnaire (see Table 1) used to assess differences in relevant skills and
experience prior to instruction.
Table 1 Participant demographics and social validity questionnaire

1. Number of relevant courses: Control M = 2.0, SD = 2.1; Supplemented M = 4.3, SD = 4.1; p > .05
2. Overall computer skills: Control M = 3.0, SD = 1.1; Supplemented M = 3.0, SD = 0.6; p > .05
3. Frequency of using Excel: Control M = 1.4, SD = 0.8; Supplemented M = 1.5, SD = 0.8; p > .05
4. Number of graphs made on computer: Control M = 14.3, SD = 18.9; Supplemented M = 10.0, SD = 12.2; p > .05
5. Graphing is an important skill for students to learn: Control M = 4.1, SD = 0.7; Supplemented M = 4.0, SD = 1.1; p > .05
6. Students should be able to graph independently and without assistance: Control M = 3.7, SD = 0.5; Supplemented M = 3.5, SD = 1.0; p > .05
7. The tutorial was a good method for teaching graphing: Control M = 3.9, SD = 1.2; Supplemented M = 3.6, SD = 1.0; p > .05
8. The tutorial I just used helped improve my graphing skills: Control M = 3.4, SD = 1.3; Supplemented M = 3.8, SD = 0.8; p > .05
9. I can now create each of the graphs I learned without assistance: Control M = 3.1, SD = 0.9; Supplemented M = 3.6, SD = 0.8; p > .05
10. I am better at graphing in Excel than I was before using the tutorial: Control M = 3.4, SD = 1.1; Supplemented M = 3.3, SD = 1.2; p > .05
11. Would prefer using a tutorial like this over attending a classroom lecture: Control M = 3.6, SD = 1.5; Supplemented M = 3.8, SD = 1.0; p > .05
12. Would recommend the tutorial to others who wanted to learn how to graph: Control 5 yes, 2 no; Supplemented 5 yes, 1 no; p > .05

Questions 1 and 4 were fill-in-the-blank. Questions 2 through 11 were Likert scale format. Question 2 ranged from beginner (1) to proficient (5). Question 3 ranged from rarely (1) to frequently (5). Questions 4 through 11 ranged from strongly disagree (1) to strongly agree (5). Question 12 was answered yes or no. Seven control participants and six experimental participants completed the questionnaire
Task Analyses
Researchers developed two TAs for constructing the same reversal design graph,
one for control and one to test the effects of supplemental descriptions of relevant
antecedents and performance criteria. The control TA was based on the TA used by
Tyner and Fienup (2015) for constructing multiple baseline graphs. Development of
the TA incorporated graphing methods described in published graphing instruction research (e.g., Dixon et al. 2009; Lo and Konrad 2007) and publication
guidelines of the American Psychological Association (APA 2010; Nicol and
Pexman 2010). The final version of the control TA described the response sequence
for making a reversal design graph, including how to: (a) organize the data table and
insert the graph, (b) format the data series and chart area, (c) change the value of the
axes, (d) align data points with tic marks, (e) insert the chart, axis, and condition
labels, (f) insert phase-change lines, (g) lift the y-axis off the x-axis, and (h) copy
and paste the graph and all components as an image for submission for publication
purposes.
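For readers unfamiliar with the end product of this behavior chain, the following sketch draws a reversal design graph with the formatting conventions the TA targets (black data path and markers, phase-change lines, condition labels, and axes that do not touch). It uses Python and matplotlib rather than the Excel 2007 steps taught in the study, and the plotted data are invented for illustration.

# Sketch: a reversal-design graph following the conventions described above.
# This is not the Excel 2007 procedure used in the study; data are hypothetical.
import matplotlib.pyplot as plt

sessions = list(range(1, 17))
responses = [2, 3, 2, 4, 9, 11, 12, 13, 4, 3, 5, 4, 12, 14, 13, 15]
phases = [(1, 4, "Baseline"), (5, 8, "Intervention"),
          (9, 12, "Baseline"), (13, 16, "Intervention")]

fig, ax = plt.subplots(figsize=(6, 3))
for start, end, label in phases:
    # Plot each condition separately so the data path breaks at phase changes.
    ax.plot(sessions[start - 1:end], responses[start - 1:end],
            color="black", marker="o", markerfacecolor="black", linewidth=1)
    ax.text((start + end) / 2, max(responses) + 1, label, ha="center", fontsize=8)

for x in (4.5, 8.5, 12.5):            # vertical phase-change lines
    ax.axvline(x, color="black", linewidth=1)

ax.set_xlabel("Sessions")
ax.set_ylabel("Responses per Minute")
for side in ("top", "right"):          # remove the top and right borders
    ax.spines[side].set_visible(False)
ax.spines["left"].set_position(("outward", 5))   # lift the y-axis off the x-axis
ax.set_xlim(0.5, 16.5)
ax.set_ylim(0, max(responses) + 3)
plt.tight_layout()
plt.savefig("reversal_graph.png", dpi=300)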
The supplemented TA included all of the text contained in the control TA as well
as descriptions of relevant antecedent stimuli and performance criteria for each
response. Relevant antecedents were defined as physical descriptions of the
topographical characteristics of the stimuli within the software user interface that
the user was instructed to manipulate above and beyond descriptions of actions to
make. Examples of relevant antecedents include the color, shape, and location of
buttons, icons, and/or menu labels, or of more salient stimuli near them that might
be used to approximate their locations. Performance criteria were defined as
descriptions of the graph after correctly completing the present step. For example,
after inserting phase-change lines, the line should be straight without any blurry
segments which indicate it is angled, and the space between the line and data points
on either side should be equal. Descriptions of performance criteria were linked to
the graph–component checklist used for performance assessment. Figure 1 presents
a side-by-side comparison of the instructions provided for inserting phase-change
lines in each tutorial.
Tutorials
The learning environment in this study was similar to that described by Tyner and
Fienup (2015). The TA instruction was displayed in a PowerPoint® slide show (see
Fig. 1) positioned on the left third of the screen with Excel on the right two-thirds of
the screen. Each slide contained buttons along the bottom of the window to navigate
backward or forward one slide and to a table of contents. The control tutorial
contained 32 slides with two to fifteen sentences each. The supplemented
tutorial was identical to the control tutorial except it contained one additional
instructional slide and between zero and five more sentences per slide describing
either relevant antecedent stimuli or performance criteria. The presentation and sequence of steps were otherwise identical.

Fig. 1 Screen shots of the control tutorial (left) and the supplemented tutorial (right). Both images present the same step for formatting phase-change lines. The last two paragraphs in the supplemented tutorial show examples of descriptions of performance criteria that were omitted from the control tutorial
Dependent Variables
The dependent variables were graphing accuracy and duration. Graphing accuracy
was defined as the percentage of graph elements formatted correctly according to
APA guidelines for the publication of figures (APA 2010; Nicol and Pexman 2010)
and those found in ABA publications on graphing research (e.g., Dixon et al. 2009;
Lo and Konrad 2007; Tyner and Fienup 2015) and text books (e.g., Cooper et al.
2007). Researchers evaluated final graphs using a 28-question checklist of graph
components (available upon request). Checklist items were scored as correct or
incorrect, and the number of correct items was counted and divided by 28 to
calculate percentage correct for each participant. Graphing duration was defined as
the number of minutes that passed from the time the participant clicked the "Begin"
button to the time the participant notified the researcher of his or her completion of
the graph as recorded by the researcher.
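As a concrete illustration of how these two measures are derived, the following sketch scores one hypothetical participant; the checklist item names are stand-ins, since the 28-item checklist is available from the authors rather than reproduced here.

# Sketch: computing graphing accuracy and duration for one participant.
# The checklist items below are hypothetical stand-ins for the 28-item checklist.
from datetime import datetime

checklist_scores = {                   # True = element formatted correctly
    "data_path_black": True,
    "phase_change_lines_straight": True,
    "y_axis_label_present": False,
    "x_axis_begins_at_zero": True,
}

accuracy = 100 * sum(checklist_scores.values()) / len(checklist_scores)

# Graphing duration: minutes from clicking "Begin" to notifying the researcher.
started = datetime(2015, 3, 2, 13, 5, 0)
finished = datetime(2015, 3, 2, 13, 52, 30)
duration_min = (finished - started).total_seconds() / 60

print(f"accuracy = {accuracy:.1f} %, duration = {duration_min:.1f} min")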
Procedure
Researchers assigned each participant to receive either the control or supplemented
TA using block random assignment. Group assignment was made in pairs of two
participants using a random number generator to ensure equal group sizes.
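The assignment procedure can be expressed as a short routine; the following sketch assigns participants in blocks of two so that group sizes remain equal, with a simple random number generator standing in for the one the researchers used.

# Sketch: block random assignment in pairs, as described above. Within each block
# of two consecutive participants, one is assigned to each condition, keeping group
# sizes equal (8 per group for 16 participants).
import random

def block_assignments(n_participants, seed=None):
    rng = random.Random(seed)
    assignments = []
    for _ in range(n_participants // 2):
        block = ["control", "supplemented"]
        rng.shuffle(block)
        assignments.extend(block)
    return assignments

print(block_assignments(16, seed=1))   # e.g., ['supplemented', 'control', ...]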
Before a participant arrived to the experiment, the researcher arranged the
assigned tutorial alongside a blank Microsoft Excel spreadsheet on the computer
screen. When a participant arrived, the researcher logged the date of participation on
the next available line in the sign-in sheet, which provided the participant ID
number and the condition assignment for each participant. Next, the researchers sat
the participant at a computer and explained the consent form, and each participant
provided voluntary informed consent. Then the researcher encouraged the
participant to "try your best" and the participant began the TA instruction. The
researchers provided no feedback or additional instructions for graphing. When a
participant indicated that he or she was done, the researcher provided a copy of the
demographics and social validity questionnaires to fill out and return before leaving.
Interobserver Agreement
An undergraduate research assistant independently coded 100 % of all
participant graphs for the purpose of calculating interobserver agreement (IOA). If
the researcher and assistant scored an item on the checklist the same (correct or
incorrect), the item was rated as an agreement. If the two observers scored the item
differently, the item was rated as a disagreement. IOA was calculated for each graph
by dividing the number of checklist items scored in agreement by the total number
of checklist items and multiplying by 100. Mean IOA was 95.4 % (SD = 4.9 %;
Range = 85.7–100 %) for all graphs. Cronbach's alpha (α) was calculated as an additional assessment of IOA using both observers' calculated percentage correct for all 16 participants. Cronbach's alpha represents the extent to which participant scores vary due to variance in their performance versus variance in rater assessment (Osbourne 2008), and indicated a high degree of agreement in this study, α = .993.
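The two agreement indices can be computed as follows; the scores in this sketch are hypothetical and only illustrate the item-by-item IOA calculation and the Cronbach's alpha formula applied to the two observers' percentage-correct scores.

# Sketch: item-by-item IOA for one graph, plus Cronbach's alpha across the two
# raters' percentage-correct scores. All score values are hypothetical.
import numpy as np

def ioa_percent(primary, secondary):
    """Percentage of checklist items scored identically by two observers."""
    agreements = sum(p == s for p, s in zip(primary, secondary))
    return 100 * agreements / len(primary)

def cronbach_alpha(ratings):
    """ratings: participants x raters array of percentage-correct scores."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]                              # number of raters
    item_vars = ratings.var(axis=0, ddof=1).sum()     # variance of each rater's scores
    total_var = ratings.sum(axis=1).var(ddof=1)       # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

primary = [1, 1, 0, 1, 1, 0, 1] + [1] * 21            # hypothetical 28-item scores
secondary = [1, 1, 0, 1, 0, 0, 1] + [1] * 21
print(ioa_percent(primary, secondary))

scores = [[75.0, 78.6], [46.4, 42.9], [92.9, 92.9], [60.7, 64.3]]   # hypothetical
print(cronbach_alpha(scores))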
Results
Table 1 summarizes participants’ self-report of the number of relevant college
courses taken (question 1) and computer, graphing, and Excel experience (questions
2 through 4). Responses were compared between groups using an independent t test,
and no significant differences were found, p > .05. Group similarity in these
measures suggests that observed performance differences between groups are
attributable to the manipulation of the independent variable. Assumptions for
statistical tests of significance—such as the presence of outliers and the normality of
distribution for t test data—were evaluated using visual analysis (Laerd Statistics
2015).
Graphing Performance
Separate one-tailed Mann–Whitney U tests were conducted to evaluate differences
between groups in percentage of graph elements formatted correctly (accuracy) and
minutes to construct each graph (duration). This test was used because of the small
sample size and evidence of skewed distributions in accuracy scores and graphing
duration (see Fig. 2).
The top panel of Fig. 2 presents accuracy by group and shows that six of eight
participants (75 %) who received the supplemented TA scored higher than the most
accurate participant who received the control TA. On average, participants who
received the TA supplemented with descriptions of relevant antecedents and
performance criteria formatted a significantly higher percentage of graph elements
correctly (M = 72.32, SD = 25.59) compared with those who received the control TA without these details (M = 47.77, SD = 21.51), U = 11, p = .032. The effect size of this difference was calculated using Hedges' g, which indicated a large effect, g = 0.98 (Lakens 2013).

Fig. 2 Data points represent individual participant data. The top panel shows the percentage of checklist items scored as correct for each participant (group means: control = 47.8 %, supplemental = 72.3 %), and the bottom panel shows minutes to complete the graph (group means: control = 47.1 min, supplemental = 52.8 min). Darker data points indicate overlapping data points. The black bar indicates the group mean.
The bottom panel of Fig. 2 displays graphing duration by group. Overall, there
was considerable variability within groups and considerable overlap between
groups. On average, participants who received supplemental descriptions of relevant
antecedents and performance criteria completed the graph in 52.80 min
(SD = 9.80), and those using the control TA completed the graph in 47.10 min
(SD = 15.80); however, the observed difference was not statistically significant,
U = 23, p > .05.
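For readers who wish to apply the same analyses, the following sketch runs a one-tailed Mann–Whitney U test and computes Hedges' g for two groups; the score arrays are hypothetical, since only group means and standard deviations are reported here.

# Sketch: one-tailed Mann-Whitney U test and Hedges' g for a two-group comparison,
# analogous to the accuracy analysis above. The arrays are hypothetical, not the
# study's raw data.
import numpy as np
from scipy.stats import mannwhitneyu

control = np.array([20.5, 35.7, 42.9, 50.0, 53.6, 57.1, 60.7, 61.8])
supplemented = np.array([28.6, 50.0, 67.9, 78.6, 82.1, 85.7, 92.9, 92.9])

# One-tailed test: is accuracy in the supplemented group stochastically greater?
u_stat, p_value = mannwhitneyu(supplemented, control, alternative="greater")

def hedges_g(a, b):
    """Hedges' g: Cohen's d with a small-sample bias correction (Lakens 2013)."""
    n1, n2 = len(a), len(b)
    pooled_sd = np.sqrt(((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2))
    d = (a.mean() - b.mean()) / pooled_sd
    correction = 1 - 3 / (4 * (n1 + n2) - 9)
    return d * correction

print(f"U = {u_stat:.0f}, p = {p_value:.3f}, g = {hedges_g(supplemented, control):.2f}")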
In order to account for some of the high variability in these measures, graphing
accuracy and duration were correlated using the Pearson product-moment corre-
lation. Figure 3 presents a scatter plot of the percentage of graph elements formatted
correctly and minutes to complete the graph for each group. There was a strong
positive correlation between minutes spent on the task and percentage correct for
participants using the supplemented TA (r = .757, n = 8), which was statistically
significant, p = .03. There was also a strong positive correlation for participants using the control TA (r = .684, n = 8); however, it was not statistically significant, p > .05. Based on these product-moment correlations, the coefficients of determination were r2 = .57 for participants using the supplemented TA and r2 = .47 for those using the control TA. These values represent the proportion of variance in graphing accuracy for each group (57 and 47 %, respectively) that can be attributed to the amount of time participants spent on the graphing task.

Fig. 3 Data points represent individual participant data. The y-axis presents the percentage of checklist items scored as correct for each participant, and the x-axis presents minutes to complete the graph. Squares and the solid trendline represent data for participants who received the control TA; circles and the dashed trendline represent data for participants who received the supplemented TA. Darker data points indicate overlapping data points.
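The correlation analysis can be reproduced as follows; the minutes and accuracy values in this sketch are hypothetical and simply illustrate computing r and the coefficient of determination.

# Sketch: Pearson product-moment correlation between minutes on task and
# percentage correct, plus the coefficient of determination. Data are hypothetical.
from scipy.stats import pearsonr

minutes = [32.0, 41.5, 45.0, 48.5, 52.0, 58.5, 63.0, 71.5]        # hypothetical
pct_correct = [28.6, 50.0, 67.9, 64.3, 78.6, 85.7, 82.1, 92.9]     # hypothetical

r, p_value = pearsonr(minutes, pct_correct)
r_squared = r ** 2   # proportion of variance in accuracy attributable to time on task
print(f"r = {r:.3f}, p = {p_value:.3f}, r^2 = {r_squared:.2f}")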
Social Validity
Table 1 shows participants’ agreement with statements regarding the researchers’
goals (questions 5 and 6), method (question 7), outcomes (questions 7 through 10)
of the tutorial, and preference for the tutorial over classroom lecture (question 11).
Agreement was reported on a five-point Likert scale, and responses to these
questions were compared with separate Mann–Whitney U tests. Question 12
assessed whether they would recommend the tutorial to others, and was compared
between groups using a chi-square (χ²) statistic. One participant from the control condition and two participants from the supplemented condition opted not to complete the social validity questionnaire. Overall, participants in both groups tended to agree that learning to graph was important, that task analysis instruction was appropriate, that the tutorial improved their graphing skills, and that they would recommend the tutorial to others; however, there were no differences in social validity responses between groups, p > .05.
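The recommendation item can be analyzed as a 2 x 2 contingency table; the sketch below uses the yes/no counts reported in Table 1.

# Sketch: chi-square test on the yes/no recommendation counts from Table 1
# (control: 5 yes, 2 no; supplemented: 5 yes, 1 no).
from scipy.stats import chi2_contingency

table = [[5, 2],     # control: yes, no
         [5, 1]]     # supplemented: yes, no
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")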
Discussion
This study demonstrated that a TA supplemented with descriptions of relevant
antecedent stimuli and performance criteria for correct responses produced
significantly more accurate graphing behavior compared with the control TA, with
a large effect size. Although the supplemented TA included considerably more text
for participants to read, the instruction did not take significantly longer to complete.
This may represent a trade-off in response allocation in that participants completing
control instruction read relatively brief instructions and spent more time locating the
respective stimuli on the screen to click, while participants completing the
supplemented TA spent additional time reading instructions and less time locating
the stimuli to click. Differences in accuracy may suggest that the supplemented TA
had greater instructional control or that descriptions of performance criteria
prompted self-assessment and correction of errors.
The relationship between time and accuracy visualized in Fig. 3 may account for
some of the variability in both of these variables. There is a moderate correlation
between minutes to complete and graphing accuracy for both groups; however, the
scatter plot and the group difference in r2 suggest that increases in the amount of time allocated to the task produced larger gains in accuracy for
participants who used the supplemented TA compared to those who used the control
TA. This finding makes intuitive sense: Variance in scores is attributable to both
(a) differences in instructional effectiveness between the TAs used and (b) the time
allocated to the task; however, the effects of increasing the amount of time spent on
task is only true to the extent that instruction is effective. Therefore, increasing the
amount of time spent learning a task is more cost-effective when using more
effective instructions.
This study extends the findings of previous TA research (e.g., Crist et al. 1984;
Graff and Karsten 2012) by identifying descriptions of relevant antecedent stimuli
and performance criteria as details that improve learner performance. Although the
benefit of detail in task analysis instruction has been acknowledged (e.g., Cooper
et al. 2007; Crist et al. 1984), there is a paucity of research on the types of detail and
their contribution to learner performance. For example, research has demonstrated
that presenting TA steps individually produced better performance than presenting
them grouped (Crist et al. 1984), but it did not evaluate the effects of different types
of details. Cooper et al. (2007) emphasized the importance of describing the
antecedent stimuli to which a learner must respond; however, no research has
examined the effects of doing so. This study supports the assertion of Cooper et al.
by demonstrating that task analysis instruction produces greater graphing accuracy
when it includes descriptions of relevant antecedent stimuli and performance
criteria. This study also extends research conducted by Graff and Karsten (2012) by
identifying two specific details that may enhance written instruction. One limitation
of this study, however, is that relevant antecedents and performance criteria were
presented simultaneously; therefore, it is impossible to determine the extent to
which describing either or both of these details contributed to the observed
performance differences. Future research should evaluate the relative role of
descriptions of relevant antecedent stimuli and performance criteria in task analysis
instruction. Furthermore, this study evaluated whether describing these details
influenced TA effectiveness, but did not seek to identify the optimal amount of
detail as suggested by Annett and Duncan (1967). Future research may involve
parametric analysis of the amount of detail with which to describe a task, as too
little detail may leave instruction unclear, and too much detail may be unnecessary
for accurate performance or even overwhelm the learner.
This study also extends research on graphing instruction (e.g., Lo and Konrad
2007; Dixon et al. 2009). Graphing is a complex and difficult task, and the design of
effective graphing instruction is likewise difficult. Effective graphing instruction
may reduce the response effort of constructing a graph and increase the probability
that clinicians graph client behavior. When graphing client behavior, differentiation
and trends in behavior may be more easily detected, which may facilitate the
identification of functional relations (Fahmie and Hanley 2008) and improve
treatment evaluation and decisions. Therefore, research on graphing instruction is
socially valid for behavior analysts.
Task analysis instruction is also commonly delivered to typically functioning
adults to guide research and intervention procedures. A recent survey on the types of
training received by ABA practitioners (DiGennaro Reed and Henley 2015) found
that written instructions were the second most common form of pre-service training
provided upon initial hire, received by 67.94 % of respondents (n = 142). For
example, TA instruction is frequently a procedural component in research and
interventions for staff and parent training (e.g., McKeel et al. 2015), behavioral
skills training (Duncan et al. 2013), video modeling (Lambert et al. 2014), and
treatment integrity (Cook et al. 2015). Therefore, written instructions are widely
used, and a more thorough understanding and more empirical resources identifying
best practices in TA construction may support treatment delivery and enhance client
outcomes. In addition, the results of the present study may also help inform the
development of other text-based instruction, such as text books or self-instruction
manuals that describe how to conduct behavioral assessments (e.g., Ramon et al.
2015). However, because this study was conducted with a college-student
population engaging in a very specific skill, the generality of these results to other
populations—such as to children with developmental disabilities—and tasks other
than graphing may be premature. Future research should verify the generality of
these data to other populations and target behaviors, as well as to TAs delivered in
formats other than on computers.
One limitation of the present study is that no pretest was administered to assess
participants’ preexisting graphing skills. A posttest-only between-groups design was
used for two reasons. First, during previous research (Tyner and Fienup 2015),
requiring participants to complete more than one graph was too time intensive, and
requiring participants to use complex graphing software without instruction may be
aversive. In addition, graphing without instruction during baseline assessments
provides participants the opportunity to explore the software and to learn either correct
or incorrect methods for completing the task, which would have threatened internal
validity. No significant differences were found between groups regarding participants'
self-report of relevant computer skills, education, and experience or their values
regarding graphing ability, which may indicate that observed differences in graphing
performance can be attributed to the effects of describing relevant antecedent stimuli
and performance criteria for the graphing task. However, it should also be noted that
this study used a small sample size (n = 8 per group). The power analysis was
conducted using preexisting participant data for graphing accuracy, but not for social
validity or demographic questions. Therefore, this study may have failed to identify
true differences in relevant graphing skills and experience due to lack of power. For
example, the number of relevant courses was higher for the group that received the
supplemented TA. Conversely, participants in the group that received the control TA
reported having made more graphs. Future research should either test for preexisting
graphing skills or control for them, such as by using a matched samples design. It is
also worth noting that the small group size may have limited the power to detect differences between groups in participants' agreement with the social validity statements. It
is possible that a larger sample size would identify such differences. Future research
may also consider evaluating participant preference between TA types, or evaluate
performance using within-subject research methods.
The present results are of immediate use to ABA instructors and staff trainers who
have encountered challenges in teaching others to graph. The present study is also a
preliminary step toward answering the question of "what to describe and on what level of detail" (Annett and Duncan 1967, p. 1) in TA instruction. These results suggest that
TA instruction may be enhanced by informing instruction through an analysis of the
three-term contingency. Instruction may be enhanced by embedding descriptions of
the necessary individual responses within descriptions of relevant antecedent stimuli
and the consequences of correct responses. The generality of the present study should
be evaluated by replicating this study using other populations and skill sets.
References
American Psychological Association. (2010). Publication manual of the American Psychological
Association (6th ed.). Washington, DC: American Psychological Association.
Annett, J., & Duncan, K. D. (1967). Task analysis and training design. U.S. Department of Health,
Education, & Welfare, Office of Education. Retrieved April 10, 2015, from http://eric.ed.gov/?id=
ED019566.
Behavior Analyst Certification Board, Inc. (BACB). (2012). Fourth Edition Task List (PDF). Retrieved
January 1, 2015, from http://www.bacb.com/Downloadfiles/TaskList/BACB_Fourth_Edition_Task_
List.pdf.
Browder, D. M., Trela, K., & Jimenez, B. (2007). Training teachers to follow a task analysis to engage
middle school students with moderate and severe developmental disabilities in grade-appropriate
literature. Focus on Autism and Other Developmental Disabilities, 22, 206–219.
Carr, N. T. (2008). Using Microsoft Excel to calculate descriptive statistics and create graphs. Language
Assessment Quarterly, 5, 43–62.
Carr, J. E., & Burkholder, E. O. (1998). Creating single-subject design graphs with Microsoft Excel.
Journal of Applied Behavior Analysis, 31, 245–251.
Cook, J. E., Subramaniam, S., Brunson, L. Y., Larson, N. A., Poe, S. G., & St. Peter, C. C. (2015). Global
measures of treatment integrity may mask important errors in discrete-trial instruction. Behavior
Analysis in Practice, 8, 37–47.
Cooper, J. O., Heron, T. E., & Heward, W. L. (2007). Applied behavior analysis (2nd ed.). Upper Saddle River, NJ: Pearson.
Crist, K., Walls, R. T., & Haught, P. A. (1984). Degrees of specificity in task analysis. American Journal
of Mental Deficiencies, 89, 67–74.
Cuvo, A. J., Leaf, R. B., & Borakove, L. S. (1978). Teaching janitorial skills to the mentally retarded:
Acquisition, generalization, and maintenance. Journal of Applied Behavior Analysis, 11, 345–355.
DiGennaro Reed, F. D., & Henley, A. J. (2015). A survey of staff training and performance management
practices: The good, the bad, and the ugly. Behavior Analysis in Practice, 8, 16–26.
Dixon, M. R., Jackson, J. W., Small, S. L., Horner-King, M. J., Mui Ker Lik, N., Garcia, Y., & Rosales, R.
(2009). Creating single-subject design graphs in Microsoft Excel 2007. Journal of Applied Behavior
Analysis, 42, 277–293.
Duncan, N. G., Dufrene, B. A., Sterling, H. E., & Tingstrom, D. H. (2013). Promoting teachers’
generalization of intervention use through goal setting and performance feedback. Journal of
Behavioral Education, 22, 325–347.
Fahmie, T. A., & Hanley, G. P. (2008). Progressing toward data intimacy: A review of within-session
data analysis. Journal of Applied Behavior Analysis, 41, 319–331.
Graff, R. B., & Karsten, A. M. (2012). Evaluation of a self-instruction package for conducting stimulus
preference assessments. Journal of Applied Behavior Analysis, 45, 69–82.
Johnston, J., & O’Neill, G. (1973). The analysis of performance criteria defining course grades as a
determinant of college student academic performance. Journal of Applied Behavior Analysis, 6,
261–268.
Lakens, D. (2013). Calculating and reporting effect sizes to facilitate cumulative science: a practical
primer for t-tests and ANOVAs. Frontiers in Psychology, 4, 863.
Lambert, J. M., Lloyd, B. P., Staubitz, J. L., Weaver, E. S., & Jennings, C. M. (2014). Effect of an
automated training presentation on pre-service behavior analysts’ implementation of trial-based
functional analysis. Journal of Behavioral Education, 23, 344–367.
Lo, Y., & Konrad, M. (2007). A field-tested task analysis for creating single-subject graphs using
Microsoft Office Excel. Journal of Behavioral Education, 16, 155–189.
Lo, Y., & Starling, A. L. P. (2009). Improving graduate students’ graphing skills of multiple baseline
designs with Microsoft Excel 2007. The Behavior Analyst Today, 10, 83–121.
Laerd Statistics. (2015). Testing for normality using SPSS statistics [Statistical tutorials and software
guides]. Retrieved on November 10, 2015 from https://statistics.laerd.com/spss-tutorials/testing-for-
normality-using-spss-statistics.php.
McKeel, A. N., Dixon, M. R., Daar, J. H., Rowsey, K. E., & Szekely, S. (2015). Evaluating the efficacy of
the PEAK relational training system using a randomized controlled trial of children with autism.
Journal of Behavioral Education, 24, 230–241.
Nicol, A. A. M., & Pexman, P. M. (2010). Displaying your findings: A practical guide for creating
figures, posters, and presentations (6th ed.). Washington, DC: American Psychological Association.
Osbourne, J. W. (2008). Best practices in quantitative methods. Thousand Oaks: Sage Publications Inc.
Pritchard, J. K. (2008). A decade later: Creating single-subject design graphs with Microsoft Excel 2007.
The Behavior Analyst Today, 9, 153–161.
Ramon, D., Yu, C. T., Martin, G. L., & Martin, T. M. (2015). Evaluation of a self-instructional manual to
teach multiple-stimulus without replacement preference assessments. Journal of Behavioral
Education. doi:10.1007/s10864-015-9222-3.
Reed, D. D. (2009). Using Microsoft Office Excel® 2007 to conduct generalized matching analyses.
Journal of Applied Behavior Analysis, 42, 867–875.
Snell, M. E., & Brown, F. (2006). Instruction of students with severe disabilities (5th ed.). Upper Saddle
River, NJ: Prentice Hall.
Touchette, P. E., MacDonald, R. F., & Langer, S. N. (1985). A scatter plot for identifying stimulus control
of problem behavior. Journal of Applied Behavior Analysis, 18, 343–351.
Tyner, B. C., & Fienup, D. M. (2015). A comparison of video modeling, text-based, and no instruction for
creating multiple baseline graphs in Microsoft® Excel®. Journal of Applied Behavior Analysis, 48,
701–706.
Wong, W., Odom, S. L., Hume, K., Cox, A. W., Fettig, A., Kucharczyk, S., & Schultz, T. R. (2010).
Evidence-based practices for children, youth and young adults with autism spectrum disorder.
National Professional Development Center on Autism Spectrum Disorders (NPDC-ASD). Retrieved
on April 10, 2015 from http://autismpdc.fpg.unc.edu/sites/autismpdc.fpg.unc.edu/files/imce/
documents/2014-EBP-Report.pdf.