
Running head: SEARCHING FOR EMPIRICAL VALIDITY

Seeking Empirical Validity in an Assurance of Learning System

Sherry Avery, Rochell McWhorter, Roger Lirely, H. Harold Doty

College of Business and Technology

The University of Texas at Tyler

Contact Email: [email protected]

Note: This is the last authors’ copy of this work. The final edited, definitive, and distributed copy is

published in the Journal of Education for Business, 2014; 89: 156-164,

doi: 10.1080/08832323.2013.800467

Abstract

Business schools have established measurement tools to support their AoL systems and assess student achievement of learning objectives. However, business schools have not required that their tools be empirically validated to ensure that they measure what they are intended to measure. We propose that confirmatory factor analysis (CFA) be used by business schools to evaluate AoL measurement systems. We illustrate a CFA model used to evaluate the measurement tools at our College. Our approach is in its initial steps, currently evaluating individual measurement tools, but we are working toward a system that can evaluate the entire AoL measurement system.

Keywords: AACSB, Assurances of Learning, Assessment, Confirmatory Factor Analysis

A decade ago, the Association to Advance Collegiate Schools of Business (AACSB)

International ratified new accreditation requirements including the addition of assurance of

learning (AoL) standards for continuous improvement (Martell, 2007). As part of this addition,

schools seeking to earn or maintain AACSB accreditation must develop a set of defined learning

goals and subsequently collect relevant assessment data to determine direct educational

achievement (LeClair, 2012; Sampson & Betters-Reed, 2008). The establishment of the mission-

driven assessment process requires “well-documented systematic processes to develop, monitor,


evaluate, and revise the substance and delivery of the curricula on learning” (Romero, 2008, p.

253).

With establishment of the 2003 AACSB standards, all schools “must develop assessment

tools that measure the effectiveness of their curriculum” (Pesta & Scherer, 2011, p. 164). As a

response to this outcomes assessment mandate, a number of schools created models to depict and

track their assessment functions (Betters-Reed, Nitkin & Sampson, 2008; Gardiner, Corbitt, &

Adams, 2010; Zocco, 2011). However, the question arises as to the validity of these system models for measuring learning outcomes: does the model measure what it purports to measure, and do the learning experiences accomplish the learning goals outlined in the system model? This question is important because once the validity of a measurement system is established, it provides confidence in the program and assurance of quality in achieving the school's mission (Baker, Ni, & Van Wart, 2012).

The purpose of this article is to illustrate the development of an empirically based AoL system that may be used by other business schools seeking accreditation. Relevant literature on this topic is examined next.

Review of Literature

A search of the literature for an empirically validated AoL system yielded research covering either the validation of AoL tools or the validation of an AoL model. Each is discussed in the following section.

Measures of Validity in AoL Assessment Tools

Measures of validity associated with AoL learning outcomes were located in business

literature by reviewing articles that described locally-developed assessment tools and externally

validated instruments. For instance, researchers developed an assessment tool to explore


students’ self-efficacy toward service and civic participation. They utilized traditional scale development, confirmatory factor analysis (CFA), and simultaneous factor analysis in several populations (SIFASP) to ensure the validity and reliability of their instrument to measure AoL criteria for ethics and social responsibility (Weber, Weber, Sleeper, & Schneider, 2004). Another tool was a content-valid assessment exam created to measure business management knowledge (Pesta & Scherer, 2011).

Also, a matrix presented by Harper and Harder (2009) depicts demonstrated abilities

intersected with competency clusters; the clusters were developed from literature describing

valid research into “the kinds of knowledge and skills that are known to be necessary for success

as a practitioner in the MIS field” (p. 492). However, no statistical measures of validity were

provided. Additionally, we found instances of the use of externally validated instruments, such as the revised version of the Defining Issues Test (DIT-2) to assess ethical reasoning instruction in undergraduate cost accounting (Wilhelm & Czyzewski, 2012) and the CAPSIM computer simulation to assess business strategy integration (Garrett, Marques, & Dhiman, 2012).

Measures of Validity in AoL Assessment Models

Various models have been offered for outcomes measurement as part of a process-based approach for meeting AoL standards (e.g., Beard, Schwieger, & Surendran, 2008; Betters-

Reed, Nitkin & Sampson, 2008; Hess & Siciliano, 2007) but without statistical evidence or

discussion of validity measures. However, the literature search found an article by Zocco

(2011) that presented a recursive model to address and document continuous and systematic

improvement and discussed validity issues surrounding the application of recursion to a process

such as AoL. Although helpful for examining school improvement, the model does not assess


the validity of the model itself. Therefore, the review of literature offered several tools and a model with validity calculations; however, no example of an empirically validated system was found.

Case Study: Assurances of Learning at [blinded]

During the past five years, the College of Business [removed for blinding purposes,

hereafter referred to as “College”] has conducted a complete redesign of its AACSB Assurance

of Learning system. To understand the rationale for this design change, it is important to explore several key drivers of this decision, especially in light of the fact that our prior AoL system was cited as one of our “best practices” during our last Maintenance of Accreditation visit. At the last visit, the College operated three different and largely unrelated assessment systems: one for the AACSB and one each for [names of 2 other accreditation bodies removed for blinding purposes]. In some ways,

these independent assessment systems simplified accreditation reporting: each system was

tailored to the specifics of a single accrediting body and the data associated with one system

were not considered in conjunction with data collected for a different accrediting body. For

example, AACSB and the College’s assessment procedures were treated as completely

independent. This approach simplified reporting, but hindered integrating different assessment

data in the larger curriculum management process.

A second major contextual factor relevant to our AoL process was feedback from our last

AACSB visit that recommended revisions to the vision and mission statements for the College

and AACSB AoL processes. As part of that revision process, the College clarified its mission

and identified five core values.

Based largely on these contextual factors, faculty determined we were at an ideal point to

design a new single integrated assessment model to meet the needs of each of our accrediting

bodies. Further, we determined that the new system should be linked to the new mission by


incorporating the core values as learning outcomes, and that we should attempt to assess the

validity of the system in terms of both the theoretical model used to design the system and the

measurement model used to organize the data collection. These additional steps would allow

more confidence in the evidence-based changes we were making in program structure and course

curriculum. The full-scale implementation began in the 2010-11 school year; our model is more

fully described next.

Faculty-Driven Process

AoL in the College is a faculty-driven process. Oversight of this process is charged to the AoL Committee, which is composed of a faculty chair, the undergraduate program director, the graduate programs coordinator, and four at-large faculty members. The composition

of the Committee provides cross-sectional representation of all disciplines and programs in the

College.

The Committee works closely with our faculty to ensure that each learning objective is measured

periodically, at least twice during each five-year period but generally more often. The faculty

employ a variety of measurement strategies, including major field tests, embedded test questions,

case analyses, observation of student presentations, activity logs, simulations and other class

assignments and/or projects. Analyses of results guide the Committee in its work with the

faculty to develop and implement appropriate actions to ensure curricula and pedagogy are

managed in a manner enhancing student learning and development. Figure 1 illustrates how the


AoL assessment process operates in a continuous improvement mode.

FIGURE 1

AoL Curriculum Management Process at [Blinded]

Conceptual Framework

The AoL system in the College is based on a set of shared core values: professional

proficiency, technological competence, global awareness, social responsibility and ethical

courage, as seen in Figure 2. These mission-based core values form the framework for our comprehensive, empirically validated AoL models for both the Bachelor of Business Administration program and the Master of Business Administration program, as well as other

College programs that are outside the scope of AACSB accreditation. AoL in the College has

evolved to the point where our current system is second-generation, that is, it is the culmination


of an assessment of the AoL system itself. Many of the best features of the prior system were

retained, including assessment of discipline-based knowledge, communication skills, and the use

of quantitative tools and business technology. The result of this process is a value-based

conceptual framework whose efficacy can be tested empirically using confirmatory factor

analysis. To our knowledge, our College is the first AACSB-accredited program to design an

empirically-validated AoL system. Figure 2 depicts the conceptual framework of our AoL

system for the BBA program.

FIGURE 2
Conceptual Framework of the AoL System for the BBA Program at [Blinded]

Method

Data Collection. The faculty developed ten learning objectives to support the five learning goals of the College. A measurement tool, such as the Major Field Test or a rubric, was designed for each objective. Assessment was conducted within required core business courses

that included students across all College of Business majors. Students were generally juniors or seniors in one of the business majors. Results were then collected and compiled

centrally in an administrative function within the College.

[Figure 2 contents. College of Business & Technology: Creating Leaders for a Better Tomorrow. Bachelor of Business Administration, updated 7/11/2012. The five core values, their learning goals, and their cognitive and behavioral learning objectives are:

Professional Proficiency. Learning goal: Graduates have the knowledge and communication skills to succeed in the business profession. PP1 (cognitive): Students demonstrate that they are knowledgeable about current business theory, concepts, methodology, terminology and practices. PP2 (cognitive): Students can prepare a business document that is focused, well-organized, and mechanically correct. PP3 (behavioral): Students are able to deliver a presentation that is focused, well-organized, and includes the appropriate verbal and nonverbal behaviors.

Technological Competence. Learning goal: Graduates use the information sources and tools associated with their chosen profession. TC1 (cognitive): Students understand information systems and their role in business enterprises. TC2 (behavioral): Students are able to use business software, data sources, and tools.

Global Awareness. Learning goal: Graduates incorporate global considerations in business activities. GA1 (cognitive): Students demonstrate awareness of global issues and perspectives. GA2 (behavioral): Students are knowledgeable of global issues and perspectives that may impact business activities.

Social Responsibility. Learning goal: Graduates evaluate the social consequences of alternative outcomes when making business decisions. SR1 (behavioral): Students exhibit an understanding of the social consequences of business activities.

Ethical Courage. Learning goal: Graduates understand ethical considerations and their impacts. EC1 (cognitive): Students understand legal and ethical concepts. EC2 (behavioral): Students make ethical decisions.]


Analysis Approach. Several of the measurement tools included a number of items that collectively assessed the specific learning objective. Confirmatory factor analysis (CFA) was conducted to assess the empirical validity of the item measures and learning objectives. CFA was chosen because it tests how well the measured variables represent the constructs (Hair, Black, Babin, Anderson, & Tatham, 2005).

The items that comprise each construct are identified prior to running the CFA. We then confirm or reject that the items properly reflect the construct, in this case the learning objective. CFA was conducted on six of the learning objectives. We were unable to run CFA for the remaining learning objectives because they were measured by a single item, the sample size was too small, or the data were binary, making CFA inapplicable. Table 1 details the learning objectives, the measurement tools, and when each CFA was conducted. In the following section, we discuss the general approach used in the CFA. We then present two examples of the analysis; the first example is empirically valid, and the second is not. We used a combination of the software tools SPSS and AMOS


to support the analysis.
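
To make the mapping from item measures to constructs concrete, the following is a minimal sketch, in the lavaan-style model syntax accepted by the open-source Python package semopy, of how learning objectives are declared as latent constructs reflected by their items. The item and construct names are hypothetical; our own analyses were carried out in SPSS and AMOS rather than in Python.

import pandas as pd
import semopy

# Each "=~" line declares a latent construct (a learning objective)
# measured by a set of observed items. All names here are hypothetical.
MODEL_DESC = """
business_knowledge =~ accounting + economics + management + quant
oral_communication =~ projection + pace + eye_contact + conclusion
"""

def fit_cfa(data: pd.DataFrame) -> semopy.Model:
    """Fit the measurement model to assessment data whose columns
    match the observed item names in MODEL_DESC."""
    model = semopy.Model(MODEL_DESC)
    model.fit(data)  # maximum-likelihood estimation by default
    return model

Specifying the items in advance, before any estimation, is what distinguishes the confirmatory approach from exploratory factor analysis.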

For each of the CFA analyses, we followed a three-step approach documented in many leading academic journals: (1) review of the raw data, (2) assessment of model fit, and (3) assessment of construct validity. Prior to the CFA analyses, we reviewed the data for sample


size, outliers, missing data, and normality. We determined whether the sample size was adequate for the model based on suggested requirements that range from 5 to 20 observations per variable (Hair et al., 2005). The existence of outliers, along with their potential impact on normality and the final results, was assessed at both the univariate and multivariate levels by reviewing the Mahalanobis distance (D2) calculation for each case; a case whose D2 value is substantially different from the others is a potential outlier. Next, we identified the amount of missing data and assessed its potential impact on the analysis. Finally, we assessed normality by reviewing both skewness and kurtosis at the univariate and multivariate levels. Values of 0 represent a normal distribution. For skewness, less than 3 is acceptable (Chou & Bentler, 1995; Kline, 2005). For kurtosis, Kline (2005) stated that less than 8 is reasonable, with greater than 10 indicating a problem and over 20 an extreme problem.
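
As an illustration of this screening step, the sketch below (Python, using pandas, NumPy, and SciPy; our own screening was performed in SPSS) computes the observations-per-variable ratio, the Mahalanobis D2 for each case, and the univariate skewness and kurtosis statistics.

import numpy as np
import pandas as pd
from scipy import stats

def screen_data(df: pd.DataFrame) -> pd.DataFrame:
    """Pre-CFA data screening following the three checks above."""
    n, p = df.shape
    print(f"Observations per variable: {n / p:.1f} (suggested range: 5-20)")

    # Mahalanobis D2 for each case; a case whose D2 is substantially
    # different from the others is a potential multivariate outlier.
    x = df.to_numpy(dtype=float)
    centered = x - x.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(x, rowvar=False))
    d2 = np.einsum("ij,jk,ik->i", centered, inv_cov, centered)
    print("Largest D2 values:", np.round(np.sort(d2)[-3:], 2))

    # Univariate normality: skewness < 3 and kurtosis < 8 are treated
    # as acceptable (Chou & Bentler, 1995; Kline, 2005).
    return pd.DataFrame(
        {"skewness": stats.skew(df, axis=0),
         "kurtosis": stats.kurtosis(df, axis=0)},  # excess kurtosis
        index=df.columns)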

We evaluated how well the data fit the measurement model using Analysis of Moment Structures (AMOS) software (see http://www.utexas.edu/its/help/spss/526). We used maximum-likelihood estimation, which is a widely used approach, is fairly robust to violations of normality, and produces reliable results under many circumstances (Hair et al., 2005; Marsh, Balla, & McDonald, 1988). First, we evaluated the chi-square (χ2) statistic, which measures the difference between the observed and estimated covariance matrices. It is the only statistic that has direct statistical significance, and it is used as the basis for many other fit indices (Hair et al., 2005). Statistical significance in this case indicates that an error of approximation or estimation exists. Many researchers question the validity of the chi-square statistic (Bentler, 1990), so if it is significant, additional indices should be used to evaluate overall model fit. The root mean square error of approximation (RMSEA) is a standardized measure of the lack of fit of the


data to the model (Steiger, 1990). It is fairly robust to small sample sizes (i.e., 250 or fewer). Thresholds of .05 to .08 have been suggested, with Hu and Bentler (1999) recommending a cutoff close to .06. The Bentler and Bonett (1980) non-normed fit index (NNFI) was used because it also works well with small sample sizes (Bedeian, 2007); generally, .90 or better is considered adequate fit, with Hu and Bentler (1999) suggesting a threshold of .95 or better for good fit. The NNFI and the comparative fit index (CFI) are incremental fit indices in that they assess model fit by reference to a baseline model (Bentler, 1990; Bentler & Bonett, 1980; Hu & Bentler, 1999). The NNFI and CFI generate values between 0 and 1, with .90 or greater representing adequate fit (Hu & Bentler, 1999; Bedeian, 2007).
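
For readers who wish to reproduce this step outside AMOS, the sketch below shows how the same indices can be obtained from a fitted semopy model. We assume semopy's calc_stats interface and its column names (semopy labels the NNFI as TLI, the Tucker-Lewis index), so treat the details as illustrative rather than definitive.

import semopy

def report_fit(model: semopy.Model) -> None:
    """Print the overall fit indices discussed above, with the
    thresholds we applied, for an already-fitted semopy model."""
    s = semopy.calc_stats(model).T.squeeze()  # one row of fit statistics
    print(f"chi-square = {s['chi2']:.2f} (p = {s['chi2 p-value']:.3f})")
    print(f"RMSEA = {s['RMSEA']:.3f} (.05-.08; Hu & Bentler suggest ~.06)")
    print(f"NNFI/TLI = {s['TLI']:.3f} (>= .90 adequate, >= .95 good)")
    print(f"CFI = {s['CFI']:.3f} (>= .90 adequate)")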

The final step was to assess construct validity, which is the extent to which a set of measured items accurately reflects the theoretical latent construct the items were designed to measure (Hair et al., 2005). The standardized factor loadings should be statistically significant and .5 or higher (Hair et al., 2005). Convergent validity was assessed by calculating the average variance extracted (AVE) and construct reliability (CR). The average percentage of variance extracted among a set of construct items is a summary indicator of convergence. An AVE of .5 or higher suggests adequate convergence; less than .5 indicates that, on average, more error remains in the items than variance explained by the latent construct (Fornell & Larcker, 1981). High CR indicates that the measures consistently represent the same latent construct. Values of .6 to .7 are acceptable, with .7 or higher indicating good reliability (Nunnally, 1978).
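
Because AVE and CR are simple functions of the standardized loadings, they are easy to compute directly once the CFA has been estimated. Below is a short sketch under the standard Fornell and Larcker (1981) definitions, using hypothetical loadings for a nine-item measure.

import numpy as np

def ave_and_cr(std_loadings):
    """AVE and construct reliability from standardized loadings.

    AVE is the mean squared loading; CR is (sum of loadings)^2 over
    (sum of loadings)^2 plus the summed error variances, 1 - loading^2
    (Fornell & Larcker, 1981).
    """
    lam = np.asarray(std_loadings, dtype=float)
    ave = np.mean(lam ** 2)
    cr = lam.sum() ** 2 / (lam.sum() ** 2 + np.sum(1.0 - lam ** 2))
    return ave, cr

# Hypothetical standardized loadings for a nine-item measure:
ave, cr = ave_and_cr([.62, .58, .71, .66, .49, .55, .60, .64, .57])
print(f"AVE = {ave:.3f} (>= .50 suggests adequate convergence)")
print(f"CR = {cr:.3f} (.6-.7 acceptable; >= .70 good reliability)")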

Results. The purpose of this article is to illustrate the method we used to assess the empirical validity of our learning objectives to aid other business schools in their AoL journey. It is not our goal to suggest that our measurement tools or learning objectives should be universally adopted. Therefore, to illustrate our process, we limit our discussion to the overall fit


of our constructs and then discuss in detail two examples of how we used CFA: an example of a valid measure (business knowledge) and an example of a measure that requires some modification (oral communication).

Table 2 documents the model fit indices for the CFA analyses performed for six of the learning objectives. The learning objectives for business knowledge, written communication, and global awareness (context, region, and perspectives) are valid in terms of construct reliability and model fit. Therefore, we are reasonably confident that these objectives adequately measure the learning goals established by the College. The model fit for oral communication was below suggested thresholds.

Table 2
Fit Indices

Learning objective                      CR     VE      χ2        RMSEA  RMSR   NNFI   CFI
Business Knowledge (n=218)              .853   .409    61.35**   .077   11.66  .932   .949
Written Communication (n=147)           .798   .498    6.163**   .119   .011   .926   .975
Oral Communication (n=161)              .62    .246    120.24**  .148   .033   .693   .770
Global Awareness Context (n=151)        .85    .397    19.65**   .089   .336   .958   .975
Global Awareness Region (n=151)         .89    .562    8.183     .000   .192   1.003  1.000
Global Awareness Perspectives (n=90)    .96    .73     96.643**  .208   .031   .867   .905

Note: CR = composite reliability; VE = variance extracted. **p < .05

Business knowledge. Students take the ETS Major Field Test for the Bachelor’s Degree

in Business (MFT) in a capstone class in their senior year. The MFT is a widely used

standardized exam for business students (www.ets.org/mft). The capstone class is required for all

business majors, and the MFT is administered in all sections of the capstone class, thus ensuring

that all business majors participate in the assessment prior to graduation. To ensure that students

take the exam seriously and give their best effort, their results are reflected in their course grade.

Two hundred and eighteen responses were obtained from the exams administered in 2010 –

2011. Nine composite scores from the exam are used to assess the overall business knowledge


of the student. (See Table 3 for a listing of these nine items.) A review of the data found no missing data or significant outliers. The ratio of the number of responses to the number of variables (218/9 = 24) exceeded the suggested range of 5 to 20 observations per variable. The kurtosis and skewness statistics were less than the recommended thresholds of 8 and 3, indicating only a slight departure from normality. Therefore, we were reasonably confident in proceeding to the next phase of the analysis: evaluating model fit by running a confirmatory factor analysis on the item measures.

The χ2 was significant; however, the RMSEA was .077, below the recommended threshold of .08. The RMSEA is parsimonious in that it considers the impact of the number of variables in its calculation, so it is a better indicator of model fit than χ2. The NNFI and CFI were .932 and .949, respectively, well above the recommended threshold of .90. Overall, the model fit is acceptable.


In assessing the construct validity of the items, we noted that all items were statistically significant; however, one item measure, quantitative analysis, fell below the recommended loading of .50. The composite reliability of .853 was well above the recommended threshold of .6, and the variance extracted was slightly below the recommended threshold of .50. Overall, there is evidence that the item measures adequately reflect the latent construct of business knowledge. However, further analysis is needed to determine the cause of the low factor loading for quantitative analysis. Figure 3 provides a visual representation of the CFA model for this construct.

Oral communication. The students’ oral communication skills were measured by a rubric-assessed oral presentation assignment administered in the business communication course, which is part of the required core curriculum. The business communication professor for all sections of the class completed the assessment. There is only one business communication professor, thus ensuring consistency of the measurement process. The rubric comprises nine item measures (see Table 3 for a list of these items). Data were obtained from the 2010-2012 assessments, which resulted in 161 observations for the CFA of the oral communication construct. A review of the data found two missing observations for the item measure conclusion and one missing


observation for eye contact. Because the impact of the missing data was small, we used the mean imputation method for the missing observations. We also identified one potential outlier; we deleted the case on a trial run and found that it did not have a significant impact on normality or the results. The sample size ratio (161/9 = 17.9) is in the recommended range of 5 to 20 observations per variable. The multivariate kurtosis statistic of 23.393 was well above the recommended threshold of 8, which provides evidence of a departure from normality. The univariate skewness statistics were below the threshold of 3.
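
The multivariate kurtosis value came from AMOS. A sketch of the same two steps (mean imputation, then Mardia's multivariate kurtosis, which we take to be the statistic underlying AMOS's output) is shown below; the DataFrame name and the exact correspondence to AMOS's report are assumptions.

import numpy as np
import pandas as pd

def mardia_kurtosis(df: pd.DataFrame) -> float:
    """Mardia's multivariate kurtosis: the mean fourth power of the
    Mahalanobis distances. Under multivariate normality its expected
    value is p * (p + 2), so a large excess signals non-normality."""
    x = df.to_numpy(dtype=float)
    centered = x - x.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(x, rowvar=False, bias=True))
    d2 = np.einsum("ij,jk,ik->i", centered, inv_cov, centered)
    return float(np.mean(d2 ** 2))

# Usage with a hypothetical 9-column rubric DataFrame `ratings`:
# ratings = ratings.fillna(ratings.mean())          # mean imputation
# excess = mardia_kurtosis(ratings) - 9 * (9 + 2)   # expected p(p+2)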

Because of the non-normal distribution, we attempted to run an asymptotically distribution-free estimation method to conduct the CFA. Unfortunately, this resulted in an inadmissible solution because of the existence of a negative error variance. Therefore, we used the maximum-likelihood estimation technique, which often provides reasonable results despite departures from normality. The χ2 was significant; however, the adjusted chi-square ratio (χ2/df) was 4.453, below the recommended threshold of 5. The RMSR was .033, well below the recommended threshold of .10, while the RMSEA was .145, above the recommended threshold of .08. The NNFI and CFI were .693 and .770, respectively, below the recommended threshold of .90. Our overall assessment is that the model fit is poor. The RMSEA, NNFI, and CFI are affected by model complexity, which could indicate that the number of variables in the model affected model fit.

Two of the items, conclusion and fillers, were not statistically significant. Only two of the nine items, projection and pace, had loadings greater than the recommended threshold of .50. The composite reliability of .62 met the recommended threshold of .6. The variance extracted of .246 was well below the recommended threshold of .50. Our conclusion is that the item measures do not accurately reflect the latent construct of oral communication.


Discussion

The purpose of this article is to highlight an AACSB-accredited program as a case study

of its design of an empirically validated AoL system and to demonstrate how empirical validation improved our AoL system. When appropriate, we used confirmatory factor analysis to validate

the measurement instruments used to assess student achievement of the learning objectives

established by the faculty and stakeholders of the College. We provided a description of the CFA

process used to assess the empirical validity of the learning objectives. Finally, we illustrated the process by discussing the results of the validation process for two learning objectives.

For the business knowledge learning objective, we found that both the model fit and construct reliability were valid. However, we noted that the factor loading for quantitative analysis was much lower than those of the other item measures. In reviewing the raw scores, not surprisingly, we found that our students do not perform as well in the quantitative analysis topic as in the other topics covered by the MFT, indicating that even though we had a valid measure of business knowledge, our students need improvement in their quantitative skills. These results prompted us to examine the curriculum of the class where much of the quantitative material is taught.


For the oral communication learning objective, we found that the model fit was poor, construct reliability was low, and many of the item measures from the rubric did not load. These results prompted us to examine the measurement tool used to assess achievement of oral communication competency. Corrective action includes a review of the rubric and of the process used to collect the data.

Conclusion and Limitations. We found value in, and therefore will continue, empirically validating the AoL learning objectives using confirmatory factor analysis. The validation process has increased support for the AoL system among faculty who understand and are trained in the research process. We have received positive feedback from the AACSB and higher education associations on our validation process. Most importantly, it provides confidence in the tools we are using to measure student achievement of the learning objectives. Now that the process and the supporting models are in place, it will be relatively simple to continue the validation process in order to continually improve. Because of the method we used to capture the assessments, we are able to use the same validation process for both AACSB and SACS accreditations.

We are continually striving to improve our validation process. One limitation is that we are unable to simultaneously assess the empirical validity of the entire model of learning objectives. Assessment is conducted by class rather than by individual student across all classes; therefore, we do not have data for one student across all learning objectives. To address this limitation, we are evaluating both in-house and commercially developed databases to track student data across classes and semesters. For example, the University of Central Florida (UCF) developed an in-house database for tracking student data (Moskal, Ellis, & Keon, 2008). Taskstream is a commercially available database for tracking data (https://www.taskstream.com).


References

Baker, D. L., Ni, A. Y., & Van Wart, M. (2012). AACSB assurance of learning: Lessons learned

in ethics module development. Business Education Innovation Journal, 4(1), 19-27.

Beard, D., Schwieger, D., & Surendran, K. (2008). Integrating soft skills assessment through university, college, and programmatic efforts at an AACSB accredited institution. Journal of Information Systems, 19(2), 229-240.

Bedeian, A. G. (2007). Even if the tower is “ivory,” it isn’t “white”: Understanding the consequences of faculty cynicism. Academy of Management Learning & Education, 6(1), 9-32. doi:10.2307/40214514

Bentler, P. M. (1990). Comparative fit indexes in structural models. Psychological Bulletin, 107,

238-246.

Bentler, P. M., & Bonett, D. G. (1980). Significance tests and goodness of fit in the analysis of

covariance structures. Psychological Bulletin, 88, 588-606.

Betters-Reed, B. L., Nitkin, M. R., & Sampson, S. D. (2008). An assurance of learning success model: Toward closing the feedback loop. Organization Management Journal, 5, 224-240.

Chou, C. P., & Bentler, P. M. (1995). Estimates and tests in structural equation modeling. In R.

Hoyle (Ed.), Structural Equation Modeling (pp. 37-59). Thousand Oaks, CA: Sage.

Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18, 39-50.

Gardiner, L. R., Corbitt, G., & Adams, S. J. (2010). Program assessment: Getting to a practical

how-to model. Journal of Education for Business, 85(3), 139-144.

doi:10.1080/08832320903258576


Garrett, N., Marques, J., & Dhiman, S. (2012). Assessment of business programs: A review of

two models. Business Education & Accreditation, 4(2), 17-25.

Hair, J. F., Black, B., Babin, B., Anderson, R. E., & Tatham, R. L. (2005). Multivariate data analysis (6th ed.). Upper Saddle River, NJ: Prentice-Hall.

Harper, J. S., & Harder, J. T. (2009). Assurance of learning in the MIS program. Decision Sciences Journal of Innovative Education, 7(2), 489-504. doi:10.1111/j.1540-4609.2009.00229.x

Hess, P. W., & Siciliano, J. (2007). A research-based approach to continuous improvement in

business education. Organization Management Journal, 4(2), 135-147.

Hu, L., & Bentler, P. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal, 6(1), 1-55.

Kline, R. B. (2005). Principles and practice of structural equation modeling (2nd ed.). New York: Guilford.

LeClair, D. (2012). Broadening our view of assurance of learning. Retrieved from http://aacsbblogs.typepad.com/dataandresearch/2012/02/broadening-our-view-of-assurance-of-learning.html

Marsh, H. W., Balla, J. R., & McDonald, R. P. (1988). Goodness-of-fit indices in confirmatory

factor analysis: Effects of sample size. Psychological Bulletin, 103, 391-411.

Martell, K. (2007). Assurance of learning (AoL) methods just have to be good enough. Journal of Education for Business, 82(4), 241-243.


Moskal, P., Ellis, T., & Keon, T. (2008). Assessment in higher education and the management of student-learning data. Academy of Management Learning & Education, 7(2), 269. doi:10.2307/40214542

Nunnally, J. C. (1978). Psychometric theory (2nd ed.). New York: McGraw-Hill.

Pesta, B., & Scherer, R. (2011). The assurance of learning tool as predictor and criterion in business school admissions decisions: New use for an old standard? Journal of Education for Business, 86(3), 163. doi:10.1080/08832323.2010.492051

Romero, E. J. (2008). AACSB accreditation: Addressing faculty concerns. Academy of

Management Learning & Education, 7(2), 245-255.

Sampson, S. D., & Betters-Reed, B. L. (2008). Assurance of Learning and outcomes assessment:

A case study of assessment of a marketing curriculum. Marketing Education Review,

18(3), 25-36.

Steiger, J. H. (1990). Structural model evaluation and modification: An interval estimation approach. Multivariate Behavioral Research, 25(2), 173-180.

Weber, P., Weber, J. E., Sleeper, B. J., & Schneider, K. C. (2004). Self-Efficacy toward service,

civic participation and the business student: Scale development and validation. Journal of

Business Ethics, 49(4), 359-369.

Wilhelm, W. J., & Czyzewski, A. B. (2012). Ethical reasoning instruction in undergraduate cost

accounting: A non-intrusive approach. Academy of Educational Leadership Journal,

16(2), 131-142.

Zocco, D. (2011). A recursive process model for AACSB assurance of learning. Academy of Educational Leadership Journal, 15(4), 67-91.