06 distance learning standards-qti


Timothy K. Shih

Distance Learning Standards – QTI

Overview

• The IMS Question & Test Interoperability (QTI) specification describes a data model for the representation of question and test data and their corresponding results reports

• Exchange items among authoring tools, item banks, test construction tools, learning systems, and assessment delivery systems

History

• March 1999 – initial V0.5 specification
• November 1999 – IMS Question & Test Interoperability V1.0
• March 2003 – QTI V1.2.1
• September 2003 – draft QTI V2.0
• June 2006 – QTI V2.1 (current version)

Specification Use Cases

• Provide a well documented content format for storing and exchanging items independent of the authoring tool used to create them

• Support the deployment of item banks across a wide range of learning and assessment delivery systems

• Provide a well documented content format for storing and exchanging tests independent of the test construction tool used to create them

• Support the deployment of items, item banks, and tests from diverse sources in a single learning or assessment delivery system

• Provide systems with the ability to report test results in a consistent manner

The Role of Assessment Tests and Assessment Items

Tools

• Authoring Tool: creating or modifying an assessment item

• Item Bank: collecting and managing items
• Test Construction Tool: assembling tests from individual items
• Assessment Delivery System: managing the delivery of assessments to candidates
• Learning System: enables or directs learners in learning activities

Actors

• Author: the author of an assessment item; quality can be controlled by another person
• Item Bank Manager: managing a collection of assessment items
• Test Constructor: creating tests from items
• Proctor: overseeing the delivery of an assessment
• Scorer: assessing the candidate's responses; can be replaced by an automated system
• Tutor: supporting the learning process for a learner
• Candidate: the person being assessed

Structure of this Specification

• IMS Question & Test Interoperability Overview

• IMS Question & Test Interoperability Implementation Guide

• IMS Question & Test Interoperability Assessment Test, Section, and Item Information Model

• IMS Question & Test Interoperability XML Binding

• IMS Question & Test Interoperability Results Reporting

• IMS Question & Test Interoperability Integration Guide

• IMS Question & Test Interoperability Conformance Guide

• IMS Question & Test Interoperability Meta-data and Usage Data

• IMS Question & Test Interoperability Migration Guide

Implementation Guide

• Simple Items
• Composite Items
• Response Processing
• Feedback
• Adaptive Items
• Item Templates
• Tests (Assessments)
• Usage Data (Item Statistics)
• Packaged Items, Tests and Meta-data
• Validation

Revision: 8 June 2006

Simple Items

• Simple items are items that contain just one point of interaction, for example a simple multi-choice or multi-response question

Unattended Luggage

• single response

• a correct answer
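As a concrete illustration, a single-response item of this kind can be expressed in the QTI 2.1 XML binding roughly as follows; the identifiers, prompt, and choice text are illustrative, not taken from the official example:

```xml
<assessmentItem xmlns="http://www.imsglobal.org/xsd/imsqti_v2p1"
    identifier="unattendedLuggage" title="Unattended Luggage"
    adaptive="false" timeDependent="false">
  <!-- single response with one declared correct answer -->
  <responseDeclaration identifier="RESPONSE" cardinality="single" baseType="identifier">
    <correctResponse>
      <value>ChoiceA</value>
    </correctResponse>
  </responseDeclaration>
  <outcomeDeclaration identifier="SCORE" cardinality="single" baseType="float"/>
  <itemBody>
    <choiceInteraction responseIdentifier="RESPONSE" shuffle="false" maxChoices="1">
      <prompt>What should you do if you see unattended luggage?</prompt>
      <simpleChoice identifier="ChoiceA">Report it to security</simpleChoice>
      <simpleChoice identifier="ChoiceB">Open it to identify the owner</simpleChoice>
      <simpleChoice identifier="ChoiceC">Ignore it</simpleChoice>
    </choiceInteraction>
  </itemBody>
  <!-- standard template: SCORE = 1 if RESPONSE matches the correct response, else 0 -->
  <responseProcessing
      template="http://www.imsglobal.org/question/qti_v2p1/rptemplates/match_correct"/>
</assessmentItem>
```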

Composition of Water

• multiple responses

• multiple answers

• one point is given for each correct answer

• selecting the incorrect third choice deducts two points
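In the XML binding this scoring scheme is expressed with a mapping inside the responseDeclaration rather than with custom rules; a sketch assuming illustrative choice identifiers H, O, and Cl:

```xml
<responseDeclaration identifier="RESPONSE" cardinality="multiple" baseType="identifier">
  <correctResponse>
    <value>H</value>
    <value>O</value>
  </correctResponse>
  <!-- per-choice scores; lowerBound clips the total so it cannot go below zero -->
  <mapping lowerBound="0" upperBound="2" defaultValue="0">
    <mapEntry mapKey="H" mappedValue="1"/>
    <mapEntry mapKey="O" mappedValue="1"/>
    <mapEntry mapKey="Cl" mappedValue="-2"/>
  </mapping>
</responseDeclaration>
```

The item then uses the map_response standard template, which sums the mapped values of the selected choices into SCORE.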

Grand Prix of Bahrain

• the correct answer is composed of an ordered list of values

• the shuffle attribute tells the delivery engine to shuffle the order of the choices before displaying them to the candidate

• use the standard response processing template (score 1 or 0)
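An ordered response of this kind is declared with cardinality="ordered" and presented through an orderInteraction; the driver identifiers below are placeholders:

```xml
<responseDeclaration identifier="RESPONSE" cardinality="ordered" baseType="identifier">
  <correctResponse>
    <!-- the correct answer is this ordered list of values -->
    <value>DriverA</value>
    <value>DriverB</value>
    <value>DriverC</value>
  </correctResponse>
</responseDeclaration>
<itemBody>
  <!-- shuffle="true" tells the delivery engine to shuffle the choices before display -->
  <orderInteraction responseIdentifier="RESPONSE" shuffle="true">
    <prompt>Place the drivers in their finishing order.</prompt>
    <simpleChoice identifier="DriverA">Driver A</simpleChoice>
    <simpleChoice identifier="DriverB">Driver B</simpleChoice>
    <simpleChoice identifier="DriverC">Driver C</simpleChoice>
  </orderInteraction>
</itemBody>
<responseProcessing
    template="http://www.imsglobal.org/question/qti_v2p1/rptemplates/match_correct"/>
```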

Shakespearian Rivals

• pairing up the choices
• max number of pairs = 3
• undirected pair (i.e., A – P = P – A)

Characters and Plays

• directed pair: from a source set into a target set
• each character can be in only one play
• each play could contain all four characters

Richard III (Take 1)

• selecting choices (buttons) and using them to fill the gaps

Richard III (Take 2)

• use the combo box to fill each in-line choice independently

Richard III (Take 3)

• use a text entry (i.e., fill-in-blank)
• expected length = 15
• matching is case sensitive
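A fill-in-blank of this kind uses a textEntryInteraction inline in the item body; a sketch with illustrative surrounding text and answer:

```xml
<responseDeclaration identifier="RESPONSE" cardinality="single" baseType="string">
  <correctResponse>
    <value>York</value>
  </correctResponse>
</responseDeclaration>
<itemBody>
  <p>Now is the winter of our discontent made glorious summer by this sun of
     <!-- expectedLength hints at the size of the input box to render -->
     <textEntryInteraction responseIdentifier="RESPONSE" expectedLength="15"/>.</p>
</itemBody>
<responseProcessing
    template="http://www.imsglobal.org/question/qti_v2p1/rptemplates/match_correct"/>
```

With the match_correct template, string comparison is exact, which gives the case-sensitive matching described above.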

Writing a Postcard

• multiple lines of answers
• no response processing (i.e., no grading)

Olympic Games

• similar to simple choice
• choices have to be presented in the context of the surrounding text

UK Airports

• similar to simple choice
• select hotspots of a graphical image

Where is Edinburgh?

• mark a coordinate on the map
• area mapping is used to check the answer

That is, a circle centered at (102,113) with a radius of 16
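The area mapping translates directly into an areaMapping on a point-valued response, using the circle given above; the image file name and dimensions below are assumptions:

```xml
<responseDeclaration identifier="RESPONSE" cardinality="single" baseType="point">
  <!-- any point inside the circle centered at (102,113) with radius 16 scores 1 -->
  <areaMapping defaultValue="0">
    <areaMapEntry shape="circle" coords="102,113,16" mappedValue="1"/>
  </areaMapping>
</responseDeclaration>
<itemBody>
  <selectPointInteraction responseIdentifier="RESPONSE" maxChoices="1">
    <prompt>Mark Edinburgh on this map of the United Kingdom.</prompt>
    <object type="image/png" data="uk.png" width="206" height="280">UK map</object>
  </selectPointInteraction>
</itemBody>
<responseProcessing
    template="http://www.imsglobal.org/question/qti_v2p1/rptemplates/map_response_point"/>
```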

Flying Home

• the correct answer is composed of an ordered list of values

• presented as hotspots

Low-cost Flying

• pairing up the choices
• max number of pairs = 3
• max number of matches for each item = 3

Airport Tags

• selecting choices and using them to fill the gaps
• using drag and drop

Airport Locations

• select a coordinate on the map by positioning a given object

• area mapping is used to check the answer

Jedi Knights

• to obtain a percentage
• give partial credit to close percentages
• specify a lower bound, an upper bound, and a step
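A percentage response of this kind uses a sliderInteraction with the stated bounds and step; the correct value and partial-credit map below are illustrative:

```xml
<responseDeclaration identifier="RESPONSE" cardinality="single" baseType="integer">
  <correctResponse>
    <value>40</value>
  </correctResponse>
  <!-- full credit for the exact value, partial credit for the neighbouring steps -->
  <mapping defaultValue="0">
    <mapEntry mapKey="40" mappedValue="1"/>
    <mapEntry mapKey="30" mappedValue="0.5"/>
    <mapEntry mapKey="50" mappedValue="0.5"/>
  </mapping>
</responseDeclaration>
<itemBody>
  <sliderInteraction responseIdentifier="RESPONSE"
      lowerBound="0" upperBound="100" step="10" stepLabel="true">
    <prompt>What percentage of the respondents answered yes?</prompt>
  </sliderInteraction>
</itemBody>
<responseProcessing
    template="http://www.imsglobal.org/question/qti_v2p1/rptemplates/map_response"/>
```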

Composite Items

• Composite items are items that contain more than one point of interaction

• Composite items may contain multiple instances of the same type of interaction or have a mixture of interaction types

Response Processing

• Standard response processing templates were used in previous examples

• A more general response processing model is needed
– Example: give partial credit for an ordering response even when it does not exactly match the correct answer

• Response processing consists of a sequence of rules that are carried out, in order, by the response processor
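The "score 1 or 0" standard template used in earlier examples expands to the following explicit rule sequence, which is the usual starting point for writing custom response processing:

```xml
<responseProcessing>
  <responseCondition>
    <responseIf>
      <!-- compare the candidate's response with the declared correct response -->
      <match>
        <variable identifier="RESPONSE"/>
        <correct identifier="RESPONSE"/>
      </match>
      <setOutcomeValue identifier="SCORE">
        <baseValue baseType="float">1</baseValue>
      </setOutcomeValue>
    </responseIf>
    <responseElse>
      <setOutcomeValue identifier="SCORE">
        <baseValue baseType="float">0</baseValue>
      </setOutcomeValue>
    </responseElse>
  </responseCondition>
</responseProcessing>
```

Partial-credit schemes replace the single match test with further responseCondition rules or mapping expressions.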

Feedback

• Feedback consists of material presented to the candidate conditionally based on the result of Response Processing– i.e., instant hints

• Modal feedback is shown to the candidate after response processing has taken place and before any subsequent attempt or review of the item

• Integrated feedback is embedded into the itemBody and is only shown during subsequent attempts or review
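A minimal sketch of modal feedback driven by an outcome variable; the outcome identifier FEEDBACK and the feedback text are illustrative:

```xml
<outcomeDeclaration identifier="FEEDBACK" cardinality="single" baseType="identifier"/>
<responseProcessing>
  <responseCondition>
    <responseIf>
      <match>
        <variable identifier="RESPONSE"/>
        <correct identifier="RESPONSE"/>
      </match>
      <setOutcomeValue identifier="FEEDBACK">
        <baseValue baseType="identifier">correct</baseValue>
      </setOutcomeValue>
    </responseIf>
    <responseElse>
      <setOutcomeValue identifier="FEEDBACK">
        <baseValue baseType="identifier">incorrect</baseValue>
      </setOutcomeValue>
    </responseElse>
  </responseCondition>
</responseProcessing>
<!-- shown after response processing, before any further attempt or review -->
<modalFeedback outcomeIdentifier="FEEDBACK" identifier="correct" showHide="show">
  That is correct.
</modalFeedback>
<modalFeedback outcomeIdentifier="FEEDBACK" identifier="incorrect" showHide="show">
  Not quite; have another look at the passage.
</modalFeedback>
```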

Mexican President

• The feedback shown depends directly on the response given by the candidate

Adaptive Items

• New feature of QTI version 2
• Allows an item to be scored adaptively over a sequence of attempts; scoring is based on the path the candidate actually took

• Adaptive items must provide feedback to the candidate in order to allow them to adjust their responses

Monty Hall (Take 1)

Item Templates

• allows many similar items to be defined using the same Assessment Item

• controlled by using template rules
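A template-driven item can be sketched with templateDeclaration and templateProcessing; the variable names, ranges, and question text below are illustrative:

```xml
<templateDeclaration identifier="A" cardinality="single" baseType="integer"
    paramVariable="false" mathVariable="false"/>
<templateDeclaration identifier="B" cardinality="single" baseType="integer"
    paramVariable="false" mathVariable="false"/>
<templateProcessing>
  <!-- each item clone draws fresh operand values -->
  <setTemplateValue identifier="A">
    <randomInteger min="1" max="10"/>
  </setTemplateValue>
  <setTemplateValue identifier="B">
    <randomInteger min="1" max="10"/>
  </setTemplateValue>
</templateProcessing>
<itemBody>
  <p>What is <printedVariable identifier="A"/> + <printedVariable identifier="B"/>?</p>
</itemBody>
```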

Tests (Assessments)

• Gather items into a test

• Include items from other XML files

Collections of Item Outcomes

• Two sections (sectionA and sectionB)

• navigation mode is nonlinear (choose any item)

• The submission mode is set to simultaneous (all responses are submitted at the end of the test)

• Assigning different weights to each item
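The test settings above (two sections, nonlinear navigation, simultaneous submission, per-item weights) can be sketched in an assessmentTest like this; identifiers and file names are illustrative:

```xml
<assessmentTest xmlns="http://www.imsglobal.org/xsd/imsqti_v2p1"
    identifier="exampleTest" title="Example Test">
  <!-- nonlinear: the candidate may visit items in any order;
       simultaneous: all responses are submitted together at the end -->
  <testPart identifier="part1" navigationMode="nonlinear" submissionMode="simultaneous">
    <assessmentSection identifier="sectionA" title="Section A" visible="true">
      <assessmentItemRef identifier="item1" href="item1.xml">
        <!-- this item counts double when outcomes are aggregated -->
        <weight identifier="W1" value="2.0"/>
      </assessmentItemRef>
    </assessmentSection>
    <assessmentSection identifier="sectionB" title="Section B" visible="true">
      <assessmentItemRef identifier="item2" href="item2.xml">
        <weight identifier="W1" value="1.0"/>
      </assessmentItemRef>
    </assessmentSection>
  </testPart>
</assessmentTest>
```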

Additional Functions

• Categories of Item
• Arbitrary Weighting of Item Outcomes
• Specifying the Number of Allowed Attempts
• Controlling Item Feedback in Relation to the Test
• Duration of Tests
• Early Termination of Test
• Branching Based on the Response to an Assessment Item
• Randomizing the Order of Items and Sections

Packaged Items, Tests and Meta-data

• Both single items and multiple items can be packaged

• Packaging is described by a manifest file: imsmanifest.xml

• The manifest file demonstrates the use of a resource element to associate meta-data (both LOM and QTI) with an item, and the file element to reference the assessmentItem XML file and the associated image file
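A minimal sketch of such a manifest, assuming illustrative file names; imsqti_item_xmlv2p1 is the resource type defined for QTI 2.1 items:

```xml
<manifest xmlns="http://www.imsglobal.org/xsd/imscp_v1p1" identifier="MANIFEST-QTI-1">
  <organizations/>
  <resources>
    <resource identifier="item-1" type="imsqti_item_xmlv2p1" href="unattendedLuggage.xml">
      <metadata>
        <!-- LOM and qtiMetadata records describing the item go here -->
      </metadata>
      <!-- the item XML and the image it references -->
      <file href="unattendedLuggage.xml"/>
      <file href="images/sign.png"/>
    </resource>
  </resources>
</manifest>
```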

Meta-data and Usage Data

• The IEEE LOM standard defines a set of meta-data elements that can be used to describe learning resources, but does not describe assessment resources in sufficient detail

• New Meta-data Elements in IMS QTI v2.0 (extends the IEEE LOM to meet the specific needs of QTI)

• QTI version 2.1 further extends this to enable the description of tests, pools, and object banks

• Secondary meta-data, sometimes known as 'usage data' (item statistics), is defined separately in its own data model


New Meta-data Elements in IMS QTI v2.0

• New category of meta-data: qtiMetadata
– itemTemplate
– timeDependent
– composite
– interactionType
– feedbackType
– solutionAvailable
– toolName
– toolVersion
– toolVendor
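An instance of the qtiMetadata record using these elements might look like this; the tool name, version, and vendor values are illustrative:

```xml
<qtiMetadata xmlns="http://www.imsglobal.org/xsd/imsqti_metadata_v2p1">
  <itemTemplate>false</itemTemplate>
  <timeDependent>false</timeDependent>
  <composite>false</composite>
  <interactionType>choiceInteraction</interactionType>
  <feedbackType>none</feedbackType>
  <solutionAvailable>true</solutionAvailable>
  <toolName>ExampleAuthoringTool</toolName>
  <toolVersion>1.0</toolVersion>
  <toolVendor>example.org</toolVendor>
</qtiMetadata>
```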

Interaction Type

• associateInteraction
• choiceInteraction
• customInteraction
• drawingInteraction
• endAttemptInteraction
• extendedTextInteraction
• gapMatchInteraction
• graphicAssociateInteraction
• graphicGapMatchInteraction
• graphicOrderInteraction
• hotspotInteraction
• hottextInteraction
• matchInteraction
• orderInteraction
• positionObjectInteraction
• selectPointInteraction
• sliderInteraction
• textEntryInteraction
• uploadInteraction

Feedback Type

• None: no feedback is available
• Nonadaptive: feedback is available but is non-adaptive
• Adaptive: feedback is available and is adaptive

IEEE LOM Profile

• A few suggestions on the use of IEEE LOM when applied to QTI 2.0 items

IEEE LOM - General

• General
– Identifier
– Title
– Language
– Description
– Keyword
– Coverage

IEEE LOM – Lifecycle, Meta-metadata

• Lifecycle
– Version
– Status
– Contribute

• Meta-metadata
– Identifier
– Contribute
– Metadata_schema
– Language

IEEE LOM – Technical, Educational

• Technical
– Format
– Size
– Location
– Other Platform Requirements

• Educational
– Context
– typical_learning_time
– Description
– Language

IEEE LOM – Rights, Relation, Annotation, Classification

• Rights
– cost
– copyright_and_other_restrictions
– description

• Relation
– kind
– resource

• Annotation

• Classification

Usage Data

• QTI defines a separate class for describing item statistics

• An optional URI identifies the default glossary in which the names of the itemStatistics are defined

• itemStatistic
– Name
– Glossary
– Context
– CaseCount
– stdError
– stdDeviation
– lastUpdated
– targetObject (Identifier, partIdentifier)
– ordinaryStatistic
– categorizedStatistic

XML Binding

• The accompanying XML binding provides a binding for the qtiMetadata object

• The qtiMetadata class defines a new category that could appear alongside LOM categories

• qtiMetadata is bound separately and must be used in parallel to the LOM object as an additional meta-data object

Assessment Test, Section, and Item Information Model

• The reference guide to the main data model for assessment tests and items. The document provides detailed information about the model and specifies the requirements of delivery engines and authoring systems.


Results Reporting

• A reference guide to the data model for result reporting. The document provides detailed information about the model and specifies the associated requirements on delivery engines.


QTI-related Open-Source Tools

• http://sourceforge.net/search/?type_of_search=soft&words=QTI

Summary

• Representation of question and test data and their corresponding results reports

• Developed by IMS
• Can be combined with SCORM
• Common Cartridge