The Development of Robotable
A Hands-On Tabletop Environment to Support
Engineering Education
A thesis submitted by
Paul S. Mason
In partial fulfillment of the requirements for the degree of
Master of Science
in
Mechanical Engineering
TUFTS UNIVERSITY
June 2005
Copyright 2005 by Paul S. Mason
All Rights Reserved
Adviser: Chris Rogers Ph.D.
Abstract
Citizens in the 21st century must keep pace with the ever-growing demands of an
increasingly technological society that is propelling the world towards a global econ-
omy. A global economy challenges all nations to increase industrial competitiveness,
and this is done primarily through innovation. Engineers, in particular, require inno-
vation for the great role they play creating wealth through the application of science
and technology in society.
The current education system is, however, not equipping students adequately for
their role in the 21st century. The pressure of today’s educational environment has
produced a “teach to the test” culture that is stifling student creativity.
One organization helping teachers bring passion, innovation, and independence
of thought into classrooms is Tufts University’s Center for Engineering Educational
Outreach (CEEO). The work presented in this thesis is closely associated with the
CEEO and documents the development of a compelling tabletop learning environ-
ment, the Robotable. The Robotable is a platform that supports new and existing
technologies to facilitate the kind of interactions that enhance learning. It is simply
a frosted tabletop that acts as a rear-projection screen. The computer display is
projected onto it via a mirror below. Instructional content can guide the learner through
hands-on activities that explore engineering concepts. Optical tracking enables the
position and orientation of objects, such as Lego robots, to be known at any time.
This enables participants at separate locations to share in an activity via the Internet.
They can see two-dimensional projections of remote robots navigating the tabletop
alongside their own robot.
Preliminary tests have shown that the Robotable environment is significantly more
stimulating than conventional forms of delivering learning activities, and learning on
the Robotable significantly improves subsequent application of the given content.
Acknowledgements
I sincerely thank all of the people who have been involved in the development of
Robotable’s concept, hardware and software. The initial concept for this work came
from Chris Rogers, who is academic advisor to all contributors at Tufts University. His
guidance has been essential to the project. Our collaborators at Lincoln University
in Canterbury, New Zealand, are led by Alan Mckinnon and Keith Unsworth. Their
insight and judgement are always appreciated.
My committee members Caroline Cao and Robert Jacob have also been the lecturers
for the two courses I have found most interesting and useful at Tufts: Human
Factors and Tangible User Interfaces, respectively. Thank you.
At Tufts University in Boston I have collaborated with Ben Gemmill and Addie
Sutphen, with assistance from Meredith Knight, Barbara Bratzel, Catherine Petron-
ino and Elissa Milto. At Lincoln University in New Zealand, Carl Pattie, Craig Oliver
and Jonathan Festing have done great work. I particularly appreciate their willingness
to bounce ideas back and forth.
Thank you to my friends from Tousey and McCollester houses, and everyone who
has studied at Tuftl or worked at the CEEO. Their good humor has, more than
anything else, made the past twenty-two months a genuine pleasure.
I also wish to acknowledge my mother Valerie, my sisters Pam and Alice, and my
brother Tom for the example they set and the support they give.
Last but not least, thank you to Essie for being sole caregiver to two black
Labradors, Rousseau and Descartes, for three years while I have been away.
Contents
Abstract ii
Acknowledgements iii
List of Tables viii
List of Figures ix
1 Introduction 1
1.1 A Changing Society Challenges Nations and Engineers . . . . . . . . 1
1.1.1 Shortage of Engineers . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.2 An Increasingly Technological Society . . . . . . . . . . . . . . 2
1.1.3 Globalization . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1.4 How to Succeed in a Global Economy . . . . . . . . . . . . . . 3
1.2 Challenges to Education . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.1 Performance of the Current Education System . . . . . . . . . 5
1.2.2 Center for Engineering Educational Outreach . . . . . . . . . 6
1.2.3 Constructivism and Constructionism . . . . . . . . . . . . . . 7
1.2.4 The George Lucas Educational Foundation . . . . . . . . . . . 8
1.3 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2 Literature Review 12
2.1 Review of Distance Learning . . . . . . . . . . . . . . . . . . . . . . . 12
2.1.1 Technology and Distance Learning . . . . . . . . . . . . . . . 12
2.1.2 The Internet and Distance Learning . . . . . . . . . . . . . . . 14
2.2 Transactional Distance and Empathic Communication . . . . . . . . . 16
2.3 Tangible Interfaces: MIT . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.4 Augmented Reality: HITLab . . . . . . . . . . . . . . . . . . . . . . . 18
2.4.1 Communication Space and Task Space . . . . . . . . . . . . . 18
2.4.2 The HI-SPACE Table . . . . . . . . . . . . . . . . . . . . . . . 19
2.4.3 The ARToolkit . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.6 Specific Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3 Robotable: An Overview 24
3.1 Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.2 Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.2.1 Robotable Online (Ben) . . . . . . . . . . . . . . . . . . . . . 27
3.2.2 Image Processing (Carl) . . . . . . . . . . . . . . . . . . . . . 29
3.2.3 Activity Card Toolkit (Craig) . . . . . . . . . . . . . . . . . . 29
3.2.4 Calibration, Whiteboard, Activity Prototyping, and General
Integration (Paul) . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.3 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4 Hardware 36
4.1 Table Frame . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.2 Table Top . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4.3 Mirror and Projector . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.4 Cameras . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.4.1 Software Access to Cameras . . . . . . . . . . . . . . . . . . . 47
4.4.2 Tracking from Above vs. Tracking from Below . . . . . . . . . 47
4.4.3 Tracking with IR vs. Tracking with Visible Light . . . . . . . 50
5 Software 54
5.1 Conferencing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
5.2 Whiteboard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
5.3 Activities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
5.3.1 Electronic Activity Cards . . . . . . . . . . . . . . . . . . . . 59
5.3.2 Web-based Activity Cards . . . . . . . . . . . . . . . . . . . . 63
5.4 Calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
6 Testing and Evaluation 67
6.1 Experiment Description . . . . . . . . . . . . . . . . . . . . . . . . . 67
6.2 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
6.3 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
6.3.1 Times to Task Completion . . . . . . . . . . . . . . . . . . . . 73
6.3.2 Subjective Data . . . . . . . . . . . . . . . . . . . . . . . . . . 79
6.4 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
7 Future Work 82
7.1 Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
7.1.1 Tabletop screen . . . . . . . . . . . . . . . . . . . . . . . . . . 82
7.1.2 Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
7.1.3 Tangible Devices and Augmented Reality . . . . . . . . . . . . 83
7.1.4 Portability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
7.2 Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
7.2.1 Development of Instructional Content . . . . . . . . . . . . . . 84
7.2.2 Integration and Testing of the Robotable Internet Server . . . 85
A Data and Analysis 86
Bibliography 92
List of Tables
4.1 Epson PowerLite S1 specifications. . . . . . . . . . . . . . . . . . . . 43
6.1 Summary of statistics of the times to task completion. . . . . . . . . 75
6.2 ANOVA: Two-factor with replication for times to task completion. . .
6.3 Differences of means for comparing with Q(σd) values. . . . . . . . . . 78
6.4 Calculating Q(σd). . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
6.5 Excerpt from a table of Q-values. . . . . . . . . . . . . . . . . . . . . 79
6.6 Critical values of ±z. . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
A.1 Wilcoxon signed rank test for data from the question rating the activ-
ities from Dull to Stimulating. . . . . . . . . . . . . . . . . . . . . . . 91
List of Figures
2.1 Classroom2000. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.2 Maratech. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.3 Comparing face-to-face collaboration with computer supported work
(Billinghurst). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
(a) The task space is contained within the communication space. . 19
(b) The task space is separate from the communication space. . . . 19
2.4 The HITLab’s virtual dig exhibit at the Seattle museum (HITLab). . 20
2.5 3D virtual object overlaid on the real world (HITLab). . . . . . . . . 21
2.6 Tracking based on ARToolkit. . . . . . . . . . . . . . . . . . . . . . . 21
(a) Incoming video stream. . . . . . . . . . . . . . . . . . . . . . . . 21
(b) Threshold and find squares. . . . . . . . . . . . . . . . . . . . . 21
(c) Calculate 3D position and orientation. . . . . . . . . . . . . . . 21
3.1 Schematic of the Robotable showing key features. . . . . . . . . . . . 25
3.2 Activity cards inspired by a variety of metaphors. . . . . . . . . . . . 33
(a) eBook. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
(b) Electronic. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.3 Web based activity card. . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.1 Mimio capture bar and pen. . . . . . . . . . . . . . . . . . . . . . . . 37
4.2 The basic Robotable. . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.3 Cross-section of 1530 and 1530-Lite. . . . . . . . . . . . . . . . . . . . 41
4.4 Ghosting is more noticeable with a larger angle of incidence near the
top of the mirror. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
(a) Case 2 - mirror at 45◦. . . . . . . . . . . . . . . . . . . . . . . . 45
(b) Double reflection causes ghosting. . . . . . . . . . . . . . . . . . 45
4.5 Case 3 - projector aimed down (used at Tuftl). . . . . . . . . . . . . . 45
4.6 Cameras currently used on the Tuftl Robotable. . . . . . . . . . . . . 46
(a) iSight with iChat for conferencing (Apple). . . . . . . . . . . . . 46
(b) Channel Vision 5124 B&W night vision for IR tracking (Channel
Vision). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.7 SightFlex (MacMice). . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
4.8 Marker placement for tracking from above and below. . . . . . . . . . 49
(a) Marker attached to the topside. . . . . . . . . . . . . . . . . . . 49
(b) Marker attached to the underside. . . . . . . . . . . . . . . . . . 49
4.9 The quality of the image deteriorates as distance from the tabletop
increases. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
(a) Marker 1 to 2mm from frosted tabletop. . . . . . . . . . . . . . 49
(b) Marker 16mm from frosted tabletop. . . . . . . . . . . . . . . . 49
4.10 850nm longpass filter. . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
4.11 Non-uniform intensities viewed from above the table. . . . . . . . . . 52
4.12 Image processing is easy without a projection. . . . . . . . . . . . . . 52
(a) Image without a projection. . . . . . . . . . . . . . . . . . . . . 52
(b) Marker detection successful. . . . . . . . . . . . . . . . . . . . . 52
4.13 Image processing fails with a projected texture. . . . . . . . . . . . . 53
(a) Image with a projected texture. . . . . . . . . . . . . . . . . . . 53
(b) Marker detection fails. . . . . . . . . . . . . . . . . . . . . . . . 53
5.1 Robotable whiteboard (March 2005). . . . . . . . . . . . . . . . . . . 58
5.2 Electronic activity cards enable rich media content. . . . . . . . . . . 60
5.3 Greyscale image plus lookup table equals terrain. . . . . . . . . . . . 61
5.4 Viewing pages of a Fable. . . . . . . . . . . . . . . . . . . . . . . . . 62
(a) Title with credits. . . . . . . . . . . . . . . . . . . . . . . . . . 62
(b) Story in images and text. . . . . . . . . . . . . . . . . . . . . . 62
(c) The moral of the fable. . . . . . . . . . . . . . . . . . . . . . . . 62
5.5 Tracking based on ARToolkit. . . . . . . . . . . . . . . . . . . . . . . 64
(a) Defining a reference for the projection. . . . . . . . . . . . . . . 64
(b) Construction for finding normalized coordinates. . . . . . . . . . 64
5.6 Finding position and orientation from the marker’s ordered vertices. . 66
6.1 Frustrating to satisfying. . . . . . . . . . . . . . . . . . . . . . . . . . 71
6.2 Dull to stimulating. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
6.3 Difficult to easy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
6.4 Comparing data for the time to completion of the first activities. . . . 74
6.5 Comparing data for the time to completion of the second activities. . 74
6.6 Graph of mean times to completion. . . . . . . . . . . . . . . . . . . . 77
A.1 The age and gender of participants in the study. . . . . . . . . . . . . 87
A.2 Comfort with computers. . . . . . . . . . . . . . . . . . . . . . . . . . 87
A.3 Experience with ROBOLAB™. . . . . . . . . . . . . . . . . . . . . . . 87
A.4 Experience with spreadsheets. . . . . . . . . . . . . . . . . . . . . . . 88
Chapter 1
Introduction
1.1 A Changing Society Challenges Nations and
Engineers
1.1.1 Shortage of Engineers
Many people believe that the United States is facing a shortage of scientists and en-
gineers. In 2001 the U.S. Bureau of Labor Statistics predicted that the number of
jobs in science and engineering would increase 47% by the year 2010. Meanwhile,
the National Science Board (NSB) reported that undergraduate engineering enrollments
declined by more than 20% from 1983 to 1999[1]. Two years later, a National Science
Foundation (NSF) study revealed that this trend had reversed and that enrollments
had increased every year since 1999[2]. The NSF report also states that full-time,
first-time graduate enrollment of foreign students in these fields declined by about
8% in 2002, which is thought to be due to restrictions on temporary visas imposed after
the 9/11 terrorist attacks. In contrast, full-time, first-time science and engineering
graduate enrollment increased almost 14% for U.S. citizens and permanent residents.
So it seems that trends are volatile and, it turns out, a number of critics do not
agree with forecasts of an impending dearth of scientists and engineers. However,
even the critics agree “that America’s science-and-engineering machine faces
significant challenges in a world much altered by global competition and increasing
diversity at home.”[3]
1.1.2 An Increasingly Technological Society
In an increasingly technological society, engineering and science skills can help people
to function more effectively and adapt to change. All citizens should have the oppor-
tunity to develop these skills, irrespective of attributes such as gender, ethnicity, or
socio-economic background. Currently, women and minorities are under-represented
in engineering professions, although an NSF document[2] shows that enrolments based
on gender and ethnicity have shown gains in recent years. Susan Staffin Metz, Presi-
dent of Women in Engineering Programs and Advocates Network, says there is a need
to encourage all students to “pursue a career in engineering so the United States can
continue to meet the ever-growing needs of its technology based society.”[4] Legal
Affairs Editor for The Economist, David Manasian, says, “Despite the dotcom boom
and bust, the computer and telecommunications revolution has barely begun. Over
the next few decades, the internet and related technologies really will profoundly
transform society.”[5] All citizens need to have a basic understanding of the processes
and uses of engineering and technology to make informed choices[6]. In England,
the Chair of the Royal Society Education Committee, Sir Alistair MacFarlane, said in
a statement on 17 December 2003, “We live in an increasingly technological world,
and we need, as a nation to have a workforce that includes highly skilled scientists,
engineers, doctors and technicians, and a citizenry able to understand, appreciate
and act upon the consequences of advances in scientific knowledge.” Although he
was speaking in the United Kingdom, his words are just as relevant to the United
States. While the general population must cope with an increasingly technological
world, the greatest challenge to scientists and engineers, and to the prosperity of all
nations, over the next few decades comes from globalization.
1.1.3 Globalization
One inevitable and irreversible consequence of the forward march of technology is
globalization[7]. Globalization refers to the spread of free-market capitalism through-
out the world. In a global economy, research, manufacturing, and skilled professionals
will go where the economic climate is best. According to the President of the American
Society of Mechanical Engineers (ASME), Harry Armen, the disturbing feature of
globalization is that there appear to be no rules; some nations will suffer and some
will thrive. The challenge for all nations, including the United States, in today’s
global economy is to increase industrial competitiveness[8].
The true wealth of an organized entity, be it a company or a country, resides
in its human capital. Engineers play a great role in the creation of wealth through
the application of science and technology in society[9]. In June 2000, President Bill
Clinton said, “Our passion for discovery, our determination to explore new scientific
frontiers, and our can-do spirit of technological innovation have always driven this
Nation forward.” It is this capacity to innovate that will determine leadership in the
era of globalization.
1.1.4 How to Succeed in a Global Economy
Ensuring that the United States remains competitive in a global economy requires
investment in its human capital, which includes:
• research and development,
• education,
• increasing the diversity of our science and engineering workforce,
• attracting the brightest youngsters into the profession, and
• collaborative partnerships between industry, academia, and government[8].
In an article titled, Making Connections: The Role of Engineering and Engineering
Education, the Deputy Director of the National Science Foundation, Dr. Joseph
Bordogna suggests what engineers need to succeed:
“To be successful and to promote prosperity, engineers must exhibit more
than first-rate technical and scientific skills. In an increasingly competitive
world, they must help us make good decisions about investing enormous
amounts of time, money, and human resources toward common ends. I
like to think of the engineer as someone who not only knows how to do
things right, but also knows the right thing to do. This requires that he
or she have a broad, holistic background. Since engineering itself is an
integrative process, engineering education must likewise be integrative.
For example, engineers must be able to work in teams and communicate
well. They must be flexible, adaptable, and resilient. Equally important,
they must be able to employ a systems approach in their work, to make
connections within the context of ethical, political, international, environ-
mental, and economic considerations.” (Joseph Bordogna, 1997)[9]
Revered management thinker Peter F. Drucker defines innovation as making and
profiting from new things, as opposed to productivity, which implies simply making
existing things more efficiently. It is innovation that drives economic growth and
determines a nation’s competitiveness in a global economy. Therefore, education
needs to cultivate this quality in today’s students to produce tomorrow’s leaders.
Engineers will also require a commitment to lifelong learning in order to hone their
intellectual skills and revitalize their talents for innovation and creativity[8]. Dr.
Bordogna says, “A critical element in the innovation process is scientific inquiry, an
analytic, reductionist process that involves delving into the secrets of the universe to
discover new knowledge.” He also believes that engineers should participate in the
process of engineering throughout their educational experience. In addition to the pre-
requisite mathematical and scientific skills, engineers should have an understanding
of risk, and enjoy the excitement of facing an open-ended challenge and creating
something new[9].
1.2 Challenges to Education
The challenge for the education system is to provide all students with the kind of
training that will equip them to be confident and effective citizens throughout their
lives. This section looks at the performance of the current education system, and
introduces some theories and practices that can bridge the gap between what exists
and what is required.
1.2.1 Performance of the Current Education System
The Department of Education does not feel that the existing system is producing the
required results:
“Upon graduating from high school, few students have acquired the math
and science skills necessary to compete in the knowledge-based econ-
omy.”(U.S. Department of Education, 2004)[10]
Educational policies driven by standards and the No Child Left Behind (NCLB) law have
put pressure on state education departments, schools, and teachers to deliver higher
test scores on limited budgets[11]. Critics of these policies say President Bush was
ill-advised and has ushered in an age of rigidity in education. Classrooms are full of
teachers who “teach to the test”, which has the effect of stifling student creativity[12].
A fall 1998 survey by the National Center for Education Statistics found that
47% of teachers don’t use software at all for instruction. Nearly four out of every 10
of these teachers said they don’t have enough time to try out software, and almost
as many said they don’t have enough training on instructional software. Most K-5
teachers hold only a general education degree, and their awareness of, and comfort
level with, engineering and science are minimal. One third of the teachers who don’t
use software do not have a computer in their classroom, and 40% have just one or
two. However, since 1998 things have improved: between 1998 and 2003 the ratio of
students to instructional computers with Internet access in public schools decreased
from 12.1:1 to 4.4:1[13]. Still, to make a difference in learning, teachers must
know how to use the digital content in their classrooms. This underlines the need for
professional development and teacher support. One organization that offers a range of
programs to meet this need is Tufts University’s Center for Engineering Educational
Outreach (CEEO).
1.2.2 Center for Engineering Educational Outreach
The CEEO’s mission is to increase people’s knowledge and awareness of, and comfort
with, science and technology. It provides workshops, institutes, conferences, and sum-
mer programs for children and teachers. It has created research and degree programs
at Tufts University that have combined disciplines such as engineering, education,
child development, computer science, and psychology. One project investigates how
children and teachers learn engineering, and another looks at how to create optimal
learning environments for engineering education. The Student Teacher Outreach
Mentorship Program (STOMP) and the GK-12 programs pair graduate and under-
graduate engineering and computer science fellows with school teachers to help them
infuse engineering concepts and activities into their lessons. The GK-12 fellows serve
as a technical resource in the classroom, helping the teachers to create and imple-
ment hands-on engineering-based projects and curricula. An emphasis is placed on
creating activities that appeal to both genders, and the program aims to encourage en-
gineering students to make educational outreach an ongoing commitment throughout
their lives. The CEEO’s efforts to bring engineering to the classroom are grounded
in constructionist philosophy, which maintains that people learn better when they
are working with materials that allow them to design and build artifacts that are
meaningful to them[14].
1.2.3 Constructivism and Constructionism
Constructivism is a theory of knowledge developed by Jean Piaget. He argues that
children are not simply empty vessels into which knowledge can be poured, but
theory builders who actively construct and rearrange knowledge based on their
experiences in the world. Constructivism is regarded as a learning theory more than
a teaching approach[15]. The “open system” approach of constructivism, where the
content is not pre-specified, the learner determines the direction of the lesson, and
assessment is more subjective, does not appeal to teachers under pressure to “teach
to the test”.
Seymour Papert was a colleague of Piaget’s in the late 1950s and early 1960s. In
1993, he wrote that education “remains largely committed to the educational philosophy
of the late nineteenth and early twentieth centuries”[16]. By this, he is referring
to the objective behaviorist/cognitive learning theories that readily lend themselves to
instructional design. He was convinced of Piaget’s theory of knowledge but wanted
to extend it to learning and education. Papert called his theory constructionism,
which asserts that constructivist learning happens particularly well when people are
engaged in building something external to themselves. In this way they are not only
constructing their own knowledge, they are simultaneously constructing a product
such as a sand-castle, machine, computer program, or a book. Piaget recognized
what he called concrete thinking, which is thinking with and through physical ob-
jects. He believed concrete thinking was used by children, who replaced it with more
abstract formal thinking when they grew up. Papert, however, believes concrete
thinking is complementary to formal thinking and applies to adults as well as chil-
dren. In fact he says, “that a prevailing tendency to overvalue abstract reasoning is
a major obstacle to progress in education”[16]. Constructionism is a way of making
formal, abstract ideas and relationships more concrete, more visual, more tangible,
more manipulative, and therefore more readily understandable.
1.2.4 The George Lucas Educational Foundation
The George Lucas Educational Foundation (GLEF) is a nonprofit operating foun-
dation that documents and disseminates information about exemplary programs in
K-12 schools to help these practices spread nationwide. The foundation uses the
word Edutopia to refer to their vision of an ideal educational landscape, where stu-
dents are motivated to learn and teachers are energized by the excitement of teaching.
GLEF’s mission and goals are included here because they support those of the CEEO,
NSF, ASME, the Department of Education, the Federal Government, and many more
organizations that wish to see education revitalized. The GLEF advocates commu-
nity partnerships and the need for volunteers to connect students to the real world.
There are 13 topics that the GLEF believes represent the critical elements in public
education. They are listed here because they foster the skills needed by future
citizens, engineers, and scientists. They also serve as guiding principles while developing
learning tools and instructional content.
• Assessment: Use real-world performance assessments in addition to standard-
ized tests.
• Business partnerships: Make learning more challenging and relevant.
• Community partnerships: Community volunteers connect students to the real
world.
• Digital divide: Bridge the divide by allowing all individuals and communities
access to technology resources and training.
• Emotional intelligence: Helping students develop skills to manage their emo-
tions, resolve conflict non-violently, and respect differences.
• Mentoring: Provides benefits for student teachers, new teachers, and veteran
teachers.
• Ongoing professional development: Opportunity to learn from other teachers,
and exposure to the latest research, knowledge, and technology.
• Parent involvement: Increases learning and self-confidence for students, morale
and support for teachers, and understanding for parents.
• Project-based learning: Increases self-direction and motivation, improves re-
search and problem solving skills, results in deeper understanding of subject
matter.
• School-to-career: Provide career exploration opportunities such as job-shadowing,
internships, mentoring, and career counseling.
• Teacher preparation: More practice to supplement theory in schools of educa-
tion.
• Technology integration: Internet and multi-media.
• Technology professional development: Providing teachers information, training
and assistance to ensure that new technology tools benefit student learning.
When effectively integrated into the curriculum, technology tools can extend learn-
ing in powerful ways (www.edutopia.org). The Internet and multi-media can provide
students and teachers with:
• Access to up-to-date primary source material.
• Ways to collaborate with students, teachers, and experts around the world.
• Opportunities for expressing understanding via images, sound and text.
1.3 Summary
An increasingly technological society and a global economy have challenged the na-
tion’s citizens, particularly engineers and scientists, to achieve a higher level and a
more diverse range of skills. Apart from having first-rate technical and scientific skills,
engineers will need to be creative and innovative. They will need to enjoy the ex-
citement of facing an open-ended challenge, and to work in teams and communicate
well. They will need to be flexible, adaptable and resilient, and to make connections
within the context of ethical, political, international, environmental, and economic
considerations.
The current education system is not providing graduates with the skills they need
to excel in the increasingly technological 21st century. Its shortcomings must be
addressed in order to produce the quantity and quality of
engineers that can lead the world in a global economy. Standards-driven educational
policies produce an environment that does not foster essential qualities such as
passion, innovation, and independence of thought. A revolution is required to move
into an “age of learning”. This huge and essential change will require the involvement of
the community, encouragement of educational “diversity”, decentralization, the fostering of
personal teaching styles, and the involvement of parents, teachers, and students[16].
The fact is that teachers need help, and the CEEO is one organization providing it. The CEEO advocates and delivers practices that can make classrooms exciting and empowering learning environments that equip students to live confidently and successfully in the new millennium. This thesis describes the development of a
hands-on interactive tabletop environment (Robotable) that is designed to support
the outreach mission of the CEEO. Robotable is a platform that supports learning
activities in Papert’s constructionist style. It can provide collaborative online learn-
ing, or independent training for community volunteers in preparation for outreach
work.
Chapter 2
Literature Review
Topics have been selected for review if they contribute towards the understanding
or enhancement of distance education, communication, learning, collaboration, or
human-computer interaction.
2.1 Review of Distance Learning
2.1.1 Technology and Distance Learning
Distance learning and technology have been closely related for thousands of years. The
first significant technology was paper. A variety invented in China, using pounded
mulberry bark, was superior to the Egyptians' papyrus because ink could penetrate
the surface. This made it suitable for legal documents. It took about 400 years for
Chinese paper to make its way through the Arab world to Europe, carrying mathe-
matics with it.
Gutenberg's printing press revolutionized distance education. Until the printing press, the only documents were manuscripts, either originals or copies carefully made by scribes. The printing press enabled the rapid diffusion of knowledge throughout
the world. It ushered in an Age of Enlightenment, paved the way for democracy, and
facilitated the international communication and co-operation of scientists[17].
An important mechanism for the movement of documents was the postal system.
A number of older civilizations, including China’s Chou dynasty and the Roman
Empire, had very good postal systems. With the collapse of the empire in the west,
the postal system became fragmented, but lingered until finally falling into disuse
around the 9th century. The growth of commerce during the Renaissance and the
need for business correspondence motivated the re-emergence of the postal system.
The modern postal system, using a fixed rate pre-paid stamp rather than cash on
delivery, appeared in the early 19th century and soon after correspondence courses
appeared.
Radio was invented in the early 20th century. Within a few decades programs,
including news, could be broadcast almost instantaneously across nations. By the
mid-20th century television was established and promised great things as an instruc-
tional tool. However, radio and television suffered two major drawbacks: they were “live” media, and they were one-way[18]. Other technologies such as the phonograph, audio and video tapes, and copying equipment allowed a variety of course
materials to be produced and duplicated with ease.
With the introduction of microwave and satellite technologies, radio and television
could be broadcast much further and at a cheaper rate than with previous systems.
As the cost of reception equipment has fallen over the last couple of decades, the number of distance education courses using these media has increased. However, they are still one-way, which prompts critics to complain that education should be “more than a passive transmission of academic information”[19].
2.1.2 The Internet and Distance Learning
In the late 1960s the U.S. Department of Defense began funding research on networking using a variety of technologies. By 1982 the Defense Advanced Research Projects Agency (DARPA) had a prototype Internet in place running TCP/IP software. DARPA negotiated a contract with Berkeley so that the next distribution of UNIX incorporated TCP/IP software. The Internet quickly became popular at other universities and proved invaluable to scientists and engineers. The National Science Foundation (NSF) then took a leadership role in funding the development of the Internet. Since the mid-1980s the
Internet has grown exponentially, from approximately 2000 computers on the Internet
in 1985 to 73,000,000 in the year 2000[20].
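As a back-of-envelope check (not from the source), those two figures imply a near-constant doubling time of about one year:

```python
# Back-of-envelope check of the growth rate implied by the figures above:
# ~2,000 hosts in 1985 growing to ~73,000,000 by 2000.
hosts_1985 = 2_000
hosts_2000 = 73_000_000
years = 2000 - 1985

# A constant annual growth factor g satisfies hosts_1985 * g**years == hosts_2000.
g = (hosts_2000 / hosts_1985) ** (1 / years)
print(f"annual growth factor: {g:.2f}")  # roughly 2, i.e. doubling every year
```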
With the growth of the Internet in the 1990s came proclamations of a new era
in distance learning. Unfortunately, distance learning applications have fallen short
of many of their initial promises[21]. Presented here are two case studies of Internet
based learning applications that illustrate two different approaches. Classroom2000
is typical of the type of one-way distance learning applications that appeared during
the 1990s, while Maratech is designed to be a self-contained, fully-interactive online
learning environment.
Case Study I: Classroom 2000
Classroom2000 is primarily a way of using technology to supplement a standard class.
Students attend a lecture in the usual way, except that the room is equipped with
video cameras, microphones, a screen for projecting slide shows, and an electronic
whiteboard (ZenPad) that allows the lecturer to annotate slides. The different media
streams are time-stamped and automatically integrated into a web page (see Figure
2.1). Later, students can use the page to review certain parts, or they can play the
entire lecture if they missed class. While viewing a slide, students can click on the
teacher’s annotations to replay the audio and video at the time the ink was written.
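The time-stamped integration can be pictured as a simple lookup: the system records when each ink stroke was written, and clicking a stroke seeks the recorded media to that moment. A minimal sketch in Python, where the data layout and lead-in value are assumptions rather than details from Classroom2000:

```python
# Hypothetical session log: each ink annotation carries the time
# (seconds relative to lecture start) at which it was written.
ink_timestamps = [12.0, 95.5, 310.2, 1200.8]

def media_offset_for_ink(clicked_index, lead_in=2.0):
    """Return the playback offset for a clicked annotation.

    A small lead-in is subtracted so playback starts just before
    the ink appears, giving the viewer some context.
    """
    t = ink_timestamps[clicked_index]
    return max(0.0, t - lead_in)

print(media_offset_for_ink(1))  # 93.5
```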
Figure 2.1: Classroom2000.
Classroom2000 does not claim to be a distance learning application, although its developers do say they can “later recreate the lecture experience”. Its format is very similar to other, earlier applications that did claim to be distance learning tools. As a distance learning tool, this type of application can be criticized for being one-way. Classroom2000, however, assumes that students are present in class, where they have an opportunity for two-way interaction.
Case Study II: Maratech
Maratech provides two-way interaction with an application that is specifically de-
signed for distance learning1. Remote students are able to join classes from their
homes or schools using a laptop or PC with a headset and webcam. The groups can
see and talk with each other, interact with their teacher, and share documents and
applications over the Internet (see Figure 2.2). It can run on Mac, Linux or Windows,
incorporates echo cancellation, and provides end-to-end encryption, group or private
1It is also designed for managing administration, training, and collaborative international research projects.
Figure 2.2: Maratech.
chat, and the facility to record meetings and lectures for archiving, distribution, or
playback. Maratech gets around firewall issues by hosting public or private meeting
rooms that participants can join.
2.2 Transactional Distance and Empathic Communication
The criticism of most distance learning technologies is that they are one-way. This
is true for television and, to a lesser extent, for radio because the cost of two-way
communication in these mediums is largely prohibitive. The attraction of Internet
based technologies is that two-way audio and video is possible at a greatly reduced
cost.
The role of interaction in learning has been highlighted by education researchers.
In 1983 Michael Moore introduced the concept of transactional distance, as opposed to geographic distance. To enhance learning one must reduce transactional distance, which depends on structure and dialog[22]. Structure is a measure of an educational program's responsiveness to learners' individual needs, and dialog is the extent to which learner and educator are able to respond to each other. A rigid lesson structure
inhibits interaction, while dialog depends on the quality of interaction technologies.
Moore identified three key interactions:
• Learner - content,
• Learner - instructor,
• Learner - learner.
Transactional distance is reduced and learning is enhanced by facilitating these kinds
of interactions.
Holmberg theorized that a more personal, conversational style is more conducive to learning: if material is explained in simple, direct language, everyone can understand it. He notes that some educators dislike this approach because they are afraid that they will not appear “scholarly” and will lose some “academic dignity”. Nevertheless, students prefer it and it is more effective[23].
2.3 Tangible Interfaces: MIT
Pioneering work developing Tangible User Interfaces (TUIs) has been done by the
Tangible Media Group at MIT. A goal of the group is to bridge the divide between
the physical world and cyberspace so that one can seamlessly interact with objects
from both worlds [24]. A TUI is one in which real world objects are used as com-
puter input and output devices. A tangible input device is a physical object whose manipulations are mapped one-to-one to operations on virtual objects[25]. Tangible input
devices are generally space-multiplexed, which means that each object has a single
function. Because these tools are dedicated to a specific task they are typically very
good at that task. Objects in a toolbox are space-multiplexed devices, for example,
a hammer, a screwdriver, or a pair of pliers. These objects can be extremely intuitive because their physical properties naturally suggest how they can be used. An advantage afforded by space-multiplexed devices is that several devices can be manipulated simultaneously. This facilitates collaboration and teamwork in a way that a single input device cannot. Input devices can thus be categorized as either space-multiplexed or time-multiplexed: a time-multiplexed device is a single physical object that controls different functions at different points in time, the computer mouse being the familiar example.
2.4 Augmented Reality: HITLab
The Human Interface Technology Lab, at the University of Washington in Seattle and now also in New Zealand, has pioneered new ways for humans to interact with computers.
They have investigated some of the social factors and unique technical challenges
presented when using computers to enhance collaboration. It is worth taking note of
the communication space, the task space, and the display space.
2.4.1 Communication Space and Task Space
In a typical face-to-face collaboration around a table the task space encompasses the
volume on and above the table top. The communication space is a little broader and
includes all the participants (see Figure 2.3(a)). While an object on the table may
be the focus of attention, it is easy to maintain good communication with others who
are across the table, or in our peripheral vision. Our communication “bandwidth” is
broad because a rich variety of communication cues are present. Visual cues include
gaze, gesture, facial expression and body position. Audio cues include everything
(a) The task space is contained within the communication space. (b) The task space is separate from the communication space.
Figure 2.3: Comparing face-to-face collaboration with computer supported work (Billinghurst).
verbal such as words, inflection, pitch, emphasis, pace, rhythm, pause, volume, and
sounds that are not words, such as “uh-huh”. There are also environmental cues
such as object manipulation, writing and drawing, spatial relationships and object
presence.
By comparison, computer supported collaborative work is generally characterized
by a separation of the task space and the communication space (see Figure 2.3(b)).
Participants are usually facing the same way: towards the computer monitor. This
introduces a functional seam in the workspace and reduces the number of effective
communication cues[26].
The lesson from this is that good communication is best facilitated by providing
an environment that supports the greatest number of communication cues.
2.4.2 The HI-SPACE Table
The HITLab proposes that the key to developing the next-generation human-to-information interface is to move beyond small computer monitors as our only view into the electronic information space, and keyboards and mice as
the only interaction devices. Our physical information space, which includes walls,
Figure 2.4: The HITLab’s virtual dig exhibit at the Seattle museum (HITLab).
tables, and other surfaces, could also be our view into the electronic information space.
This line of thinking resulted in construction of the HI-SPACE table. The top is an
interactive display surface and the physical table environment affords collaboration
and natural face-to-face communication. An example application of the HI-SPACE
table is the Virtual Dig exhibition, which appeared at the Seattle Museum from May
to August 2001. In Figure 2.4 participants are asked by the narrator to help in
excavating a new archaeological site in the Sichuan province. As the brushes move
over the table, the virtual grass and dirt are removed to reveal a layer of artifacts.
2.4.3 The ARToolkit
The ARToolkit is an open source, cross-platform software library for building Aug-
mented Reality (AR) applications. AR applications overlay virtual content onto our
view of the real world as shown in Figure 2.5. The core of ARToolkit is an optical
tracking system that tracks markers. The markers are squares containing a unique,
asymmetric identifying pattern. Each frame of the incoming video stream is processed
as shown in Figure 2.6. First, the image is converted to greyscale and a threshold is
applied (Figure 2.6(b)). This image is then searched for squares and, knowing the
size of the marker and the camera parameters, it is possible to extract the 3D position
Figure 2.5: 3D virtual object overlaid on the real world (HITLab).
(a) Incoming video stream. (b) Threshold and find squares. (c) Calculate 3D position and orientation.
Figure 2.6: Tracking based on ARToolkit.
and orientation of the markers with respect to the camera (Figure 2.6(c)). A virtual
object can then be overlaid on the image by rendering it with respect to a virtual
camera placed at the same position and orientation as the real camera[27].
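The pipeline in Figure 2.6 can be illustrated with a toy example. The sketch below, in Python with NumPy, performs only the threshold and square-candidate steps on a synthetic frame; the pose-estimation step is summarized in comments, and none of this is ARToolkit's actual code:

```python
import numpy as np

# Synthetic 100x100 greyscale frame: white background with a dark
# 30x30 marker square at (20, 20) -- a stand-in for one video frame.
frame = np.full((100, 100), 255, dtype=np.uint8)
frame[20:50, 20:50] = 0

# Step 1: threshold (Figure 2.6(b)) -- dark pixels become candidate marker area.
binary = frame < 128

# Step 2: locate the candidate region and check that it is square-shaped.
ys, xs = np.nonzero(binary)
top, bottom = ys.min(), ys.max()
left, right = xs.min(), xs.max()
height, width = int(bottom - top + 1), int(right - left + 1)
is_square = abs(height - width) <= 2

# Step 3 (pose, Figure 2.6(c)): knowing the marker's physical size and the
# camera parameters, ARToolkit solves for the full 3D transform; on a flat
# tabletop only x, y, and a rotation angle are ultimately needed.
center = (float(left + right) / 2, float(top + bottom) / 2)
print(is_square, center)  # True (34.5, 34.5)
```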
This type of tracking is known as outside-in and is characterized by having a fixed optical sensor, in this case the camera. It requires the entire marker to be within
view and sufficiently large to obtain a reliable pattern match. The ARToolkit has the
advantage of being inexpensive, since it is open source and requires only a camera.
2.5 Summary
The learning theories reviewed in this chapter agree that interaction is the key to
enhancing learning. Moore speaks of interactions between learner and content, learner
and instructor, and learner and learner. Holmberg says a more natural, conversational
dialogue between learner and instructor is more effective. Piaget notes that we are
not passive learners, rather we actively construct knowledge through our experiences
in the world. Papert adds that by constructing something external to ourselves we
enhance our internal construction of knowledge, and that this is true for adults as
well as children.
The technologies reviewed in this chapter offer ways to implement these interac-
tions. Tangible user interfaces offer a more intuitive and collaborative alternative
to the traditional keyboard-and-mouse way of interacting with computers. The
HITLab suggests bringing computer supported collaborative work back to the table,
which allows greater communication bandwidth between participants. They also sug-
gest broadening our view into the digital world by turning parts of our physical world,
such as a tabletop, into displays. The ARToolkit enables low cost optical tracking,
which can be used to implement tangible interface devices and augmented reality
applications.
2.6 Specific Objectives
The goal of this work is to utilize these theories and technologies to produce a power-
ful tabletop learning environment that can be used to promote engineering education.
Tables can be connected via the Internet to provide remote collaboration and compe-
tition. A tabletop environment affords face-to-face style interactions, and it provides a convenient workspace for hands-on, constructionist-style learning. The tabletop
will be a display surface that can be used with tangible devices. Video and audio
conferencing will allow communication between remote tables, and will support re-
mote collaborations. A camera will be used for optical tracking using the ARToolkit.
Keeping in mind that simplicity and manageability are the keys to introducing tech-
nology, one of the foremost goals will be to make the tabletop environment intuitive
and easy to use. The hardware needs to be robust and aesthetically pleasing. The
software needs to be reliable, flexible, and easy to use.
Chapter 3
Robotable: An Overview
The Robotable is a tabletop learning environment that is designed to integrate new
and existing technologies and learning theories to create a powerful learning expe-
rience. This chapter presents an overview of the Robotable to show readers where
others have contributed to this project. Detailed discussion of the author’s work
appears in Chapter 4 for hardware, and Chapter 5 for software.
3.1 Hardware
The hardware is determined by the requirements of the technologies, which have been
chosen to enable interactions that are richer, more intuitive, and more collaborative
than traditional computer supported work.
Occasionally, implementing a technology has undesirable side effects. In this case
the benefits must be weighed against disadvantages to evaluate if the technology
should be included. For example, wearing head-mounted displays (HMDs) enables
tangible-augmented reality, which eliminates a functional seam between the display
space and the task space. However, the expense and awkwardness of using HMDs
effectively vetoes their inclusion. A schematic of a prototype Robotable is shown in
Figure 3.1. This shows the key hardware components.
Figure 3.1: Schematic of the Robotable showing key features.
The physical components of the table are:
• Frame: The frame is constructed from 15 Series aluminum extrusions manufactured by 80/20 Inc., and is sturdy and vibration-resistant.
• Tabletop: Half inch Plexiglas provides a strong tabletop that can support a
computer and a piece of frosted glass that acts as a rear projection screen. It
is also possible to produce an equally good rear projection screen by applying
a translucent adhesive vinyl coating directly to the Plexiglas.
• Projector: The projector is fixed to the table frame so it will remain in the
required position, even if the table is moved.
• Mirror: The current version of the Tuftl Robotable uses a mirror that lies on
the floor and reflects the projected display back to the table surface. This
arrangement is required to increase the path length traveled by the projected
image so that it is sufficiently large when it arrives at the tabletop.
• Computers: Different versions of the Robotable have used different computer
configurations. To increase performance, tasks can be shared between two ma-
chines. The computers run video and audio conferencing, optical tracking, the
table display, shared applications, and anything else that may be required to
support a learning activity.
• Camera: Two cameras are used; one for video conferencing and one for optical
tracking. For video conferencing the camera is mounted to the vertical moni-
tor on the tabletop, and for optical tracking the camera has been tested both
above and below the table. Discussion of the relative merits of different camera
positions is given in Chapter 4.
• Electronic pen (not shown): An input device alternative to the mouse and
keyboard - it allows natural point/click and drag functionality.
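The mirror arrangement listed above can be quantified with simple throw-distance arithmetic. In the sketch below the throw ratio, image width, and table height are invented numbers for illustration, not measurements of the Robotable:

```python
# Rough throw-distance arithmetic for the folded projection path.
# All numbers here are illustrative assumptions.
throw_ratio = 2.0   # projector needs 2 m of optical path per 1 m of image width
image_width = 0.9   # desired display width on the tabletop, in meters

required_path = throw_ratio * image_width  # total optical path length needed
table_height = 1.0                         # projector-to-floor distance

# Folding the beam at a floor mirror roughly doubles the usable path:
# down from the projector to the mirror, then back up to the tabletop.
folded_path = 2 * table_height
print(required_path, folded_path >= required_path)  # 1.8 True
```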
3.2 Software
The people who have helped to develop Robotable’s software are identified in this
section. In addition to the work done by Paul Mason, there are three main bodies of work: a Master's thesis by Ben Gemmill, an Honors dissertation by Carl Pattie, and an Honors dissertation by Craig Oliver. Also, Addie Sutphen spent the winter of
2004-2005 developing electronic activity cards and prototype activities, and Jonathan
Festing spent the summer of 2005 (southern hemisphere) investigating tracking using
infrared light. The following sections present descriptions of the main bodies of work.
For more in-depth information relating to this material you will need to contact
Professor Chris Rogers at Tufts University Department of Mechanical Engineering.
3.2.1 Robotable Online (Ben)
The Robotable Internet Server was designed by Ben Gemmill to connect Robotables.
Ben’s thesis is titled Design and Construction of a Physically Controlled, Online,
Persistent 3D World and was completed as part of a Master of Science in Mechanical Engineering at Tufts University, Medford. There are two parts to Ben's work: the Internet server and the interface to a 3D rendering engine. Ben's work is described
below.
Internet server
In order to create a physically controlled, online, persistent 3D world, Ben designed a
custom internet server capable of handling multiple users and their objects transpar-
ently. Users can log on and off at will, and are synchronized to the current goings-on
in the world every time they rejoin. The world itself is managed like a library where
users can write in the books: a user would check an object out, modify it, and later
check it back in again for others to modify. In this way many people can collaborate
on tasks, even if they’re across the world from one another. The system keeps track of
the changed data and only gives the clients what changed, so even users on slow con-
nections can be kept up to speed. Using this server, our counterparts in Christchurch,
New Zealand, have successfully run activities and collaborated with Tuftl laboratory
in Boston.
The server was designed to be general purpose, supporting multi-user applications
from simple chatting to immersive 3D worlds.
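The library metaphor described above can be sketched as a small check-out/check-in protocol. This is an illustration of the idea only, not Ben's server code:

```python
class WorldServer:
    """Minimal sketch of the check-out/check-in object model."""

    def __init__(self):
        self.objects = {}      # object id -> current state
        self.checked_out = {}  # object id -> user currently holding it

    def check_out(self, user, obj_id):
        # Only one user may hold an object at a time.
        if obj_id in self.checked_out:
            return False
        self.checked_out[obj_id] = user
        return True

    def check_in(self, user, obj_id, new_state):
        # Only the holder may commit a modification.
        if self.checked_out.get(obj_id) != user:
            return False
        self.objects[obj_id] = new_state
        del self.checked_out[obj_id]
        return True

server = WorldServer()
server.objects["robot1"] = {"x": 0, "y": 0}
assert server.check_out("alice", "robot1")
assert not server.check_out("bob", "robot1")   # alice still holds it
server.check_in("alice", "robot1", {"x": 5, "y": 2})
print(server.objects["robot1"])  # {'x': 5, 'y': 2}
```

The delta-update behavior described above (sending clients only what changed) would sit on top of this, for example by attaching a version number to each object.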
Spyglass
Say two Robotables are connected and involved in a competition where each table has
a Lego robot involved in some task. Each table uses optical tracking to determine its robot's position and orientation. This data is then sent to the Robotable Internet Server, which forwards changes to the other table. After the data arrives, a two-dimensional virtual representation of the remote robot is projected onto the tabletop so that participants can follow their competitor's progress. Spyglass is the term applied
to an environment that uses data from all connected Robotables to reconstruct a 3D
virtual view of the competition. For this task Ben integrated an open-source 3D
graphics engine with Robolab™ and the Server. This can enable children to create and share their own 3D worlds with their friends. This system could also be used to show remote users how to construct a Lego model in 3D, have kids show off their creations to others, or to play 3D movies. It works by sending commands out of Robolab™ to an external graphics program, combining Robolab's ease of use with the power of the open-source Ogre3D.
Both of the systems are fully modular, so that both users and developers of Robolab™ can include them in their own programs without having to “re-invent the wheel”.
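Sending commands from Robolab™ to an external graphics program implies a small command protocol between the two processes. A hypothetical line-based version, with command names and fields invented for illustration:

```python
# Hypothetical line-based protocol between Robolab and an external renderer.
scene = {}

def handle(line):
    """Parse one command line and update the scene model."""
    parts = line.split()
    if parts[0] == "CREATE":      # CREATE <name> <mesh>
        scene[parts[1]] = {"mesh": parts[2], "pos": (0.0, 0.0, 0.0)}
    elif parts[0] == "MOVE":      # MOVE <name> <x> <y> <z>
        scene[parts[1]]["pos"] = tuple(map(float, parts[2:5]))

handle("CREATE robot1 lego_bot.mesh")
handle("MOVE robot1 1.5 0.0 2.0")
print(scene["robot1"]["pos"])  # (1.5, 0.0, 2.0)
```

A text protocol like this keeps the renderer decoupled from the host environment, which is one way the two systems can remain fully modular.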
3.2.2 Image Processing (Carl)
The ARToolkit tracking system was adapted for use with Robolab™ (hence LabVIEW™) by Carl Pattie. Carl's dissertation is titled Optical Tracking for the Robotable Project and was completed as part of a Bachelor of Applied Computing with Honors at Lincoln University, New Zealand.
The task is to track objects, usually robots, on the Robotable. Since the table
surface is a plane, only four degrees of freedom are required to fully specify position
and orientation. The ARToolkit, however, returns six degrees of freedom. Therefore,
Carl selected the parts of ARToolkit source code that were required for the task
and eliminated the rest. This improved performance because there was no need to
iterate for a solution to the 3D transformation matrix. The stripped-down code was
interfaced with Robolab™ data structures and compiled on PC and Mac. The resulting DLL or Shared Library can then be accessed using the LabVIEW™ Call Library Function.
The system identifies markers in the video stream at rates up to 15 Hz depending
on the image complexity and lighting conditions. A marker’s position is accurate to
within 1.4% of the size of the table surface1. All blobs found in the image are returned
in an array including blob number, blob area, whether it is a square, whether it is a
marker, pattern number, confidence factor, coordinates of the first vertex found, and
a list of the square's vertices in order.
In addition to the image processing, Carl wrote code to load pattern files, which
are used to recognize markers, and he wrote code that does pre- and post-processing
of the video stream. (Carl Pattie, 2004)
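Accessing a compiled shared library from a host environment, as the LabVIEW™ Call Library Function does, is essentially a foreign-function call. The sketch below uses Python's ctypes to define a record mirroring the per-blob fields listed above; the field names and the commented library call are illustrative assumptions, not Carl's actual interface:

```python
import ctypes

class Blob(ctypes.Structure):
    """Record mirroring the per-blob fields returned by the tracker
    (field names are illustrative, based on the list in the text)."""
    _fields_ = [
        ("number", ctypes.c_int),
        ("area", ctypes.c_int),
        ("is_square", ctypes.c_bool),
        ("is_marker", ctypes.c_bool),
        ("pattern", ctypes.c_int),
        ("confidence", ctypes.c_double),
        ("first_vertex", ctypes.c_double * 2),
        ("vertices", (ctypes.c_double * 2) * 4),
    ]

# In the real system, LabVIEW's Call Library Function node passes an array
# of such records to/from the compiled DLL / shared library, conceptually:
#   lib = ctypes.CDLL("robotrack.dll")   # hypothetical library name
#   lib.process_frame(frame_ptr, blobs, ctypes.byref(count))
b = Blob(number=1, area=900, is_square=True, is_marker=True,
         pattern=3, confidence=0.87)
print(b.is_marker, b.confidence)  # True 0.87
```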
3.2.3 Activity Card Toolkit (Craig)
Craig Oliver carried out this work so that teachers, volunteers, and others with lit-
tle or no LabVIEW™programming experience could easily develop content for the
1As Carl mentioned in his dissertation, there are ways to improve upon this accuracy.
Robotable. Craig's dissertation is titled A LabVIEW™ Tool for Creating Electronic
Activity Cards and was completed as part of a Bachelor of Applied Computing with
Honors at Lincoln University, New Zealand.
Traditional Activity Cards consist of a set of cards that contain a series of iterative
steps designed to provide students with an independent learning environment. Elec-
tronic Activity Cards are based on the format of traditional activity cards, but they
add rich media such as video, audio, and interactive content to further enhance stu-
dents' learning. Using LabVIEW™, Craig developed a system to simplify the creation of electronic Activity Cards and enable the result to be viewed with LabVIEW™ or Robolab™. The solution employs LabVIEW™ Express VIs, which allow users to add code chunks in a single step and then automatically configure them to achieve the desired result.
3.2.4 Calibration, Whiteboard, Activity Prototyping, and
General Integration (Paul)
This section presents an overview of Paul’s work with software and how it fits with
others' contributions. Names are included in parentheses where their work appears. A
more detailed description of Paul’s code is given in Chapter 5.
Calibration
Calibration is the process of mapping the physical environment to an internal repre-
sentation, so that the computer’s model matches the real world. Inaccuracy in the
estimation of position and orientation causes a lack of precision in the projection of
virtual content to the tabletop, which results in a loss of realism and usability. In our
case, accurate calibration enables us to project a 2D representation of a robot on a
remote table in the position that exactly corresponds to the real robot's position on
the local table. Image processing with calibration allows robots to be tracked on a
Robotable. Optical tracking may be used with offline or online activities.
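One common way to realize such a mapping is to fit a 2D affine transform from a few calibration correspondences. The sketch below illustrates the general technique with invented coordinates; it is not the calibration procedure described in Chapter 5:

```python
import numpy as np

# Three known correspondences: where calibration targets appear in the
# camera image (pixels) versus where they really are on the table (meters).
# The coordinates are invented for illustration.
camera_pts = np.array([[100, 100], [500, 120], [120, 400]], dtype=float)
table_pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 0.8]], dtype=float)

# Solve for the affine map  table = [cx, cy, 1] @ A  by least squares.
ones = np.ones((3, 1))
M = np.hstack([camera_pts, ones])                   # 3x3 design matrix
A, *_ = np.linalg.lstsq(M, table_pts, rcond=None)   # 3x2 affine parameters

def camera_to_table(cx, cy):
    """Map a camera pixel coordinate to table coordinates."""
    return np.array([cx, cy, 1.0]) @ A

print(camera_to_table(100, 100))  # close to [0. 0.]
```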
Optical Tracking
Pseudo-code of a simple tracking application is presented here. For a collaborative
activity involving remote tables, Server-related tasks would need to be included.
1. Calibrate
2. Initialize
◦ Camera
◦ Load pattern files (Carl)
◦ Load calibration information
3. Grab frame from camera
4. Image Processing (ARToolkit) (Carl)
5. Derive normalized position and orientation
6. Close
◦ Camera
Steps 3 through 5 are repeated until the activity is stopped.
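The steps above can be sketched as a loop. The camera, tracker, and calibration calls below are stand-ins for the real Robolab™/ARToolkit components:

```python
def run_tracking(grab_frame, find_markers, camera_to_table, stop):
    """Steps 3-5 of the pseudo-code, repeated until stop() is true."""
    poses = []
    while not stop():
        frame = grab_frame()                      # step 3: grab frame
        for m in find_markers(frame):             # step 4: image processing
            x, y = camera_to_table(*m["center"])  # step 5: normalize pose
            poses.append((m["id"], x, y, m["angle"]))
    return poses

# Dry run with fake components processing a single frame:
frames = [{"markers": [{"id": 7, "center": (50, 50), "angle": 90.0}]}]
result = run_tracking(
    grab_frame=frames.pop,
    find_markers=lambda f: f["markers"],
    camera_to_table=lambda cx, cy: (cx / 100, cy / 100),
    stop=lambda: not frames,
)
print(result)  # [(7, 0.5, 0.5, 90.0)]
```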
Whiteboard
It is assumed that the Whiteboard is typically used collaboratively, so Server related
tasks are included in this pseudo-code.
1. Start the Server (Ben)
2. Initialize
◦ Calibrate electronic pen
◦ Connect to server (Ben)
◦ Clear whiteboard, initialize variables, etc.
◦ Start monitoring for user events
3. Get remote data from server (Ben)
4. Handle user events
5. Send local data to server (Ben)
6. Close
◦ Connection to server (Ben)
◦ Camera
Again, steps 3 through 5 are repeated until the activity is stopped.
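The get/handle/send cycle amounts to state synchronization. Below is a minimal sketch in which an in-memory list stands in for the Internet server; the stroke format is invented:

```python
# Minimal sketch of the whiteboard synchronization cycle (steps 3-5).
# The "server" is an in-memory list standing in for the Internet server.
server_strokes = []

local_strokes = []
outgoing = []

def whiteboard_cycle(pen_events):
    # Step 3: get remote data -- adopt any strokes we have not seen yet.
    for stroke in server_strokes[len(local_strokes):]:
        local_strokes.append(stroke)
    # Step 4: handle local user events (pen down/drag produce strokes).
    for event in pen_events:
        local_strokes.append(event)
        outgoing.append(event)
    # Step 5: send local data to the server.
    server_strokes.extend(outgoing)
    outgoing.clear()

whiteboard_cycle([("line", (0, 0), (10, 10))])
print(len(server_strokes), len(local_strokes))  # 1 1
```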
Activity Prototyping
An activity refers to the content delivered by the Robotable. A variety of metaphors
have been used to create different styles of activity. These designs will be elaborated
upon in Chapter 5. Activities can be categorized as being either offline or online. An
offline activity does not use an Internet connection to another Robotable and is used
for independent learning. An online activity refers to a collaborative exercise using
connected Robotables. Pseudo-code for a remote activity is similar to that given
above for the whiteboard, except that the user events are specific to the activity
rather than the whiteboard. Robotable development has, up to now, explored three
main types of offline activity:
1. eBook (Figure 3.2(a)): This idea had each activity presented as a booklet pro-
jected onto the tabletop workspace. It was implemented with Robolab™.
2. Electronic activity card (Figure 3.2(b)): Like the traditional activity card, but including multi-media content. This design incorporates Craig's Activity Toolkit
(a) eBook. (b) Electronic.
Figure 3.2: Activity cards inspired by a variety of metaphors.
for easy content creation. Addie created many activities based on this design.
These activities are implemented with Robolab™.
3. Web based (Figure 3.3): Currently being investigated, these are implemented with Robolab™ and HTML, JavaScript, ASP, etc.
Thought has been given to library structures for storing the activities, and in-
terfaces that allow users to browse the libraries. An initial idea involved separating
activities from references. In this case, a reference is a short tutorial on some specific
aspect of building, programming or theory that may help in the completion of an
activity. For example, a user who is building a Lego robot may need to consult a
reference on ways to attach a motor, or the theory of gear trains and gear ratios.
The activities may also be categorized in a variety of ways to facilitate easy access.
A user may wish to search for an activity by grade2, by subject3, or relating to specific
standards or curriculum units.
2For example, all grades or a specific band such as K-5.
3For example, mathematics, engineering, social studies.
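A library supporting such searches could tag each activity with metadata and filter on it. A sketch with invented entries:

```python
# Sketch of an activity library searchable by grade, subject, or standard.
# The entries and tag values are invented for illustration.
activities = [
    {"title": "Gear Trains", "grade": "K-5", "subject": "engineering",
     "standards": ["MA.Sci.3"]},
    {"title": "Line Follower", "grade": "6-8", "subject": "engineering",
     "standards": []},
]

def find_activities(**criteria):
    """Return activities matching every given criterion."""
    def matches(act):
        for key, value in criteria.items():
            field = act.get(key)
            if isinstance(field, list):
                if value not in field:
                    return False
            elif field != value:
                return False
        return True
    return [a for a in activities if matches(a)]

print([a["title"] for a in find_activities(grade="K-5")])  # ['Gear Trains']
```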
Figure 3.3: Web based activity card.
3.3 Discussion
Integrating the technologies presented in this chapter enables the Robotable to be
a self-contained learning environment. Instructional content can guide the learner
through hands-on activities using Lego Bricks™to explore engineering concepts. Lego
parts are available in bins attached to the side of the table. Design, testing and
evaluation are iterative steps in the engineering product development cycle, which
is advocated for most tabletop activities. By attaching an ARToolkit marker to a
Lego robot, its position and orientation can be tracked. This information can be used
to modify the tabletop display interactively both locally and remotely. This enables
participants at separate locations to share in the same activity. They will see two-dimensional projections of remote robots navigating their tabletop. By using head-mounted displays and augmented reality techniques, it is even possible for them to
see 3D virtual representations of remote robots navigating their tabletop.
Creating online activities tends to be more labor intensive than offline activities.
Online activities typically involve custom coding, which reduces reusability, and
makes their development unsuitable for novice programmers.
Until recently the focus for prototype activities has been on producing something
that can be used to test Robotable’s hardware and software capabilities. During this
process it has become clear that many factors including content, structure, layout,
language, assumed prior knowledge, and mechanism of delivery are important to the
success of the resulting learning experience. These factors are being investigated now
and are a part of future work for the Robotable.
Chapter 4
Hardware
This chapter discusses issues surrounding design and construction of the Robotable.
Sections are given to the table frame, the table top, the mirror and projector, and
the cameras. The computers and the electronic pen do not warrant a section of their
own, so they are discussed now.
Computers
During development of the Robotable a number of different machines have been used.
The choice of computer determines options for the peripheral hardware. Robotable
is currently run by an iMac G5, which allows us to use iChat AV for conferencing.
When running Internet activities the Robotable Server needs to be run on a separate
machine because it is CPU intensive. On the client side, the number of concurrent
applications and the nature of their tasks determines how the whole system behaves.
In some prior incarnations of the Robotable, two machines have been used to spread
the workload. Although these issues have hardware implications, they are really
software issues and are discussed in Chapter 5.
Figure 4.1: Mimio capture bar and pen.
Electronic pen
The electronic pen gives Robotable users a more natural way to manipulate tabletop
content, and an alternative to keyboard and mouse input. It is particularly effective
if the user does not have to switch back to the keyboard or the mouse during an
activity. If it is necessary to return to the mouse to interact with the vertical screen,
or to use the keyboard to input text, then we are simply forcing the user to juggle
three input devices instead of two.
The Mimio™ pen works as follows: a capture bar is fixed to one side of the whiteboard area (see Figure 4.1), with an infrared sensor and an ultrasonic sensor at each end. The pens1 emit synchronized pulses of ultrasonic sound and IR light, so that by comparing the delays recorded at the two sensors, the pen's position can be triangulated.
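The triangulation step can be sketched as two-dimensional trilateration: each sensor's ultrasound delay, timed against the IR sync pulse, yields a range, and the two ranges locate the pen. A minimal Python sketch, with coordinates and units chosen for illustration rather than taken from the Mimio's actual implementation:

```python
import math

def pen_position(d1, d2, baseline):
    """Locate the pen from the two sensor ranges.

    The sensors sit at (0, 0) and (baseline, 0); d1 and d2 are distances
    computed from the ultrasound time of flight (speed of sound * delay),
    timed against the IR sync pulse.
    """
    # Intersection of two circles centered on the sensors.
    x = (d1**2 - d2**2 + baseline**2) / (2 * baseline)
    # The pen is assumed to lie on one side of the capture bar.
    y = math.sqrt(max(d1**2 - x**2, 0.0))
    return x, y
```

For example, a pen 50 units from each of two sensors spaced 60 units apart lies at (30, 40), directly between them.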
Care must be taken that the working environment does not have too much ambient infrared or ultrasonic noise, because this can interfere with detection and make calibration impossible. We have tested two pens: an eBeam™ and a Mimio™. The Mimio™ has a facility to check ambient noise levels. In the Tuftl Lab it was necessary to disconnect the ultrasonic motion detectors that are part of the power-saving lighting
1. And the eraser.
system.
4.1 Table Frame
Design
When in its operating position, the table needs to be rigid enough to prevent vibration.
Vibration can interfere with optical tracking and the stability of the projection on
the tabletop. It is assumed that, once in position, the table will not require moving.
Therefore, the Robotable has been built for stability rather than portability. At one
time, casters were considered for the legs of the table. Although these would have enabled a single person to move the table easily, the idea was rejected in the interests of stability. The Robotable may be dragged by one person, but lifting it requires two people. The dimensions of the table were chosen to ensure that:
• People standing at any of the three open sides are easily able to reach the
center of the table.
• The height of the table allows the average adult to work comfortably while
standing, and to clearly see what is projected on to the tabletop.
• There is enough space beneath the table for the image from the projector to
attain sufficient size after reflecting off a mirror on the floor.
The table frame will have a projector, a camera, bins for Lego bricks, and perhaps other things attached to it, so the chosen material must support such attachments.
Implementation
The frame of the TUFTL Robotable (see Figure 4.2) is built using the Industrial
Erector Set, which is manufactured by 80/20 Inc. This product is convenient to use
because it is versatile, has excellent online catalogs and price-lists, can be cut or
Figure 4.2: The basic Robotable.
machined to order, and is easy to assemble. Also, it is easy to add, adjust, or move
attachments. 80/20 has a large network of distributors across North America2, which helps reduce shipping time for all destinations. One disadvantage of this product is that it takes some time to become familiar with its huge range of parts and to know which is most appropriate for a given task.
The first prototype of the table was constructed using 80/20's 15 Series as the basic component. That is, the legs and the perimeter of the tabletop used the "1530" extruded profile. This extrusion is called 1530 because it has a cross-section that measures 1.5” × 3.0” (see Figure 4.3). This material was used initially since it
was available in the laboratory from a previous project. In an attempt to lower costs
and reduce the weight, a later version of the table was built using 80/20’s 10 Series,
where the basic component was a 1.0” x 2.0” extruded section. This version of the
table proved to be too prone to vibration. Even after bracing the table with extra
parts, a vibration problem persisted. Interestingly enough, the extra parts used in
the attempt to eliminate vibration caused the total price to exceed that of the 15
Series table. Therefore, the 10 Series was abandoned. The next, and current, version
of the table was constructed from "1530-Lite". 1530-Lite has the same cross-section as 1530 (see Figure 4.3), is advertised with the same vibration-proof properties as 1530, but is 83% lighter.
2. In other countries, New Zealand for example, 80/20 products are not available and another solution must be found.
Figure 4.3: Cross-section of 1530 and 1530-Lite.
4.2 Table Top
Design
The two essential requirements are that the tabletop be strong enough to support computers, monitors, and miscellaneous equipment, and that part of it act as a rear-projection screen. It was learned at the beginning of the project that HITLab's HI-
SPACE table used frosted glass as a tabletop/rear-projection screen. Initial testing
included a variety of glass finishes to see if there was something better. Trials were
done with sand-blasted glass, white-laminated glass, satin-etched glass, and frosted glass. Sand-blasted glass was too coarse and did not produce a fine image. White-laminated glass was too opaque and blurred the image. Satin-etched glass was not opaque enough and did not produce a bright image. As a result, frosted glass was chosen over the other varieties.
Implementation
There are currently two slightly different approaches to implementing the tabletop.
Tuftl laboratory uses frosted glass as a screen and Lincoln uses translucent adhe-
sive vinyl. The first Robotable, built at Tuftl, uses a sheet of half-inch thick, clear
Plexiglas as the basic tabletop. A smaller sheet of quarter-inch frosted glass rests on
top of the Plexiglas to act as a rear-projection screen. The size of the frosted glass was chosen to be a few inches larger, in both dimensions, than the image from the projector after the light has traveled twice the table height. This makes the sheet
of frosted glass at Tuftl 37” by 28.5”. This is convenient because it allows approxi-
mately six inches of space around the projection area for placing things such as mouse
pads, robots, measuring tape, and Lego pieces. Concerns about the weight, durability,
portability, and safety of using glass led Lincoln to first use a sheet of Plexiglas with an abrasive-blasted finish. In this case the abrasive particles were perhaps too fine, resulting in a finish that took a long time to apply, was not opaque enough, and was uneven. Staff at Lincoln University then discovered a 3M™ product, Dusted Crystal, a translucent adhesive vinyl. Initial experiments proved this product to be very effective as a rear-projection screen, so they polished their Plexiglas back to transparent and applied a sheet of 3M™ Dusted Crystal. Subsequent trials at Tuftl
with other 3M™ adhesive vinyls revealed Milano to be more opaque and a slightly better rear-projection material; however, it did not work as well when attempting to track markers using a camera underneath the table3. This is because the more opaque
material reduces contrast in the image of the marker. Comparing the two methods,
one could say that glass is more resistant to wear and tear. On the other hand, vinyl is lighter and more portable. Although portability is not an issue right now, future plans
include a portable Robotable that can be taken to visit schools.
When judging the performance of materials used for rear-projection screens, one
considers factors such as uniformity, brightness, sharpness, and color shift. Frosted
glass and translucent vinyl score well on sharpness, brightness, and color shift. However, uniformity is an issue that plagued optical tracking until we switched to infrared light4.
3. More about camera positioning in section 4.4.2.
4. More about tracking with IR light in section 4.4.3.
Pixel number: 800 × 600
Brightness (typical): 1400 ANSI lumens
Contrast ratio: 500:1
Screen width ratio (distance/width): 1.45 to 1.8:1
Aspect ratio: 4:3 (supports 16:9)
Keystone: ±15◦
Lamp life (typical): 2000 h
Lamp price: $200
Total price: $1000

Table 4.1: Epson PowerLite S1 specifications.
4.3 Mirror and Projector
When choosing a projector consideration was given to the number of pixels, bright-
ness, screen width ratio (distance/width), aspect ratio, lamp life, cost of lamp re-
placement, and price. At the time of building the Tuftl table an Epson PowerLite S1
was chosen based on these factors. Some of the projector's specifications are given in
Table 4.1. This projector was brighter than most of its competitors and the contrast
ratio was relatively high. These factors combine to give a better image in a lighted
room.
The screen width ratio is sometimes given instead as a throw distance. It refers to the “field of view” of the projected image and determines how rapidly the image grows larger as the distance to the screen increases. This Epson produces an image measuring approximately 35” × 26” after being projected a distance of 51”. The pixel number is a parameter of the LCD; with the Epson, this is 800 × 600. Input resolutions higher than this are compressed to fit, which results in a loss of clarity. Keystone adjustment is essential, and a larger angle is preferred.
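As a quick arithmetic check of the figures above, a short sketch assuming the 1.45:1 screen width ratio (the widest zoom setting):

```python
def image_size(path_length, throw_ratio, aspect=(4, 3)):
    """Projected image width and height after the light travels path_length.

    throw_ratio is distance/width; aspect is the projector's aspect ratio.
    """
    width = path_length / throw_ratio
    height = width * aspect[1] / aspect[0]
    return width, height
```

With a 51” light path at a 1.45:1 ratio, this gives roughly a 35.2” × 26.4” image, matching the approximately 35” × 26” quoted above.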
The resulting quality of the projected image also depends on the mirror and the
relative arrangement of both. There are three possible basic arrangements:
1. Projector: placed on or near the floor and aimed vertically up at the tabletop.
The path from the projector to the tabletop is limited to the height of the table
less the depth of the projector. Even a projector with a small Distance/Width
ratio cannot produce a very large image.
Mirror: Not required.
2. Projector: placed near the floor, slightly away from the table and aimed hori-
zontally towards the table.
Mirror: Set at a 45◦ angle. This requires a large mirror since, near the tabletop,
the mirror needs to be almost as wide as the image on the table.
3. Projector: placed near the tabletop and aimed (almost) vertically down. It cannot be aimed precisely vertically or the projector itself would get in the way. The slight angle required in this case produces a keystoned image, but it should be possible to correct it with the projector's keystone adjustment.
Mirror: Set horizontally on or near the floor, reflects the image back up to the
tabletop. The mirror can be much smaller than the one used in the second case
since the longest path from the projector to the mirror is shorter.
The first two Robotables, the one at Tuftl and the one at Lincoln, used the second method. However, this produces a ghosting problem at one end of the image (see Figure 4.4). The angle, 45◦, is measured where the projector's centerline meets the mirror. This means the light at the bottom of the image has an angle of incidence smaller than 45◦, and the light at the top of the image has an angle of incidence greater than 45◦. This is compounded by the fact that projectors typically project above the centerline at a much greater angle than below it (see Figure 4.4(a)). Ghosting occurs because the light is reflected from the glass at the front of the mirror, and again at the silvered surface at the rear of the mirror (illustrated in Figure 4.4(b)). A thicker mirror accentuates ghosting because the distance between the two reflected images increases.
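The claim that a thicker mirror increases the separation between the two reflected images can be checked with Snell's law. A sketch, assuming a refractive index of 1.5 for the glass:

```python
import math

def ghost_separation(thickness, angle_deg, n=1.5):
    """Lateral offset between the front-surface reflection and the
    rear (silvered) surface reflection of a back-silvered mirror.

    thickness is the glass thickness; angle_deg is the angle of
    incidence; n is the glass refractive index (assumed 1.5).
    """
    theta_i = math.radians(angle_deg)
    theta_t = math.asin(math.sin(theta_i) / n)  # Snell's law
    # The refracted ray crosses the glass twice before re-emerging,
    # displacing the ghost beam parallel to the main reflection.
    return 2 * thickness * math.tan(theta_t) * math.cos(theta_i)
```

The separation grows with both thickness and angle of incidence, consistent with the ghosting being worst near the top of the mirror, where the angle of incidence exceeds 45◦.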
(a) Case 2 - mirror at 45◦. (b) Double reflection causes ghosting.
Figure 4.4: Ghosting is more noticeable with a larger angle of incidence near the top of the mirror.
Figure 4.5: Case 3 - projector aimed down (used at Tuftl).
Lincoln University solved their ghosting problem by getting a front-surface mirror5, whereas Tuftl switched to mirror-projector arrangement 3, in which the projector is aimed downwards and a horizontal mirror sits near the floor, as in Figure 4.5.
4.4 Cameras
There are two main functions requiring cameras: video conferencing and optical tracking. Currently an iSight camera (Figure 4.6(a)) is being used with iChat AV for conferencing, and a Channel Vision 5124 (Figure 4.6(b)) black-and-white night vision camera is being used for optical tracking. This camera is used with an IR long-pass
5. Sometimes called a first-surface mirror.
(a) iSight with iChat for conferencing (Apple).
(b) Channel Vision 5124 B&W night vision for IR tracking (Channel Vision).
Figure 4.6: Cameras currently used on the Tuftl Robotable.
filter, which enables tracking with infrared light. The Unibrain Fire-i FireWire camera is very good and works with Mac, PC, and Linux, but does not have the convenience of a built-in microphone. The Logitech QuickCam (a.k.a. LEGO Cam) USB camera has been used successfully with a PC6 for tracking with visible light, but does not work with infrared light.
The night vision camera is an analog line scanning video camera and is used
because it can see into the infrared range, whereas a standard webcam can not. Also,
the Channel Vision camera comes with 10 built-in, high-intensity IR LEDs, which act as an infrared flashlight. To use the night vision camera with the Robotable, an
XLR-8 Video-USB adapter is used since it is compatible with Robolab’s QuickTime
drivers. This setup can snap still images at 640 × 480 pixels, and deliver video at 320
× 240 pixels. There is no doubt that tracking improves when detecting markers in a 640 × 480 image; however, the cost in processing time outweighs the improvement in tracking.
6. It can also be used on a Mac with a free USB webcam driver by Macam.
4.4.1 Software Access to Cameras
If an application requires a particular hardware resource, say a camera, then it must
acquire the camera from the operating system. If another application has already
acquired the camera, it will not be available. In this case the application will probably
pop up a dialog box to inform the user, and possibly continue with a black rectangle
where the image ought to be. For example, if a video conference is in progress and one wishes to snap an image for the whiteboard, this problem arises: Robolab will attempt to acquire the iSight camera, but it will not be available since iChat is
using it. One option is to close the video conference so that the iSight camera is
released. It may be easier to aim the iSight camera to show your remote counterpart
what you wish them to see. This is not easily done if the iSight camera is fixed to a
mounting and stuck to a computer. However, a company called MacMice (through
DVForge) sells two flexible “gooseneck” firewire holders7 for the iSight; an iFlex, and
a SightFlex that comes with a stand (see Figure 4.7). Another option is to replace the
iSight with a digital video camera. This has been tried with a Sony DV camera fixed to a tripod. The camera is then easy to swivel and tilt, has zoom, and delivers excellent image and sound. However, the position of the tripod can be restricted by the table frame, and the length of the FireWire cable limits the camera's range of motion.
4.4.2 Tracking from Above vs. Tracking from Below
One of the requirements for optical tracking is that the camera must have a clear
view of the target. To track a Lego robot using the ARToolkit, markers need to be
fixed to the robot. With a camera placed above the table, the marker is placed on
top of the robot as shown in Figure 4.8(a). A camera placed under the table requires
the marker to be attached underneath the robot as in Figure 4.8(b).
7. These are popular items and currently hard to get.
Figure 4.7: SightFlex (MacMice).
Tracking from below offers two main advantages: the robot is clearly visible to users from above, and access to the RCX is not obstructed. On the other hand, care must be taken to obtain a clear image of the marker from below, because of the diffusing effect of the frosted tabletop. The clearest image is produced when the marker
is lying directly on the tabletop. Since the marker is attached to the robot there
will inevitably be at least a millimeter or two of clearance, which introduces minor
blurring (see Figure 4.9(a)). Blurring increases the further the marker is above the
table surface. Figure 4.9(b) shows the result when the robot is raised 16mm on Lego
bricks. This image is not good enough for tracking. Another disadvantage is that
the contrast in the marker is significantly reduced when viewed through a frosted
tabletop. This can be seen by comparing Figure 4.8(a) with Figure 4.9(a). Although
the images in Figure 4.9 were taken using visible light, the effect is the same with
infrared light.
One issue when tracking from beneath the table concerns the field of view of
the camera. The Channel Vision 5124 has a field of view similar to most standard
cameras. If set near the base of the table and aimed upwards, it does not “see” the
entire tabletop. Our solution is to attach it to the table frame and aim it down so
(a) Marker attached to the topside. (b) Marker attached to the underside.
Figure 4.8: Marker placement for tracking from above and below.
(a) Marker 1 to 2mm from the frosted tabletop.
(b) Marker 16mm from frosted tabletop.
Figure 4.9: The quality of the image deteriorates as distance from the tabletop increases.
Figure 4.10: 850nm longpass filter.
that it sees the tabletop through the mirror. This works well but it requires a larger
mirror because it must be shared with the projector.
4.4.3 Tracking with IR vs. Tracking with Visible Light
The reason for tracking with IR light is to avoid problems posed by visible light, such as non-uniform intensities, interference from the projected texture, and artificial and natural lighting. To “see” in IR alone requires a filter to block visible
light, and an IR light source. The filter currently in use is 12.5mm in diameter and
fits nicely into a recess in the night vision camera (see Figure 4.10). This filter, from
Edmund Optics, has a cutoff position8 of 850nm, a stop-band limit9 of 700nm, and
a pass-band limit10 of 950nm. It works very well. The two main reasons for using
infrared light for tracking are non-uniform intensities and texture projection. These
two are now explained.
8. Specified at 50% internal transmittance.
9. Specified at 0.001% internal transmittance.
10. Above 99% internal transmittance.
Non-uniform intensities
Variable lighting conditions pose problems for any image processing application. Part
of the problem is that most cameras have an automatic gain control, which adjusts
according to lighting conditions. Tracking with infrared light is used because many
of the factors that affect visible light cannot be controlled. One of the main problems
is due to non-uniform intensities on the table’s screen. This is particularly apparent
when the camera is above the table. Commercial rear projection screen manufacturers
strive to produce a screen that has good uniformity, brightness, contrast, color, and
a wide viewing angle. These features are determined by the optical properties of the
material used for the screen. Non-uniformity is often manifested as a “hot spot” and
is related to the transmittance and diffraction of the incident light. The center of the
hot spot occurs at the point on the screen that intersects a line from the viewer to
the projector (see Figure 4.11).
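The geometric statement above — the hot spot centers where the line from the viewer to the projector crosses the screen — can be sketched as a simple line-plane intersection. Coordinates are illustrative, with the screen taken as the plane z = 0:

```python
def hot_spot(viewer, projector):
    """Point where the viewer-projector line crosses the screen plane z = 0.

    viewer and projector are (x, y, z) positions, with the viewer above
    the screen (z > 0) and the projector below it (z < 0).
    """
    vx, vy, vz = viewer
    px, py, pz = projector
    # Parameter t along the segment viewer -> projector where z = 0.
    t = vz / (vz - pz)
    return (vx + t * (px - vx), vy + t * (py - vy))
```

A viewer directly above the projector sees the hot spot directly between them; as the viewer moves sideways, the hot spot slides across the screen, which is why the non-uniformity is most apparent to a camera fixed above the table.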
Interference from texture projection
Another problem with visible light is that any texture projected onto the tabletop
interferes with recognition of squares and identifying patterns. This can cause the
image processing routine to fail to detect a marker in the image. Figure 4.12 shows
the input image and the result of applying a threshold when there is no texture in
the projection. Compare this to Figure 4.13 which shows the same image processing
applied to an image that includes black and white lines projected onto the table.
Hence, tracking from beneath the table is not possible using visible light.
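The effect shown in Figures 4.12 and 4.13 can be reproduced with a toy thresholding sketch. The synthetic image, threshold value, and stripe pattern below are all illustrative; the real system uses the ARToolkit's detector rather than this code:

```python
import numpy as np

# Synthetic 40x40 greyscale frame: bright tabletop (200) containing one
# dark marker square (30), as in the clean image of Figure 4.12.
frame = np.full((40, 40), 200, dtype=np.uint8)
frame[10:20, 10:20] = 30          # the marker

def dark_pixels(image, threshold=100):
    """Binary threshold: True wherever the image is dark enough
    to be marker material."""
    return image < threshold

clean = dark_pixels(frame)

# Now "project" black-and-white stripes onto the table, as in
# Figure 4.13: every other column of the image is darkened.
textured = frame.copy()
textured[:, ::2] //= 4            # dark stripes from the projection

noisy = dark_pixels(textured)
```

In the clean frame the thresholded pixels form exactly the marker square; with the projected stripes, far more pixels fall below the threshold and the marker is no longer isolated, so square detection fails.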
Figure 4.11: Non-uniform intensities viewed from above the table.
(a) Image without a projection. (b) Marker detection successful.
Figure 4.12: Image processing is easy without a projection.
(a) Image with a projected texture. (b) Marker detection fails.
Figure 4.13: Image processing fails with a projected texture.
Chapter 5
Software
A requirement for all software used in this project is that it be cross-platform.
Occasionally this has been ignored in the interests of convenience while prototyping,
but ultimately it is a must. An example serves to illustrate this requirement. Ron
Daniels, chief information officer for the School District of Philadelphia, reports they
have about 35,000 computers within the district, and 80 to 85 percent of them are
Macs. It is also interesting to note that their philosophy is to develop more platform-
independent, web-delivered applications of software.
5.1 Conferencing
Simplicity and manageability are the keys to introducing technology and this thought
was foremost while choosing a conferencing tool. The Internet has no quality of service
guarantees, so an important requirement was to find a solution that not only delivers
good quality, but is robust and recovers well when hiccups occur. A number of options
have been considered and this section presents a brief description of them.
• Mbone and Darwin Streaming Server (DSS): Combining these two open source
tools can produce a multi-point conferencing application. The DSS receives
two1 unicast streams from Mbone and sends two2 unicast relay streams and
two3 multi-cast relay streams. The streams can be received using Real Me-
dia player or QuickTime player. Mbone tools, VIC for video and RAT for
audio, are available as open source and binaries for Solaris, SunOS4, Irix 6.2,
Linux, FreeBSD, and Windows. Macintosh users can use Coolstream, a QuickTime streaming file server that allows QuickTime files to be uploaded to connected workstations in streaming mode. Unfortunately, the Mbone tools
are not currently maintained, so every day they support fewer audio and video
devices. This is one reason why these technologies were not chosen. However
the main reason was simply that developing a conferencing application from
these tools was considered to be “re-inventing the wheel” and not good use of
available time.
• Microsoft NetMeeting : Provides video and audio conferencing, chat, file trans-
fer, program sharing, remote desktop sharing, security, and a whiteboard. However, it is PC-only.
• VRVS : Provides public or private chat, is cross-platform, has desktop sharing, allows you to pop up web pages on others' desktops, holds conferences in booked meeting rooms, is multi-point, and supports Windows, Linux, Mac OS X with iSight, and UNIX. It also has a voice-switched view mode, which by default shows video of the participant who is speaking. One potential drawback is
that it relies on the Mbone tools, which are currently not maintained. For some
reason, we could not get VRVS to work, although we tried everything suggested in the documentation and by email.
• iVisit 3.4.3 : Multi-point; supports chat, web co-browsing, and sharing of files, pictures,
1. One for video and one for audio.
2. One for video and one for audio.
3. One for video and one for audio.
videos, music, and PowerPoint presentations; and runs on Windows and Mac.
Experiments with iVisit went very well.
• iChat AV : Easy to use, full screen, and the best quality of all applications
tested. With the Tiger operating system you can video chat with three others or audio chat with nine others. It is compatible with AIM 5.5/5.9 or later, which gives it cross-platform capabilities.
Currently the Robotable is using iChat AV because it provides the best quality and
is robust and easy to use. At this time there is no need to video chat with more than
three other participants but, should the need arise, there are other applications that
can support more.
It would be desirable to have software that allows an instructor to observe the
desktops of one or more remote learners, to control a remote desktop, or to distribute
software. These features are provided by Apple Remote Desktop (ARD). ARD, how-
ever, is not cross-platform and requires static IP addresses to connect to computers
outside the local network. One option might be to use Virtual Network Computing
(VNC), which is a cross platform solution that allows one computer to view and in-
teract with another computer anywhere on the Internet. In an educational context it
can allow a distributed group of students simultaneously to view a computer screen
being manipulated by an instructor, or to allow the instructor to take control of the
students’ computers to provide assistance.
5.2 Whiteboard
Some video conferencing applications include a shared whiteboard, however these are
generally proprietary and not customizable. We would like Robotable’s whiteboard
to be an integral part of the workspace with connections to other applications such
as Robolab™.
The first Robotable whiteboard was programmed in Robolab. It was very basic
and provided freehand drawing without options to undo, move, or delete. It used
LabVIEW’s Open Application Reference VI to connect to a specified IP address, and
then Open VI Reference to draw to the front panel picture control on the correspond-
ing whiteboard program on the remote machine. This required the remote machine
to have a static IP address, so it could not cope with firewalls or network address
translation (NAT).
Later versions improved on the Internet connection method, the number of draw-
ing features, the interface and usability. The current connection method allows one
of the machines to be behind a firewall, although the other machine must have a
static IP address. The machine with the static IP address begins “listening” prior
to the other machine “calling”. The machine that is calling specifies the IP address
of the machine that is listening. This opens up a “pipe” through the firewall that is
used for the remainder of the session. If the connection is terminated for any reason
the applications must be restarted. Since the Listener must start before the Caller,
it is convenient to use video or audio conferencing to arrange the connection. The
reader may refer to the screenshot shown in Figure 5.1 as the features are explained.
A list of layers is in the top left corner and a new layer is created for each new object
added to the whiteboard. Objects can be deleted by highlighting them4 in this list
and clicking the Delete Layer button. Beneath the list of layers is where the current
whiteboard can be saved5 as a page, new pages can be added, and existing pages can
be reviewed. The Clear Whiteboard button deletes all layers. The toolbar along the
top allows the user to:
• snap an image from a camera,
• import an image of the diagram of a Robolab VI,
4. In Figure 5.1, layer three is highlighted; new layers are added to the top of the list.
5. Pages from a whiteboard are saved as JPG files.
Figure 5.1: Robotable whiteboard (March 2005).
• import an image from disk,
• begin a new drawing,
• edit pen parameters - style (solid, dashed, etc.) and width,
• choose colors - black, blue, green, or red,
• add text,
• edit font - typeface, size, text orientation, bold, italics, and underline,
• drag - the default mode is to draw, so if you wish to move a drawing you need to hit the drag button first.
There are a number of options for optimizing this version, the main one being to
pre-draw lower layers. Since Ben’s Internet Server provides mechanisms for efficiently
managing shared data, the next version of the whiteboard will use the Internet Server to handle those tasks. This will allow multiple people to share a whiteboard simultaneously, whereas the current version can accommodate just two participants. Since the
development of the Internet Server occurred in parallel with this work, integration
and testing of the whiteboard and the Server will be a part of future work.
5.3 Activities
The eBook style activity, mentioned in Chapter 3, was soon superseded by the idea
of an electronic activity card. A more recent approach has been to use web-based
activity cards. The focus of this section is on the electronic activity cards.
5.3.1 Electronic Activity Cards
Inspired by a standard activity card, an electronic activity card can include audio,
video, images, links to web pages, and interactive content. The prototype shown in Figure 5.2 is a learning activity about the “Factors that Influence Climate”. This
activity included interactive 3D models of the sun and earth to explain the seasons,
and a Robotable application that allowed users to create their own continent. The
virtual sun and earth were implemented using C and the OpenGL graphics library.
User controls were added with GLUI6, the GLUT7 based C++ user interface library.
GLUI, GLUT and OpenGL are available on Windows and Macintosh so this is po-
tentially cross-platform, although a Mac version was not produced. The reason that
the 3D content was coded using OpenGL is that LabVIEW (hence Robolab) does
not support 3D hardware accelerated graphics. This is the motivation for Ben’s work
with the Ogre 3D rendering engine.
When creating electronic activity cards, key issues concern time and skills. It is
6OpenGL User Interface.7OpenGL Utility Toolkit.
Figure 5.2: Electronic activity cards enable rich media content.
not reasonable for an activity to take a week to make and to require someone who is an experienced graphics programmer on both platforms. Ideally, content for
the Robotable can be created by teachers and volunteers who may have little or no
programming experience. In this way they can share their work and contribute to
the knowledge base. If having special skills were a prerequisite, too few people would
be able to make activities, and accumulating an activity library would take too long.
Also, the replacement and updating of activities would be a slow process, which would
make the library slow to adapt to changing needs. This criticism applies to this
method of adding 3D content, and also to applications like the Continent Creator.
Activity prototype: Continent Creator
The Continent Creator is intended to be run on the tabletop using the electronic pen
as an input device. It begins with the whole tabletop as a blue sea. Then, as the pen
is moved over an area, the land is gradually raised above the surface of the water.
Figure 5.3: Greyscale image plus lookup table equals terrain.
The more the pen is used over an area, the higher the land becomes, until mountains
are formed. The pen can also be used to remove land to form lakes, valleys, and
rivers.
Actually, moving the pen over an area just changes the intensity of that part of
an 8-bit greyscale image. When the image is displayed with a color lookup table, it
looks like terrain (see Figure 5.3). After learning about the factors that affect climate,
the activity asks learners to create their own continent and add at least two mountain
ranges, at least two rivers, global wind pattern arrows, and six cities: one interior,
one coastal, one high altitude, one at sea-level, one windward of a mountain, and one
leeward of a mountain. The final task is to predict temperature and precipitation for
each city and explain the reasons why.
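The greyscale-plus-lookup-table mechanism can be sketched in a few lines. This is only an illustration of the idea in Figure 5.3; the sea level, color palette, pen radius, and height increment below are hypothetical values, not those of the actual Continent Creator:

```python
# Sketch of the Continent Creator mechanism: pen strokes edit an 8-bit
# greyscale heightmap, and a color lookup table renders it as terrain.
SEA_LEVEL = 64  # hypothetical threshold separating sea from land

def color_lookup(height):
    """Map an 8-bit height value to an RGB color (hypothetical palette)."""
    if height < SEA_LEVEL:
        return (0, 64, 200)      # blue sea
    if height < 160:
        return (40, 160, 40)     # green lowlands
    if height < 224:
        return (140, 110, 70)    # brown highlands
    return (255, 255, 255)       # white peaks

def apply_pen(heightmap, cx, cy, radius=3, delta=8):
    """Raise (delta > 0) or lower (delta < 0) the land around the pen tip."""
    for y in range(cy - radius, cy + radius + 1):
        for x in range(cx - radius, cx + radius + 1):
            if 0 <= y < len(heightmap) and 0 <= x < len(heightmap[0]):
                # Clamp to the 8-bit range so repeated strokes saturate.
                heightmap[y][x] = max(0, min(255, heightmap[y][x] + delta))

# Start with an all-sea tabletop and raise a small island with repeated passes.
hmap = [[0] * 32 for _ in range(24)]
for _ in range(12):
    apply_pen(hmap, 16, 12)
```

Repeated passes of the pen over the same area accumulate height, which is what lets mountains grow out of lowlands; a negative delta carves lakes and rivers the same way.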
Activity prototype: Fable Maker
This activity was produced to show that Robolab can be used for subjects other
than engineering, science and mathematics. The task is to adapt a well known fable
or to create an entirely new story. Preparation requires reading some fables and
discussing the idea of a “moral” and how it works as a theme in a story. Other
elements of writing such as characterization and plot should also be discussed. A
(a) Title with credits. (b) Story in images and text.
(c) The moral of the fable.
Figure 5.4: Viewing pages of a Fable.
moral is chosen as the focus of the fable, and planning is essential to tell the story
in 6 to 10 pages. Thought should be given to characterization and care taken to
make the fable readable and interesting. The story can be illustrated by constructing
scenes from Lego and other materials, and snapping the scene with a webcam. The
application, FableMaker, takes the user through the steps of producing the fable. The
steps include making the title, credits, text, an illustration for each page, and the
moral. When finished, the Fable Viewer presents the result as an old style book (see
Figure 5.4).
Activity prototype: Habitat
This activity was produced to investigate more complex interactions with the table.
The idea is that your robot is a creature exploring its habitat (the tabletop). Initially
you choose your creature. The habitat is populated with other creatures and objects,
some of which are food and some are not, depending on the creature you have chosen.
The robot is programmed to walk in a random manner, searching for food. There
is a marker attached to your creature which is tracked by the camera. When the
position of your creature is compared with objects in the habitat and is found to be
at some food, a message is sent to halt the creature. This message is sent in direct
mode from a Lego tower suspended overhead. While the creature is stopped at the
food, the amount of food gradually decreases until it is gone. The creature is then
sent a message to begin searching for more food. The learning is about ecosystems
and creatures that inhabit them, food chains, and how to program your creature to
find the most food.
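The halt-and-search message cycle described above can be sketched as follows. The function and message names, proximity threshold, and food amounts are hypothetical, and the direct-mode message from the overhead Lego tower is replaced here by a simple callback:

```python
import math

EAT_RADIUS = 0.05   # hypothetical proximity threshold in normalized units
EAT_RATE = 5        # hypothetical amount of food consumed per update

def update_habitat(creature_pos, food_items, send_message):
    """One pass of the habitat loop: if the tracked creature has reached food,
    halt it and consume some food; once the food is gone, resume searching."""
    for food in food_items:
        dx = creature_pos[0] - food["pos"][0]
        dy = creature_pos[1] - food["pos"][1]
        if food["amount"] > 0 and math.hypot(dx, dy) < EAT_RADIUS:
            send_message("halt")        # stand-in for the tower's direct-mode message
            food["amount"] = max(0, food["amount"] - EAT_RATE)
            if food["amount"] == 0:
                send_message("search")  # food exhausted: begin searching again
            return
```

Calling this with each new tracked position drives the halt, eat, and resume-searching cycle for whichever food item the creature has wandered onto.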
5.3.2 Web-based Activity Cards
The focus is currently on web-based activity cards. These have the advantage of
being held at a central location (a server), so they are easy to update and maintain.
They can be coded to work with the widest possible range of browsers to ensure
consistent performance across platforms. Since this work has only recently begun, it
will simply be mentioned here and will be the focus of future work.
5.4 Calibration
Since all the markers appear on a plane (the tabletop) and there is no visible radial
distortion, it is reasonable to consider calibration as a 2D problem. This involves
mapping the 2D camera image coordinates to the 2D tabletop coordinates, which
requires specifying the coordinate systems.
Defining coordinate systems
The origin for the tabletop coordinates can be specified arbitrarily and it is convenient
to consider it as being the left-top corner of the projection (see Figure 5.5(a)), since it
is the projected image that will be constructed from these coordinates. The origin for
the camera image is simply the left-top corner of the image, with x-values increasing
to the right and y-values increasing towards the bottom. Figure 5.5(b) shows a
(a) Defining a reference for the projection. (b) Construction for finding normalized coordinates.
Figure 5.5: Tracking based on ARToolkit.
schematic of a camera view. Viewed through the mirror from under the table, the
projection area appears horizontally flipped8. This means the point P in Figure
5.5(b) corresponds to the left-top in Figure 5.5(a). The entire projected area and a
margin around it must be visible from the camera. The margin is required because
it is necessary to recognize markers whose centers may be inside the projected area
although parts of the marker extend beyond it.
In our case, the goal of calibration is to derive normalized projection coordinates
and orientation vectors for each of the markers seen in the camera image. Using
normalized coordinates allows the resolution of the projector to be altered without
requiring modification of the code. For example, if a marker’s position is determined
to be (0.5, 0.75) in normalized coordinates and the resolution of the projector is 800
× 600, then the virtual object will appear at pixel position (400, 450) in the projected
image.
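The conversion from normalized coordinates to projector pixels amounts to one multiply per axis; a minimal sketch of the idea (the function name is hypothetical):

```python
def to_pixels(norm_pos, resolution):
    """Convert a normalized tabletop position to projector pixel coordinates."""
    nx, ny = norm_pos
    width, height = resolution
    return (round(nx * width), round(ny * height))

# The worked example from the text: (0.5, 0.75) at 800 x 600 gives (400, 450).
assert to_pixels((0.5, 0.75), (800, 600)) == (400, 450)
# The same normalized position needs no code change at a different resolution.
assert to_pixels((0.5, 0.75), (1024, 768)) == (512, 576)
```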
To improve the response of the system it is important to make the transformation
from one set of coordinates to the other as fast as possible. For this reason
the calibration routine calculates a set of lookup tables which can be accessed by
the application while it is running. This way the normalized projection coordinates
8Of course, this depends on the position and orientation of the camera.
can be obtained by indexing the lookup table using the x and y pixel positions as
subscripts.
The lookup tables are computed by iterating along each row of the camera image
and calculating the normalized x and y values for each pixel position. The lookup
table consists of three 2D arrays of 8-bit values, one for the x-parameter, one for the
y-parameter, and one to flag whether the given pixel is within the projection region.
The size of the lookup tables is determined by the size of a video frame from the
camera, which in our case is 320 × 240. Accuracy will be improved in future versions
by using floating point, instead of 8-bit, values for lookup tables.
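The accuracy limit of the 8-bit entries can be estimated with a short sketch (the helper names are hypothetical):

```python
def quantize(value):
    """Store a normalized coordinate (0.0 to 1.0) as an 8-bit table entry."""
    return round(value * 255)

def dequantize(byte):
    """Recover an approximate normalized coordinate from the 8-bit entry."""
    return byte / 255

# Worst-case rounding error of an 8-bit entry is half a step, 1/510 (about 0.2%).
worst = max(abs(dequantize(quantize(v / 1000)) - v / 1000) for v in range(1001))
# At an 800-pixel projection width this is more than a pixel of error, which
# is why floating point lookup tables are planned for future versions.
pixel_error = worst * 800
```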
Calculating x-parameters
Referring to Figure 5.5(b) the task is to find the normalized x coordinate for the
marker whose center is at the head of vector C. The vectors A and B can be written
as:
A = P + x(Q − P),    B = S + x(R − S)
The value x is found so that the vectors C and D are collinear. This can be
done by considering the value of the 2D cross product of C and D. These vectors are
collinear when their 2D cross product is zero. This value has opposite sign depending
on which side of the line, from the head of A to the head of B, the marker lies.
This makes the problem suitable for solving using the Bisection Method. Iterations
are stopped when the value of the 2D cross product becomes smaller than 10⁻⁴.
Experiments have shown this can be achieved with an average of 22 iterations for a
320 × 240 image.
Calculating the y-parameter
Once the x-value is known and vectors C and D are collinear, the normalized y value
can be found from the ratio of the magnitudes of vectors C and C + D.
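The x and y calculations can be sketched together. This sketch assumes, consistently with the collinearity and ratio conditions above, that C runs from the head of A to the marker center M and D from M to the head of B; points are 2D tuples in camera coordinates:

```python
def normalize_point(P, Q, R, S, M, tol=1e-4, max_iter=100):
    """Normalized (x, y) of marker center M inside camera quadrilateral PQRS.

    A = P + x(Q - P) and B = S + x(R - S); x is found by bisection until the
    2D cross product of C = M - A and D = B - M is near zero (C and D
    collinear), then y is the ratio |C| / |C + D|.
    """
    def lerp(U, V, t):
        return (U[0] + t * (V[0] - U[0]), U[1] + t * (V[1] - U[1]))

    def cross_at(x):
        A, B = lerp(P, Q, x), lerp(S, R, x)
        C = (M[0] - A[0], M[1] - A[1])
        D = (B[0] - M[0], B[1] - M[1])
        return C[0] * D[1] - C[1] * D[0]

    lo, hi = 0.0, 1.0
    mid = 0.5
    for _ in range(max_iter):
        mid = (lo + hi) / 2
        c = cross_at(mid)
        if abs(c) < tol:
            break
        if c * cross_at(lo) > 0:   # same sign as at lo: root is above mid
            lo = mid
        else:
            hi = mid
    A, B = lerp(P, Q, mid), lerp(S, R, mid)
    c_len = ((M[0] - A[0]) ** 2 + (M[1] - A[1]) ** 2) ** 0.5
    ab_len = ((B[0] - A[0]) ** 2 + (B[1] - A[1]) ** 2) ** 0.5
    return mid, c_len / ab_len     # y from the ratio |C| / |C + D|
```

For a unit square the result reduces to the marker's own coordinates, and for a skewed quadrilateral the bisection recovers the same normalized values, which is the point of the construction.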
Figure 5.6: Finding position and orientation from the marker’s ordered vertices.
Determining the in-bounds flag
The in-bounds flag is true if both the x and y parameters lie within the projection
region, which is within the quadrilateral PQRS in Figure 5.5(b). If the resulting x
and y parameters do not both lie between 0 and 1.0, inclusive, then the image pixel
is out of bounds.
Finding Position and Orientation
The position and orientation are determined from the ordered list of the marker's
vertices returned by Carl's image processing code. Referring to Figure 5.6, the center of
the marker’s front edge can be found by averaging vertices 0 and 1, while the center
of the back edge can be found by averaging vertices 3 and 2. The corresponding
normalized coordinates, F and B, can be found by accessing the lookup tables. The
direction vector is then F − B, which should be normalized. The average of F
and B gives the marker’s center in normalized coordinates.
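The construction in Figure 5.6 reduces to a few vector averages; a sketch, with the vertex layout as described above:

```python
def marker_pose(vertices):
    """Marker center and direction from four ordered, normalized vertices:
    vertices 0 and 1 bound the front edge, vertices 3 and 2 the back edge."""
    v0, v1, v2, v3 = vertices
    F = ((v0[0] + v1[0]) / 2, (v0[1] + v1[1]) / 2)   # center of front edge
    B = ((v3[0] + v2[0]) / 2, (v3[1] + v2[1]) / 2)   # center of back edge
    dx, dy = F[0] - B[0], F[1] - B[1]
    length = (dx * dx + dy * dy) ** 0.5
    direction = (dx / length, dy / length)           # normalized F - B
    center = ((F[0] + B[0]) / 2, (F[1] + B[1]) / 2)  # marker center
    return center, direction
```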
Chapter 6
Testing and Evaluation
6.1 Experiment Description
Twenty undergraduate and graduate students were recruited from Tufts University.
Age, gender, and ethnic background were not part of the selection process.
Purpose and rationale
The primary purpose of this experiment was to evaluate one aspect of the Robotable
as a learning tool. The experiment compared the involvement and performance of a
subject in an activity that was presented in two different ways. One way presented
the activity in the form of a worksheet, and the other way delivered the same activity
as web pages augmented by interactions with the Robotable. It is hoped that the
results will indicate the success of design goals relating to the provision of an engaging
environment, and also guide future development of the Robotable.
The constructivist learning theory advocates that learning is an active process in
which meaning is constructed from experience. Consequently, one of the design goals
of Robotable has been to facilitate this kind of learning by providing a compelling
environment that will motivate the learner to action. In an age of technology-literate
children, digital media can engage children who lose interest in traditional instruction
methods. The purpose of the experiment is to gain some insight into how successful
we have been in achieving this design goal.
Task
Subjects were asked to program a Lego car to run for 2, 4, and 6 seconds, and measure
the corresponding distances traveled. They recorded the data in the table provided
and generated a line graph based on the data. Using the graph, they then determined
the speed of the car. A target1 was set at a random distance from the start line and
subjects used the car’s calculated speed to predict the time required to run the car
as close as possible to the target without knocking it over. Successful completion of
the activity was judged to be when the car stopped within one inch of the target.
Subjects were asked to “think aloud” during the activity to gain an insight into
their cognitive processes. A video recording of the task space was made for the purpose
of capturing the think-aloud and linking those comments to the corresponding actions.
The videos were reviewed to identify issues relating to performance of the activity.
At the end of the activities, subjects were asked to complete a short questionnaire
on their general experience with computers, and their subjective evaluations and
impressions of the two methods of presenting an activity. The entire experiment took
approximately 20 to 30 minutes.
Apparatus
Lego Car : Two pre-built Lego cars were provided. Each car was geared to have a
different speed. Subjects used the slower car for the shorter distances on the Robot-
able, and the faster car for the worksheet activity which is run on the floor. Another
important reason for using cars with different speeds is to ensure that subjects do not
simply transfer the numbers from one trial to the next.
1 A Lego person.
Computer : A computer was made available for programming the car using ROBO-
LAB’s simplest level, Pilot 1. All subjects were given a short tutorial (a couple of
minutes) and practice with Pilot 1 before the experiment began. This enabled them
to learn and practice everything they needed. It was proposed that the input device for
the tabletop would be the Mimio™ electronic whiteboard pen but, due to problems
with calibration, this idea was abandoned part way through the first trial.
Miscellaneous : Two tape measures were provided, one for measuring on the table-
top and one for measuring on the floor. For the case where the activity is presented in
worksheet form, writing instruments and graph paper were provided so that subjects
could record the data in tables and plot a graph.
Procedure
The experiment was explained in detail to the subjects, who were given time to
ask questions before starting. The experiment was counter-balanced to avoid any
confounding effects, such as learning, caused by the order of presentation. Of the 20
subjects, 10 were presented with the Robotable first and 10 were presented with the
worksheet first. Participants were informed that they would not be offered help, but
that requests for help would be answered if required. This was to allow the number
of requests for assistance to be included as a performance measure.
Variables
1. Objective variables:
• Time to task completion.
• Successful task completion: This turned out to be irrelevant since all sub-
jects completed the task.
• Number of times the investigator is consulted: This also turned out to be
irrelevant. Each person consulted me twice to inform me that they were
ready for the target to be set. Apart from that there were two occasions
where subjects, who were not familiar with Macintosh computers, lost
one of their windows behind another and didn’t know how to get it back.
These were issues that I considered to be peripheral to the relationship of
the subject to the content of the activity.
2. Involvement is derived from answers to the subjective questions in the question-
naire.
• Satisfaction
• Stimulation
• Ease of use
3. Concurrent verbal protocol: used to detect issues not revealed through obser-
vation.
6.2 Results
Attributes
There were 11 female subjects and 9 male subjects, ranging in age from 18 to 30.
Questions 1 to 3 asked the subjects to rate their comfort with computers, their ex-
perience with Robolab, and experience with spreadsheets such as Microsoft Excel.
Five point rating scales were used for these attributes. Graphs of the results, which
appear in Appendix A, show that most subjects judged themselves to be very com-
fortable with computers. Experience with ROBOLAB™ was more evenly distributed.
Four subjects reported having no experience at all, while six reported being very ex-
perienced. With regard to the question on spreadsheets, all but one of the subjects
judged themselves to be either experienced or very experienced.
Figure 6.1: Frustrating to satisfying.
Subjective Data
Questions 4 to 6 asked subjects for a comparison of the two ways of presenting an
activity by rating their impressions for each of the following items. Question 4 (see
Figure 6.1) rated from frustrating to satisfying, question 5 (see Figure 6.2) rated from
dull to stimulating, and question 6 (see Figure 6.3) rated from difficult to easy. These
three questions used a nine-point rating scale.
These graphs appear to show that the Robotable version of the activity was consid-
ered to be more satisfying and more stimulating than the worksheet version. However,
these graphs do not consider the paired nature of the data, so the data will be tested
in more depth in Section 6.3.
Subjects’ Comments
Question 7 gave subjects an opportunity to express any other comments, or opinions.
These responses are given in Appendix A. To summarize, approximately one half of
these comments were positive or complimentary, while the other
half reflected problems. Many subjects said that the Robotable environment was
Figure 6.2: Dull to stimulating.
Figure 6.3: Difficult to easy.
more convenient because the activity could be done in one place, without the need
to bring additional equipment. Some said Robotable was more precise. This may
be due to the reduced error that is a consequence of the slower speeds and shorter
distances required on the Robotable. Some enjoyed the automatic plotting of data,
saying this was more accurate and time saving. Others did not like things to be
automated and would rather have maintained control and done things by hand. A
couple of comments noted that tick marks on Robotable’s graph were not clear, and
their intervals could change depending on the range of data graphed. Users found
it annoying to be required to switch input devices, from electronic pen to mouse to
keyboard. Other comments related to unfamiliarity with the environment, such as
locating the mouse pointer with dual monitors (the vertical monitor and the table
screen), or recovering hidden windows on the Macintosh platform.
6.3 Analysis
There are two main parts to the analysis: an analysis of variance (ANOVA) and a
multiple range test on the times to task completion, and a Wilcoxon signed rank
test to find significant differences in the subjective data from questions 4, 5, and 6.
6.3.1 Times to Task Completion
The raw data is graphed with means and bars to indicate the 2σ (95%) range. The
first activities are compared in Figure 6.4, and the second activities are compared in
Figure 6.5.
It is difficult to tell whether the differences between the times to completion for
the two first activities, or the two second activities, are significant so, to establish
this, an analysis of variance was done.
Figure 6.4: Comparing data for the time to completion of the first activities.
Figure 6.5: Comparing data for the time to completion of the second activities.
SUMMARY       Robotable    Worksheet    Total
Group 1
  Count       10           10           20
  Sum         118.4168     69.2500      187.6668
  Average     11.8417      6.9250       9.3833
  Variance    8.0501       2.5809       11.3972
  SE          0.8972       0.5080
Group 2
  Count       10           10           20
  Sum         93.5834      124.4169     218.0003
  Average     9.3583       12.4417      10.9000
  Variance    5.4167       13.6961      11.5553
  SE          0.7360       1.1703
Total
  Count       20           20
  Sum         212.0002     193.6669
  Average     10.6000      9.6833
  Variance    8.0019       15.7191
Table 6.1: Summary of statistics of the times to task completion.
ANOVA
Statistical significance was investigated by analyzing the data in Microsoft Excel using
Tools→ Data Analysis→ ANOVA: Two-Factor With Replication. There are four sets
of times with ten values each; the Robotable activity done first and the Worksheet
activity done second, and the Robotable activity done second and the Worksheet
activity done first.
Table 6.1 gives the summary statistics for the four sets of data; two methods
of delivering the activities, and two orders in which the activities were carried out.
Group 1 refers to the Robotable first, followed by the Worksheet. Group 2 refers to
the Worksheet first, followed by the Robotable. The mean times for the four groups
are graphed in Figure 6.6. The error bars are set at ±SE (standard error of the mean)
for each data point. That is, there is a 68% chance that the population mean for each
dataset lies within this region. The x-axis represents the method by which the activity
was delivered. The chronological order of activities in each case is represented by the
direction of the arrow.

ANOVA
Source of Variation    SS          df    MS          F         P-value      F-crit
Sample                  23.0030     1     23.0030     3.0935   0.0871       4.1132
Columns                  8.4027     1      8.4027     1.1300   0.2949       4.1132
Interaction            160.0012     1    160.0012    21.5173   4.5072E-05   4.1132
Within                 267.6940    36      7.4359
Total                  459.1010    39

Table 6.2: ANOVA: Two-factor with replication for times to task completion.
Table 6.2 gives the analysis of variance for p = 0.05. The P -values in this table
exceed p = 0.05 in the rows labeled Sample and Columns, which means there is no
overall difference between times for Group 1 and Group 2, and there is no overall
difference between times for the Robotable and the Worksheet. However, the Inter-
action row, with a P-value of 4.5072E−05, indicates there are highly significant effects due
to specific (order, method) pairs over and above differences based on order alone or
method alone.
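The sums of squares in Table 6.2 can be reproduced from the summary statistics in Table 6.1; a sketch of that computation, using only the cell means and variances from those tables:

```python
# Reproducing Table 6.2 from the summary statistics in Table 6.1
# (cell means and variances, n = 10 observations per cell).
n = 10
means = {("G1", "Rb"): 11.8417, ("G1", "Wk"): 6.9250,
         ("G2", "Rb"): 9.3583, ("G2", "Wk"): 12.4417}
variances = [8.0501, 2.5809, 5.4167, 13.6961]

grand = sum(means.values()) / 4
row = {g: (means[(g, "Rb")] + means[(g, "Wk")]) / 2 for g in ("G1", "G2")}
col = {m: (means[("G1", m)] + means[("G2", m)]) / 2 for m in ("Rb", "Wk")}

ss_sample = 2 * n * sum((r - grand) ** 2 for r in row.values())    # order effect
ss_columns = 2 * n * sum((c - grand) ** 2 for c in col.values())   # method effect
ss_inter = n * sum((means[(g, m)] - row[g] - col[m] + grand) ** 2
                   for g in ("G1", "G2") for m in ("Rb", "Wk"))
ss_within = sum((n - 1) * v for v in variances)   # pooled from cell variances

ms_within = ss_within / 36             # 36 residual degrees of freedom
f_inter = (ss_inter / 1) / ms_within   # about 21.52, as in Table 6.2
```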
The two-factor ANOVA test says there is a significant interaction between average
times in the experiment as a whole, but it does not say which times differ from one
another. There are two ways of comparing individual sets of times that might provide
more information. One way is to calculate the least significant difference (LSD), and
another way is to use the multiple range test. The multiple range test is chosen because
it is more reliable and must satisfy stricter criteria.
Figure 6.6: Graph of mean times to completion.
                      Wk 2nd    Rb 2nd    Rb 1st    Wk 1st
          Means       6.9250    9.3583    11.8417   12.4417
Wk 1st    12.4417
Rb 1st    11.8417                                    0.6000
Rb 2nd     9.3583                         2.4833     3.0834
Wk 2nd     6.9250               2.4333    4.9167     5.5167
Table 6.3: Differences of means for comparing with Q(σd) values.
Multiple Range Test
To implement the multiple range test a table of differences between mean times is
built. The means are ranked from highest to lowest and the differences are calcu-
lated (see Table 6.3). These differences are then compared with calculated values for
Q(σd). If any are greater, then they are significant. To calculate the required values
for Q(σd), the first step is to find σd² and then σd.

σd² = 2 × residual mean square ÷ n
    = 2 × 7.4359 ÷ 10
    = 1.4872
σd = 1.2195
The residual mean square is actually a variance and is the MS value in the
“Within” row of the ANOVA table. The next step is to get Q-values from a ta-
ble using the number of datasets being compared as one index, and the degrees of
freedom for the residual mean square as the other index. The relevant part of the Q
table, sometimes referred to as the “studentized range” table, is given in Table 6.5.
Since there is no row for 36 degrees of freedom, the values for 30 degrees of freedom
are used, which constitutes an even stricter test. The required Q(σd) values are found
by multiplying the tabulated Q-values by σd (see Table 6.4).
Q(σd)2 = Q2 × σd = 2.89 × 1.2195 = 3.5244
Q(σd)3 = Q3 × σd = 3.48 × 1.2195 = 4.2439
Q(σd)4 = Q4 × σd = 3.84 × 1.2195 = 4.6829
Table 6.4: Calculating Q(σd).
          Number of Datasets
df         2       3       4
30        2.89    3.48    3.84
40        2.86    3.44    3.79
Table 6.5: Excerpt from a table of Q-values.
The only two differences of interest are between the Worksheet first (Wk 1st) and
the Robotable second (Rb 2nd), and the Robotable first (Rb 1st) and the Worksheet
second (Wk 2nd). From Table 6.3, the difference between Wk 1st and Rb 2nd is
3.0834 minutes. Since this is in column four of the data, Q(σd)4 = 4.6829 is used for
comparison. In this case the value of Q(σd)4 is not exceeded so the difference is not
significant. Again from Table 6.3, the difference between Rb 1st and Wk 2nd is 4.9167
minutes. Since this is in column three of the data, Q(σd)3 = 4.2439 is used, and it
is exceeded. Therefore, a significant difference in times to completion of the activity
exists when subjects did the Robotable first and the Worksheet second.
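The comparison logic of the multiple range test is small enough to sketch directly, using the residual mean square from Table 6.2 and the df = 30 row of the Q table:

```python
# Sketch of the multiple range comparison: a difference between ranked means
# is significant when it exceeds Q(sigma_d) for its column of Table 6.3.
residual_ms, n = 7.4359, 10
sigma_d = (2 * residual_ms / n) ** 0.5    # 1.2195

q_values = {2: 2.89, 3: 3.48, 4: 3.84}    # df = 30 row of the Q table

def significant(difference, num_datasets):
    """True when a difference of means exceeds its Q(sigma_d) threshold."""
    return difference > q_values[num_datasets] * sigma_d

rb1_vs_wk2 = significant(4.9167, 3)   # exceeds 4.2439: significant
wk1_vs_rb2 = significant(3.0834, 4)   # below 4.6829: not significant
```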
6.3.2 Subjective Data
Wilcoxon Signed Rank Test
The Wilcoxon Signed Rank Test is used to establish significance in the subjective
rating scales. The Wilcoxon test is appropriate because the subjective ratings are
ordinal, the scale intervals are not equal, and because each subject provides two rat-
ings, one for the Robotable and one for the worksheet, which makes them a matched
pair. The analysis was done in Excel and significance was found in one case; the data
Level of significance for a directional test        0.05     0.025    0.01     0.005    0.0005
Level of significance for a non-directional test    –        0.05     0.02     0.01     0.001
z critical                                          1.645    1.960    2.326    2.576    3.291
Table 6.6: Critical values of ±z.
comparing the two methods on a scale from Dull to Stimulating. For this data the
z-value was calculated to be z = 2.93. Using critical z-values from Table 6.6, this is
better than 99% significant for a directional test. The z-values for the other responses
were z = 0.72 for question 4, and z = 0.09 for question 6.
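A minimal normal-approximation implementation of the test might look like the sketch below. This is not the Excel procedure used for the analysis: it drops zero differences but does not average tied ranks or apply a tie correction.

```python
def wilcoxon_z(pairs):
    """z statistic for the Wilcoxon signed rank test via the normal
    approximation. Zero differences are dropped; tied ranks are not
    averaged, so this is only a sketch of the full procedure."""
    diffs = [a - b for a, b in pairs if a != b]
    n = len(diffs)
    ranked = sorted(diffs, key=abs)
    # Sum the ranks (1..n by absolute size) of the positive differences.
    w_plus = sum(rank for rank, d in enumerate(ranked, start=1) if d > 0)
    mean = n * (n + 1) / 4
    sd = (n * (n + 1) * (2 * n + 1) / 24) ** 0.5
    return (w_plus - mean) / sd
```

For example, ten hypothetical pairs in which every first rating exceeds its partner by a distinct margin give z ≈ 2.8, comparable in size to the Dull-to-Stimulating result above.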
6.4 Discussion
There are two key results from this experiment:
1. Statistically significant effects due to interaction were found. That is, specific
(order, method) pairs were found to be different over and above differences based
on order alone or on method alone. Further analysis with the multiple range
test, which applies stricter criteria, revealed that the most significant improve-
ment in performance was for the case where the Robotable activity was done
first.
2. When analyzing subjective impressions of the two methods of delivering an
activity, the Robotable was found to be more stimulating with high statistical
significance.
If the effectiveness of the learning experience is judged by the amount of improve-
ment in times to completion, then one could say the Robotable was a more effective
learning experience. This is because the Robotable activity followed by the Worksheet
activity resulted in an average improvement of 4.9 minutes. An average improvement
of 3.1 minutes occurred when the Worksheet activity was followed by the Robotable
activity. This observation is verified statistically by the multiple range test.
In speculating on the reasons for this result, another factor may be relevant: the
novelty of the Robotable environment. It is reasonable to
suppose that a subject would proceed cautiously with an unfamiliar environment,
resulting in greater times to completion. This might explain why the mean time
for the 2nd Robotable activity is approximately 2.5 minutes greater than that for
the 2nd Worksheet activity. If the novelty factor was having an effect, one would also
expect the 1st Robotable activity to be proportionately longer than the 1st Worksheet
activity. However, the mean time for the 1st Robotable activity is slightly less than
that for the 1st Worksheet activity, which contradicts the supposed effect of novelty.
Recalling that the difference in mean times of the first activities, and the difference
in mean times of the second activities were not statistically significant, one should
not indulge in too much speculation. This topic could be explored in future tests.
The subjects were all university students, of both genders, ranging in age from
18 to 30 years old. This implies that any conclusions should apply only to the same
demographic. One of the intended uses of the Robotable is as an environment for
training teachers and professionals for outreach work to schools. It cannot be assumed
that all teachers and professionals have a similar age to the subjects in our study. It
would, however, be reasonable to assume that most professionals are experienced
with computers and technology. This assumption would not be reasonable for most
teachers. Therefore, it would be wrong to deduce that most teachers would have a
similarly positive attitude towards Robotable based on this study.
Chapter 7
Future Work
7.1 Hardware
7.1.1 Tabletop screen
Initially, the tabletop screen was implemented using a sheet of frosted glass because
it was recommended by staff at the New Zealand HITLab. Frosted glass works well
as a rear projection screen, and its hard surface is resistant to scratching and is easy
to clean. However, glass is also heavy and brittle, which reduces portability and, in
case it is broken, has possible safety issues.
For this reason adhesive vinyl film was investigated. It is possible to obtain this
material with optical properties at least as good as those of frosted glass. Additionally,
it is safe, lightweight1, cheap relative to glass, and far more portable. It is tough
and, although it will not break, it is more prone to minor surface damage such as
scratching.
The technology of commercial rear projection screens has advanced over recent
years and, although they are relatively expensive, they have the potential to greatly
1 Nearly all of the weight associated with adhesive vinyl is contributed by the surface it is attached to, rather than the vinyl itself.
improve the quality of the tabletop image. To be used as a table surface they would
need to be resilient enough to operate as a workbench as well as a screen. An ad-
vantage is that they are almost certain to be lighter and more durable than glass. It
would be worthwhile to keep a lookout for a commercial screen that might be suitable
for the Robotable. An attempt was made to salvage a screen from a discarded rear
projection TV, but such an item could not be found.
7.1.2 Mirror
Just as a frosted glass screen is heavy and breakable, so is a glass mirror. In the
setup at Tufts University, the optical properties of the mirror are not critical. This is
because Tuftl's almost-vertical projector arrangement does not have a problem with
a ghost image, which means a front-silvered mirror is not required. Therefore, it
may be possible to replace the glass mirror with, for example, something like mirrored
lucite. A mirror made of this type of material would be tough, portable, and safe.
7.1.3 Tangible Devices and Augmented Reality
The greatest advantage to be gained from using tangible interfaces would be to elim-
inate the functional seam caused by a separation of the display space and the task
space. Space-multiplexed input devices can be more intuitive to use because they
are designed specifically for a single function and their physical shape indicates this.
However, the Robotable needs to be a flexible environment, which can make a single
purpose tangible input device unsuitable. Therefore, incorporating tangible input
devices needs to be done carefully. Perhaps, as more content is created for use on
the Robotable, a common use for certain tangible devices will become apparent and
their inclusion will be warranted. At this time, no tangible input devices are used
because a real need has not yet been established. Tangible user interfaces and aug-
mented reality applications are still new, and it seems that most applications of these
technologies are labor intensive to produce, require skilled people to produce them,
and generally the code is not particularly re-usable. Hence, every new activity would
require a significant investment of time from skilled people. Augmented reality con-
tent is very compelling and if a suitable use can be found for the Robotable, it should
be included. It was mentioned earlier that tangible-augmented reality requires head
mounted displays. Unfortunately, it is not feasible at this time to include a set of
head mounted displays with the Robotable.
7.1.4 Portability
Using glass for the tabletop screen or the mirror reduces portability of the table since
glass is brittle and heavy. One issue that arose during preliminary testing serves to
motivate the development of a more portable Robotable. Because the Robotable was
constructed in Tuftl and is not easily moved, the experiments were also conducted
in Tuftl. This was not an ideal environment for a study of this nature. Other stu-
dents share the laboratory and, although they were considerate, there were difficulties
because of this. One difficulty was that some subjects felt uncomfortable doing the
think-aloud in this environment. These subjects did not verbalize much despite being
encouraged. Having other students working nearby also raised ambient noise levels.
7.2 Software
7.2.1 Development of Instructional Content
The development of the Robotable is a work in progress, and instructional content
is currently being produced. The focus is on Internet-based activities that will be
connected to an existing knowledge base. Besides activities, there will be access to
technical support and to building and programming help. Ways of organizing and
delivering these activities, such as searching the database by grade level or by
subject, are also being investigated. In addition, work should continue
on creating, prototyping, and testing new concepts in instructional content for the
Robotable.
7.2.2 Integration and Testing of the Robotable Internet Server
Since the Robotable’s Internet Server was developed in parallel with other software,
integration and testing remain to be done. The whiteboard and the existing online
activities should be converted to run on the Internet Server and tested among at least
three tables, since all testing to date has been done with just two tables.
Currently, the ARToolkit-based image-processing code has not been compiled for
use on a Macintosh. Also, Ben suspects that one reason for the poor update rates
seen with optical tracking is that, on the Windows platform, the DLL has been
compiled in Debug Mode; compiling in Release Mode reportedly improves
performance greatly.
Apart from being tested individually, the Robotable’s software components also
need to be tested together by running multi-user online workshops and
activities. An application may perform splendidly on its own, but when sharing the
CPU with many other processes its performance may seriously suffer. In the past,
this has happened with just two or three separate applications such as trying to use
the whiteboard while video conferencing and optical tracking. Managing access to the
CPU may yet turn out to be very important. In this respect, a product like Maratech
has an advantage because all of the essential distance education tools are provided in
a single application. This enables the application to manage those tools sensibly.
Appendix A
Data and Analysis
Attributes
Age and gender data are given in Figure A.1. Figure A.2 gives comfort with computers,
Figure A.3 gives experience with ROBOLAB™, and Figure A.4 indicates subjects’
experience with spreadsheets.
Subjects’ comments
General comments
• The task was simple enough, the Robotable didn’t add anything.
• Robotable is better because you can stay in one place.
• Robotable does not require you to find pen and paper.
• Robotable reduces the need for additional equipment.
• Robotable is more convenient.
• Robotable was more exact.
• Data entry on Robotable was easier and more accurate and allowed you to focus
on what the data means.
Figure A.1: The age and gender of participants in the study.
Figure A.2: Comfort with computers.
Figure A.3: Experience with ROBOLAB™.
Figure A.4: Experience with spreadsheets.
• Robotable was more precise.
• Robotable was easier because everything was right there.
• Not sure how to get the mouse from the table to the computer monitor.
• The Lego car is traveling in curves, which isn’t a problem on the Robotable.
• Want pop-up help on Robotable screen objects, wasn’t sure what things did.
• Should project a ruler on Robotable, then a click where the car stops could drop
a perpendicular line to the ruler.
Graphing comments
• Robotable was easier with automatic plotting.
• Did not like automatic plotting, would rather graph manually.
• The worksheet version allows you to add notes, rule lines, and feel more in
control.
• Switching from electronic pen to mouse to keyboard is annoying.
• Robotable lines are difficult to see, tick marks are not accurate.
• Robotable’s connecting lines are good because you didn’t have to use equations
or rulers.
• Robotable’s automatic graphing is more accurate than by hand.
• Liked Robotable’s automatic graphing.
• Wanted a scratch pad on Robotable to do some calculations.
• Could do calculations more accurately by hand.
• When drawing lines, they can be adjusted many times on Robotable without
making a mess.
• As points are added on Robotable, the tick marks change which makes the
intervals confusing.
• Plotting by hand is more familiar but less exciting.
• Sick of plotting manually, so Robotable is good.
Wilcoxon’s Signed Rank Test
The procedure used for this analysis is given here, and the result is shown in Table
A.1. The table was initialized with three columns: Subject, Robotable, and Worksheet.
Subject contained the numbers 1 to 20, Robotable contained subjects’ ratings
for the Robotable, and Worksheet contained subjects’ ratings for the worksheet. The
worksheet rating was then subtracted from the Robotable rating to create a column
of differences labeled Diff. The absolute values of these differences were then put in
the fifth column, labeled ABS. All five columns were sorted by the absolute values
of the differences, ABS, in ascending order. Ranks were then assigned to the absolute
values of the differences in the following way. Ranking begins at the first non-zero
value. When two or more values are the same, each is assigned the average of the
ranks they would otherwise have received. Referring to Table A.1,
for example, rows 3 through 6 all have a value of 1. If these values were distinct, they
would take ranks 1 through 4; therefore the average rank, (1 + 2 + 3 + 4)/4 = 2.5,
is assigned. Finally, a column of signed ranks is created by copying the ranks to this
column and giving them the signs of the corresponding differences.
The number of signed ranks is ns/r = 18. The sum of the signed ranks is W = 135.
The distribution of possible values for W has a mean of zero and a standard deviation
given by σw = √(ns/r(ns/r + 1)(2ns/r + 1)/6), hence σw = √(18(18 + 1)(2 · 18 + 1)/6) = √2109 = 45.92.
The z-value can then be calculated, including a ±0.5 correction for continuity
(subtracted here since W > 0), using the formula z = (W − 0.5)/σw.
Therefore, z = (135 − 0.5)/45.92 = 2.93.
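The procedure above can be made concrete in code. The following is a minimal Python sketch (not part of the original analysis) that recomputes W, σw, and z from the paired ratings transcribed from Table A.1.

```python
import math

# Paired ratings (Dull to Stimulating) for the 20 subjects,
# transcribed from Table A.1 as subject: (Robotable, Worksheet).
ratings = {
    1: (7, 3), 2: (5, 8), 3: (7, 5), 4: (8, 3), 5: (4, 4),
    6: (8, 4), 7: (5, 2), 8: (7, 5), 9: (6, 5), 10: (5, 7),
    11: (8, 3), 12: (7, 6), 13: (8, 2), 14: (7, 4), 15: (8, 8),
    16: (8, 7), 17: (8, 4), 18: (7, 5), 19: (8, 7), 20: (7, 5),
}

# Differences; zero differences are dropped before ranking.
diffs = [r - w for r, w in ratings.values() if r != w]
n = len(diffs)  # number of signed ranks, ns/r

# Rank the absolute differences, averaging the ranks within each tie group.
abs_sorted = sorted(abs(d) for d in diffs)
rank_of = {}
i = 0
while i < n:
    j = i
    while j < n and abs_sorted[j] == abs_sorted[i]:
        j += 1
    rank_of[abs_sorted[i]] = (i + 1 + j) / 2  # average of ranks i+1 .. j
    i = j

# Signed ranks carry the sign of the difference; W is their sum.
W = sum(math.copysign(rank_of[abs(d)], d) for d in diffs)

# Under the null hypothesis, W has mean zero and this standard deviation.
sigma_w = math.sqrt(n * (n + 1) * (2 * n + 1) / 6)

# z with the 0.5 continuity correction (subtracted, since W > 0).
z = (W - 0.5) / sigma_w

print(n, W, round(sigma_w, 2), round(z, 2))  # 18 135.0 45.92 2.93
```

With the 18 non-zero differences this reproduces the values reported above: W = 135, σw ≈ 45.92, and z ≈ 2.93.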
Subject  Robotable  Worksheet  Diff  ABS  Rank  Signed rank
   5         4          4        0    0
  15         8          8        0    0
   9         6          5        1    1    2.5      2.5
  12         7          6        1    1    2.5      2.5
  16         8          7        1    1    2.5      2.5
  19         8          7        1    1    2.5      2.5
   3         7          5        2    2    7        7
   8         7          5        2    2    7        7
  10         5          7       -2    2    7       -7
  18         7          5        2    2    7        7
  20         7          5        2    2    7        7
   2         5          8       -3    3   11      -11
   7         5          2        3    3   11       11
  14         7          4        3    3   11       11
   1         7          3        4    4   14       14
   6         8          4        4    4   14       14
  17         8          4        4    4   14       14
   4         8          3        5    5   16.5     16.5
  11         8          3        5    5   16.5     16.5
  13         8          2        6    6   18       18
Table A.1: Wilcoxon signed rank test for data from the question rating the activities from Dull to Stimulating.