Visual scene displays - augcominc.com




August 2004, Volume 16, Number 2

Clinical News: Visual scene displays

Beginning Communicators: Penn State projects

Adults with Aphasia: Nebraska project

Other Applications of VSDs: Beyond Conversation (Children's Hospital Boston projects)

Prototype to Market

Resources & Select Articles and Chapters


Upfront

This issue is about visual scene displays (VSDs). VSDs look different from our typical AAC displays and are meant to be used in slightly different ways to support the communication process. Before reading further, you might be well advised to browse through the illustrations on the various pages of this issue, to get a quick sense of what the displays we'll be discussing actually look like.

------------------

Welcome back.

Visual scene displays (VSDs), like other types of AAC displays, may be used to enhance communication on either low-tech boards or high-tech devices. The VSDs described in this issue are meant primarily to address the needs of beginning communicators and individuals with significant cognitive and/or linguistic limitations. These groups are unserved or underserved by current AAC technologies.

VSDs are designed to provide a high level of contextual support and to enable communication partners to be active and supportive participants in the communication process. Take, for example, a VSD showing a photo of picnic tables and family members eating, drinking and having fun at a reunion. If used on a low-tech board, the photo provides a shared context for interactants to converse about topics related to the reunion. When used on an AAC device, spoken messages can be embedded under related elements in the digitized photo. These are known as "hot spots." Someone with aphasia may use a VSD to converse with a friend by touching hot spots to access speech output, e.g., "This is my brother Ben," or "Do you go to family reunions?" Or, a three-year-old child with cerebral palsy may tell how "Grandpa" ate "marshmallows" at the reunion by touching the related "hot spots" on her VSD.
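The hot-spot idea described above amounts to a simple mapping from regions of a photo to spoken messages, resolved by hit-testing the touch point. A minimal Python sketch, with all names, coordinates and messages hypothetical (the newsletter does not specify any implementation):

```python
# Minimal sketch of a VSD "hot spot" lookup: each hot spot maps a
# rectangular region of a digitized photo to a spoken message.
# All names, coordinates and messages here are hypothetical.

from dataclasses import dataclass


@dataclass
class HotSpot:
    x: int        # left edge of the region, in pixels
    y: int        # top edge of the region, in pixels
    width: int
    height: int
    message: str  # speech output for this region

    def contains(self, px: int, py: int) -> bool:
        return (self.x <= px < self.x + self.width
                and self.y <= py < self.y + self.height)


def message_for_touch(hot_spots, px, py):
    """Return the speech output for a touch at (px, py), or None."""
    for spot in hot_spots:
        if spot.contains(px, py):
            return spot.message
    return None


# A reunion photo might carry hot spots like these:
reunion_scene = [
    HotSpot(40, 120, 80, 150, "This is my brother Ben."),
    HotSpot(200, 90, 120, 100, "Do you go to family reunions?"),
]

print(message_for_touch(reunion_scene, 60, 200))  # touch lands on Ben
```

A touch outside every region returns nothing, which is why partners can still use the same photo simply as shared context on a low-tech board.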

My thanks to David Beukelman, Kathy Drager, Jeff Higginbotham, Janice Light and Howard Shane for their generous help in preparing this issue. These researchers are collaborators in the AAC-RERC (Rehabilitation Engineering Re-

Visual scene displays

Decades ago, people who were unable to speak (or write) had to rely solely on their impaired speech and other natural methods of communication. This led to frustration for both the individuals and their communication partners. During the 1970s and 1980s, the disability rights movement, new laws and new public policies brought children with severe physical, communication and cognitive disabilities into their neighborhood schools and gave adults who had been sequestered at home or living in institutions the means to move out into their communities.

Concurrently, teachers, therapists, family members and advocates found ways for individuals with limited speech and language skills to communicate, by providing a growing range of AAC techniques and strategies, such as manual signs, gestures, pictures and graphic symbols. By the mid 1980s, the field of AAC had been firmly launched.

Today we have machines that "talk" and a growing cadre of professionals and advocates working to provide functional communication to people with complex communication needs. The AAC community has helped pave the roads to education, employment and community participation. AAC techniques,


Upfront, Continued from page 1

search Center on Communication Enhancement) [www.aac-rerc.com]. They are committed to improving the design of AAC technologies so devices become more intuitive, user friendly and (bottom line) easier for people with complex communication needs to learn and to use.

Clinical News briefly explores why visual scene displays (VSDs) may be particularly useful to young children and individuals with cognitive and linguistic limitations. In addition, this section defines VSDs, their features and potential applications. Beginning Communicators summarizes completed research projects at Pennsylvania State University documenting the difficulties children with typical development have using AAC technologies. It also describes a current Penn State project that features VSDs to support language and communication development in young children with severe communication impairments. Adults with Aphasia summarizes

Clinical News, Continued from page 1

strategies and technologies have enabled some people with severe communication impairments to fulfill the same social roles as their nondisabled peers and to pursue their personal goals and dreams. However, many people who have complex communication needs are still not benefitting from AAC technologies.

Current AAC technologies

Simple AAC devices enable users to express a small number of messages. While useful, the messages are typically restricted to a few preprogrammed phrases. Other devices are more complex and have dictionaries of symbols to represent language, an ability to store and retrieve thousands of messages, colorful displays, intelligible voice output with male and female voices, multiple access options, environmental controls, Internet access and rate enhancing strategies.

A common complaint about complex AAC devices, however, is they are difficult to use and time consuming to learn. Research and clinical experience suggest that AAC technologies often place enormous cognitive demands on the individuals who use them. For example, current AAC devices that are available to beginning communicators and individuals with cognitive and linguistic limitations are composed of isolated symbols, whose meaning must be learned. These symbols are typically arranged in rows and columns. Users must navigate through pages and pages of symbols to retrieve phrases or words to construct meaning, or they must learn to use coding strategies (abbreviation expansion, semantic compaction, Morse code). The cognitive demands of these approaches make learning time-consuming. Some people manage to master these systems, but many do not.

Visual scene displays

Advances in mainstream technologies (e.g., miniaturization, multi-functionality, storage capacities, processing speed, video, photo and voice capabilities, locator functions) offer options that may better address the requirements of people with complex communication needs who have, until now, been unserved or underserved by AAC technologies. One approach researchers are currently investigating, visual scene displays (VSDs), may be particularly helpful to beginning communicators and individuals with cognitive and linguistic limitations. VSDs offer a way to (1) capture events in the individual's life, (2) offer interactants a greater degree of contextual information to support interaction

Figure 1. Contrasting types of AAC displays: a traditional grid display and a visual scene display.

work underway at the University of Nebraska and describes ways in which VSDs support the social conversations of individuals with severe aphasia. Other Applications of VSDs: Beyond Conversation discusses ways VSDs can support learning, offer prompts and provide information. It highlights a research project underway at Children's Hospital Boston that is exploring ways to use VSDs with children on the autism spectrum.

Most of the technologies discussed in this issue are still prototypes. Even so, the ideas and approaches described herein can already be creatively put to use.

Sarah W. Blackstone, Ph.D., CCC-SP


and (3) enable communication partners to participate more actively in the communication process.

Visual scene displays (VSDs) portray events, people, actions, objects and activities against the backgrounds within which they occur or exist. These scenes are used as an interface to language and communication. A VSD may represent

• a generic context (e.g., a drawing of a house with a yard, an office with workers or a school room with a teacher and students). [See Figures 10 to 13.]

• a personalized context (e.g., a digital photo of a child playing in his bedroom or a picture of the family on a beach while on vacation). [See Figures 3 to 9.]

Note: A VSD may also have animated elements (i.e., intelligent agents) that can move around a scene. [See Figure 13.]

Differences between VSDs and grid displays

Figure 1 illustrates and Table I contrasts some of the variables that differentiate traditional AAC grid displays from visual scene displays: (a) primary types of representation used (e.g., photos, symbols, letters); (b) extent to which the representations are personalized (e.g., involve the individual in the scene, are familiar or generic); (c) how much context is provided in the representation (e.g., low = isolated concepts; high = concepts provided in a personal photo); (d) type of layout (e.g., grid, full scene, partial scene, combination grid and scene); (e) how displays are managed (e.g., menu pages, navigation bars); (f) how concepts are retrieved (e.g., by selecting a grid space, pop ups, hot spots, speech keys) and (g) primary uses of the display (e.g., communication of wants/needs, information exchange, conversational support, social interaction, social closeness).

Both VSDs and grid displays can be used on low- and high-tech devices. Traditional AAC displays are configured in grids, most often in rows and columns, with elements comprised of individual graphic symbols, text and/or pictures arranged on the display according to specific organizational strategies (i.e., by category, by activity, alphabetically, or idiosyncratically). The elements on a grid display are decontextualized so that users can construct desired messages on numerous topics using the same symbols.

In contrast, the elements of a VSD are components of a personalized or generic scene, i.e., pictured events, persons, objects and related actions, organized in a coherent manner. As such, the display provides a specific communication environment and a shared context within which individuals can tell stories, converse about a topic, engage in shared activities and so on. VSDs also enable communication partners to assume a supportive role, if needed, to make interaction easier for beginning communicators and individuals with cognitive and linguistic limitations.

Table I. Traditional grids and visual scene displays (VSDs) (Light, July 2004)

Contributors to this issue (Beukelman, Light and Drager, Shane) currently are investigating various features of VSDs to determine which can best meet the needs of different population groups, including young children with complex communication needs, children with autism and adults with aphasia.

Current applications of VSDs

Depending upon the age and cognitive and linguistic needs of the individual using the display, VSDs can be configured to serve several purposes.

1. Stimulate conversation between interactants. Adults with aphasia and very young children often find it difficult to have conversations because of their limited language skills. Yet, within a shared context and with a skilled communication partner, conversations can occur. For example, a man with severe aphasia still has much to share with his friends. Using a VSD with digitized photos of his trips and family activities, he can set topics, converse and navigate between topics when a friend comes to visit. [See Figure 7.]

2. Support play, share experiences and tell stories. Bill wants to play with his toy farm, so his Mom finds the page on his AAC system that has a large photo of them playing with the farm. Bill touches the part of the photo that has the bucket of feed that the farmer is carrying. Bill's touch retrieves the speech output "feed." His Mom gets the farmer and waits. Bill then touches the cow in the photo to retrieve the speech output "cow." His Mom touches the farmer, the feed and the cow in sequence and then says, "Oh, the farmer's going to feed the cow," and acts it out. [See Figure 3.]

3. Facilitate the active participation of interactants during shared activities. In this application, the activity is "in" the VSD. For example, Joe and his mom read the scanned pages from Brown Bear, Brown Bear, or sing "Old Macdonald had a farm" using a picture of a farm with various stanzas of the song. [See Figure 4.]

4. Provide instruction, specificinformation or prompts. Many


Variables | Traditional grid displays | Visual scene displays
Type of representation | Symbols: traditional orthography, line drawings | Digital photos; line drawings
Personalization | Limited | Involve individual; personalized; generic
Amount of context | Low | High
Layout | Grid | Full or partial scene; combo (scene & grid)
Management of dynamic display | Menu pages (symbols represent pages) | Menu pages (scenes represent pages); navigation bars
Retrieval of concepts | Select grid space | Pop ups; hot spots; select grid space; speech keys
Functional uses (primary) | Communication (wants/needs, information exchange) | Conversational support; shared activity; social interaction; learning environment; communication (information exchange, social closeness, wants/needs)


Beginning Communicators

Penn State projects

For all children, early childhood represents a crucial developmental period. Very young children at risk for severe communication impairment especially need the earliest possible access to AAC strategies and tools to support their language and communication development during a critical period of growth. The need for this support is widely accepted, yet exceptionally difficult to meet.

Difficulties with current AAC technologies

A research team at Pennsylvania State University (PSU), led by Janice Light and Kathy Drager, has been investigating techniques (1) to increase the appeal of AAC technologies for young children and (2) to decrease the learning demands on young children who can benefit from AAC technologies. The results are summarized below.

Increasing the appeal of AAC technologies. Researchers and clinicians have noted that young children without disabilities do not find current AAC devices particularly appealing. To investigate these observations, researchers at PSU asked groups of children to design an AAC device for a hypothetical younger sibling who was unable to speak. The designs of typically developing children (ages 7-10) did not resemble the box-like, black/gray design of current AAC technologies. The devices these children developed were colorful, multifunctional and easily embedded into play activities.

Decreasing the learning demands of AAC technologies. The PSU research team also investigated ways to reduce the learning demands of AAC technologies by redesigning the representations of language concepts, their organization and their layout.

Representation. Researchers hypothesized that popular AAC symbol sets are not meaningful to very young children because the symbols are based on adult conceptual models. Results confirmed the hypothesis. When asked to draw pictures of words representing nouns, verbs, interrogatives, etc., typically developing four- and five-year-old children (independent of cultural and language group) did not draw parts of objects or events as is common in AAC symbol sets. Rather, they drew personalized scenes that involved themselves and other people engaged in meaningful activities. For example, most chil-

Clinical News, Continued from page 3

children with autism are drawn to electronic screen media (e.g., videos, DVDs, TV) and, therefore, may find VSDs engaging. An animated character, known as an intelligent agent, may be useful in teaching language concepts or prompting the use of speech to label objects, activities and attributes. [See Figure 13.]

Potential benefits of VSDs

VSDs are personalized, create a

shared context, provide language in context and enable partners to assume more responsibility during interactions. They also shift the focus away from the expression of wants and needs toward social interaction and the exchange of ideas and information. VSDs can be highly personalized, which is important for young children learning language and can be helpful for others as well (e.g., individuals with aphasia, autism, Down syndrome, etc.). VSDs may reduce cognitive demands, make learning easier and offer beginning communicators and individuals with limited cognitive and linguistic skills immediate success. Researchers report that very young children and individuals with aphasia and autism have required minimal, if any, instruction before using devices configured with VSDs to communicate.

Persons who rely on conventional AAC technology may also benefit from VSD support of their conversational interactions in much the same ways that typical speakers use photo albums to support their communication. For example,

Tom relies on AAC technology because his speech is largely unintelligible as a result of amyotrophic lateral sclerosis. When he takes a trip or goes to an event, his family takes digital photographs and develops a simple slide show. When meeting with friends, the family runs the slide show in the background on a portable computer. His guests often take a couple of moments to view the show before talking with Tom. If his AAC technology contained VSD capability in addition to spelling, word prediction and message retrieval, he would probably interact more easily and effectively.

AAC-RERC researchers are striving to inform the field about the ways in which VSDs and hybrid (combination of grid and VSD) displays are effective with specific populations. They are observing the impact of VSDs on early language/concept development and on social interaction. Neither the current nor the potential value of ongoing VSD research and development should be underestimated. By continuing to explore the many unanswered questions about VSDs, their configurations and their uses, these pioneering researchers are opening new windows of communication opportunities for individuals we don't yet serve very well.




dren drew "who" by showing two figures together (a parent and child) in the foreground and the figure of a stranger in the background. This representation is very different from any of the symbols typically used on AAC displays to mean "who."

Organization. Researchers hypothesized that AAC device displays are not organized in ways that reflect how young children think. Findings confirmed the hypothesis. Young children without disabilities do not group symbols in pages by category, by activity, alphanumerically or idiosyncratically, although these are the organizational strategies currently used on AAC devices and low-tech displays.

When typical children were asked to organize a set of 40 AAC symbols by grouping them, they sorted the symbols into pairs or small groups, not into large groups or pages. Also, more than 80 percent of the time, they sorted according to an activity-based (schematic) organization (i.e., grouping together the people, actions, objects and descriptors associated with bedtime, mealtime or shopping). They rarely grouped symbols based on taxonomic categories (i.e., food, people, toys or actions). Today's AAC displays typically organize symbols on multiple pages, often in taxonomic categories.

Learning to use AAC devices. PSU researchers hypothesized that the cognitive demands of current AAC devices are well beyond what most young children without disabilities can handle. They investigated the extent to which 140 typically developing children, grouped by age (two years, N=30; three years, N=30; four years, N=40; five years, N=40), could learn to use AAC devices to communicate during a familiar play activity (a birthday party theme). Each child

received 1.5 to 2.5 hours of instruction over four sessions: three training sessions and a generalization session. All participating children had normal cognitive, language and sensory motor abilities.

Researchers investigated the effects of different display organizations and layouts on learning. The AAC devices used in the study were configured with four types of displays. The systems under investigation necessarily differed in relation to the types of representations used as well. The taxonomic grid and schematic grid conditions used DynaSyms™. The iconic encoding technique represented language concepts using standard icon sequences from Condensed Unity™ software. The visual scene display condition used scenes representing a birthday party. [See Figure 2.] The conditions were:

1. Taxonomic grid display. Target vocabulary items were organized on four pages (5 x 4 grid with approximately 12 to 20 symbols per page). The Dynavox™ with DynaSyms™ was used. Items were grouped taxonomically so each page represented a category: people, actions, things, social expressions. A total of 60 symbols were available. There was a menu page to allow the children to select one of the four pages.

2. Schematic grid display. Targeted vocabulary items were organized on four pages (5 x 4 grid with approximately 12 to 20 symbols per page). The Dynavox™ with DynaSyms™ was used. Items were grouped according to event schema associated with the

Figure 2. Device used for each condition

1 and 2. Dynavox™ device with DynaSyms™ (taxonomic and schematic grid conditions)

3. Companion Software™ on the Freestyle System™ (visual scene display condition)

4. Liberator™ with Condensed Unity Software™ (iconic encoding condition)

birthday party (arriving at the party, eating cake, opening presents, playing games). A total of 60 symbols were available. There was a menu page to allow the children to select one of the four pages.

3. Visual scene display (VSD). Targeted vocabulary was organized on four pages of the Freestyle™ system using Companion™ software. Each page represented events related to a birthday party that occur within different scenes in a house: (1) arriving at a birthday party in the living room, (2) playing games in the play room, (3) eating cake in the kitchen and (4) opening presents in the family room. Scenes were drawn by a student in graphic design and were generic rather than personalized. Sixty vocabulary items were preprogrammed under "hot spots" in the scenes. There was a menu page to allow the children to select one of the four scenes.

4. Iconic encoding. The Condensed Unity™ software on the Liberator™ was used. A total of 60 vocabulary items were stored in two-icon sequences. Forty-nine symbols were organized on a single page.

[Note: All four conditions required the children to make two selections to retrieve target vocabulary.]
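The two-selection retrieval common to all four conditions can be illustrated with a toy model: the first selection opens a page (or scene) from the menu, and the second retrieves an item stored there. A minimal Python sketch, where the page names and vocabulary are hypothetical stand-ins, not the study's actual 60-item corpus:

```python
# Sketch of the two-selection retrieval used in all four study
# conditions: selection 1 picks a page from the menu, selection 2
# picks an item on that page. Page names and vocabulary here are
# hypothetical stand-ins for the study's corpus.

birthday_party_system = {
    "living room": {"door": "hello", "mom": "mommy"},
    "play room": {"ball": "ball", "game": "play"},
    "kitchen": {"cake": "cake", "candles": "blow"},
    "family room": {"present": "present", "bow": "open"},
}


def retrieve(system, page_choice, item_choice):
    """Two selections: a page from the menu, then an item on it."""
    page = system[page_choice]   # selection 1: menu page
    return page[item_choice]     # selection 2: grid space or hot spot


print(retrieve(birthday_party_system, "kitchen", "cake"))  # -> cake
```

The same structure covers both grid conditions (items are symbols in a 5 x 4 grid) and the VSD condition (items are hot spots in a scene); only the representation on screen differs.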

During four intervention sessions, an effort was made to teach each child to use between 12 (two-year-old group) and 30 (five-year-old group) symbols out of the 60 available vocabulary items. Thirty concrete concepts and 30 abstract concepts were included in the corpus. Ten children from each age group participated in each of the conditions. Children ages two and three participated in three conditions. Children ages four and five participated in four conditions.

Results

Findings across the studies confirmed that all children learn concrete symbols more readily than abstract symbols. Even so, no group performed well during the initial session, suggesting that current


Figure 3a. Visual scene display: a Fisher Price® farm toy.

Figure 3b. Shows hot spots where language is embedded.

technologies are not transparent to young children under the age of six. The studies confirmed that typical children experience significant difficulties learning to use AAC devices as they are currently designed and configured.

Two year olds performed poorly across all learning sessions, showed limited progress across all conditions and demonstrated no generalization. However, they did significantly better locating the 12 vocabulary items taught in the VSD condition than they did in the taxonomic or schematic grid conditions. [Note: They did not participate in the iconic encoding condition.]

Three year olds performed somewhat better than the two year olds after the initial learning session. Overall, they made limited progress learning to use the 18 targeted symbols. Similar to the two year olds, they did better using VSDs than the grid layouts. A few showed some evidence of generalization to new vocabulary. [Note: They did not participate in the iconic encoding condition.]

Four- and five-year-old children learned a greater number of symbols in the dynamic display conditions (grids and VSDs) than they did in the iconic encoding condition, and they generalized to novel vocabulary items except in the iconic encoding condition. By the final learning session, four-year-old children were performing with a mean of 67% accuracy (16 of 24 items correct) in the taxonomic grid condition; 62% accuracy (14.8 of 24 items correct) in the schematic grid condition; 66% accuracy (15.8 of 24 items correct) in the VSD condition; and 14% accuracy (3.4 of 24 items correct) in the iconic encoding condition. Five-year-old children were performing with a mean of 80% accuracy (24.1 of 30 items correct) in the taxonomic grid condition; 79% accuracy (23.8 of 30 items correct) in the schematic grid condition; 71% accuracy (21.4 of 30 items correct) in the VSD condition and 27% accuracy (8.1 of 30 items correct) in the iconic encoding condition.
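The accuracy percentages reported for the four-year-olds follow directly from mean items correct divided by the 24 items taught; a quick Python check, using the means given in the text:

```python
# Check that the reported four-year-old accuracies match
# mean items correct / items taught (24 items for this age group).
# Means are taken directly from the study figures quoted above.

four_year_olds = {  # condition: mean items correct (of 24)
    "taxonomic grid": 16.0,
    "schematic grid": 14.8,
    "VSD": 15.8,
    "iconic encoding": 3.4,
}

for condition, correct in four_year_olds.items():
    pct = round(100 * correct / 24)
    print(f"{condition}: {pct}%")
```

Running the loop reproduces the reported 67%, 62%, 66% and 14%; the five-year-old figures check out the same way against 30 items taught.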

In summary, while typical four- and five-year-old children learned to use AAC devices to communicate, they did so quite slowly. Two- and three-year-old children were not successful and really struggled. The younger children performed significantly better using VSDs than either grid display but did not perform well. The researchers hypothesized that the VSDs might have been more effective if they had been personalized to each child's experiences. No child in the study under six performed well using iconic encoding. During the generalization session, most of the children did not choose to use devices despite being prompted to do so. Researchers concluded that typical children under the age of six have difficulty learning to use current AAC technologies. Thus, it is essential that researchers and

developers design devices for beginning communicators that (1) children find appealing and (2) support, rather than overcomplicate, the use of AAC technologies. Devices for young children must present and organize language in developmentally appropriate ways.

Designing new AAC technologies

Janice Light, Kathy Drager and other members of the PSU research team are now conducting a longitudinal research project to design features of AAC technologies for children that will reduce learning demands and create new opportunities for language learning and communication success. The team has incorporated findings from their previous research and is using personalized VSDs in conjunction with instructional strategies tailored to meet the needs of each participant in their research.

Figure 4. "Old Macdonald had a Farm." Shows hot spots where language is embedded.

Beginning Communicators, Cont. from page 5


Study #1 focuses on children with developmental disabilities (ages one to three). A spin-off study targets older children with autism spectrum disorder. These studies are underway.

Study #2 targets older children or adolescents with significant cognitive impairments. This study has not yet begun.

The device prototype for the VSDs is Speaking Dynamically Pro™ on the Gemini® and Mercury® devices. Researchers are measuring a number of variables at baseline (i.e., rate of vocabulary acquisition, frequency of communicative turns and range and type of communicative functions expressed) and then periodically documenting the children's progress in these areas over time.

Figures 3, 4, 5 and 6 show examples of personalized VSDs. [Note: The digital photos that comprise the VSDs typically have the child and family members in them. However, because of privacy issues, these particular scenes do not.]

Figures 3a and 3b show the VSD of a child's favorite Fisher Price® toy. Figure 3a shows what the child sees and Figure 3b shows where language is embedded under "hot spots." The scene is designed to support play and language development, i.e., the use of single words and/or two- or three-word combinations.

Figure 5. This is a hybrid display. Vocabulary can be accessed using all elements in the display. "Hot spots" are highlighted in blue in the VSD. Symbols are located in a grid display.

Figure 6. Menu page of a personalized VSD. Note: Actual pages would be more personalized and have photos of the child and family.

The child touches the cow on the VSD to retrieve the speech output "cow." The clinician might then say, "Where do you want the cow to go?" and pause. If the child does not respond, the clinician might touch the barn and the device would say, "barn." Then the clinician could select cow and barn, say, "The cow walks in the barn," and walk it into the barn.
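The hot-spot interaction described above amounts to a simple spatial lookup: a touch on the scene is tested against tagged regions of the photo, and a match triggers the embedded speech. The sketch below is purely illustrative; the class names, coordinates and messages are invented for the example and are not code from Speaking Dynamically Pro or any actual AAC device.

```python
# Illustrative sketch of a VSD "hot spot": a touch inside a tagged
# region of the digitized photo retrieves the speech message embedded
# there. All names and values are invented for this example.

from dataclasses import dataclass

@dataclass
class HotSpot:
    x: int        # top-left corner of the region, in pixels
    y: int
    width: int
    height: int
    message: str  # speech output embedded under this region

    def contains(self, tx, ty):
        return (self.x <= tx < self.x + self.width
                and self.y <= ty < self.y + self.height)

class Scene:
    """A digital photo with touch-sensitive regions."""

    def __init__(self, photo, hot_spots):
        self.photo = photo
        self.hot_spots = hot_spots

    def touch(self, tx, ty):
        """Return the embedded message at (tx, ty), or None."""
        for spot in self.hot_spots:
            if spot.contains(tx, ty):
                return spot.message
        return None  # touches outside any hot spot do nothing

# A toy farm scene: touching the cow region speaks "cow".
farm = Scene("farm_toy.jpg", [
    HotSpot(40, 120, 80, 60, "cow"),
    HotSpot(200, 30, 120, 150, "barn"),
])
print(farm.touch(60, 140))  # cow
```

On a real device the returned message would be sent to the speech synthesizer rather than printed.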


Songs and stories can build joint attention, an important component of language development. In Figure 4 the VSD represents the song "Old MacDonald Had a Farm."

The parent touches the farmer and the device sings the refrain "Old MacDonald had a farm, E-I-E-I-O." The parent might then sing, "On his farm he had some . . ." and pause. The child touches the pig and the device sings "Pigs. E-I-E-I-O. With an oink, oink here and an oink, oink there... ." Through modeling and expectant pauses, the parent and child have fun participating in this shared activity, using language.

In Figure 5, the scene is a digital photo of a college football game. The child accompanies her parents to the PSU games and has become quite a fan.

The child touches the "hot spot" where the fans are sitting, which starts the cheer, "We are Penn State!"


Along the left side, additional symbols enable the child to ask questions or make comments, e.g., "What's that?" "Touchdown!" These additional symbols introduce the use of grid displays, while continuing to offer important contextual support.

Figure 6 shows a menu page, which enables a clinician, family member and ultimately the child to navigate through a child's system.

Instructional strategies

Longitudinal data are presently being collected on eight young children. According to Janice Light, "None of the eight children with severe disabilities enrolled in the study uses an AAC device only." Researchers encourage the use of multiple modalities (e.g., gestures/signs, eye pointing, facial expression, low tech, high tech and speech) and are designing systems to support language learning and concept development. Light said, "Children are not being asked to 'show me' or 'tell me' and are not tested." Instead, clinicians are modeling the use of low-tech and high-tech VSDs to support interactions during play and shared activities. Light reported that the children regularly use their VSDs during solitary play, as well as when they are communicating with others.

Preliminary data

While preliminary, these longitudinal data are very encouraging. Two examples follow:

Child #1 at baseline. Prior to intervention, a 25-month-old boy with severe cerebral palsy, a tracheostomy and a feeding tube had no vocalizations or speech approximations and used no signs or gestures. However, his mother had created approximately 25 digital photos of toys he liked, which he used to request objects when offered a choice between two toys.

During the baseline session, he expressed one concept or less per five minutes of interaction, requesting objects only.

Child #1 at follow-up. After 12 weeks of intervention (one hour per week), the child had acquired more than 400 words/concepts. He started by using VSDs, but has learned to use grid displays as well. Data show he is increasing his vocabulary by four or five words per day.

He expressed more than 12 concepts during five minutes of interaction, ten times more than he did during the baseline session. In addition, he spontaneously combined concepts using two- and three-word utterances and expressed a variety of semantic relations (agent, action, object, locative, quantifier, descriptors, questions). He requested objects and actions, initiated and sustained social interaction with adults and was beginning to share activities with peers and to comment and share information.

Child #2 at baseline. Prior to intervention, this 29-month-old boy with Down syndrome used fewer than ten spoken word approximations and had fewer than ten signs. He expressed less than one concept per minute during the initial session and fewer than 20 words or concepts during a 25-minute baseline period. Mostly, he used nouns.

Child #2 at follow-up. After seven months of intervention (one hour per week), this 36-month-old boy was using 1,210 words via speech, signs, light-tech and high-tech AAC. His vocabulary had increased by more than five words per day. He used VSDs initially and, later, grid displays. He also relied on signs, gestures and speech to express more than 10 words per minute and more than 250 words in 25 minutes. This was ten times his baseline rate.

In addition, he was using a variety of semantic relations (agent, action, object, locative, demonstrative, possessor, quantifier, instrumental) and asking questions.

Comment

To date, the PSU research team reports that all children in the study have been able to use VSDs on initial introduction, and they continue to be engaged, active, interested and motivated to use their AAC systems. Also, interactions last longer and involve social routines and commenting, not just requesting. The children are using their systems independently during play and learning activities.

Light cautioned, "Preparing existing devices to use VSDs requires programming that is very time consuming and means the coordination of a lot of hardware and software." She also pointed out that because of the children's rapid rates of vocabulary acquisition, it is critical that new vocabulary be regularly added to the systems. As a result, clinical support is intense. Identifying vocabulary needs and designing pages is done in collaboration with the family, with the clinical support managed through a collaborative effort with parents, clinicians, PSU students and volunteers.

This research reflects the dedicated work of Penn State students: Katherine Bercaw, Brianne Burki, Jennifer Curran, Karen D'Silva, Allison Haley, Carol Hammer, Rebecca Hartnett, Elizabeth Hayes, Lisa Kirchgessner, Line Kristiansen, Heather Larson, John McCarthy, Suzanne Mellott, Jessie Nemser, Rebecca Page, Elizabeth Panek, Elizabeth Parkin, Craig Parrish, Arielle Parsons, Sarah Pendergast, Stacy Rhoads, Laura Pitkin, Maricka Ward, Michelle Welliver and Smita Worah.


Adults with Aphasia

Nebraska project

A research team at the University of Nebraska, led by David Beukelman, Susan Fager, Karen Hux and Jamie Carlton, has studied the acceptance of AAC technologies by people with acquired disabilities, including those with aphasia. The team has found that people with aphasia, their communication partners and clinicians who provide aphasia therapy are underwhelmed by AAC devices. In part, this reflects the cognitive and linguistic demands of current AAC technologies, which can magnify the difficulties people with severe aphasia have generating language.

When adults with aphasia use AAC technologies, they typically do something very specific, such as make a phone call or request assistance. While using the telephone, stating one's needs and asking for something are important communication functions, they do not begin to address the reasons most adults want to communicate. Engaging in "small talk," telling stories, sharing ideas and information, discussing current events and gossiping about family and friends are the fodder for adult interaction. When people cannot participate in this kind of communication, their social circles begin to shrink, and they are at risk for increasing isolation. Thus, to meet the needs of this population an AAC device must have built-in ways to support these important conversational functions.

This article describes the ongoing effort of researchers at the University of Nebraska to create an AAC device that will increase the ability of adults with aphasia to engage in conversations and maintain their social roles.

Goal and rationale

The goal of the project is to develop a prototype device that (1) uses digital images (scenes) to represent meaning, (2) enables people with aphasia to navigate from scene to scene during a conversation and (3) reduces the cognitive and linguistic load of using AAC technologies. The research team hypothesized that adults with aphasia would be able to communicate more effectively using visual scene displays (VSDs).

The Nebraska prototype device is being developed in collaboration with a corporate partner, DynaVox Systems LLC. In designing features for the prototype device, the research team considered ways to help people with aphasia represent meaning by capitalizing on their strengths while finding ways to compensate for their difficulties with language. As a result, the prototype uses visual scene displays (VSDs) that are rich in context and that enable communication partners to be active participants in the conversation.

Beukelman recounts some early observations that underlie the team's rationale for selecting specific design features.

Use of personalized photos. There is power in personal photographs, and they quite naturally provide a shared context within which to have a conversation. Beukelman noted this when his mother was moving into an assisted living facility.

I found myself curious about the decisions she made about what to take with her, especially her photographs. She selected photos to be displayed in her room at the assisted living facility. These were of large family groupings, scenes of the farm where she and my father lived for many years, a few historical favorites, individual pictures of the great grandchildren and a picture of my father standing beside their favorite car, a 1948 model.

Later, I also noticed that a number of my mother's new friends, who share her new home, also had photo collections that supported their conversations.

The research team decided to design a prototype device that used personal digital photos in ways that could both engage others and support the formulation of personal stories.

The use of context to support meaning. At the AAC-RERC State of the Science Conference in 2001, Janice Light and her colleagues described the contextual nature of the "symbols" that young children draw to represent meaning.

"Mother" was represented by a woman with a young child. "Who" was represented by drawing a relatively small person in the distance who was approaching an adult and a young child. "Big" was presented in the context of something or someone small or little. "Food" was drawn as a meal on the table, not by a single piece of fruit. Young children maintained context when they represented meaning.

The Nebraska team hypothesized that adults with aphasia would also benefit from contextual supports when attempting to construct meaning. They decided to use digital photos to capture important activities and events in the lives of individuals with aphasia and organize them on prototype devices in ways that could provide rich contextual support.

Figure 7. Example of VSD for adults with aphasia. Family scenes.

Capitalizing on residual strengths of adults with aphasia. Being able to tap into the residual visual-spatial skills of people with severe aphasia was key to designing the prototype device. Beukelman explains the team's thinking.

Jon Lyon, a well-known aphasiologist, has described a drawing strategy that he teaches to persons with aphasia in order to support and augment their residual speech. Many of these drawings are relatively rich in context, even though they are usually drawn with the non-dominant hand.

Mr. Johnson, a man with expressive aphasia, taught Kathy Garrett and me a lot about communication. The first time I met him, he took out a bulging billfold and began to spread race-track forms, pictures from the newspaper, pictures of his family and articles from the paper on the table between us. Whenever he did this, I knew that we were going to have a "conversation."

The research team decided that the prototype AAC systems should incorporate strategies like maps, diagrams, one-to-ten opinion scales and photos, because they currently work for people with aphasia, and then should organize these strategies in ways that provide ready access to language.

Description

The prototype device, which is currently under development, primarily uses digital images. The visual scenes are accompanied by text and symbols that (1) support specific content (e.g., names, places), (2) cue communication partners about questions or topics that are supported by displays and/or (3) support navigation features. The digital images are used to (1) represent meaning, (2) provide content and (3) support page navigation.

Unlike the VSDs discussed in the previous article, the prototype device does not use "hot spots" in the images. Thus, nothing happens to an image when it is touched. This means that the scenes can be touched many times by both interactants as they co-construct meaning, ask and answer questions and tell related stories. This enables persons with aphasia to use images in much the same way they would use a photo album.

Speech, questions and navigation buttons are placed around the image, as shown in Figures 7, 8 and 9. These elements allow both interactants to speak messages, ask questions or move to related images. The figures show the available themes on one person's system. These are represented by digital images located around the edge of the display, e.g., family, vacations, shopping, recreational activities, home and neighborhood, hobbies, exercise, favorite food and places to eat, life story, former employment and geography.

Figure 7 highlights the family theme and provides a way to talk about family-related issues. For example, the "theme" scene of the family on vacation has mountains in the background and babies in backpacks, providing lots of information with which to represent meaning and build a conversation. To expand the conversation to other family-related topics, the individual can go deeper into the family theme (by pushing the [+] symbol) and get to additional photos (Christmas tree, wedding, new baby). The blue line on Figure 7 shows that the theme photo also appears in the left-hand corner.

Figure 8. Example of VSD that shows more scenes on the family.

Figure 8 illustrates what happens when the person goes deeper into the family theme. For example, if talking about a new grandchild, the person can touch the [+] symbol near the photo with the baby and retrieve more pictures related to the arrival of the new addition to the family.

Figure 9 shows a shift to a shopping-related topic. The scene represents various stores the person frequents. To access these displays, the person simply selects a store to retrieve more information.
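The theme navigation just described, where the [+] symbol moves from a theme photo to related scenes and back, can be thought of as a small tree of linked pages. The following sketch is a hypothetical model, not the Nebraska prototype's actual implementation; all page names are invented for the example.

```python
# Illustrative model of VSD theme navigation: each scene page links
# to "deeper" pages reached via the [+] symbol, and remembers its
# parent so the user can move back out of a theme.

class ScenePage:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []  # pages reached via [+]
        self.parent = None
        for child in self.children:
            child.parent = self

def go_deeper(page, child_name):
    """Selecting [+] beside a photo opens the related page."""
    for child in page.children:
        if child.name == child_name:
            return child
    return page  # no such page: stay where we are

# A fragment of one person's system: a theme photo with deeper pages.
family = ScenePage("family vacation", [
    ScenePage("Christmas tree"),
    ScenePage("wedding"),
    ScenePage("new baby", [ScenePage("baby comes home")]),
])

current = go_deeper(family, "new baby")
current = go_deeper(current, "baby comes home")
print(current.name)                 # baby comes home
print(current.parent.parent.name)   # family vacation
```

Because every page keeps a parent link, "getting lost" is recoverable: any page can walk back up to its theme photo, which mirrors how the prototype lets users return to a theme scene.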

Figures 7, 8 and 9 also illustrate elements around the perimeter that have specific functions.

• The s))) symbol specifies synthesized speech messages that describe the picture or individuals within the picture.

• The ? symbol gives a way to ask one's communication partner a question (e.g., "Where do you like to go for vacations?").

• The + symbol means there is more information about the theme available.

• The two-hands icon stands for "I need help" or "I'm lost here, give me a hand."

Using the displays

For the past year, the research team has collaborated with two individuals with severe aphasia in an effort to develop design features and investigate the uses of VSDs.

Figure 9. Example of shopping scenes.

1. Pat has expressive aphasia and is able to say very little, other than stereotypic utterances and some jargon. Before she began to use the prototype device, she was using a low-tech communication book containing drawings, photos of some items (buildings and people) and some words. While the low-tech book successfully represented meaning for her, she was unable to navigate through it easily. She usually engaged in a linear search, beginning at the first page, until she found the item or the page that she was looking for. In an effort to support her, family members often located a page beforehand, asking her a question she could answer using the page.

When first presented with the VSD prototype device, Pat did not engage in a linear search. Rather, she went directly to the scene she wanted. Researchers noted that she used the photos to support her conversation by relating the story that was illustrated. She also was observed using gestures, vocalizations and items in the photos to communicate something quite different from the illustrated meaning. Beukelman recounts,

Pat had selected a large number of digital photos to talk about a trip she and her husband had taken to Hawaii before her stroke. Her favorite scene was an autographed picture of them with the entertainer Don Ho. She loved to tell us about the concert and the visit with Don afterward.

Another series of photographs illustrated a bus trip in the mountains to see the sun rise over the ocean. It was during that side trip that her camera was lost. When asked how it was lost, she navigated back to the Don Ho scene, pointed to her husband, pointed to the side of her head and twirled her finger in a circle, exhaled loudly and began to laugh. No one needed to translate for her.

2. Ron is another collaborator. He has severe aphasia secondary to a stroke, which has limited his verbal interaction. He produces stereotypic utterances and uses many generic words, but is unable to retrieve the content words that enable him to convey messages. Prior to the stroke, he was a semi-professional photographer, so he brings the research team gorgeous photographs to include in his AAC device. As part of the research protocol, Ron was videotaped conversing with Dave Beukelman about a trip he took to the Rio Grande after his stroke. Beukelman reported,

Ron wanted to tell me about rock formations, the light on the rocks in the morning and the evening, the location of the park on several different maps, the direction that the canoe took, the new wheelchair access and much more. The digital photos, digital images of maps and photographs from the Internet that were on his VSDs supported our conversation.

When the research team looked at the tape of the session, they commented that the video "looks like two old guys who like to talk too much." Beukelman adds,

As I watched the video, I began to understand. We were interrupting each other, talking over the top of each other and holding up our hands to indicate that the other person should be quiet for a minute. And, over 20 minutes of conversation, he apologized only once for not being able to talk very well, something that filled most of his prior communication efforts. It looked and felt like a conversation of equals sharing information and control.

Comment

The Nebraska research is revealing that pictures appearing early in a theme are more effective if they are rich in context and contain a good deal of information. Thus, the first page of each theme in the prototype system is designed to support at least five conversational turns. Subsequent pages may contain elements with more detail to clarify the content of the early photo, e.g., names of family members, where they live, how old they are and so on.

Researchers are using highly personalized photos early in the theme content. Then, as an individual progresses further into a theme, more generic pictures may be used to represent a doctor, hospital, grocery store and so on.

The team finds that people with severe aphasia can use the various navigation strategies provided on the prototype AAC device. They are impressed by the visual-spatial abilities of persons with severe aphasia when the language representation and categorization tasks are minimized.

To summarize, it appears that VSDs allow individuals with aphasia to make use of their strengths with images, to navigate through pages of content and to communicate in ways that cannot be accomplished using low-tech communication book applications and current AAC technologies.


Other Applications of VSDs: Beyond Conversation

Children's Hospital Boston projects

This section provides examples of additional applications of visual scene displays (VSDs). The examples illustrate the use of generic scenes designed to facilitate organization, cueing and instruction, as well as communication.

Communication

Figure 10 shows two scenes from Companion®, a program designed by Howard Shane in 1995 for Assistive Technology, Inc. Until a few years ago, it was available on their Gemini® and Mercury® AAC devices. The intent of Companion® was to create a program using common everyday locations, such as a home, school, playground or farm, that could support interaction. For example, the scene in Figure 10 shows a house with several rooms. When a person touches the bedroom, the scene changes and shows a bedroom with a dresser, bed, lamp and so on. There can be 20 hot spots with linguistic elements in the scene. Touching the dresser drawers, for example, can retrieve the speech output, "I'd like some help getting dressed."

Figure 10. VSD for communication using the Companion® program.

Shane has talked about these scenes as "graphical metaphors," because they stand for real places, actions and objects. He says they were designed primarily to enable persons on the autism spectrum to indicate what they wanted by making requests.

Cueing

The Companion® program also was used to give visual and auditory cues to persons with anomia to help them recall the names of familiar items. Figure 11 shows a living room and illustrates what would happen when someone clicked on the chair to retrieve the word "chair": a ch__ cue, i.e., a partial cue, appears. The program features three types of cues: (1) a complete visual cue (chair), (2) a partial visual cue (ch__) and (3) a spoken cue. Individuals with aphasia have demonstrated an ability to navigate through this environment and select intended objects.

Figure 11. Example of VSD with partial cue. Hot spot is highlighted.
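A partial visual cue of the kind shown in Figure 11 can be generated mechanically from the target word. This is a hypothetical sketch of the visual cue types; the function names are invented for illustration and are not taken from the Companion® program (a spoken cue would simply play recorded or synthesized audio of the word).

```python
# Illustrative generation of the two visual cue types described above.

def complete_cue(word):
    """Complete visual cue: show the whole word."""
    return word

def partial_cue(word, shown=2):
    """Partial visual cue: the first letters plus blanks for the rest."""
    return word[:shown] + "_" * (len(word) - shown)

print(complete_cue("chair"))  # chair
print(partial_cue("chair"))   # ch___
```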

Organization

Figure 12 shows Comet, which stands for the Control Of Many Everyday Things. Shane also developed this VSD for Assistive Technology, Inc. Different functions occur when hot spots are selected.

Calendar. Touch the calendar and the date is spoken. This is set to the computer's calendar.

Clock. Touch the clock and the time is spoken. This is set to the system clock of the computer.

TV. Touching the TV launches the remote control.

Two figures. Touching the figures launches the built-in communication program.

Computer. Touching the computer launches an Internet connection.

Cabinet. Touching the cabinet launches a menu of computer games.

Calculator. Touching the calculator launches the calculator program.
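Conceptually, a Comet-style scene maps each object to an action that runs when its hot spot is touched. The sketch below is illustrative only, not Comet's actual code: the actions just return strings here, whereas the real program speaks and launches programs.

```python
# Illustrative dispatch table for a Comet-style scene: each named
# hot spot maps to a callable. All names are invented for the example.

import datetime

def speak_date():
    return datetime.date.today().strftime("Today is %B %d, %Y")

def launch(program):
    # Returns an action that (in this sketch) reports what it launches.
    return lambda: "launching " + program

HOT_SPOTS = {
    "calendar": speak_date,                                   # speaks the date
    "clock": lambda: datetime.datetime.now().strftime("%I:%M %p"),
    "tv": launch("remote control"),
    "two figures": launch("communication program"),
    "computer": launch("Internet connection"),
    "cabinet": launch("games menu"),
    "calculator": launch("calculator"),
}

def touch(spot):
    """Run the action for a touched hot spot; ignore other touches."""
    action = HOT_SPOTS.get(spot)
    return action() if action else None

print(touch("tv"))  # launching remote control
```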

Persons with ALS have found Comet to be a helpful computer interface.

Figure 12. Example of VSD for organization.

Figure 13. Example of VSD for multiple functions: Puddington Place.

Auditory training

The next application is not pictured. Sense of Sound is a program that provides an interactive auditory experience for persons who are hard of hearing and includes a variety of familiar environmental sounds. When an object is selected, the sound associated with that object is presented. The program is thought to be a useful component of a cochlear implant auditory training program.

Puddington Place

Figure 13 illustrates Puddington Place, a program that uses VSDs as an interface for multiple functions. Shane reports that the photo-realistic scenes seem to have meaning for children with pervasive developmental disabilities and autism. These children immediately see the relationship of the scene to the real world and also understand the navigation inherent in Puddington Place. They can go into the house, enter rooms and make things talk and make sounds. For example, one child immediately started using the program: when his interactant mentioned items, he would hunt for and find things embedded in different scenes. Children enjoy interacting with the program.

Puddington Place serves as the background for the intelligent agent (IA) known as Champ. Champ is an animated character who can be programmed to give instructions, ask questions and demonstrate actions, such as dressing. For instance, by activating hot spots, Champ can demonstrate the sequences and actions involved in getting dressed.

Shane is currently studying theuse of IAs as a way to engageindividuals on the autism spectrum.

If a child accepts input from an IA, then perhaps the IA can be used to teach language concepts, support the development of speech, regulate behaviors and so on.

• Phase I of the project asks parents of children with autism to identify the features of electronic screen media, such as videos, that their child prefers. Data are currently being collected and will be analyzed in ways that provide information about the reported nature of the children's preferences and the extent of interactions individuals on the spectrum currently have with electronic screen media. Researchers will compare these data to information about the siblings of these children and to the results of available national studies.

• Phase II will investigate how VSDs can support comprehension and expression and help regulate the behaviors of children on the autism spectrum, using an intelligent agent and generic visual scene displays.

Comment

The use of visual scenes can extend well beyond supporting conversation and social interaction to offer a wide range of applications to individuals on the autism spectrum, as well as to children and adults with and without disabilities.


Augmentative Communication News (ISSN #0897-9278) is published bi-monthly. Copyright 2004 by Augmentative Communication, Inc., 1 Surf Way, Suite 237, Monterey, CA 93940. Reproduce only with written consent.
Author: Sarah W. Blackstone
Technical Editor: Carole Krezman
Managing Editor: Harvey Pressman
One-Year Subscription: Personal check, U.S. & Canada = $50 U.S.; Overseas = $62 U.S. Institutions, libraries, schools, hospitals, etc.: U.S. & Canada = $75 U.S.; Overseas = $88 U.S. Single rate/double issue = $20. Special rates for consumers and full-time students. Periodicals postage rate paid at Monterey, CA. POSTMASTER: Send address changes to Augmentative Communication, Inc., 1 Surf Way, Suite 237, Monterey, CA 93940. Voice: 831-649-3050. Fax: 831-646-5428. [email protected]; www.augcominc.com


Resources

David R. Beukelman, Department of Special Education and Communication Disorders, University of Nebraska, 202 Barkley Memorial Center, Lincoln, NE. [email protected]

Kathy Drager, Department of Communication Sciences and Disorders, Pennsylvania State University, 110 Moore Building, University Park, PA 16802. [email protected]

Jeff Higginbotham, Department of Communicative Disorders and Sciences, University at Buffalo, 1226 Cary Hall, Buffalo, NY 14214. [email protected]

Janice Light, Department of Communication Sciences and Disorders, Pennsylvania State University, 110 Moore Building, University Park, PA 16802. [email protected]

Howard Shane, Communication Enhancement Center, Children's Hospital Boston, Fegan Plaza, 300 Longwood Avenue, Boston, MA 02115. [email protected]

Lingraphica

Lingraphica (www.lingraphica.com). A device for people with aphasia with synthesized speech, a large dictionary of images or icons (some animated) organized in groups and a way to create messages, access familiar phrases and develop personalized information. Uses generic scenes (hospital, town and home) with embedded messages. Has clinical exercises for practicing outside the therapy room.

Select Articles and Chapters

Ball, L., Beukelman, D., & Pattee, G. (2004). Augmentative and alternative communication acceptance by persons with amyotrophic lateral sclerosis. Augmentative and Alternative Communication (AAC), 20, 113-123.

Christakis, D., Zimmerman, F., DiGiuseppe, D., & McCarty, C. (2004). Early television exposure and subsequent attentional problems in children. Pediatrics, 113(4), 917-918.

Drager, K., Light, J., Speltz, J., Fallon, K., & Jeffries, L. (2003). The performance of typically developing 2½-year-olds on dynamic display AAC technologies with different system layouts and language organizations. Journal of Speech, Language, and Hearing Research, 46, 298-312.

Fallon, K., Light, J., & Achenbach, A. (2003). The semantic organization patterns of young children: Implications for augmentative and alternative communication. AAC, 19, 74-85.

Garrett, K., Beukelman, D., & Morrow, D. (1989). A comprehensive augmentative communication system for an adult with Broca's aphasia. AAC, 5, 55-61.

Garrett, K. L., & Beukelman, D. R. (1992). Augmentative communication approaches for persons with severe aphasia. In K. Yorkston (Ed.), Augmentative communication in the medical setting (pp. 245-321). Tucson, AZ: Communication Skill Builders.

Prototype to Market

This issue mentions several types of visual scene displays (VSDs). Some are completed products; most are prototypes under development. This type of technical development often entails close communication, collaboration and/or partnership with an equipment manufacturer. These relationships involve information sharing about the capabilities of existing technology, the addition of features to support the researchers' development activities and perhaps more.

The Penn State VSD project has received valuable technical support from Mayer-Johnson, Inc., for their use of Speaking Dynamically Pro®. In addition, the research team has used AAC technologies produced by Assistive Technology, Inc., DynaVox Systems LLC and the Prentke Romich Company in their research.

The Nebraska prototype device is being developed in collaboration with a corporate partner, DynaVox Systems LLC. The prototype is based on Enkidu's Impact software. DynaVox has played an important technical role in redesigning Impact's user interface and programming the system to accommodate large numbers of digital images.

Children's Hospital Boston has supported the ongoing development of Puddington Place and will seek a distributor. Dr. Shane developed the other programs mentioned in collaboration with Assistive Technology, Inc.

A variety of relationships are possible among researchers, developers and manufacturers. The AAC-RERC continuously seeks to foster collaboration, cooperation and partnerships with members of the manufacturing community to help ensure that products, which research suggests will benefit individuals with complex communication needs, become available to them at reasonable cost.

Garrett, K., & Beukelman, D. (1995). Changes in the interaction patterns of an individual with severe aphasia given three types of partner support. In M. Lemme (Ed.), Clinical Aphasiology, 23, 237-251. Austin, TX: Pro-Ed.

Garrett, K., & Beukelman, D. (1998). Aphasia. In D. Beukelman & P. Mirenda (Eds.), Augmentative and alternative communication: Management of severe communication disabilities in children and adults. Baltimore: Paul Brookes Publishing.

Garrett, K., & Huth, C. (2002). The impact of contextual information and instruction on the conversational behaviours of a person with severe aphasia. Aphasiology, 16, 523-536.

Lasker, J., & Beukelman, D. R. (1999). Peers' perceptions of storytelling by an adult with aphasia. Aphasiology, 13(9-11), 857-869.

Lasker, J., & Beukelman, D. (1998). Partners' perceptions of augmented storytelling by an adult with aphasia. Proceedings of the 8th Biennial Conference of the International Society for Augmentative and Alternative Communication (ISAAC), August 1998, Dublin, Ireland.

For additional information about the AAC-RERC and its projects and activities, go to http://www.aac-rerc.com

Light, J., & Drager, K. (2002). Improving the design of augmentative and alternative communication technologies for young children. Assistive Technology, 14(1), 17-32.

Light, J., Drager, K., Baker, S., McCarthy, J., Rhoads, S., & Ward, M. (August 2000). The performance of typically-developing five-year-olds on AAC technologies. Poster presented at the biennial conference of the ISAAC, Washington, DC.

Light, J., Drager, K., Burki, B., D'Silva, K., Haley, A., Hartnett, R., Panek, E., Worah, S., & Hammer, C. (2003). Young children's graphic representations of early emerging language concepts: Implications for AAC. In preparation.

Light, J., Drager, K., Carlson, R., D'Silva, K., & Mellott, S. (2002). A comparison of the performance of five-year-old children using iconic encoding in AAC systems with and without icon prediction. In preparation.

Light, J., Drager, K., Curran, J., Fallon, K., & Zuskin, L. (August 2000). The performance of typically-developing two-year-olds on AAC technologies. Poster presented at the biennial conference of the ISAAC, Washington, DC.

Light, J., Drager, K., D'Silva, K., & Kent-Walsh, J. (2000). The performance of typically developing four-year-olds on an icon matching task. Unpublished manuscript, The Pennsylvania State University.

Light, J., Drager, K., Larsson, B., Pitkin, L., & Stopper, G. (August 2000). The performance of typically-developing three-year-olds on AAC technologies. Poster presented at the biennial conference of the ISAAC, Washington, DC.

Light, J., Drager, K., Millar, D., Parrish, C., & Parsons, A. (August 2000). The performance of typically-developing four-year-olds on AAC technologies. Poster presented at the biennial conference of the ISAAC, Washington, DC.

Light, J., Drager, K., McCarthy, J., Mellott, S., Parrish, C., Parsons, A., Rhoads, S., Ward, M., & Welliver, M. (2004). Performance of typically developing four- and five-year-old children with AAC systems using different language organization techniques. AAC, 20, 63-88.

Light, J., & Lindsay, P. (1992). Message encoding techniques in augmentative communication systems: The recall performances of nonspeaking physically disabled adults. Journal of Speech and Hearing Research, 35, 853-864.

Light, J., & Lindsay, P. (1991). Cognitive science and augmentative and alternative communication. AAC, 7, 186-203.

Light, J., Lindsay, P., Siegel, L., & Parnes, P. (1990). The effects of message encoding techniques on recall by literate adults using augmentative communication systems. AAC, 6, 184-202.

Richter, M., Ball, L., Beukelman, D., Lasker, J., & Ullman, C. (2003). Attitudes toward three communication modes used by persons with amyotrophic lateral sclerosis for storytelling to a single listener. AAC, 19, 170-186.

Rideout, V., Vandewater, E., & Wartella, E. (2003). Zero to Six: Electronic Media in the Lives of Infants, Toddlers and Preschoolers. Kaiser Family Foundation and the Children's Digital Media Centers.