
Journal of Experimental Psychology: Human Perception and Performance

Incidental Learning Speeds Visual Search by Lowering Response Thresholds, Not by Improving Efficiency: Evidence From Eye Movements
Michael C. Hout and Stephen D. Goldinger
Online First Publication, May 16, 2011. doi: 10.1037/a0023894

CITATION
Hout, M. C., & Goldinger, S. D. (2011, May 16). Incidental Learning Speeds Visual Search by Lowering Response Thresholds, Not by Improving Efficiency: Evidence From Eye Movements. Journal of Experimental Psychology: Human Perception and Performance. Advance online publication. doi: 10.1037/a0023894


Incidental Learning Speeds Visual Search by Lowering Response Thresholds, Not by Improving Efficiency: Evidence From Eye Movements

Michael C. Hout and Stephen D. Goldinger
Arizona State University

When observers search for a target object, they incidentally learn the identities and locations of “background” objects in the same display. This learning can facilitate search performance, eliciting faster reaction times for repeated displays. Despite these findings, visual search has been successfully modeled using architectures that maintain no history of attentional deployments; they are amnesic (e.g., Guided Search Theory). In the current study, we asked two questions: 1) Under what conditions does such incidental learning occur? And 2) what does viewing behavior reveal about the efficiency of attentional deployments over time? In two experiments, we tracked eye movements during repeated visual search, and we tested incidental memory for repeated nontarget objects. Across conditions, the consistency of search sets and spatial layouts was manipulated to assess their respective contributions to learning. Using viewing behavior, we contrasted three potential accounts for faster searching with experience. The results indicate that learning does not result in faster object identification or greater search efficiency. Instead, familiar search arrays appear to allow faster resolution of search decisions, whether targets are present or absent.

Keywords: visual search, attention, learning, working memory

Imagine searching for some target object in a cluttered room: Search may at first be difficult, but will likely become easier as you continue searching in this room for different objects. Although this seems intuitive, the results of analogous experiments are quite surprising. In numerous studies, Wolfe and colleagues have shown that, even when visual search arrays are repeated many times, people do not increase the efficiency of visual search (see Oliva, Wolfe, & Arsenio, 2004; Wolfe, Klempen, & Dahlen, 2000; Wolfe, Oliva, Butcher, & Arsenio, 2002). More formally, these studies have shown that, even after hundreds of trials, there is a constant slope that relates search reaction times (RTs) to array set sizes. That is, search time is always a function of array size, never approaching parallel processing.

Although search efficiency does not improve with repeated visual search displays, people do show clear learning effects. If the search arrays are configured in a manner that helps cue target location, people learn to guide attention to targets more quickly—the contextual cuing effect (Chun & Jiang, 1998; Jiang & Chun, 2001; Jiang & Leung, 2005; Jiang & Song, 2005). However, even when search arrays contain no cues to target locations, people learn from experience. Consider an experimental procedure wherein search targets and distractors are all discrete objects (as in a cluttered room), and the same set of distractors is used across many trials: Sometimes a target object replaces a distractor; sometimes there are only distractors. As people repeatedly search through the distractor array, they become faster both at finding targets and determining target absence (Hout & Goldinger, 2010). Moreover, people show robust incidental learning of the distractor objects, as indicated in surprise recognition tests (Castelhano & Henderson, 2005; Hollingworth & Henderson, 2002; Hout & Goldinger, 2010; Williams, Henderson, & Zacks, 2005; Williams, 2009). Thus, although search remains inefficient with repeated displays, people acquire beneficial knowledge about the “background” objects, even if those objects are shown in random configurations on every trial (Hout & Goldinger, 2010).

In the present investigation, we sought to understand how people become faster at visual search through repeated, nonpredictive displays. We assessed whether search becomes more efficient with experience, verifying that it did not (Wolfe et al., 2000). Nevertheless, people became faster searchers with experience, and they acquired incidental knowledge of nontarget objects. Our new question regarded the mechanism by which learning elicits faster search, at the level of oculomotor behaviors. Do people become faster at identifying, and rejecting, familiar distractor objects? Do they become better at moving their eyes through spatial locations? In addition, if visual search is improved by learning across trials, what conditions support or preclude such learning?

Michael C. Hout and Stephen D. Goldinger, Department of Psychology, Arizona State University.

We thank Jessica Dibartolomeo, Evan Landtroop, Nichole Kempf, Holly Sansom, Leah Shaffer, and Marissa Boardman for assistance in data collection. We also thank Matthew Peterson, Tamiko Azuma, and Donald Homa for helpful comments. Support was provided by National Institutes of Health Grant R01 DC 004535-11 to S. D. Goldinger.

Correspondence concerning this article should be addressed to Stephen D. Goldinger, Department of Psychology, Box 871104, Arizona State University, Tempe, AZ 85287-1104. E-mail: [email protected]

Journal of Experimental Psychology: Human Perception and Performance, 2011, Vol. ●●, No. ●, 000–000. © 2011 American Psychological Association. 0096-1523/11/$12.00 DOI: 10.1037/a0023894

Attentional Guidance in Visual Search

Imagine that you are asked to locate a target letter among a set of distractor letters. It would be quite easy if, for instance, the target was an X presented among a set of Os, or presented in green amid a set of red distractors. In such cases, attentional deployment in visual search is guided by attributes that allow you to process information across the visual field in parallel, directing your attention (and subsequently your eyes; Deubel & Schneider, 1996; Hoffman & Subramaniam, 1995) immediately to the target. Among these preattentive attributes are color, orientation, size, and motion (Treisman, 1985; Wolfe, 2010); they can be used to guide attention in a stimulus-driven manner (e.g., pop-out of a color singleton), or in a top-down, user-guided manner (e.g., searching for a blue triangle among blue squares and red triangles). In the latter case (conjunctive search), attention may be swiftly focused on a subset of items (i.e., the blue items, or the triangles), after which you may serially scan the subset until you locate the target. Accordingly, Feature Integration Theory (FIT; Treisman & Gelade, 1980) explains search behavior using a two-stage architecture: A preattentive primary stage scans the whole array for guiding features, and a secondary, serial-attentive stage performs finer-grained analysis for conjunction search. Like FIT, the influential Guided Search (GS) model (Wolfe, Cave, & Franzel, 1989; Wolfe & Gancarz, 1996) proposes a preattentive first stage and an attentive second stage. In the preattentive stage, bottom-up and top-down information sources are used to create a “guidance map” that, in turn, serves to guide attentional deployments in the second stage. Bottom-up guidance directs attention toward objects that differ from their neighbors, and top-down guidance directs attention toward objects that share features with the target. Implicit in an early version of the model (GS2; Wolfe, 1994) was the assumption that people accumulate information about the contents of a search array over the course of a trial. Once an item is rejected, it is “tagged,” inhibiting the return of attention (Klein, 1988; Klein & MacInnes, 1999). GS2 was therefore memory-driven, as knowledge of previous attentional deployments was assumed to guide future attentional deployments (see also Eriksen & Lappin, 1965; Falmagne & Theios, 1969).
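To make the two-stage logic concrete, the sketch below combines a bottom-up signal (how much each item differs from the rest of the display) with a top-down signal (how similar each item is to the target template) into one guidance value per item. The feature coding and the equal weighting are illustrative assumptions for exposition; they are not the parameters of any published Guided Search implementation.

```python
import numpy as np

def guidance_map(features, target, w_bottom_up=1.0, w_top_down=1.0):
    """Toy guidance signal in the spirit of Guided Search.

    features : (n_items, n_dims) array of item feature values
    target   : (n_dims,) array describing the searched-for target
    Returns one activation per item; attention would be deployed to
    items in decreasing order of activation.
    """
    # Bottom-up: items that differ from the display average stand out.
    bottom_up = np.linalg.norm(features - features.mean(axis=0), axis=1)
    # Top-down: items that resemble the target template are favored.
    top_down = -np.linalg.norm(features - target, axis=1)
    return w_bottom_up * bottom_up + w_top_down * top_down

# Example: searching for a "red, vertical" item among mostly green items.
feats = np.array([[1.0, 0.0],   # red, vertical (the target)
                  [0.0, 0.0],   # green, vertical
                  [0.0, 0.0],   # green, vertical
                  [0.0, 1.0]])  # green, tilted
print(guidance_map(feats, target=np.array([1.0, 0.0])))  # target scores highest
```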

Several studies, however, suggest that visual search processes are in fact “amnesic” (Horowitz & Wolfe, 2005). The randomized search paradigm (Horowitz & Wolfe, 1998) was developed to prevent participants from using memory during search. In this design, a traditional, static-search condition (wherein search objects remain stationary throughout a trial) is compared to a dynamic-search condition, wherein items are randomly repositioned throughout a trial (typically, the target is a rotated T, presented among distracting rotated Ls). If search processes rely on memory, this dynamic condition should be a hindrance. If search is indeed amnesic, performance should be equivalent across conditions. Results supported the amnesic account, with statistically indistinguishable search slopes across conditions (see Horowitz & Wolfe, 2003, for replications). In light of such evidence, the most recent GS model (GS4; Wolfe, 2007) maintains no history of previous attentional deployments; it is amnesic. A core claim of Guided Search Theory, therefore, is that the deployment of attention is not determined by the history of previous deployments. Rather, it is guided by the salience of the stimuli. Memory, however, is not completely irrelevant to search performance in GS4 (Horowitz & Wolfe, 2005); we return to this point later. Importantly, although GS4 models search performance without keeping track of search history, the possibility remains that, under different circumstances (or by using different analytic techniques), we may find evidence for memory-driven attentional guidance. Indeed, several studies have suggested that focal spatial memory may guide viewing behavior in the short term by biasing attention away from previously examined locations (Beck et al., 2006; Beck, Peterson, & Vomela, 2006; Dodd, Castel, & Pratt, 2003; Gilchrist & Harvey, 2000; Klein, 1988; Klein & MacInnes, 1999; Muller & von Muhlenen, 2000; Takeda & Yagi, 2000; Tipper, Driver, & Weaver, 1991).

To assess claims of amnesic search, Peterson et al. (2001) examined eye-movement data, comparing them to Monte Carlo simulations of a memoryless search model. Human participants located a rotated T among rotated Ls; pseudosubjects located a target by randomly selecting items to “examine.” For both real and pseudosubjects, revisitation probability was calculated as a function of lag (i.e., the number of fixations between the original fixation and the revisitation). A truly memoryless search model is equivalent to sampling with replacement; it therefore predicts a small possibility that search will continue indefinitely. Thus, its hazard function (i.e., the probability that a target will be found, given that it has not already been found) is flat, because the potential search set does not decrease as items are examined. In contrast, memory-driven search predicts an increasing hazard function—as more items are examined, the likelihood increases that the next item will be the target. The results were significantly different from those predicted by the memoryless search model. Peterson et al. concluded that visual search has retrospective memory for at least 12 items, because participants rarely reexamined items in a set size of 12 (see also Dickinson & Zelinsky, 2005; Kristjansson, 2000). Subsequent research (McCarley et al., 2003) suggested a more modest estimate of 3 to 4 items (see also Horowitz & Wolfe, 2001).
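The contrast between the two hazard functions is easy to simulate. The sketch below is our own illustration, not the simulation code of Peterson et al. (2001): an amnesic pseudosubject samples items with replacement and yields a roughly flat hazard function, whereas a pseudosubject with perfect memory samples without replacement and yields an increasing one.

```python
import random

def fixations_to_find_target(set_size, memory, rng):
    """Number of items examined before the target is found."""
    target = rng.randrange(set_size)
    examined = set()
    n = 0
    while True:
        n += 1
        if memory:
            # Sampling without replacement: never revisit a rejected item.
            item = rng.choice([i for i in range(set_size) if i not in examined])
            examined.add(item)
        else:
            # Sampling with replacement: amnesic search.
            item = rng.randrange(set_size)
        if item == target:
            return n

def hazard(set_size, memory, n_sims=20000, max_step=12, seed=1):
    """P(target found on examination k | not found earlier), for k = 1..max_step."""
    rng = random.Random(seed)
    counts = [fixations_to_find_target(set_size, memory, rng) for _ in range(n_sims)]
    return [sum(c == k for c in counts) / max(sum(c >= k for c in counts), 1)
            for k in range(1, max_step + 1)]

print("amnesic:", [round(h, 2) for h in hazard(12, memory=False)])  # roughly flat at 1/12
print("memory: ", [round(h, 2) for h in hazard(12, memory=True)])   # increases toward 1
```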

What do these conflicting results suggest about attentional guidance in visual search? The conclusions are threefold: 1) Visual attention is efficiently directed by both bottom-up features, and by user-defined, top-down guidance. 2) Search performance can be modeled with an architecture that disregards previous attentional deployments. And 3) despite the second conclusion, there is reason to believe that memory may bias attention away from previously visited locations.

The Role of Implicit Memory in Visual Search

Although the role of memory in attentional guidance is a controversial topic, it is clear that implicit memory for display consistencies can facilitate performance. For example, research on perceptual priming has consistently shown that repeated targets are found quickly, relative to novel targets (Kristjansson & Driver, 2008; Kristjansson, Wang, & Nakayama, 2002). Moreover, Geyer, Muller, and Krummenacher (2006) found that, although such priming is strongest when both target and distractor features are repeated, it also occurs when only distractor features are repeated. They suggested that such distractor-based priming arises because distractor repetition allows faster perceptual grouping of homogeneous distractors. With repetition, the salience of even a novel target is increased because it “stands out” against a background of perceptually grouped nontargets (see also Hollingworth, 2009). In similar fashion, Yang, Chen, and Zelinsky (2009) found preferential fixation for novel distractors (relative to old ones), suggesting that when search objects are repeated, attention is directed toward the novel items.

As noted earlier, the contextual cueing paradigm (Chun & Jiang, 1998, 1999; Jiang & Chun, 2001; Jiang & Leung, 2005; Jiang & Song, 2005) also suggests that implicit memory can facilitate visual search. The typical finding is that, when predictive displays (i.e., those in which a target is placed in a consistent location in a particular configuration of distractors) are repeated, target localization is speeded, even when observers are not aware of the repetition. Chun and Jiang (2003) found that contextual cuing was implicit in nature. In one experiment, following search, participants were shown the repeated displays, and performed at chance when asked to indicate the approximate location of the target. In another experiment, participants were told that the displays were repeated and predictive. Such explicit instructions failed to enhance contextual cueing, or explicit memory for target locations. Thus, when global configurations are repeated, people implicitly learn associations between spatial layouts and target placement, and use such knowledge to guide search (but see also Kunar, Flusberg, Horowitz, & Wolfe, 2007; Kunar, Flusberg, & Wolfe, 2006; Smyth & Shanks, 2008).¹

The Present Investigation

In the current study, we examined the dynamics of attentional deployment during incidental learning of repeated search arrays. We tested whether search performance would benefit from incidental learning of spatial information and object identities, and we monitored both eye movements and search performance across trials. We used an unpredictable search task for complex visual stimuli; the location and identity of targets was randomized on every trial, and potential targets were only seen once. Thus, participants were given a difficult, “serial” search task that required careful scanning. Of critical importance, across conditions, different aspects of the displays were repeated across trials. Specifically, we repeated the spatial configurations, the identities of the distractor objects, both, or neither. We had several guiding questions: 1) When target identities and locations are unpredictable, can incidental learning of context (spatial layouts, object identities) facilitate search? 2) If search improves as a function of learning, does attentional deployment become more efficient? And 3) how are changes in search performance reflected in viewing behavior?

To obtain a wide range of viewing behavior, we varied levels of visual working memory (WM) load, requiring finer-grained distinctions at higher levels (Menneer et al., 2007; Menneer, Cave, & Donnelly, 2008), but remaining within the capacity of visual WM (Luck & Vogel, 1997). Across experiments, we systematically decoupled the consistency of distractor objects and spatial configurations. In Experiment 1, spatial configurations were held constant within blocks; in Experiment 2, spatial configurations were randomized on each trial. Each experiment had two between-subjects conditions: In the first condition, object identities were fixed within blocks of trials. In the second condition, objects were repeated throughout the experiment, but were randomly grouped within and across blocks. Specifically, in the consistent conditions, one set of objects was used repeatedly throughout one block of trials, and a new set of objects was used repeatedly in the next block, and so forth. In the inconsistent conditions, we spread the use of all objects across all blocks, in randomly generated sets, with the restriction that all objects must be repeated as often as in the consistent conditions. In this manner, learning of the distractor objects was equally possible, but they were inconsistently grouped for half the participants. In both experiments, we varied the set size across blocks so we could examine search efficiency.

Following all search trials, a surprise two-alternative forced-choice (2AFC) token-discrimination test was given to assess incidental retention of target and distractor identities. Time series analyses were performed on search RTs, number of fixations per trial, and average fixation durations. We also regressed recognition memory performance on the total number of fixations and total gaze duration (per image) to determine which measure(s) of viewing behavior contributed to the retention of visual details for search items. Based on a recent study (Hout & Goldinger, 2010), we expected to find faster search RTs as a function of experience with the displays. As such, we contrasted three hypotheses pertaining to the specific conditions under which facilitation would occur, and how this would be reflected in the eye movements (note that the hypotheses are not mutually exclusive).

We know from prior work that memory for complex visual tokens is acquired during search (Castelhano & Henderson, 2005; Williams, 2009), and that search for familiar items is speeded, relative to unfamiliar ones (Mruczek & Sheinberg, 2005). Additionally, according to Duncan and Humphreys’ (1989, 1992) attentional engagement theory of visual search, the difficulty of search depends on how easily a target gains access to visual short-term memory (VSTM). Accordingly, our rapid identification and dismissal (RID) hypothesis posits that, as the identities of search items are learned over time, RTs may be speeded by increased fluency in the identification and dismissal of distractors. This is an intuitive hypothesis: When a person sees the same distractor object repeatedly, it should become easier to recognize and reject. The RID hypothesis makes several specific predictions. First, we should find faster RTs across trials in any condition wherein participants learn the search items, as measured by our surprise memory test. Second, this search facilitation should derive from shorter object fixations over time. By analogy, fixation durations in reading are typically shorter for high-frequency words, relative to low-frequency words (Reichle, Pollatsek, & Rayner, 2006; Schilling, Rayner, & Chumbley, 1998; Vanyukov, Warren, & Reichle, 2009).

Alternatively, the spatial mapping hypothesis suggests that repeated spatial contexts enhance memory for item locations (see, e.g., McCarley et al., 2003). By this hypothesis, incidental learning of repeated layouts may allow participants to avoid unnecessary refixations. As such, the spatial mapping hypothesis predicts significant search facilitation in Experiment 1, wherein spatial configurations were held constant within blocks. However, if spatial mapping is solely responsible for search facilitation, we should find no such effect in Experiment 2, wherein configurations were randomized on each trial. With respect to eye movements, we should find that search facilitation derives from people making fewer fixations over time. (Note that such a spatial mapping mechanism could work in tandem with RID.)

Finally, following the work of Kunar et al. (2007), the search confidence hypothesis suggests a higher-level mechanism.

¹ For real-world scenes, observers are also able to learn the relationship between scene context and target position, but such learning tends not to be available for explicit recall (Brockmole, Castelhano, & Henderson, 2006; Brockmole & Henderson, 2006).



Whereas the foregoing hypotheses suggest specific enhancements in viewing behaviors, search confidence suggests that learning a repeated display may decrease the information required for the searcher to reach a decision. By this view, familiar contexts encourage less conservative response thresholds, enabling faster decisions. Both object and spatial learning would contribute to this effect: Knowledge of a fixed spatial configuration may provide increased certainty that all relevant locations have been inspected, and memory for specific objects may reduce perceptual uncertainty about items under inspection, making targets more easily distinguished from distractors. The search confidence hypothesis therefore predicts that search facilitation will occur under coherent learning conditions. When object identities are consistent within blocks (Experiments 1a, 2a), facilitation should be robust. In Experiments 1b and 2b, despite object repetition, the lack of trial-to-trial set coherence should prevent people from searching faster over time. We should find that spatial knowledge aids, but is not critical to, facilitation (see Hout & Goldinger, 2010). With respect to eye movements, the search confidence hypothesis predicts fewer fixations as a function of experience, indicating an earlier termination of search, rather than an increased ability to process individual items.

Experiment 1

In Experiment 1, people searched for new targets (indicating their presence or absence) among repeated distractors in fixed spatial configurations. When present, the target replaced a single distractor object, and no target appeared more than once in any condition. As such, the displays conveyed no useful information about likely target location (i.e., no contextual cueing). Spatial layouts were randomly generated at the start of each block, and then held constant throughout the entire block. In Experiment 1a, distractor identities were fixed—the same nontarget items appeared in consistent locations throughout each block of trials. (New distractors were used in subsequent blocks.) In Experiment 1b, distractor identities were quasi-randomized: They were not coherently grouped within sets, and could appear in any search block, in any of the fixed locations. Figure 1 shows these two forms of object consistency. Experiment 1 was thus designed to determine whether (when spatial configurations are held constant) visual search facilitation occurs through familiarity with coherent search sets, or through knowledge of individual search objects. Following all search trials, we administered a 2AFC, surprise recognition memory test. Participants were presented with the previously seen distractor and target images, each with a semantically matching foil (e.g., if the “old” image was a shoe, the matched foil would be a new, visually distinct shoe). Randomization of search items in Experiment 1b was constrained such that the number of presentations for every item was equivalent across both experiments, making recognition memory performance directly comparable.

Method

Participants. One hundred and eight students from Arizona State University participated in Experiment 1 as partial fulfillment of a course requirement (18 students participated in each of six between-subjects conditions). All participants had normal or corrected-to-normal vision.

Design. Two levels of object consistency (consistent, Experiment 1a; inconsistent, Experiment 1b), and three levels of WM load (low, medium, high) were manipulated between-subjects. In every condition, three search blocks were presented, one for each level of set size (12, 16, or 20 images). Target presence during search (present, absent) was a within-subjects variable, in equal proportions.

Stimuli. Search objects were photographs of real-world objects, selected to avoid any obvious categorical relationships among the images. Most categories were represented by a single pair of images (one randomly selected for use in visual search, the other its matched foil for 2AFC). Images were resized, maintaining original proportions, to a maximum of 2.5° in visual angle (horizontal or vertical), from a viewing distance of 55 cm. Images were no smaller than 2.0° in visual angle along either dimension. Stimuli contained no background, and were converted to grayscale. A single object or entity was present in each image (e.g., an ice cream cone, a pair of shoes). Elimination of color, coupled with the relatively small size of individual images, minimized parafoveal identification of images.
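For readers who want to convert these sizes to pixels, the relevant geometry is size = 2 × distance × tan(angle / 2). The sketch below applies it at the 55-cm viewing distance; the pixel pitch is an assumed value, since the monitor's physical screen dimensions are not reported here.

```python
import math

def visual_angle_to_pixels(angle_deg, viewing_distance_cm, pixels_per_cm):
    """On-screen size (in pixels) subtending `angle_deg` at the given distance."""
    size_cm = 2 * viewing_distance_cm * math.tan(math.radians(angle_deg) / 2)
    return size_cm * pixels_per_cm

# Assumed pixel pitch (roughly 32 px/cm for a 21-in. CRT at 1280 x 1024);
# the true value depends on the visible screen area of the monitor used.
PX_PER_CM = 32.0
print(round(visual_angle_to_pixels(2.5, 55, PX_PER_CM)))  # maximum image size, ~77 px
print(round(visual_angle_to_pixels(2.0, 55, PX_PER_CM)))  # minimum image size, ~61 px
```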

Apparatus. Data were collected using a Dell Optiplex 755 PC (2.66 GHz, 3.25 GB RAM). Our display was a 21-inch NEC FE21111 CRT monitor, with resolution set to 1280 × 1024, and a refresh rate of 60 Hz. E-Prime v1.2 software (Schneider, Eschman, & Zuccolotto, 2002) was used to control stimulus presentation and collect responses. Eye movements were recorded by an EyeLink 1000 eye-tracker (SR Research Ltd., Mississauga, Ontario, Canada), mounted on the desktop. Temporal resolution was 1000 Hz, and spatial resolution was 0.01°. An eye movement was classified as a saccade when its distance exceeded 0.5° and its velocity reached 30°/s (or acceleration reached 8000°/s²).
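As a rough illustration of these parser settings, the sketch below labels gaze samples as saccadic when velocity or acceleration exceeds the stated thresholds and the movement spans at least 0.5°. It is a simplified stand-in for, not a reimplementation of, the EyeLink parser.

```python
import numpy as np

def classify_saccades(x_deg, y_deg, hz=1000, vel_thresh=30.0,
                      acc_thresh=8000.0, min_amp=0.5):
    """Boolean mask of saccade samples from gaze position arrays (in degrees)."""
    dt = 1.0 / hz
    vx, vy = np.gradient(x_deg, dt), np.gradient(y_deg, dt)
    speed = np.hypot(vx, vy)                 # deg/s
    accel = np.abs(np.gradient(speed, dt))   # deg/s^2
    candidate = (speed > vel_thresh) | (accel > acc_thresh)

    # Keep only candidate runs whose total displacement is at least min_amp.
    saccade = np.zeros(len(x_deg), dtype=bool)
    edges = np.flatnonzero(np.diff(np.concatenate(([0], candidate.astype(int), [0]))))
    for start, stop in zip(edges[0::2], edges[1::2]):
        amplitude = np.hypot(x_deg[stop - 1] - x_deg[start],
                             y_deg[stop - 1] - y_deg[start])
        if amplitude >= min_amp:
            saccade[start:stop] = True
    return saccade
```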

Procedure.

Visual search. Participants completed three blocks of visual search, one for each set size (12, 16, 20 images). Each of the six possible permutations for the order of presentation of set sizes was evenly counterbalanced across participants. Each block consisted of an equal number of target-present and target-absent trials, randomly dispersed, for a total of 96 search trials. The number of trials per block was twice the number of items in the display (such that each item could be replaced once per block, while maintaining 50% target-presence probability). On target-present trials, the target replaced a single distractor, occupying its same spatial location. In Experiment 1a, the same distractor sets were repeated for an entire block (24, 32, or 40 trials). A new distractor set was introduced for each block of trials. In Experiment 1b, distractors were randomized and could appear in any search block, in any of the fixed locations (there were no restrictions regarding how many items could be repeated from trial n to trial n + 1). A 1-min break was given between blocks to allow participants to rest their eyes. Spatial layouts were randomly generated for each participant, and held constant throughout a block of trials. Target images were randomized and used only once. Participants were not informed of any regularity regarding spatial configuration or repeated distractors.

At the beginning of each trial, participants were shown one, two, or three different potential target images (low-, medium-, and high-load conditions, respectively) to be held in WM for the duration of the trial. Real-time gaze monitoring ensured that participants looked at each target for at least one second. Once a target was sufficiently viewed, the border around the image changed to green, signifying the criterion was met. When each target had a green border, the participant was able to begin search, or continue viewing the targets until ready to begin. A spacebar press cleared the targets from the screen, replacing them with a central fixation cross. Once the cross was fixated for one second (to ensure central fixation at the start of each trial), it was replaced by the search array presented on a blank white background. Participants searched until a single target was located, or it was determined that all targets were absent. People rested their fingers on the spacebar during search; once a decision had been reached, they used the spacebar to terminate the display and clear the images from view. Thereafter, they were prompted to indicate their decision (present, absent) using the keyboard. Brief accuracy feedback was given, followed by a one-second delay screen before starting the next trial. Figure 2 shows the progression of events in a search trial.

Instructions emphasized accuracy over speed. Eight practice trials were given: The first six consisted of single-target search, allowing us to examine any potential preexisting between-groups differences by assessing participants’ search performance on precisely the same task. To familiarize the higher load participants with multiple-target search, the final two practice trials involved two and three targets, for medium- and high-load groups, respectively. (The low-load group continued to search for a single target.)

Figure 1. Examples of both forms of object consistency used in Experiment 1 (for a set size of 12 items). The left panel depicts a starting configuration. The top-right panel shows a subsequent trial with consistent object identities (Experiment 1a); the bottom-right panel shows randomized object identities (Experiment 1b). Spatial configurations were constant in both experiments. Gridlines were imaginary; they are added to the figure for clarity.

Search array organization. A search array algorithm was used to create spatial configurations with pseudorandom organization. An equal number of objects appeared in each quadrant (three, four, or five, according to set size). Locations were generated to ensure a minimum of 1.5° of visual angle between adjacent images, and between any image and the edges of the screen. No images appeared in the center four locations (see Figure 1), to ensure that the participant’s gaze would never immediately fall on a target at the start of a trial.
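A minimal sketch of this kind of placement routine is given below: quadrant-balanced positions, a pairwise minimum spacing, an edge margin, and an excluded central region. The display dimensions, the size of the central exclusion zone, and the rejection-sampling approach are our assumptions for illustration; the exact placement algorithm is not specified beyond the constraints above.

```python
import math
import random

def generate_layout(set_size, width, height, min_gap, center_exclusion,
                    rng=random):
    """Place item centers, balanced across quadrants, with minimum spacing.

    width, height, min_gap, and center_exclusion share one unit
    (e.g., degrees of visual angle). Returns a list of (x, y) centers.
    """
    per_quadrant = set_size // 4            # 3, 4, or 5 items per quadrant
    quadrants = [(0, 0), (width / 2, 0), (0, height / 2), (width / 2, height / 2)]
    placed = []
    for qx, qy in quadrants:
        for _ in range(per_quadrant):
            for _attempt in range(10000):
                # Keep a min_gap margin from the screen edges as well.
                x = qx + rng.uniform(min_gap, width / 2 - min_gap)
                y = qy + rng.uniform(min_gap, height / 2 - min_gap)
                near_center = (abs(x - width / 2) < center_exclusion and
                               abs(y - height / 2) < center_exclusion)
                too_close = any(math.hypot(x - px, y - py) < min_gap
                                for px, py in placed)
                if not near_center and not too_close:
                    placed.append((x, y))
                    break
            else:
                raise RuntimeError("Could not place item; relax the constraints.")
    return placed

# Example: 12 items on a hypothetical 36 x 27 deg display, 1.5 deg spacing,
# nothing within 4 deg of the display center.
layout = generate_layout(12, 36.0, 27.0, 1.5, 4.0, random.Random(0))
print(len(layout), layout[0])
```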

Recognition. Following the search blocks, participants were given a surprise 2AFC recognition memory test for all objects encountered during search. Participants saw two objects at a time on a white background: one previously seen image and a matched foil, equally mapped to either side of the screen. (At the start of the experiment, image selection was randomized such that one item from each pair was selected for use in visual search, and its partner for use as a recognition foil.) Participants indicated which image they had seen (using the keyboard), and instructions again emphasized accuracy. Brief feedback was provided. All 48 distractor-foil pairs were tested. In addition, each of the target-present targets (i.e., targets that appeared in the search displays) was tested, for a total of 96 recognition trials. All pairs were pooled and order of presentation randomized to minimize any effect of time elapsed since learning. Although distractor sets were quasi-random across trials in Experiment 1b, the number of presentations across items was constrained to match that of Experiment 1a, making the recognition results directly comparable.

Figure 2. Timeline showing the progression of events for a single visual search trial. Participants were shown the target image(s) and progressed to the next screen upon key press. Next, a fixation cross was displayed and automatically cleared from the screen following 1-s fixation. The visual search array was then presented, and terminated upon spacebar press. Finally, target presence was queried, followed by 1-s accuracy feedback, and a 1-s delay before the start of the next trial.

Eye-tracking. Monocular sampling was used at a collection rate of 1000 Hz. Participants used a chin-rest during all search trials, and were initially calibrated to ensure accurate tracking. Periodic drift correction and recalibrations ensured accurate recording of gaze position. Interest areas (IAs) were defined as the smallest rectangular area that encompassed a given image, plus .25° of visual angle on each side (to account for error or calibration drift).

Results

As noted earlier, we examined practice trials to identify any a priori differences across load groups when all participants searched for a single target. No differences were found in search performance (accuracy, RTs) or viewing behavior (average number of fixations, average fixation duration). Similarly, we found no differences across permutation orders of the counterbalanced design. Overall error rates for visual search were very low (5%). For the sake of brevity, we do not discuss search accuracy, but results are shown in Table 1.²

Visual search RTs. Search RTs were divided into epochs, with each comprising 25% of the block’s trials. Each block included a different number of trials (varying by set size); thus, epochs were composed of 3, 4, and 5 trials, for set sizes 12, 16, and 20, respectively. Only RTs from accurate searches were included in the analysis. Following Endo and Takeda (2004; Hout & Goldinger, 2010), we examined RTs as a function of experience with the search display. Main effects of Epoch would indicate that RTs reliably changed (in this case, decreased) as the trials progressed. Although Experiment 1 had many conditions, the key results are easily summarized. Figure 3 shows mean search RTs, plotted as a function of Experiment, WM Load, Set Size, and Trial Type. As shown, search times were slower in Experiment 1b, relative to Experiment 1a. Participants under higher WM load searched slower, relative to those under lower WM load. Search was slower among larger set sizes, and people were slower in target-absent trials, relative to target-present. Of key interest, reliable learning (i.e., an effect of Epoch) was observed in Experiment 1a, but not in Experiment 1b. Despite the facilitation in search RTs in Experiment 1a, there was no change in search efficiency across epochs. Figure 4 shows search RTs across epochs, plotted separately for target-present and absent trials, for each experiment. The second and third rows show fixation counts and average fixation durations, plotted in similar fashion; these findings are discussed next.
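As a concrete sketch of the epoch binning (using hypothetical per-trial records for one block and one trial type), each epoch is simply one quarter of that block's trials in order of occurrence, with only accurate searches contributing RTs:

```python
def mean_rt_by_epoch(trials, n_epochs=4):
    """trials: list of dicts with 'rt' (ms) and 'correct', in presentation order."""
    per_epoch = len(trials) // n_epochs     # e.g., 3, 4, or 5 trials per epoch
    means = []
    for e in range(n_epochs):
        chunk = trials[e * per_epoch:(e + 1) * per_epoch]
        rts = [t["rt"] for t in chunk if t["correct"]]  # accurate searches only
        means.append(sum(rts) / len(rts) if rts else float("nan"))
    return means

# Hypothetical target-present trials from one block at set size 12:
block = [{"rt": 2400 - 40 * i, "correct": True} for i in range(12)]
print(mean_rt_by_epoch(block))  # mean RT for epochs 1-4
```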

RTs were entered into a five-way, repeated measures ANOVA, with Experiment (1a, 1b), WM Load (low, medium, high), Set Size (12, 16, 20), Trial Type (target-present, target-absent), and Epoch (1–4) as factors. We found an effect of Experiment, F(1, 102) = 4.88, ηp² = .05, p = .03, with slower search in Experiment 1b (3,039 ms), relative to Experiment 1a (2,763 ms). There was an effect of Load, F(2, 102) = 83.97, ηp² = .62, p < .001, with slowest search among the high-load group (3,840 ms), followed by medium-load (3,000 ms), and low-load (1,862 ms). There was also an effect of Set Size, F(2, 101) = 133.16, ηp² = .73, p < .001, with slower search among larger set sizes (2,448, 2,958, and 3,297 ms for 12, 16, and 20 items, respectively). We found an effect of Trial Type, F(1, 102) = 659.67, ηp² = .87, p < .001, with participants responding slower to target-absence (3,794 ms), relative to target-presence (2,007 ms). Of key interest, there was a main effect of Epoch, F(3, 100) = 12.33, ηp² = .27, p < .001, indicating that search RTs decreased significantly within blocks of trials (3,040, 2,864, 2,870, and 2,829 ms for epochs 1–4, respectively).

Of key interest, we found an Experiment × Epoch interaction, F(3, 100) = 8.67, ηp² = .21, p < .001, indicating a steeper learning slope (i.e., the slope of the best fitting line relating RT to Epoch) in Experiment 1a (−129 ms/epoch), relative to Experiment 1b (−3 ms/epoch). The Set Size × Epoch interaction was not significant (F < 2), indicating no change in search efficiency over time. Because the effects of Epoch were of particular interest, we also tested for simple effects separately for each experiment and trial type. We found significant simple effects of Epoch on both trial types in Experiment 1a (target-absent: F(3, 49) = 10.44, ηp² = .39, p < .001; target-present: F(3, 49) = 10.88, ηp² = .40, p < .001), indicating that RTs decreased on both target-absent and target-present trials. In contrast, we found no effects of Epoch on either trial type in Experiment 1b (target-absent: F(3, 49) = 1.81, p = .16; target-present: F(3, 49) = 0.55, p = .65).
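The "learning slope" is simply the slope of a least-squares line relating mean RT to epoch number. A one-function sketch, using the collapsed epoch means reported above as example input (the per-experiment slopes were computed from each experiment's own means):

```python
import numpy as np

def learning_slope(epoch_means):
    """Slope (units per epoch) of the best-fitting line through epoch means."""
    epochs = np.arange(1, len(epoch_means) + 1)
    slope, _intercept = np.polyfit(epochs, epoch_means, deg=1)
    return slope

print(round(learning_slope([3040, 2864, 2870, 2829])))  # about -63 ms/epoch
```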

Eye movements. We performed time-series analyses on two eye-movement measures: number of fixations, and average fixation durations. Our questions were: 1) Over time, do people demonstrate learning by making fewer fixations? And 2) do fixations become shorter as people acquire experience with the distractor items?

Number of fixations. Average fixation counts per trial were entered into a five-way, repeated-measures ANOVA (see Visual Search RTs, above). The results mirrored those of the search RTs. Participants in Experiment 1b made more fixations than those in Experiment 1a. More fixations were also committed by higher load groups, among larger set sizes, and when the target was absent. Of particular interest, reliable effects of Epoch were found in Experiment 1a, but not Experiment 1b (see Figure 4).

Each of the main effects was significant: Experiment, F(1, 102) = 7.63, ηp² = .07, p = .01; Load, F(2, 102) = 88.17, ηp² = .63, p < .001; Set Size, F(2, 101) = 148.70, ηp² = .75, p < .001; Trial Type, F(1, 102) = 677.05, ηp² = .87, p < .001; Epoch, F(3, 100) = 11.50, ηp² = .26, p < .001. More fixations occurred in Experiment 1b (14.14), relative to Experiment 1a (12.73), and among higher load groups relative to lower load (9.08, 13.92, and 17.31 for low-, medium-, and high-load, respectively). Larger set sizes elicited more fixations (11.35, 13.67, and 15.28 for 12, 16, and 20 items, respectively), as did target-absent trials (17.39) relative to target-present (9.48). Of key interest, fixation counts decreased significantly within blocks of trials (14.09, 13.37, 13.26, and 13.02 for epochs 1–4, respectively).

² We examined error rates in a five-way, repeated-measures ANOVA (see Experiment 1, visual search RTs). We found main effects of Load, Trial Type, Set Size, and Epoch (all Fs > 3, ps < .05). There were more errors among the higher load groups, participants committed more misses than false alarms, and error rates were higher when search was conducted in larger sets. Of key interest, errors decreased by 2% across epochs, indicating no speed–accuracy trade-off with search RTs.

Of key interest, we found an Experiment × Epoch interaction, F(3, 100) = 6.78, ηp² = .17, p < .001, indicating a steeper learning slope in Experiment 1a (−.59 fixations/epoch) relative to Experiment 1b (−.08 fixations/epoch). The Set Size × Epoch interaction was not significant (F < 2). We again found significant simple effects of Epoch on both trial types in Experiment 1a (target-absent: F(3, 49) = 9.12, ηp² = .36, p < .001; target-present: F(3, 49) = 13.39, ηp² = .45, p < .001), indicating that fixations decreased on both target-absent and target-present trials. We found no effects of Epoch in Experiment 1b (target-absent: F(3, 49) = 1.45, p = .24; target-present: F(3, 49) = 1.64, p = .19).

Average fixation durations. The average fixation durations were entered into a five-way, repeated-measures ANOVA (identical to number of fixations). The main effect of Experiment was not significant, F(1, 102) = 1.68, p = .20. Each of the remaining four main effects was reliable: Load, F(2, 102) = 26.88, ηp² = .35, p < .001; Set Size, F(2, 101) = 25.73, ηp² = .34, p < .001; Trial Type, F(1, 102) = 715.11, ηp² = .88, p < .001; Epoch, F(3, 100) = 3.92, ηp² = .11, p = .01. When search RTs were longer (among higher load groups, larger set sizes, and target-absent trials), average fixation durations were shorter. Fixation durations were shortest among high-load participants (260 ms), followed by medium-load (270 ms) and low-load (316 ms). Durations were shorter when search was conducted among more items (291, 283, and 272 ms for 12, 16, and 20 items, respectively), and on target-absent trials (234 ms) relative to target-present (330 ms). Of key interest, and contrary to the RID hypothesis (which predicted decreasing fixation durations over epochs), average fixation durations changed significantly within blocks, increasing slightly (276, 283, 285, and 283 ms for epochs 1–4, respectively).

Recognition. Recognition accuracy was entered into a three-way ANOVA with Experiment, WM Load, and Image Type (target, distractor) as factors. Figure 5 shows recognition accuracy as a function of Load and Image Type. As shown, higher load groups remembered more items, relative to lower load groups, and targets were remembered better than distractors. In all cases, search items were remembered at levels exceeding chance (50%). The main effect of Experiment was not significant, F(1, 102) = 2.82, p = .10. We found a main effect of Load, F(2, 102) = 7.36, ηp² = .13, p < .001, with high- and medium-load participants performing best (both 86%), relative to low-load (81%). There was a main effect of Image Type, F(1, 102) = 444.36, ηp² = .81, p < .001, with better performance on targets (94%), relative to distractors (74%). We also found a Load × Image Type interaction, F(1, 102) = 14.38, ηp² = .22, p < .001. Planned comparisons revealed that recognition rates for target images (94% for each load group) were not statistically different (F < 1). Memory for distractors, F(2, 102) = 12.97, ηp² = .20, p < .001, however, varied as a function of Load (67, 77, and 79% for low-, medium-, and high-load, respectively).

Regression analyses. We examined the relationship of eye movements to recognition memory performance by regressing the probability of correctly remembering an item on the total number of fixations, and total dwell time (the sum of all fixation durations) received per item. Because each participant contributed multiple data points, we included a “participant” factor as the first predictor in the regression model (Hollingworth & Henderson, 2002). In addition to the eye movement measures, we included coefficients for the between-subjects factors of Experiment and WM Load. Following Williams et al. (2005), we simplified presentation by dividing the data into quartile bins, and plotted them against the mean recognition accuracy for each quartile. We are primarily interested in the relationship between viewing behavior and visual memory. Thus, Figure 6 presents recognition memory performance for distractors and targets, as a function of total number of fixations (top panel), and total dwell time (bottom panel).
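A sketch of this regression setup is given below under stated assumptions: a hypothetical long-format table with one row per image per participant, the participant factor entered first as dummy codes, and the viewing-behavior measures added afterward so their incremental R² can be inspected. Ordinary least squares from statsmodels is used as a stand-in; this is not the authors' analysis script, and it collapses the stepwise selection into a single R²-change comparison.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per distractor image per participant.
rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "participant": rng.integers(0, 20, n),
    "dwell_time": rng.gamma(2.0, 400.0, n),   # total dwell time on the item (ms)
    "n_fixations": rng.poisson(3, n),         # total fixations on the item
    "remembered": rng.integers(0, 2, n),      # 1 = correct 2AFC recognition
})

# Step 1: participant factor only (absorbs between-subject differences).
base = smf.ols("remembered ~ C(participant)", data=df).fit()
# Step 2: add the viewing-behavior predictors and examine the R-squared change.
full = smf.ols("remembered ~ C(participant) + dwell_time + n_fixations",
               data=df).fit()
print("R-squared change:", full.rsquared - base.rsquared)
```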

To determine which measurement best predicted visual memory, we entered the predictors using a stepwise procedure, following the participant factor (as in Williams, 2009). This allowed us to determine which predictor(s) accounted for significant variance in the model when each factor was considered. For the distractors, a regression model with a single factor, dwell time, accounted for the greatest change in variance, Fchange(1, 105) = 31.04, R² = .27, p < .001, with no other significant predictors. Similarly, for the targets, the best model contained a single predictor, dwell time, Fchange(1, 106) = 9.95, R² = .09, p < .001, with no other significant predictors.

Table 1
Visual Search Error Rates, as a Function of Set Size, WM Load, Trial Type, and Epoch, From Experiment 1

                             Target-absent                      Target-present
Set size  WM load group   Epoch 1  Epoch 2  Epoch 3  Epoch 4  Epoch 1  Epoch 2  Epoch 3  Epoch 4
Twelve    Low                0%       2%       0%       0%       4%       5%       5%       2%
          Medium             3%       2%       1%       0%       7%       5%       5%       5%
          High               2%       1%       2%       3%      16%       8%       7%       8%
Sixteen   Low                1%       1%       4%       1%       8%       4%       6%       6%
          Medium             3%       1%       3%       1%      10%       9%      12%       8%
          High               4%       2%       2%       0%      12%      12%       8%      10%
Twenty    Low                1%       1%       1%       2%       9%      10%       4%       4%
          Medium             3%       3%       2%       1%      12%      17%       8%      11%
          High               3%       2%       1%       2%      14%      14%      12%      13%

Note. Data are presented collapsed across Experiments 1a and 1b, as they did not differ significantly.

Block effects. We examined practice effects across blocks to ascertain whether performance changed over the course of the experiment (see Table 2). We found no effect of Block on search accuracy or RTs (Fs < 1). Regarding eye movements, there was no effect of Block on the number of fixations (F < 2). However, we did find an effect of Block on average fixation durations, F(2, 106) = 17.09, ηp² = .24, p < .001, indicating longer durations across blocks.

Figure 3. Mean visual search RTs (±1 SEM) in Experiment 1 as a function of Experiment, WM Load, Set Size, and Trial Type. Results are plotted for each WM-load group, collapsed across epochs. White, gray, and black symbols represent low-, medium-, and high-load groups, respectively.

Figure 4. Mean visual search RTs, number of fixations, and fixation durations (±1 SEM) in Experiment 1 as a function of Experiment and Epoch.

Discussion

The results of Experiment 1 carry three implications: 1) Memory for search items is incidentally acquired over time, and contributes to RT facilitation when the search set is coherently grouped. 2) Even when search is faster, it does not become any more efficient (Oliva et al., 2004; Wolfe et al., 2000). And 3) RTs are speeded by decreasing the number of fixations required to complete search, without a concomitant change in individual item processing.

When distractor sets were coherently grouped, RTs decreased across trials, consistent with our previous work (Hout & Goldinger, 2010). Conversely, when search sets were randomized across blocks, RTs remained flat. Despite finding significant facilitation (in Experiment 1a), we found no evidence for increased search efficiency (as measured by the slope of RT by set size) over time. Participants thus responded more quickly, without increasing the efficiency of attentional deployment. Moreover, we found that search facilitation was best explained by participants making fewer fixations per trial as the search sets were learned. This result is consistent with Peterson and Kramer (2001), who found that contextual cueing resulted in participants committing fewer fixations. On the other hand, average fixation durations remained flat across epochs, suggesting that participants did not increase their ability to process individual items, despite having learned their identities.

Given these results, how did our three hypotheses fare? First, because we found no change in fixation durations, there is little support for the RID hypothesis, which predicted more fluent identification and dismissal of distractors. The spatial mapping hypothesis was supported by Experiment 1a, as facilitation occurred by way of participants committing fewer fixations. However, this hypothesis also predicted facilitation in Experiment 1b (because spatial configurations were held constant), which did not occur. Moreover, we conducted a subsequent analysis, examining refixation probability as a function of “lag” (see Peterson et al., 2001) and epoch. In the event of a refixation, we recorded the number of intervening fixations since the initial visit, and examined potential changes in the distribution of refixations over various lags, as a function of epoch. We tested the possibility that, as the displays were learned, the distribution of fixations shifted. If the spatial mapping hypothesis is correct, we might expect to find that, as search RTs decreased, the likelihood of refixating an item shifts to longer and longer lags. That is, participants may get better at avoiding recently fixated items, only committing refixations when the original visit occurred far in the past. However, we found no evidence that refixation probability distributions changed over time, suggesting that participants did not become any better at avoiding recently fixated locations, which seems to argue against spatial mapping.
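A sketch of the lag computation, assuming each trial's fixations have already been assigned to item interest areas (the input is a hypothetical list of IA indices in fixation order): each refixation contributes one count at a lag equal to the number of fixations intervening since that item's previous visit (computing lag from the item's first visit instead is an equally plausible reading).

```python
from collections import Counter

def refixation_lags(fixated_items):
    """Lags of refixations within one trial.

    fixated_items : sequence of interest-area indices, one per fixation,
                    in temporal order (e.g., [3, 7, 1, 7, 2, 3, 5]).
    """
    last_seen = {}
    lags = []
    for i, item in enumerate(fixated_items):
        if item in last_seen:
            # Number of fixations intervening since the previous visit.
            lags.append(i - last_seen[item] - 1)
        last_seen[item] = i
    return lags

# Pool lags across trials, separately by epoch, to compare the distributions.
print(Counter(refixation_lags([3, 7, 1, 7, 2, 3, 5])))
# Counter({1: 1, 4: 1}): item 7 refixated at lag 1, item 3 at lag 4
```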

The search confidence hypothesis fared best with the results of Experiment 1. As it predicts, we found search facilitation under coherent set conditions, and no facilitation when search sets were randomized. Also of key importance, search facilitation occurred to equivalent degrees in both target-present and target-absent trials. While this finding is consistent with all three hypotheses, it suggests that search facilitation did not entirely derive from perceptual or attentional factors. Performance was improved only when distractors were coherently grouped, but without any evidence that people processed them more quickly on an individual basis. Because fixations decreased within blocks (in Experiment 1a), without a concurrent shift in the refixation probability distribution, it appears that search was terminated more quickly as displays grew more familiar. Although the RID hypothesis appears disconfirmed by Experiment 1, we cannot rule out the spatial mapping hypothesis. Experiment 2 was designed to further adjudicate between the spatial mapping and search confidence hypotheses.

Additionally, the results of Experiment 1 replicated and extended several previous findings. Not surprisingly, search was slower in larger sets (e.g., Palmer, 1995), and when targets were absent (e.g., Chun & Wolfe, 1996). Increasing WM load elicited slower, more difficult search processes, but increased incidental retention for distractor objects (Hout & Goldinger, 2010). Our surprise recognition test showed that all objects were incidentally learned during search, and that recognition was very high for targets, relative to distractors (Williams, 2009). Although participants had many opportunities to view the distractors (and saw targets only once), it seems that intentionally encoding the targets before search created robust visual memories for their appearance.

Figure 5. Recognition performance (±1 SEM) for targets and distractors, shown as a function of WM load, collapsed across Experiments 1a and 1b.

Early work by Loftus (1972) indicated that the number of fixations on an object was a good predictor of its later recognition, a finding that has often been replicated (Hollingworth, 2005; Hollingworth & Henderson, 2002; Melcher, 2006; Tatler, Gilchrist, & Land, 2005). Although we found a trend for increased memory as a function of fixation counts, our stepwise regression analyses indicated that memory was better predicted by a model with a single predictor of total dwell time. Our findings complement those of Williams (2009), who found that distractor memory appears to be based on cumulative viewing experience. With respect to WM load, it may be suggested that memory was better among higher load groups simply because they viewed the items longer and more often. Indeed, our WM load manipulation caused participants to make more fixations (consistent with Peterson, Beck, & Wong, 2008) and thus to spend more time on the items. In a previous experiment (Hout & Goldinger, 2010), however, we used a single-object search stream, with constant presentation times, equating stimulus exposure across WM load conditions. Higher WM loads still resulted in better memory for search items, suggesting that loading WM enhances incidental retention of visual details, perhaps by increasing the degree of scrutiny with which items are examined.

Figure 6. Probability of correct recognition of distractors and targets, as a function of total number of fixations, and total dwell time, from Experiment 1. Results were divided into quartile bins for each load group, and plotted against the mean accuracy per quartile. Memory for distractors is indicated by white, gray, and black circles for low-, medium-, and high-load groups, respectively. Squares indicate memory for targets, collapsed across WM Load.

Table 2
Search Performance (Accuracy, RTs) and Viewing Behavior (Number of Fixations, Average Fixation Duration), as a Function of Experiment and Block Number

Dependent measure                    Block 1   Block 2   Block 3

Experiment 1
  Search accuracy (% error)             5%        5%        5%
  Search RTs (ms)                    2,970     2,929     2,895
  Average number of fixations        13.73     13.41     13.18
  Average fixation duration (ms)*      272       284       290

Experiment 2
  Search accuracy (% error)             6%        5%        5%
  Search RTs (ms)*                   3,196     3,018     2,974
  Average number of fixations*       14.77     13.84     13.56
  Average fixation duration (ms)*      266       284       280

* p < .05.



Experiment 2

In Experiment 2, we removed the ability to guide search through the learning of consistent spatial configurations by randomizing the layout on each trial. In Experiment 2a, object identities remained constant within each block of trials: Participants could use experience with the distractors to improve search, but could not use any predictable spatial information. In Experiment 2b, object identities were randomized within and across blocks, as in Experiment 1b (see Figure 7). The spatial mapping hypothesis predicts that search RTs will not decrease without spatial consistency, whereas search confidence predicts that facilitation will still occur (in Experiment 2a) through the learning of coherent distractor sets. No such learning should occur in Experiment 2b. As before, the RID hypothesis predicts that equivalent facilitation should occur in both conditions.

Method

One hundred and eight new students from Arizona State University participated in Experiment 2 as partial fulfillment of a course requirement. The method was identical to the previous experiment, with the exception of search arrays, which were now randomized on each trial.

Figure 7. Examples of both forms of object consistency used in Experiment 2 (for a set size of 12 items). The left panel depicts a starting configuration. The top-right panel shows a subsequent trial with consistent object identities (Experiment 2a); the bottom-right panel shows randomized object identities (Experiment 2b). Spatial configurations were randomized on every trial, in both experiments. Gridlines were imaginary; they are added to the figure for clarity.


Results

As in Experiment 1, we examined practice trials to gauge any a priori group differences under identical task conditions. We found no differences in search performance or viewing behaviors. Similarly, we found no differences across permutation orders of the counterbalanced design. Overall error rates for visual search were very low (6%; see Table 3).³

Visual search RTs. Visual search RTs were entered into a five-way ANOVA, with Experiment, Load, Set Size, Trial Type, and Epoch as factors, as in Experiment 1. Figure 8 presents mean search times, plotted as a function of Experiment, Load, Set Size, and Trial Type. The key results are easily summarized: Search was slower as WM load increased, and when set sizes increased. RTs were slower on target-absent trials, relative to target-present. Of particular interest, we found reliable effects of Epoch in Experiment 2a, but not in Experiment 2b (see Figure 9). As before, we did not find any change in search efficiency over time.

The main effect of Experiment was not significant, F(1, 102) < 1, p = .73. There was an effect of Load, F(2, 102) = 62.56, ηp² = .55, p < .001, with slower search among higher load groups (2,035, 3,225, and 3,824 ms for low-, medium-, and high-load groups, respectively). We found an effect of Set Size, F(2, 101) = 154.99, ηp² = .75, p < .001, with slower search among larger set sizes (2,569, 3,065, and 3,451 ms for 12, 16, and 20 items, respectively). There was an effect of Trial Type, F(1, 102) = 605.39, ηp² = .86, p < .001, with slower responding to target-absence (3,907 ms) than target-presence (2,149 ms). In addition, there was a main effect of Epoch, F(3, 100) = 4.02, ηp² = .11, p < .01, indicating that search RTs decreased significantly within blocks of trials (3,093, 3,057, 2,988, and 2,974 ms for epochs 1–4, respectively).

We found a marginally significant Experiment × Epoch interaction, F(3, 100) = 2.38, ηp² = .07, p = .07, with a steeper learning slope in Experiment 2a (−77 ms/epoch) relative to Experiment 2b (−8 ms/epoch). The Set Size × Epoch interaction was not significant (F < 2), indicating no change in search efficiency over time. Simple effects testing revealed significant effects of Epoch on both trial types in Experiment 2a (target-absent: F(3, 49) = 4.07, ηp² = .20, p < .01; target-present: F(3, 49) = 3.66, ηp² = .18, p = .02) but no effects of Epoch on either trial type in Experiment 2b (target-absent: F(3, 49) = 1.21, p = .32; target-present: F(3, 49) = 1.45, p = .24).
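
For readers who wish to verify the epoch-level trend, the following sketch fits a least-squares line to the combined epoch means quoted above. It is illustrative only (not the authors' analysis code), and the per-condition slopes reported in the text (−77 vs. −8 ms/epoch) were computed separately for Experiments 2a and 2b.

```python
# Minimal sketch: estimate the within-block learning slope from the reported epoch means.
import numpy as np

epochs = np.array([1, 2, 3, 4])
mean_rt_ms = np.array([3093, 3057, 2988, 2974])  # combined epoch means from the text

slope, intercept = np.polyfit(epochs, mean_rt_ms, deg=1)
print(f"learning slope: {slope:.1f} ms/epoch, intercept: {intercept:.0f} ms")
# Yields roughly -43 ms/epoch for the combined data.
```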

Eye movements. The second and third panels of Figure 9 show mean fixation counts and durations, as a function of Trial Type and Experiment. We entered fixation counts and average fixation durations into five-way, repeated-measures ANOVAs, as in Experiment 1. The fixation counts again mirrored the search RTs: Higher load participants committed more fixations, people made more fixations to larger set sizes, and more fixations were recorded during target-absent search, relative to target-present. Reliable effects of Epoch were found in Experiment 2a, but not Experiment 2b.

Number of fixations. The main effect of Experiment was not significant, F(1, 102) < 1, p = .61. Each of the other main effects was significant: Load, F(2, 102) = 68.90, ηp² = .57, p < .001; Set Size, F(2, 101) = 147.36, ηp² = .74, p < .001; Trial Type, F(1, 102) = 549.62, ηp² = .84, p < .001; Epoch, F(3, 100) = 4.51, ηp² = .12, p < .01. More fixations occurred among higher load groups relative to lower load (9.65, 14.59, and 17.93 for low-, medium-, and high-load, respectively) and larger set sizes (11.94, 14.24, and 15.99 for 12, 16, and 20 items, respectively). There were more fixations on target-absent trials (17.92) relative to target-present (10.19), and the number of fixations decreased significantly within blocks of trials (14.35, 14.19, 13.88, and 13.80 for epochs 1–4, respectively).

Again, we found a marginal Experiment × Epoch interaction, F(3, 100) = 2.5, ηp² = .07, p = .06, suggesting a steeper learning slope in Experiment 2a (−.35 fixations/epoch) relative to Experiment 2b (−.04 fixations/epoch). We again found significant simple effects of Epoch on both trial types in Experiment 2a (target-absent: F(3, 49) = 3.23, ηp² = .17, p = .03; target-present: F(3, 49) = 4.88, ηp² = .23, p < .01) but no significant simple effects of Epoch on either trial type in Experiment 2b (target-absent: F(3, 49) = 0.41, p = .75; target-present: F(3, 49) = 0.81, p = .49). The Set Size × Epoch interaction was not significant (F < 2).

Fixation durations. Average fixation durations were entered into a five-way, repeated-measures ANOVA. There was no main effect of Experiment, F(1, 102) < 1, p = .62. There was an effect of Load, F(2, 102) = 24.42, ηp² = .32, p < .001. There were also effects of Set Size, F(2, 101) = 7.68, ηp² = .13, p < .001, and Trial Type, F(1, 102) = 755.18, ηp² = .88, p < .001. The effect of Epoch was marginal, F(3, 100) = 2.55, ηp² = .07, p = .06. Consistent with Experiment 1 (and again contrary to the RID hypothesis), fixation durations did not decrease across epochs (273, 276, 279, and 278 ms for epochs 1–4, respectively). As in Experiment 1, when search RTs were prolonged (among higher load groups, larger set sizes, and target-absent trials), average fixation durations decreased. They were shortest among high-load participants (252 ms), followed by medium-load (266 ms) and low-load (311 ms), and were shorter when search was conducted among more items (288, 272, and 269 ms for 12, 16, and 20 items, respectively). Durations were shorter on target-absent trials (231 ms), relative to target-present (321 ms).

Recognition. Recognition accuracy (see Figure 10) was entered into a three-way ANOVA, as in Experiment 1. Higher load groups again remembered more items, relative to lower load groups, and targets were remembered better than distractors. All items were remembered at levels exceeding chance performance. The main effect of Experiment was null, F(1, 102) < 1, p = .55. We found main effects of Load, F(2, 102) = 8.96, ηp² = .15, p < .001, and Image Type, F(1, 102) = 603.59, ηp² = .86, p < .001. Performance was better among higher load groups (80, 85, and 86% for low-, medium-, and high-load groups, respectively), and targets (94%) were remembered better than distractors (74%). We also found a Load × Image Type interaction, F(1, 102) = 17.13, ηp² = .25, p < .001. Planned comparisons revealed that recognition for target images (94% for low- and high-load groups; 95% for medium-load) was equivalent (F < 1). Memory for distractors, F(2, 102) = 15.75, ηp² = .24, p < .001, however, varied as a function of Load (67, 76, and 78% for low-, medium-, and high-load, respectively).

³ We again examined error rates in a five-way, repeated-measures ANOVA. There were main effects of Load, Trial Type, and Set Size (all F > 10, p < .01). There were more errors among the higher load groups, participants committed more misses than false alarms, and error rates were higher given larger set sizes. The main effect of Epoch was not reliable (F < 2), indicating no speed–accuracy trade-off.

Regression analyses. We regressed the probability of correct recognition per item on its total fixation count and total dwell time, as in Experiment 1. Figure 11 shows recognition memory performance for distractors and targets, as a function of total number of fixations (top panel), and total dwell time (bottom panel). For the distractors, a stepwise regression model indicated that a single factor, number of fixations, accounted for the greatest change in variance explained, Fchange(1, 105) = 31.13, R² = .23, p < .001, with no other significant predictors. For the targets, the best model contained a single predictor, dwell time, Fchange(1, 106) = 8.03, R² = .07, p < .01, with no other significant predictors.
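
The sketch below illustrates the kind of item-level regression described here. It is a hypothetical example, not the authors' analysis pipeline: the file name and column names are placeholders, and an ordinary least-squares fit (via statsmodels) stands in for the stepwise procedure reported above.

```python
# Illustrative item-level regression: recognition accuracy on fixation count and dwell time.
import pandas as pd
import statsmodels.api as sm

items = pd.read_csv("distractor_items.csv")  # hypothetical: one row per distractor item
X = sm.add_constant(items[["n_fixations", "dwell_time_ms"]])  # two viewing-behavior predictors
y = items["p_correct_recognition"]

model = sm.OLS(y, X).fit()
print(model.summary())  # inspect each predictor's unique contribution
```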

Block effects. We examined practice effects across blocks to determine whether performance changed over the course of the experiment (see Table 2). We found no Block effect on search accuracy (F < 2), but we did find an effect on RTs, F(2, 105) = 4.13, ηp² = .07, p = .02, with shorter RTs across blocks. Regarding eye movements, there was an effect of Block on the number of fixations, F(2, 106) = 6.39, ηp² = .11, p < .01, and average fixation durations, F(2, 106) = 14.38, ηp² = .21, p < .001, indicating fewer and longer fixations across blocks.

Discussion

Despite the lack of spatial consistency in either Experiment 2a or 2b, we again found search facilitation with coherent sets (Experiment 2a), but not when distractors were randomly grouped across trials (Experiment 2b). Although there was a marginal interaction between our Experiment and Epoch factors (in RTs), the learning slope in Experiment 2a (−77 ms/epoch) was more than nine times the slope observed in Experiment 2b (−8 ms/epoch). Moreover, simple effects testing revealed significant facilitation in both target-present and target-absent trials in Experiment 2a, but no facilitation in Experiment 2b. As before, search efficiency remained constant across epochs, and facilitation occurred through a decrease in fixations, without a change in average fixation durations. These results again argue against the RID hypothesis, and also suggest that enhanced short-term memory for recent fixations (i.e., spatial mapping) cannot be solely responsible for the faster RTs across epochs found in Experiment 1a. However, the results support the search confidence hypothesis, which predicted facilitation in coherent set conditions, irrespective of spatial consistency.

To examine how the loss of spatial information affected search RTs, we conducted a combined analysis of Experiments 1a and 2a, which were identical, save the consistency of spatial layouts. There were main effects of Experiment and Epoch (both F > 4, p < .05), indicating faster RTs in Experiment 1a (2,763 ms), relative to Experiment 2a (3,005 ms), and a significant decrease in RTs over time. Critically, we found an Experiment × Epoch interaction (F > 3, p < .05), indicating a steeper learning slope in Experiment 1a (−129 ms/epoch), relative to Experiment 2a (−77 ms/epoch). This interaction indicates that when object identities are consistently tied to fixed locations, RT facilitation is greater, relative to when objects are held constant but spatial consistency is removed. These findings further support the search confidence hypothesis, which predicts search facilitation under conditions of object coherence, and to the highest degree, the conjunction of object and spatial consistency.

The present findings complement those of Chun and Jiang (1999), who found a contextual cueing effect in the absence of spatial consistency. Their participants searched for novel shapes: Targets were symmetrical around the vertical axis, and distractors were symmetrical around other orientations. Spatial configurations were randomized on each trial, but a given target shape was sometimes paired with the same distractor shapes; on such consistent-mapping trials, RTs were shorter, relative to inconsistent-mapping trials, wherein target-distractor pairings were randomized. Critically, our experiments differed from the contextual cueing paradigm in that neither the identities nor locations of targets were predicted by background context. Whereas Chun and Jiang's (1999) findings suggest that background context can prime target identity, our results suggest that learned, coherent backgrounds can facilitate search for novel targets by enhancing search decision confidence. Because our participants encountered each target only once, this benefit in RTs must be explained not by perceptual priming, but rather by a shift in response thresholds.

Table 3
Visual Search Error Rates, as a Function of Set Size, WM Load, Trial Type, and Epoch, From Experiment 2

                              Target-absent                          Target-present
Set size   WM load     Epoch 1  Epoch 2  Epoch 3  Epoch 4     Epoch 1  Epoch 2  Epoch 3  Epoch 4
Twelve     Low           0%       2%       0%       0%          4%       5%       5%       2%
           Medium        3%       2%       1%       0%          7%       5%       5%       5%
           High          2%       1%       2%       3%         16%       8%       7%       8%
Sixteen    Low           1%       1%       4%       1%          8%       4%       6%       6%
           Medium        3%       1%       3%       1%         10%       9%      12%       8%
           High          4%       2%       2%       0%         12%      12%       8%      10%
Twenty     Low           1%       1%       1%       2%          9%      10%       4%       4%
           Medium        3%       3%       2%       1%         12%      17%       8%      11%
           High          3%       2%       1%       2%         14%      14%      12%      13%

Note. Data are presented collapsed across Experiments 2a and 2b, as they did not differ significantly.


More generally, the findings from Experiment 1 were replicated in Experiment 2. Loaded search was slower, as was search among larger set sizes. Memory for the targets was uniformly high, and WM load increased the retention of distractor identities. These findings corroborate those of Experiment 1, suggesting that degrading spatial information did not change basic search performance, or disrupt retention for encountered objects. However, the regression analyses yielded slightly different results. In Experiment 1, both target and distractor memory were best predicted by the total dwell time falling on the items. In Experiment 2, distractor memory was best predicted by the number of fixations received (although there was a trend toward better memory as a function of dwell time). This latter finding is consistent with several previous studies (Hollingworth, 2005; Hollingworth & Henderson, 2002; Loftus, 1972; Melcher, 2006; Tatler, Gilchrist, & Land, 2005). Clearly, both fixation frequency and duration contribute to object memory; future work should aim to determine which is the more powerful predictor.

Figure 8. Mean visual search RTs (±1 SEM) in Experiment 2, as a function of Experiment, Load, Set Size, and Trial Type. White, gray, and black symbols represent low-, medium-, and high-load groups, respectively.

Figure 9. Mean visual search RTs, number of fixations, and fixation durations (±1 SEM) in Experiment 2, as a function of Experiment and Epoch.

General Discussion

This investigation contrasted three hypotheses regarding how repeated displays may facilitate search RTs: rapid identification and dismissal of familiar nontarget objects, enhanced memory for their spatial locations, and a reduction in decisional evidence required for target detection. Our findings can be summarized by four points: 1) When search objects were coherently grouped across trials, RTs decreased within blocks; when distractor sets were unpredictable across trials, there was no facilitation. 2) Spatial consistency contributed to search facilitation, but repetition of spatial configurations was neither necessary nor sufficient to decrease RTs. 3) Even when search was facilitated by incidental learning, the efficiency of attentional deployments did not increase. And 4) facilitation arises by participants making fewer fixations over time, without a concomitant drop in the time required to identify and dismiss distractors. Although our three hypotheses were not mutually exclusive, our results best support the search confidence hypothesis, and argue against the RID and spatial mapping hypotheses.

Additionally, we replicated several well-known findings from the visual search literature, finding that target-present search was uniformly faster than exhaustive, target-absent search, but participants were more likely to miss than to false-alarm (Chun & Wolfe, 1996). Increasing the number of distractors (i.e., set size) led to slower RTs and decreased accuracy (Palmer, 1995; Treisman & Gormican, 1988; Ward & McClelland, 1989; Wolfe et al., 1989). Memory for search items was incidentally generated, and, when viewing behavior on the items increased, so did memory (Williams, 2009). Finally, when participants searched for multiple targets, rather than a single target, search was slower and less accurate (Menneer et al., 2005, 2008) and it led to greater incidental learning for distractors (Hout & Goldinger, 2010).

Our RID hypothesis suggests that incidental learning of repeated search objects may increase the fluency of their identification and dismissal. As they are learned, attention may be engaged and disengaged more quickly, allowing the observer to move swiftly across items. Examination of average fixation durations suggested that this proposal is incorrect. Although we are hesitant to interpret a null effect, we found no decrease in fixation durations across trials in any of our conditions. Thus, despite its intuitive appeal, it seems unlikely that RID is responsible for facilitation of search RTs.⁴

Built upon a body of research indicating that retrospective memory biases fixations away from previously visited locations, the spatial mapping hypothesis suggests that configural repetition may enhance visual search. It predicts that, when spatial layouts are held constant (Experiment 1), people should more effectively avoid refixating previous locations; faster search would thus be driven by fewer fixations per trial. This hypothesis attributes facilitation to spatial learning, thereby predicting no decrease in RTs in Experiment 2. Although prior research indicates that short-term spatial memory was likely utilized in Experiment 2 to bias attention away from previously fixated locations, this could only arise within a single trial. (See Peterson, Beck, & Vomela, 2007, for evidence that prospective memory, such as strategic scan-path planning, may also guide visual search.) No such learning could persist across trials. Thus, spatial mapping cannot explain all of our findings. It is clear that spatial consistency contributed to search facilitation, as it was greater in Experiment 1a, relative to Experiment 2a. However, facilitation still occurred in Experiment 2a, in the absence of spatial consistency, and failed to occur in Experiment 1b, wherein configurations were held constant. Neither finding is consistent with the spatial mapping hypothesis. Finally, a tacit assumption of spatial mapping is that, when facilitation occurs, we should find that fixations are biased away from previously visited locations at longer and longer lags. We found no evidence that fixation probability distributions changed over time, once more arguing against this hypothesis.

⁴ All of our reported fixation duration analyses included fixations to both distractors and targets (when present). We conducted follow-up analyses, examining fixation durations for distractors and targets separately. We assessed whether inclusion of target fixations may have biased our results (because fixations are typically longer to targets), preventing us from finding a decrease in fixation durations to the learned distractors. We also assessed whether distractor learning may cause targets to stand out from the background, analogous to pop-out effects (Treisman & Gelade, 1980). The analyses showed no significant downward trends in fixation durations for distractors or targets in either experiment.

Figure 10. Mean recognition accuracy (±1 SEM) for targets and distractors in Experiment 2. Results are plotted as a function of WM load, collapsed across Experiments 2a and 2b.

Our search confidence hypothesis was derived largely from work by Kunar and colleagues (2006, 2008b). It suggests that search facilitation does not occur by enhancement of specific viewing behaviors, but by speeding decision processes in the presence of familiar context. In nature, organisms and objects rarely appear as they do in an experiment: isolated, scattered arbitrarily across a blank background. Instead, they appear in predictable locations, relative to the rest of a scene. Imagine, for instance, that you are asked to locate a person in a photograph of a college campus. You are unlikely to find them crawling up the wall of a building or hovering in space. It is more likely that they will be standing on the sidewalk, sitting on a bench, and so forth. Biederman (1972) showed that the meaningfulness of scene context affects perceptual recognition. He had participants indicate what object occupied a cued position during briefly presented, real-world scenes. Accuracy was greater for coherent scenes, relative to a jumbled condition (with scenes cut into sixths and rearranged, destroying natural spatial relations). The visual system is sensitive to such regularities. For example, search paths to objects that appear in natural locations (e.g., a beverage located on a bar) are shorter, relative to objects that appear in arbitrary locations (e.g., a microscope located on a bar; Henderson, Weeks, & Hollingworth, 1999). Models that incorporate such top-down contextual information are successful in extrapolating regions of interest, when compared to data from human observers (Oliva, Torralba, Castelhano, & Henderson, 2003).

Figure 11. Probability of correct recognition of distractors and targets, as a function of total number of fixations, and total dwell time, from Experiment 2. Results were divided into quartile bins for each load group, and plotted against the mean accuracy per quartile. Memory for distractors is indicated by white, gray, and black circles for low-, medium-, and high-load groups, respectively. Squares indicate memory for targets, collapsed across WM Load.

In line with such findings, Kunar et al. (2007) suggested that search times may improve with experience, without an improvement in attentional guidance. As an observer learns a display, less information may be necessary to reach a search decision. It might take just as long to search through a repeated scene, relative to a novel one, but familiar context may allow earlier, more confident decisions. Thus, to isolate the cost of visual search from perceptual or response factors, it is important to differentiate between the two factors that jointly determine RT: the slope and the intercept. The slope of the line (fitting RT to set size) indicates the efficiency of search, whereas nonsearch factors contribute to the intercept. Using the contextual cueing paradigm, Kunar et al. (2007) investigated search slopes in repeated visual contexts. If attention is guided by contextual cueing, then search slopes should be shallower (i.e., search should be more efficient) in repeated conditions. However, they found no differences in the search slopes: RT differences were explained by the intercept, suggesting no changes in attentional guidance (see also Kunar, Flusberg, & Wolfe, 2008a). Conversely, we should expect no contextual cueing to occur when the target pops out (Treisman, 1985; Treisman & Gelade, 1980; Treisman & Sato, 1990), as serial attentional guidance is unnecessary in such situations. Nevertheless, Kunar et al. (2007) found significant contextual cueing in basic feature search, wherein targets could be immediately identified by color.
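
To make the slope/intercept decomposition concrete, the sketch below fits a line to the Experiment 2 set-size means reported earlier (2,569, 3,065, and 3,451 ms for 12, 16, and 20 items). It is purely illustrative, not the authors' analysis; it simply shows how mean RT separates into a search slope (ms/item) and a nonsearch intercept.

```python
# Decompose mean RT into search slope (efficiency) and intercept (nonsearch factors).
import numpy as np

set_sizes = np.array([12, 16, 20])
mean_rt_ms = np.array([2569, 3065, 3451])  # Experiment 2 set-size means from the text

slope, intercept = np.polyfit(set_sizes, mean_rt_ms, deg=1)
print(f"search slope: {slope:.1f} ms/item (efficiency)")
print(f"intercept:    {intercept:.0f} ms (perceptual, decision, and response factors)")
# On this account, a lowered response threshold should shrink the intercept
# while leaving the slope (ms/item) unchanged.
```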

Critically, in their final experiment, Kunar et al. (2007) added interference to the response selection process. Using a search task in which distractor items elicited either congruent or incongruent responses (Starreveld, Theeuwes, & Mortier, 2004), they looked for contextual cueing effects in the presence or absence of response selection interference. When target and distractor identities are incongruent, slowing of RTs is generally attributed to interference with response selection (Cohen & Magen, 1999; Cohen & Shoup, 1997; Eriksen & Eriksen, 1974). Participants searched for a red letter among green distractors, indicating whether the target was an A or an R. In congruent trials, the target identity matched the distractors (e.g., a red A among green As); in incongruent trials, targets and distractors were different (e.g., a red R among green As). The results showed a contextual cueing effect only in congruent trials. When the target and distractor identities matched, RTs were faster in predictive (relative to random) displays. This pattern suggested that contextual cueing speeded response selection in a familiar context, reducing the threshold to respond. Taken together, the findings from Kunar et al. (2007) indicate a response selection component in the facilitation of search RTs in a repeated visual display.

The present results are consistent with the view of Kunar et al. (2007) and with our search confidence hypothesis. We found search facilitation under coherent set conditions, but not when familiar objects were encountered in random sets. Facilitation was most robust in Experiment 1a, wherein items and locations were linked, providing the highest degree of contextual consistency. Critically, despite decreasing RTs, we found no change in search efficiency over time, and fixations never became shorter in duration. Items were given fewer fixations across trials, without changes in individual item processing, indicating quicker termination of the search process (without any decrease in accuracy). Our findings therefore suggest that, when a display is repeated, familiarity with background information leads the observer to terminate search with less perceptual evidence. We hypothesize that, by becoming more familiar with the distractor objects, people become better at differentiating those objects from the targets held in visual working memory. Rather than allowing shorter fixations, which seems most intuitive, this familiarity instead allows people to confidently stop searching, rather than refixating prior locations.
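
The intuition behind a lowered response threshold can be illustrated with a toy evidence-accumulation simulation. The sketch below is not a model fit to these data, and the drift, noise, and threshold values are arbitrary assumptions; it merely shows that keeping the accumulation rate fixed (the analogue of search efficiency) while lowering the threshold produces faster decisions.

```python
# Toy illustration: same drift (efficiency), lower threshold -> faster decisions.
import numpy as np

rng = np.random.default_rng(0)

def decision_time(threshold, drift=0.5, noise=1.0, dt=0.01, max_steps=10_000):
    """Step a noisy accumulator until it reaches the threshold; return time in seconds."""
    evidence = 0.0
    for step in range(1, max_steps + 1):
        evidence += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        if evidence >= threshold:
            return step * dt
    return max_steps * dt

novel    = [decision_time(threshold=2.0) for _ in range(2000)]
familiar = [decision_time(threshold=1.5) for _ in range(2000)]  # familiarity lowers the threshold

print(f"mean decision time, novel display:    {np.mean(novel):.2f} s")
print(f"mean decision time, familiar display: {np.mean(familiar):.2f} s")
# The accumulation rate never changes, mirroring the finding of faster RTs
# without any change in search slope or fixation durations.
```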

Our results are also consistent with those of Wolfe et al. (2000), who showed that search efficiency was constant, even after 350 presentations of an unchanging, conjunctive display (see also Wolfe et al., 2002). They argued that the perceptual effects of attention vanish once attention is redeployed to a new location. That is, the attentional processes used to conduct search in a familiar scene are no different from those used to locate a target in a novel scene. With these findings in mind, it is easy to appreciate the (perhaps) counterintuitive nature of the Guided Search model (Wolfe, 2007); because attentional guidance does not improve, even given extensive experience with a consistent display, it is reasonable to adopt an amnesic model, rather than one that remembers prior attentional deployments. Our findings fit well with this view: We propose that search decisions are facilitated by repeated context not because attentional guidance becomes more efficient, but because search decisions are executed more quickly.

Taken together, it appears that, when people repeatedly search in a novel display, information about object identities is incidentally acquired over time. When object information is coherent across repeated searches, observers become faster to locate a target, or to determine its absence. Although the present results suggest some role for a spatial memory component (Peterson et al., 2001), it cannot explain the full pattern of results. Instead, memory for spatial configurations seems to complement object memory by increasing an observer's overall sense of familiarity with a display. Eye movements appear to remain relatively chaotic, with memory acting only to avoid very recent fixations. Speeded search also appears no more efficient than unspeeded search in the identification and dismissal of well-learned distractors. Rather, given a repeated context, familiarity with background information facilitates search by decreasing response thresholds.

Given the present findings, we might question the extent to which “object clusters” may be processed with each fixation. That is, as people learn the background context of a display, do they also learn to process multiple items simultaneously, rather than a single item per fixation? This possibility would fit well with our findings for several reasons: 1) Such a mechanism would allow a person to search the entire display with fewer fixations, thereby decreasing fixation counts without necessarily increasing the efficiency of attentional deployments. 2) Such a process could explain why fixation durations remained constant over epochs. Perhaps fixations appeared constant because visual processing began spreading to adjacent objects. And finally, 3) we would expect this “expanding view” process to accelerate, given both object and spatial consistency, which comports well with our finding that search facilitation is greatest when both forms of information are held stable. We are currently implementing new experiments designed specifically to test this possibility. For now, the findings suggest that, as people learn, they do not become better searchers; they become faster, more confident resolvers.


References

Beck, M. R., Peterson, M. S., Boot, W. R., Vomela, M., & Kramer, A. F. (2006). Explicit memory for rejected distractors during visual search. Visual Cognition, 14, 150–174. doi:10.1080/13506280600574487
Beck, M. R., Peterson, M. S., & Vomela, M. (2006). Memory for where, but not what is used during visual search. Journal of Experimental Psychology: Human Perception and Performance, 32, 235–250. doi:10.1037/0096-1523.32.2.235
Biederman, I. (1972). Perceiving real-world scenes. Science, 177, 77–80. doi:10.1126/science.177.4043.77
Brockmole, J. R., Castelhano, M. S., & Henderson, J. M. (2006). Contextual cueing in naturalistic scenes: Global and local contexts. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32, 699–706. doi:10.1037/0278-7393.32.4.699
Brockmole, J. R., & Henderson, J. M. (2006). Using real-world scenes as contextual cues for search. Visual Cognition, 13, 99–108. doi:10.1080/13506280500165188
Castelhano, M. S., & Henderson, J. M. (2005). Incidental visual memory for objects in scenes. Visual Cognition, 12, 1017–1040. doi:10.1080/13506280444000634
Chun, M. M., & Jiang, Y. (1998). Contextual cueing: Implicit learning and memory of spatial context guides spatial attention. Cognitive Psychology, 36, 28–71. doi:10.1006/cogp.1998.0681
Chun, M. M., & Jiang, Y. (1999). Top-down attentional guidance based on implicit learning of visual covariation. Psychological Science, 10, 360–365. doi:10.1111/1467-9280.00168
Chun, M. M., & Jiang, Y. (2003). Implicit, long-term spatial contextual memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29, 224–234. doi:10.1037/0278-7393.29.2.224
Chun, M. M., & Wolfe, J. M. (1996). Just say no: How are visual searches terminated when there is no target present? Cognitive Psychology, 30, 39–78. doi:10.1006/cogp.1996.0002
Cohen, A., & Magen, H. (1999). Intra- and cross-dimensional visual search for single feature targets. Perception & Psychophysics, 61, 291–307.
Cohen, A., & Shoup, R. (1997). Perceptual dimensional constraints in response selection processes. Cognitive Psychology, 32, 128–181. doi:10.1006/cogp.1997.0648
Deubel, H., & Schneider, W. X. (1996). Saccade target selection and object recognition: Evidence for a common attentional mechanism. Vision Research, 36, 1827–1837. doi:10.1016/0042-6989(95)00294-4
Dickinson, C. A., & Zelinsky, G. J. (2005). Marking rejected distractors: A gaze-contingent technique for measuring memory during search. Psychonomic Bulletin & Review, 12, 1120–1126.
Dodd, M. D., Castel, A. D., & Pratt, J. (2003). Inhibition of return with rapid serial shifts of attention: Implications for memory and visual search. Perception & Psychophysics, 65, 1126–1135.
Duncan, J., & Humphreys, G. (1989). Visual search and stimulus similarity. Psychological Review, 96, 433–458. doi:10.1037/0033-295X.96.3.433
Duncan, J., & Humphreys, G. (1992). Beyond the search surface: Visual search and attentional engagement. Journal of Experimental Psychology: Human Perception and Performance, 18, 578–588. doi:10.1037/0096-1523.18.2.578
Endo, N., & Takeda, Y. (2004). Selective learning of spatial configuration and object identity in visual search. Perception & Psychophysics, 66, 293–302.
Eriksen, B. A., & Eriksen, C. W. (1974). Effects of noise letters upon the identification of a target letter in a nonsearch task. Perception & Psychophysics, 16, 143–149.
Eriksen, C. W., & Lappin, J. S. (1965). Internal perceptual system noise and redundancy in simultaneous inputs in form identification. Psychonomic Science, 2, 351–352.
Falmagne, J. C., & Theios, J. (1969). On attention and memory in reaction time experiments. Acta Psychologica, 30, 316–323. doi:10.1016/0001-6918(69)90056-0
Geyer, T., Muller, H. J., & Krummenacher, J. (2006). Cross-trial priming in visual search for singleton conjunction targets: Role of repeated target and distractor features. Perception & Psychophysics, 68, 736–749.
Gilchrist, I. D., & Harvey, M. (2000). Refixation frequency and memory mechanisms in visual search. Current Biology, 10, 1209–1212. doi:10.1016/S0960-9822(00)00729-6
Henderson, J. M., Weeks, P. A., & Hollingworth, A. (1999). The effects of semantic consistency on eye movements during scene-viewing. Journal of Experimental Psychology: Human Perception and Performance, 25, 210–228. doi:10.1037/0096-1523.25.1.210
Hoffman, J. E., & Subramaniam, B. (1995). The role of visual attention in saccadic eye movements. Perception & Psychophysics, 57, 787–795.
Hollingworth, A. (2005). The relationship between online visual representation of a scene and long-term scene memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 31, 396–411. doi:10.1037/0278-7393.31.3.396
Hollingworth, A. (2009). Two forms of scene memory guide visual search: Memory for scene context and memory for the binding of target object to scene location. Visual Cognition, 17, 273–291. doi:10.1080/13506280802193367
Hollingworth, A., & Henderson, J. M. (2002). Accurate visual memory for previously attended objects in natural scenes. Journal of Experimental Psychology: Human Perception and Performance, 28, 113–136. doi:10.1037/0096-1523.28.1.113
Horowitz, T. S., & Wolfe, J. M. (1998). Visual search has no memory. Nature, 394, 575–577. doi:10.1038/29068
Horowitz, T. S., & Wolfe, J. M. (2001). Search for multiple targets: Remember the targets, forget the search. Perception & Psychophysics, 63, 272–285.
Horowitz, T. S., & Wolfe, J. M. (2003). Memory for rejected distractors in visual search? Visual Cognition, 10, 257–298. doi:10.1080/13506280143000005
Horowitz, T. S., & Wolfe, J. M. (2005). Visual search: The role of memory for rejected distractors. In L. Itti, G. Rees, & J. Tsotsos (Eds.), Neurobiology of attention (pp. 264–268). San Diego, CA: Academic Press.
Hout, M. C., & Goldinger, S. D. (2010). Learning in repeated visual search. Attention, Perception & Psychophysics, 72, 1267–1282. doi:10.3758/APP.72.5.1267
Jiang, Y., & Chun, M. M. (2001). Selective attention modulates implicit learning. The Quarterly Journal of Experimental Psychology, 54, 1105–1124. doi:10.1080/02724980042000516
Jiang, Y., & Leung, A. W. (2005). Implicit learning of ignored visual context. Psychonomic Bulletin & Review, 12, 100–106.
Jiang, Y., & Song, J. (2005). Hyperspecificity in visual implicit learning: Learning of spatial layout is contingent on item identity. Journal of Experimental Psychology: Human Perception and Performance, 31, 1439–1448. doi:10.1037/0096-1523.31.6.1439
Klein, R. (1988). Inhibitory tagging system facilitates visual search. Nature, 334, 430–431. doi:10.1038/334430a0
Klein, R., & MacInnes, W. J. (1999). Inhibition of return is a foraging facilitator in visual search. Psychological Science, 10, 346–352. doi:10.1111/1467-9280.00166
Kristjansson, A. (2000). In search of remembrance: Evidence for memory in visual search. Psychological Science, 11, 328–332. doi:10.1111/1467-9280.00265
Kristjansson, A., & Driver, J. (2008). Priming in visual search: Separating the effects of target repetition, distractor repetition, and role-reversal. Vision Research, 48, 1217–1232. doi:10.1016/j.visres.2008.02.007
Kristjansson, A., Wang, D. L., & Nakayama, K. (2002). The role of priming in conjunctive visual search. Cognition, 85, 37–52. doi:10.1016/S0010-0277(02)00074-4

Kunar, M. A., Flusberg, S., Horowitz, T. S., & Wolfe, J. M. (2007). Does contextual cuing guide the deployment of attention? Journal of Experimental Psychology: Human Perception and Performance, 33, 816–828. doi:10.1037/0096-1523.33.4.816
Kunar, M. A., Flusberg, S., & Wolfe, J. M. (2006). Contextual cueing by global features. Perception & Psychophysics, 68, 1204–1216.
Kunar, M. A., Flusberg, S., & Wolfe, J. M. (2008a). Time to guide: Evidence for delayed attentional guidance in contextual cueing. Visual Cognition, 16, 804–825. doi:10.1080/13506280701751224
Kunar, M. A., Flusberg, S., & Wolfe, J. M. (2008b). The role of memory and restricted context in repeated visual search. Perception & Psychophysics, 70, 314–328. doi:10.1080/13506280701751224
Loftus, G. R. (1972). Eye fixations and recognition memory for pictures. Cognitive Psychology, 3, 525–551. doi:10.1016/0010-0285(72)90021-7
Luck, S. J., & Vogel, E. K. (1997). The capacity of visual working memory for features and conjunctions. Nature, 390, 279–281. doi:10.1038/36846
McCarley, J. S., Wang, R. F., Kramer, A. F., Irwin, D. E., & Peterson, M. S. (2003). How much memory does oculomotor search have? Psychological Science, 14, 422–426. doi:10.1111/1467-9280.01457
Melcher, D. (2006). Accumulation and persistence of memory for natural scenes. Journal of Vision, 6, 8–17. doi:10.1167/6.1.2
Menneer, T., Barrett, D. J. K., Phillips, L., Donnelly, N., & Cave, K. R. (2007). Costs in searching for two targets: Dividing search across target types could improve airport security screening. Applied Cognitive Psychology, 21, 915–932. doi:10.1002/acp.1305
Menneer, T., Cave, K. R., & Donnelly, N. (2008). The cost of search for multiple targets: Effects of practice and target similarity. Journal of Experimental Psychology: Applied, 15, 125–139. doi:10.1037/a0015331
Mruczek, R. E. B., & Sheinberg, D. L. (2005). Distractor familiarity leads to more efficient visual search for complex stimuli. Perception & Psychophysics, 67, 1016–1031.
Muller, H. J., & von Muhlenen, A. (2000). Probing distractor inhibition in visual search: Inhibition of return. Journal of Experimental Psychology: Human Perception and Performance, 26, 1591–1605. doi:10.1037/0096-1523.26.5.1591
Oliva, A., Torralba, A., Castelhano, M. S., & Henderson, J. M. (2003). Top-down control of visual attention in object detection. Paper presented at the Proceedings of the IEEE International Conference on Image Processing, Barcelona, Spain.
Oliva, A., Wolfe, J. M., & Arsenio, H. C. (2004). Panoramic search: The interaction of memory and vision in search through a familiar scene. Journal of Experimental Psychology: Human Perception and Performance, 30, 1132–1146. doi:10.1037/0096-1523.30.6.1132
Palmer, J. (1995). Attention in visual search: Distinguishing four causes of a set size effect. Current Directions in Psychological Science, 4, 118–123. doi:10.1111/1467-8721.ep10772534
Peterson, M. S., Beck, M. R., & Vomela, M. (2007). Visual search is guided by prospective and retrospective memory. Perception & Psychophysics, 69, 123–135.
Peterson, M. S., Beck, M. R., & Wong, J. H. (2008). Were you paying attention to where you looked? The role of executive working memory in visual search. Psychonomic Bulletin & Review, 15, 372–377. doi:10.3758/PBR.15.2.372
Peterson, M. S., & Kramer, A. F. (2001). Attentional guidance of the eyes by contextual cuing information and abrupt onsets. Perception & Psychophysics, 63, 1239–1249.
Peterson, M. S., Kramer, A. F., Wang, R. F., Irwin, D. E., & McCarley, J. S. (2001). Visual search has memory. Psychological Science, 12, 287–292. doi:10.1111/1467-9280.00353
Posner, M. I., Walker, J. A., Friedrich, F., & Rafal, R. D. (1984). Effects of parietal injury on covert orienting of attention. Journal of Neuroscience, 4, 1863–1874.
Reichle, E. D., Pollatsek, A., & Rayner, K. (2006). E-Z Reader: A cognitive-control, serial-attention model of eye-movement behavior during reading. Cognitive Systems Research, 7, 4–22. doi:10.1016/j.cogsys.2005.07.002
Schilling, H. E. H., Rayner, K., & Chumbley, J. I. (1998). Comparing naming, lexical decision, and eye fixation times: Word frequency effects and individual differences. Memory & Cognition, 26, 1270–1281.
Schneider, W., Eschman, A., & Zuccolotto, A. (2002). E-Prime User's Guide. Pittsburgh, PA: Psychology Software Tools Inc.
Smyth, A. C., & Shanks, D. R. (2008). Awareness in contextual cuing with extended and concurrent explicit tests. Memory & Cognition, 36, 403–415. doi:10.3758/MC.36.2.403
Starreveld, P. A., Theeuwes, J., & Mortier, K. (2004). Response selection in visual search: The influence of response compatibility of nontargets. Journal of Experimental Psychology: Human Perception and Performance, 30, 56–78. doi:10.1037/0096-1523.30.1.56
Takeda, Y., & Yagi, A. (2000). Inhibitory tagging in visual search can be found if search stimuli remain visible. Perception & Psychophysics, 62, 927–934.
Tatler, B. W., Gilchrist, I. D., & Land, M. F. (2005). Visual memory for objects in natural scenes: From fixations to object files. Quarterly Journal of Experimental Psychology, 58, 931–960.
Tipper, S. P., Driver, J., & Weaver, B. (1991). Object centered inhibition of return of visual attention. The Quarterly Journal of Experimental Psychology, 43, 289–298.
Treisman, A. (1985). Preattentive processing in vision. Computer Vision, Graphics and Image Processing, 31, 156–177.
Treisman, A., & Gelade, G. (1980). A feature integration theory of attention. Cognitive Psychology, 12, 97–136. doi:10.1016/0010-0285(80)90005-5
Treisman, A., & Gormican, S. (1988). Feature analysis in early vision: Evidence from search asymmetries. Psychological Review, 95, 15–48. doi:10.1037/0033-295X.95.1.15
Treisman, A., & Sato, S. (1990). Conjunction search revisited. Journal of Experimental Psychology: Human Perception and Performance, 16, 459–478. doi:10.1037/0096-1523.16.3.459
Vanyukov, P. M., Warren, T., & Reichle, E. D. (2009, November). Searching for O: Evidence for cognitive control in eye-movement behavior. Presented at the 50th Annual Meeting of the Psychonomic Society, Boston.
Ward, R., & McClelland, J. L. (1989). Conjunctive search for one and two identical targets. Journal of Experimental Psychology: Human Perception and Performance, 15, 664–672. doi:10.1037/0096-1523.15.4.664
Williams, C. C. (2009). Not all visual memories are created equal. Visual Cognition, 18, 201–228.
Williams, C. C., Henderson, J. M., & Zacks, R. T. (2005). Incidental visual memory for targets and distractors in visual search. Perception & Psychophysics, 67, 816–827.
Wolfe, J. M. (1994). Guided Search 2.0: A revised model of visual search. Psychonomic Bulletin & Review, 1, 202–238.
Wolfe, J. M. (2007). Guided Search 4.0: Current progress with a model of visual search. In W. Gray (Ed.), Integrated models of cognitive systems (pp. 99–119). New York: Oxford.
Wolfe, J. M. (2010). Visual search. Current Biology, 20, 346–349.
Wolfe, J. M., Cave, K. R., & Franzel, S. L. (1989). Guided Search: An alternative to the Feature Integration model for visual search. Journal of Experimental Psychology: Human Perception and Performance, 15, 419–433. doi:10.1037/0096-1523.15.3.419
Wolfe, J. M., & Gancarz, G. (1996). Guided Search 3.0: A model of visual search catches up with Jay Enoch 40 years later. In V. Lakshminarayanan (Ed.), Basic and clinical applications of vision science (pp. 189–192). Dordrecht, Netherlands: Kluwer Academic.
Wolfe, J. M., Klempen, N., & Dahlen, K. (2000). Postattentive vision. Journal of Experimental Psychology: Human Perception and Performance, 26, 693–716. doi:10.1037/0096-1523.26.2.693
Wolfe, J. M., Oliva, A., Butcher, S. J., & Arsenio, H. C. (2002). An unbinding problem? The disintegration of visible, previously attended objects does not attract attention. Journal of Vision, 2, 256–271. doi:10.1167/2.3.5
Yang, H., Chen, X., & Zelinsky, G. J. (2009). A new look at novelty effects: Guiding search away from old distractors. Attention, Perception, & Psychophysics, 71, 554–564. doi:10.3758/APP.71.3.554

Received January 5, 2011
Revision received March 29, 2011
Accepted March 30, 2011
