Yale Perception & Cognition Lab

VSS '07 Abstracts
Flombaum, J., & Scholl, B. (2007). Attending to moving vs. static stimuli: A surprising dissociation in multiple object tracking. Poster presented at the annual meeting of the Vision Sciences Society, 5/15/07, Sarasota, FL.  
Most experimental paradigms used to study attention require brief unitary shifts to single objects or locations, whereas everyday experience requires sustained attention to multiple dynamic objects and events. Such dynamic attention has often been studied using multiple object tracking (MOT): observers track several featurally identical targets that move haphazardly and unpredictably among identical moving distractors. But how and where is attention allocated during MOT? Here we present a deeply counterintuitive answer to this question: though targets are prioritized over distractors, as expected, attention prioritizes static stimuli over moving stimuli -- including moving targets. In addition to tracking, observers had to detect small probes that appeared sporadically on targets, distractors, or various regions of empty space. There were no visual differences between these conditions, since all objects were patches of random visual noise moving on a background of random visual noise -- such that the objects were invisible on any given static frame. We used probe detection as a measure of attention, and consistently observed several surprising effects. First, probe detection in static empty space was always better than on moving targets. Second, probe detection was worse for moving targets compared to targets that were momentarily stopped. Third, the attentional deficit for moving objects relative to the background disappeared when the entire noise-defined background was itself constantly translating. We will describe and demonstrate these and several other results, all of which fuel the same conclusion: attention treats dynamic and static information differently, and there appears to be a severe cost involved in attending to dynamic information. This could reflect the depletion of attentional resources on moving objects due to motion processing itself. These results are also consistent with the idea that moving objects are represented on a visual map -- the 'motion map' -- that is independent of global salience maps.
Gao, T., & Scholl, B. (2007). Are objects required for object-files? Poster presented at the annual meeting of the Vision Sciences Society, 5/15/07, Sarasota, FL.  
A central task of vision is to segment the retinal image into discrete objects, and to keep track of them as the same persisting individuals over time and motion. Such processing is often discussed in terms of object files -- midlevel visual representations that 'stick' to moving objects on the basis of spatiotemporal properties, and store (and update) information about those properties. Object files have traditionally been studied via 'object-specific preview benefits' (OSPBs): discriminations of an object's features are speeded when an earlier preview of those features occurs on the same object, as opposed to on a different object, beyond general display-wide priming. This effect is clearly 'object-based' (vs. space-based), but what counts as an 'object' in this framework? Here we studied this question via much more extreme manipulations than previous work, by removing all static segmentation cues. In Experiment 1, both the objects and the background were composed of random visual noise -- so that the objects were defined only via their motion. Experiment 2 went even further, removing all segmentation cues: the entire random-noise background simply rotated as a whole. Robust OSPBs were nevertheless found in both cases. We conclude that the construction and maintenance of object files does not require static surface cues to 'objecthood', nor any segmentation cues at all. In addition, since objects were always invisible until the motion began -- after the offset of the previewed features -- we conclude that object files can be established 'after the fact', postdictively. These results clearly conflict with the assumption that object files require previously segmented objects, but they do maintain the two key aspects of the object-file framework -- individuation and tracking. Overall, these experiments help characterize what 'object files' really are, and how they do and do not relate to our common-sense notions of objects.
Isola, P., Turk-Browne, N., & Scholl, B. (2007). Multidimensional visual statistical learning. Poster presented at the annual meeting of the Vision Sciences Society, 5/11/07, Sarasota, FL.  
Statistical relationships between objects in space and time are automatically extracted via visual statistical learning (VSL). Such processing has traditionally been thought to operate over visual objects, but visual input is also highly structured at the level of individual surface features. Here we studied VSL in temporal sequences of colored shapes, exploring how features are combined into objects. Observers were familiarized to sequences of colored shapes that appeared one at a time, with statistical regularities present in the order of repeated shape subsequences. In Experiment 1, half of these were bound-color subsequences, in which each shape was always presented in its own unique color; the other half were random-color subsequences, with colors randomly drawn upon each presentation from a different set of possible values. During a later test phase, observers repeatedly judged which of two shape subsequences -- now presented all in black -- was more familiar: one previously encountered during familiarization vs. a misordered foil subsequence constructed from the same shapes. Observers reliably chose the previously encountered subsequences for both bound-color and random-color conditions. Since each shape had been encountered equally often, this performance must reflect learning of the shapes' statistical ordering. Moreover, performance in these conditions did not differ, suggesting that the covariance between individual feature values did not affect the expression of VSL for black shapes. In Experiment 2, however, familiarization consisted of only bound-color subsequences, and performance at test with black shapes was significantly (and surprisingly) lower. Thus, color appears to have been more integral for the learned representations of the bound-color subsequences in Experiment 2 -- compared to those same subsequences in Experiment 1, which were encountered in the context of additional random-color subsequences. In sum, what determines the input to VSL is the diagnosticity of feature dimensions, not only of individual feature values.
New, J., & Scholl, B. (2007). A 'perceptual scotoma' theory of motion-induced blindness. Talk given at the annual meeting of the Vision Sciences Society, 5/15/07, Sarasota, FL.  
In motion-induced blindness (MIB), salient target objects will fluctuate into and out of conscious awareness when superimposed onto certain global moving patterns. Previous studies have delimited several factors that mediate MIB, but there is little consensus on why it occurs at all. Here we explore a new possibility: MIB occurs due to the visual system's attempt to separate distal stimuli from artifacts of damage to the visual system itself. When a small target object is invariant with respect to changes that are occurring to large regions of the surrounding visual field, the visual system may discount that stimulus as akin to a scotoma, and may thus fill it in. We discuss how this theory can account for many previous MIB results, and then describe and demonstrate several new effects that support this idea. Three examples: (1) Motion itself is not required: similar effects are obtained from other manipulations, such as cyclic global changes to the luminance of a static global pattern. (2) MIB is stronger with monocular vs. binocular viewing of the target, when the global motion pattern is always binocular. (3) MIB is vastly stronger when both the target and fixation point move together, compared to when they move relative to each other (e.g. in opposite directions). These and other results are all consistent with the hypothesis that the visual system treats the target object as an artifact or insult to its own structure and processing, rather than as an object in the external world. This new account is considerably different from previous accounts of MIB, but helps to connect it with other types of visual phenomena. In the end, 'motion-induced blindness' may be better described as just a special case of 'perceptual scotomas'.
Newman, G., Choi, H., Wynn, K., & Scholl, B. (2007). The origins of causal perception: Evidence from postdictive processing in infancy. Poster presented at the annual meeting of the Vision Sciences Society, 5/15/07, Sarasota, FL.  
The currency of our visual experience consists not only of visual features such as color and motion, but also of seemingly higher-level features such as causality -- as when we see two billiard balls collide, with one causing the other to move. One of the most important and controversial questions about causal perception involves its origin: do we learn to see causality, or does this ability derive in part from innately specified aspects of our cognitive architecture? Such questions are difficult to answer, but can be indirectly addressed via experiments with infants. Here we explore causal perception in 7-month-old infants, using a different approach from previous work. Recent work in adult visual cognition has demonstrated a postdictive aspect to causal perception: in certain situations, we can perceive a collision between two objects in an ambiguous display even after the moment of potential 'impact' has already passed. This illustrates one way in which our conscious perception of the world is not an instantaneous moment-by-moment construction, but rather is formed by integrating information over short temporal windows. Here we demonstrate analogous postdictive processing in infants' causal perception. This result demonstrates that even infants' visual systems engage in subtle spatiotemporal grouping, and process information in temporally extended chunks. Moreover, this work provides a new way of demonstrating causal perception in infants that differs from previous strategies, and is immune to some previous types of critiques.
White, A., & Scholl, B. (2007). Inattentional blindness, object persistence, and foveal inhibition. Poster presented at the annual meeting of the Vision Sciences Society, 5/13/07, Sarasota, FL.  
The importance of attention for conscious perception is especially clear in inattentional blindness (IB) -- the failure to consciously perceive salient unexpected objects (UOs) when attention is otherwise engaged. Surprisingly, IB is especially severe for UOs that appear at fixation -- literally right in front of your eyes -- compared to peripheral UOs. This effect has been explained by especially strong foveal inhibition when observers' attention is directed to the periphery. Because these experiments included a fixation mark, however, the visual system may instead have interpreted the UO not as a new object, but as a sudden change to the features of the fixation mark itself. Such interpretations may be automatically computed in mid-level vision, where object persistence is driven primarily by spatiotemporal factors rather than surface features. To test these explanations, observers fixated centrally and judged which arm of a briefly presented (and masked) peripheral cross was longer. After three trials, a salient UO at fixation was presented along with the cross. The key manipulation involved the fixation mark, which was either a small cross (which was then completely replaced by the UO), or a much larger cross formed by two diagonal lines that spanned the entire display. If enhanced IB at fixation is due to the representation of persisting objects despite featural change, then less IB should be observed with the global fixation mark. In fact, however, IB in both conditions was dramatic (though not at ceiling) and did not differ -- even with twice as many observers as in other studies. We conclude that peripheral attention may indeed require especially strong foveal inhibition, with severe consequences for conscious awareness. However, the existence of foveal inhibition does not rule out independent effects of object persistence on IB, as explored in additional studies of appearance vs. disocclusion.