Attentional capture by irrelevant emotional distractor faces




Emotion 2011, Vol. 11, No. 2, 346–353

© 2011 American Psychological Association. 1528-3542/11/$12.00. DOI: 10.1037/a0022771

Attentional Capture by Irrelevant Emotional Distractor Faces

Sara Hodsoll, Essi Viding, and Nilli Lavie
University College London

We establish attentional capture by emotional distractor faces presented as a "singleton" in a search task in which the emotion is entirely irrelevant. Participants searched for a male (or female) target face among female (or male) faces and indicated whether the target face was tilted to the left or right. The presence (vs. absence) of an irrelevant emotional singleton expression (fearful, angry, or happy) in one of the distractor faces slowed search reaction times compared to the singleton-absent or singleton-target conditions. Facilitation for emotional singleton targets was found for the happy expression but not for the fearful or angry expressions. These effects were found irrespective of face gender, and the failure of a singleton neutral face to capture attention among emotional faces rules out a visual odd-one-out account for the emotional capture. The present study thus establishes irrelevant emotional attentional capture.

Keywords: emotional faces, attentional capture, visual search, bottom-up, top-down

Sara Hodsoll and Essi Viding, Research Department of Clinical, Educational and Health Psychology, Division of Psychology and Language Sciences, and Institute of Cognitive Neuroscience, University College London, London, United Kingdom; Nilli Lavie, Institute of Cognitive Neuroscience, University College London. Preparation of this article was supported by Wellcome Trust Grant WT080568MA (to Nilli Lavie), British Academy Grant BARDA-53229 (to Essi Viding), and an Economic and Social Research Council (UK) Ph.D. studentship (to Sara Hodsoll). Correspondence concerning this article should be addressed to Nilli Lavie, Institute of Cognitive Neuroscience, University College London, 17 Queens Square, London, WC1N 3AR. E-mail: [email protected]

Paying attention to emotional facial expressions, even when these are not an explicit part of the particular task at hand, is essential in everyday life. For example, accidentally pushing forward in a queue and failing to notice the angry expressions of those around you could have undesirable consequences, ranging from an unpleasant social interaction to bodily harm, depending on the temper of the people in the queue. Conversely, failing to notice a positive facial expression may mean missing out on potentially beneficial social interactions, such as a new friendship. A growing body of research has investigated the effects of emotional facial expressions on attention. Data from a variety of tasks, such as dot-probe (e.g., Mogg & Bradley, 1999), flanker (e.g., Fenske & Eastwood, 2003), attentional blink (e.g., Stein, Zwickel, Ritter, Kitzmantel, & Schneider, 2009), spatial cueing (e.g., Fox, Russo, Bowles, & Dutton, 2001), visual search (e.g., Ohman, Lundqvist, & Esteves, 2001), and priming of pop-out in visual search (Lamy, Amunts, & Bar-Haim, 2008), all suggest that the emotional content of faces affects the extent to which attention is paid to the face. The visual search task has probably been the most prevalent paradigm for studying the effects of emotional facial expression on attention (e.g., Calvo, Avero, & Lundqvist, 2006; Eastwood, Smilek, & Merikle, 2001; Horstmann & Becker, 2008; Ohman et al., 2001; Schubo, Glendolla, Minecke, & Abele, 2006; Juth, Lundqvist, Karlsson, & Ohman, 2005; Williams, Moss, Bradshaw, & Mattingley, 2005; Lamy, Amunts, & Bar-Haim, 2008; Hahn & Gronlund, 2007). The efficiency of search task performance is measured through the effects of the display set size on search latencies. If an emotional expression can draw attention to a face more readily than a neutral expression, then search for an emotional face target should be little affected by the number of emotionally neutral distractors. Such an effect would be expressed in a shallow slope of the search set size reaction time (RT) function. Both schematic and photographic facial expression stimuli have been used in these visual search studies, and overall the findings suggest that emotional facial expressions, particularly angry expressions, confer an attentional advantage (e.g., Gerritsen, Frischen, Blake, Smilek, & Eastwood, 2008; Pinkham, Griffin, Baron, Sasson, & Gur, 2010). Emotional face targets typically attract attention, and their search is fairly unaffected by the number of neutral distractors (producing a shallow set size RT function; e.g., Ohman et al., 2001; Hahn & Gronlund, 2007; Hansen & Hansen, 1988; Eastwood et al., 2001; Fox et al., 2000). Perhaps the clearest example of an attention advantage for emotional facial expressions was provided by Williams et al. (2005). Williams and colleagues used photographic stimuli (these can convey a richer emotional expression and are more ecologically valid than schematic faces) and demonstrated that visual search performance was more efficient when the target displayed an emotional (sad or happy) expression among emotionally neutral distractors than when the target displayed a neutral expression among emotional (sad or happy) distractors. The search set-size slopes were shallower for emotional compared to neutral targets, suggesting that emotional targets draw attention more efficiently than neutral targets. Importantly, the emotional face advantage was no longer found when the face arrays were inverted.

Face inversion is thought to disrupt the holistic processing that is necessary for the perception of emotional expressions, while leaving low-level visual processing intact. The search advantage for emotional (among neutral) over neutral (among emotional) faces is therefore unlikely to be due to any low-level visual factor. Instead, the results suggest that emotional faces are better able to guide attention during search.


From an evolutionary point of view, it makes sense that facial expressions of emotion should have privileged access to attention. They provide powerful signals that may be critical for survival and successful social interaction (Darwin, 1904; Ohman, 1993). However, we rarely look for specific emotional expressions in our surroundings, or for simply any emotional (rather than neutral) stimulus irrespective of its valence. Our success in executing appropriate survival, motivational, and social behaviors often depends on our ability to detect emotional expressions even when we are currently engaged in a different task, such as reading text messages on our phone while wandering into a queue, or eyeing up that elusive drink at a party. However, in all previous suggestions that emotional faces may capture attention during visual search, emotion was always in some way relevant to the task, either because emotional faces appeared in the same location as the task stimuli or because they were an inherent part of the task at hand (e.g., participants were instructed to search for an emotional face or for an "odd face out" that was defined as such by its emotional expression). For example, in the Williams et al. (2005) study, the participants were instructed to look for an emotional or neutral face. Thus, although it is clear that an emotional expression can guide search more efficiently than a neutral expression, it is not clear that emotion is capable of capturing attention when it is irrelevant to the task. In a recent visual search study (Horstmann & Becker, 2008), a sad emotional face expression (a schematic face with a mouth curved down) only captured attention when the visual feature forming the odd-one-out expression (i.e., mouth curve orientation) was relevant to the search task (i.e., when the search was based on the nose orientation, "<" or ">").
The sad mouth curve feature failed to capture attention once the search-target feature (color) and the emotion feature (orientation) were different. As we discuss in greater detail in the General Discussion, Horstmann and Becker's failure to find emotional capture when the emotion feature was clearly irrelevant to the task may be due to task-specific factors (e.g., the specific use of curve orientation for the distractor feature and color for the target feature). Overall, therefore, the important question of whether emotional faces can capture attention even when entirely irrelevant to the task, as stipulated by an evolutionary perspective and by the cognitive concept of "capture," remains open. In the present study, we aimed to address this question. We adopted the irrelevant attentional capture paradigm (Theeuwes, 1991, 1992, 1994) to examine attentional capture by task-irrelevant emotional faces. Using photographic face stimuli, we asked participants to search for a male face among two female faces (or for a female face among two male faces in Experiment 4) and to discriminate the orientation of the target face (whether tilted clockwise or counterclockwise). In two-thirds of trials, all three faces were neutral in expression. In the remaining one-third of trials, an emotional singleton was present. In half of the singleton-present trials, the emotional expression appeared on a female distractor, and in the other half it appeared on the male target (or vice versa in Experiment 4). In all cases, however, the face expression was irrelevant to the gender-based search task. If the irrelevant emotional singleton captures attention, this should result in slower RTs in the distractor emotional singleton condition compared to both the target emotional singleton condition and the all-neutral condition. Because the emotional singleton


was irrelevant to the search, a failure to ignore it would indicate emotional attentional capture. In Experiment 1, we examined the effects of fearful distractor and target singletons on search for a male target face among female distractor faces. In Experiments 2 and 3, we examined the effects of happy and angry singletons, respectively. In Experiment 4, we examined the effects of angry distractor and target singletons on search for a female target face among male distractor faces. Finally, Experiment 5 was a control experiment in which we examined the effects of a neutral distractor and target singleton in a display of angry faces.

Experiment 1

Method

Participants. A total of 11 people (5 male) between 21 and 40 (M = 27) years of age were recruited from the UCL Subject Pool and were paid £2 for their participation. All reported normal or corrected-to-normal color vision.

Stimuli and procedure. The experiment was run in a dimly lit room using a PC with a 15-inch monitor. Stimuli were presented and RTs recorded using E-Prime V.1. A viewing distance of 60 cm was maintained with a chin rest. Stimuli consisted of 12 gray-scale images of the faces of six (three female and three male) identities (see Figure 1). These were adapted with permission1 from the NimStim Face Stimulus Set at http://www.macbrain.org/resources.htm. Each identity had an image showing a neutral expression and one showing a fearful expression. Each of the faces subtended 2.1 cm (vertically) by 1.7 cm (horizontally). The faces were presented on a black background in a virtual triangle, with the center of each image placed 1.3 cm from the central fixation cross. There was a 0.5-cm gap between images. A central fixation point was presented for 500 ms, followed by the search display, which was presented until response. Participants were requested to search for the male target singleton in a display with two female distractor faces and to indicate, by pressing the "1" or "2" key on the numeric keypad, whether it was tilted 15° to the left or 15° to the right, respectively. Feedback for errors was given by a short tone. The experiment consisted of four blocks of 96 trials, preceded by a short practice block of 24 trials. In order to prevent habituation to the emotional singleton, in two-thirds of trials no fearful face was present; all three faces were neutral in expression.
In the remaining one-third of trials, there was one fearful face present; in one-half of these trials (1/6 of total trials), the fearful face appeared on one of the female distractors, and in the other half (1/6 of total trials) it appeared on the male target face. The emotional singleton served as the target on some trials because, if the emotional expression never appeared on the target, the target's location could be implicitly deduced from the presence of the emotional face on a distractor. Within each block, the type of trial (i.e., fearful face absent, fearful male target singleton present, or fearful female distractor face present) was randomized. The location of the identities and the orientation of each stimulus were randomized across trials. The identities of the faces were randomized across trials, but the presentation was constrained so that no face identity repeated on two successive trials.

1 Development of the MacBrain Face Stimulus Set was overseen by Nim Tottenham and supported by the John D. and Catherine T. MacArthur Foundation Research Network on Early Experience and Brain Development.

Figure 1. Example displays for neutral, emotional (fear) distractor singleton, and emotional (fear) target singleton conditions (not to scale). Equivalent displays were used for all experiments as specified in the methods. Please note that the female face in the top right side of each of the three images was not a stimulus in the experiments. It is only included here to comply with NimStim publishing guidelines.
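The trial composition and randomization constraints described in the Method can be sketched as follows. This is an illustrative reconstruction, not the authors' experiment code: the identity labels (`m1`–`f3`) are hypothetical, and because only three female identities supply two distractors per display, full non-repetition of identities across successive trials is not possible for distractors, so this sketch applies the no-repeat constraint to the target identity only (a simplifying assumption).

```python
import random

# Hypothetical identity labels standing in for the six NimStim identities.
MALE_IDS = ["m1", "m2", "m3"]
FEMALE_IDS = ["f1", "f2", "f3"]

def make_block(n_trials=96, seed=None):
    """Compose one block with the reported proportions: 2/3 all-neutral,
    1/6 fearful-distractor-singleton, 1/6 fearful-target-singleton trials."""
    rng = random.Random(seed)
    third = n_trials // 3
    conditions = (["neutral"] * (2 * third)
                  + ["distractor_singleton"] * (third // 2)
                  + ["target_singleton"] * (third // 2))
    rng.shuffle(conditions)  # randomize trial type within the block

    trials, prev_target = [], None
    for cond in conditions:
        # Simplified constraint: the target identity never repeats on
        # two successive trials (see lead-in note).
        target = rng.choice([m for m in MALE_IDS if m != prev_target])
        prev_target = target
        distractors = rng.sample(FEMALE_IDS, 2)
        trials.append({
            "condition": cond,
            "target": target,
            "distractors": distractors,
            "target_tilt": rng.choice(["left", "right"]),  # 15° tilt direction
            # The fearful expression appears on the target, on one randomly
            # chosen distractor, or not at all, depending on the condition.
            "fearful_face": (target if cond == "target_singleton"
                             else rng.choice(distractors)
                             if cond == "distractor_singleton" else None),
        })
    return trials
```

A block built this way contains 64 all-neutral, 16 distractor-singleton, and 16 target-singleton trials, matching the 96-trial blocks described above.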

Results and Discussion

Figure 2 presents the results as a function of the experimental conditions. Trials with an error or an RT greater than 2000 ms (0.18% of the total number of trials, SD = 0.2%; maximum for any one participant = 0.5%) were excluded from the RT analysis. A one-way within-subjects analysis of variance (ANOVA) on the mean correct RTs revealed a main effect of condition, F(2, 20) = 6.029, p < .05. Pairwise comparisons indicated that target RTs were significantly longer in the presence of a fearful distractor face compared to the all-neutral condition, t(10) = 3.91, p < .005, and the fearful target condition, t(10) = 3.07, p < .05. In addition, an analysis of covariance (ANCOVA) with participant gender as a covariate of no interest verified that these findings were not driven by one gender. The findings for condition did not change, F(2, 18) = 19.203, p < .001; there was no significant effect of gender, F(1, 9) = .420, p = .533; and there was no interaction between condition and gender, F(2, 18) = .463, p = .529. Error rates were low and did not vary between the fearful distractor (M = 6%), fearful target (M = 6%), and all-neutral conditions (M = 6%). These findings suggest that a fearful singleton distractor face is able to capture attention. We note that no facilitation with the fearful target singleton was found, possibly because the gender-based search task was sufficiently easy to perform and hence not sensitive enough to reveal further facilitation. Alternatively, the facilitatory effects caused by the capture of the target may have been offset by interference caused by the negative valence of the emotional face. This issue is further addressed in Experiment 2, in which we examined whether the results generalize to a positively valenced (happy) emotional singleton face.
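The exclusion criterion and the one-way within-subjects ANOVA reported above can be sketched as follows. This is an illustrative reconstruction, not the authors' analysis code; the trial dictionaries and the per-participant score matrix are hypothetical data structures.

```python
from statistics import mean

def exclude(trials, rt_cutoff_ms=2000):
    """Drop error trials and trials slower than the cutoff, as reported."""
    return [t for t in trials if t["correct"] and t["rt"] <= rt_cutoff_ms]

def rm_anova(scores):
    """One-way repeated-measures ANOVA.

    `scores` is one row per participant, each row holding that
    participant's mean correct RT in each condition (same order for all).
    Returns (F, df_effect, df_error)."""
    n, k = len(scores), len(scores[0])
    grand = mean(x for row in scores for x in row)
    cond_means = [mean(row[j] for row in scores) for j in range(k)]
    subj_means = [mean(row) for row in scores]
    # Partition the total sum of squares into condition, subject,
    # and residual (error) components.
    ss_cond = n * sum((m - grand) ** 2 for m in cond_means)
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ss_err = ss_total - ss_cond - ss_subj
    df_cond, df_err = k - 1, (n - 1) * (k - 1)
    f = (ss_cond / df_cond) / (ss_err / df_err)
    return f, df_cond, df_err
```

For 11 participants and three conditions, the error degrees of freedom are (11 − 1)(3 − 1) = 20, matching the reported F(2, 20).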

Figure 2. Mean RTs (in milliseconds) to locate the target face in the presence (either the target or a distractor) or absence of a fearful face. Inset image is an example of a distractor fearful singleton.

Experiment 2

Method

Participants. A total of 24 people (8 male) between 18 and 42 (M = 26) years of age were recruited in the same manner as for Experiment 1. All reported normal or corrected-to-normal vision.

Stimuli and procedure. The apparatus, design, and procedure were exactly the same as in Experiment 1. Neutral faces were of the same identities as in Experiment 1, but the emotional stimuli consisted of happy faces.

Results and Discussion

Figure 3 presents the results as a function of the three experimental conditions. Trials with an error or RTs greater than 2000

ms (0.29% of the total number of trials, SD = 0.2%; maximum for any one participant = 0.89%) were excluded from the RT analysis. A one-way within-subjects ANOVA on the mean correct RTs revealed a main effect of condition, F(2, 46) = 9.197, p < .001. Planned pairwise comparisons indicated that target RTs were significantly longer in the presence of a happy distractor singleton face compared to the all-neutral condition, t(23) = 2.72, p < .005, and that target RTs were significantly shorter when the target was a happy face compared to the all-neutral condition, t(23) = 3.06, p < .01 (see Figure 3). An additional ANCOVA with participant gender as a covariate of no interest verified that these findings were not driven by one gender. The findings for condition did not change, F(2, 44) = 7.431, p < .005; there was no significant effect of gender, F(1, 22) = 1.542, p = .227; and there was no interaction between condition and gender, F(2, 44) = .272, p = .719. Error rates were again low and did not vary between the happy distractor (M = 5%), happy target (M = 5%), and all-neutral conditions (M = 4%). Experiment 2 demonstrated that task-irrelevant distractor happy singletons caused distraction and a slowing of RTs, whereas a target happy singleton caused a facilitation effect. Together, the findings from Experiments 1 and 2 suggest that irrelevant distractors of both positive and negative valence can capture attention and disrupt task performance. However, the results of the two experiments differed in terms of the facilitation of search RTs in the presence of an emotional target face, with such facilitation being found for a happy (Experiment 2) but not a fearful (Experiment 1) target singleton. This contrast may point to a general asymmetry between the facilitatory attentional-capture effects of negative and positive valence. This is discussed further following the results of Experiment 3, in which a different negative facial expression of emotion was used (anger).

Figure 3. Mean RTs (in milliseconds) in Experiment 2 as a function of the presence (as a distractor or target) or absence (all neutral) of an emotional happy face singleton. Inset image is an example of a distractor happy singleton.

Experiment 3

In Experiment 3, the effects of irrelevant attentional capture were tested using an angry singleton face expression. The angry expressions presented were open-mouthed (with teeth on display). This was important because the difference in facilitation results between Experiments 1 and 2 could have been due to a difference in low-level visual factors rather than in valence. The happy facial expressions used in Experiment 2 were open-mouthed (with teeth on display), whereas the fearful facial expressions were closed-mouthed. The open mouth and teeth features may have led to better discriminability of the happy faces and, consequently, facilitation in the target search condition. Thus, in Experiment 3, we examined whether the results of Experiment 2 can generalize to a negatively valenced, open-mouthed facial expression of emotion (anger).

Method

Participants. A total of 16 people (7 male) between 21 and 40 (M = 26) years of age were recruited in the same manner as for the other experiments. All reported normal or corrected-to-normal vision.

Stimuli and procedure. The apparatus and procedure were exactly the same as in the previous experiments. Stimuli for the neutral faces were also the same as in Experiments 1 and 2, with the exception of the emotional stimuli, which in this case were open-mouthed angry faces.

Results and Discussion

Figure 4 presents the results as a function of the three experimental conditions. Again, trials with an error or RTs greater than 2000 ms (0.4% of the total number of trials, SD = 0.2%; maximum for any one participant = 0.9%) were excluded from the analysis. A one-way ANOVA on the mean correct RTs revealed a main effect of condition, F(2, 28) = 10.483, p < .001. As can be seen in Figure 4, target RTs were longer in the presence of an angry distractor singleton face compared to the all-neutral condition, t(15) = 5.358, p < .001, and the angry target face condition, t(15) = 2.384, p < .05. There was no difference in target RTs between the angry target and all-neutral conditions, t(15) = .806, p = .434.

Figure 4. Mean RTs (in milliseconds) to locate the target face in the presence (either the target or a distractor) or absence of an angry face. Inset image is an example of a distractor angry singleton.


An ANCOVA with participant gender as a covariate of no interest verified that these findings were not driven by one gender. The findings for condition did not change, F(2, 24) = 5.370, p < .05; there was no significant effect of gender, F(1, 12) = .335, p = .574; and there was no interaction between condition and gender, F(2, 24) = .402, p = .673. Error rates were again low and did not significantly vary between the angry distractor singleton (M = 4%), angry target (M = 5%), and all-neutral conditions (M = 4%). In line with Experiments 1 and 2, the emotional (angry) distractor singleton faces captured attention and thus produced a search cost. However, the angry target faces did not produce any facilitation (the small and nonsignificant numerical difference was in the reverse direction; see Figure 4). Because this experiment used open-mouthed facial expressions, it is unlikely that salient low-level visual features associated with the open mouth caused the facilitation effect seen with happy target faces in Experiment 2. Additionally, because RTs in Experiment 3 were clearly not shorter than RTs in Experiment 2 (the overall RTs tended to be longer; compare Figures 3 and 4), the lack of facilitation in Experiment 3 is unlikely to be the result of a floor effect. The difference in the ability to facilitate search may thus indicate a genuine asymmetry between negative and positive emotional expressions. However, before drawing such a conclusion, we addressed an additional alternative account.

Experiment 4

In the first three experiments, the search target was always male, and the distractors were always female. It is therefore possible that the pattern of effects found (interference for distractor singleton faces of negative expression but facilitation only for target faces with a positive expression) was due to an interaction with face gender. For example, it is possible that the happy facial expressions were easy to detect irrespective of face gender, whereas the negative facial expressions were more difficult to decipher (e.g., Goren & Wilson, 2006) for male (always a target in the previous experiments) than for female (always a distractor) facial expressions of emotion. Indeed, there is some evidence for female superiority in displaying at least some facial expressions of emotion (e.g., Weiss et al., 2006). To address this alternative account, the target and distractor genders were reversed in Experiment 4: here, we examined the effect of an angry face singleton on search for a female target face among male distractor faces.

Method

Participants. Ten people (four male) between 18 and 43 (M = 26) years of age were recruited in the same manner as for the other experiments. All reported normal or corrected-to-normal vision.

Stimuli and procedure. The apparatus and procedure were exactly the same as in the previous experiments. Stimuli were the same as in Experiment 3; open-mouthed angry faces and neutral faces were used. In Experiment 4, the target face was female (where it was male in the previous experiments), and the two distractor faces were male (where they were female in the previous experiments). Thus, in one-sixth of total trials, the female target face was angry in expression and the two male distractors were neutral in expression, and in another sixth of total trials one of the male distractor faces was angry in expression, whereas the other male distractor face and the female target face were neutral in expression (as in the previous experiments, in the remaining two-thirds of trials, all faces were neutral in expression).

Results and Discussion

Figure 5 presents the results as a function of the experimental conditions. Trials with an error or RTs greater than 2000 ms (0.16% of the total number of trials, SD = 0.1%; maximum for any one participant = 0.4%) were excluded from the analysis. A one-way ANOVA on the mean correct RTs revealed a main effect of condition, F(2, 18) = 4.932, p < .005. As shown in Figure 5, target RTs were significantly longer in the presence of an angry distractor singleton face compared to the all-neutral condition, t(9) = 5.67, p < .001, and the angry target condition, t(9) = 3.52, p < .005. There was no difference in target RTs between the angry target and all-neutral conditions, t(9) = −.354, p = .732. An ANCOVA with participant gender was not permitted in this experiment because the variance between the genders was unequal. However, we note that the singleton distractor effect for the male participants (distractor RT M = 718 ms, all-neutral RT M = 658 ms, average difference = 61 ms) was equivalent to that found for the female participants (distractor RT M = 720 ms, all-neutral RT M = 662 ms, average difference = 59 ms). Error rates were low and did not vary between the angry male distractor singleton (M = 6%), angry female target (M = 6%), and all-neutral conditions (M = 6%). In sum, Experiment 4 replicated the results of Experiment 3 despite the reversal of target and distractor gender. This allows us to generalize the effects of attentional capture by a task-irrelevant emotional face of negative expression (angry) across face gender.

Figure 5. Mean RTs (in milliseconds) to locate the female target face in the presence (either the female target or a male distractor) or absence of an angry face. Inset image is an example of a distractor angry singleton.

Experiment 5

Experiment 5 further examined whether the emotional singleton interference effect established in our new paradigm can indeed be


attributed to the emotionality of the singleton face expression, rather than to any visual difference between the emotional and neutral faces. In Experiment 5, we used the same gender-based search task as in the previous experiments, but this time the singleton face (target or distractor) had a neutral expression, whereas the other faces all had an angry expression. In two-thirds of the trials, all three faces were angry. In the remaining one-third of trials, one of the faces was a neutral singleton; in one-half of these trials, the target face was neutral, and in the other half a distractor was neutral. If the interference effect seen in the first four experiments can be attributed to a visual "odd-one-out" effect, then responses to the target should be slower when a singleton neutral distractor is present (compared to the singleton-absent and singleton-neutral target conditions).

Method

Participants. A total of 14 people (six male) between 19 and 40 (M = 27) years of age were recruited in the same manner as for the other experiments. All reported normal or corrected-to-normal vision.

Design, apparatus, stimuli, materials, and procedure. The apparatus, design, and procedure were exactly the same as in Experiments 1–3. The stimuli were those used in Experiment 3. Thus, the same neutral faces and angry (open-mouthed) faces as in Experiment 3 were used here, and the target faces were all male, whereas the distractor faces were all female.

Results and Discussion

Figure 6 presents the results as a function of the experimental conditions. Trials with an error or RTs greater than 2000 ms (0.08% of the total number of trials, SD = 0.1%; maximum for any one participant = 0.35%) were excluded from the RT analysis. A one-way ANOVA on the mean correct RTs revealed a main effect of condition, F(2, 26) = 8.315, p < .005. As shown in Figure 6, RTs were significantly longer for responses to a neutral target face compared to the all-angry condition, t(13) = 2.633, p < .05, and the neutral distractor singleton condition, t(13) = −3.45, p < .005. There was no difference in RTs between the neutral distractor singleton condition and the all-angry condition, t(13) = −.097, p = .924. Error rates were low and did not significantly vary between the all-angry condition (M = 5%), the neutral target condition (M = 7%), and the neutral distractor singleton condition (M = 6%; F < 1).

Figure 6. Mean RTs (in milliseconds) to locate the target face in the presence (either the target or a distractor) or absence of a neutral face. Inset image is an example of a distractor neutral singleton.

The results clearly indicate that, unlike the presence of an emotional distractor singleton, the presence of a neutral distractor singleton has no effect on performance. These findings rule out a visual "odd-one-out" account for the irrelevant emotional singleton effects we established in Experiments 1–4. The slowing found for a target with a neutral expression compared to the all-angry condition was unexpected, but a plausible account points to the combination of a neutral target with two angry distractors in this condition: the slowing of responses may reflect interference produced by the presence of two angry distractors. This explanation is more likely than one that attributes the slowing to interference produced by the presence of a singleton neutral target, because a neutral singleton failed to produce any interference effect when it appeared on a distractor item. The attribution of this effect to the presence of two angry distractors raises the interesting possibility that capture by an angry emotional expression does not require a singleton presentation. Sociobiological considerations suggest that it is just as valuable to pay attention to two angry faces as it is to one. Exploring this further would be an interesting direction for future research. For now, we can conclude that Experiment 5 clearly rules out a visual odd-one-out account for the attentional capture by emotional singleton distractors.

General Discussion

People's ability to detect emotional expressions that are irrelevant to their current task could be critical for survival or for initiating or avoiding important social interactions. However, as we discussed in the Introduction, in all previous visual search reports of an attentional advantage for emotional faces, emotion was relevant to the task in some way. In contrast, our findings consistently demonstrate emotional attentional capture when emotion is clearly irrelevant to the task. Because we used a paradigm in which the emotional aspects of the distractor stimuli are entirely irrelevant to the task, and because we did not observe a distractor effect for an odd-one-out neutral distractor, our findings strongly suggest that the distractor interference effects found are due to attentional capture by the emotional content of the distractor faces. Moreover, we have established attentional capture for emotional distractor faces across the expressions of anger, fear, and happiness, and for anger across face gender too. Although future research can address the effects of other combinations of emotional expressions and face gender, the present research has clearly established that irrelevant emotional face expressions can capture attention, in accord with the sociobiological importance of such an effect. This conclusion is consistent with a recent physiological demonstration that a fearful face can elicit the N2pc, an ERP signature of attentional selection, when presented during a luminance-monitoring task to which it was clearly irrelevant (Eimer & Kiss,
2007). However, the fearful face failed to elicit an N2pc in the presence of a luminance target and failed to produce any behavioral interference effect. It is thus unclear whether the ERP effect indicates involuntary attentional capture, or whether the participants did not attempt to ignore the (nondistracting) face in the first place, or even willfully paid attention to the faces (perhaps simply out of interest), as might be expected given the undemanding nature of a task of monitoring for a rare luminance increase at fixation (a task often used to provide a "passive-viewing" baseline condition in neuroimaging studies). In contrast, in the present study, participants performed a rather demanding visual search task, and capture by the irrelevant emotional face distractors was detrimental to task performance, as indicated by their behavioral interference effects. Our findings therefore suggest true emotional "capture" of attention rather than any strategy-based voluntary allocation of attention. Notice that strategy-based allocation of attention to emotional stimuli cannot be ruled out even when emotion is not explicitly defined as task-relevant. In cases where the visual search task concerns the same visual features as those used to convey emotion (e.g., both concern line orientations; see Horstmann & Becker, 2008), the allocation of attention to the emotional feature may simply be driven by a strategic allocation of attention to the same visual features as those defining the target. One can only confidently conclude that emotional expression has captured attention involuntarily when emotion is irrelevant to the task both in its semantic content and in its constituting features.
Horstmann and Becker (2008) found that, whereas a singleton schematic face with its mouth curve pointing down (creating a sad expression) guided or misguided attention in searches for a target defined by nose orientation ("<" or ">"), the singleton face failed to affect performance once the search task concerned the nose color instead of its orientation. This failure to find irrelevant emotion capture appears at first sight to be inconsistent with our conclusion that emotion can capture attention even when it is entirely irrelevant to the task. However, careful consideration of Horstmann and Becker's task suggests that it may have had reduced sensitivity to reveal irrelevant emotional capture. Specifically, the face emotion was conveyed with a single feature (mouth curve orientation). Given the configural nature of emotion perception (e.g., Thompson, 1980), a single feature is likely to produce a weaker emotion signal. In addition, the feature-based search task used is likely to be more sensitive to the effects of low-level, "bottom-up" visual factors such as the relative visual salience of the specific feature used. Attention research (Theeuwes, 1994) has previously indicated that visual search for an odd-colored target is likely to be immune to attentional capture by an irrelevant odd orientation or shape (because the target feature [color] is likely to be more visually salient than the distractor [orientation] feature). Thus, although the study by Horstmann and Becker (2008) is important in highlighting boundary conditions for irrelevant emotional capture, it does not cast any doubt on the robustness of the irrelevant emotional capture we establish.
Our findings indicate that when a higher-level perception task (involving face gender judgments) is used and the distractor faces display rich emotional expressions (as conveyed with photographic images), irrelevant attentional capture by emotion is found, which generalizes across a range of emotional expressions, including both negative (fear, anger) and positive (happy) emotions.

The findings that expressions of both negative and positive valence captured attention even when they were irrelevant to the task substantiate an irrelevant emotional capture effect and shed light on another important issue: the potential asymmetry between negative and positive emotional valence (e.g., Nasrallah, Carmel, & Lavie, 2009). Several previous studies have reported a detection advantage only for threatening or negatively valenced emotional facial expressions (e.g., Calvo et al., 2006; Eastwood et al., 2001; Ohman et al., 2001; Schubo et al., 2006). In contrast, our study indicated that task-irrelevant happy distractors were just as distracting as fearful and angry facial expressions, which suggests that the ability of task-irrelevant emotional faces to capture attention is not restricted to negative emotional expressions but may be a general effect of emotion, including positive emotions. Interestingly, only capture by happy faces involved facilitation of RT when the target face was an emotion singleton, whereas the capture effect for negative emotion was confined to distractor interference effects. This contrast raises the possibility that the cost associated with processing negative expressions is not merely due to capture of attention by the wrong item (distractor rather than target). There appears to be an additional slowing effect due simply to the very processing of the negative emotion and its unpleasant connotations. Such a cost can offset any potential benefit of capture for negative target singletons and may result in difficulty disengaging from negative emotional faces (e.g., Fox, Russo, & Dutton, 2002). A disengagement cost for negative emotions is a particularly appealing account in our task because the task required a response to the target orientation rather than to its emotional expression. Thus, one would have to disengage from processing the emotional attribute that captured attention to the face in the first place.
Although this disengagement can occur rapidly for the happy face targets, it takes longer for the negative fearful targets. Although compelling, at present this account remains speculative. Future research, incorporating for example a dot probe within our novel paradigm, is needed to further test this disengagement account of the valence asymmetry in the facilitatory effects of attentional capture. Future research may also examine whether emotional attentional capture survives under conditions that eliminate other forms of emotionally neutral capture. For example, the search task we employed allowed for the use of a singleton-detection mode (e.g., Bacon & Egeth, 1994), because the target was defined by being of an odd gender (e.g., male among females). Would emotional faces capture attention even when irrelevant to a task that cannot be conducted on the basis of a singleton-detection mode (e.g., a search for a specific face identity)? Such a result would establish the intriguing possibility that capture of attention by irrelevant emotions is not as readily modulated by top-down, task-oriented processes as nonemotional capture: people may continue to pay attention to emotional information because this information, even when task-irrelevant, still has the potential to be of even higher value than successful performance of the current task.

References

Bacon, W. F., & Egeth, H. E. (1994). Overriding stimulus-driven attentional capture. Perception & Psychophysics, 55(5), 485–496.
Calvo, M. G., Avero, P., & Lundqvist, D. (2006). Facilitated detection of angry faces: Initial orienting and processing efficiency. Cognition and Emotion, 20, 785–811.
Darwin, C. (1904). The expression of emotions in man and animals. London, United Kingdom: Murray. (Original work published 1872)
Eastwood, J. D., Smilek, D., & Merikle, P. M. (2001). Differential attentional guidance by unattended faces expressing positive and negative emotion. Perception & Psychophysics, 63, 1004–1013.
Eimer, M., & Kiss, M. (2007). Attentional capture by task-irrelevant fearful faces is revealed by the N2pc component. Biological Psychology, 74, 108–112.
Fenske, M. J., & Eastwood, J. D. (2003). Modulation of focused attention by faces expressing emotion: Evidence from flanker tasks. Emotion, 3, 327–341.
Fox, E., Russo, R., Bowles, R. J., & Dutton, K. (2001). Do threatening stimuli draw or hold visual attention in sub-clinical anxiety? Journal of Experimental Psychology: General, 130(4), 681–700.
Fox, E., Russo, R., & Dutton, K. (2002). Attentional bias for threat: Evidence for delayed disengagement from emotional faces. Cognition and Emotion, 16, 355–379.
Gerritsen, C., Frischen, A., Blake, A., Smilek, D., & Eastwood, J. D. (2008). Visual search is not blind to emotion. Perception & Psychophysics, 70, 1047–1059.
Goren, D., & Wilson, H. R. (2006). Quantifying facial expression recognition across viewing conditions. Vision Research, 46, 1253–1262.
Hahn, S., & Gronlund, S. D. (2007). Top-down guidance in visual search for facial expressions. Psychonomic Bulletin & Review, 14, 159–165.
Hansen, C. H., & Hansen, R. D. (1988). Finding the face in the crowd: An anger superiority effect. Journal of Personality and Social Psychology, 54, 917–924.
Horstmann, G., & Becker, S. (2008). Attentional effects of negative faces: Top-down contingent or involuntary? Perception & Psychophysics, 70, 1416–1434.
Juth, P., Lundqvist, D., Karlsson, A., & Ohman, A. (2005). Looking for foes and friends: Perceptual and emotional factors when finding a face in the crowd. Emotion, 4, 379–395.
Lamy, D., Amunts, L., & Bar-Haim, Y. (2008). Emotional priming of pop-out in visual search. Emotion, 8, 151–161.
Mogg, K., & Bradley, B. P. (1999). Orienting of attention to threatening facial expressions presented under conditions of restricted awareness. Cognition & Emotion, 13, 713–740.
Nasrallah, M., Carmel, D., & Lavie, N. (2009). "Murder she wrote": Enhanced sensitivity to negative word valence. Emotion, 9(5), 609–618.
Ohman, A. (1993). Fear and anxiety as emotional phenomena: Clinical phenomenology, evolutionary perspectives, and information processing mechanisms. In M. Lewis & J. M. Haviland (Eds.), Handbook of emotions (pp. 511–536). New York: Guilford Press.
Ohman, A., Lundqvist, D., & Esteves, F. (2001). The face in the crowd revisited: A threat advantage with schematic stimuli. Journal of Personality and Social Psychology, 80, 381–396.
Pinkham, A. E., Griffin, M., Baron, R., Sasson, N. J., & Gur, R. C. (2010). The face in the crowd effect: Anger superiority when using real faces and multiple identities. Emotion, 10(1), 141–146.
Schubo, A., Meinecke, C., Abele, A., & Gendolla, G. H. E. (2006). Detecting emotional faces and features in a visual search paradigm: Are faces special? Emotion, 6, 246–256.
Stein, T., Zwickel, J., Ritter, J., Kitzmantel, M., & Schneider, W. X. (2009). The effect of fearful faces on the attentional blink is task dependent. Psychonomic Bulletin & Review, 16(1), 104–109.
Theeuwes, J. (1991). Cross-dimensional perceptual selectivity. Perception & Psychophysics, 50, 184–193.
Theeuwes, J. (1992). Perceptual selectivity for color and form. Perception & Psychophysics, 51, 599–606.
Theeuwes, J. (1994). Stimulus-driven capture and attentional set: Selective search for color and visual abrupt onsets. Journal of Experimental Psychology: Human Perception and Performance, 20, 799–806.
Thompson, P. (1980). Margaret Thatcher: A new illusion. Perception, 9(4), 483–484.
Tottenham, N., Tanaka, J., Leon, A. C., McCarry, T., Nurse, M., Hare, T. A., . . . Nelson, C. A. (2009). The NimStim set of facial expressions: Judgments from untrained research participants. Psychiatry Research, 168, 242–249.
Weiss, E. M., Kohler, C. G., Brensinger, C. M., Bilker, W. B., Loughead, J., Delazer, M., & Nolan, K. A. (2007). Gender differences in facial emotion recognition in persons with chronic schizophrenia. European Psychiatry, 22, 116–122.
Williams, M. A., Moss, S. A., Bradshaw, J. L., & Mattingley, J. B. (2005). Look at me, I'm smiling: Visual search for threatening and nonthreatening facial expressions. Visual Cognition, 12, 29–50.

Received October 14, 2009
Revision received September 21, 2010
Accepted November 15, 2010
