Visual processing in a facial emotional context: An ERP study




International Journal of Psychophysiology 71 (2009) 25–30

Journal homepage: www.elsevier.com/locate/ijpsycho

Andrés Antonio González-Garrido a,b,⁎, Julieta Ramos-Loyo a, Adriana L. López-Franco a, Fabiola R. Gómez-Velázquez a

a Instituto de Neurociencias (Universidad de Guadalajara), Mexico
b O.P.D. Hospital Civil de Guadalajara, Mexico

Article info

Available online 30 July 2008

Keywords: Face recognition; Facial emotional expression; ERP; Visual comparison; Spatial attention

Abstract

Facial emotional processing can be bypassed when faces are task-irrelevant and attention is diverted, although this effect has not been examined when the cognitive task occurs within a facial background. Event-related potential (ERP) measures were obtained to evaluate the influence of different irrelevant facial emotional contexts on performance of a simultaneous "ear-size" detection task in five processing contexts: (1) neutral face, (2) happy face, (3) fearful face, (4) facial contour, and (5) non-facial context. Reaction times were longer when visual processing occurred in a facial context, regardless of its emotional content. The neutral-face context also yielded a lower number of correct responses, and fewer incorrect responses were found during the presentation of fearful faces than in the neutral facial context. ERP morphology was similar across all conditions, but ERP component amplitudes for the non-facial context differed from those of the alternative conditions from 100 to 300 ms, and a similar N170-like potential was also observed. The findings suggest that simultaneous irrelevant emotional facial stimuli may affect cognitive processing by altering two temporally overlapping neural mechanisms: one responsible for earlier face detection, the other involved in emotional recognition. The first might delay simultaneous cognitive actions by diverting attention, whereas the latter may enhance the availability of processing resources through the participation of a subcortical pathway.

1. Introduction

Processing face information involves integrating activation within a network of several cortical regions in which the fusiform gyrus participates (Haxby et al., 1994; Kanwisher et al., 1997; Sergent and Signoret, 1992), together with other areas in the visual cortex, the limbic system, and the prefrontal cortex (Ishai et al., 2002, 2005). Haxby et al. (2000) suggested that facial perception is mediated by a distributed neural network in the human brain and proposed a model that includes a 'core' and an 'extended' system. The core system comprises the inferior occipital gyrus, the fusiform gyrus and the superior temporal sulcus, and involves mechanisms for coding changeable and invariant facial properties (Calder and Young, 2005). The extended system includes the amygdala and the insula, both of which are involved in the perception of emotional facial expressions, particularly fear and disgust (Hennenlotter and Schroeder, 2006). The emotional perception of faces is generally considered to be processed by a neural circuitry that includes the superior temporal gyrus and the amygdala (Adolphs et al., 2002; Winston et al., 2003).

⁎ Corresponding author. Instituto de Neurociencias (Universidad de Guadalajara), Francisco de Quevedo 180, Col. Arcos Vallarta, Guadalajara, Jalisco, 44130, Mexico. Tel.: +52 33 38 18 07 40; fax: +52 33 36 15 52 01. E-mail address: [email protected] (A.A. González-Garrido).
doi:10.1016/j.ijpsycho.2008.07.017. © 2008 Elsevier B.V. All rights reserved.

Several neuroimaging studies have shown increased amygdala activity when viewing fearful (Breiter et al., 1996; Hariri et al., 2003; Morris et al., 1996; Phillips et al., 1997; Whalen et al., 2001) and happy facial expressions (Breiter et al., 1996; Dolan et al., 1996). In addition, amygdala lesions have been shown to impair fear recognition (Yang et al., 2002). Although it has been postulated that affect and cognition are mediated by separate and partially independent systems (Le Doux, 1998; Zajonc, 1980), considerable evidence indicates that responses to facial emotional stimuli, especially fear and happiness, are modulated by attentional processes (Holmes et al., 2003; McKenna et al., 2001; Pessoa et al., 2002; Pourtois et al., 2004, 2005, 2006). Faces can be very effective at capturing attention, even more so than other types of changing objects (Jenkins et al., 2005; Ro et al., 2001). However, facial perception is conceived of as an automatic process that attracts abundant attentional resources regardless of their importance for the successful completion of any concurrent cognitive operation. Consequently, the probable effects associated with the emotional content of faces could be masked by facial processing itself, especially when emotional faces are task-irrelevant (Gonzalez-Garrido et al., 2007; Vuilleumier et al., 2001). Because ERP techniques have high temporal resolution, they have been used to assess the association between attention and the processing of emotional facial content, demonstrating that when emotional faces are task-irrelevant and attention is diverted away, facial emotional processing can be evaded (Eimer et al., 2003; Holmes et al., 2003, 2006; Gonzalez-Garrido et al., 2007).


Holmes et al. (2006) evaluated whether the processing of emotional expressions of faces presented within foveal vision could be modulated by spatial attention. They presented either fearful or neutral faces foveally at fixation, flanked by two vertical lines that were either identical or different in length. Participants had to detect immediate repetitions of an identical pair of lines on consecutive trials while being explicitly instructed to ignore the faces. An early positive ERP component that did not last beyond 220 ms post-stimulus suggested that the initial, fast stage of emotional expression processing was unaffected by attention. In this experimental design, however, the lines and the faces acted as two different attention targets constituting two distinctly different items. Previous findings have revealed differences between discriminating two or more attributes of the same object and making similar discriminations across two different objects, suggesting that attentional selection may be object-based (Barnes et al., 2001; Law and Abrams, 2002). Consequently, the interpretation of the early stages of emotional expression processing might be biased by the experimental use of two different and unrelated objects. The present study was designed to evaluate the effect of different irrelevant facial emotional contexts on the performance of a simultaneous visual comparison task when faces are presented within foveal vision, in the very region where the visual comparisons take place.

2. Method

2.1. Subjects

A total of 16 healthy university student volunteers (10 males) participated in the experiment (mean age = 25.94 years, SD = 3.79). Inclusion criteria were right-handedness and normal or corrected-to-normal vision. Exclusion criteria, assessed through clinical interviews, were a history of psychopathology (for self or immediate relatives), epilepsy, head injury, and drug or alcohol use within 24 h prior to testing.

2.2. Design and procedure

2.2.1. Behavioral data and experimental task

Behavioral and ERP data were evaluated in five conditions representing different visual contexts: (1) neutral face, (2) happy face, (3) fearful face, (4) facial contour, and (5) non-facial context. Conditions 4 and 5 were the non-facial control conditions, whereas conditions 1, 2, and 3 used 18 full-color, 16 cm × 13 cm pictures of Latin models (3 males, 3 females) with happy, neutral, and fearful facial expressions as visual stimuli. These facial expressions had been categorized correctly, with a hit rate above 90%, by a pool of 20 comparable subjects in a pilot study. Viewing distance was 60 cm, which produced stimulus visual angles of approximately 12° horizontal and 15° vertical. Fig. 1 illustrates the different visual contexts. Facial contours were built by randomizing the pixels of all the facial image stimuli from condition 1 while keeping the outer facial contour intact. Condition 5 consisted of a cross-line whose vertical and perpendicular horizontal axes matched the corresponding dimensions of the faces used as stimuli; the horizontal line was displayed to coincide with the bilateral tragus level.

Using the five visual contexts as backgrounds, the task consisted of a go-go paradigm in which subjects had to determine, as quickly as possible, on which side a smaller ear appeared, pressing the right or left keyboard button accordingly (50% of the trials). During the remaining trials, in which the ears were identically regular in size (25%) or equally shrunken (25%), subjects were instructed to press an alternate (central) key. The stimulus manipulation was achieved by digitally reducing the total size of the target ears by 30%.

Participants were seated comfortably in a quiet, dimly lit room. Visual stimuli were presented on an SVGA monitor (refresh rate: 100 Hz). Each stimulus was presented for 500 ms in the center of the screen, with an inter-stimulus interval of 1200 ms. Three blocks of 200 trials each (600 trials in total) were presented, combining the five randomly distributed main conditions. After each block, subjects had a brief rest period. Each condition comprised 120 trials divided equally between trials with similar (60 trials) and different (60 trials) ear sizes. The presentation order of the blocks, the side on which the shrunken ears appeared, and the presentation order of the trials within each block were all counterbalanced. The number of correct responses and reaction times were measured during task performance, as sketched below.
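For concreteness, the geometry and trial bookkeeping above can be summarized in a short Python sketch. The helper names are hypothetical (the original presentation software is not specified), but the numbers follow the text: a 16 cm × 13 cm stimulus at 60 cm subtends roughly 15° × 12°, and each condition contributes 120 trials (60 same-size, 60 different-size) to the 600-trial session.

```python
import math
import random

def visual_angle_deg(size_cm: float, distance_cm: float) -> float:
    """Full visual angle subtended by a stimulus of the given extent."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

print(round(visual_angle_deg(13, 60), 1))  # 12.4 deg horizontal
print(round(visual_angle_deg(16, 60), 1))  # 15.2 deg vertical

CONTEXTS = ["neutral", "happy", "fearful", "contour", "non-facial"]

def build_blocks(seed: int = 0) -> list[list[dict]]:
    """Counterbalanced 600-trial session: 120 trials per context,
    50% different-size (left/right target) and 50% same-size trials,
    randomly distributed over three 200-trial blocks."""
    rng = random.Random(seed)
    trials = []
    for ctx in CONTEXTS:
        trials += [{"context": ctx, "target": s} for s in ["left", "right"] * 30]
        trials += [{"context": ctx, "target": s} for s in ["regular", "shrunken"] * 30]
    rng.shuffle(trials)
    return [trials[i:i + 200] for i in range(0, 600, 200)]
```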

Fig. 1. Diagram of the experimental task showing trial types and the event sequence. Five categories of trials were defined by the type of stimulus context. Participants were instructed to press a left button when the shrunken ear appeared on the left, a central button when both ears were the same size, and a right button when the shrunken ear appeared on the right.


Table 1. Mean (SD) behavioral performance data

|                     | Happy faces   | Neutral faces | Fearful faces | Facial contour | Non-facial context | p value  |
|---------------------|---------------|---------------|---------------|----------------|--------------------|----------|
| Correct responses   | 85.56 (13.2)  | 81 (15.5)     | 86.51 (14.3)  | 84.25 (18.2)   | 87.19 (16.3)       | p < 0.05 |
| Incorrect responses | 29.63 (14.1)  | 34.25 (15.7)  | 28.13 (14)    | 29.6 (19)      | 28.8 (16.8)        | p < 0.05 |
| No responses        | 4.81 (5.3)    | 4.75 (4.8)    | 5.5 (5.8)     | 5.6 (5.2)      | 3.6 (3.6)          | p = 0.22 |
| Reaction times (ms) | 802.5 (123.2) | 807.5 (115.9) | 804.3 (116.9) | 795.8 (116.2)  | 762.4 (99.9)       | p < 0.05 |

SD = standard deviation. Reaction times are presented in milliseconds.

2.2.2. ERP acquisition

ERPs were obtained in all conditions from 100 ms before stimulus onset until 1000 ms after it. ERPs were recorded from the Fp1, Fp2, F7, F8, F3, F4, C3, C4, P3, P4, O1, O2, T3, T4, T5, T6, Fz, Cz, and Pz scalp electrode sites, according to the 10–20 system. The electrooculogram (EOG) was recorded from the outer canthus and infraocular orbital ridge of the right eye. Electrophysiological recordings were made using 10 mm diameter gold disk electrodes. All recording sites were referred to linked mastoids. Interelectrode impedances were below 5 kΩ. EEG and EOG signals were amplified with a bandpass of 0.05–30 Hz (3-dB cutoff points, 6 dB/octave roll-off) and a sampling period of 5 ms on the MEDICID-04 system. Single-trial data were examined off-line for averaging and analysis. Epochs were excluded from the averages on all channels when the voltage in a given recording epoch exceeded 100 μV on any EEG or EOG channel. At least 40 artifact-free correct trials were obtained for each condition and subject, yielding a signal-to-noise ratio higher than 1.5. Amplitude and latency of the ERP components were measured against a 100 ms pre-stimulus baseline. All scoring was conducted baseline-to-peak through visual inspection.

2.2.3. Data analysis

Repeated-measures analyses of variance (RM-ANOVAs) were applied to the behavioral data. Electrophysiological measures were assessed using randomized-block analyses of variance (conditions × recording sites) with the average voltage across each time window as the dependent variable. The latency and amplitude of each ERP component were quantified by the highest peak within each respective latency window. Taking the appearance of the stimuli as the initial time instant (t0), several time windows were used to examine the averaged ERP waveforms. Greenhouse-Geisser corrections to the degrees of freedom were applied as needed, and the corrected probabilities are reported. Post-hoc Tukey's HSD tests were used to explore the trend of the differences found.
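A minimal numpy sketch of the epoching and artifact-rejection rules in Section 2.2.2, assuming the signals are already band-pass filtered and expressed in microvolts; the array layouts and helper names are illustrative, not the actual MEDICID-04 pipeline.

```python
import numpy as np

FS = 200                # 5 ms sampling period -> 200 Hz
PRE, POST = 0.1, 1.0    # epoch from 100 ms before to 1000 ms after onset

def condition_average(eeg: np.ndarray, eog: np.ndarray, onsets) -> np.ndarray:
    """eeg: (n_channels, n_samples) in uV; eog: (n_samples,); onsets: sample
    indices of stimulus onsets for one condition. Returns the average ERP."""
    pre, post = int(PRE * FS), int(POST * FS)
    kept = []
    for t in onsets:
        ep, oc = eeg[:, t - pre:t + post], eog[t - pre:t + post]
        # reject the whole epoch if any EEG or EOG sample exceeds 100 uV
        if np.abs(ep).max() > 100 or np.abs(oc).max() > 100:
            continue
        # baseline-correct against the 100 ms pre-stimulus interval
        kept.append(ep - ep[:, :pre].mean(axis=1, keepdims=True))
    if len(kept) < 40:
        raise ValueError("fewer than 40 artifact-free trials in this condition")
    return np.mean(kept, axis=0)  # averaging n trials raises SNR ~ sqrt(n)
```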

3. Results

3.1. Behavioral

During task performance, reaction times differed significantly among conditions [F(2.6,39.5) = 4.01, p < 0.05]. Post-hoc analysis showed that they were significantly shorter in the non-facial context condition than in the happy [q(3.98) = 4.347; p < 0.05], neutral [q(4.82) = 4.898; p < 0.01] and fearful [q(3.98) = 4.552; p < 0.05] conditions. The number of correct responses also differed significantly among conditions [F(3.2,47.3) = 2.83, p < 0.05]. Post-hoc analysis showed that the number of correct responses was significantly lower for the neutral faces than for the non-facial context [q(3.98) = 4.316; p < 0.05].
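As a worked illustration of the statistics above, a one-way repeated-measures ANOVA on a 16-subject × 5-condition score matrix can be computed directly with numpy; the Greenhouse-Geisser epsilon, which shrinks the nominal df (4, 60) toward values such as the (2.6, 39.5) reported here, and the Tukey HSD post-hocs are omitted for brevity.

```python
import numpy as np

def rm_anova_oneway(x: np.ndarray):
    """x: (n_subjects, k_conditions), e.g. per-subject mean reaction times.
    Returns the F statistic with its uncorrected degrees of freedom."""
    n, k = x.shape
    grand = x.mean()
    ss_cond = n * ((x.mean(axis=0) - grand) ** 2).sum()    # between conditions
    ss_subj = k * ((x.mean(axis=1) - grand) ** 2).sum()    # between subjects
    ss_err = ((x - grand) ** 2).sum() - ss_cond - ss_subj  # residual
    df_cond, df_err = k - 1, (k - 1) * (n - 1)
    f = (ss_cond / df_cond) / (ss_err / df_err)
    return f, df_cond, df_err
```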

Fig. 2. Grand mean ERP waveforms for conditions A) neutral face; B) happy face; C) fearful face; D) facial contour; and E) non-facial context, during cognitive task performance.


There were also significant differences among conditions in the number of incorrect responses [F(3.1,45.3) = 2.96, p < 0.05]. Post-hoc analysis revealed that incorrect responses were significantly fewer for the fearful faces than for the neutral ones [q(3.98) = 4.379; p < 0.05] (Table 1).

3.2. Electrophysiological results

Fig. 2 shows four main components in a typical sequence of negative–positive–negative–positive peaks that reached their maxima at the vertex around 70, 170, 240 and 720 ms, respectively. ERP morphology was similar across the different conditions, although topographical distribution and amplitude values varied among the waveforms. The N70, P170 and N240 components were almost symmetrical and maximal over fronto-parietal areas, whereas the late positive component P720 showed a wider, asymmetrical distribution, predominantly frontal and located mainly over the right hemisphere. In the occipital leads there was an early P115 component with higher amplitude on the right side during condition 4 (facial contour), which was not evident when no facial context was present (condition 5). In addition, an N170 was also observable over the right temporo-occipital regions, but no clear differences among conditions were discernible.

3.3. ERP analysis

Visual inspection of the group-averaged waveforms indicated that voltage changes occurred over the midline and the frontal regions. Statistical analysis was therefore restricted to the averaged ERP waveforms from locations Fp1, Fp2, F3, F4, F7, F8, Fz and Cz, starting from stimulus onset, in the following five time windows: W1: 0–100; W2: 100–200; W3: 200–300; W4: 300–500; W5: 500–1000 ms.

In W1, none of the factors was statistically significant. However, in W2, the time window in which the P1 and N170 components appeared, the factor Condition reached significance [F(4,585) = 7.84; p < 0.0001], with a relevant interaction effect (Condition × Recording Site) [F(28,585) = 5.70; p < 0.0001]. Post-hoc analysis revealed that conditions 1 [q(4.60) = 7.15; p < 0.01], 2 [q(4.60) = 6.51; p < 0.01], 3 [q(4.60) = 4.63; p < 0.01] and 4 [q(4.60) = 4.61; p < 0.01] reached significantly higher amplitudes than the non-facial context, basically at the F7, F8, Fz and Cz locations.

W3 showed changes similar to those reported in W2. The factor Condition reached significance [F(4,585) = 15.60; p < 0.0001], with a relevant interaction effect [F(28,585) = 7.09; p < 0.0001]. Post-hoc analysis revealed that the non-facial context condition had significantly higher voltages at all of the fronto-central locations than conditions A (neutral face) [q(4.60) = 9.74; p < 0.01], B (happy face) [q(4.60) = 8.01; p < 0.01], C (fearful face) [q(4.60) = 8.85; p < 0.01] and D (facial contour) [q(4.60) = 5.12; p < 0.01], in a time window in which the N240 waveform was predominant.

The analysis of W4 showed a significant effect restricted to the factor Recording Site [F(7,585) = 2.12; p < 0.05], with a relevant interaction effect [F(28,585) = 14.03; p < 0.0001]. In order to evaluate the electrophysiological effects of the facial emotional contexts across cerebral hemispheres during W4, RM-ANOVAs [Conditions (neutral, happy, fearful) × Hemispheres (left, right) × Recording Sites (frontopolar, inferior frontal, superior frontal, parietal, anterior temporal)] were performed, using the average voltage across W4 as the dependent variable. Post-hoc analysis of the interaction among these factors showed that fearful faces reached significantly lower voltages than happy and neutral faces at F4, F8 and Fz, respectively. Fig. 3 illustrates the voltage amplitudes of the grand mean ERP waveforms for each of the facial conditions in both cerebral hemispheres.

W5 showed significant effects for the factors Condition [F(4,585) = 2.81; p < 0.05] and Recording Site [F(7,585) = 2.47; p < 0.05], with relevant interaction effects [F(28,585) = 12.60; p < 0.0001]. Post-hoc analysis revealed that neutral faces had significantly higher voltages than the non-facial context [q(3.86) = 4.29; p < 0.05], while significant differences between the levels of the factor Recording Site were restricted to those observed over the Fp1 and F7 scalp locations.
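The window measures underlying these analyses (mean voltage per window as the ANOVA dependent variable, plus baseline-to-peak amplitude and latency) can be expressed in a short numpy sketch; the paper scored peaks by visual inspection, so this automated version only approximates that procedure.

```python
import numpy as np

WINDOWS_MS = {"W1": (0, 100), "W2": (100, 200), "W3": (200, 300),
              "W4": (300, 500), "W5": (500, 1000)}
FS, PRE_MS = 200, 100  # 200 Hz sampling; epochs start 100 ms pre-stimulus

def window_measures(avg: np.ndarray, window: str, negative: bool = False):
    """avg: (n_channels, n_samples) baseline-corrected average ERP.
    Returns per-channel mean voltage, peak amplitude and peak latency (ms)."""
    lo, hi = WINDOWS_MS[window]
    a = (lo + PRE_MS) * FS // 1000   # post-stimulus ms -> epoch sample index
    b = (hi + PRE_MS) * FS // 1000
    seg = avg[:, a:b]
    mean_v = seg.mean(axis=1)        # dependent variable for the block ANOVAs
    idx = seg.argmin(axis=1) if negative else seg.argmax(axis=1)
    peak = seg[np.arange(seg.shape[0]), idx]
    latency_ms = lo + idx * 1000 / FS
    return mean_v, peak, latency_ms
```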

Fig. 3. Voltage amplitudes of grand mean ERP waveforms for conditions A) neutral face; B) happy face; and C) fearful face, during the time window from 300 to 500 ms post-stimulus.


4. Discussion

A facial emotional content is meaningless in a non-facial context, so an incoming stimulus must first be recognized as a face in order for its emotional facial content to be subsequently decoded. This outcome might depend on stimulus duration and task difficulty, among other variables closely related to the availability of attentional resources. The primary purpose of the present study was to evaluate the effect of varying task-irrelevant facial emotional contexts while subjects accomplished a simultaneous simple visual comparison task.

4.1. Behavioral performance

The emergence of an intact facial context interfered with and delayed task performance. This result could be anticipated in view of the reported effect of faces on attention (Eimer et al., 2003; Ellis and Ashbrook, 1988; Gonzalez-Garrido et al., 2007; Holmes et al., 2003) and the fact that faces are capable of modifying the focus of attention by reassigning working memory resources (Fenske and Eastwood, 2003; Mogg and Bradley, 1999).

In addition, during the condition in which fearful faces appeared, behavioral responses were as slow as in the remaining facial contexts, but more accurate. Several previous findings suggesting that responses to facial emotional stimuli are modulated by attentional processes seem to support the slowness to respond observed in the fearful context (Eimer and Holmes, 2007; Holmes et al., 2003; McKenna et al., 2001; Pessoa et al., 2002; Pourtois et al., 2004, 2005, 2006). However, a paradox emerges from the fact that, owing to the ecological relevance of emotional stimuli, emotionally relevant faces might increase attentional attraction, thereby strengthening their interference effect on cognitive processing. Recognition of fear is mandatory and independent of awareness (de Gelder et al., 2005), so the present results imply that the behavioral effects of fearful faces could be explained by two overlapping processes: (1) the facial nature of the stimulus might delay cognitive processing by diverting attention, while (2) its emotional content allocates additional neural processing resources, perhaps by enhancing neural activity in a more widespread network of participating regions in which the amygdala may play a special role. Empirical findings of higher numbers of correct responses for happy and fearful faces relative to a neutral facial context seem to support this notion (Gur et al., 2002; Habel et al., 2007).

4.2. ERP and face processing

Early endogenous ERP components have been shown to be sensitive to faces. The P1 component (Mangun, 1995) represents the first endogenous response to visual stimuli (Itier and Taylor, 2002; Linkenkaer-Hansen et al., 1998), and the subsequent N170 waveform is a more pronounced, and often earlier, response to faces than to objects, generally accepted as a fine index of structural facial encoding prior to recognition (Bentin et al., 1996; Bötzel et al., 1995; Eimer, 2000; George et al., 1996). In contrast, later components have repeatedly been associated with memory and retrieval processes.

A P1-like component was found that reached its highest amplitude over the right occipital region during the condition in which faces were replaced by their external contour. This result is compatible with that reported by Linkenkaer-Hansen et al. (1998), and therefore supports the occurrence of an early face-selective processing indexed by P1.

It has been postulated that the structural encoding of faces and the perception of their emotional expression are parallel and independent processes (Bruce and Young, 1986). Modulation of the N170 component by emotional facial expression was therefore not expected. However, although the two control conditions (facial contour and non-facial context) lacked facial complexity, neither the amplitude nor the latency of the N170 component was significantly modified. There is no clear explanation for this result, but an explanatory hypothesis could be that the permanence of some facial features, such as the ears, and of their facial–spatial relationships in a non-facial context could be sufficient to activate the feature-based and configural processes reflected by N170. In this sense, N170 may be sensitive not only to eyes (Bentin et al., 1996; Eimer, 1998) but also to the features of facial–spatial relationships.

4.3. ERP and emotional face processing

An early ERP positivity ranging from 120–180 ms has been described as sensitive to facial emotional expression in different experiments (Ashley et al., 2003; Eimer and Holmes, 2002), usually followed by a subsequent long-lasting and broadly distributed change. In the present experiment, there were no significant differences between the emotional facial conditions before 200 ms post-stimulus. One possible explanation for the lack of early ERP differences could be that the facial emotional content was completely irrelevant to success in the task, in contrast to tasks in which the cognitive strategy could take account of emotional facial features as part of the recognition set (i.e., detecting infrequent immediate repetitions of identical stimuli across successive trials), probably triggering a partially time-overlapped neural emotional processing. An alternative, but not exclusive, view is that the task difficulty and the related attentional demands could mask early ERP facial emotional effects, as has been found in experiments in which attention was directed away from facial stimuli (Eimer et al., 2003).

The main facial emotional-related ERP differences were obtained beyond 200 ms post-stimulus, located over the frontal areas. Similar broadly distributed late positivities during the processing of emotional facial stimuli have been postulated to reflect successively higher-level stages of emotional face processing, such as the conscious evaluation of emotional content (Eimer and Holmes, 2007). The scalp distribution of these later facial emotional-related ERP variations also seems to support this hypothesis. In particular, the late positive ERP component showed a differential topographic distribution that reached its highest amplitude over the right frontal scalp locations. It is reasonable to suppose that this distribution may be task-related, owing to the visuospatial nature of the cognitive assignment, in which changes in the direction of attention could be expected. This result is in line with several empirical findings describing late positive components sensitive to the direction of attention (Grent-'t-Jong et al., 2006; Jongen et al., 2007), perhaps implying the involvement of the lateral prefrontal cortex in the voluntary control and maintenance of attention (Hopf and Mangun, 2000).

Despite the fact that the cognitive task in the present experiment was more difficult and attention-demanding, the results coincide with those recently reported by Holmes et al. (2006), showing an effect of the emotional facial content on behavioral performance and ERPs when faces are task-irrelevant but appear within foveal vision. However, there were no apparent ERP facial emotional effects (e.g., on the P2 component) until beyond 200 ms post-stimulus. This opens the possibility that the effects of the emotional expression of foveally presented faces on ERPs could be influenced not only by task-attentional instructions but also by the task itself and by attentional availability during the early stages of emotional face processing.

Acknowledgements

We thank Dr. Daniel Zarabozo for his useful assistance in data analysis and Marina Alvelais for her helpful cooperation in the implementation of the study.

References

Adolphs, R., Baron-Cohen, S., Tranel, D., 2002. Impaired recognition of social emotions following amygdala damage. J. Cogn. Neurosci. 14, 1264–1274.
Ashley, V., Vuilleumier, P., Swick, D., 2003. Time course and specificity of event-related potentials to emotional expressions. Neuroreport 15, 211–216.


Barnes, L.L., Nelson, J.K., Reuter-Lorenz, P.A., 2001. Object-based attention and object working memory: overlapping processes revealed by selective interference effects in humans. Prog. Brain Res. 134, 471–481.
Bentin, S., Allison, T., Puce, A., Perez, E., McCarthy, G., 1996. Electrophysiological studies of face perception in humans. J. Cogn. Neurosci. 8, 551–565.
Bötzel, K., Schulze, S., Stodieck, S.R., 1995. Scalp topography and analysis of intracranial sources of face-evoked potentials. Exp. Brain Res. 104, 135–143.
Breiter, H.C., Etcoff, N.L., Whalen, P.J., Kennedy, W.A., Rauch, S.L., Buckner, R.L., Strauss, M.M., Hyman, S.E., Rosen, B.R., 1996. Response and habituation of the human amygdala during visual processing of facial expression. Neuron 17, 875–887.
Bruce, V., Young, A., 1986. Understanding face recognition. Br. J. Psychol. 77, 305–327.
Calder, A.J., Young, A.W., 2005. Understanding the recognition of facial identity and facial expression. Nat. Rev. Neurosci. 6, 641–651.
de Gelder, B., Morris, J.S., Dolan, R.J., 2005. Unconscious fear influences emotional awareness of faces and voices. Proc. Natl. Acad. Sci. U. S. A. 102, 18682–18687.
Dolan, R.J., Fletcher, P., Morris, J., Kapur, N., Deakin, J.F., Frith, C.D., 1996. Neural activation during covert processing of positive emotional facial expressions. Neuroimage 4, 194–200.
Eimer, M., 1998. Does the face-specific N170 component reflect the activity of a specialized eye processor? Neuroreport 9, 2945–2948.
Eimer, M., 2000. The face-specific N170 component reflects late stages in the structural encoding of faces. Neuroreport 11, 2319–2324.
Eimer, M., Holmes, A., 2002. An ERP study on the time course of emotional face processing. Neuroreport 13, 427–431.
Eimer, M., Holmes, A., 2007. Event-related brain potential correlates of emotional face processing. Neuropsychologia 45, 15–31.
Eimer, M., Holmes, A., McGlone, F., 2003. The role of spatial attention in the processing of facial expression: an ERP study of rapid brain responses to six basic emotions. Cogn. Affect. Behav. Neurosci. 3, 97–110.
Ellis, H.C., Ashbrook, P.W., 1988. Resource allocation model of the effects of depressed mood states on memory. In: Fiedler, K., Forgas, J. (Eds.), Affect, Cognition, and Social Behavior: New Evidence and Integrative Attempts. Hogrefe, Toronto, pp. 25–43.
Fenske, M.J., Eastwood, J.D., 2003. Modulation of focused attention by faces expressing emotion: evidence from flanker tasks. Emotion 3, 327–343.
George, N., Evans, J., Fiori, N., Davidoff, J., Renault, B., 1996. Brain events related to normal and moderately scrambled faces. Cogn. Brain Res. 4, 65–76.
Gonzalez-Garrido, A.A., Ramos-Loyo, J., Gomez-Velazquez, F.R., Alvelais Alarcón, M., de la Serna Tuya, J.M., 2007. Visual verbal working memory processing may be interfered by previously seen faces. Int. J. Psychophysiol. 65, 141–151.
Grent-'t-Jong, T., Böcker, K.B., Kenemans, J.L., 2006. Electrocortical correlates of control of selective attention to spatial frequency. Brain Res. 1105, 46–60.
Gur, R.C., Schroeder, L., Turner, T., McGrath, C., Chan, R.M., Turetsky, B.I., Alsop, D., Maldjian, J., Gur, R.E., 2002. Brain activation during facial emotion processing. Neuroimage 16, 651–662.
Habel, U., Windischberger, C., Derntl, B., Robinson, S., Kryspin-Exner, I., Gur, R.C., Moser, E., 2007. Amygdala activation and facial expressions: explicit emotion discrimination versus implicit emotion processing. Neuropsychologia 45, 2369–2377.
Hariri, A.R., Mattay, V.S., Tessitore, A., Fera, F., Weinberger, D.R., 2003. Neocortical modulation of the amygdala response to fearful stimuli. Biol. Psychiatry 53, 494–501.
Haxby, J.V., Hoffman, E.A., Gobbini, M.I., 2000. The distributed human neural system for face perception. Trends Cogn. Sci. 4, 223–233.
Haxby, J.V., Horwitz, B., Ungerleider, L.G., Maisog, J.M., Pietrini, P., Grady, C.L., 1994. The functional organization of human extrastriate cortex: a PET-rCBF study of selective attention to faces and locations. J. Neurosci. 14, 6336–6353.
Hennenlotter, A., Schroeder, U., 2006. Partly dissociable neural substrates for recognizing basic emotions: a critical review. Prog. Brain Res. 156, 443–456.
Holmes, A., Kiss, M., Eimer, M., 2006. Attention modulates the processing of emotional expression triggered by foveal faces. Neurosci. Lett. 394, 48–52.
Holmes, A., Vuilleumier, P., Eimer, M., 2003. The processing of emotional facial expression is gated by spatial attention: evidence from event-related brain potentials. Brain Res. Cogn. Brain Res. 16, 174–184.
Hopf, J.M., Mangun, G.R., 2000. Shifting visual attention in space: an electrophysiological analysis using high spatial resolution mapping. Clin. Neurophysiol. 111, 1241–1257.
Ishai, A., Haxby, J.V., Ungerleider, L.G., 2002. Visual imagery of famous faces: effects of memory and attention revealed by fMRI. Neuroimage 17, 1729–1741.
Ishai, A., Schmidt, C.F., Boesiger, P., 2005. Face perception is mediated by a distributed cortical network. Brain Res. Bull. 67, 87–93.
Itier, R.J., Taylor, M.J., 2002. Inversion and contrast-polarity reversal affect both encoding and recognition processes of unfamiliar faces: a repetition study using ERPs. Neuroimage 15, 353–372.
Jenkins, R., Lavie, N., Driver, J., 2005. Recognition memory for distractor faces depends on attentional load at exposure. Psychon. Bull. Rev. 12, 314–320.
Jongen, E.M., Smulders, F.T., Van der Heiden, J.S., 2007. Lateralized ERP components related to spatial orienting: discriminating the direction of attention from processing sensory aspects of the cue. Psychophysiology 44, 968–986.
Kanwisher, N., McDermott, J., Chun, M.M., 1997. The fusiform face area: a module in human extrastriate cortex specialized for face perception. J. Neurosci. 17, 4302–4311.
Law, M.B., Abrams, R.A., 2002. Object-based selection within and beyond the focus of spatial attention. Percept. Psychophys. 64, 1017–1027.
Le Doux, J.E., 1998. The Emotional Brain: The Mysterious Underpinnings of Emotional Life. Touchstone, New York.
Linkenkaer-Hansen, K., Palva, J.M., Sams, M., Hietanen, J.K., Aronen, H.J., Ilmoniemi, R.J., 1998. Face-selective processing in human extrastriate cortex around 120 ms after stimulus onset revealed by magneto- and electroencephalography. Neurosci. Lett. 253, 147–150.
Mangun, G.R., 1995. Neural mechanisms of visual selective attention. Psychophysiology 32, 4–18.
McKenna, M., Gutierrez, E., Ungerleider, L., Pessoa, L., 2001. Attention increases selectivity to emotional faces. Neuroimage 13, S443.
Mogg, K., Bradley, B.P., 1999. Orienting of attention to threatening facial expressions presented under conditions of restricted awareness. Cogn. Emot. 13, 713–740.
Morris, J.S., Frith, C.D., Perrett, D.I., Rowland, D., Young, A.W., Calder, A.J., 1996. A differential neural response in the human amygdala to fearful and happy facial expressions. Nature 383, 812–815.
Pessoa, L., McKenna, M., Gutierrez, E., Ungerleider, L.G., 2002. Neural processing of emotional faces requires attention. Proc. Natl. Acad. Sci. U. S. A. 99, 11458–11463.
Phillips, M.L., Young, A.W., Senior, C., Brammer, M., Andrew, C., Calder, A.J., Bullmore, E.T., Perrett, D.I., Rowland, D., Williams, S.C., Gray, J.A., David, A.S., 1997. A specific neural substrate for perceiving facial expressions of disgust. Nature 389, 495–498.
Pourtois, G., Grandjean, D., Sander, D., Vuilleumier, P., 2004. Electrophysiological correlates of rapid spatial orienting towards fearful faces. Cereb. Cortex 14, 619–633.
Pourtois, G., Thut, G., Grave de Peralta, R., Michel, C., Vuilleumier, P., 2005. Two electrophysiological stages of spatial orienting towards fearful faces: early temporo-parietal activation preceding gain control in extrastriate visual cortex. Neuroimage 26, 149–163.
Pourtois, G., Schwartz, S., Seghier, M.L., Lazeyras, F., Vuilleumier, P., 2006. Neural systems for orienting attention to the location of threat signals: an event-related fMRI study. Neuroimage 31, 920–933.
Ro, T., Russell, C., Lavie, N., 2001. Changing faces: a detection advantage in the flicker paradigm. Psychol. Sci. 12, 94–99.
Sergent, J., Signoret, J.L., 1992. Functional and anatomical decomposition of face processing: evidence from prosopagnosia and PET study of normal subjects. Phil. Trans. R. Soc. Lond. B 335, 55–62.
Vuilleumier, P., Armony, J.L., Driver, J., Dolan, R.J., 2001. Effects of attention and emotion on face processing in the human brain: an event-related fMRI study. Neuron 30, 829–841.
Whalen, P.J., Shin, L.M., McInerney, S.C., Fischer, H., Wright, C.I., Rauch, S.L., 2001. A functional MRI study of human amygdala responses to facial expressions of fear versus anger. Emotion 1, 70–83.
Winston, J.S., O'Doherty, J., Dolan, R.J., 2003. Common and distinct neural responses during direct and incidental processing of multiple facial emotions. Neuroimage 20, 84–97.
Yang, T.T., Menon, V., Eliez, S., Blasey, C., White, C.D., Reid, A.J., Gotlib, I.H., Reiss, A.L., 2002. Amygdalar activation associated with positive and negative facial expressions. Neuroreport 13, 1737–1741.
Zajonc, R.B., 1980. Feeling and thinking: preferences need no inferences. Am. Psychol. 35, 151–175.
