
Effects of fearful face presentation time and observer’s eye movement on the gaze cue effect



There are many conflicting findings on the gaze cueing effect (GCE) of emotional facial expressions. This study aimed to investigate whether an averted gaze accompanied by a fearful expression presented for different durations could enhance attentional orienting, while taking participants’ eye movements into account.


Twelve participants (3 females) completed the gaze cue task, responding to a target location after observing changes in the gaze and expression of a face displayed on a computer screen. Meanwhile, participants’ eye movements were monitored by electrooculography. The GCE was calculated from reaction times as an indicator of attention shift.


The analysis of the overall data did not find a significant effect of fearful facial expressions on the GCE. However, when trials containing eye movements were excluded from the analysis, brief (0, 100 ms) presentation of the fearful facial expression enhanced the GCE compared with the neutral facial expression; when the presentation time of the fearful expression was increased to 200 or 400 ms, the GCE of the fearful expression fell to the same level as when the model showed a neutral expression.


The results suggest that the attention-enhancing effect of gaze cues induced by rapidly presented fearful expressions occurs only when eye movement trials are excluded. This effect may be mediated by reflexive neural circuits in the amygdala that process threatening stimuli. However, as the expression duration increased, the fearful expression’s attention-enhancing effect decreased. We suggest that future studies on the emotion modulation of the GCE should consider the negative effects of participants’ saccades and blinks on the experimental results.


Humans are regarded as social animals, and facial information is a social cue necessary for smooth interindividual communication. Facial information such as facial expressions and gaze directions can effectively convey the expresser’s mental state, such as interest, intention, and emotion, to other individuals [1]. Humans have the highest proportion of exposed white sclera among primates, making their eyes structurally suitable for eye gaze communication [2, 3]. As a result, humans are more sensitive to another person’s gaze direction, which conveys information about important events in the environment. By reading and interpreting gaze direction, the observer detects and adjusts spatial attention accordingly [4].

Numerous studies [5,6,7,8,9,10] have verified a reflexive reaction, called the gaze cueing effect (GCE), that orients one’s attention to the direction of another person’s gaze. The GCE is generally considered to promote the sharing of attention between two people on the same object (joint attention), which is a crucial ability to promote interindividual social interaction.

Friesen et al. [6] modified the Posner task into the gaze-cueing task, which examined whether participants shifted their attention to the gaze direction. In this task, a facial stimulus (gaze cue) that looks to the left or right was first presented as a cue in the center of a screen, and a target was presented on either the same side (congruent) or the opposite side (incongruent) of the gaze cue. Participants responded to the target position by pressing a button to indicate which side of the screen the target was on, and the reaction time and correct answer rate were recorded. Participants generally detected targets faster when the target was congruent rather than incongruent with the gaze cue. The difference in reaction time between incongruent and congruent trials reflected the magnitude of gaze-evoked attentional orienting as the GCE.

Understanding emotional facial expressions is another important skill in guiding social interactions [11]. Humans appear to share neural circuits for processing gaze and facial expressions, and studies have shown that the brain regions for processing gaze direction and affective perception information are highly correlated and interact during activation [12, 13]. Fearful and angry facial expressions are considered threat-related, and previous studies have reported involuntary processing of threatening information [14, 15]. The combination of fearful expression and gaze may signal a direction of threat in the environment, resulting in humans shifting their concentration of attention more rapidly to adapt to a dangerous situation.

However, Hietanen and Leppänen [16] conducted a series of experiments and reported that facial expressions (neutral, happy, angry, fearful) did not affect the GCE. Mathews et al. [17] reported that fearful expressions produced a significantly larger GCE than neutral expressions only in individuals with high anxiety levels. They interpreted this as anxious people being more sensitive to information about potential threats.

In subsequent studies [18,19,20], facial expression was reported to affect the GCE in non-anxious and general populations as well, contradicting previous findings. Graham et al. [18] conducted experiments using dynamic stimulus presentations in which a neutral face changed to an emotional one (e.g., fear, disgust, pleasure). They found that the reaction time was shorter for emotional faces than for neutral ones and that the magnitude of the GCE was greater in response to emotional expressions than to neutral ones. Lassalle and Itier [19, 20] considered the effect of the presentation order of gaze and facial stimuli. In a stimulus presentation sequence in which the expression changed from neutral to fearful after a gaze change, the fearful expression enhanced the GCE. This sequence of stimulus cues is considered to have high ecological validity, meaning it is close to what might happen in a real environment. Therefore, by using this stimulus sequence, the enhanced attentional shifts in response to fearful expressions can be correctly induced.

One factor thought to contribute to the conflicting results of previous studies is the fearful expression presentation time. In studies of spatial cues, an important factor is the stimulus onset asynchrony (SOA), the length of time between cue onset and target onset, and SOA has been reported to affect the GCE. A short SOA (150, 300 ms) produces a robust GCE, while the GCE disappears at long SOAs, showing that an automatic shift of attention characterizes the GCE [6]. McCrackin and Itier [21] reported that the effects of fearful facial expressions could occur at a 200-ms SOA when a stimulus presentation sequence is used in which the facial expression changes from neutral to fear after a gaze change; this result supported the theory of reflexive information processing of the fearful expression. However, Graham et al. [18] argued that 300 ms was not enough time for the integration of gaze and facial expression, as no effect of fearful expression was seen on the GCE at SOAs of less than 300 ms.

Gaze direction and fearful expression appear to be integrated in the early period of visual information processing. The presentation time may affect the threat message formed by the combination of fearful expression and averted gaze. A short presentation time may be perceived as indicating an obvious threat, while a long one may be judged as ambiguous regarding the presence of a threat. Therefore, the presentation time of the fearful expression may affect the intensity of the perceived threat and thus the spatial attention shift toward the gaze direction.

In typical gaze cue tasks, SOA refers to the length of time from gaze presentation to target presentation. In a dynamic gaze cueing task testing the effect of the fearful expression, it is necessary to set the SOA as the total presentation time of the gaze cue and the fearful expression. However, due to the different experimental task designs in previous studies, their results cannot be directly compared. Therefore, it is necessary to investigate the effect of facial expression presentation time on the effect of fearful expression rather than simply to consider SOA as a factor.

Moreover, the inconsistency of findings among the previous studies may be related to differences in participants’ eye movements from experiment to experiment [22, 23]. Bannerman et al. [24] reported that fearful expressions evoked faster and more frequent saccades than neutral expressions. Therefore, the emotional content of a presented face may induce spontaneous eye movements in participants. Visual exploration in primates depends on saccadic eye movements, which cause alternations of neural suppression and enhancement [25]. Saccades inevitably divert spatial attention, affecting the reaction time to the target and thereby the GCE. Moreover, unlike a manual response, a saccade response causes the inhibition of attention shift in the cue direction (inhibition of return) to develop earlier [26]. To the best of our knowledge, however, only one study has investigated the effect of saccades on emotion modulation of the GCE. McCrackin et al. [27] investigated the effect of saccades generated during a gaze cueing task. In that study, happy expressions elicited a greater GCE than neutral ones when eye movement trials were removed but not when they were included. For the GCE enhancement effect of the fearful expression, the results did not differ before and after eye movement trials were removed. In the present study, in addition to excluding saccades, we also focused on the effect of eye blinking on the experimental results, because blinks may interrupt visual information input. In simple terms, eye blinking may affect the perception of the presentation time of the fearful expression, negatively influencing the experimental data. McCrackin et al. [27] reported the proportion of trials that included saccades but not the proportion that included blinking.
To better discuss the effect of the fearful expression on participants’ spontaneous eye movements, it is necessary to report the number of blink trials in the experiment and to discuss the effect of blinking on the experimental results.

Electrooculography (EOG) is a physiological measurement of eye movements that records changes in the corneo-retinal potential [28]. Vertical eye movements (blinks) can be measured by placing electrodes above and below the eye for the vertical EOG (VEOG), while horizontal eye movements (saccades) can be measured by placing electrodes on the external canthi for the horizontal EOG (HEOG). Since the EOG technique can simultaneously measure horizontal saccades and vertical blinks with high temporal resolution and can be synchronized with the event signals of the experimental task, we believe that EOG is a suitable measurement of eye movements for investigating the influence of eye movements occurring during the gaze cueing task in this study design.

In summary, we conducted an experiment to explore the effects of fearful expression presentation time and of a participant’s eye movements on the GCE of fearful and neutral expressions to expand our understanding of the attention-shifting behavior evoked by fearful expressions.



The sample size was estimated based on prior studies on the effects of facial expression (ηp2 = 0.22) and of SOA on reaction time (ηp2 = 0.86) [21]. A statistical power analysis using G*Power [29] indicated a required sample size of N = 8, assuming an effect size of ηp2 = 0.2 with α = 0.05 and power = 0.95.
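Since G*Power’s F-test routines take Cohen’s f rather than ηp2 as input, the assumed effect size must be converted first. A minimal sketch of this standard conversion (function name is ours, not from the original analysis):

```python
import math

def partial_eta_sq_to_cohens_f(eta_p2: float) -> float:
    """Convert partial eta squared to Cohen's f, the effect-size
    metric that G*Power's F-test power analyses take as input."""
    return math.sqrt(eta_p2 / (1.0 - eta_p2))

# The assumed effect size eta_p2 = 0.2 corresponds to f = 0.5,
# a "large" effect by Cohen's conventional benchmarks.
f = partial_eta_sq_to_cohens_f(0.2)
```

With f = 0.5, α = 0.05, and power = 0.95 entered into G*Power’s repeated-measures ANOVA routine, the required sample size comes out small, consistent with the N = 8 reported above.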

Twelve university students (3 female) with normal or corrected-to-normal vision participated in the experiment. The age of the participants was 25.0 ± 1.7 years (M ± S.D.), and all were right-handed. All participants gave written informed consent before participating in the experiment. The experimental protocol of this study was approved by the ethics committee of The Institute of Engineering and Center for Frontier Medical Engineering of the Graduate School at Chiba University (Acceptance Number: R2-11).


Two male and two female head models with neutral and fearful facial expressions and a straight gaze were generated by 3D facial model generation software (FaceGen Modeller 3.5, Singular Inversions). The fearful expression was created using the Facial Action Coding System (FACS) built into FaceGen Modeller, specifying action units (AUs) 1, 2, 4, 5, 7, 20, and 25. All of these AUs are considered characteristic of fearful expression [30, 31]. Using 3D model rendering software (Blender, Blender Foundation), facial images were created with the gaze of each model oriented 20° to the left or right of the straight gaze. Each stimulus was trimmed to a size of 12.6° × 20°, showing the region from the top of the head to the upper neck.

The stimuli were presented by a program created with PsychoPy 3.0 on a 24.5-inch LCD monitor (ROG SWIFT PG258Q, 1920 × 1080 pixels, 240 Hz, ASUS) connected to a Windows 10 computer, against a dark gray (64, 64, 64 RGB) background. Participants observed the stimuli at 50 cm from the LCD and responded using a keyboard. A chinrest ensured the fixed viewing distance of 50 cm and minimized participants’ head movements.


The experiment included three factors: facial expression (fear, neutral), gaze–target congruency (congruent, incongruent, straight gaze), and facial expression presentation duration (0, 100, 200, and 400 ms). Because the straight gaze condition was set as a pseudo-condition, it was excluded from the analysis. Participants completed a practice block and five experimental blocks. The practice block contained 24 trials (one trial for each experimental condition). Each experimental block contained 8 repetitions per lateral-gaze condition and 4 repetitions per straight-gaze condition. As a result, each experimental block contained 128 trials across the 16 lateral-gaze conditions (2 expressions × 2 gaze–target congruency levels × 4 presentation durations) and 32 trials in the straight gaze conditions (2 expressions × 4 presentation durations) and lasted about 8 min. Participants were allowed to take rest periods between blocks. In total, the experiment took approximately 60 min.
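The per-block trial counts above follow from simple multiplication; as a sanity check (the five-block total is our own arithmetic, not stated explicitly in the text):

```python
# Design factors and repetition counts as reported in the Methods
expressions = 2      # fear, neutral
congruency = 2       # congruent, incongruent (lateral gaze only)
durations = 4        # 0, 100, 200, 400 ms
reps_lateral = 8     # repetitions per lateral-gaze condition per block
reps_straight = 4    # repetitions per straight-gaze condition per block
blocks = 5

lateral_trials = expressions * congruency * durations * reps_lateral   # 128 per block
straight_trials = expressions * durations * reps_straight              # 32 per block
per_block = lateral_trials + straight_trials                           # 160 per block
total_experimental = blocks * per_block                                # derived total
```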

Each trial began with a central fixation point presented for a random duration of 900–1000 ms, followed by a neutral-expression facial image with a straight gaze presented for 500 ms. After each neutral presentation, a gaze cue consisting of an image of a face with a left, straight, or right gaze was presented for 100 ms to create the perception of a gaze shift. Next, as an emotional cue, the facial expression either remained neutral or changed to fear and was presented for 0, 100, 200, or 400 ms. Finally, the asterisk target (2.5° × 2.5°) appeared randomly on the left or right side of the facial stimulus. Participants were instructed to respond to the location of the target (left or right) as quickly and accurately as possible by pressing the “left arrow” or “right arrow” key with their right index finger and middle finger. Participants were also asked to keep their eyes fixed on the central fixation point throughout the experiment. Each participant’s reaction time (RT) was recorded (see Fig. 1).

Fig. 1

Sequence of experimental stimuli. This diagram shows only one example of facial and target conditions (fearful-congruent condition)

The gaze direction was either congruent with the target location (looking toward the location where the target would later appear) or incongruent with it (looking in the opposite direction). Left and right gaze trials were averaged within the two gaze-congruency conditions.

The period from the start of the fixation point to the end of the participant’s response was considered one trial, and the subsequent trial started after a blank screen was presented for 1000 ms. Participants were informed that they must maintain their focus on the fixation point until the blank frame appeared.

EOG measurements and data classification criteria

EOG was recorded to measure participants’ eye movements at a sampling rate of 1000 Hz and visualized in real time using biological signal analysis software (AcqKnowledge 4.1, Biopac Systems). Two electrodes were placed on the left and right temples (horizontal eye movement detection), two were placed above and below the right eye (vertical eye movement detection), and a reference electrode was placed on the forehead.

Before the experiment began, each participant was instructed to maintain a fixed gaze at the fixation point, and a baseline EOG was recorded in both the horizontal and vertical directions for 5 s. During the interval between the Fixation frame and the Target frame of each trial (see Fig. 1), if the maximum amplitude of the horizontal or vertical EOG exceeded the maximum amplitude of the corresponding baseline, the trial was classified as a saccade (horizontal) or blink (vertical) trial; otherwise, it was classified as a non-eye-movement trial. We synchronized these classification markers to the reaction time data before data analysis.
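The amplitude-threshold classification described above can be sketched as follows. This is an illustrative reconstruction in Python/NumPy, not the authors’ actual analysis code, and all function and argument names are hypothetical:

```python
import numpy as np

def classify_trial(heog, veog, h_baseline_max, v_baseline_max):
    """Classify one trial's EOG segment (Fixation frame to Target frame).

    A trial counts as a saccade trial if its peak horizontal amplitude
    exceeds the peak amplitude of the 5-s resting baseline, and as a
    blink trial if its peak vertical amplitude does; otherwise it is a
    non-eye-movement trial. A trial can carry both labels.
    """
    labels = []
    if np.max(np.abs(heog)) > h_baseline_max:
        labels.append("saccade")
    if np.max(np.abs(veog)) > v_baseline_max:
        labels.append("blink")
    return labels if labels else ["no_eye_movement"]
```

In practice, the returned label would be attached to each trial as the marker that is later synchronized with the reaction time data.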

Statistical analysis

Statistical analyses were conducted using SPSS Statistics (Chicago, IL, USA). The correct RTs were analyzed by using a repeated measures analysis of variance (ANOVA) with within-subject factors of facial expression (neutral, fear), congruency (congruent, incongruent), and facial expression presentation time (0, 100, 200, and 400 ms), conducted separately for all trials (including eye movement trials) and for trials excluding eye movements.

We also performed an ANOVA on the GCE, which was calculated as the difference between incongruent and congruent RTs (RTincongruent − RTcongruent) using facial expression and facial expression presentation time as within-subject factors.
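The GCE computation (RTincongruent − RTcongruent) can be sketched with pandas; the column names and RT values below are hypothetical, chosen only to illustrate the calculation:

```python
import pandas as pd

# Hypothetical per-condition mean RTs for one participant;
# column names are illustrative, not the authors' variable names.
df = pd.DataFrame({
    "subject":    [1, 1, 1, 1],
    "expression": ["fear", "fear", "neutral", "neutral"],
    "congruency": ["congruent", "incongruent", "congruent", "incongruent"],
    "rt_ms":      [320.0, 336.0, 330.0, 338.0],
})

# Mean RT per cell (default aggfunc is the mean), then
# GCE = incongruent RT minus congruent RT for each cell.
mean_rt = df.pivot_table(index=["subject", "expression"],
                         columns="congruency", values="rt_ms")
gce = mean_rt["incongruent"] - mean_rt["congruent"]
```

The resulting `gce` series (one value per subject × expression cell, here also per presentation time if that factor were included) is what would enter the 2 × 4 repeated measures ANOVA.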

Statistical significance was accepted at the 5% level (p < 0.05). The Greenhouse‒Geisser correction was applied where sphericity was violated.


Table 1 shows the exclusion rates resulting from each participant’s eye movements, error responses, and RT deviations. Trials in which an incorrect response was made (mean = 0.7%, SD = 0.6%) and trials in which the RT was more than 3 SDs above or below the participant’s mean RT (mean = 3.1%, SD = 1.7%) were removed before the mean for each condition was calculated. Because we aimed to investigate the influence of participants’ eye movements on the experimental results, we compared the analyses of data including and excluding eye movement trials.
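The two exclusion steps (error trials, then RTs beyond ±3 SD of a participant’s mean) can be sketched as follows. This is an illustrative reconstruction, not the authors’ code; the SD here is the population SD (NumPy’s default), and the actual analysis may have used the sample SD:

```python
import numpy as np

def clean_rts(rts, correct):
    """Apply two exclusion steps to one participant's trials:
    1) drop trials with an incorrect response;
    2) drop trials whose RT lies more than 3 SDs from that
       participant's mean RT (computed after step 1).
    Names are illustrative."""
    rts = np.asarray(rts, dtype=float)
    kept = rts[np.asarray(correct, dtype=bool)]   # remove error responses
    mu, sd = kept.mean(), kept.std()
    return kept[np.abs(kept - mu) <= 3 * sd]      # remove RT outliers
```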

Table 1 Data exclusion rates resulting from eye movement, error response, and deviation of RT

Analysis of all trials including eye movement trials

A 2 (facial expression: neutral, fear) × 2 (congruency: congruent, incongruent) × 4 (facial expression presentation time: 0, 100, 200, and 400 ms) repeated measures ANOVA was conducted on the mean reaction times of the total data, including eye movement trials.

The results showed a main effect of congruency (F (1, 11) = 20.57, p < 0.001, ηp2 = 0.65), as participants responded more quickly to gazed-at (congruent) targets than to non-gazed-at (incongruent) targets. However, there was no main effect of facial expression (F (1, 11) = 1.52, p = 0.244, ηp2 = 0.12) or of facial expression presentation time (F (3, 33) = 2.05, p = 0.126, ηp2 = 0.15). No significant interaction was observed between facial expression and congruency (F (1, 11) = 0.01, p = 0.942, ηp2 = 0.004), between congruency and facial expression presentation time (F (3, 33) = 0.6, p = 0.62, ηp2 = 0.05), or between facial expression and facial expression presentation time (F (3, 33) = 1.21, p = 0.321, ηp2 = 0.10).

Finally, the three-way interaction was not significant (F (3, 33) = 1.41, p = 0.256, ηp2 = 0.11). As neither the main effect of facial expression nor its interactions reached statistical significance, the effect of the fearful expression on the GCE was not examined further (see Fig. 2).

Fig. 2

Mean reaction time (RT) for each facial expression presentation time and fear (left) and neutral (right) expressions for all trial data. Error bars indicate standard errors of the mean

Analysis of trials excluding eye movement trials

As described above, a three-way repeated measures ANOVA was conducted on mean reaction times, excluding the data of eye movement trials.

Congruency had a significant main effect (F (1, 11) = 68.14, p < 0.001, ηp2 = 0.86), as in the analysis that included eye movement trials, but the effect was stronger when eye movement data were excluded. Facial expression presentation time showed a marginal main effect (F (1.55, 17.06) = 3.82, p = 0.051, ηp2 = 0.26). Although the main effect of facial expression (F (1, 11) = 0.13, p = 0.728, ηp2 = 0.01) was not significant, significant interactions were observed between facial expression and congruency (F (1, 11) = 9.60, p = 0.01, ηp2 = 0.46), between congruency and facial expression presentation time (F (3, 33) = 14.48, p = 0.001, ηp2 = 0.57), and between facial expression and facial expression presentation time (F (3, 33) = 4.08, p = 0.014, ηp2 = 0.27). Finally, the three-way interaction was significant (F (3, 33) = 3.05, p = 0.026, ηp2 = 0.24). These interactions show that the effect of facial expression was modulated by facial expression presentation time and congruency (see Fig. 3).

Fig. 3

Mean reaction time (RT) for each facial expression presentation time and fear (left) and neutral (right) expressions when the eye movement trials data were excluded. Error bars indicate standard errors of the mean

Since a significant three-way interaction was found between facial expression, congruency, and facial expression presentation time, we conducted ANOVAs on the congruent and incongruent trials separately. For RT in congruent trials, facial expression showed a trend toward a main effect (F (1, 11) = 3.50, p = 0.088, ηp2 = 0.24). No significant main effect of facial expression presentation time was found (F (1.78, 19.62) = 2.26, p = 0.135, ηp2 = 0.17), but the interaction between facial expression and facial expression presentation time approached significance (F (3, 33) = 2.64, p = 0.076, ηp2 = 0.19). Next, we conducted simple main effect analyses for each factor. The Bonferroni correction (two-tailed) was applied to the post-hoc comparisons, with the significance level set at 0.008. Simple main effect analysis of facial expression showed shorter RTs for fearful than for neutral trials when the fearful expression was presented 100 or 200 ms before the target (100 ms condition: 321.1 ms vs. 329.1 ms, F (1, 11) = 8.83, p = 0.013, ηp2 = 0.44; 200 ms condition: 318.3 ms vs. 323.7 ms, F (1, 11) = 5.80, p = 0.035, ηp2 = 0.35). The simple main effect of facial expression presentation time was also significant in the fearful expression condition (F (3, 33) = 7.80, p = 0.002, ηp2 = 0.41). Multiple comparisons showed that the RT was longer for the 0 ms condition than for the 100 or 200 ms condition (ps < 0.005) and that the RT was shorter for the 200 ms condition than for the 400 ms condition (p < 0.005). For RT in incongruent trials, there was a significant main effect of facial expression presentation time (F (1.64, 18) = 7.91, p = 0.005, ηp2 = 0.42) and a significant interaction between facial expression and facial expression presentation time (F (3, 33) = 5.50, p = 0.007, ηp2 = 0.33), but no main effect of facial expression (F (1, 11) = 0.51, p = 0.487, ηp2 = 0.04).
Simple main effect analysis of facial expression showed longer RTs for fearful than for neutral trials only when the fearful expression was presented simultaneously with the target (0 ms condition: 344.2 ms vs. 334.5 ms, F (1, 11) = 9.54, p = 0.01, ηp2 = 0.46). The simple main effect of facial expression presentation time in the fearful expression condition was also significant (F (3, 33) = 11.84, p = 0.001, ηp2 = 0.52), reflecting that the RT was longer for the 0 and 100 ms conditions than for the 200 or 400 ms condition (ps < 0.005). There was no significant difference between the 0 and 100 ms conditions or between the 200 and 400 ms conditions.

As congruency interacted with every other factor, the magnitude of the GCE was computed (RTincongruent − RTcongruent) and analyzed using a 2 (facial expression) × 4 (facial expression presentation time) repeated measures ANOVA. When an interaction was significant, the Bonferroni correction (two-tailed) was applied to the post-hoc comparisons for the simple main effect of facial expression presentation time, with the significance level set at 0.008.

First, significant main effects of facial expression (F (1, 11) = 9.53, p = 0.01, ηp2 = 0.46) and facial expression presentation time (F (1, 11) = 14.52, p < 0.001, ηp2 = 0.57) were found. Moreover, their interaction was also significant (F (3, 33) = 3.49, p = 0.05, ηp2 = 0.24). The simple main effect analysis of facial expression showed that when the fearful expression was presented simultaneously with or 100 ms before the target, the fearful expression produced a significantly greater GCE than the neutral expression (0 ms condition: 14.8 ms vs. 7.7 ms, F (1, 11) = 21.06, p < 0.001, ηp2 = 0.66; 100 ms condition: 16.6 ms vs. 5.9 ms, F (1, 11) = 42.19, p < 0.001, ηp2 = 0.79). The simple main effect of facial expression presentation time was also significant in the fearful expression condition (F (3, 33) = 16.27, p < 0.001, ηp2 = 0.59), reflecting that the GCE was larger for the 0, 100, and 200 ms conditions than for the 400 ms condition (ps < 0.008, see Fig. 4).

Fig. 4

Mean gaze cueing effect (i.e., RT difference between congruent and incongruent trials) for each facial expression presentation time among neutral and fearful expressions when the eye movement trial data were excluded. Error bars indicate standard errors of the mean. Asterisks indicate p < 0.008 (the threshold after Bonferroni correction)


Humans can exploit social information such as gaze and facial expressions, enabling them to communicate and transmit information more efficiently [32, 33]. Previous research has debated whether threatening information conveyed by fearful facial expressions can enhance the attention shift evoked by gaze, but no consistent conclusion has been reached [16, 18, 27]. The present study set out to assess the importance of two factors that affect the enhancement of the GCE by a fearful expression: facial expression presentation time and participants’ spontaneous eye movements. The results demonstrated that this enhancement of the GCE by a fearful expression was found only in the specific condition where the two factors were combined (i.e., a fearful expression under the short presentation time condition, with spontaneous eye movement trials excluded). In two subsections of the “Discussion” section, we discuss and summarize the effects of these two factors on the GCE enhancement of fearful expressions and their potential causes.

Facial expression presentation time

Across the present experimental results, we demonstrated that fearful expressions enhance gaze-evoked spatial attention shifts compared to neutral expressions. This effect is moderated by the presentation time of fearful expressions. However, this attentional enhancement effect of fearful expressions was found only in the results after excluding the data from spontaneous eye movement trials.

When presentations were brief (i.e., 0 and 100 ms), the GCE of the fearful expression was greater than that of the neutral expression. A greater GCE can arise in two ways: from faster responses to the target in congruent conditions or from slower responses to the target in incongruent conditions. The present study showed that, in congruent trials with 100 or 200 ms presentation times, participants responded faster to the target when it was preceded by a fearful expression than by a neutral expression. In addition, the attentional enhancement effect was more pronounced for the 100 ms duration than for the 200 ms duration, suggesting that attention was more rapidly directed toward the gaze cue when a fearful expression was presented. This is consistent with previous findings that fearful expressions speed the orienting of spatial attention compared with neutral expressions [21]. In incongruent trials, when the fearful face and the target were presented simultaneously (0 ms condition), the response to the target was slower for the fearful expression than for the neutral expression. This was the underlying reason for the enhancement of the GCE by the fearful expression in the 0 ms condition. We consider that findings from previous studies [34, 35] may explain this result. Carlson and Mujica-Parodi [34] investigated the impact of fearful facial expressions on spatial attention during conscious and unconscious information processing using a dot-probe task. They found that, regardless of conscious awareness, the presentation of fearful facial threat cues resulted in both faster orienting of attention toward the cued location and slower disengagement of attention from it. Georgiou et al. [35] suggested that spatial attentional processes may also act to delay disengagement from fear-relevant stimuli, although this tendency is stronger among people with high anxiety.
The present study adds to previous research by demonstrating that simultaneously presenting a fearful expression and a target impairs participants’ ability to disengage their attention from the gaze-cued side associated with the potential threat. Specifically, the findings suggest that this effect is more pronounced for fearful expressions than for neutral faces. In summary, the results of the present study confirm that when the fearful expression and the target were presented simultaneously after the gaze cue, the enhancement of the GCE may be due to difficulty in disengaging attention from the gaze direction. When the fearful expression lasted for 100 ms, the GCE enhancement may be due to a faster shift of attention toward the gaze direction. However, the RT results for congruent trials in this study approached, but did not reach, statistical significance, thus limiting the interpretation of the results of the present experiment. This lack of statistical significance could be due to the relatively small sample size compared with previous studies, even though a statistical power analysis was performed before the experiment. Moreover, the present study did not measure or control for individual anxiety levels, which may also limit the interpretation of the results. Therefore, future studies should aim to verify these results by increasing the sample size and controlling for individual anxiety levels.

A fearful face with averted gaze is often considered an indicator of threatening information. For example, a person looking to the right and expressing fear suggests there may be a danger in that direction. The ability to integrate and understand these cues, interpreted as an essential survival mechanism, is thought to reside in the amygdala, which is part of the limbic system and located deep inside the brain’s medial temporal lobe.

The amygdala integrates sensory information and outputs it to the hypothalamus. In these neural networks, the amygdala plays a vital role in judging the value of information for survival [36]. Thus, the amygdala processes and remembers emotional responses (especially fear and anger) and may have pathways for processing threat-related signals [37]. In an fMRI study, Adams et al. [12] showed that expressions related to threat information, such as the fearful expression, induced a stronger reaction in the amygdala than the neutral expression and could promote the information processing of fearful expressions. In addition, previous studies have shown that fear is detected through the limbic system; in particular, survival-related threat information is processed quickly and automatically to prepare for subsequent action [38, 39].

Kawashima et al. [40] demonstrated that gaze information is also processed in the amygdala: detection of gaze direction activates the left amygdala, which helps in reading social signals. Presenting a fearful expression together with an averted gaze for a short period may induce coarse, rapid processing in the amygdala that integrates the expression and the gaze direction to judge whether a threat to survival is present. As a result, a fearful expression may cause spatial attention to shift more quickly toward the gaze direction.

In amygdaloid processing of threatening stimuli, two different neural circuits may be at work: one for reflexive responses and the other for reflective responses. Operating in parallel, these circuits can rapidly and coarsely detect the existence of a threat, and can also cancel the threat response once safety is confirmed, enabling the most efficient response to the threat [41]. By adjusting the presentation duration of fearful expressions, Adams et al. [42] found that a short presentation induced a reflexive amygdala response to explicit threat information, while a long presentation induced a reflective response to an ambiguous threat. Our results support the hypothesis that when explicit threat information (such as the fearful expression with a specific gaze direction in this study) is presented for a short time (100 ms), the threat-detection circuit of the amygdala is reflexively activated, and the signal is quickly transmitted to the spatial attention system to guide attention rapidly. This supports the view in previous studies that fearful expressions induce an automatic attention shift. Most noteworthy, the present study is the first to confirm that the GCE was enhanced when the fearful face and the target were presented simultaneously, although no expression effect on congruent RT was found. Future research may continue to explore the GCE enhancement by fearful expressions at presentation times within 0–100 ms.

However, when the presentation time is longer but the target has still not appeared, the environment may be provisionally judged to contain no threat (or only an ambiguous one); top-down cortical processing then affects spatial attention, and the GCE is not enhanced by the fearful expression. Such results support the threat-processing hypothesis: when attention shifts in the gaze-cued direction and no target is found, spatial attention disengages from the cued direction in order to respond to targets appearing elsewhere in the environment. These results may also reflect human environmental adaptability, as prolonged focus on a specific direction is detrimental to detecting environmental hazards [43].

In the present study, the SOA was the time between the gaze cue and target onset; the SOA levels were therefore 100, 200, 300, and 500 ms for both the fearful and neutral expressions. The fearful expression was presented for 0, 100, 200, or 400 ms after the gaze shift (which lasted 100 ms), while the neutral expression was maintained for 0, 100, 200, or 400 ms. Although this is a classic design for investigating emotion modulation of the GCE, the lack of apparent facial motion in the neutral condition could potentially have affected the results, which may be a limitation of this study. To eliminate this potential impact in future studies, we expect to create an experimental condition like the ‘neutral tongue’ in McCrackin et al.’s study [21, 27].
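The timing relation described above can be sketched in a few lines of Python. The variable names are illustrative, but the arithmetic follows the design stated in the text (a 100-ms gaze shift followed by the expression for 0, 100, 200, or 400 ms before target onset):

```python
# SOA (gaze-cue onset to target onset) = 100-ms gaze shift + expression duration.
# Values follow the design described in the text; names are illustrative.
GAZE_SHIFT_MS = 100
expression_durations_ms = [0, 100, 200, 400]

soas_ms = [GAZE_SHIFT_MS + d for d in expression_durations_ms]
print(soas_ms)  # [100, 200, 300, 500]
```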

Although the discussion above speculates about the brain circuits that process the gaze of fearful expressions, reaction-time data alone are clearly insufficient to establish the neural basis of the GCE enhancement by fearful expressions. Therefore, future studies could further test these conjectures by introducing neurophysiological methods.

Spontaneous eye movements

The comparison between the full dataset and the dataset with eye-movement trials removed revealed that participants’ eye movements during the experiment affected the detection of the GCE enhancement by fearful expressions.

Similar to the results reported by McCrackin et al. [27], in our study the effect of the fearful expression on the GCE became apparent only after trials containing recorded eye movements were excluded from the analysis; when all of the data were analyzed, no effect of the fearful expression was found. The present results showed that the effect of fearful expressions could be detected with a small sample (12 participants) once eye-movement-affected data were excluded, even though this effect is generally considered small. As mentioned in the Background section, Graham et al. [18] argued that a 300-ms cueing time was insufficient to integrate gaze and facial expressions, as they found no effect of fearful expressions on the GCE at shorter SOAs. However, the present study demonstrated that when the cueing time of the fearful expression was 100 ms or less (i.e., SOA of 200 ms or less in the present study), the fearful expression enhanced the GCE, whereas when the cueing time exceeded 100 ms (i.e., SOA over 200 ms), the enhancement disappeared, contradicting previous findings. Compared to the approach of Graham et al. [18], the present study required participants to fixate during the experiment and eliminated trials contaminated by eye movements. This elimination of contamination might have contributed to the detection of the enhanced GCE produced by briefly presented fearful expressions. Conversely, when the analyses did not account for the effects of eye movements, no effect of fearful expressions on the GCE was observed.
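The analysis logic described here can be illustrated with a minimal sketch: the GCE is the incongruent-minus-congruent mean RT, computed with and without eye-movement-contaminated trials. The trial structure, field names, and RT values below are hypothetical, not the study’s actual data:

```python
# Sketch: GCE = mean RT (incongruent) - mean RT (congruent), in ms,
# computed before and after excluding eye-movement-contaminated trials.
# Trial records and values are illustrative, not the study's data.

def mean(values):
    return sum(values) / len(values)

def gce(trials, exclude_eye_movements=False):
    """Gaze cueing effect: incongruent minus congruent mean RT."""
    if exclude_eye_movements:
        trials = [t for t in trials if not t["eye_movement"]]
    congruent = [t["rt"] for t in trials if t["congruent"]]
    incongruent = [t["rt"] for t in trials if not t["congruent"]]
    return mean(incongruent) - mean(congruent)

trials = [
    {"rt": 310, "congruent": True,  "eye_movement": False},
    {"rt": 305, "congruent": True,  "eye_movement": False},
    {"rt": 420, "congruent": True,  "eye_movement": True},   # saccade/blink trial
    {"rt": 345, "congruent": False, "eye_movement": False},
    {"rt": 350, "congruent": False, "eye_movement": False},
    {"rt": 300, "congruent": False, "eye_movement": True},   # saccade/blink trial
]

print(gce(trials))                               # negative: contaminated trials mask the effect
print(gce(trials, exclude_eye_movements=True))   # 40.0
```

In this toy example, the contaminated trials reverse the sign of the effect, mirroring how eye movements can obscure the GCE in the full dataset.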

There are generally considered to be two types of visual attention orienting: covert orienting, achieved through internal neurophysiological systems without head or eye movements, and overt orienting, achieved through behavioral systems accompanied by head or eye movements [44, 45]. During saccadic eye movements, visual neural activity is suppressed [25], and an earlier inhibition of return for spatial attention shifts develops [26]. Prior studies have revealed that both overt and covert attention can induce the GCE [46], but studies on the enhancement of the GCE by fearful expressions have not reached consistent conclusions. In the present study, participants were asked to keep their eyes fixed on a certain point throughout the experiment, so the analysis focused mainly on the effect of covert attention. However, since the horizontal EOG measured in this study cannot accurately locate the spatial endpoints of the participants’ eye movements, we cannot discuss the effect of overt attention on the GCE enhancement by fearful expressions. This study therefore focused on the relationship between covert attention and the GCE enhancement by fearful expressions through strict exclusion criteria for eye-movement trials. The present results confirm that this enhancement may be partially achieved by covert attention, which precedes eye movements and can be deployed simultaneously at multiple locations in the environment. This mechanism is also in line with the advantages of threatening stimuli described above and supports efficient visual information processing and the guidance of future eye movements. Nevertheless, the present results do not allow us to determine whether the GCE enhancement by fearful expressions is dominated by covert attention or results from a combination of covert and overt attention; this is a limitation of the present study. Future studies need to explore these effects separately through experimental design.

However, unlike previous studies, most of the eye movements in this study were caused by eye blinks (about 82.6% of all eye movements), which can temporarily obscure visual information and interrupt the continuity of visual input, thus affecting the perceived cue duration [47]. A blink temporarily interrupts the flow of visual information between the world and the retina; in that instant, visual stimuli from the outside world disappear for 150–400 ms [48]. As mentioned above, the GCE enhancement by fearful expressions is sensitive to the expression’s presentation time, so blinks occurring during the stimulus sequence may affect the perceived presentation time of the fearful expression and thus the GCE enhancement. As shown by the per-participant removal rates of eye-movement trials (Table 1), we found individual differences in the control of eye movements during the experiment. This phenomenon may be explained by differences in the cognitive resources participants devote to the experiment. Maffei et al. [49] reported that blink rates varied with task difficulty: participants blinked less often during difficult tasks and more often during easy tasks. They also reported that, regardless of task difficulty, eye fatigue increased gradually during a 4-min task, and the blink rate increased accordingly. Considering the influence of such visual fatigue on eye movements, future tasks should be designed so that each block can be completed within 4 min.
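To illustrate how blink-contaminated trials might be flagged from a vertical EOG trace, here is a minimal sketch using a simple amplitude-threshold rule. The 100-µV threshold, the baseline correction, and the sample values are illustrative assumptions, not the detection criteria used in this study:

```python
# Sketch: flag a trial as blink-contaminated if any vertical-EOG sample
# deviates from the trial's mean baseline by more than a fixed threshold.
# Threshold and sample values are illustrative assumptions.

def contains_blink(veog_uv, threshold_uv=100.0):
    """Return True if any sample deviates from the mean by > threshold_uv."""
    baseline = sum(veog_uv) / len(veog_uv)
    return any(abs(v - baseline) > threshold_uv for v in veog_uv)

clean_trial = [3.0, -2.0, 1.5, 0.5, -1.0]           # small drift only
blink_trial = [2.0, 5.0, 250.0, 230.0, 10.0, 1.0]   # large blink deflection

print(contains_blink(clean_trial))  # False
print(contains_blink(blink_trial))  # True
```

In practice, EOG-based rejection would also consider deflection duration and filtering, but the thresholding step above captures the core idea of excluding such trials before computing the GCE.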


This study found that when a fearful gaze appeared simultaneously with the target or 100 ms before it, participants attended more strongly to the gaze direction than when the expression was neutral. However, as the expression duration increased, the attention-enhancing effect of the fearful expression attenuated. These results were seen only in analyses that excluded eye-movement trials. The GCE enhancement by the fearful expression appears to be driven by a faster response to the target at short presentation times; hence, the present results support the view that threat information may automatically orient attention at an unconscious level. We suggest that future studies exploring the influence of facial expressions on the GCE should account for participants’ eye movements through improved experimental design or eye-movement measurement.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.





Abbreviations

fMRI: Functional magnetic resonance imaging
GCE: Gaze cueing effect
RT: Reaction time
SD: Standard deviation
SOA: Stimulus onset asynchrony


  1. Kleinke CL. Gaze and eye contact: a research review. Psychol Bull. 1986;100:78–100.
  2. Kobayashi H, Kohshima S. Unique morphology of the human eye. Nature. 1997;387(6635):767–8.
  3. Kobayashi H, Kohshima S. Unique morphology of the human eye and its adaptive meaning: comparative studies on external morphology of the primate eye. J Hum Evol. 2001;40:419–35.
  4. Frischen A, Bayliss AP, Tipper SP. Gaze cueing of attention: visual attention, social cognition, and individual differences. Psychol Bull. 2007;133(4):694.
  5. Driver J, Davis G, Ricciardelli P, Kidd P, Maxwell E, Baron-Cohen S. Gaze perception triggers reflexive visuospatial orienting. Vis Cogn. 1999;6(5):509–40.
  6. Friesen CK, Kingstone A. The eyes have it! Reflexive orienting is triggered by nonpredictive gaze. Psychon Bull Rev. 1998;5:490–5.
  7. Downing P, Dodds CM, Bray D. Why does the gaze of others direct visual attention? Vis Cogn. 2004;11:71–9.
  8. Friesen CK, Ristic J, Kingstone A. Attentional effects of counterpredictive gaze and arrow cues. J Exp Psychol Hum Percept Perform. 2004;30(2):319.
  9. Bayliss AP, Di Pellegrino G, Tipper SP. Sex differences in eye gaze and symbolic cueing of attention. Q J Exp Psychol A. 2005;58(4):631–50.
  10. Bayliss AP, Tipper SP. Gaze and arrow cueing of attention reveals individual differences along the autism spectrum as a function of target context. Br J Psychol. 2005;96(1):95–114.
  11. Ekman P, Friesen WV. Constants across cultures in the face and emotion. J Pers Soc Psychol. 1971;17:124–9.
  12. Adams RB Jr, Gordon HL, Baird AA, Ambady N, Kleck RE. Effects of gaze on amygdala sensitivity to anger and fear faces. Science. 2003;300(5625):1536.
  13. Hoffman EA, Haxby JV. Distinct representations of eye gaze and identity in the distributed human neural system for face perception. Nat Neurosci. 2000;3(1):80–4.
  14. De Gelder B. Towards the neurobiology of emotional body language. Nat Rev Neurosci. 2006;7(3):242–9.
  15. Frijda NH. The emotions. Cambridge: Cambridge University Press; 1986.
  16. Hietanen JK, Leppänen JM. Does facial expression affect attention orienting by gaze direction cues? J Exp Psychol Hum Percept Perform. 2003;29(6):1228.
  17. Mathews A, Fox E, Yiend J, Calder A. The face of fear: effects of eye gaze and emotion on visual attention. Vis Cogn. 2003;10(7):823–35.
  18. Graham R, Kelland Friesen C, Fichtenholtz HM, LaBar KS. Modulation of reflexive orienting to gaze direction by facial expressions. Vis Cogn. 2010;18(3):331–68.
  19. Lassalle A, Itier RJ. Fearful, surprised, happy, and angry facial expressions modulate gaze-oriented attention: behavioral and ERP evidence. Soc Neurosci. 2013;8(6):583–600.
  20. Lassalle A, Itier RJ. Emotional modulation of attention orienting by gaze varies with dynamic cue sequence. Vis Cogn. 2015;23(6):720–35.
  21. McCrackin SD, Itier RJ. Both fearful and happy expressions interact with gaze direction by 200 ms SOA to speed attention orienting. Vis Cogn. 2018;26(4):231–52.
  22. Ricciardelli P, Bricolo E, Aglioti SM, Chelazzi L. My eyes want to look where your eyes are looking: exploring the tendency to imitate another individual’s gaze. NeuroReport. 2002;13(17):2259–64.
  23. Kuhn G, Kingstone A. Look away! Eyes and arrows engage oculomotor responses automatically. Atten Percept Psychophys. 2009;71(2):314–27.
  24. Bannerman RL, Hibbard PB, Chalmers K, Sahraie A. Saccadic latency is modulated by emotional content of spatially filtered face stimuli. Emotion. 2012;12(6):1384.
  25. Hoffman KL, Dragan MC, Leonard TK, Micheli C, Montefusco-Siegmund R, Valiante TA. Saccades during visual exploration align hippocampal 3–8 Hz rhythms in human and non-human primates. Front Syst Neurosci. 2013;7:43.
  26. Briand KA, Larrison AL, Sereno AB. Inhibition of return in manual and saccadic response systems. Percept Psychophys. 2000;62(8):1512–24.
  27. McCrackin SD, Soomal SK, Patel P, Itier RJ. Spontaneous eye-movements in neutral and emotional gaze-cuing: an eye-tracking investigation. Heliyon. 2019;5(4):e01583.
  28. Creel DJ. The electrooculogram. Handb Clin Neurol. 2019;160:495–9.
  29. Faul F, Erdfelder E, Lang A, Buchner A. G*Power 3: a flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behav Res Methods. 2007;39:175–91.
  30. Roesch EB, Tamarit L, Reveret L, Grandjean D, Sander D, Scherer KR. FACSGen: a tool to synthesize emotional facial expressions through systematic manipulation of facial action units. J Nonverbal Behav. 2011;35(1):1–16.
  31. Rosenberg EL, Ekman P, editors. What the face reveals: basic and applied studies of spontaneous expression using the Facial Action Coding System (FACS). New York: Oxford University Press; 2020.
  32. Chen Z, McCrackin SD, Morgan A, Itier RJ. The gaze cueing effect and its enhancement by facial expressions are impacted by task demands: direct comparison of target localization and discrimination tasks. Front Psychol. 2021;12:696.
  33. Argyle M, Cook M, Cramer D. Gaze and mutual gaze. Br J Psychiatry. 1994;165(6):848–50.
  34. Carlson JM, Mujica-Parodi LR. Facilitated attentional orienting and delayed disengagement to conscious and nonconscious fearful faces. J Nonverbal Behav. 2015;39(1):69–77.
  35. Georgiou G, Bleakley C, Hayward J, Russo R, Dutton K, Eltiti S, Fox E. Focusing on fear: attentional disengagement from emotional faces. Vis Cogn. 2005;12(1):145–58.
  36. LeDoux J. The amygdala. Curr Biol. 2007;17(20):R868–74.
  37. Tabbert K, Stark R, Kirsch P, Vaitl D. Hemodynamic responses of the amygdala, the orbitofrontal cortex and the visual cortex during a fear conditioning paradigm. Int J Psychophysiol. 2005;57(1):15–23.
  38. Öhman A. Face the beast and fear the face: animal and social fears as prototypes for evolutionary analyses of emotion. Psychophysiology. 1986;23(2):123–45.
  39. Öhman A. The role of the amygdala in human fear: automatic detection of threat. Psychoneuroendocrinology. 2005;30(10):953–8.
  40. Kawashima R, Sugiura M, Kato T, Nakamura A, Hatano K, Ito K, et al. The human amygdala plays an important role in gaze monitoring: a PET study. Brain. 1999;122(4):779–83.
  41. Forgas JP, Williams KD, von Hippel W, editors. Social judgments: implicit and explicit processes (Vol. 5). New York: Cambridge University Press; 2003.
  42. Adams RB Jr, Franklin RG Jr, Kveraga K, Ambady N, Kleck RE, Whalen PJ, Hadjikhani N, Nelson AJ. Amygdala responses to averted vs direct gaze fear vary as a function of presentation speed. Soc Cogn Affect Neurosci. 2012;7(5):568–77.
  43. Itti L, Koch C. Computational modelling of visual attention. Nat Rev Neurosci. 2001;2:194–203.
  44. Petersen SE, Posner MI. The attention system of the human brain: 20 years after. Annu Rev Neurosci. 2012;35:73.
  45. Henderson JM. Human gaze control during real-world scene perception. Trends Cogn Sci. 2003;7(11):498–504.
  46. Friesen CK, Kingstone A. Covert and overt orienting to gaze direction cues and the effects of fixation offset. NeuroReport. 2003;14(3):489–93.
  47. Baumstimler Y, Parrot J. Stimulus generalization and spontaneous blinking in man involved in a voluntary activity. J Exp Psychol. 1971;88:95–102.
  48. VanderWerf F, Brassinga P, Reits D, Aramideh M, Ongerboer de Visser B. Eyelid movements: behavioral studies of blinking in humans under different stimulus conditions. J Neurophysiol. 2003;89:2784–96.
  49. Maffei A, Angrilli A. Spontaneous eye blink rate: an index of dopaminergic component of sustained attention and fatigue. Int J Psychophysiol. 2018;123:58–63.






Author information




All authors made substantial contributions to the conception and design of the experiment. CY performed the experiments and analyzed the data. All authors gave advice regarding the “Discussion” section and prepared the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Koichi Iwanaga.

Ethics declarations

Ethics approval and consent to participate

This study was carried out in accordance with the recommendations of the Ethics Review Committee of Chiba University, with all subjects providing written informed consent in accordance with the Declaration of Helsinki. The protocol was approved by the ethics review committee at The Institute of Engineering and Center for Frontier Medical Engineering of Graduate School at Chiba University (Acceptance Number: R2-11).

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no conflict of interest.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Yu, C., Ishibashi, K. & Iwanaga, K. Effects of fearful face presentation time and observer’s eye movement on the gaze cue effect. J Physiol Anthropol 42, 8 (2023).
