
Research Projects


Multimodal Perception of Action

Belkis Ezgi Arikan (Philipps-University Marburg)
Supervisors: Tilo Kircher (Philipps-University Marburg), Katja Fiehler (Justus-Liebig-University Giessen), Laurence Harris (York University, Toronto)

  • Abstract

    How we predict, process, and integrate sensory inputs arising from our own actions remains an open question. In particular, the mechanism by which we predict the multisensory consequences of our own actions is largely unknown. The aim of our project is to investigate how predictive mechanisms in the action-perception cycle shape the perception of visual and auditory consequences of one's own actions. Through behavioral and fMRI experiments, we aim to assess supramodal versus unimodal effects of matching actions to their sensory feedback, as well as the neural correlates of supramodal predictive mechanisms and their consequences.
    Our studies will investigate these mechanisms in the framework of the so-called forward model, which is believed to predict action consequences, compare this prediction to the actual sensory feedback, and update the system in case of mismatches. Our ultimate goal is to assess whether the forward model includes predictions for all modalities and how it contributes to multisensory integration in the context of action.
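    To make the comparator logic concrete, the following minimal sketch illustrates a forward-model loop: an efference copy of the motor command generates a sensory prediction, the prediction is compared with the actual feedback, and the mismatch drives an update. The scalar linear mapping, the learning rate, and all names are illustrative assumptions, not the project's actual model.

    ```python
    import numpy as np

    # Minimal forward-model comparator sketch. Illustrative assumptions:
    # a scalar motor command, a linear motor-to-sensory mapping, and a
    # fixed learning rate.
    class ForwardModel:
        def __init__(self, gain=0.8, learning_rate=0.1):
            self.gain = gain  # internal estimate of the motor-to-sensory mapping
            self.learning_rate = learning_rate

        def predict(self, efference_copy):
            # Predict the sensory consequence of the outgoing motor command.
            return self.gain * efference_copy

        def update(self, efference_copy, feedback):
            # Compare the prediction with the actual feedback; the mismatch
            # (prediction error) drives the model update.
            error = feedback - self.predict(efference_copy)
            self.gain += self.learning_rate * error * efference_copy
            return error

    model = ForwardModel()
    true_gain = 1.0  # the real motor-to-sensory mapping, unknown to the model
    rng = np.random.default_rng(0)
    for trial in range(50):
        command = rng.uniform(0.5, 1.5)  # motor command and its efference copy
        feedback = true_gain * command   # actual sensory consequence
        model.update(command, feedback)
    print(f"estimated gain after learning: {model.gain:.3f}")  # approaches 1.0
    ```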

Assessing neural and behavioral correlates of emotion perception from movement kinematics

Julia Bachmann (Justus-Liebig-University Giessen)
Supervisor: Jörn Munzert (Justus-Liebig-University Giessen)

  • Abstract

    Being able to recognize and interpret a person's emotional state is a substantial asset in social interactions. Emotion itself is a multifaceted phenomenon that can be expressed through various behaviors, for instance through gestures, facial expressions, and speech. The aim of our work is to assess the neural and behavioral correlates of emotion perception from whole-body movement kinematics. Moreover, we seek to explore and compare intrapersonal factors that influence emotion perception abilities, both in healthy populations and in risk groups.

Anticipatory smooth pursuit of intentional finger movements

Jing Chen (Justus-Liebig-University Giessen)
Supervisors: Karl Gegenfurtner (Justus-Liebig-University Giessen), Alexander Schütz (Justus-Liebig-University Giessen), Douglas Munoz (Queen's University, Kingston)

  • Abstract

    In this project, we investigate how the eye behaves while tracking one's own finger movements. In general, the eye anticipates the movement and starts to move before the finger does. Using EEG recordings, we are also trying to establish a direct link between arm motor preparation (indexed by lateralized readiness potentials) and these anticipatory eye movements.
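    As a pointer to how the LRP index is conventionally derived, here is a minimal sketch using the standard double-subtraction; the synthetic data, array shapes, and channel assignment are illustrative assumptions, not the project's analysis pipeline.

    ```python
    import numpy as np

    # Standard double-subtraction LRP on synthetic data (shapes and channel
    # assignment are illustrative assumptions, not the project's pipeline).
    rng = np.random.default_rng(0)
    n_trials, n_times = 100, 500
    c3_right = rng.normal(size=(n_trials, n_times))  # electrode C3, right-hand trials
    c4_right = rng.normal(size=(n_trials, n_times))  # electrode C4, right-hand trials
    c3_left = rng.normal(size=(n_trials, n_times))   # electrode C3, left-hand trials
    c4_left = rng.normal(size=(n_trials, n_times))   # electrode C4, left-hand trials

    # LRP(t) = 1/2 * [ mean(C3 - C4 | right hand) + mean(C4 - C3 | left hand) ]
    # The double subtraction isolates motor preparation specific to the hand
    # contralateral to each electrode and cancels hand-unspecific activity.
    lrp = 0.5 * ((c3_right - c4_right).mean(axis=0)
                 + (c4_left - c3_left).mean(axis=0))
    print(lrp.shape)  # one LRP value per time point
    ```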

Infants’ perception and understanding of the difference between real objects and photographs of those objects

Theresa Gerhard (Justus-Liebig-University Giessen)
Supervisors: Gudrun Schwarzer (Justus-Liebig-University Giessen), Jody C. Culham (Western University, London, Ontario)

  • Abstract

    Our work aims to provide new insights into the development of infants' 2D-3D perception abilities. We are interested in when, and especially how, infants acquire an understanding of the relevant differences between real objects and photographs of those objects. In particular, we want to investigate whether this understanding is innate to human object recognition or is learned through successful attempts to interact with real objects and failed interactions with depicted objects. We also seek to examine whether infants in their first year of life already show distinct processing patterns for real objects and object images, as indicated by studies with adults and older children.

Dynamic prediction in the oculomotor system

Alexander Göttker (Justus-Liebig-University Giessen)
Supervisors: Karl Gegenfurtner (Justus-Liebig-University Giessen), Katja Fiehler (Justus-Liebig-University Giessen), Gunnar Blohm (Queen's University, Kingston)

  • Abstract

    When looking around in the world, we have the impression of a continuous and stable image with high acuity and a great amount of detail across our entire visual field. In reality, we are constantly moving our eyes: only the center of the visual field, the fovea, processes information with high acuity, and acuity decays rapidly in the periphery. This foveal organization requires precise eye movements that bring our gaze to relevant locations and keep moving objects of interest on the fovea. We are investigating how the oculomotor system uses dynamic prediction to track moving objects, how different types of eye movements (saccades and pursuit) interact, and how the execution of these eye movements influences our perception.

Effect of material and object shape on grasp locations

Lina Katharina Klein (Justus-Liebig-University Giessen)
Supervisor: Roland Fleming (Justus-Liebig-University Giessen)

  • Abstract

    To ensure successful interaction with an object, we need to choose appropriate grasp locations. By taking into account an object's shape, material properties, and the desired action, humans estimate stable, comfortable grasp points that minimize slippage and torsion. The rules governing grasp point selection for three-dimensional objects are not yet clear. In my project, I aim to investigate how an object's material and shape affect grasp locations.
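    To illustrate one way the torsion criterion can be formalized, here is a toy sketch under strong simplifying assumptions (a two-digit precision grip, point contacts, a known center of mass, and hand-picked candidate grasp pairs); it is not the project's actual model.

    ```python
    import numpy as np

    # Toy sketch: rank two-digit grasps on a rigid object by the gravitational
    # torque they must resist about the grasp axis (illustrative assumptions:
    # point contacts, known center of mass, coordinates in meters).
    def grasp_axis_torque(p1, p2, com, mass=0.2, g=9.81):
        axis = (p2 - p1) / np.linalg.norm(p2 - p1)  # unit grasp axis
        midpoint = 0.5 * (p1 + p2)
        force = np.array([0.0, 0.0, -mass * g])     # gravity acting at the CoM
        torque = np.cross(com - midpoint, force)    # torque about the grasp midpoint
        # The component along the grasp axis must be resisted by frictional
        # torsion at the fingertips, so smaller is better.
        return abs(np.dot(torque, axis))

    com = np.array([0.05, 0.0, 0.0])  # off-center mass
    candidates = [
        (np.array([0.0, -0.03, 0.0]), np.array([0.0, 0.03, 0.0])),    # far from CoM
        (np.array([0.05, -0.03, 0.0]), np.array([0.05, 0.03, 0.0])),  # through CoM
    ]
    torques = [grasp_axis_torque(p1, p2, com) for p1, p2 in candidates]
    print(torques, "-> best grasp index:", int(np.argmin(torques)))
    ```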

Visual context learning and eye movements

Dennis Koch (Philipps-University Marburg)
Supervisor: Anna Schubö (Philipps-University Marburg)

  • Abstract

    Every second, we move our eyes several times to bring certain regions of the visual field onto the small retinal area of highest resolution, the fovea. These movements are considered to reflect overt shifts of attention. This project will examine how various visual environments influence eye-movement patterns. To investigate how the brain integrates sensorimotor and cognitive information, we will manipulate different stimulus and task parameters, such as visual scene context, task relevance, valence, and predictability, and collect the resulting eye-movement data (e.g., saccade latency, fixation duration) as well as classical behavioral measures such as error rates and reaction times.

Active movement control in the process of haptic exploration

Alexandra Lezkan (Justus-Liebig-University Giessen)
Supervisor: Knut Drewing (Justus-Liebig-University Giessen)

  • Abstract

    How hard is the table in front of you? To answer this question, you would probably reach out with your hand and press on the table. As in this example, in the haptic sense we generally explore an object actively to obtain relevant sensory information. Often a single touch is not sufficient, so exploratory movements are repeated. Over time, these movements can change depending on the sensory feedback gathered from the explored object. Using softness and roughness exploration as examples, we are studying how this sensory feedback shapes movement parameters. Additionally, we assume that top-down influences affect movement control; here, we are especially looking at the effects of expectation, motivation, memory, and task difficulty on movement control during exploration. Our aim is to quantitatively model the control of natural exploration. We expect that people fine-tune their exploratory movements to optimize perception over the course of exploration, and that top-down influences can moderate the efficiency of this fine-tuning.

Egocentric and allocentric reference frames in reaching movements

Zijian Fabio Lu (Justus-Liebig-University Giessen)
Supervisor: Katja Fiehler (Justus-Liebig-University Giessen)

  • Abstract

    It is widely believed that people use egocentric and allocentric reference frames to plan and execute reaching movements in daily life. My project investigates how people make use of the allocentric reference frame in memory-guided and online reaching movements in 2D and 3D naturalistic scenes (e.g., a breakfast scene). In particular, it aims to reveal factors (e.g., gaze behavior, prior knowledge of the reaching target) that affect the use of allocentric information, as well as how allocentric information is integrated with the egocentric reference frame.
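    One standard way such integration is formalized in the cue-combination literature is reliability-weighted averaging; the following minimal sketch illustrates that idea under the assumption of Gaussian noise. The numbers are invented for illustration, and the project itself may use a different model.

    ```python
    import numpy as np

    # Minimal sketch of reliability-weighted integration of an egocentric and
    # an allocentric estimate of a reach target (assumption: maximum-likelihood
    # combination of two Gaussian cues).
    def integrate(x_ego, sigma_ego, x_allo, sigma_allo):
        w_ego = (1 / sigma_ego**2) / (1 / sigma_ego**2 + 1 / sigma_allo**2)
        x_hat = w_ego * x_ego + (1 - w_ego) * x_allo
        # The combined estimate is more reliable than either cue alone.
        sigma_hat = np.sqrt(1 / (1 / sigma_ego**2 + 1 / sigma_allo**2))
        return x_hat, sigma_hat

    # Example: a noisy egocentric estimate and a more reliable landmark-based one.
    x_hat, sigma_hat = integrate(x_ego=10.0, sigma_ego=2.0, x_allo=12.0, sigma_allo=1.0)
    print(f"integrated estimate: {x_hat:.2f} cm (sd {sigma_hat:.2f} cm)")
    ```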

Attentional allocation during movement preparation and its effects on movement kinematics in humans

Tobias Moehler (Justus-Liebig-University Giessen)
Supervisors: Katja Fiehler (Justus-Liebig-University Giessen), Thilo Womelsdorf (York University, Toronto), Karl Gegenfurtner (Justus-Liebig-University Giessen)

  • Abstract

    In complex environments crowded with relevant and irrelevant stimuli, attention is a powerful selection mechanism that facilitates the fluent and efficient functioning of our perceptual and motor systems. Using dual-task paradigms that combine cognitive tasks (discrimination, memory) with motor tasks (saccades, reaching), I aim to investigate the effects of attention on perceptual and motor performance and thereby gain insight into the coupling of attention, perception, and action.

Attentional allocation across modalities during eye and hand movements

Tom Nissens (Justus-Liebig-University Giessen)
Supervisor: Katja Fiehler (Justus-Liebig-University Giessen)

  • Abstract

    The purpose of my PhD is to further elucidate how attention for action and attention for perception are allocated across different modalities, in particular vision and touch. I will examine how cross-modal attentional allocation between touch and vision influences, and is influenced by, eye and hand movements.

Perception of indirect action consequences

Mareike Pazen (Philipps-University Marburg)
Supervisors: Tilo Kircher (Philipps-University Marburg), Benjamin Straube (Philipps-University Marburg)

  • Abstract

    It has been established that anticipated sensory states resulting from voluntary movements are perceived differently from unexpected sensory re-afferences. Comparisons between predicted and actual consequences are used to attribute incoming sensory information to an agent, that is, to oneself or to an external source, a process which has been found to be deficient in patients with schizophrenia. The use of auxiliary means such as tools poses a challenge for these findings, since tools loosen the coupling between action and effect: both direct (hand) and indirect (tool) consequences of self-generated actions have to be incorporated into the internal prediction model to enable goal-directed behavior. Our work seeks to explore the neural correlates underlying the perception of action consequences in the context of self-generated movements, with special emphasis on tool use and multimodal sensorimotor integration. Further inferences will be drawn from comparisons between healthy control participants and patients with schizophrenia.

EEG-analysis of the processing of visually simulated self-motion

Constanze Schmitt (Philipps-University Marburg)
Supervisor: Frank Bremmer (Philipps-University Marburg)

  • Abstract

    The perception of self-motion direction (heading) is of utmost importance in everyday life. In my project, I aim to investigate the neural correlates (via EEG) of human processing of visually simulated self-motion. More specifically, I will concentrate on predictive aspects of self-motion processing as well as on pre-attentive heading perception.

Neural correlates of self-motion processing in macaque area 7a

Adrian Schütz (Philipps-University Marburg)
Supervisor: Frank Bremmer (Philipps-University Marburg)

  • Abstract

    Veridical perception of self-motion direction (heading) is of utmost importance for successful navigation through an environment. Self-motion induces so-called optic flow on the retina. During self-motion, however, our eyes are not stationary but constantly moving, and these eye movements challenge heading perception. In my project, I will investigate neural activity in macaque area 7a during simulated and/or real self-motion. This cortical region is known to carry information about heading and eye position. I will employ multi-electrode-array recordings in combination with EEG. Ultimately, I aim to decode spatial and self-motion information online from these recordings while the animal is moving through a (virtual or real) environment.
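    As an illustration of the decoding step, here is a minimal sketch with synthetic cosine-tuned units and a simple least-squares readout of heading; the tuning model, noise, and array sizes are assumptions for illustration, not properties of the recorded data.

    ```python
    import numpy as np

    # Minimal heading-decoding sketch: synthetic cosine-tuned units and a
    # linear least-squares readout (all parameters are illustrative).
    rng = np.random.default_rng(1)
    n_units, n_trials = 50, 400
    preferred = rng.uniform(0, 2 * np.pi, n_units)  # preferred heading per unit
    heading = rng.uniform(0, 2 * np.pi, n_trials)   # true heading per trial

    # firing rate = baseline + gain * cos(heading - preferred) + noise
    rates = 10 + 5 * np.cos(heading[:, None] - preferred[None, :])
    rates += rng.normal(0, 1, rates.shape)

    # Decode (cos, sin) of heading linearly, then recover the angle; decoding
    # the unit vector avoids the circular wrap-around at 0 / 2*pi.
    targets = np.column_stack([np.cos(heading), np.sin(heading)])
    X = np.column_stack([rates, np.ones(n_trials)])  # add an intercept column
    weights, *_ = np.linalg.lstsq(X, targets, rcond=None)
    decoded = np.arctan2(X @ weights[:, 1], X @ weights[:, 0])

    error = np.angle(np.exp(1j * (decoded - heading)))  # wrapped angular error
    print(f"median absolute error: {np.degrees(np.median(np.abs(error))):.1f} deg")
    ```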

Infants' understanding and perception of familiar sizes in real objects and photographs of these objects

Özlem Sensoy (Justus-Liebig-University Giessen)
Supervisor: Gudrun Schwarzer (Justus-Liebig-University Giessen)

  • Abstract

    In their daily lives, infants are confronted with pictures of objects, e.g. in storybooks, and with scale models of objects. When we show a scale model of a house to an infant, we assume that the infant understands that the model stands for a real house in the real world. So far, however, it is not clear when infants begin to understand the familiar size of an object, or how an object's size influences infants' reactions towards it. Besides size, three-dimensionality seems to be another important feature for triggering knowledge about the real world: infants tend to react to 3D models as they do to real objects, but not to pictures. In our project, we present infants as young as seven months with a familiar-sized object next to a tiny or huge version of the same object. Our aim is to investigate whether infants show a visual and a reaching preference for the familiar-sized objects. In addition, we show the infants either real objects/3D models or photographs of these objects. This manipulation of stimulus format allows us to investigate whether infants show an earlier or stronger preference for the familiar size when presented with a real object rather than a photograph of it.

A mind divided? The animate-inanimate distinction in action and language perception

Friederike Seyfried (Justus-Liebig-University Giessen)
Supervisors: Mathias Hegele (Justus-Liebig-University Giessen), Ina Bornkessel-Schlesewsky (University of South Australia)

  • Abstract

    One of my main interests is how human language processing is related to and depends on other cognitive functions. In my PhD I will investigate the relationship between animacy in visual perception and in language comprehension.
    The first project will study the conceptual space for animates and inanimates in visual perception and in written and auditory language comprehension in two languages, German and Hindi, to test whether this conceptual space is shaped by the grammatical features of a language. To study conceptual space, we will use representational similarity analysis (RSA; see the sketch after this abstract).
    The second project looks at transitive actions and features of actors, such as intentional movement, across visual perception and language comprehension. Using a repetition suppression paradigm, we will test whether there are regions or networks in the brain that process actor features within each modality (vision or language) or independently of the modality in which the actor is presented.
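    The following minimal sketch illustrates the RSA logic referenced above: build a representational dissimilarity matrix (RDM) from condition-wise response patterns, then rank-correlate two RDMs. The data shapes and the random example patterns are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    # Minimal RSA sketch (illustrative shapes: each row is the response
    # pattern for one condition, e.g. voxel values or model features).
    def rdm(responses):
        # 1 - Pearson correlation between all pairs of condition patterns,
        # returned as the condensed upper triangle.
        return pdist(responses, metric="correlation")

    rng = np.random.default_rng(0)
    n_conditions, n_features = 20, 100
    brain_patterns = rng.normal(size=(n_conditions, n_features))  # e.g. fMRI patterns
    model_patterns = rng.normal(size=(n_conditions, n_features))  # e.g. a semantic model

    # Second-order comparison: rank-correlate the two dissimilarity structures.
    rho, p = spearmanr(rdm(brain_patterns), rdm(model_patterns))
    print(f"RSA correlation: rho = {rho:.3f}, p = {p:.3f}")
    ```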

Neural processing of verbal and nonverbal communicative actions in intergroup contexts

Miriam Steines (Philipps-University Marburg)
Supervisors: Richard Wiese (Philipps-University Marburg), Benjamin Straube (Philipps-University Marburg)

  • Abstract

    By means of functional magnetic resonance imaging, we aim to reveal the neural correlates of recognizing communicative intentions. Furthermore, we will investigate the hypothesis that the processing of communicative actions leads to differential activations depending on the intergroup context in which they occur and on the subjects' genetic risk for schizophrenia. The project consists of two experiments focusing on either nonverbally or verbally expressed intentions: experiment 1 explores the perception and execution of emotional facial expressions, whereas experiment 2 investigates the comprehension of indirect speech acts (i.e., speech acts whose meaning does not correspond to the literal meaning of the utterance). Both studies manipulate an 'environmental' within-subject factor (ingroup vs. outgroup, i.e., whether subjects identify with the persons shown in the stimuli) as well as the between-subject factor of genetic predisposition for schizophrenia (healthy subjects vs. first-degree relatives of patients with schizophrenia).

The role of the distinction between self and other in action-feedback monitoring

Lukas Uhlmann (Philipps-University Marburg)
Supervisors: Benjamin Straube (Philipps-University Marburg), Tilo Kircher (Philipps-University Marburg)

  • Abstract

    To decide whether the multisensory consequences of a self-generated action are in fact caused by the motor output itself and not by mere changes in the environment, the brain needs to predict the sensory consequences of a movement even before the action is executed. On a neural level, self-generated sensory feedback elicits less BOLD activity in sensory cortices than externally generated sensory feedback. Interestingly, reduced sensory suppression is related to core symptoms of schizophrenia, such as passivity symptoms or hallucinations, pointing to sensory prediction errors in patients with these symptoms. Dysfunctional action monitoring could thus underlie defective self-other discrimination, e.g. when patients with passivity symptoms have the sensation that their own actions or thoughts are induced or influenced by external sources. The aim of this dissertation is to examine the influence of the self-other distinction, on both behavioral and neural levels, on auditory and visual consequences of self-generated and externally generated actions in healthy control subjects and in patients with schizophrenia with and without passivity symptoms and/or hallucinations.

Linking different levels of action representations

Dmytro Velychko (Philipps-University Marburg)
Supervisor: Dominik Endres (Philipps-University Marburg)

  • Abstract

    To survive in a natural environment, humans need to perform actions as fast and as optimally as possible. Given the constraints of limited reaction time and bounded computational resources, we hypothesize that the brain implements a compromise between the extremes of optimal control and stored motor programs, and we want to investigate the nature of this compromise. To this end, we are creating flexible dynamical-systems models of sensorimotor integration that can incorporate learned dynamics while accounting for both slow contextual changes and fast sensory changes.
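    To make the hypothesized compromise concrete, here is a toy sketch: a one-dimensional point mass whose motor command blends a stored, pre-computed movement profile with online goal-directed feedback control. The dynamics, gains, and blend weight are invented for illustration and do not represent the project's models.

    ```python
    import numpy as np

    # Toy sketch of a compromise between a stored motor program and online
    # feedback control for a 1-D point mass (all parameters illustrative).
    dt, steps, goal = 0.01, 200, 1.0
    t = np.arange(steps) * dt

    # Stored motor program: a pre-computed minimum-jerk position profile.
    tau = t / t[-1]
    program = goal * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)

    x, v = 0.0, 0.0
    k_p, k_d, alpha = 200.0, 30.0, 0.5  # feedback gains and blend weight
    for i in range(steps):
        u_program = k_p * (program[i] - x) - k_d * v  # track the stored profile
        u_feedback = k_p * (goal - x) - k_d * v       # pure online goal-directed control
        u = alpha * u_program + (1 - alpha) * u_feedback
        v += u * dt  # unit mass; semi-implicit Euler integration
        x += v * dt
    print(f"final position: {x:.3f} (goal = {goal})")
    ```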

Development of speech and emotion perception of infants in social interaction

Michael Vesker (Philipps-University Marburg)
Supervisor: Gudrun Schwarzer (Justus-Liebig-University Giessen)

  • Abstract

    We are investigating the development of speech and emotion perception in children. These two domains have been studied previously, but mostly in isolation. Since speech perception and emotion perception are often utilized simultaneously in the natural context of observing faces (during social interactions), we believe it is important to investigate how these two systems interact and develop in relation to one another. We aim to use cross-modal priming experiments in children aged 6, 9, and 12 years, as well as in adults, in order to track the developmental path of these two crucial information-processing abilities.

Action-Perception Coupling in Rivalry

Peter Veto (Philipps-University Marburg)
Supervisors: Wolfgang Einhäuser-Treyer (Technische Universität Chemnitz), Nikolaus Troje (Queen's University, Kingston)

  • Abstract

    The main question of my project is how movement influences perception. Part of my research examines the effects of one's own hand actions on bistable perception: when two interpretations of a visual stimulus compete, a bias can be created by coupling one or both interpretations with an action. The type of coupling that leads to a bias tells us more about how actions are incorporated into our perceptual understanding of the world. Perception is also influenced by the perceived actions of another animate being; that is, we visually process biological motion differently from similar non-biological movement stimuli. We study how this difference in processing affects the outcome of a perceptual competition between biological and non-biological stimuli.

Control of eye movements by informational value

Christian Wolf (Philipps-University Marburg)
Supervisor: Alexander C. Schütz (Philipps-University Marburg)

  • Abstract

    We move our eyes more often than our hearts beat, and no decision is made more often than the decision where to look next. Eye movements are therefore an excellent tool for studying human perception and cognition. It is well established that shifting gaze to objects with high motivational value (e.g., a favorite food item or items associated with reward) facilitates eye movements and leads to shorter saccadic reaction times, and that the strength of this facilitation is modulated by the expected reward. At the same time, eye movements are initiated faster whenever we are gathering information relevant for fulfilling a task. We want to find out whether the brain also assigns a value to the information that can be acquired by executing an eye movement (perceptual value), similar to motivational value. Can this perceptual value also modulate internal decision processes, as reflected in saccadic reaction times?
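    One common way such latency effects are modeled is with a rise-to-threshold account in the spirit of the LATER model; the sketch below is an illustrative framing, not the project's stated method. A decision signal rises to a threshold at a rate drawn anew on each trial, and value is assumed to raise the mean rate of rise, which shortens latencies.

    ```python
    import numpy as np

    # LATER-style rise-to-threshold sketch (illustrative framing, not the
    # project's stated method): latency = threshold / rate, with the rate
    # drawn per trial from a Gaussian whose mean is boosted by value.
    rng = np.random.default_rng(2)

    def simulate_latencies(mean_rate, sd_rate=0.8, threshold=1.0, n=10000):
        rates = rng.normal(mean_rate, sd_rate, n)
        rates = rates[rates > 0]  # only rising signals trigger a saccade
        return threshold / rates  # latency in seconds

    baseline = simulate_latencies(mean_rate=5.0)  # low-value target
    valued = simulate_latencies(mean_rate=6.0)    # assumed value-boosted rate

    print(f"median latency, baseline: {np.median(baseline) * 1000:.0f} ms")
    print(f"median latency, valued:   {np.median(valued) * 1000:.0f} ms")
    ```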
