About Me

A neuroscientist by training, fascinated by data analysis methods, programming, and technology in general. Currently working on a PhD investigating how the human brain separates sounds in its environment. In other words, trying to understand how you manage to listen to your favorite music while ignoring the world around you.

As a neuroscientist with broad technical interests, I gained experience at both Maastricht University and McGill University in Montreal, Canada. I specialize in data analysis methods and experimental design for ultra-high field magnetic resonance imaging (MRI) at 7 Tesla, one of the most advanced MRI systems currently commercially available. Over the course of my Master's and PhD, I developed analysis techniques and pipelines for large multidimensional datasets, focusing mostly on machine learning and Bayesian inference. As part of a multidisciplinary academic research group working at the edge of innovation, I implement creative analysis and software solutions to overcome both data and system limitations. My research confronts me with challenging data in often fuzzily defined contexts, where there is rarely a clear-cut answer to the (data) challenges at hand. By combining an in-depth understanding of biological systems with algorithmic knowledge and coding skills, I develop innovative data-driven solutions to such analysis problems.

My interest in data analysis methods extends beyond neuroscience to areas such as complex sensor signal processing and parameter inference. Within the realm of analysis methods, I am specifically interested in machine learning techniques, both supervised and unsupervised. I have a strong affinity for Bayesian inference and its application to both machine learning and data modeling. I am additionally interested in computational methods for the separation of multivariate signals, for example blind source separation methods such as Independent Component Analysis (ICA). Within my research I apply a multitude of these techniques to disentangle experimental effects in human brain data.
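To give a flavor of the blind source separation idea, here is a minimal sketch in Python (not my research code; scikit-learn, the toy signals, and the mixing matrix are all assumptions made for this example). Three independent sources are mixed linearly, and FastICA recovers them from the mixtures alone, up to sign and ordering:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)

# Three hypothetical independent sources: two structured signals plus noise
s1 = np.sin(2 * t)                 # sinusoid
s2 = np.sign(np.sin(3 * t))        # square wave
s3 = rng.laplace(size=t.size)      # heavy-tailed noise
S = np.c_[s1, s2, s3]
S /= S.std(axis=0)                 # normalize source variance

# We only observe linear mixtures of the sources (mixing matrix A unknown
# to the algorithm); A is made up for the example
A = np.array([[1.0, 1.0, 1.0],
              [0.5, 2.0, 1.0],
              [1.5, 1.0, 2.0]])
X = S @ A.T

# Blind source separation: estimate the sources without knowing A
ica = FastICA(n_components=3, random_state=0)
S_hat = ica.fit_transform(X)       # estimated sources (sign/order arbitrary)
A_hat = ica.mixing_                # estimated mixing matrix
```

The same principle carries over to brain data, where the measured signals are treated as mixtures and the recovered components correspond to underlying sources of variance.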

Overall, I am passionate about employing programming for problem solving and the automation of processes. A large part of my time is spent developing novel approaches and algorithms to solve problems, and to optimize and automate the handling of similar situations in the future. The development of well-written software, mostly in MATLAB and Python, has been a defining factor throughout my research activities.


Research

My research aims to understand how the brain separates the multiple sounds present in the environment, which typically overlap in both time and frequency. A prominent example of such a situation is having a conversation in a noisy environment, where you actively separate the speaker's voice from all other surrounding sounds. The mechanisms the brain employs to resolve this task are described within the Auditory Scene Analysis (ASA) framework. The collection of all sounds present in one’s environment is typically referred to as the auditory scene, analogous to a visual scene but made up of sounds. My research focuses on one of the sub-steps of ASA, namely auditory stream segregation.

Auditory stream segregation refers to the mechanisms by which the brain separates overlapping environmental sounds into coherent perceptual representations (i.e., streams). Auditory streams can consist of one or multiple sound sources, depending on both physical cues and internal ‘brain states’, for example where you focus your attention. The combination of multiple sound sources into a ‘new’ sound is typically called auditory stream integration. Both stream segregation and integration make use of two distinct types of information: bottom-up and top-down. Bottom-up information refers to the physical differences between sounds (e.g., frequency, timbre, onset), while top-down information refers to how higher levels of processing in the brain, for example attention, modulate the representations of sounds in earlier processing stages.

To investigate how the brain performs these segregation and integration tasks, we employ multi-instrument (polyphonic) music, in which listeners must separate or combine the instruments in a song. The physical (bottom-up) difference between instruments is adjusted by manipulating instrument timbre, while top-down effects are varied by changing the subject's locus of attention, that is, which instrument(s) to attend to. To visualize such effects in the brain, my preferred measurement technique is functional Magnetic Resonance Imaging (fMRI). With fMRI it is possible to visualize, among other things, which areas of the brain show changes in blood oxygenation level, from which it can be indirectly inferred that neurons in those areas were active. By comparing these changes across experimental conditions, we can learn how the brain processes the variables manipulated in our experiment. Before any conclusions can be drawn from the measured blood oxygenation changes, however, the fMRI data first needs to be pre-processed and analyzed; developing novel (combinations of) analysis methods for this kind of data is one of my main professional interests. To acquire the brain data, I use a 7 Tesla MRI scanner, which provides very detailed images of the brain in both space and time.
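A common way to compare such condition-dependent signal changes is a voxel-wise general linear model (GLM). The sketch below is a self-contained illustration with made-up timing, effect sizes, and noise, written in plain numpy/scipy rather than a dedicated fMRI package; it simulates a single voxel's time course and tests a hypothetical "attend instrument A versus attend instrument B" contrast:

```python
import numpy as np
from scipy.stats import gamma

# Hypothetical acquisition parameters for the simulation
TR, n_scans = 2.0, 200                     # repetition time (s), volumes
frame_times = np.arange(n_scans) * TR

def hrf(t):
    """Simple double-gamma haemodynamic response function."""
    return gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)

def regressor(onsets, duration=10.0):
    """Boxcar for one condition's blocks, convolved with the HRF."""
    box = np.zeros(n_scans)
    for onset in onsets:
        box[(frame_times >= onset) & (frame_times < onset + duration)] = 1.0
    return np.convolve(box, hrf(np.arange(0.0, 32.0, TR)))[:n_scans]

# Design matrix: condition A, condition B, intercept
X = np.c_[regressor(np.arange(10, 390, 80)),   # "attend A" blocks
          regressor(np.arange(50, 390, 80)),   # "attend B" blocks
          np.ones(n_scans)]

# Simulated voxel: responds more strongly to condition A, plus noise
rng = np.random.default_rng(0)
y = X @ np.array([1.0, 0.3, 100.0]) + rng.normal(0.0, 0.5, n_scans)

# Ordinary least-squares fit and the A-minus-B contrast t-statistic
beta, ss_res, *_ = np.linalg.lstsq(X, y, rcond=None)
c = np.array([1.0, -1.0, 0.0])
sigma2 = ss_res[0] / (n_scans - np.linalg.matrix_rank(X))
t_stat = (c @ beta) / np.sqrt(sigma2 * (c @ np.linalg.pinv(X.T @ X) @ c))
print(f"A-minus-B effect: {c @ beta:.2f} (t = {t_stat:.1f})")
```

In a real analysis the same model is fit at every voxel after pre-processing, and the resulting contrast maps show where blood oxygenation changes differ between experimental conditions.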

For more information on my education and professional experience, please visit my LinkedIn page or contact me directly.

Updated 20 September 2019