Workshop on Auditory Neuroscience, Cognition and Modelling 2016


Research workshop, QMUL, London
Wed 17th February 2016, 9:30am-5pm

Keynote Presentations - Abstracts


  • Prof Elvira Brattico (Aarhus University), 10:10-11:10
    Automatic and conscious processing of musical sound features in the brain

    Abstract:
    Several features of the auditory environment are analysed and predicted automatically and irrepressibly, even before the intervention of attention, in order to facilitate responses to salient and potentially dangerous events. Music capitalises on variations of “low-level” spectrotemporal features common to other auditory signals, and is also characterised by “high-level” sound schemata based on conventional agreement between members of a certain musical culture, which need to be learned via acculturation. In this talk I will review my recent neurophysiological and neuroimaging studies on the attentional resources required for encoding and predicting “low-” vs. “high-level” sound features, both in isolation and in a realistic musical context. I will also discuss how music acculturation can strikingly modify the neural processes and structures involved in musical feature processing and prediction.

    Bio:
    Elvira Brattico (PhD in Psychology, University of Helsinki, 2007) is Professor of Neuroscience, Music and Aesthetics at the Center for Music in the Brain (MIB), Department of Clinical Medicine, Aarhus University and the Royal Academy of Music, Aarhus/Aalborg, Denmark. She also holds adjunct professorships at the University of Helsinki and the University of Jyväskylä, Finland. Her background is multidisciplinary: she studied piano performance and philosophy in Italy, and cognitive neuroscience and brain research methods in Finland and Canada. She is a pioneer in combining computational music information retrieval methods with neurophysiological and neuroimaging methods to address questions concerning music processing, such as how the brain represents musical features, why we enjoy music, how music shapes neural structures and functions, and how each of these processes depends on the characteristics of the individual. Prof. Brattico has published more than 100 papers, of which 68 appear in peer-reviewed international journals or conference proceedings, along with 10 invited book chapters.

  • Dr Jean-Julien Aucouturier (CNRS/IRCAM), 13:30-14:30
    Real-time transformations of emotional speech alter speakers' emotions in a congruent direction

    Abstract:
    Recent research on emotion regulation and forward models has suggested that emotional signals are produced in a goal-directed way and monitored for errors, like other intentional actions. We created a digital audio platform to covertly modify the emotional tone of participants' voices while they talked, in the direction of happiness, sadness or fear. We found that, while external listeners perceived the audio transformations as natural examples of the intended emotions, the great majority of the participants remained unaware that their own voices were being manipulated. We take this to indicate that people are not continuously monitoring their own voice to make sure it meets a predetermined emotional target. Instead, as a consequence of listening to their altered voices, the emotional state of the participants changed in congruence with the emotion portrayed, as measured both by self-report and by skin conductance responses (SCR). This, we believe, is the first evidence of peripheral feedback effects on emotional experience in the auditory domain. As such, the result reinforces the wider framework of self-perception theory: that we often use the same inferential strategies to understand ourselves as those we use to understand others. (A simplified signal-processing sketch of this kind of voice transformation is given after the abstracts below.)

    Bio:
    JJ Aucouturier is a CNRS researcher at IRCAM in Paris. He was trained in Computer Science and has held several postdoctoral positions in Cognitive Neuroscience, at the RIKEN Brain Science Institute in Tokyo, Japan, and at the University of Dijon, France. He is now building a music neuroscience lab at IRCAM and is interested in using audio signal processing technologies to understand how sound and music create emotions. Lab website: http://cream.ircam.fr

  • Dr Richard E. Turner (University of Cambridge), 16:00-17:00
    Probabilistic models for natural audio signals

    Abstract:
    In this talk I will present a family of probabilistic models specifically designed for audio analysis that are able to automatically adapt to match the statistics of the input. At the heart of the approach are modern probabilistic machine learning methods, which provide techniques both for tuning representations and for handling noise and missing data through an explicit representation of uncertainty. I will show that the new methods provide superior representations of audio, as evidenced by results on denoising, missing data imputation and audio synthesis problems. I will also show that there is a close connection between the adapted representations and those employed by the brain for auditory scene analysis. This suggests that probabilistic modelling might shed light on the computations being performed in the auditory brain. (A simplified sketch of this style of probabilistic denoising is given after the abstracts below.)

    Bio:
    Richard Turner holds a Lectureship (equivalent to US Assistant Professor) in Computer Vision and Machine Learning in the Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, UK. Before taking up this position, he held an EPSRC Postdoctoral Research Fellowship, which he spent at both the University of Cambridge and the Laboratory for Computational Vision, NYU, USA. He has a PhD in Computational Neuroscience and Machine Learning from the Gatsby Computational Neuroscience Unit, UCL, UK, and an M.Sci. in Natural Sciences (specialising in Physics) from the University of Cambridge, UK. His research interests include machine learning for signal processing and developing probabilistic models of perception.
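
The emotional voice manipulation described in Dr Aucouturier's abstract works by modifying acoustic cues of the speaker's voice in real time. The sketch below is only a rough, offline illustration of that kind of manipulation, not the platform used in the study: it nudges a synthetic "voice" towards the slightly higher-pitched, brighter timbre associated with happy speech. The function names and parameter values are illustrative assumptions.

    import numpy as np
    from scipy import signal

    def synth_vowel(sr=16000, dur=1.0, f0=120.0):
        # Crude stand-in for a spoken vowel: a few harmonics of f0 under a smooth envelope.
        t = np.arange(int(sr * dur)) / sr
        y = sum(a * np.sin(2 * np.pi * k * f0 * t) for k, a in [(1, 1.0), (2, 0.5), (3, 0.25)])
        return y * np.hanning(len(t))

    def shift_pitch(y, cents):
        # Naive pitch shift by resampling; unlike the real-time transformations used in
        # the study, this also shortens the signal slightly.
        ratio = 2.0 ** (cents / 1200.0)
        return signal.resample(y, int(round(len(y) / ratio)))

    def brighten(y, sr, cutoff_hz=2000.0, extra_gain=0.5):
        # Mix in a high-passed copy of the signal to emphasise high frequencies,
        # a rough proxy for the "brighter" timbre of happy speech.
        b, a = signal.butter(1, cutoff_hz / (sr / 2), btype="high")
        return y + extra_gain * signal.lfilter(b, a, y)

    sr = 16000
    voice = synth_vowel(sr)
    happier = brighten(shift_pitch(voice, cents=50.0), sr)  # subtle upward shift of +50 cents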

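Dr Turner's abstract highlights two ingredients of the probabilistic approach: models that adapt to the statistics of the particular input, and an explicit representation of uncertainty that lets noise and missing data be handled in a principled way. The sketch below is not the model family presented in the talk; it is a minimal, stationary Gaussian (Wiener-style) denoiser in which each DFT coefficient of the clean signal is given a prior variance estimated from the noisy recording itself, and the resulting Gaussian posterior provides both a denoised estimate (its mean) and a measure of remaining uncertainty (its variance). All names and values are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    sr = 8000
    t = np.arange(sr) / sr
    clean = 0.6 * np.sin(2 * np.pi * 220 * t) + 0.3 * np.sin(2 * np.pi * 440 * t)
    noise_var = 0.1
    noisy = clean + rng.normal(scale=np.sqrt(noise_var), size=clean.shape)

    n = len(noisy)
    X = np.fft.rfft(noisy)
    noise_power = noise_var * n              # expected noise power |N_k|^2 in each rfft bin
    # Empirical-Bayes prior variance for each clean coefficient, estimated from this
    # particular input ("adapt to match the statistics of the input").
    tau = np.maximum(np.abs(X) ** 2 - noise_power, 0.0)
    # Gaussian posterior per coefficient: a Wiener-style shrinkage of the observation,
    # with an explicit posterior variance quantifying the remaining uncertainty.
    gain = tau / (tau + noise_power)
    post_mean = gain * X
    post_var = gain * noise_power
    denoised = np.fft.irfft(post_mean, n=n)

Missing-data imputation would amount to treating the unobserved samples as further unknowns in the same posterior; the models presented in the talk are considerably richer than this stationary sketch.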


Back to main workshop webpage
