Workshop on Auditory Neuroscience, Cognition and Modelling 2016
Research workshop, QMUL, London
Wed 17th February 2016, 9:30am-5pm
Workshop Programme
Workshop book of abstracts (PDF)
KEYNOTE TALKS: (click on titles for abstracts)
- Prof Elvira Brattico (Aarhus University), 10:10-11:10
Automatic and conscious processing of musical sound features in the brain [VIDEO]
- Dr Jean-Julien Aucouturier (CNRS/IRCAM), 13:30-14:30
Real-time transformations of emotional speech alter speaker's emotions in a congruent direction [VIDEO]
- Dr Richard E. Turner (University of Cambridge), 16:00-17:00
Probabilistic models for natural audio signals [VIDEO]
ORAL PRESENTATIONS - SESSION 1 (11:10-12:10)
- High-level influences on auditory streaming [VIDEO]
Alexander J. Billig, Matthew H. Davis, Robert P. Carlyon
- Multiple hypothesis testing on partial coherences: Graphical modelling of neurological data in EEG/MEG [VIDEO]
Deborah Schneider-Luftman
- EEG-based Emotion Detection in Music Listening [VIDEO]
Rafael Ramirez, Zacharias Vamvakousis, Sergio Giraldo
ORAL PRESENTATIONS - SESSION 2 (14:30-15:30)
- Investigating the role of auditory and cognitive factors for various speech-perception-in-noise situations in older listeners [VIDEO]
Sarah Knight and Antje Heinrich
- Contextual effects on the neural encoding of speech sounds [VIDEO]
S. Rutten, R. Santoro, A. Hervais-Adelman, E. Formisano, and N. Golestani
- Phonological Model for Automatic Recognition of Continuous Speech [VIDEO]
Vipul Arora, Aditi Lahiri and Henning Reetz
POSTER PRESENTATIONS (12:10-13:30, 15:30-16:00)
- Shared acoustic codes underlie emotional communication in Music and Speech – Evidence from Machine Learning
Eduardo Coutinho
- Analysis of spectral correlates of violin timbre quality in relation to experts’ subjective ratings
Ewa Łukasik
- Analysis of envelope following responses to natural vowels using a Fourier analyzer
Frederique J Vanheusden, Steven L Bell and David M Simpson
- Feature Extraction Based on Auditory Image Model for Noise-Robust Automatic Speech Recognition
X. Yang, M. Karbasi, S. Bleeck, and D. Kolossa
- Adaptive Frequency Neural Networks for Dynamic Pulse and Metre Perception
Andrew Lambert, Tillman Weyde, and Newton Armstrong
- Compensation for spectral and temporal envelope distortion caused by transmission channel acoustics
Cleo Pike, Amy V Beeston, Tim Brookes, Guy J Brown, and Russell Mason
- Using auditory brainstem responses (ABRs) to measure hearing loss-induced increases in neural gain and its implications for tinnitus
A. J. Hardy, J. de Boer, and Katrin Krumbholz
- A mobile-based platform for evaluating localisation of virtual sound sources (poster+demo)
Mark Steadman and Lorenzo Picinali
- A model-based EEG approach for investigating the hierarchical nature of continuous speech processing
Giovanni M. Di Liberto, Michael J. Crosse, and Edmund C. Lalor
- Towards a Library of Musical Core-Signals
Clara Hollomey
- Functional neural modelling of just noticeable difference in interaural time detection for normal hearing and bilateral cochlear implant users
Andreas N. Prokopiou, Jan Wouters, and Tom Francart
- Sensitivity to the statistics of rapid, stochastic tone sequences
Sijia Zhao, Marcus Pearce, Fred Dick, and Maria Chait
- A meta-analysis and systematic review of perceptual studies of high resolution audio discrimination
Joshua D. Reiss
- Does adaptation sharpen frequency representation in auditory cortex?
Oscar Woolnough, Jessica de Boer, Katrin Krumbholz, Rob Mill, and Chris Sumner
- Perceiving auditory streams for instrumental and vocal music: the effects of prior knowledge and frequency separation
Sandra Quinn and Eliza Mclaughlin
- Stimulus predictability dynamically modulates neural gain in the auditory processing stream
Ryszard Auksztulewicz, Nicolas Barascud, Gerald Cooray, Maria Chait, and Karl Friston
- Automatic identification of musical schemata via symbolic fingerprinting and temporal filters
Andreas Katsiavalos, Tom Collins, and Bret Battey
- Can the non-human primate core-belt model be applied to the human auditory cortex? Evidence from functional and structural MRI at 7 Tesla
Julien Besle, Olivier Mougin, Rosa Sanchez-Panchuelo, Penny Gowland, Richard Bowtell, Sue Francis, and Katrin Krumbholz
- Validation of a new open-source platform for real-time emotional speech transformation
Laura Rachman
- Reduced GABA concentration in absolute pitch possessors revealed by magnetic resonance spectroscopy
Tomoya Nakai, Hiroaki Maeshima, Chihiro Hosoda, and Kazuo Okanoya
- Rising to the challenge: modelling transfer learning of polyphonic musical structure
Reinier de Valk and Tillman Weyde
- Comparison of reaction time measurement and Yes/no question paradigm regarding the perception of spatial coherence
Hanne Stenzel and Philip J. B. Jackson
- EEG-powered Soundtrack for Interactive Theatre
Grigore Burloiu, Alexandru Berceanu, and Cătălin Crețu