People
Academic staff and fellows
Name | Project/interests/keywords |
---|---|
Dr Mathieu Barthet Lecturer in Digital Media | Music information research, Internet of musical things, Extended reality, New interfaces for musical expression, Semantic audio, Music perception (timbre, emotions), Audience-Performer interaction, Participatory art |
Dr Emmanouil Benetos Senior Lecturer, RAEng Research Fellow, Turing Fellow | Machine listening, music information retrieval, computational sound scene analysis, machine learning for audio analysis, language models for music and audio, computational musicology |
Dr Nick Bryan-Kinns Reader in Interaction Design. Visiting Professor of Interaction Design, Hunan University, China. | Interaction Design with Audio #IDwA. Interactive Art, Interactive Music, Interactive Sonification. Design, Evaluation. Collaboration, Multi-person Interaction. Cross-Modal Interaction, Tangible Interaction. |
Prof. Simon Dixon Professor, Deputy Director of C4DM, Director of the AIM CDT | Music informatics, music signal processing, artificial intelligence, music cognition; extraction of musical content (e.g. rhythm, harmony, intonation) from audio signals: beat tracking, audio alignment, chord and note transcription, singing intonation; using signal processing approaches, probabilistic models, and deep learning. |
Dr George Fazekas Senior Lecturer | Semantic Audio, Music Information Retrieval, Semantic Web for Music, Machine Learning and Data Science, Music Emotion Recognition, Interactive music systems (e.g. intelligent editing, audio production and performance systems) |
Prof Pat Healey Professor of Human Interaction | |
Dr Andrew McPherson Reader in Digital Media | new interfaces for musical expression, augmented instruments, performance study, human-computer interaction, embedded hardware |
Dr Marcus Pearce Senior Lecturer in Sound & Music Processing | Music Cognition, Auditory Perception, Empirical Aesthetics, Statistical Learning, Probabilistic Modelling. |
Dr Matthew Purver Reader in Computational Linguistics | computational linguistics including models of language and music |
Prof. Joshua D Reiss Professor in Audio Engineering | sound engineering, intelligent audio production, sound synthesis, audio effects, automatic mixing |
Dr Charalampos Saitis Lecturer in Digital Music Processing | Auditory perception and cognition, crossmodal correspondences, musical acoustics, musical haptics, musician-instrument interaction, affective computing |
Prof Mark Sandler C4DM Director, Turing Fellow, Royal Society Wolfson Research Merit award holder | Digital Signal Processing, Digital Audio, Music Informatics, Audio Features, Semantic Audio, Immersive Audio, Studio Science, Music Data Science, Music Linked Data. |
Dr Tony Stockman Senior Lecturer | Interaction Design, auditory displays, Data Sonification, Collaborative Systems, Cross-modal Interaction, Assistive Technology, Accessibility |
Dr Dan Stowell Lecturer | Machine listening, birdsong, bird calls, multi-source, probabilistic models, machine learning, beatboxing |
Research support staff
Name | Project/interests/keywords |
---|---|
Chris Cannam | Audio and music research software development; SoundSoftware services |
Dr Panos Kudumakis | interactive music formats; music metadata, smart contracts and blockchain; middleware architectures; and multimedia standardization (e.g., ISO/IEC MPEG) |
Postdoctoral research assistants
Name | Project/interests/keywords |
---|---|
Kurijn Buys Postdoctoral Research Assistant | musical acoustics, musical computing, augmented instruments, new interfaces for musical expression |
Dr Yuanyuan Liu | Project: Digital Platforms for Craft in the UK and China |
Michael Mcloughlin | Animal behaviour, animal welfare, cetacean culture, agent-based modelling, audio signal processing |
Cornelia Metzig | |
Dave Moffat | Tools for Intelligent Music Production |
Veronica Morfi Postdoctoral Research Assistant | Machine transcription of wildlife bird sound scenes |
Fabio Morreale | human-computer interaction, new interfaces for musical expression, interactive art, augmented instruments |
Ken O'Hanlon | Audio-visual analysis |
Dr Johan Pauwels Postdoctoral Research Assistant | Audio Commons project, FAST IMPACt project, automatic music labelling, music information retrieval, music signal processing, chord/key/structure (joint) estimation, instrument identification, multi-track/channel audio, music transcription, graphical models, machine learning |
Rod Selfridge | Virtual Reality and Music |
Dr Florian Thalmann | |
Dr Thomas Wilmering | |
Research assistants
Name | Project/interests/keywords |
---|---|
Research students
Name | Project/interests/keywords |
---|---|
Ruchit Agrawal | Adaptive Semi-Supervised Music Alignment |
Jack Armitage | Supporting craft in digital musical instrument design |
Berker Banar | Generating emotional music using AI |
Adán Benito | Beyond the fret: gesture analysis on fretted instruments and its applications to instrument augmentation |
Gary Bromham | The role of nostalgia in music production |
Fred Bruford | |
Bhusan Chettri | Automatic Speaker Verification Spoofing and Countermeasures |
Emmanouil Theofanis Chourdakis | Automatic Storytelling with Audio |
Marco Comunità | Machine learning applied to sound synthesis models |
Alejandro Delgado | Fine grain time resolution audio features for MIR |
Emir Demirel | Representation Learning in Singing Voice |
Pablo Alejandro Alvarado Duran | Physically and Musically Inspired Probabilistic Models for Audio Content Analysis |
David Foster | Modelling the Creative Process of Jazz Improvisation |
Jacob Harrison | Music interfaces for stroke neuro-rehabilitation |
Peter Harrison | Music-theoretic and cognitive applications of symbolic music modelling |
Giacomo Lepri | Exploring the role of culture and community in the design of new musical instruments |
Yukun Li | Computational Comparison Between Different Genres of Music in Terms of the Singing Voice |
Beici Liang | Piano playing technique detection, multimodal music information retrieval |
Lele Liu | Automatic music transcription with end-to-end deep neural networks |
Carlos Lordelo | Instrument modelling to aid polyphonic transcription |
Ilaria Manco | Deep learning and multi-modal models for the music industry |
Marco Martínez | Machine learning techniques for the development of intelligent audio mixing tools. |
Liang Men | |
Alessia Milo | |
Saumitra Mishra | Analysing Deep Architectures for Audio-based Music Content Analysis |
Zulfadhli Mohamad | Electric guitar synthesis |
Giulio Moro | IoT (as in instruments of things), low latency audio and sensors, embedded devices, why-do-people-think-analog-is-better |
Brendan O'Connor | Voice Transformation |
Iretiolowa Olowe | |
Arjun Pankajakshan | Computational sound scene analysis |
Mary Pilataki-Manika | Polyphonic Music Transcription using Deep Learning |
Vanessa Pope | Automated Analysis of Rhythm in Performed Speech |
Francisco Rodríguez Algarra | Intelligent Music Machine Listening |
Sebastián Ruiz | Physiological Responses to Ensemble Interaction |
Saurjya Sarkar | New perspectives in instrument-based audio source separation |
Dalia Senvaityte | Audio Source Separation for Advanced Digital Audio Effects |
Elona Shatri | Optical music recognition using deep learning |
Di Sheng | |
Rishi Shukla | Binaural virtual auditory display for music content recommendation and navigation |
Janis Sokolovskis | New Technologies for Music Learning: Computer-assisted approaches to analysis and shaping of music instrument practice |
Daniel Stoller | Machine listening with limited annotations |
Vinod Subramanian | Note level audio features for understanding and visualising musical performance |
Cyrus Vahidi | Perceptual end to end learning for music understanding |
Changhong Wang | Automatic Classification of Chinese Bamboo Flute Playing Techniques |
James Weaver | Space and Intelligibility of Musical Performance |
William J. Wilkinson | Probabilistic machine listening. Generative models for natural sounds. |
Yongmeng Wu | |
Simin Yang | Analysis and Prediction of Listeners' Time-varying Emotion Responses in Live Music Performance |
Adrien Ycart | Music Language Models for Audio Analysis: neural networks, automatic music transcription, symbolic music modelling |
Delia Fano Yela | Signal Processing and Machine Learning Methods for Noise and Interference Reduction in Studio and Live Recordings |
Visiting academics
Name | Project/interests/keywords |
---|---|
Roger Dean Visiting Professor | |
Martyn Ware Visiting Professorial Fellow | |
Visitors
Name | Project/interests/keywords |
---|---|
Dr Helen Bear Visiting Researcher | Integrating sound and context recognition for acoustic scene analysis |
Inês Nolasco Visiting Researcher | Audio-based identification of beehive states |