
Academic staff and fellows

Dr Mathieu Barthet
Lecturer in Digital Media
Music information research, Internet of musical things, Extended reality, New interfaces for musical expression, Semantic audio, Music perception (timbre, emotions), Audience-Performer interaction, Participatory art
Dr Emmanouil Benetos
Senior Lecturer, Turing Fellow
Machine listening, music information retrieval, computational sound scene analysis, machine learning for audio analysis, language models for music and audio, computational musicology
Dr Nick Bryan-Kinns
Reader in Interaction Design; Visiting Professor of Interaction Design, Hunan University, China
Interaction Design with Audio #IDwA; interactive art, interactive music, interactive sonification; design, evaluation; collaboration, multi-person interaction; cross-modal interaction, tangible interaction
Prof. Simon Dixon
Professor, Deputy Director of C4DM, Director of the AIM CDT
Music informatics, music signal processing, artificial intelligence, music cognition; extraction of musical content (e.g. rhythm, harmony, intonation) from audio signals: beat tracking, audio alignment, chord and note transcription, singing intonation; using signal processing approaches, probabilistic models, and deep learning.
Dr George Fazekas
Senior Lecturer
Semantic Audio, Music Information Retrieval, Semantic Web for Music, Machine Learning and Data Science, Music Emotion Recognition, Interactive music systems (e.g. intelligent editing, audio production and performance systems)
Prof Pat Healey
Professor of Human Interaction
Dr Andrew McPherson
Reader in Digital Media
new interfaces for musical expression, augmented instruments, performance study, human-computer interaction, embedded hardware
Dr Marcus Pearce
Senior Lecturer in Sound & Music Processing
Music Cognition, Auditory Perception, Empirical Aesthetics, Statistical Learning, Probabilistic Modelling.
Dr Huy Phan
Lecturer in Artificial Intelligence
Machine listening, computational auditory scene analysis, machine learning for speech processing, machine learning for biosignal analysis, longitudinal sleep monitoring, healthcare applications
Dr Matthew Purver
Reader in Computational Linguistics
computational linguistics including models of language and music
Prof. Joshua D Reiss
Professor in Audio Engineering
sound engineering, intelligent audio production, sound synthesis, audio effects, automatic mixing
Dr Charalampos Saitis
Lecturer in Digital Music Processing
Auditory perception and cognition, crossmodal correspondences, musical acoustics, musical haptics, musician-instrument interaction, affective computing
Prof Mark Sandler
C4DM Director, Turing Fellow, Royal Society Wolfson Research Merit award holder
Digital Signal Processing, Digital Audio, Music Informatics, Audio Features, Semantic Audio, Immersive Audio, Studio Science, Music Data Science, Music Linked Data.
Dr Tony Stockman
Senior Lecturer
Interaction Design, auditory displays, Data Sonification, Collaborative Systems, Cross-modal Interaction, Assistive Technology, Accessibility
Dr Dan Stowell
Machine listening, birdsong, bird calls, multi-source, probabilistic models, machine learning, beatboxing

Research support staff

Chris Cannam
Audio and music research software development; SoundSoftware services
Dr Panos Kudumakis
Interactive music formats; music metadata, smart contracts and blockchain; middleware architectures; and multimedia standardization (e.g., ISO/IEC MPEG)

Postdoctoral research assistants

Kurijn Buys
Postdoctoral Research Assistant
musical acoustics, musical computing, augmented instruments, new interfaces for musical expression
Dr Yuanyuan Liu
Project: Digital Platforms for Craft in the UK and China
Michael Mcloughlin
Animal behaviour, animal welfare, cetacean culture, agent-based modelling, audio signal processing
Cornelia Metzig
Dave Moffat
Tools for Intelligent Music Production
Veronica Morfi
Postdoctoral Research Assistant
Machine transcription of wildlife bird sound scenes
Fabio Morreale
Human-computer interaction, new interfaces for musical expression, interactive art, augmented instruments
Ken O'Hanlon
Audio-visual analysis
Dr Johan Pauwels
Postdoctoral Research Assistant
Audio Commons project, FAST IMPACt project, automatic music labelling, music information retrieval, music signal processing, chord/key/structure (joint) estimation, instrument identification, multi-track/channel audio, music transcription, graphical models, machine learning
Rod Selfridge
Virtual Reality and Music
Dr Florian Thalmann
Dr Thomas Wilmering

Research assistants


Research students

Ruchit Agrawal
Adaptive Semi-Supervised Music Alignment
Jack Armitage
Supporting craft in digital musical instrument design
Berker Banar
Generating emotional music using AI
Adán Benito
Beyond the fret: gesture analysis on fretted instruments and its applications to instrument augmentation
Gary Bromham
The role of nostalgia in music production
Fred Bruford
Bhusan Chettri
Automatic Speaker Verification Spoofing and Countermeasures
Emmanouil Theofanis Chourdakis
Automatic Storytelling with Audio
Marco Comunità
Machine learning applied to sound synthesis models
Alejandro Delgado
Fine grain time resolution audio features for MIR
Emir Demirel
Representation Learning in Singing Voice
Pablo Alejandro Alvarado Duran
Physically and Musically Inspired Probabilistic Models for Audio Content Analysis
David Foster
Modelling the Creative Process of Jazz Improvisation
Jacob Harrison
Music interfaces for stroke neuro-rehabilitation
Peter Harrison
Music-theoretic and cognitive applications of symbolic music modelling
Giacomo Lepri
Exploring the role of culture and community in the design of new musical instruments
Yukun Li
Computational Comparison Between Different Genres of Music in Terms of the Singing Voice
Beici Liang
Piano playing technique detection, multimodal music information retrieval
Lele Liu
Automatic music transcription with end-to-end deep neural networks
Carlos Lordelo
Instrument modelling to aid polyphonic transcription
Ilaria Manco
Deep learning and multi-modal models for the music industry
Marco Martínez
Machine learning techniques for the development of intelligent audio mixing tools
Liang Men
Alessia Milo
Saumitra Mishra
Analysing Deep Architectures for Audio-based Music Content Analysis
Zulfadhli Mohamad
Electric guitar synthesis
Giulio Moro
IoT (as in instruments of things), low latency audio and sensors, embedded devices, why-do-people-think-analog-is-better
Brendan O'Connor
Voice Transformation
Iretiolowa Olowe
Arjun Pankajakshan
Computational sound scene analysis
Mary Pilataki-Manika
Polyphonic Music Transcription using Deep Learning
Vanessa Pope
Automated Analysis of Rhythm in Performed Speech
Francisco Rodríguez Algarra
Intelligent Music Machine Listening
Sebastián Ruiz
Physiological Responses to Ensemble Interaction
Saurjya Sarkar
New perspectives in instrument-based audio source separation
Dalia Senvaityte
Audio Source Separation for Advanced Digital Audio Effects
Elona Shatri
Optical music recognition using deep learning
Di Sheng
Rishi Shukla
Binaural virtual auditory display for music content recommendation and navigation
Janis Sokolovskis
New Technologies for Music Learning: Computer-assisted approaches to analysis and shaping of music instrument practice
Daniel Stoller
Machine listening with limited annotations
Vinod Subramanian
Note level audio features for understanding and visualising musical performance
Cyrus Vahidi
Perceptual end-to-end learning for music understanding
Changhong Wang
Automatic Classification of Chinese Bamboo Flute Playing Techniques
James Weaver
Space and Intelligibility of Musical Performance
William J. Wilkinson
Probabilistic machine listening; generative models for natural sounds
Simin Yang
Analysis and Prediction of Listeners' Time-varying Emotion Responses in Live Music Performance
Adrien Ycart
Music Language Models for Audio Analysis: neural networks, automatic music transcription, symbolic music modelling
Delia Fano Yela
Signal Processing and Machine Learning Methods for Noise and Interference Reduction in Studio and Live Recordings

Visiting academics

Roger Dean
Visiting Professor
Martyn Ware
Visiting Professorial Fellow
Dr Helen Bear
Visiting Researcher
Integrating sound and context recognition for acoustic scene analysis
Inês Nolasco
Visiting Researcher
Audio-based identification of beehive states