Academic staff and fellows

Dr Mathieu Barthet
Dr Emmanouil Benetos
Senior Lecturer, RAEng Research Fellow, Turing Fellow
Music signal analysis, computational sound scene analysis, machine learning for audio analysis, computational musicology
Dr Nick Bryan-Kinns
Reader in Interaction Design. Visiting Professor of Interaction Design, Hunan University, China.
Interaction Design with Audio #IDwA. Interactive Art, Interactive Music, Interactive Sonification. Design, Evaluation. Collaboration, Multi-person Interaction. Cross-Modal Interaction, Tangible Interaction.
Prof Elaine Chew
Professor of Digital Media
mathematical/computational modeling, music structure, expressivity, musical prosody, computational music cognition, computational music analysis, composition/improvisation, ensemble interaction, Internet performance, performance rendering
Prof. Simon Dixon
Professor, Deputy Director of C4DM, Director of Graduate Studies
Music informatics, music signal processing, artificial intelligence, music cognition; extraction of musical content (e.g. rhythm, harmony, intonation) from audio signals: beat tracking, audio alignment, chord and note transcription, singing intonation; using signal processing approaches, probabilistic models, and deep learning.
Liam Donovan
RAEng Enterprise Fellow
Dr George Fazekas
Senior Lecturer
Semantic Audio, Music Information Retrieval, Semantic Web for Music, Machine Learning and Data Science, Music Emotion Recognition, Interactive music systems (e.g. intelligent editing, audio production and performance systems)
Prof Pat Healey
Professor of Human Interaction
Dr Andrew McPherson
Reader in Digital Media
new interfaces for musical expression, augmented instruments, performance study, human-computer interaction, embedded hardware
Dr Marcus Pearce
Senior Lecturer in Sound & Music Processing
Music Cognition, Auditory Perception, Empirical Aesthetics, Statistical Learning, Probabilistic Modelling.
Dr Matthew Purver
Reader in Computational Linguistics
computational linguistics including models of language and music
Prof. Joshua D Reiss
Professor in Audio Engineering
sound engineering, intelligent audio production, sound synthesis, audio effects, automatic mixing
Prof Mark Sandler
Digital Signal Processing, Digital Audio, Music Informatics, Audio Features, Semantic Audio, Immersive Audio, Studio Science, Music Data Science, Music Linked Data.
Dr Rebecca Stewart
Wearable computing, auditory display, spatial audio, electronic textiles, tangible interfaces, textile sensors
Dr Tony Stockman
Senior Lecturer
Interaction Design, auditory displays, Data Sonification, Collaborative Systems, Cross-modal Interaction, Assistive Technology, Accessibility
Dr Dan Stowell
EPSRC Research Fellow
Machine listening, birdsong, bird calls, multi-source, probabilistic models, machine learning, beatboxing
Luca Turchet
Marie-Curie Postdoctoral Research Fellow
Internet of Musical Things, Smart Instruments, human-computer interaction, sonic interaction design, haptic technology, perception

Research support staff

Chris Cannam
Audio and music research software development; SoundSoftware services
Dr Panos Kudumakis
Interactive music formats; music metadata, smart contracts and blockchain; middleware architectures; multimedia standardization (e.g. ISO/IEC MPEG)

Postdoctoral research assistants

Dr Helen Bear
Integrating sound and context recognition for acoustic scene analysis
Kurijn Buys
Postdoctoral Research Assistant
musical acoustics, musical computing, augmented instruments, new interfaces for musical expression
Fiore Martin
Design Patterns for Inclusive Collaboration (DePIC)
Michael Mcloughlin
Animal behaviour, animal welfare, cetacean culture, agent-based modelling, audio signal processing
Dave Moffat
Tools for Intelligent Music Production
Fabio Morreale
Human-computer interaction, new interfaces for musical expression, interactive art, augmented instruments
Ken O'Hanlon
Audio-visual analysis
Dr Johan Pauwels
Postdoctoral Research Assistant
Audio Commons project, FAST IMPACt project, automatic music labelling, music information retrieval, music signal processing, chord/key/structure (joint) estimation, instrument identification, multi-track/channel audio, music transcription, graphical models, machine learning
Dr Florian Thalmann
Dr Thomas Wilmering

Research assistants

Inês Nolasco
Research Assistant
Audio-based identification of beehive states

Research students

Ruchit Agrawal
Adaptive Semi-Supervised Music Alignment
Jack Armitage
Supporting craft in digital musical instrument design
Gary Bromham
The role of nostalgia in music production
Bhusan Chettri
Automatic Speaker Verification Spoofing and Countermeasures
Emmanouil Theofanis Chourdakis
Automatic Storytelling with Audio
Jiajie Dai
Modelling Intonation and Interaction in Vocal Ensembles
Alejandro Delgado
Fine-grain time resolution audio features for MIR
Emir Demirel
Representation Learning in Singing Voice
Pablo Alejandro Alvarado Duran
Physically and Musically Inspired Probabilistic Models for Audio Content Analysis
Jacob Harrison
Music interfaces for stroke neuro-rehabilitation
Peter Harrison
Music-theoretic and cognitive applications of symbolic music modelling
Giacomo Lepri
Exploring the role of culture and community in the design of new musical instruments
Beici Liang
Piano playing technique detection, multimodal music information retrieval
Carlos Lordelo
Instrument modelling to aid polyphonic transcription
Marco Martínez
Machine learning techniques for the development of intelligent audio mixing tools
Liang Men
Alessia Milo
Saumitra Mishra
Analysing Deep Architectures for Audio-based Music Content Analysis
Zulfadhli Mohamad
Electric guitar synthesis
Veronica Morfi
Machine transcription of wildlife bird sound scenes
Giulio Moro
IoT (as in instruments of things), low latency audio and sensors, embedded devices, why-do-people-think-analog-is-better
Iretiolowa Olowe
Arjun Pankajakshan
Computational sound scene analysis
Vanessa Pope
Automated Analysis of Rhythm in Performed Speech
Francisco Rodríguez Algarra
Intelligent Music Machine Listening
Sebastián Ruiz
Physiological Responses to Ensemble Interaction
Rod Selfridge
Developing real-time physical sound synthesis models, currently focussed on aeroacoustic sounds
Dalia Senvaityte
Audio Source Separation for Advanced Digital Audio Effects
Di Sheng
Janis Sokolovskis
Daniel Stoller
Machine listening with limited annotations
Vinod Subramanian
Note-level audio features for understanding and visualising musical performance
Changhong Wang
Automatic Classification of Chinese Bamboo Flute Playing Techniques
James Weaver
Space and Intelligibility of Musical Performance
William J. Wilkinson
Probabilistic machine listening; generative models for natural sounds
Yongmeng Wu
Simin Yang
Analysis and Prediction of Listeners' Time-varying Emotion Responses in Live Music Performance
Adrien Ycart
Music Language Models for Audio Analysis: neural networks, automatic music transcription, symbolic music modelling
Delia Fano Yela
Signal Processing and Machine Learning Methods for Noise and Interference Reduction in Studio and Live Recordings

Visiting researchers

Simon Davidmann
Visiting Professor
Peter Langley
Visiting Professorial Fellow
Dr Matthias Mauch
music transcription (chords, beats, drums, melody, ...), interactive music annotation, singing research, research in the evolution of musical styles
Martyn Ware
Visiting Professorial Fellow
