

Primary academic staff and fellows

Dr Mathieu Barthet
Senior Lecturer in Digital Media
Music information research, Internet of musical things, Extended reality, New interfaces for musical expression, Semantic audio, Music perception (timbre, emotions), Audience-Performer interaction, Participatory art
Dr Emmanouil Benetos
Reader in Machine Listening, Turing Fellow
Machine listening, music information retrieval, computational sound scene analysis, machine learning for audio analysis, language models for music and audio, computational musicology
Dr Nick Bryan-Kinns
Professor of Interaction Design. Visiting Professor of Interaction Design, Hunan University, China. Turing Fellow.
Interaction Design with Audio #IDwA. Interactive Art, Interactive Music, Interactive Sonification. Design, Evaluation. Collaboration, Multi-person Interaction. Cross-Modal Interaction, Tangible Interaction.
Prof. Simon Dixon
Professor of Computer Science, Deputy Director of C4DM, Director of the AIM CDT, Turing Fellow
Music informatics, music signal processing, artificial intelligence, music cognition; extraction of musical content (e.g. rhythm, harmony, intonation) from audio signals: beat tracking, audio alignment, chord and note transcription, singing intonation; using signal processing approaches, probabilistic models, and deep learning.
Dr George Fazekas
Senior Lecturer
Semantic Audio, Music Information Retrieval, Semantic Web for Music, Machine Learning and Data Science, Music Emotion Recognition, Interactive music systems (e.g. intelligent editing, audio production and performance systems)
Prof Andrew McPherson
Professor of Musical Interaction
new interfaces for musical expression, augmented instruments, performance study, human-computer interaction, embedded hardware
Dr Johan Pauwels
Lecturer in Audio Signal Processing
automatic music labelling, music information retrieval, music signal processing, machine learning for audio, chord/key/structure (joint) estimation, instrument identification, multi-track/channel audio, music transcription, graphical models, big data science
Prof. Joshua D Reiss
Professor of Audio Engineering
sound engineering, intelligent audio production, sound synthesis, audio effects, automatic mixing
Dr Charalampos Saitis
Lecturer in Digital Music Processing, Turing Fellow
Communication acoustics, crossmodal correspondences, sound synthesis, cognitive audio, musical haptics
Prof Mark Sandler
C4DM Director, Turing Fellow, Royal Society Wolfson Research Merit award holder
Digital Signal Processing, Digital Audio, Music Informatics, Audio Features, Semantic Audio, Immersive Audio, Studio Science, Music Data Science, Music Linked Data.
Dr Tony Stockman
Senior Lecturer
Interaction Design, auditory displays, Data Sonification, Collaborative Systems, Cross-modal Interaction, Assistive Technology, Accessibility
Dr Lin Wang
Lecturer in Applied Data Science and Signal Processing
signal processing; machine learning; robot perception

Associate academic staff and fellows

Prof Pat Healey
Professor of Human Interaction, Turing Fellow
Dr Marcus Pearce
Senior Lecturer in Sound & Music Processing
Music Cognition, Auditory Perception, Empirical Aesthetics, Statistical Learning, Probabilistic Modelling.
Prof Matthew Purver
Professor of Computational Linguistics, Turing Fellow
computational linguistics, including models of language and music
Prof Geraint Wiggins
Professor of Computational Creativity
Computational Creativity, Artificial Intelligence, Music Cognition

Research support staff

Alvaro Bort
Research Programme Manager
Projects: UKRI Centre for Doctoral Training in Artificial Intelligence and Music, New Frontiers in Music Information Processing (MIP-Frontiers)
Jonathan Winfield
Research Programme Manager
Project: Centre for Doctoral Training in Media and Arts Technology

Postdoctoral research assistants

Dr Jacob Harrison
Project: Bridging the gap: visually impaired and sighted music industry professionals working side by side
Dr Yuanyuan Liu
Project: Digital Platforms for Craft in the UK and China

Research assistants

Sungkyun Chang
Deep learning technologies for multi-instrument automatic music transcription

Research students

Berker Banar
Towards Composing Contemporary Classical Music using Generative Deep Learning
Adán Benito
Beyond the fret: gesture analysis on fretted instruments and its applications to instrument augmentation
Aditya Bhattacharjee
Self-supervision in Audio Fingerprinting
James Bolt
Intelligent audio and music editing with deep learning
Gary Bromham
The role of nostalgia in music production
Carey Bunks
Cover Song Identification
Marco Comunità
Machine learning applied to sound synthesis models
Ruby Crocker
Continuous mood recognition in film music
Andrew (Drew) Edwards
Deep Learning for Jazz Piano: Transcription + Generative Modeling
Oluremi Falowo
E-AIM - Embodied Cognition in Intelligent Musical Systems
Corey Ford
Artificial Intelligence for Supporting Musical Creativity and Engagement in Child-Computer Interaction
David Foster
Modelling the Creative Process of Jazz Improvisation
Nelly Garcia
An investigation evaluating realism in sound design
Adam Andrew Garrow
Probabilistic learning of sequential structures in music cognition
Iacopo Ghinassi
Semantic understanding of TV programme content and structure to enable automatic enhancement and adjustment
Max Graf
PERFORM-AI (Provide Extended Realities for Musical Performance using AI)
Andrea Guidi
Design for auditory imagery
Edward Hall
Probabilistic modelling of thematic development and structural coherence in music
Madeline Hamilton
Improving AI-generated Music with Pleasure Models
Benjamin Hayes
Perceptually motivated deep learning approaches to creative sound synthesis
Jiawen Huang
Lyrics Alignment For Polyphonic Music
Ilias Ibnyahya
Audio Effects design optimization
Thomas Kaplan
Probabilistic modelling of rhythm perception and production
Harnick Khera
Informed source separation for multi-mic production
Giacomo Lepri
Exploring the role of culture and community in the design of new musical instruments
Yukun Li
Computational Comparison Between Different Genres of Music in Terms of the Singing Voice
Jinhua Liang
AI for everyday sounds
Lele Liu
Automatic music transcription with end-to-end deep neural networks
Carlos Lordelo
Instrument modelling to aid polyphonic transcription
Yin-Jyun Luo
Industry-scale Machine Listening for Music and Audio Data
Yinghao Ma
Self-supervision in machine listening
Ilaria Manco
Multimodal Deep Learning for Music Information Retrieval
Luca Marinelli
Gender-coded sound: A multimodal data-driven analysis of gender encoding strategies in sound and music for advertising
Andrea Martelloni
Real-Time Gesture Classification on an Augmented Acoustic Guitar using Deep Learning to Improve Extended-Range and Percussive Solo Playing
Tyler Howard McIntosh
Expressive Performance Rendering for Music Generation Systems
Christopher Mitcheltree
Representation Learning for Audio Production Style and Modulations
Ashley Noel-Hirst
Latent Spaces for Human-AI music generation
Inês Nolasco
Towards an automatic acoustic identification of individuals in the wild
Brendan O'Connor
Singing Voice Attribute Transformation
Arjun Pankajakshan
Computational sound scene analysis
Teresa Pelinski
Sensor mesh as performance interface
Mary Pilataki
Deep Learning methods for Multi-Instrument Music Transcription
Vjosa Preniqi
Predicting demographics, personalities, and global values from digital media behaviours
Xavier Riley
Pitch tracking for music applications - beyond 99% accuracy
Eleanor Row
Automatic micro-composition for professional/novice composers using generative models as creativity support tools
Sebastián Ruiz
Physiological Responses to Ensemble Interaction
Saurjya Sarkar
New perspectives in instrument-based audio source separation
Pedro Sarmento
Guitar-Oriented Neural Music Generation in Symbolic Format
Dalia Senvaityte
Audio Source Separation for Advanced Digital Audio Effects
Bleiz Del Sette
The Sound of Care: researching the use of Deep Learning and Sonification for the daily support of people with Chronic Primary Pain
Elona Shatri
Optical music recognition using deep learning
Jordie Shier
Real-time timbral mapping for synthesized percussive performance
Shubhr Singh
Audio Applications of Novel Mathematical Methods in Deep Learning
Christian Steinmetz
End-to-end generative modeling of multitrack mixing with non-parallel data and adversarial networks
David Südholt
Machine Learning of Physical Models for Voice Synthesis
Jingjing Tang
End-to-End System Design for Music Style Transfer with Neural Networks
Louise Thorpe
Using Signal-informed Source Separation (SISS) principles to improve instrument separation from legacy recordings
Antonella Torrisi
Computational analysis of chick vocalisations: from categorisation to live feedback
Maryam Torshizi
Music emotion modelling using graph analysis
Cyrus Vahidi
Perceptual end-to-end learning for music understanding
Soumya Sai Vanka
Music Production Style Transfer and Mix Similarity
Yannis (John) Vasilakis
Active Learning for Interactive Music Transcription
Ningzhi Wang
Generative Models For Music Audio Representation And Understanding
James Weaver
Space and Intelligibility of Musical Performance
Alexander Williams
User-driven deep music generation in digital audio workstations
Elizabeth Wilson
Co-creative Algorithmic Composition Based on Models of Affective Response
Chris Winnard
Music Interestingness in the Brain
Lewis Wolstanholme
Meta-Physical Modelling
Chengye Wu
Leveraging cross-sensory associations in communication
Chin-Yun Yu
Neural Audio Synthesis with Expressiveness Control
Huan Zhang
Computational Modelling of Expressive Piano Performance
Jincheng Zhang
Emotion-specific Music Generation Using Deep Learning
Yixiao Zhang
Machine Learning Methods for Artificial Musicality

Visiting academics

Dr Helen Bear
Honorary Lecturer
Integrating sound and context recognition for acoustic scene analysis
Dr Matthias Mauch
Visiting Academic
music transcription (chords, beats, drums, melody, ...), interactive music annotation, singing research, research in the evolution of musical styles
Dr Veronica Morfi
Machine transcription of wildlife bird sound scenes
Dr Montserrat Pàmies-Vilà
University of Music and Performing Arts Vienna
Timbre modelling for non-conventional cello techniques
Domenico Stefani
University of Trento, Italy
Embedded machine learning for smart musical instruments
