People

Academic staff and fellows

Name | Project/interests/keywords
Dr Mathieu Barthet
Senior Lecturer in Digital Media
Music information research, Internet of musical things, Extended reality, New interfaces for musical expression, Semantic audio, Music perception (timbre, emotions), Audience-Performer interaction, Participatory art
Dr Emmanouil Benetos
Senior Lecturer, Turing Fellow
Machine listening, music information retrieval, computational sound scene analysis, machine learning for audio analysis, language models for music and audio, computational musicology
Dr Nick Bryan-Kinns
Reader in Interaction Design. Visiting Professor of Interaction Design, Hunan University, China.
Interaction Design with Audio #IDwA. Interactive Art, Interactive Music, Interactive Sonification. Design, Evaluation. Collaboration, Multi-person Interaction. Cross-Modal Interaction, Tangible Interaction.
Dr Bhusan Chettri
Lecturer in Data Analytics
Machine listening, automatic speaker recognition, language recognition, fake speech detection for robust speaker verification, machine learning, generative models, and interpretability in machine learning for speech technology.
Prof. Simon Dixon
Professor, Deputy Director of C4DM, Director of the AIM CDT
Music informatics, music signal processing, artificial intelligence, music cognition; extraction of musical content (e.g. rhythm, harmony, intonation) from audio signals: beat tracking, audio alignment, chord and note transcription, singing intonation; using signal processing approaches, probabilistic models, and deep learning.
Dr George Fazekas
Senior Lecturer
Semantic Audio, Music Information Retrieval, Semantic Web for Music, Machine Learning and Data Science, Music Emotion Recognition, Interactive music systems (e.g. intelligent editing, audio production and performance systems)
Prof Pat Healey
Professor of Human Interaction
Dr Andrew McPherson
Reader in Digital Media
new interfaces for musical expression, augmented instruments, performance study, human-computer interaction, embedded hardware
Dr Marcus Pearce
Senior Lecturer in Sound & Music Processing
Music Cognition, Auditory Perception, Empirical Aesthetics, Statistical Learning, Probabilistic Modelling.
Dr Huy Phan
Lecturer in Artificial Intelligence
Machine listening, computational auditory scene analysis, machine learning for speech processing, machine learning for biosignal analysis, longitudinal sleep monitoring, healthcare applications
Prof Matthew Purver
Professor in Computational Linguistics
computational linguistics including models of language and music
Prof. Joshua D Reiss
Professor in Audio Engineering
sound engineering, intelligent audio production, sound synthesis, audio effects, automatic mixing
Dr Charalampos Saitis
Lecturer in Digital Music Processing
Auditory perception and cognition, crossmodal correspondences, musical acoustics, musical haptics, musician-instrument interaction, affective computing
Prof Mark Sandler
C4DM Director, Turing Fellow, Royal Society Wolfson Research Merit award holder
Digital Signal Processing, Digital Audio, Music Informatics, Audio Features, Semantic Audio, Immersive Audio, Studio Science, Music Data Science, Music Linked Data.
Dr Tony Stockman
Senior Lecturer
Interaction Design, auditory displays, Data Sonification, Collaborative Systems, Cross-modal Interaction, Assistive Technology, Accessibility
Dr Dan Stowell
Lecturer
Machine listening, birdsong, bird calls, multi-source, probabilistic models, machine learning, beatboxing

Research support staff

Name | Project/interests/keywords
Dr Jasmina Bolfek-Radovani
Research Programme Manager
Project: UKRI Centre for Doctoral Training in Artificial Intelligence and Music
Alvaro Bort
Research Programme Manager
Projects: UKRI Centre for Doctoral Training in Artificial Intelligence and Music, New Frontiers in Music Information Processing (MIP-Frontiers)
Jonathan Winfield
Research Programme Manager
Project: Centre for Doctoral Training in Media and Arts Technology

Postdoctoral research assistants

Name | Project/interests/keywords
Dr Yuanyuan Liu
Project: Digital Platforms for Craft in the UK and China
Veronica Morfi
Postdoctoral Research Assistant
Machine transcription of wildlife bird sound scenes
Ken O'Hanlon
Development of next-generation music recognition algorithms for content monitoring
Dr Thomas Wilmering

Research assistants

Name | Project/interests/keywords

Research students

Name | Project/interests/keywords
Ruchit Agrawal
Adaptive Semi-Supervised Music Alignment
Jack Armitage
Supporting craft in digital musical instrument design
Berker Banar
Generating emotional music using AI
Adán Benito
Beyond the fret: gesture analysis on fretted instruments and its applications to instrument augmentation
Gary Bromham
The role of nostalgia in music production
Fred Bruford
Emmanouil Theofanis Chourdakis
Automatic Storytelling with Audio
Marco Comunità
Machine learning applied to sound synthesis models
Alejandro Delgado
Fine-grain time resolution audio features for MIR
Emir Demirel
Representation Learning in Singing Voice
David Foster
Modelling the Creative Process of Jazz Improvisation
Jacob Harrison
Music interfaces for stroke neuro-rehabilitation
Giacomo Lepri
Exploring the role of culture and community in the design of new musical instruments
Yukun Li
Computational Comparison Between Different Genres of Music in Terms of the Singing Voice
Beici Liang
Piano playing technique detection, multimodal music information retrieval
Lele Liu
Automatic music transcription with end-to-end deep neural networks
Carlos Lordelo
Instrument modelling to aid polyphonic transcription
Ilaria Manco
Deep learning and multi-modal models for the music industry
Marco Martínez
Machine learning techniques for the development of intelligent audio mixing tools.
Liang Men
Alessia Milo
Saumitra Mishra
Analysing Deep Architectures for Audio-based Music Content Analysis
Giulio Moro
IoT (as in instruments of things), low-latency audio and sensors, embedded devices, why-do-people-think-analog-is-better
Inês Nolasco
Towards an automatic acoustic identification of individuals in the wild
Brendan O'Connor
Voice Transformation
Iretiolowa Olowe
Arjun Pankajakshan
Computational sound scene analysis
Mary Pilataki-Manika
Polyphonic Music Transcription using Deep Learning
Vanessa Pope
Automated Analysis of Rhythm in Performed Speech
Sebastián Ruiz
Physiological Responses to Ensemble Interaction
Saurjya Sarkar
New perspectives in instrument-based audio source separation
Dalia Senvaityte
Audio Source Separation for Advanced Digital Audio Effects
Elona Shatri
Optical music recognition using deep learning
Di Sheng
Rishi Shukla
Binaural virtual auditory display for music content recommendation and navigation
Janis Sokolovskis
New Technologies for Music Learning: Computer-assisted approaches to analysis and shaping of music instrument practice
Vinod Subramanian
Note-level audio features for understanding and visualising musical performance
Cyrus Vahidi
Perceptual end-to-end learning for music understanding
Changhong Wang
Automatic Classification of Chinese Bamboo Flute Playing Techniques
James Weaver
Space and Intelligibility of Musical Performance
William J. Wilkinson
Probabilistic machine listening. Generative models for natural sounds.
Yongmeng Wu
Simin Yang
Analysis and Prediction of Listeners' Time-varying Emotion Responses in Live Music Performance
Adrien Ycart
Music Language Models for Audio Analysis: neural networks, automatic music transcription, symbolic music modelling

Visiting academics

Name | Project/interests/keywords
Dr Helen Bear
Honorary Lecturer
Integrating sound and context recognition for acoustic scene analysis
Roger Dean
Visiting Professor
Dr Matthias Mauch
Visiting Academic
music transcription (chords, beats, drums, melody, ...), interactive music annotation, singing research, research in the evolution of musical styles
Martyn Ware
Visiting Professorial Fellow

Visitors

Name | Project/interests/keywords
