People

Primary academic staff and fellows

Name | Project/interests/keywords
Dr Mathieu Barthet
Senior Lecturer in Digital Media
Music information research, Internet of musical things, Extended reality, New interfaces for musical expression, Semantic audio, Music perception (timbre, emotions), Audience-Performer interaction, Participatory art
Dr Emmanouil Benetos
Reader in Machine Listening, Turing Fellow
Machine listening, music information retrieval, computational sound scene analysis, machine learning for audio analysis, language models for music and audio, computational musicology
Dr Nick Bryan-Kinns
Professor of Interaction Design. Visiting Professor of Interaction Design, Hunan University, China. Turing Fellow.
Interaction Design with Audio #IDwA. Interactive Art, Interactive Music, Interactive Sonification. Design, Evaluation. Collaboration, Multi-person Interaction. Cross-Modal Interaction, Tangible Interaction.
Prof. Simon Dixon
Professor of Computer Science, Deputy Director of C4DM, Director of the AIM CDT, Turing Fellow
Music informatics, music signal processing, artificial intelligence, music cognition; extraction of musical content (e.g. rhythm, harmony, intonation) from audio signals: beat tracking, audio alignment, chord and note transcription, singing intonation; using signal processing approaches, probabilistic models, and deep learning.
Dr George Fazekas
Senior Lecturer
Semantic Audio, Music Information Retrieval, Semantic Web for Music, Machine Learning and Data Science, Music Emotion Recognition, Interactive music systems (e.g. intelligent editing, audio production and performance systems)
Prof Andrew McPherson
Professor of Musical Interaction
new interfaces for musical expression, augmented instruments, performance study, human-computer interaction, embedded hardware
Dr Johan Pauwels
Lecturer in Audio Signal Processing
automatic music labelling, music information retrieval, music signal processing, machine learning for audio, chord/key/structure (joint) estimation, instrument identification, multi-track/channel audio, music transcription, graphical models, big data science
Dr Huy Phan
Lecturer in Artificial Intelligence, Turing Fellow
Machine listening, computational auditory scene analysis, machine learning for speech processing, machine learning for biosignal analysis, longitudinal sleep monitoring, healthcare applications
Prof. Joshua D Reiss
Professor of Audio Engineering
sound engineering, intelligent audio production, sound synthesis, audio effects, automatic mixing
Dr Charalampos Saitis
Lecturer in Digital Music Processing, Turing Fellow
Communication acoustics, crossmodal correspondences, sound synthesis, cognitive audio, musical haptics
Prof Mark Sandler
C4DM Director, Turing Fellow, Royal Society Wolfson Research Merit award holder
Digital Signal Processing, Digital Audio, Music Informatics, Audio Features, Semantic Audio, Immersive Audio, Studio Science, Music Data Science, Music Linked Data.
Dr Tony Stockman
Senior Lecturer
Interaction Design, auditory displays, Data Sonification, Collaborative Systems, Cross-modal Interaction, Assistive Technology, Accessibility

Associate academic staff and fellows

Name | Project/interests/keywords
Prof Pat Healey
Professor of Human Interaction, Turing Fellow
Dr Marcus Pearce
Senior Lecturer in Sound & Music Processing
Music Cognition, Auditory Perception, Empirical Aesthetics, Statistical Learning, Probabilistic Modelling.
Prof Matthew Purver
Professor of Computational Linguistics, Turing Fellow
computational linguistics, including models of language and music

Research support staff

Name | Project/interests/keywords
Dr Jasmina Bolfek-Radovani
Research Programme Manager
Project: UKRI Centre for Doctoral Training in Artificial Intelligence and Music
Alvaro Bort
Research Programme Manager
Projects: UKRI Centre for Doctoral Training in Artificial Intelligence and Music, New Frontiers in Music Information Processing (MIP-Frontiers)
Jonathan Winfield
Research Programme Manager
Project: Centre for Doctoral Training in Media and Arts Technology

Postdoctoral research assistants

Name | Project/interests/keywords
Dr Jacob Harrison
Bridging the gap: visually impaired and sighted music industry professionals working side by side
Dr Yuanyuan Liu
Project: Digital Platforms for Craft in the UK and China

Research assistants

Name | Project/interests/keywords
Sungkyun Chang
Deep learning technologies for multi-instrument automatic music transcription
Callum Goddard
Deep learning technologies for multi-instrument automatic music transcription

Research students

Name | Project/interests/keywords
Berker Banar
Towards Composing Contemporary Classical Music using Generative Deep Learning
Adán Benito
Beyond the fret: gesture analysis on fretted instruments and its applications to instrument augmentation
Gary Bromham
The role of nostalgia in music production
Fred Bruford
Marco Comunità
Machine learning applied to sound synthesis models
Ruby Crocker
Continuous mood recognition in film music
Alejandro Delgado
Fine grain time resolution audio features for MIR
Andrew (Drew) Edwards
Deep Learning for Jazz Piano: Transcription + Generative Modeling
Oluremi Falowo
E-AIM - Embodied Cognition in Intelligent Musical Systems
Corey Ford
Artificial Intelligence for Supporting Musical Creativity and Engagement in Child-Computer Interaction
David Foster
Modelling the Creative Process of Jazz Improvisation
Iacopo Ghinassi
Semantic understanding of TV programme content and structure to enable automatic enhancement and adjustment
Max Graf
PERFORM-AI (Provide Extended Realities for Musical Performance using AI)
Andrea Guidi
Design for auditory imagery
Edward Hall
Probabilistic modelling of thematic development and structural coherence in music
Madeline Hamilton
Improving AI-generated Music with Pleasure Models
Benjamin Hayes
Perceptually motivated deep learning approaches to creative sound synthesis
Jiawen Huang
Lyrics Alignment For Polyphonic Music
Ilias Ibnyahya
Audio Effects design optimization
Thomas Kaplan
Probabilistic modelling of rhythm perception and production
Harnick Khera
Informed source separation for multi-mic production
Giacomo Lepri
Exploring the role of culture and community in the design of new musical instruments
Yukun Li
Computational Comparison Between Different Genres of Music in Terms of the Singing Voice
Jinhua Liang
AI for everyday sounds
Lele Liu
Automatic music transcription with end-to-end deep neural networks
Carlos Lordelo
Instrument modelling to aid polyphonic transcription
Yin-Jyun Luo
Industry-scale Machine Listening for Music and Audio Data
Ilaria Manco
Multimodal Deep Learning for Music Information Retrieval
Luca Marinelli
Gender-coded sound: A multimodal data-driven analysis of gender encoding strategies in sound and music for advertising
Andrea Martelloni
Real-Time Gesture Classification on an Augmented Acoustic Guitar using Deep Learning to Improve Extended-Range and Percussive Solo Playing
Lia Mice
The impact of physical dimensions on musical gestural interaction in large digital musical instrument design
Inês Nolasco
Towards an automatic acoustic identification of individuals in the wild
Brendan O'Connor
Singing Voice Attribute Transformation
Arjun Pankajakshan
Computational sound scene analysis
Teresa Pelinski
Sensor mesh as performance interface
Mary Pilataki
Deep Learning methods for Multi-Instrument Music Transcription
Vjosa Preniqi
Predicting demographics, personalities, and global values from digital media behaviours
Courtney Reed
Physiological sensing of the singing voice and musical imagery usage in vocalists
Xavier Riley
Pitch tracking for music applications - beyond 99% accuracy
Eleanor Row
Automatic micro-composition for professional/novice composers using generative models as creativity support tools
Sebastián Ruiz
Physiological Responses to Ensemble Interaction
Saurjya Sarkar
New perspectives in instrument-based audio source separation
Pedro Sarmento
Guitar-Oriented Neural Music Generation in Symbolic Format
Dalia Senvaityte
Audio Source Separation for Advanced Digital Audio Effects
Elona Shatri
Optical music recognition using deep learning
Rishi Shukla
Binaural virtual auditory display for music content recommendation and navigation
Shubhr Singh
Audio Applications of Novel Mathematical Methods in Deep Learning
Christian Steinmetz
End-to-end generative modeling of multitrack mixing with non-parallel data and adversarial networks
Vinod Subramanian
Note level audio features for understanding and visualising musical performance
Jingjing Tang
End-to-End System Design for Music Style Transfer with Neural Networks
Cyrus Vahidi
Perceptual end to end learning for music understanding
Soumya Sai Vanka
Music Production Style Transfer and Mix Similarity
James Weaver
Space and Intelligibility of Musical Performance
Elizabeth Wilson
Co-creative Algorithmic Composition Based on Models of Affective Response
Chris Winnard
Music Interestingness in the Brain
Lewis Wolstanholme
Meta-Physical Modelling
Yixiao Zhang
Machine Learning Methods for Artificial Musicality
Jincheng Zhang
Emotion-specific Music Generation Using Deep Learning

Visiting academics

Name | Project/interests/keywords
Dr Helen Bear
Honorary Lecturer
Integrating sound and context recognition for acoustic scene analysis
Dr Matthias Mauch
Visiting Academic
music transcription (chords, beats, drums, melody, ...), interactive music annotation, singing research, research in the evolution of musical styles
Dr Veronica Morfi
Machine transcription of wildlife bird sound scenes

Visitors

Name | Project/interests/keywords
Dr Ken O'Hanlon
Dr Jose J. Valero-Mas
University of Alicante
