Music Informatics

With online music stores offering millions of songs to choose from, users need assistance. Using digital signal processing, machine learning, and the semantic web, our research explores new ways of intelligently analysing musical data, and assists people in finding the music they want.

We have developed systems for automatic playlisting from personal collections (SoundBite), for looking inside the audio (Sonic Visualiser), for hardening/softening transients, and many others. We also regularly release some of our algorithms under Open Source licences, while maintaining a healthy portfolio of patents.

This area is led by Prof. Simon Dixon. Projects in this area include:

  • mid-level music descriptors: chords, keys, notes, beats, drums, instrumentation, timbre, structural segmentation, melody
  • high-level concepts for music classification, retrieval and knowledge discovery: genre, mood, emotions
  • Sonic Visualiser
  • semantic music analysis for intelligent editing
  • linking music-related information and audio data
  • interactive auralisation with room impulse responses
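To give a flavour of what a "mid-level music descriptor" looks like in practice, here is a minimal, illustrative sketch (not C4DM code, and far simpler than the group's actual methods) of a chroma vector: folding an audio spectrum onto the 12 pitch classes, demonstrated on a synthesised A4 (440 Hz) sine tone.

```python
# Illustrative sketch of a chroma (pitch-class energy) descriptor.
# All names and parameter choices here are for the example only.
import numpy as np

SR = 22050  # sample rate in Hz, an arbitrary choice for this demo

def chroma_vector(signal, sr=SR):
    """Return a normalised 12-bin pitch-class profile (C=0 ... B=11)."""
    windowed = signal * np.hanning(len(signal))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    chroma = np.zeros(12)
    for f, mag in zip(freqs, spectrum):
        if f < 30.0:  # skip DC and sub-audio bins
            continue
        midi = 69 + 12 * np.log2(f / 440.0)   # frequency -> MIDI note number
        chroma[int(round(midi)) % 12] += mag  # fold into a pitch class
    return chroma / (chroma.sum() + 1e-12)

t = np.arange(SR) / SR                  # one second of audio
tone = np.sin(2 * np.pi * 440.0 * t)    # A4 sine wave
c = chroma_vector(tone)
print(c.argmax())  # pitch class 9, i.e. A, dominates
```

Real systems build on richer time-frequency representations and learned models, but the descriptor idea is the same: compress raw audio into musically meaningful quantities (pitch classes, beats, chords) that downstream retrieval and classification can use.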

PhD Study - interested in joining the team? We are currently accepting PhD applications.


Berker Banar – Towards Composing Contemporary Classical Music using Generative Deep Learning
Dr Mathieu Barthet
Senior Lecturer in Digital Media
Music information research, Internet of musical things, Extended reality, New interfaces for musical expression, Semantic audio, Music perception (timbre, emotions), Audience-Performer interaction, Participatory art
Dr Emmanouil Benetos
Reader in Machine Listening, Turing Fellow
Machine listening, music information retrieval, computational sound scene analysis, machine learning for audio analysis, language models for music and audio, computational musicology
Gary Bromham – The role of nostalgia in music production
Sungkyun Chang – Deep learning technologies for multi-instrument automatic music transcription
Ruby Crocker – Continuous mood recognition in film music
Alejandro Delgado – Fine grain time resolution audio features for MIR
Prof. Simon Dixon
Professor of Computer Science, Deputy Director of C4DM, Director of the AIM CDT, Turing Fellow
Music informatics, music signal processing, artificial intelligence, music cognition; extraction of musical content (e.g. rhythm, harmony, intonation) from audio signals: beat tracking, audio alignment, chord and note transcription, singing intonation; using signal processing approaches, probabilistic models, and deep learning.
Andrew (Drew) Edwards – Deep Learning for Jazz Piano: Transcription + Generative Modeling
Dr George Fazekas
Senior Lecturer
Semantic Audio, Music Information Retrieval, Semantic Web for Music, Machine Learning and Data Science, Music Emotion Recognition, Interactive music systems (e.g. intelligent editing, audio production and performance systems)
David Foster – Modelling the Creative Process of Jazz Improvisation
Iacopo Ghinassi – Semantic understanding of TV programme content and structure to enable automatic enhancement and adjustment
Callum Goddard – Deep learning technologies for multi-instrument automatic music transcription
Andrea Guidi – Design for auditory imagery
Edward Hall – Probabilistic modelling of thematic development and structural coherence in music
Jiawen Huang – Lyrics Alignment For Polyphonic Music
Thomas Kaplan – Probabilistic modelling of rhythm perception and production
Harnick Khera – Informed source separation for multi-mic production
Yukun Li – Computational Comparison Between Different Genres of Music in Terms of the Singing Voice
Lele Liu – Automatic music transcription with end-to-end deep neural networks
Carlos Lordelo – Instrument modelling to aid polyphonic transcription
Ilaria Manco – Multimodal Deep Learning for Music Information Retrieval
Andrea Martelloni – Real-Time Gesture Classification on an Augmented Acoustic Guitar using Deep Learning to Improve Extended-Range and Percussive Solo Playing
Dr Matthias Mauch
Visiting Academic
music transcription (chords, beats, drums, melody, ...), interactive music annotation, singing research, research in the evolution of musical styles
Brendan O'Connor – Singing Voice Attribute Transformation
Dr Johan Pauwels
Lecturer in Audio Signal Processing
automatic music labelling, music information retrieval, music signal processing, machine learning for audio, chord/key/structure (joint) estimation, instrument identification, multi-track/channel audio, music transcription, graphical models, big data science
Mary Pilataki – Deep Learning methods for Multi-Instrument Music Transcription
Vjosa Preniqi – Predicting demographics, personalities, and global values from digital media behaviours
Courtney Reed – Physiological sensing of the singing voice and musical imagery usage in vocalists
Xavier Riley – Pitch tracking for music applications - beyond 99% accuracy
Prof Mark Sandler
C4DM Director, Turing Fellow, Royal Society Wolfson Research Merit award holder
Digital Signal Processing, Digital Audio, Music Informatics, Audio Features, Semantic Audio, Immersive Audio, Studio Science, Music Data Science, Music Linked Data.
Saurjya Sarkar – New perspectives in instrument-based audio source separation
Pedro Sarmento – Guitar-Oriented Neural Music Generation in Symbolic Format
Elona Shatri – Optical music recognition using deep learning
Vinod Subramanian – Note level audio features for understanding and visualising musical performance
Cyrus Vahidi – Perceptual end to end learning for music understanding
Soumya Sai Vanka – Music Production Style Transfer and Mix Similarity
Elizabeth Wilson – Co-creative Algorithmic Composition Based on Models of Affective Response
Yixiao Zhang – Machine Learning Methods for Artificial Musicality
Jincheng Zhang – Emotion-specific Music Generation Using Deep Learning