
Music Informatics

With online music stores offering millions of songs to choose from, users need assistance. Using digital signal processing, machine learning, and the semantic web, our research explores new ways of intelligently analysing musical data, and assists people in finding the music they want.

We have developed systems for automatic playlisting from personal collections (SoundBite), for looking inside the audio (Sonic Visualiser), for hardening/softening transients, and many others. We also regularly release some of our algorithms under Open Source licences, while maintaining a healthy portfolio of patents.

This area is led by Prof Simon Dixon. Projects in this area include:

  • mid-level music descriptors: chords, keys, notes, beats, drums, instrumentation, timbre, structural segmentation, melody
  • high-level concepts for music classification, retrieval and knowledge discovery: genre, mood, emotions
  • Sonic Visualiser
  • semantic music analysis for intelligent editing
  • linking music-related information and audio data
  • interactive auralisation with room impulse responses

PhD Study - interested in joining the team? We are currently accepting PhD applications.

Members

Name – Project / interests / keywords
Ruchit Agrawal – Adaptive Semi-Supervised Music Alignment
Dr Mathieu Barthet
Lecturer
Dr Emmanouil Benetos
Senior Lecturer, RAEng Research Fellow, Turing Fellow
Music signal analysis, computational sound scene analysis, machine learning for audio analysis, computational musicology
Gary Bromham – The role of nostalgia in music production
Emmanouil Theofanis Chourdakis – Automatic Storytelling with Audio
Jiajie Dai – Modelling Intonation and Interaction in Vocal Ensembles
Alejandro Delgado – Fine grain time resolution audio features for MIR
Emir Demirel – Representation Learning in Singing Voice
Prof. Simon Dixon
Professor, Deputy Director of C4DM, Director of Graduate Studies
Music informatics, music signal processing, artificial intelligence, music cognition; extraction of musical content (e.g. rhythm, harmony, intonation) from audio signals: beat tracking, audio alignment, chord and note transcription, singing intonation; using signal processing approaches, probabilistic models, and deep learning.
Dr George Fazekas
Senior Lecturer
Semantic Audio, Music Information Retrieval, Semantic Web for Music, Machine Learning and Data Science, Music Emotion Recognition, Interactive music systems (e.g. intelligent editing, audio production and performance systems)
Peter Harrison – Music-theoretic and cognitive applications of symbolic music modelling
Beici Liang – Piano playing technique detection, multimodal music information retrieval
Carlos Lordelo – Instrument modelling to aid polyphonic transcription
Dr Matthias Mauch
Lecturer
music transcription (chords, beats, drums, melody, ...), interactive music annotation, singing research, research in the evolution of musical styles
Dave Moffat – Tools for Intelligent Music Production
Dr Johan Pauwels
Postdoctoral Research Assistant
Audio Commons project, FAST IMPACt project, automatic music labelling, music information retrieval, music signal processing, chord/key/structure (joint) estimation, instrument identification, multi-track/channel audio, music transcription, graphical models, machine learning
Francisco Rodríguez Algarra – Intelligent Music Machine Listening
Prof Mark Sandler
Director
Digital Signal Processing, Digital Audio, Music Informatics, Audio Features, Semantic Audio, Immersive Audio, Studio Science, Music Data Science, Music Linked Data.
Dalia Senvaityte – Audio Source Separation for Advanced Digital Audio Effects
Daniel Stoller – Machine listening with limited annotations
Vinod Subramanian – Note level audio features for understanding and visualising musical performance
Dr Florian Thalmann
Dr Thomas Wilmering
Simin Yang – Analysis and Prediction of Listeners' Time-varying Emotion Responses in Live Music Performance
Adrien Ycart – Music Language Models for Audio Analysis: neural networks, automatic music transcription, symbolic music modelling
Delia Fano Yela – Signal Processing and Machine Learning Methods for Noise and Interference Reduction in Studio and Live Recordings
