Music Informatics
With online music stores offering millions of songs to choose from, users need assistance. Using digital signal processing, machine learning, and the semantic web, our research explores new ways of intelligently analysing musical data, and assists people in finding the music they want.
We have developed systems for automatic playlisting from personal collections (SoundBite), for looking inside the audio (Sonic Visualiser), for hardening/softening transients, and many others. We also regularly release some of our algorithms under Open Source licences, while maintaining a healthy portfolio of patents.
This area is led by Prof Simon Dixon. Projects in this area include:
- mid-level music descriptors: chords, keys, notes, beats, drums, instrumentation, timbre, structural segmentation, melody (see the code sketch after this list)
- high-level concepts for music classification, retrieval and knowledge discovery: genre, mood, emotions
- Sonic Visualiser
- semantic music analysis for intelligent editing
- linking music-related information and audio data
- interactive auralisation with room impulse responses
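As an illustration of the mid-level descriptors listed above, the sketch below extracts beat times and a chroma representation (a common input to chord and key estimation) from an audio file. This is a minimal example using the open-source librosa library, not code from these projects, and the file path is a placeholder.

```python
# Minimal sketch: extracting mid-level descriptors (beats, chroma) with librosa.
# Illustrative only -- not code from the projects above; "song.wav" is a placeholder path.
import librosa

# Load audio as mono at librosa's default 22.05 kHz sample rate
y, sr = librosa.load("song.wav")

# Beat tracking: a global tempo estimate plus beat positions (as frame indices)
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
beat_times = librosa.frames_to_time(beat_frames, sr=sr)

# Chroma features: 12 pitch-class energy bins per frame,
# a typical mid-level input for chord and key estimation
chroma = librosa.feature.chroma_cqt(y=y, sr=sr)

print(f"Estimated tempo: {float(tempo):.1f} BPM")
print(f"First beat times (s): {beat_times[:4]}")
print(f"Chroma shape (pitch classes x frames): {chroma.shape}")
```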
PhD Study - interested in joining the team? We are currently accepting PhD applications.
Members
Name | Project/interests/keywords |
---|---|
Ruchit Agrawal | Adaptive Semi-Supervised Music Alignment |
Berker Banar | Generating emotional music using AI |
Dr Mathieu Barthet Senior Lecturer in Digital Media | Music information research, Internet of musical things, Extended reality, New interfaces for musical expression, Semantic audio, Music perception (timbre, emotions), Audience-Performer interaction, Participatory art |
Dr Emmanouil Benetos Senior Lecturer, Turing Fellow | Machine listening, music information retrieval, computational sound scene analysis, machine learning for audio analysis, language models for music and audio, computational musicology |
Gary Bromham | The role of nostalgia in music production |
Alejandro Delgado | Fine grain time resolution audio features for MIR |
Emir Demirel Marie Skłodowska-Curie Actions Fellow | Automatic Lyrics Transcription and Alignment |
Prof. Simon Dixon Professor, Deputy Director of C4DM, Director of the AIM CDT | Music informatics, music signal processing, artificial intelligence, music cognition; extraction of musical content (e.g. rhythm, harmony, intonation) from audio signals: beat tracking, audio alignment, chord and note transcription, singing intonation; using signal processing approaches, probabilistic models, and deep learning. |
Dr George Fazekas Senior Lecturer | Semantic Audio, Music Information Retrieval, Semantic Web for Music, Machine Learning and Data Science, Music Emotion Recognition, Interactive music systems (e.g. intelligent editing, audio production and performance systems) |
David Foster | Modelling the Creative Process of Jazz Improvisation |
Jiawen Huang | Lyrics Alignment For Polyphonic Music |
Harnick Khera | Informed source separation for multi-mic production |
Yukun Li | Computational Comparison Between Different Genres of Music in Terms of the Singing Voice |
Lele Liu | Automatic music transcription with end-to-end deep neural networks |
Carlos Lordelo | Instrument modelling to aid polyphonic transcription |
Ilaria Manco | Deep learning and multi-modal models for the music industry |
Dr Matthias Mauch Visiting Academic | music transcription (chords, beats, drums, melody, ...), interactive music annotation, singing research, research in the evolution of musical styles |
Brendan O'Connor | Voice Transformation |
Mary Pilataki-Manika | Polyphonic Music Transcription using Deep Learning |
Vjosa Preniqi | Predicting demographics, personalities, and global values from digital media behaviours |
Xavier Riley | Digging Deeper - expanding the “Dig That Lick” corpus with new sources and techniques |
Prof Mark Sandler C4DM Director, Turing Fellow, Royal Society Wolfson Research Merit award holder | Digital Signal Processing, Digital Audio, Music Informatics, Audio Features, Semantic Audio, Immersive Audio, Studio Science, Music Data Science, Music Linked Data. |
Saurjya Sarkar | New perspectives in instrument-based audio source separation |
Dalia Senvaityte | Audio Source Separation for Advanced Digital Audio Effects |
Elona Shatri | Optical music recognition using deep learning |
Vinod Subramanian | Note level audio features for understanding and visualising musical performance |
Cyrus Vahidi | Perceptual end to end learning for music understanding |
Dr Thomas Wilmering | |
Simin Yang | Analysis and Prediction of Listeners' Time-varying Emotion Responses in Live Music Performance |
Yixiao Zhang | Machine Learning Methods for Artificial Musicality |