Machine Listening

In the Machine Listening area we concentrate on the automatic analysis and understanding of musical and other sounds from the world around us.

We use a wide variety of techniques to analyse sounds, including: short-time Fourier transforms (STFTs), wavelets, cosine packets, Mel-frequency cepstral coefficients (MFCCs), hidden Markov models, Bayesian models, spectrogram/matrix factorization methods, sinusoidal analysis, independent component analysis, dynamic Bayesian networks, and sparse representations.
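To illustrate the first technique on the list, here is a minimal sketch of a short-time Fourier transform (STFT) in plain NumPy: the signal is cut into overlapping windowed frames and each frame is Fourier-transformed. The frame length, hop size, and test signal below are illustrative choices, not parameters from any particular project.

```python
import numpy as np

def stft(x, frame_len=256, hop=128):
    """Short-time Fourier transform: windowed, overlapping FFT frames."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(x) - frame_len) // hop
    frames = np.stack([x[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    # One row per frame, one column per (non-negative) frequency bin
    return np.fft.rfft(frames, axis=1)

# A 440 Hz sine at 8 kHz: with 256-sample frames the bin spacing is
# 8000 / 256 = 31.25 Hz, so energy peaks near bin 440 / 31.25 ≈ 14.
sr = 8000
t = np.arange(sr) / sr
X = stft(np.sin(2 * np.pi * 440 * t))
peak_bin = int(np.abs(X[0]).argmax())
```

Most of the other spectral representations mentioned above (e.g. MFCCs, spectrogram factorisations) start from exactly this kind of time-frequency matrix.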

Projects in this area include:

  • Information Dynamics of Music
  • Beat Tracking and Rhythmic Analysis
  • Sparse Representations for Audio Source Separation
  • Compressed Sensing of Audio Scenes
  • Automated Composition
  • Machine Listening using Sparse Representations
  • Interactive Real-time Musical Systems
  • Musical Audio Analysis for Real-Time Interaction
  • Real-Time Analysis of Voice for Musical Applications
  • Musical Audio Stream Separation
  • Sparse Object-Based Coding of Music
  • Multi-pitch detection and instrument identification
  • Acoustic scene analysis: event detection and scene classification


Dr Helen Bear
Honorary Lecturer
Integrating sound and context recognition for acoustic scene analysis
Dr Emmanouil Benetos
Reader in Machine Listening, Turing Fellow
Machine listening, music information retrieval, computational sound scene analysis, machine learning for audio analysis, language models for music and audio, computational musicology
Aditya Bhattacharjee
Self-supervision in Audio Fingerprinting
Sungkyun Chang
Deep learning technologies for multi-instrument automatic music transcription
Andrew (Drew) Edwards
Deep Learning for Jazz Piano: Transcription + Generative Modeling
Jiawen Huang
Lyrics Alignment For Polyphonic Music
Harnick Khera
Informed source separation for multi-mic production
Jinhua Liang
AI for everyday sounds
Lele Liu
Automatic music transcription with end-to-end deep neural networks
Carlos Lordelo
Instrument modelling to aid polyphonic transcription
Yin-Jyun Luo
Industry-scale Machine Listening for Music and Audio Data
Yinghao Ma
Self-supervision in machine listening
Ilaria Manco
Multimodal Deep Learning for Music Information Retrieval
Luca Marinelli
Gender-coded sound: A multimodal data-driven analysis of gender encoding strategies in sound and music for advertising
Christopher Mitcheltree
Representation Learning for Audio Production Style and Modulations
Dr Veronica Morfi
Machine transcription of wildlife bird sound scenes
Inês Nolasco
Towards an automatic acoustic identification of individuals in the wild
Arjun Pankajakshan
Computational sound scene analysis
Dr Johan Pauwels
Lecturer in Audio Signal Processing
Automatic music labelling, music information retrieval, music signal processing, machine learning for audio, chord/key/structure (joint) estimation, instrument identification, multi-track/channel audio, music transcription, graphical models, big data science
Dalia Senvaityte
Audio Source Separation for Advanced Digital Audio Effects
Shubhr Singh
Audio Applications of Novel Mathematical Methods in Deep Learning
Antonella Torrisi
Computational analysis of chick vocalisations: from categorisation to live feedback
Yannis (John) Vasilakis
Active Learning for Interactive Music Transcription
Dr Lin Wang
Lecturer in Applied Data Science and Signal Processing
Signal processing, machine learning, robot perception

PhD Study - interested in joining the team? We are currently accepting PhD applications.
