Machine Listening

In the Machine Listening area we concentrate on the automatic analysis and understanding of musical and other sounds from the world around us.

We use a wide variety of techniques to analyse sounds, including: short-time Fourier transforms (STFTs), wavelets, cosine packets, Mel-frequency cepstral coefficients (MFCCs), hidden Markov models, Bayesian models, spectrogram/matrix factorization methods, sinusoidal analysis, independent component analysis, dynamic Bayesian networks, and sparse representations.
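As a small illustration of the first technique in the list, the short-time Fourier transform slices a signal into overlapping windowed frames and takes an FFT of each one. The sketch below is a minimal NumPy implementation (not the group's actual tooling); the frame length, hop size, and test signal are illustrative choices.

```python
import numpy as np

def stft(x, frame_len=512, hop=256):
    """Short-time Fourier transform: windowed overlapping frames -> FFT per frame."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(x) - frame_len) // hop
    frames = np.stack([x[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    # rfft keeps only the non-negative frequencies of a real signal
    return np.fft.rfft(frames, axis=1)  # shape: (n_frames, frame_len // 2 + 1)

# Synthetic test: a 1 kHz sine sampled at 16 kHz
sr = 16000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 1000 * t)

S = np.abs(stft(x))                     # magnitude spectrogram
peak_bin = int(S.mean(axis=0).argmax()) # strongest frequency bin
peak_hz = peak_bin * sr / 512           # bin index -> frequency in Hz
```

With these parameters the frequency resolution is 16000/512 = 31.25 Hz per bin, so the 1 kHz sine lands exactly in bin 32. Many of the other listed methods (MFCCs, sinusoidal analysis, spectrogram factorization) start from a magnitude spectrogram like `S`.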

Projects in this area include:

  • Information Dynamics of Music
  • Beat Tracking and Rhythmic Analysis
  • Sparse Representations for Audio Source Separation
  • Compressed Sensing of Audio Scenes
  • Automated Composition
  • Machine Listening using Sparse Representations
  • Interactive Real-time Musical Systems
  • Musical Audio Analysis for Real-Time Interaction
  • Real-Time Analysis of Voice for Musical Applications
  • Musical Audio Stream Separation
  • Sparse Object-Based Coding of Music
  • Multi-pitch detection and instrument identification
  • Acoustic scene analysis: event detection and scene classification


Dr Helen Bear
Honorary Lecturer
Integrating sound and context recognition for acoustic scene analysis
Dr Emmanouil Benetos
Senior Lecturer, Turing Fellow
Machine listening, music information retrieval, computational sound scene analysis, machine learning for audio analysis, language models for music and audio, computational musicology
Emir Demirel
Marie Skłodowska-Curie Actions Fellow
Automatic Lyrics Transcription and Alignment
Jiawen Huang
Lyrics Alignment for Polyphonic Music
Harnick Khera
Informed source separation for multi-mic production
Lele Liu
Automatic music transcription with end-to-end deep neural networks
Carlos Lordelo
Instrument modelling to aid polyphonic transcription
Yin-Jyun Luo
Industry-scale Machine Listening for Music and Audio Data
Ilaria Manco
Deep learning and multi-modal models for the music industry
Luca Marinelli
Gender-coded sound: A multimodal data-driven analysis of gender encoding strategies in sound and music for advertising
Veronica Morfi
Postdoctoral Research Assistant
Machine transcription of wildlife bird sound scenes
Inês Nolasco
Towards an automatic acoustic identification of individuals in the wild
Ken O'Hanlon
Development of next generation music recognition algorithm for content monitoring
Arjun Pankajakshan
Computational sound scene analysis
Dr Huy Phan
Lecturer in Artificial Intelligence
Machine listening, computational auditory scene analysis, machine learning for speech processing, machine learning for biosignal analysis, longitudinal sleep monitoring, healthcare applications
Xavier Riley
Digging Deeper - expanding the "Dig That Lick" corpus with new sources and techniques
Dalia Senvaityte
Audio Source Separation for Advanced Digital Audio Effects
Shubhr Singh
Audio Applications of Novel Mathematical Methods in Deep Learning
Dr Dan Stowell
Machine listening, birdsong, bird calls, multi-source, probabilistic models, machine learning, beatboxing
Vinod Subramanian
Note-level audio features for understanding and visualising musical performance
William J. Wilkinson
Probabilistic machine listening; generative models for natural sounds

PhD Study - interested in joining the team? We are currently accepting PhD applications.