Machine Listening

In the Machine Listening area we concentrate on the automatic analysis and understanding of music and other sounds from the world around us.

We use a wide variety of techniques to analyse sounds, including: short-time Fourier transforms (STFTs), wavelets, cosine packets, Mel-frequency cepstral coefficients (MFCCs), hidden Markov models, Bayesian models, spectrogram/matrix factorization methods, sinusoidal analysis, independent component analysis, dynamic Bayesian networks, and sparse representations.
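As a minimal illustration of the first technique listed, a short-time Fourier transform can be sketched in plain NumPy: the signal is cut into overlapping windowed frames and each frame is transformed independently. The frame length and hop size below are illustrative choices, not values used by any particular project here.

```python
import numpy as np

def stft(signal, frame_len=1024, hop=512):
    """Short-time Fourier transform: FFT of overlapping Hann-windowed frames."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    # One spectrum per frame; shape (n_frames, frame_len // 2 + 1)
    return np.fft.rfft(frames, axis=1)

# Example: one second of a 440 Hz tone sampled at 16 kHz
sr = 16000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 440 * t)
S = stft(x)
```

Each row of `S` is the spectrum of one frame; for this steady tone the magnitude peaks near bin 440 / 16000 * 1024 ≈ 28 in every frame. Mel-frequency cepstral coefficients, also listed above, are typically derived from exactly this representation by mel-filtering the magnitudes and taking a discrete cosine transform.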

Projects in this area include:

  • Information Dynamics of Music
  • Beat Tracking and Rhythmic Analysis
  • Sparse Representations for Audio Source Separation
  • Compressed Sensing of Audio Scenes
  • Automated Composition
  • Machine Listening using Sparse Representations
  • Interactive Real-time Musical Systems
  • Musical Audio Analysis for Real-Time Interaction
  • Real-Time Analysis of Voice for Musical Applications
  • Musical Audio Stream Separation
  • Sparse Object-Based Coding of Music
  • Multi-pitch detection and instrument identification
  • Acoustic scene analysis: event detection and scene classification


Dr Helen Bear
Integrating sound and context recognition for acoustic scene analysis
Dr Emmanouil Benetos
Senior Lecturer, RAEng Research Fellow, Turing Fellow
Machine listening, music information retrieval, computational sound scene analysis, machine learning for audio analysis, language models for music and audio, computational musicology
Bhusan Chettri
Automatic Speaker Verification Spoofing and Countermeasures
Emmanouil Theofanis Chourdakis
Automatic Storytelling with Audio
Emir Demirel
Representation Learning in Singing Voice
Pablo Alejandro Alvarado Duran
Physically and Musically Inspired Probabilistic Models for Audio Content Analysis
Carlos Lordelo
Instrument modelling to aid polyphonic transcription
Saumitra Mishra
Analysing Deep Architectures for Audio-based Music Content Analysis
Veronica Morfi
Machine transcription of wildlife bird sound scenes
Inês Nolasco
Research Assistant
Audio-based identification of beehive states
Ken O'Hanlon
Audio-visual analysis
Arjun Pankajakshan
Computational sound scene analysis
Dr Johan Pauwels
Postdoctoral Research Assistant
Audio Commons project, FAST IMPACt project, automatic music labelling, music information retrieval, music signal processing, chord/key/structure (joint) estimation, instrument identification, multi-track/channel audio, music transcription, graphical models, machine learning
Francisco Rodríguez Algarra
Intelligent Music Machine Listening
Dalia Senvaityte
Audio Source Separation for Advanced Digital Audio Effects
Daniel Stoller
Machine listening with limited annotations
Dr Dan Stowell
EPSRC Research Fellow
Machine listening, birdsong, bird calls, multi-source, probabilistic models, machine learning, beatboxing
Vinod Subramanian
Note level audio features for understanding and visualising musical performance
William J. Wilkinson
Probabilistic machine listening. Generative models for natural sounds.
Adrien Ycart
Music Language Models for Audio Analysis: neural networks, automatic music transcription, symbolic music modelling
Delia Fano Yela
Signal Processing and Machine Learning Methods for Noise and Interference Reduction in Studio and Live Recordings

PhD Study - interested in joining the team? We are currently accepting PhD applications.
