
Machine Listening

In the Machine Listening area we concentrate on the automatic analysis and understanding of musical and other sounds from the world around us.

We use a wide variety of techniques to analyse sounds, including: short-time Fourier transforms (STFTs), wavelets, cosine packets, Mel-frequency cepstral coefficients (MFCCs), hidden Markov models, Bayesian models, spectrogram/matrix factorization methods, sinusoidal analysis, independent component analysis, dynamic Bayesian networks, and sparse representations.
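As a small illustration of the first technique in this list, here is a minimal sketch of a short-time Fourier transform in pure Python. It uses a naive DFT for clarity (real systems would use an FFT routine), and the frame size, hop length, and test signal are illustrative choices only, not taken from any of the projects listed here.

```python
import cmath
import math

def stft(signal, frame_size=256, hop=128):
    """Short-time Fourier transform: slide a Hann window along the
    signal and take the DFT of each frame. Returns a list of magnitude
    spectra, one per frame (only the non-negative-frequency bins)."""
    window = [0.5 - 0.5 * math.cos(2 * math.pi * n / frame_size)
              for n in range(frame_size)]
    frames = []
    for start in range(0, len(signal) - frame_size + 1, hop):
        frame = [signal[start + n] * window[n] for n in range(frame_size)]
        # Naive DFT of the windowed frame; keep magnitudes of the first half.
        spectrum = [abs(sum(frame[n] * cmath.exp(-2j * math.pi * k * n / frame_size)
                            for n in range(frame_size)))
                    for k in range(frame_size // 2 + 1)]
        frames.append(spectrum)
    return frames  # time-frequency magnitude matrix

# Example: a 440 Hz sine sampled at 8 kHz.
sr = 8000
sig = [math.sin(2 * math.pi * 440 * t / sr) for t in range(1024)]
spec = stft(sig)
# Frequency resolution is 8000/256 = 31.25 Hz per bin,
# so the energy should peak in bin round(440/31.25) = 14.
peak_bin = max(range(len(spec[0])), key=lambda k: spec[0][k])
print(peak_bin)  # prints 14
```

The same time-frequency matrix is the starting point for several of the other techniques listed above, e.g. MFCCs (a Mel filterbank and cosine transform applied to each spectrum) and spectrogram factorization methods.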

Projects in this area include:

  • Information Dynamics of Music
  • Beat Tracking and Rhythmic Analysis
  • Sparse Representations for Audio Source Separation
  • Compressed Sensing of Audio Scenes
  • Automated Composition
  • Machine Listening using Sparse Representations
  • Interactive Real-time Musical Systems
  • Musical Audio Analysis for Real-Time Interaction
  • Real-Time Analysis of Voice for Musical Applications
  • Musical Audio Stream Separation
  • Sparse Object-Based Coding of Music
  • Multi-pitch detection and instrument identification
  • Acoustic scene analysis: event detection and scene classification

Members

Each member is listed with their role (where applicable) and project, interests, or keywords.

Dr Helen Bear
Honorary Lecturer
Integrating sound and context recognition for acoustic scene analysis

Dr Emmanouil Benetos
Reader in Machine Listening, Turing Fellow
Machine listening, music information retrieval, computational sound scene analysis, machine learning for audio analysis, language models for music and audio, computational musicology

Sungkyun Chang
Deep learning technologies for multi-instrument automatic music transcription

Andrew (Drew) Edwards
Deep Learning for Jazz Piano: Transcription + Generative Modeling

Jiawen Huang
Lyrics Alignment For Polyphonic Music

Harnick Khera
Informed source separation for multi-mic production

Jinhua Liang
AI for everyday sounds

Lele Liu
Automatic music transcription with end-to-end deep neural networks

Carlos Lordelo
Instrument modelling to aid polyphonic transcription

Yin-Jyun Luo
Industry-scale Machine Listening for Music and Audio Data

Ilaria Manco
Multimodal Deep Learning for Music Information Retrieval

Luca Marinelli
Gender-coded sound: A multimodal data-driven analysis of gender encoding strategies in sound and music for advertising

Dr Veronica Morfi
Machine transcription of wildlife bird sound scenes

Inês Nolasco
Towards an automatic acoustic identification of individuals in the wild

Dr Ken O'Hanlon

Arjun Pankajakshan
Computational sound scene analysis

Dr Huy Phan
Lecturer in Artificial Intelligence, Turing Fellow
Machine listening, computational auditory scene analysis, machine learning for speech processing, machine learning for biosignal analysis, longitudinal sleep monitoring, healthcare applications

Dalia Senvaityte
Audio Source Separation for Advanced Digital Audio Effects

Shubhr Singh
Audio Applications of Novel Mathematical Methods in Deep Learning

Vinod Subramanian
Note level audio features for understanding and visualising musical performance

PhD Study - interested in joining the team? We are currently accepting PhD applications.
