People
Primary academic staff and fellows
Name | Project/interests/keywords |
---|---|
Dr Mathieu Barthet Senior Lecturer in Digital Media | Music information research, Internet of musical things, Extended reality, New interfaces for musical expression, Semantic audio, Music perception (timbre, emotions), Audience-Performer interaction, Participatory art |
Dr Emmanouil Benetos Reader in Machine Listening, Turing Fellow | Machine listening, music information retrieval, computational sound scene analysis, machine learning for audio analysis, language models for music and audio, computational musicology |
Dr Nick Bryan-Kinns Professor of Interaction Design. Visiting Professor of Interaction Design, Hunan University, China. Turing Fellow. | Interaction Design with Audio #IDwA. Interactive Art, Interactive Music, Interactive Sonification. Design, Evaluation. Collaboration, Multi-person Interaction. Cross-Modal Interaction, Tangible Interaction. |
Prof. Simon Dixon Professor of Computer Science, Deputy Director of C4DM, Director of the AIM CDT, Turing Fellow | Music informatics, music signal processing, artificial intelligence, music cognition; extraction of musical content (e.g. rhythm, harmony, intonation) from audio signals: beat tracking, audio alignment, chord and note transcription, singing intonation; using signal processing approaches, probabilistic models, and deep learning. |
Dr George Fazekas Senior Lecturer | Semantic Audio, Music Information Retrieval, Semantic Web for Music, Machine Learning and Data Science, Music Emotion Recognition, Interactive music systems (e.g. intelligent editing, audio production and performance systems) |
Prof Andrew McPherson Professor of Musical Interaction | new interfaces for musical expression, augmented instruments, performance study, human-computer interaction, embedded hardware |
Dr Johan Pauwels Lecturer in Audio Signal Processing | automatic music labelling, music information retrieval, music signal processing, machine learning for audio, chord/key/structure (joint) estimation, instrument identification, multi-track/channel audio, music transcription, graphical models, big data science |
Prof. Joshua D Reiss Professor of Audio Engineering | sound engineering, intelligent audio production, sound synthesis, audio effects, automatic mixing |
Dr Charalampos Saitis Lecturer in Digital Music Processing, Turing Fellow | Communication acoustics, crossmodal correspondences, sound synthesis, cognitive audio, musical haptics |
Prof Mark Sandler C4DM Director, Turing Fellow, Royal Society Wolfson Research Merit award holder | Digital Signal Processing, Digital Audio, Music Informatics, Audio Features, Semantic Audio, Immersive Audio, Studio Science, Music Data Science, Music Linked Data. |
Dr Tony Stockman Senior Lecturer | Interaction Design, auditory displays, Data Sonification, Collaborative Systems, Cross-modal Interaction, Assistive Technology, Accessibility |
Dr Lin Wang Lecturer in Applied Data Science and Signal Processing | signal processing; machine learning; robot perception |
Associate academic staff and fellows
Name | Project/interests/keywords |
---|---|
Prof Pat Healey Professor of Human Interaction, Turing Fellow | |
Dr Marcus Pearce Senior Lecturer in Sound & Music Processing | Music Cognition, Auditory Perception, Empirical Aesthetics, Statistical Learning, Probabilistic Modelling. |
Prof Matthew Purver Professor of Computational Linguistics, Turing Fellow | computational linguistics including models of language and music |
Prof Geraint Wiggins Professor of Computational Creativity | Computational Creativity, Artificial Intelligence, Music Cognition |
Research support staff
Name | Project/interests/keywords |
---|---|
Alvaro Bort Research Programme Manager | Projects: UKRI Centre for Doctoral Training in Artificial Intelligence and Music, New Frontiers in Music Information Processing (MIP-Frontiers) |
Jonathan Winfield Research Programme Manager | Project: Centre for Doctoral Training in Media and Arts Technology |
Postdoctoral research assistants
Name | Project/interests/keywords |
---|---|
Dr Jacob Harrison | Bridging the gap: visually impaired and sighted music industry professionals working side by side |
Dr Yuanyuan Liu | Project: Digital Platforms for Craft in the UK and China |
Research assistants
Name | Project/interests/keywords |
---|---|
Sungkyun Chang | Deep learning technologies for multi-instrument automatic music transcription |
Research students
Name | Project/interests/keywords |
---|---|
Berker Banar | Towards Composing Contemporary Classical Music using Generative Deep Learning |
Adán Benito | Beyond the fret: gesture analysis on fretted instruments and its applications to instrument augmentation |
Aditya Bhattacharjee | Self-supervision in Audio Fingerprinting |
James Bolt | Intelligent audio and music editing with deep learning |
Gary Bromham | The role of nostalgia in music production
Carey Bunks | Cover Song Identification |
Marco Comunità | Machine learning applied to sound synthesis models |
Ruby Crocker | Continuous mood recognition in film music |
Andrew (Drew) Edwards | Deep Learning for Jazz Piano: Transcription + Generative Modeling |
Oluremi Falowo | E-AIM - Embodied Cognition in Intelligent Musical Systems |
Corey Ford | Artificial Intelligence for Supporting Musical Creativity and Engagement in Child-Computer Interaction |
David Foster | Modelling the Creative Process of Jazz Improvisation |
Nelly Garcia | An investigation evaluating realism in sound design |
Adam Andrew Garrow | Probabilistic learning of sequential structures in music cognition
Iacopo Ghinassi | Semantic understanding of TV programme content and structure to enable automatic enhancement and adjustment |
Max Graf | PERFORM-AI (Provide Extended Realities for Musical Performance using AI) |
Andrea Guidi | Design for auditory imagery |
Edward Hall | Probabilistic modelling of thematic development and structural coherence in music |
Madeline Hamilton | Improving AI-generated Music with Pleasure Models |
Benjamin Hayes | Perceptually motivated deep learning approaches to creative sound synthesis |
Jiawen Huang | Lyrics Alignment For Polyphonic Music |
Ilias Ibnyahya | Audio Effects design optimization |
Thomas Kaplan | Probabilistic modelling of rhythm perception and production |
Harnick Khera | Informed source separation for multi-mic production |
Giacomo Lepri | Exploring the role of culture and community in the design of new musical instruments |
Yukun Li | Computational Comparison Between Different Genres of Music in Terms of the Singing Voice |
Jinhua Liang | AI for everyday sounds |
Lele Liu | Automatic music transcription with end-to-end deep neural networks |
Carlos Lordelo | Instrument modelling to aid polyphonic transcription |
Yin-Jyun Luo | Industry-scale Machine Listening for Music and Audio Data |
Yinghao Ma | Self-supervision in machine listening |
Ilaria Manco | Multimodal Deep Learning for Music Information Retrieval |
Luca Marinelli | Gender-coded sound: A multimodal data-driven analysis of gender encoding strategies in sound and music for advertising |
Andrea Martelloni | Real-Time Gesture Classification on an Augmented Acoustic Guitar using Deep Learning to Improve Extended-Range and Percussive Solo Playing |
Tyler Howard McIntosh | Expressive Performance Rendering for Music Generation Systems |
Christopher Mitcheltree | Representation Learning for Audio Production Style and Modulations |
Ashley Noel-Hirst | Latent Spaces for Human-AI music generation |
Inês Nolasco | Towards an automatic acoustic identification of individuals in the wild |
Brendan O'Connor | Singing Voice Attribute Transformation |
Arjun Pankajakshan | Computational sound scene analysis |
Teresa Pelinski | Sensor mesh as performance interface |
Mary Pilataki | Deep Learning methods for Multi-Instrument Music Transcription |
Vjosa Preniqi | Predicting demographics, personalities, and global values from digital media behaviours |
Xavier Riley | Pitch tracking for music applications - beyond 99% accuracy |
Eleanor Row | Automatic micro-composition for professional/novice composers using generative models as creativity support tools |
Sebastián Ruiz | Physiological Responses to Ensemble Interaction |
Saurjya Sarkar | New perspectives in instrument-based audio source separation |
Pedro Sarmento | Guitar-Oriented Neural Music Generation in Symbolic Format |
Dalia Senvaityte | Audio Source Separation for Advanced Digital Audio Effects |
Bleiz Del Sette | The Sound of Care: researching the use of Deep Learning and Sonification for the daily support of people with Chronic Primary Pain |
Elona Shatri | Optical music recognition using deep learning |
Jordie Shier | Real-time timbral mapping for synthesized percussive performance |
Shubhr Singh | Audio Applications of Novel Mathematical Methods in Deep Learning |
Christian Steinmetz | End-to-end generative modeling of multitrack mixing with non-parallel data and adversarial networks |
David Südholt | Machine Learning of Physical Models for Voice Synthesis |
Jingjing Tang | End-to-End System Design for Music Style Transfer with Neural Networks |
Louise Thorpe | Using Signal-informed Source Separation (SISS) principles to improve instrument separation from legacy recordings |
Antonella Torrisi | Computational analysis of chick vocalisations: from categorisation to live feedback |
Maryam Torshizi | Music emotion modelling using graph analysis |
Cyrus Vahidi | Perceptual end-to-end learning for music understanding
Soumya Sai Vanka | Music Production Style Transfer and Mix Similarity |
Yannis (John) Vasilakis | Active Learning for Interactive Music Transcription |
Ningzhi Wang | Generative Models For Music Audio Representation And Understanding |
James Weaver | Space and Intelligibility of Musical Performance |
Alexander Williams | User-driven deep music generation in digital audio workstations |
Elizabeth Wilson | Co-creative Algorithmic Composition Based on Models of Affective Response |
Chris Winnard | Music Interestingness in the Brain |
Lewis Wolstanholme | Meta-Physical Modelling |
Chengye Wu | Leveraging cross-sensory associations in communication |
Chin-Yun Yu | Neural Audio Synthesis with Expressiveness Control |
Huan Zhang | Computational Modelling of Expressive Piano Performance |
Jincheng Zhang | Emotion-specific Music Generation Using Deep Learning |
Yixiao Zhang | Machine Learning Methods for Artificial Musicality |
Visiting academics
Name | Project/interests/keywords |
---|---|
Dr Helen Bear Honorary Lecturer | Integrating sound and context recognition for acoustic scene analysis |
Dr Matthias Mauch Visiting Academic | music transcription (chords, beats, drums, melody, ...), interactive music annotation, singing research, research in the evolution of musical styles |
Dr Veronica Morfi | Machine transcription of wildlife bird sound scenes |
Dr Montserrat Pàmies-Vilà University of Music and Performing Arts Vienna | Timbre modelling for non-conventional cello techniques |
Visitors
Name | Project/interests/keywords |
---|---|
Domenico Stefani University of Trento, Italy | Embedded machine learning for smart musical instruments |