Digital Music Research Network (EPSRC Network GR/R64810/01)
Projects
Collaborative Projects
- Techniques and Algorithms for Understanding the Information Dynamics of Music
- Hierarchical Segmentation & Semantic Markup of Musical Signals
- Algorithms for Musical Pattern Recognition and Extraction
- OMRAS: Online music recognition and searching

Queen Mary, University of London
- EASAIER: Enabling Access to Sound Archives through Integration, Enrichment and Retrieval
- Techniques and Algorithms for Understanding the Information Dynamics of Music
- Hierarchical Segmentation & Semantic Markup of Musical Signals
- Object-based Coding of Musical Audio
- Advanced Subband Systems for Audio Source Separation
- Mobile Jamming
- Understanding and Supporting Group Creativity Within Design: A Designing For The 21st Century Research Cluster
- SIMAC: Semantic Interaction with Music Audio Contents
- Engaging Collaborations
- Automatic Polyphonic Music Transcription Using Multiple Cause Models and Independent Component Analysis
- SAVANT: Synchronised and scalable Audio Video content Across NeTworks
- High Quality Audio Coding
- Bit-Flipping Sigma-Delta Modulation
- Information and noise in ICA for music
- Linear Signal Processing in the Wavelet Domain
- OMRAS: Online music recognition and searching
- Digital Audio Effects (DAFx)
- Scalable audio compression with wavelets for MPEG4 and other applications
University of Bristol
- Automatic recognition of musical instruments using hidden Markov models
- CARESS: Creating Aesthetically Resonant Environments in Sound
- Using neural networks to model the spectral character of musical instruments over their playing range

University of Cambridge
- Probabilistic Modelling of Musical Audio for Machine Listening
- MUSCLE: Multimedia Understanding through Semantics, Computation and Learning
- HASSIP: Harmonic Analysis and Statistics for Signal and Image Processing
- High Level Modelling and Inference for Audio Signals Using Bayesian Atomic Decompositions
- MOUMIR: MOdels for Unified Multimedia Information Retrieval

City University, London
- I-MAESTRO: Interactive MultimediA Environment for technology enhanced muSic educaTion and cReative cOllaborative composition and performance
- Interactive MUSICNETWORK

De Montfort University, Leicester
- EARS: The ElectroAcoustic Resource Site
- Electroacoustic Music Studies Network (EMS)

University of Edinburgh
- Novel Numerical Approaches for Physical Modelling Sound Synthesis
- Science in the service of music

University of Glasgow
- Betweening: Teaching and learning between the disciplines: Music Technology
- OpenDrama: The Digital Heritage of Opera in the Network Environment
- MuTaTeD!: Music Tagging Type Definition
- MuTaTeD’II: A system for Music Information Retrieval of Encoded Music
- Multi-participant interactive music services
Goldsmiths' College, University of London
- Techniques and Algorithms for Understanding the Information Dynamics of Music
- Hierarchical Segmentation & Semantic Markup of Musical Signals
- TabXML: Document Representation for Historical Performance-Based Music Notation
- Algorithms for Musical Pattern Recognition and Extraction
- ECOLM II: Electronic Corpus of Lute Music
- Electronic Music Performance Interfaces that Learn from their users
- Cognitively Pertinent Models and Tools for the Discovery and Analysis of Structural Similarity in Musical Data
- OMRAS: Online music recognition and searching
- Startup of InterAction UK
- An Automated Ear Training Tool for Trainee Musicians

Imperial College
- Low-cost, efficient, parallel algorithms for musical electronic learning aids
King's College London
- Algorithms for Musical Pattern Recognition and Extraction
- OMRAS: Online music recognition and searching

University of Leeds
- I-MAESTRO: Interactive MultimediA Environment for technology enhanced muSic educaTion and cReative cOllaborative composition and performance
- Interactive MUSICNETWORK
- AXMEDIS: Automating Production of Cross Media Content for Multichannel Distribution
- Cost287-ConGAS: Gesture Controlled Audio Systems
- Optical Manuscript Recognition

Queen's University Belfast
- Extraction of Physical Model Parameters from Music

University of Salford
- Room Acoustics Parameters From Music
- Room acoustic active diffusers (RAAD)

University of Surrey
- Quality of service evaluation for spatial audio coding and processing systems
- The role of head movement in the analysis of spatial impression
- Hierarchical bandlimitation of surround sound - A psychoacoustical study
- Perceptually Motivated Measurement of Spatial Sound Attributes for audio-based information systems
- Subjective Quality Trade-offs in Consumer Multichannel Sound and Video Delivery Systems
University of York
- The role, diversity and future of the human voice in a technological age
- The synthesis of head-related transfer functions from parameterised morphological measurements
- Netvotech: Network on technology and the healthy human voice in performance
- High-level control of music synthesis for musicians
- Cost287-ConGAS: Gesture Controlled Audio Systems
- Interactive live composition using advanced real-time digital transformation processes
- Tactile controlled physical modelling music synthesis
- RIMM: Real-time Interactive Multiple Media Content Generation Using High Performance Computing and Multi-Parametric Human-Computer Interfaces
- Real-time natural singing synthesis using novel architectures

Other Projects
- Bach Digital - A pilot digital library giving access to materials concerning Bach autographs; autographs and related material from various collections have been digitized and brought together in a single virtual environment.
- Computer-Based Music Research: Artificial Intelligence Models of Musical Expression - A research project carried out at the Austrian Research Institute for Artificial Intelligence (ÖFAI).
- IPUS (Integrated Processing and Understanding of Signals) - Focused on developing a framework that exploits formal signal processing models to provide structured, bi-directional interaction between signal processing and signal interpretation components. Includes a publication list.
- Sound Source Separation - Masataka Goto, ETL. A sound source separation system that extracts information such as the kind of instrument, onset time, and loudness of each note from an acoustic signal consisting of sounds from several kinds of percussion instruments (a small illustrative onset-detection sketch follows this list).
Last Updated: 14 November, 2006. © Queen Mary, University of London 2006