Centre for Digital Music - Education
If you’re a high-flying student with a passion for music and audio, you can do maths, and want to understand how computers and electronics shape today’s and tomorrow’s electronic music instruments, digital audio systems, music downloads, sound effects and games, then these degrees are designed with you in mind.
Two undergraduate programmes are available:
The BEng programme is aimed at mathematically able, technology-oriented students who have a passion for music and audio, and who wish to understand how technology is applied to music and audio, to learn to use the latest innovative equipment, and to pursue a future career in this field.
The MEng programme is aimed at high-flying, technology-oriented students who have a passion for music and audio, and who wish to understand how cutting-edge technology is applied to music and audio, and to pursue a future career in this field, perhaps designing the next generation of equipment.
These programmes provide engineering students with training in advanced music and audio technologies. Modules run by the Centre include: Fundamentals of DSP, Advanced Transform Methods, Real-Time DSP, Statistical DSP, Audio & Speech Processing, Music Analysis and Synthesis, Digital Audio Effects and Intelligent Signal Processing. The course also features modules in multimedia systems and a summer project on state-of-the-art research topics in the Centre.
PhD Study Opportunities in the Centre for Digital Music
The Centre for Digital Music at Queen Mary University of London is a world-leading research group in the field of Music & Audio Technology. Our research covers everything in digital music and audio: from analysis, understanding and retrieval to delivery, synthesis and sound rendering. We seek not only to investigate new applications of digital signal processing (DSP), but also to push forward the frontiers of DSP itself.
Applications are invited for PhD research study in any of our areas of interest. Possible topic areas might include: automatic music transcription; harmony analysis; auditory scene analysis; beat tracking & rhythm analysis; blind source separation & independent component analysis (ICA); joint audio/video tracking and transcription; A/D & D/A conversion; sigma delta modulation; scalable audio codecs; object based coding; sparse representations & sparse coding; music information retrieval; semantic markup, musical metadata & MPEG7 applications; audio effects; 3D sound & multi-loudspeaker rendering systems; intelligent microphone arrays; automatic accompaniment; interactive performance/internet jamming; interactive compositional tools; biologically inspired audio processing; and pervasive audio ("my music wherever I am"). See the PhD research page for details about funding, facilities, application procedures, and more.