Amélie Anglade

Contact Details

Title: Research Student
Tel: Internal: 7480
National: 020 7882 7480
International: +44 20 7882 7480
Fax:
National: 020 7882 7997
International: +44 20 7882 7997
Email:
amelie.anglade@elec.qmul.ac.uk
Room: Eng. 104

Research Group: Centre for Digital Music

Supervisor: Simon Dixon

Research Topic: High Level Logical Music Descriptors for Automatic Music Classification, Retrieval and Knowledge Discovery

The amount of musical data available online is enormous and grows every day. Content providers and customers face a common difficulty: organising their huge music libraries so that each song can be easily retrieved, and not necessarily only by its name. Classifications by artist, genre or even mood are generally provided. Most of these are hand-built by experts, but because classifying such amounts of data is expensive and time-consuming, interest in automatic music classification is growing.

Most automatic music classification algorithms are based on the so-called "bag-of-frames" (or "bag-of-features") approach. These techniques use only low-level descriptors of music, focusing mostly on timbral texture, rhythmic content, pitch content (melody/harmony), or a combination of the three.
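
As an illustration of this baseline, the following minimal Python sketch (assuming the librosa and scikit-learn libraries; the file names and labels are hypothetical) collapses frame-wise MFCCs into their global mean and standard deviation, discarding all temporal ordering, before handing the fixed-length vector to a standard classifier:

    import numpy as np
    import librosa
    from sklearn.neighbors import KNeighborsClassifier

    def bag_of_frames(path, n_mfcc=13):
        # Summarise a track by global statistics of its frame-wise MFCCs;
        # the temporal order of the frames is discarded entirely.
        y, sr = librosa.load(path, sr=22050, mono=True)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, n_frames)
        return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

    # Hypothetical labelled collection.
    tracks = [("blues_01.wav", "blues"), ("jazz_01.wav", "jazz"),
              ("blues_02.wav", "blues"), ("jazz_02.wav", "jazz")]
    X = np.array([bag_of_frames(path) for path, _ in tracks])
    labels = [label for _, label in tracks]

    clf = KNeighborsClassifier(n_neighbors=1).fit(X, labels)
    print(clf.predict([bag_of_frames("unknown.wav")]))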

The bag-of-frames approach has been shown to be inappropriate for at least three reasons. First, most of the time there is no direct mapping between the acoustical properties of a piece and the mental representation of a musical entity (such as genre or mood). Second, contrary to the bag-of-frames assumption, the contribution of a musical event to perceptual similarity is not proportional to its statistical importance: rare musical events can even be the most informative ones. Finally, the bag-of-features approach ignores the temporal dimension of the data, and it has been shown that using sequences, rather than only the average values or global statistical distributions of features over a whole passage or piece, is crucial in the retrieval process.
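
A toy example makes the last point concrete: under a bag representation, two chord sequences containing exactly the same chords in a different order are indistinguishable, even though only one of them forms a cadence (the sequences below are hypothetical illustrations):

    from collections import Counter

    # Same chord multiset, different temporal order.
    cadence   = ["C", "F", "G", "C"]   # closes with a V-I resolution
    scrambled = ["G", "C", "C", "F"]   # same chords, no cadence

    print(Counter(cadence) == Counter(scrambled))  # True: identical "bags"
    print(cadence == scrambled)                    # False: the sequences differ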

Thus, to overcome the limitations of this approach, this project aims to automatically build high-level music descriptors that explain the musical content as a musicologist would (i.e. taking into account the sequences of events, the very rare but very informative events, etc.). For that purpose we adopt a logic-based representation of musical events, together with logical inference (namely Inductive Logic Programming) of higher-level phenomena.

We chose logic because temporal relations between musical events are easily expressed in a relational framework. Moreover, logical inference of rules takes all events into account, even the rare ones. Another advantage of logical rules is that they are human-readable (or can automatically be transcribed into a human-readable format), so automatically extracted patterns or rules expressed as logical formulae can be transmitted as they are (or with little translation) to end users. In other words, thanks to the expressiveness of logic, the characterisation or classification models are transparent to the user. Additionally, and contrary to most statistical models, logic-based inference can make use of background knowledge. Finally, logical inference has already been used successfully in several music-related projects, for instance to induce rules about popular music harmonisation, counterpoint and expressive performance.

Building on these promising applications of logic to music, we are developing a logic-based reasoning system able to characterise songs for classification or similarity evaluation purposes. The system takes a database of audio and/or symbolic examples to characterise. These examples are analysed by a musical event extractor using either symbolic or audio features. Finally, the relational description of the examples resulting from this analysis is given to an inference system, which derives musical rules that hold for the examples. An example of such an automatically derived rule for the blues genre would be: 12_bar_structure(X) ∧ blues_scale(X) ∧ is_syncopation(X) => blues(X).
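
The inference step can be pictured as a search for rules that cover the positive examples while excluding the negatives. The following deliberately naive Python sketch, a stand-in for a real ILP engine using hypothetical predicates and songs, conveys this generate-and-test idea:

    from itertools import combinations

    # Relational descriptions produced by a (hypothetical) event extractor:
    # each song is described by the set of ground predicates that hold for it.
    songs = {
        "song_a": {"12_bar_structure", "blues_scale", "is_syncopation"},
        "song_b": {"12_bar_structure", "blues_scale", "is_syncopation"},
        "song_c": {"blues_scale", "sonata_form"},
    }
    positives = {"song_a", "song_b"}  # labelled blues
    negatives = {"song_c"}            # labelled non-blues

    def covers(rule, song):
        # A conjunctive rule covers a song if all its predicates hold for it.
        return rule <= songs[song]

    # Generate-and-test over conjunctions of up to three predicates: keep
    # the rules covering every positive example and no negative one.
    vocabulary = sorted(set().union(*songs.values()))
    for size in (1, 2, 3):
        for body in map(set, combinations(vocabulary, size)):
            if all(covers(body, s) for s in positives) and \
               not any(covers(body, s) for s in negatives):
                print(" ∧ ".join(p + "(X)" for p in sorted(body)) + " => blues(X)")

A real ILP system (e.g. Aleph or Progol) additionally handles first-order variables, background knowledge and noisy data, but the covering test above is the core of the search.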

We envisage three kinds of application for this system:

  1. Classification and retrieval of pieces similar to a given set of examples, for instance for recommender systems.
  2. Music discovery: some of the rules derived using ILP might be new and interesting rules describing particular musical phenomena in logical formulae (hence human-readable, understandable and interpretable).
  3. Querying databases using logical formulae: finding music that matches a descriptor explicitly given by the user as a simple logical formula (written by hand or previously inferred by the system), i.e. without any audio or score example; a minimal sketch of such a query follows this list.
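
As a sketch of the third application (with hypothetical songs and predicates), a user-supplied conjunction can be matched directly against the stored relational descriptions, so no audio or score example is needed:

    # Relational descriptions of an indexed collection (hypothetical data).
    library = {
        "song_a": {"12_bar_structure", "blues_scale", "is_syncopation"},
        "song_b": {"sonata_form", "blues_scale"},
        "song_c": {"12_bar_structure", "blues_scale"},
    }

    def query(formula):
        # Return every song whose facts satisfy the conjunctive formula.
        return [song for song, facts in library.items() if formula <= facts]

    # A descriptor written by hand (or previously induced by the system).
    print(query({"12_bar_structure", "blues_scale"}))  # ['song_a', 'song_c']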

Because harmony is a high-level descriptor of music (focusing on the structure, progression and relation of chords), we have concentrated on this dimension first as a proof of concept, working on the automatic induction of the harmony rules underlying a genre, a composer or a user's preference.
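
One possible relational encoding of harmony records the temporal succession of chords as ground facts over which such rules can be induced. A minimal sketch, where the progression and the next_chord predicate are illustrative assumptions rather than the project's actual representation:

    # A chord progression as scale degrees: a hypothetical 12-bar blues.
    progression = ["I", "I", "I", "I", "IV", "IV", "I", "I", "V", "IV", "I", "V"]

    # Ground facts recording the temporal succession of chords, in the
    # spirit of a next_chord(Degree1, Degree2) predicate.
    facts = {("next_chord", a, b) for a, b in zip(progression, progression[1:])}

    for name, a, b in sorted(facts):
        print("%s(%s, %s)" % (name, a, b))
    # Transitions such as next_chord(V, IV) and next_chord(IV, I) are the
    # raw material from which harmony rules for a genre could be induced.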

Publications, Presentations and Seminars

Publications

Seminars

I have also presented my research at the following seminars:
  • "Logic-based Modelling of Musical Harmony for Automatic Characterisation, Classification, and Recommendation", presented both as part of the ESSIP Seminar Series at Columbia University, and at New York University on 21st May 2010.
  • "Virtual Communities for Creating Shared Music Channels", MMV-C4DM seminars at Queen Mary University of London, 30th April 2007.
  • "Towards Logic-based Representations of Musical Harmony for Classification, Retrieval and Knowledge Discovery" and "Virtual Communities for Creating Shared Music Channels", Music Technology Group Seminars at Universitat Pompeu Fabra, Barcelona, 12th February 2009.

Other events

Finally, I participated in the "Sorted Sound" event organised by the Dana Centre on 5th June 2008, as one of the speakers presenting the Centre for Digital Music's research projects to a general audience.

Professional activities

Committees

Over the past two years I have been heavily involved in several professional associations and committees. The one to which I have dedicated most of my free time is the IEEE. Within the IEEE I am part of the following committees and groups:
  • QMUL IEEE Student Branch: I was Communication Officer from May 2007 until October 2008 and was elected Chair of the Student Branch (October 2008-November 2009).
  • IEEE United Kingdom and Republic of Ireland Section Student Activities Committee: I joined this committee in January 2009.
  • IEEE Region 8 (Europe, Middle East and Africa) Student Activities Committee: I was appointed to this committee as Region 8 Student Branch Coordinator in 2009 and then Region 8 Student Representative in 2010.
In addition to these IEEE activities, I have joined the Women in Science and Engineering Society at Queen Mary University of London (WISE@QMUL) as part of its Operating Committee.

Conference and congress organisation

Peer-review activities

I was a reviewer for:
  • the 10th International Conference on Music Information Retrieval (ISMIR 2009)
  • the International Workshop on Machine Learning and Music 2009 (MML 2009)
  • the 11th International Conference on Music Information Retrieval (ISMIR 2010)

Technical courses/internships

  • Attended the Machine Learning Summer School 2008 (MLSS08), Île de Ré, France, 1st-15th September 2008.
  • Completed a two-month internship at the Music Technology Group, Universitat Pompeu Fabra, Barcelona (19th January - 15th March 2009), during which I worked with Dr Rafael Ramirez, an expert in Inductive Logic Programming techniques applied to music.