
Publications


2018

[1]
Proceedings of the 19th International Society for Music Information Retrieval Conference, ISMIR 2018, Paris, France, September 23-27, 2018. In E Gómez, X Hu, E Humphrey, and E Benetos, editors, ISMIR, 2018. [ bib ]
[2]
S Alharbi and M Purver. Applying distributional semantics to enhance classifying emotions in Arabic tweets. pages 15--34, Apr 2018. [ bib | DOI ]
[3]
H Ali, SN Tran, E Benetos, and AS d'Avila Garcez. Speaker recognition with hybrid features from a deep belief network. Neural Computing and Applications, 29(6):13--19, Mar 2018. [ bib | DOI | http ]
[4]
A Allik, F Thalmann, and M Sandler. MusicLynx: Exploring music through artist similarity graphs. In The Web Conference 2018 - Companion of the World Wide Web Conference, WWW 2018, pages 167--170, Apr 2018. [ bib | DOI ]
[5]
JDK Armitage and A McPherson. Crafting digital musical instruments: An exploratory workshop study. Jun 2018. [ bib ]
[6]
C Baker, H Ranaivoson, B Greinke, and N Bryan-Kinns. Wear: Wearable technologists engage with artists for responsible innovation: Processes and progress. Virtual Creativity, 8(1):91--105, Jun 2018. [ bib | DOI ]
[7]
H Bear and E Benetos. An extensible cluster-graph taxonomy for open set sound scene analysis. In http://dcase.community/workshop2018/, Surrey, UK, Nov 2018. [ bib | http ]
[8]
S Bechhofer, G Fazekas, and K Page. Preface. In ACM International Conference Proceeding Series, Oct 2018. [ bib ]
[9]
E Benetos, D Stowell, and M Plumbley. Approaches to complex sound scene analysis. In T Virtanen, M Plumbley, and D Ellis, editors, Computational Analysis of Sound Scenes and Events, number 8 in Signals & Communication, pages 215--242. Springer International Publishing, 1 edition, Jan 2018. [ bib | DOI | http ]
[10]
B Bengler, F Martin, and N Bryan-Kinns. Collidoscope. Interactions, 25(2):12--13, Feb 2018. [ bib | DOI ]
[11]
SMA Bin. The Show Must Go Wrong: Towards an understanding of audience perception of error in digital musical instrument performance. PhD thesis, May 2018. [ bib ]
[12]
SMA Bin, N Bryan-Kinns, and AP McPherson. Risky business: Disfluency as a design strategy. Blacksburg, VA, USA, Jun 2018. [ bib ]
[13]
G Bromham, D Moffat, M Barthet, and G Fazekas. The impact of compressor ballistics on the perceived style of music. In 145th Audio Engineering Society International Convention, AES 2018, Jan 2018. [ bib ]
[14]
N Bryan-Kinns. Case study of data mining mutual engagement. In Electronic Workshops in Computing (eWiC), Jul 2018. [ bib | DOI ]
[15]
N Bryan-Kinns, W Wang, and T Ji. Exploring interactivity and co-creation in rural China. Interacting with Computers, 30(4):273--292, Jul 2018. [ bib | DOI ]
[16]
N Bryan-Kinns, W Wang, and Y Wu. Thematic analysis for sonic interaction design. In Electronic Workshops in Computing (eWiC), Jul 2018. [ bib | DOI ]
[17]
K Buys and A McPherson. Real-time bowed string feature extraction for performance applications. In https://zenodo.org/record/1422597, Cyprus, Jul 2018. [ bib | DOI ]
[18]
B Chettri, S Mishra, B Sturm, and E Benetos. Analysing the predictions of a CNN-based replay spoofing detection system. In http://www.slt2018.org/, pages 92--97, Athens, Greece, IEEE, Dec 2018. [ bib ]
[19]
B Chettri, BLT Sturm, and E Benetos. Analysing replay spoofing countermeasure performance under varied conditions. Aalborg, Denmark, IEEE, Sep 2018. [ bib | http ]
[20]
K Choi, G Fazekas, M Sandler, and K Cho. A comparison of audio signal preprocessing methods for deep neural networks on music tagging. In Proc. of the 26th European Signal Processing Conference (EUSIPCO 2018), 3-7 Sept, Rome, Italy, 2018. [ bib ]
[21]
K Choi, G Fazekas, M Sandler, and K Cho. The effects of noisy labels on deep convolutional neural networks for music tagging. IEEE Transactions on Emerging Topics in Computational Intelligence, 2(2):139--149, Mar 2018. [ bib | DOI | http ]
[22]
ET Chourdakis and JD Reiss. From my pen to your ears: Automatic production of radio plays from unstructured story text. Limassol, Cyprus, Jul 2018. [ bib | http ]
[23]
S Dixon, E Gómez, and A Volk. Editorial: Introducing the transactions of the international society for music information retrieval. Transactions of the International Society for Music Information Retrieval, 1(1):1--3, Jan 2018. [ bib | DOI ]
[24]
M Droog-Hayes, G Wiggins, and M Purver. Automatic detection of narrative structure for high-level story representation. In Proceedings of AISB Annual Convention 2018, Jan 2018. [ bib ]
[25]
S Duffy and PGT Healey. Refining musical performance through overlap. Hacettepe Egitim Dergisi, 33(Special Issue):316--333, Jan 2018. [ bib | DOI ]
[26]
S Duffy and M Pearce. What makes rhythms hard to perform? An investigation using Steve Reich's Clapping Music. PLoS One, 13(10):e0205847, Oct 2018. [ bib | DOI | http ]
[27]
D Fano Yela, D Stowell, and M Sandler. Does k matter? k-NN hubness analysis for kernel additive modelling vocal separation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), volume 10891 LNCS, pages 280--289, Jun 2018. [ bib | DOI ]
[28]
J Flynn and JD Reiss. Improving the frequency response magnitude and phase of analogue-matched digital filters. In 144th Audio Engineering Society Convention 2018, Jan 2018. [ bib ]
[29]
J Freeman, G Wiggins, G Starks, and M Sandler. A concise taxonomy for describing data as an art material. In Leonardo, volume 51, pages 75--79, Feb 2018. [ bib | DOI ]
[30]
K Frieler, F Höger, M Pfleiderer, and S Dixon. Two web applications for exploring melodic patterns in jazz solos. In Proceedings of the 19th International Society for Music Information Retrieval Conference, ISMIR 2018, pages 777--783, Jan 2018. [ bib ]
[31]
RP Galindo Esparza, PGT Healey, L Weaver, and M Delbridge. Augmented embodiment: Developing interactive technology for stroke survivors (short paper). In ACM International Conference Proceeding Series, Jun 2018. [ bib | DOI ]
[32]
C Goddard, M Barthet, and G Wiggins. Assessing musical similarity for computational music creativity. Journal of the Audio Engineering Society, Apr 2018. [ bib | DOI ]
[33]
L Goodman, N Bryan-Kinns, Y Wu, S Liu, and C Baker. Wear sustain network: Ethical and sustainable technology innovation in wearables and etextiles. In 2018 IEEE Games, Entertainment, Media Conference, GEM 2018, pages 1--3, Oct 2018. [ bib | DOI ]
[34]
PGT Healey, JP de Ruiter, and GJ Mills. Editors' introduction: Miscommunication. Topics in Cognitive Science, 10(2):264--278, May 2018. [ bib | DOI | http ]
[35]
PGT Healey, GJ Mills, A Eshghi, and C Howes. Running repairs: Coordinating meaning in dialogue. Topics in Cognitive Science, 10(2):367--388, Apr 2018. [ bib | DOI | http ]
[36]
PGT Healey and MRJ Purver. Self-repetition in dialogue and monologue, Nov 2018. [ bib | .pdf ]
[38]
S Heitlinger, N Bryan-Kinns, and R Comber. Connected seeds and sensors: Co-designing internet of things for sustainable smart cities with urban food-growing communities. In ACM International Conference Proceeding Series, volume 2, Sep 2018. [ bib | DOI ]
[39]
Y Hu, X Du, N Bryan-Kinns, and Y Guo. Identifying divergent design thinking through the observable behavior of service design novices. International Journal of Technology and Design Education, Oct 2018. [ bib | DOI ]
[40]
RH Jack, A Mehrabi, T Stockman, and A McPherson. Action-sound latency and the perceived quality of digital musical instruments: Comparing professional percussionists and amateur musicians. Music Perception, Aug 2018. [ bib | DOI ]
[41]
N Jillings, B De Man, R Stables, and JD Reiss. Investigation into the effects of subjective test interface choice on the validity of results. In 145th Audio Engineering Society International Convention, AES 2018, Jan 2018. [ bib ]
[42]
P Kudumakis, J Corral García, I Barbancho, LJ Tardón, and M Sandler. Enabling interactive and interoperable semantic music applications. In R Bader, editor, Springer Handbook of Systematic Musicology, number 45 in Springer Handbooks, pages 911--921. Springer, Berlin, Heidelberg, Jan 2018. [ bib | DOI ]
[43]
P Kudumakis and S Dixon. DMRN+13: Digital Music Research Network workshop proceedings 2018. Centre for Digital Music, Queen Mary University of London, Dec 2018. [ bib | DOI ]
[44]
L Lavia, HJ Witchel, F Aletta, J Steffens, A Fiebig, J Kang, C Howes, and PGT Healey. Non-participant observation methods for soundscape design and urban planning. In Handbook of Research on Perception-Driven Approaches to Urban Assessment and Design, pages 73--98. Jan 2018. [ bib | DOI ]
[45]
S Li, S Dixon, and MD Plumbley. A demonstration of hierarchical structure usage in expressive timing analysis by model selection tests. In Chinese Control Conference, CCC, volume 2018-July, pages 3190--3195, Oct 2018. [ bib | DOI ]
[46]
B Liang, G Fazekas, and M Sandler. Measurement, recognition and visualisation of piano pedalling gestures and techniques. JAES Special Issue on Participatory Sound and Music Interaction Using Semantic Audio, 2(47):xxxx--xxxx, Jun 2018. [ bib | DOI ]
[47]
B Liang, G Fazekas, and M Sandler. Measurement, recognition, and visualization of piano pedaling gestures and techniques. AES: Journal of the Audio Engineering Society, 66(6):448--456, Jun 2018. [ bib | DOI ]
[48]
B Liang, G Fazekas, and M Sandler. Piano legato-pedal onset detection based on a sympathetic resonance measure. In Proceedings of the 26th European Signal Processing Conference (EUSIPCO 2018), pages 2484--2488. Rome, IEEE, Sep 2018. [ bib | DOI ]
[49]
L Marengo, G Fazekas, and A Tombros. I wonder... Inquiry techniques as a method to gain insights into people’s encounters with visual art. In Proc. International Conference on Museums and the Web 2018, April 18-21, Vancouver, Canada, 2018. [ bib | http ]
[50]
M Martinez Ramirez and J Reiss. End-to-end equalization with convolutional neural networks. Sep 2018. [ bib | http ]
[51]
A McArthur, M Sandler, and R Stewart. Perception of mismatched auditory distance - cinematic vr. In Proceedings of the AES International Conference, volume 2018-August, pages 24--33, Jan 2018. [ bib ]
[52]
R McCabe and PGT Healey. Miscommunication in doctor–patient communication. Topics in Cognitive Science, 10(2):409--424, Apr 2018. [ bib | DOI ]
[53]
A Mehrabi, K Choi, S Dixon, and M Sandler. Similarity measures for vocal-based drum sample retrieval using deep convolutional auto-encoders. In ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings, volume 2018-April, pages 356--360, Sep 2018. [ bib | DOI ]
[54]
L Men and N Bryan-Kinns. Lemo: Supporting collaborative music making in virtual reality. In 2018 IEEE 4th VR Workshop on Sonic Interactions for Virtual Environments, SIVE 2018, Dec 2018. [ bib | DOI ]
[55]
A Mesaros, T Heittola, E Benetos, P Foster, M Lagrange, T Virtanen, and M Plumbley. Detection and classification of acoustic scenes and events: Outcome of the DCASE 2016 challenge. IEEE/ACM Transactions on Audio, Speech and Language Processing, 26:379--393, Feb 2018. [ bib | DOI | http ]
[56]
A Milo, N Bryan-Kinns, and JD Reiss. Graphical research tools for acoustic design training: Capturing perception in architectural settings. In Handbook of Research on Perception-Driven Approaches to Urban Assessment and Design, pages 397--433. Jan 2018. [ bib | DOI ]
[57]
S Mishra, BL Sturm, and S Dixon. Understanding a deep machine listening model through feature inversion. In Proceedings of the 19th International Society for Music Information Retrieval Conference, ISMIR 2018, pages 755--762, Jan 2018. [ bib ]
[58]
S Mishra, BL Sturm, and S Dixon. “What are you listening to?” Explaining predictions of deep machine listening systems. In European Signal Processing Conference, volume 2018-September, pages 2260--2264, Nov 2018. [ bib | DOI ]
[59]
D Moffat and J Reiss. Objective evaluations of synthesised environmental sounds. Aveiro, Portugal, Sep 2018. [ bib | http ]
[60]
D Moffat and MB Sandler. Adaptive ballistics control of dynamic range compression for percussive tracks. In 145th Audio Engineering Society International Convention, AES 2018, Jan 2018. [ bib ]
[61]
D Moffat, F Thalmann, and M Sandler. Towards a semantic web representation and application of audio mixing rules. Huddersfield, UK, Sep 2018. [ bib ]
[62]
DJ Moffat and JD Reiss. Perceptual evaluation of synthesized sound effects. ACM Transactions on Applied Perception (TAP), 15(2), Apr 2018. [ bib | DOI | http ]
[63]
V Morfi and D Stowell. Deep learning for audio event detection and tagging on low-resource datasets. Applied Sciences, 8(8), Aug 2018. [ bib | DOI | http ]
[64]
F Morreale, J Armitage, and A McPherson. Effect of instrument structure alterations on violin performance. Frontiers in Psychology, Dec 2018. [ bib | DOI ]
[65]
J Mycroft, T Stockman, and JD Reiss. A prototype mixer to improve cross-modal attention during audio mixing. In ACM International Conference Proceeding Series, Sep 2018. [ bib | DOI ]
[66]
E Nakamura, E Benetos, K Yoshii, and S Dixon. Towards complete polyphonic music transcription: Integrating multi-pitch detection and rhythm quantization. pages 101--105, Calgary, Canada, IEEE, Apr 2018. [ bib | http ]
[67]
I Nolasco and E Benetos. To bee or not to bee: Investigating machine learning approaches for beehive sound recognition. In http://dcase.community/documents/workshop2018/proceedings/DCASE2018Workshop_Nolasco_131.pdf, Surrey, UK, Nov 2018. [ bib | http ]
[68]
K O'Hanlon and MB Sandler. Improved detection of semi-percussive onsets in audio using temporal reassignment. In ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings, volume 2018-April, pages 611--615, Sep 2018. [ bib | DOI ]
[69]
M Panteli, E Benetos, and S Dixon. A review of manual and computational approaches for the study of world music corpora. Journal of New Music Research, 47:176--189, Jan 2018. [ bib | DOI ]
[70]
LS Pardue, A McPherson, and D Overholt. Improving the instrumental learning experience through complexity management. In Proceedings of the 15th Sound and Music Computing Conference: Sonic Crossings, SMC 2018, pages 150--157, Jan 2018. [ bib ]
[71]
J Pauwels, G Fazekas, and M Sandler. Recommending songs to music learners based on chord content. In Proceedings of the 2018 Joint Workshop on Machine Learning for Music. Stockholm, Sweden, Jul 2018. [ bib ]
[72]
J Pauwels and M Sandler. pywebaudioplayer: Bridging the gap between audio processing code and attractive visualisations based on web technology. In Proceedings of the 4th Web Audio Conference (WAC). Berlin, Germany, Sep 2018. [ bib ]
[73]
J Pauwels, A Xambó, G Roma, M Barthet, and G Fazekas. Exploring real-time visualisations to support chord learning with a large music collection. In Proceedings of the 4th Web Audio Conference (WAC). Berlin, Germany, Sep 2018. [ bib ]
[74]
M Pearce and M Rohrmeier. Musical syntax ii: Empirical perspectives. In Springer Handbooks, pages 487--505. Jan 2018. [ bib | DOI ]
[75]
MT Pearce. Statistical learning and probabilistic prediction in music cognition: Mechanisms of stylistic enculturation. Annals of the New York Academy of Sciences, May 2018. [ bib | DOI ]
[76]
H Peng and JD Reiss. Why can you hear a difference between pouring hot and cold water? An investigation of temperature dependence in psychoacoustics. In 145th Audio Engineering Society International Convention, AES 2018, Jan 2018. [ bib ]
[77]
A Pras, B De Man, and JD Reiss. A case study of cultural influences on mixing practices. In 144th Audio Engineering Society Convention 2018, Jan 2018. [ bib ]
[78]
MRJ Purver, J Hough, and C Howes. Computational models of miscommunication phenomena. Topics in Cognitive Science, Mar 2018. [ bib | DOI | .pdf ]
[79]
DR Quiroga-Martinez, NC Hansen, A Højlund, M Pearce, E Brattico, and P Vuust. Reduced prediction error responses in high- as compared to low-uncertainty musical contexts. bioRxiv, Sep 2018. [ bib | DOI ]
[80]
M Rohrmeier and M Pearce. Musical syntax i: Theoretical perspectives. In Springer Handbooks, pages 473--486. Jan 2018. [ bib | DOI ]
[81]
DRW Sears, MT Pearce, J Spitzer, WE Caplin, and S McAdams. Expectations for tonal cadences: Sensory and cognitive priming effects. Quarterly Journal of Experimental Psychology, pages 1747021818814472--1747021818814472, Nov 2018. [ bib | DOI | http ]
[82]
R Selfridge, D Moffat, E Avital, and J Reiss. Creating real-time aeroacoustic sound effects using physically informed models. Journal of the Audio Engineering Society, 66(7/8):594--607, Aug 2018. [ bib | DOI | http ]
[83]
R Selfridge, JD Reiss, and EJ Avital. Physically derived synthesis model of an edge tone. In 144th Audio Engineering Society Convention 2018, Jan 2018. [ bib ]
[84]
D Sheng and G Fazekas. Feature design using audio decomposition for intelligent control of the dynamic range compressor. In Proc. of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), April 15-20, Calgary, Canada, 2018. [ bib | http ]
[85]
D Sheng and G Fazekas. Feature selection for dynamic range compressor parameter estimation. In Proc. of the 144th Convention of the Audio Engineering Society, 23-26 May, Milan, Italy, 2018. [ bib | http ]
[86]
RC Shukla, RL Stewart, A Roginska, and MB Sandler. User selection of optimal HRTF sets via holistic comparative evaluation. In http://www.aes.org/e-lib/inst/browse.cfm?elib=19677, pages 1--10, Redmond, WA, USA, Aug 2018. Audio Engineering Society, New York, NY, USA. [ bib | http ]
[87]
S Skach, R Stewart, and PGT Healey. Smart arse: Posture classification with textile sensors in trousers. In ICMI 2018 - Proceedings of the 2018 International Conference on Multimodal Interaction, pages 116--124, Oct 2018. [ bib | DOI ]
[88]
S Skach, A Xambó, L Turchet, A Stolfi, RL Stewart, and MHE Barthet. Embodied interactions with e-textiles and the internet of sounds for performing arts. Stockholm, Sweden, Mar 2018. [ bib | DOI ]
[89]
AG Stockman and D Al-Thani. Evaluating an interface for cross-modal information seeking. Interacting with Computers, Sep 2018. [ bib | DOI ]
[90]
AG Stockman and O Metatla. “I hear you”: Understanding awareness information exchange in an audio-only workspace. Montreal, Apr 2018. [ bib | DOI ]
[91]
T Stockman and S Wilkie. Perception of objects that move in depth, using ecologically valid audio cues. Applied Acoustics, Jan 2018. [ bib | DOI ]
[92]
A Stolfi, J Sokolovskis, F Gorodscy, F Iazzetta, and M Barthet. Audio semantics: Online chat communication in open band participatory music performances. AES: Journal of the Audio Engineering Society, 66(11):910--921, Nov 2018. [ bib | DOI ]
[93]
D Stoller, V Akkermans, and S Dixon. Detection of cut-points for automatic music rearrangement. In IEEE International Workshop on Machine Learning for Signal Processing, MLSP, volume 2018-September, Oct 2018. [ bib | DOI ]
[94]
D Stoller, S Ewert, and S Dixon. Adversarial semi-supervised audio source separation applied to singing voice extraction. In ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings, volume 2018-April, pages 2391--2395, Sep 2018. [ bib | DOI ]
[95]
D Stoller, S Ewert, and S Dixon. Jointly detecting and separating singing voice: A multi-task approach. Jun 2018. [ bib ]
[96]
D Stowell, MD Wood, H Pamuła, Y Stylianou, and H Glotin. Automatic acoustic detection of birds through deep learning: The first bird audio detection challenge. Methods in Ecology and Evolution, Nov 2018. [ bib | DOI ]
[97]
F Thalmann, T Wilmering, and MB Sandler. Cultural heritage documentation and exploration of live music events with linked data. In ACM International Conference Proceeding Series, pages 1--5, Oct 2018. [ bib | DOI ]
[98]
F Thalmann, L Thompson, and M Sandler. A user-adaptive automated DJ web app with object-based audio and crowd-sourced decision trees. Berlin, Sep 2018. [ bib ]
[99]
L Turchet and M Barthet. Co-design of musical haptic wearables for electronic music performer's communication. IEEE Transactions on Human-Machine Systems, Dec 2018. [ bib | DOI ]
[100]
L Turchet and M Barthet. Jamming with a smart mandolin and freesound-based accompaniment. In Conference of Open Innovation Association, FRUCT, volume 2018-November, pages 375--381, Dec 2018. [ bib | DOI ]
[101]
L Turchet, C Fischione, G Essl, D Keller, and M Barthet. Internet of musical things: Vision and challenges. IEEE Access, 6:61994--62017, Sep 2018. [ bib | DOI ]
[102]
L Turchet, A McPherson, and M Barthet. Co-design of a smart cajón. AES: Journal of the Audio Engineering Society, 66(4):220--230, Apr 2018. [ bib | DOI ]
[103]
L Turchet, A McPherson, and M Barthet. Real-time hit classification in a smart cajón. Frontiers in ICT, 5, Jan 2018. [ bib | DOI ]
[104]
L Turchet, F Viola, G Fazekas, and M Barthet. Towards a semantic architecture for the internet of musical things. In Conference of Open Innovation Association, FRUCT, volume 2018-November, pages 382--390, Dec 2018. [ bib | DOI ]
[105]
JJ Valero-Mas, E Benetos, and JM Iñesta. A supervised classification approach for note tracking in polyphonic piano transcription. Journal of New Music Research, 47(3):249--263, Jun 2018. [ bib | DOI ]
[106]
F Viola, A Stolfi, A Milo, M Ceriani, M Barthet, and G Fazekas. Playsound.space: Enhancing a live music performance tool with semantic recommendations. In ACM International Conference Proceeding Series, pages 46--53, Oct 2018. [ bib | DOI ]
[107]
F Viola, L Turchet, F Antoniazzi, and G Fazekas. C minor: A semantic publish/subscribe broker for the internet of musical things. In Conference of Open Innovation Association, FRUCT, volume 2018-November, pages 405--415, Dec 2018. [ bib | DOI ]
[108]
C Wang, E Benetos, X Meng, and E Chew. Towards HMM-based glissando detection for recordings of Chinese bamboo flute. In http://ismir2018.ircam.fr/pages/events-lbd.html, Paris, France, Sep 2018. [ bib ]
[109]
J Weaver, M Barthet, and E Chew. Analysis of piano duo tempo changes in varying convolution reverberation conditions. In 145th Audio Engineering Society International Convention, AES 2018, Jan 2018. [ bib ]
[110]
WJ Wilkinson, JD Reiss, and D Stowell. A generative model for natural sounds based on latent force modelling. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), volume 10891 LNCS, pages 259--269, Jun 2018. [ bib | DOI ]
[111]
T Wilmering, F Thalmann, and MB Sandler. Exploration of Grateful Dead concerts and memorabilia on the semantic web. In CEUR Workshop Proceedings, volume 2180, Monterey, CA, Oct 2018. [ bib ]
[112]
Y Wu and N Bryan-Kinns. Musicking with an interactive musical system: The effects of task motivation and user interface mode on non-musicians’ creative engagement. International Journal of Human Computer Studies, 122:61--77, Aug 2018. [ bib | DOI ]
[113]
A Xambó, G Roma, A Lerch, M Barthet, and G Fazekas. Live repurposing of sounds: MIR explorations with personal and crowd-sourced databases. In Proc. of the New Interfaces for Musical Expression (NIME), 3-6 June, Blacksburg, VA, USA, 2018. [ bib ]
[114]
A Xambó, J Pauwels, G Roma, M Barthet, and G Fazekas. Jam with jamendo: Querying a large music collection by chords from a learner's perspective. In ACM International Conference Proceeding Series, Sep 2018. [ bib | DOI ]
[115]
A Ycart and E Benetos. A-MAPS: Augmented MAPS dataset with rhythm and key annotations. Paris, Sep 2018. [ bib ]
[116]
A Ycart and E Benetos. Polyphonic music sequence transduction with meter-constrained LSTM networks. pages 386--390, Calgary, Canada, IEEE, Apr 2018. [ bib ]
[117]
DF Yela, S Ewert, K O'Hanlon, and MB Sandler. Shift-invariant kernel additive modelling for audio source separation. In ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings, volume 2018-April, pages 616--620, Sep 2018. [ bib | DOI ]
[118]
V Zappi and A McPherson. Hackable instruments: Supporting appropriation and modification in digital musical interaction. Frontiers in ICT, 5(26), Oct 2018. [ bib | DOI ]
[119]
L Zhang and PGT Healey. Human, chameleon or nodding dog? pages 428--436, Jan 2018. [ bib | DOI ]

This file was generated by bibtex2html 1.98.
