Mathieu Barthet
2020 – today
2024
- [c59] Wenqian Cui, Pedro Sarmento, Mathieu Barthet: MoodLoopGP: Generating Emotion-Conditioned Loop Tablature Music with Multi-granular Features. EvoMUSART 2024: 97-113
- [i14] Wenqian Cui, Pedro Sarmento, Mathieu Barthet: MoodLoopGP: Generating Emotion-Conditioned Loop Tablature Music with Multi-Granular Features. CoRR abs/2401.12656 (2024)
- [i13] Pedro Sarmento, Jackson Loth, Mathieu Barthet: Between the AI and Me: Analysing Listeners' Perspectives on AI- and Human-Composed Progressive Metal Music. CoRR abs/2407.21615 (2024)
2023
- [j10] Simin Yang, Mathieu Barthet, Courtney N. Reed, Elaine Chew: Do You Hear What I Hear? Computer 56(12): 4-6 (2023)
- [j9] Thomas Deacon, Patrick Healey, Mathieu Barthet: "It's cleaner, definitely": Collaborative Process in Audio Production. Comput. Support. Cooperative Work. 32(3): 475-505 (2023)
- [j8] Simin Yang, Courtney N. Reed, Elaine Chew, Mathieu Barthet: Examining Emotion Perception Agreement in Live Music Performance. IEEE Trans. Affect. Comput. 14(2): 1442-1460 (2023)
- [j7] Miguel Ceriani, Fabio Viola, Sasa Rudan, Francesco Antoniazzi, Mathieu Barthet, György Fazekas: Semantic integration of audio content providers through the Audio Commons Ontology. J. Web Semant. 77: 100787 (2023)
- [c58] Tyler Howard McIntosh, Orlando Woscholski, Mathieu Barthet: Affective Conditional Modifiers in Adaptive Video Game Music. Audio Mostly Conference 2023: 17-23
- [c57] Thomas Deacon, Mathieu Barthet: Invoke: A Collaborative Virtual Reality Tool for Spatial Audio Production Using Voice-Based Trajectory Sketching. Audio Mostly Conference 2023: 161-168
- [c56] Lily E. Montague, Mathieu Barthet: Collaboration on the Tracks: Ethnographically-Informed Design for Computer-Assisted Music Collaboration between Producers and Performers. Creativity & Cognition 2023: 391-392
- [c55] Sara Adkins, Pedro Sarmento, Mathieu Barthet: LooperGP: A Loopable Sequence Model for Live Coding Performance Using GuitarPro Tablature. EvoMUSART@EvoStar 2023: 3-19
- [c54] Pedro Sarmento, Adarsh Kumar, Yu-Hua Chen, CJ Carr, Zack Zukowski, Mathieu Barthet: GTR-CTRL: Instrument and Genre Conditioning for Guitar-Focused Music Generation with Transformers. EvoMUSART@EvoStar 2023: 260-275
- [c53] Yann Frachi, Guillaume Chanel, Mathieu Barthet: Affective gaming using adaptive speed controlled by biofeedback. ICMI Companion 2023: 238-246
- [c52] Andrea Martelloni, Andrew P. McPherson, Mathieu Barthet: Real-Time Percussive Technique Recognition and Embedding Learning for the Acoustic Guitar. ISMIR 2023: 121-128
- [c51] Max Graf, Mathieu Barthet: Reducing Sensing Errors in a Mixed Reality Musical Instrument. VRST 2023: 72:1-72:2
- [i12] Pedro Sarmento, Adarsh Kumar, Yu-Hua Chen, CJ Carr, Zack Zukowski, Mathieu Barthet: GTR-CTRL: Instrument and Genre Conditioning for Guitar-Focused Music Generation with Transformers. CoRR abs/2302.05393 (2023)
- [i11] Sebastian Löbbers, Mathieu Barthet, György Fazekas: AI as mediator between composers, sound designers, and creative media producers. CoRR abs/2303.01457 (2023)
- [i10] Sara Adkins, Pedro Sarmento, Mathieu Barthet: LooperGP: A Loopable Sequence Model for Live Coding Performance using GuitarPro Tablature. CoRR abs/2303.01665 (2023)
- [i9] Pedro Sarmento, Adarsh Kumar, Dekun Xie, CJ Carr, Zack Zukowski, Mathieu Barthet: ShredGP: Guitarist Style-Conditioned Tablature Generation. CoRR abs/2307.05324 (2023)
- [i8] Jackson Loth, Pedro Sarmento, CJ Carr, Zack Zukowski, Mathieu Barthet: ProgGP: From GuitarPro Tablature Neural Generation To Progressive Metal Production. CoRR abs/2307.05328 (2023)
- [i7] Andrea Martelloni, Andrew P. McPherson, Mathieu Barthet: Real-time Percussive Technique Recognition and Embedding Learning for the Acoustic Guitar. CoRR abs/2307.07426 (2023)
- [i6] Max Graf, Mathieu Barthet: Combining Vision and EMG-Based Hand Tracking for Extended Reality Musical Instruments. CoRR abs/2307.10203 (2023)
- [i5] Giovanni Bindi, Nils Demerlé, Rodrigo Diaz, David Genova, Aliénor Golvet, Ben Hayes, Jiawen Huang, Lele Liu, Vincent Martos, Sarah Nabi, Teresa Pelinski, Lenny Renault, Saurjya Sarkar, Pedro Sarmento, Cyrus Vahidi, Lewis Wolstanholme, Yixiao Zhang, Axel Roebel, Nick Bryan-Kinns, Jean-Louis Giavitto, Mathieu Barthet: AI (r)evolution - where are we heading? Thoughts about the future of music and sound technologies in the era of deep learning. CoRR abs/2310.18320 (2023)
2022
- [c50] Yann Frachi, Takuya Takahashi, Feiqi Wang, Mathieu Barthet: Design of Emotion-Driven Game Interaction Using Biosignals. HCI (33) 2022: 160-179
- [c49] Takuya Takahashi, Mathieu Barthet: Emotion-driven Harmonisation And Tempo Arrangement of Melodies Using Transfer Learning. ISMIR 2022: 741-748
- [c48] Max Graf, Mathieu Barthet: Mixed Reality Musical Interface: Exploring Ergonomics and Adaptive Hand Pose Recognition for Gestural Control. NIME 2022
2021
- [c47] Pedro Sarmento, Adarsh Kumar, CJ Carr, Zack Zukowski, Mathieu Barthet, Yi-Hsuan Yang: DadaGP: A Dataset of Tokenized GuitarPro Songs for Sequence Models. ISMIR 2021: 610-617
- [c46] Andrea Martelloni, Andrew McPherson, Mathieu Barthet: Guitar augmentation for Percussive Fingerstyle: Combining self-reflexive practice and user-centred design. NIME 2021
- [i4] Max Graf, Harold Chijioke Opara, Mathieu Barthet: An Audio-Driven System For Real-Time Music Visualisation. CoRR abs/2106.10134 (2021)
- [i3] Sebastian Löbbers, Mathieu Barthet, György Fazekas: Sketching sounds: an exploratory study on sound-shape associations. CoRR abs/2107.07360 (2021)
- [i2] Pedro Sarmento, Adarsh Kumar, CJ Carr, Zack Zukowski, Mathieu Barthet, Yi-Hsuan Yang: DadaGP: A Dataset of Tokenized GuitarPro Songs for Sequence Models. CoRR abs/2107.14653 (2021)
2020
- [c45] Andrea Martelloni, Andrew P. McPherson, Mathieu Barthet: Percussive Fingerstyle Guitar through the Lens of NIME: an Interview Study. NIME 2020: 440-445
- [i1] Pedro Sarmento, Ove Holmqvist, Mathieu Barthet: Musical Smart City: Perspectives on Ubiquitous Sonification. CoRR abs/2006.12305 (2020)
2010 – 2019
2019
- [j6] Luca Turchet, Mathieu Barthet: Co-Design of Musical Haptic Wearables for Electronic Music Performer's Communication. IEEE Trans. Hum. Mach. Syst. 49(2): 183-193 (2019)
- [c44] Fred Bruford, Mathieu Barthet, SKoT McDonald, Mark B. Sandler: Modelling Musical Similarity for Drum Patterns: A Perceptual Evaluation. Audio Mostly Conference 2019: 131-138
- [c43] Gary Bromham, David Moffat, Mathieu Barthet, Anne Danielsen, György Fazekas: The Impact of Audio Effects Processing on the Perception of Brightness and Warmth. Audio Mostly Conference 2019: 183-190
- [c42] Luca Turchet, Mathieu Barthet: Haptification of performer's control gestures in live electronic music performance. Audio Mostly Conference 2019: 244-247
- [c41] Thomas Deacon, Nick Bryan-Kinns, Patrick G. T. Healey, Mathieu Barthet: Shaping Sounds: The Role of Gesture in Collaborative Spatial Music Composition. Creativity & Cognition 2019: 121-132
- [c40] Fred Bruford, Mathieu Barthet, SKoT McDonald, Mark B. Sandler: Groove Explorer: An Intelligent Visual Interface for Drum Loop Library Navigation. IUI Workshops 2019
2018
- [j5] Luca Turchet, Carlo Fischione, Georg Essl, Damián Keller, Mathieu Barthet: Internet of Musical Things: Vision and Challenges. IEEE Access 6: 61994-62017 (2018)
- [j4] Luca Turchet, Andrew P. McPherson, Mathieu Barthet: Real-Time Hit Classification in a Smart Cajón. Frontiers ICT 5: 16 (2018)
- [c39] Anna Xambó, Johan Pauwels, Gerard Roma, Mathieu Barthet, György Fazekas: Jam with Jamendo: Querying a Large Music Collection by Chords from a Learner's Perspective. Audio Mostly Conference 2018: 30:1-30:7
- [c38] Luca Turchet, Mathieu Barthet: Jamming with a Smart Mandolin and Freesound-based Accompaniment. FRUCT 2018: 375-381
- [c37] Luca Turchet, Fabio Viola, György Fazekas, Mathieu Barthet: Towards a Semantic Architecture for the Internet of Musical Things. FRUCT 2018: 382-390
- [c36] Luca Turchet, Mathieu Barthet: Demo of interactions between a performer playing a Smart Mandolin and audience members using Musical Haptic Wearables. NIME 2018: 82-83
- [c35] Ariane de Souza Stolfi, Miguel Ceriani, Luca Turchet, Mathieu Barthet: Playsound.space: Inclusive Free Music Improvisations Using Audio Commons. NIME 2018: 228-233
- [c34] Anna Weisling, Anna Xambó, Ireti Olowe, Mathieu Barthet: Surveying the Compositional and Performance Practices of Audiovisual Practitioners. NIME 2018: 344-345
- [c33] Anna Xambó, Gerard Roma, Alexander Lerch, Mathieu Barthet, György Fazekas: Live Repurposing of Sounds: MIR Explorations with Personal and Crowdsourced Databases. NIME 2018: 364-369
- [c32] Fabio Viola, Ariane Stolfi, Alessia Milo, Miguel Ceriani, Mathieu Barthet, György Fazekas: Playsound.space: enhancing a live music performance tool with semantic recommendations. SAAM@ISWC 2018: 46-53
- [c31] Sophie Skach, Anna Xambó, Luca Turchet, Ariane Stolfi, Rebecca Stewart, Mathieu Barthet: Embodied Interactions with E-Textiles and the Internet of Sounds for Performing Arts. TEI 2018: 80-87
2017
- [j3] Yongmeng Wu, Leshao Zhang, Nick Bryan-Kinns, Mathieu Barthet: Open Symphony: Creative Participation for Audiences of Live Music Performances. IEEE Multim. 24(1): 48-62 (2017)
- [c30] Callum Goddard, Mathieu Barthet, Geraint A. Wiggins: Designing Computationally Creative Musical Performance Systems. Audio Mostly Conference 2017: 3:1-3:8
- [c29] Anand Subramaniam, Mathieu Barthet: Mood Visualiser: Augmented Music Visualisation Gauging Audience Arousal. Audio Mostly Conference 2017: 5:1-5:8
- [c28] Dorien Herremans, Simin Yang, Ching-Hua Chuan, Mathieu Barthet, Elaine Chew: IMMA-Emo: A Multimodal Interface for Visualising Score- and Audio-synchronised Emotion Annotations. Audio Mostly Conference 2017: 11:1-11:8
- [c27] Ariane Stolfi, Mathieu Barthet, Fábio Goródscy, Antonio Deusany de Carvalho Junior: Open Band: A Platform for Collective Sound Dialogues. Audio Mostly Conference 2017: 25:1-25:8
- [c26] Ireti Olowe, Mick Grierson, Mathieu Barthet: User Requirements for Live Sound Visualization System Using Multitrack Audio. Audio Mostly Conference 2017: 40:1-40:8
- [c25] Jon Pigrem, Mathieu Barthet: Datascaping: Data Sonification as a Narrative Device in Soundscape Composition. Audio Mostly Conference 2017: 43:1-43:8
- [c24] Ireti Olowe, Mathieu Barthet, Mick Grierson: FEATUR.UX.AV: A Live Sound Visualization System Using Multitrack Audio. Audio Mostly Conference 2017: 53:1-53:5
- [e2] György Fazekas, Mathieu Barthet, Tony Stockman: Proceedings of the 12th International Audio Mostly Conference on Augmented and Participatory Sound and Music Experiences, London, United Kingdom, August 23-26, 2017. ACM 2017, ISBN 978-1-4503-5373-1 [contents]
2016
- [j2] Pasi Saari, György Fazekas, Tuomas Eerola, Mathieu Barthet, Olivier Lartillot, Mark B. Sandler: Genre-Adaptive Semantic Computing and Audio-Based Modelling for Music Mood Annotation. IEEE Trans. Affect. Comput. 7(2): 122-135 (2016)
- [c23] Kate Hayes, Mathieu Barthet, Yongmeng Wu, Leshao Zhang, Nick Bryan-Kinns: A Participatory Live Music Performance with the Open Symphony System. CHI Extended Abstracts 2016: 313-316
- [c22] Thomas Deacon, Tony Stockman, Mathieu Barthet: User Experience in an Interactive Music Virtual Reality System: An Exploratory Study. CMMR 2016: 192-216
- [c21] Leshao Zhang, Yongmeng Wu, Mathieu Barthet: A Web Application for Audience Participation in Live Music Performance: The Open Symphony Use Case. NIME 2016: 170-175
- [c20] Ireti Olowe, Giulio Moro, Mathieu Barthet: residUUm: user mapping and performance strategies for multilayered live audiovisual generation. NIME 2016: 271-276
2015
- [c19] Mathieu Barthet, György Fazekas, Alo Allik, Mark B. Sandler: Moodplay: an interactive mood-based musical experience. Audio Mostly Conference 2015: 3:1-3:8
2014
- [c18] Tillman Weyde, Stephen Cottrell, Jason Dykes, Emmanouil Benetos, Daniel Wolff, Dan Tidhar, Alexander Kachkaev, Mark D. Plumbley, Simon Dixon, Mathieu Barthet, Nicolas Gold, Samer A. Abdallah, Aquiles Alancar-Brayner, Mahendra Mahey, Adam Tovell: Big Data for Musicology. DLfM@JCDL 2014: 1-3
- [c17] Chris Baume, György Fazekas, Mathieu Barthet, David Marston, Mark B. Sandler: Selection of Audio Features for Music Emotion Recognition Using Production Music. Semantic Audio 2014
- [c16] Sefki Kolozali, György Fazekas, Mathieu Barthet, Mark B. Sandler: A Framework for Automatic Ontology Generation Based on Semantic Audio Analysis. Semantic Audio 2014
- [c15] Ting Lou, Mathieu Barthet, György Fazekas, Mark B. Sandler: Evaluation and Improvement of the Mood Conductor Interactive System. Semantic Audio 2014
2013
- [j1] Sefki Kolozali, Mathieu Barthet, György Fazekas, Mark B. Sandler: Automatic Ontology Generation for Musical Instruments Based on Audio Analysis. IEEE Trans. Speech Audio Process. 21(10): 2207-2220 (2013)
- [c14] György Fazekas, Mathieu Barthet, Mark B. Sandler: Mood Conductor: Emotion-Driven Interactive Music Performance. ACII 2013: 726
- [c13] György Fazekas, Mathieu Barthet, Mark B. Sandler: Novel Methods in Facilitating Audience and Performer Interaction Using the Mood Conductor Framework. CMMR 2013: 122-147
- [c12] György Fazekas, Mathieu Barthet, Mark B. Sandler: Demo paper: The BBC Desktop Jukebox music recommendation system: A large scale trial with professional users. ICME Workshops 2013: 1-2
- [c11] Pasi Saari, Mathieu Barthet, György Fazekas, Tuomas Eerola, Mark B. Sandler: Semantic models of musical mood: Comparison between crowd-sourced and curated editorial tags. ICME Workshops 2013: 1-6
- [c10] Pasi Saari, Tuomas Eerola, György Fazekas, Mathieu Barthet, Olivier Lartillot, Mark B. Sandler: The Role of Audio and Tags in Music Mood Prediction: A Study Using Semantic Layer Projection. ISMIR 2013: 201-206
- [c9] Mathieu Barthet, David Marston, Chris Baume, György Fazekas, Mark B. Sandler: Design and Evaluation of Semantic Mood Models for Music Recommendation using Editorial Tags. ISMIR 2013: 421-426
- [e1] Mitsuko Aramaki, Mathieu Barthet, Richard Kronland-Martinet, Sølvi Ystad: From Sounds to Music and Emotions - 9th International Symposium, CMMR 2012, London, UK, June 19-22, 2012, Revised Selected Papers. Lecture Notes in Computer Science 7900, Springer 2013, ISBN 978-3-642-41247-9 [contents]
2012
- [c8] Mathieu Barthet, György Fazekas, Mark B. Sandler: Music Emotion Recognition: From Content- to Context-Based Models. CMMR 2012: 228-252
2011
- [c7] Mathieu Barthet, Simon Dixon: Ethnographic Observations of Musicologists at the British Library: Implications for Music Information Retrieval. ISMIR 2011: 353-358
- [c6] Sefki Kolozali, Mathieu Barthet, György Fazekas, Mark B. Sandler: Knowledge Representation Issues in Musical Instrument Ontology Design. ISMIR 2011: 465-470
2010
- [c5] Mathieu Barthet, Steven Hargreaves, Mark B. Sandler: Speech/Music Discrimination in Audio Podcast Using Structural Segmentation and Timbre Recognition. CMMR 2010: 138-162
- [c4] Sefki Kolozali, Mathieu Barthet, György Fazekas, Mark B. Sandler: Towards the Automatic Generation of a Semantic Web Ontology for Musical Instruments. SAMT 2010: 186-187
2000 – 2009
2007
- [c3] Mathieu Barthet, Richard Kronland-Martinet, Sølvi Ystad: Improving Musical Expressiveness by Time-Varying Brightness Shaping. CMMR 2007: 313-336
- [c2] Mathieu Barthet, Philippe Depalle, Richard Kronland-Martinet, Sølvi Ystad: The effect of Timbre in Clarinet Interpretation. ICMC 2007
2005
- [c1] Mathieu Barthet, Philippe Guillemain, Richard Kronland-Martinet, Sølvi Ystad: On the Relative Influence of even and odd harmonics in Clarinet Timbre. ICMC 2005