Companion Website :: Routledge
This is the companion website of the book chapter "Leveraging Online Audio Commons Content For Media Production" (2019) by Anna Xambó, Frederic Font, György Fazekas, and Mathieu Barthet, which appears in Michael Filimowicz (ed.) Foundations in Sound Design for Linear Media: An Interdisciplinary Approach, Routledge.
This work relates to the AudioCommons project which is funded by the European Commission through the Horizon 2020 programme, research and innovation grant 688382.
- Sound Examples
- AudioTexture
- Example 1 - Rain by Anna Xambó
- Example 2 - Footsteps by Anna Xambó
- Example 3 - Crowd by Anna Xambó
- Example 4 - Reshuffling a bass line by Anna Xambó
- Example 5 - MareNostrum by Anna Xambó
- SampleSurfer
- Example 1 - Edited vs. original sound by Anna Xambó
- Example 2 - AudioCommons Trap: A musical composition with edited loops by Anna Xambó
- Soundscape Composition
- Example 1 - Submerged by Andrew Thompson
- Example 2 - Space meal by Andrea Guidi
- Example 3 - A camper and his dog in France by Caryl Jones
- MIRLC (MIR and Live Coding)
- Example 1 - Demo by Anna Xambó
- Example 2 - Music improvisation by Jack Armitage
- Example 3 - Music improvisation by Alo Allik
- Example 4 - H2RI.04 by Anna Xambó (H2RI, pan y rosas 2018)
- Playsound.space
- Lab Activities
Developed by Le Sound, AudioTexture is a plugin prototype for sound texture synthesis that leverages Audio Commons by bringing CC-licensed audio content into the DAW. The AudioTexture plugin lets users generate sonic textures from audio recordings drawn from either online or local databases, within a DAW environment such as Logic Pro X, Ableton Live, or Reaper.
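AudioTexture's engine is proprietary, but the general idea behind sound texture synthesis from a recording can be sketched with granular techniques: slice the source into short windowed grains, reorder them, and overlap-add the result. The sketch below (all function names are hypothetical, not AudioTexture's API) illustrates this with NumPy, using noise as a stand-in for a recording:

```python
import numpy as np

def granulate(signal, grain_size=2048, hop=1024, seed=0):
    """Shuffle short windowed grains of `signal` and overlap-add them."""
    rng = np.random.default_rng(seed)
    env = np.hanning(grain_size)                       # smooth grain envelope
    starts = np.arange(0, len(signal) - grain_size, hop)
    rng.shuffle(starts)                                # scramble grain order
    out = np.zeros(len(starts) * hop + grain_size)
    for i, s in enumerate(starts):
        out[i * hop:i * hop + grain_size] += signal[s:s + grain_size] * env
    return out

sr = 44100
source = np.random.default_rng(1).normal(size=sr)      # 1 s of noise as a stand-in
texture = granulate(source)
```

Shortening `grain_size` in real time is roughly what produces the glitchy, fast-changing character described in the footsteps example below, while larger grains preserve more of the source's identity.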
Example 1 - Rain by Anna Xambó
This example explores the granular texture of rain drips with the mode Noisiness-Y of AudioTexture using Reaper. The unit size of the grains is changed in real time, creating a glitch effect that contrasts with the expected linearity of the sound. This example has been built upon the sound "Rain Drips" by DJMistressM from Freesound.
Example 2 - Footsteps by Anna Xambó
This example explores the granular texture of footsteps on leaves with the mode Position-X of AudioTexture using Reaper. The unit size of the grains is set small so that the sound grains are short and succeed one another quickly. The region of the target sound grains is changed in real time by moving the slider manually, and the rate of the sound is slowly increased by turning the knob, creating a rise in pitch and a high-pass-filter-like effect. This example has been built upon the sound "01-20 Footsteps, sneakers on dry leaves" by SpliceSound from Freesound.
Example 3 - Crowd by Anna Xambó
This example explores the granular texture of a large crowd at a medium distance with the mode Noisiness-Y of AudioTexture using Reaper. With real-time variations of the crossfade, the Y range and the vertical slider, the result embraces minimalist composition techniques. A long fade-out at the end is produced with the gain functionality of AudioTexture. This example has been built upon the sound "Large_crowd_medium_distance_stereo" by eguobyte from Freesound.
Example 4 - Reshuffling a bass line by Anna Xambó
This example reshuffles a bass line in real time. The reshuffling is made with the mode Brightness-Y of AudioTexture using Reaper. The example starts with dynamic changes of the Y range and the vertical slider to vary the regions of interest and the size of the range. The example progresses towards a breakbeat style as the unit size of the sound grains becomes increasingly shorter. It ends with a continuous decrease of the rate until the low frequencies of the bass line slowly disappear. This example has been built upon the sound "funk bass edit" by marvman from Freesound.
Example 5 - MareNostrum by Anna Xambó
This piece, entitled MareNostrum, explores the theme of supercomputing centers and the acoustic properties of large-scale computation systems in massive spaces, in particular how a supercomputing center of quantum computers might sound. The piece is based on the musical spatialization of sounds from crowdsourced online databases of the AudioCommons ecosystem, such as Freesound.org, combined with personal recordings from the Barcelona Supercomputing Center (BSC) and sound synthesis generated with SuperCollider. Some of the sounds were processed using AudioTexture, and the whole composition and performance was developed in SuperCollider. The piece can be heard in the embedded video below, recorded at the performance on August 9, 2018, at Cube Fest 2018, Moss Arts Center, Blacksburg, VA, USA.
MareNostrum at The Cube (August 9, 2018) by Anna Xambó.
- Xambó, A. (October 1, 2018). MareNostrum @ The Spatial Music Workshop & The Cube Fest 2018. In AudioCommons’s blog.
SampleSurfer, developed by Waves Audio Ltd., is another plugin for the Audio Commons Ecosystem: an audio content search engine based on semantic metadata and musical features. The plugin is designed to integrate Audio Commons sound and music samples into a DAW-based environment, providing basic editing capabilities (e.g. fades, trims) to streamline the music production workflow.
Example 1 - Edited vs. original sound by Anna Xambó
This example selects a short loop from the intro of the song with SampleSurfer and adds a flanger effect in Ableton Live to give the loop a more organic quality. This example is based on the original song interferences (2006) by ATHUSH from Jamendo.
Example 2 - AudioCommons Trap: A musical composition with edited loops by Anna Xambó
This example showcases the possibilities of SampleSurfer by selecting sounds from the online databases of Freesound and Jamendo matching certain criteria (in particular, the key of C minor) and creating a musical piece by adding effects in Ableton Live.
Three sounds serve as the basis for a trap beat and are modified with delay and beat-repeat effects.
Six sounds are then selected to be triggered with the Drum Rack module of Ableton Live, shaped by two audio effects, a beat repeat and a bit reduction:
- Simulated robot android cyborg or droid saying welcome (female voice) by Jagadamba
- Simulated robot android cyborg or droid saying welcome (male voice) by Jagadamba
- kitten15 by Department64
- Dialogue, Pained Yelp, Loud, E by InspectorJ
- Cyber Off by qubodup
- Hey You (2) by montblanccandies
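Key-constrained selection of the kind described above can also be scripted directly against the Freesound API, which indexes Audio Commons analysis descriptors such as tonality. A sketch of building such a query URL (no network call is made; the token is a placeholder, and the exact descriptor field name should be checked against the current Freesound API documentation):

```python
from urllib.parse import urlencode

BASE = "https://freesound.org/apiv2/search/text/"

def cminor_query(terms, token="YOUR_API_KEY"):
    """Build a Freesound text-search URL constrained to C-minor content."""
    params = {
        "query": terms,
        # Audio Commons tonality descriptor, as indexed by Freesound
        "filter": 'ac_tonality:"C minor"',
        "fields": "id,name,license,previews",
        "token": token,
    }
    return BASE + "?" + urlencode(params)

url = cminor_query("trap bass loop")
```

Fetching `url` with an HTTP client and a valid API key would return a JSON page of matching sounds, each with its license information for attribution.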
We provide here examples of short soundscape compositions created in the Fall of 2017 by students of the Sound Recording and Production Techniques module at Queen Mary University of London, led by Mathieu Barthet. The soundscape themes were derived by applying and adapting the participatory design technique described in Holmquist (2008). Students were invited to generate ideas across four categories: character, place/environment, situation/action, and mood. These ideas were combined randomly and formed the basis of soundscape themes after a reflection and simplification phase. Students were given the creative constraint of only using sounds sourced from Audio Commons or Apple Loops. They could edit and process the sounds in the DAW and were able to use Le Sound's AudioTexture plugin to generate novel sonic textures from Audio Commons audio content.
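The random combination step of the bootlegging technique is straightforward to simulate in code. The sketch below uses hypothetical idea pools (the actual student-generated ideas are not listed in this document) to draw one idea per category:

```python
import random

# Hypothetical idea pools for the four categories used in the module
ideas = {
    "character": ["camper", "doctor", "street performer"],
    "place/environment": ["underwater", "space station", "French countryside"],
    "situation/action": ["walking a dog", "eating a meal", "rushing to work"],
    "mood": ["dream-like", "overloaded", "nostalgic"],
}

def bootleg_theme(pools, seed=None):
    """Randomly combine one idea per category into a soundscape theme."""
    rng = random.Random(seed)
    return {category: rng.choice(options) for category, options in pools.items()}

theme = bootleg_theme(ideas, seed=42)
```

In the classroom version, the randomly drawn combination is then refined through the reflection and simplification phase before becoming a soundscape theme.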
Example 1 - Submerged by Andrew Thompson
Abstract: The piece follows a short dream sequence of an individual drowning under immense pressure in their life. Sounds sourced from Freesound.org are arranged to create a compelling underwater soundscape that evolves into the distant memory of a jazz street performer. AudioTexture provides an evolving bed for the soundscape by blending large grains of an underwater boat recording.
Example 2 - Space meal by Andrea Guidi
Abstract: A metaphoric conversation between a woman doctor, a friend and smart devices unfolds during the soundscape. The doctor’s attention is overloaded both by notifications and by her friend's complaints, while the listener's auditory experience is overloaded by the amplitude and presence of sounds from their meal (cutlery and chewing).
The Freesound database was accessed within the AudioTexture VST. Searches used keywords related to the project constraints (character, place/environment, situation/action, and mood). The same VST was then used to time-scramble, pitch-shift and filter the audio files to obtain a sound palette for the composition. Two main categories of musical events were created: audio textures and discrete events. The former were used to build the soundscape context, while the latter articulated a musical counterpoint. Finally, editing, dynamic processing and stereo imaging strategies were applied to finalise the composition.
Example 3 - A camper and his dog in France by Caryl Jones
Abstract: The resources for this composition solely included recorded sounds from Freesound, the concatenative synthesis plugin AudioTexture to treat some of these sounds, and audio samples from Apple Loops. The digital audio workstation (DAW) used was Logic, with Audio-Technica headphones for monitoring. The compositional approach was to create a figurative piece, with an abstract, dream-like edge, of a man walking his dog in France from a first-person perspective. Sounds were divided into five categories for the arrangement and laid out in a linear way to tell a story from morning to evening, with quick transitions between natural summer scenes. Sounds were arranged and mixed using trim, volume, pan automation, gain and EQ. The composition was mastered using a limiter to prevent clipping.
- Holmquist, L. E. (2008). Bootlegging: multidisciplinary brainstorming with cut-ups. In: Proceedings of the Tenth Anniversary Conference on Participatory Design 2008. Indiana University, pp. 158-161.
MIRLC (MIR and Live Coding)
MIRLC (Xambó et al. 2018) is a library designed to repurpose audio samples from Freesound, and applicable to local databases as well, by providing human-like queries and real-time performance capabilities. The system is built within the SuperCollider environment on top of the Freesound API. The library was presented at the New Interfaces for Musical Expression conference at Virginia Tech in 2018.
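MIRLC itself is written in SuperCollider, but the kind of Freesound API call that enables content-based chaining of samples can be illustrated independently. Freesound exposes a similar-sounds endpoint that returns sounds acoustically close to a given one; the sketch below only builds the request URL (the function name is ours, the token a placeholder, and no network call is made):

```python
from urllib.parse import urlencode

API = "https://freesound.org/apiv2"

def similar_sounds_url(sound_id, token="YOUR_API_KEY", fields="id,name,license"):
    """Build the URL for Freesound's content-based similarity search."""
    query = urlencode({"fields": fields, "token": token})
    return f"{API}/sounds/{sound_id}/similar/?{query}"

url = similar_sounds_url(1234)
```

Starting from a sound found by a text query and repeatedly following similarity results is what lets a live coder "explore the Freesound database in detail" while remaining open to surprise, as described in the improvisation examples below.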
Example 1 - Demo by Anna Xambó
This demo showcases the basic functionalities of MIRLCRep, querying by rhythmic and melodic characteristics.
MIRLCRep demo by Anna Xambó.
Example 2 - Music improvisation by Jack Armitage
Abstract: Being somewhat familiar with the Freesound database, I wanted to see if it could be used to create a shifting cinematic landscape through text-based search and sequencing alone. I chose four environments that I felt should have reasonable coverage in the database, and that would be sonically distinct: garden, street, city and airport. I came up with three or four search terms for each environment, checked that they returned at least one usable result, and improvised between them using a list of queries as a score. I did not feel in control of the output; the results were largely a surprise.
The following CC sounds were used in the piece: http://crowdj.net/routledge/jack-mirlc-credits.txt
Example 3 - Music improvisation by Alo Allik
Abstract: The MIRLC library opens up interesting new opportunities for live coders who seek to augment their practice with access to Freesound content. This can be done very efficiently on the fly with MIRLC, often with an element of surprise to the performer, which is frequently the desired behaviour of a system in the world of live coding. This improvisation experiment makes use of four core samples searched by keywords - rainstick, breaking glass, bullroarer and didgeridoo - with a certain sonic context in mind: rainstick for overall texture, breaking glass for punctuation, and the latter two for low-frequency contrast. The rest of the sampled content was retrieved with the similarity search functionality, which allows the live coder to explore the Freesound database in detail while being constantly surprised by the returned results and having to make compositional choices in response. The core samples were also subjected to further signal processing, including reverberation, playback-rate changes and pitch shifting, which, when applied simultaneously, create a different, more electronic-sounding outcome. The aesthetic approach here was to gradually move from unprocessed samples to almost unrecognisable digital phasing of sounds.
The following CC sounds were used in the piece: http://crowdj.net/routledge/alo-mirlc-credits.txt
Example 4 - H2RI.04 by Anna Xambó (H2RI, pan y rosas 2018)
H2RI is a generative album created by Anna Xambó in 2018. This track, entitled H2RI.04, is one of the album's 20 one-minute tracks, all generated using her self-built tool MIRLC, a library for using music information retrieval techniques in live coding. A basic rule shaped the audio sources of the album: only short sounds from the crowdsourced online sound database Freesound were used.
The following CC sounds were used in the piece: http://www.panyrosasdiscos.net/anna-xambo-h2ri-credits/
- Xambó, A., Roma, G., Lerch, A., Barthet, M. and Fazekas, G. (2018). Live repurposing of sounds: MIR explorations with personal and crowdsourced databases. In: Proceedings of the International Conference on New Interfaces for Musical Expression. pp. 364-369.
Playsound.space (Stolfi et al. 2018) is a web-based sound search and music creation tool providing access to online audio content from Audio Commons (Freesound). Playsound can be used in a varied range of contexts, from individual soundscape compositions to collaborative live music improvisations with laptop musicians. The tool was presented at the New Interfaces for Musical Expression conference at Virginia Tech in 2018.
Example 1 - Demo with laptop trios
We used Playsound.space in the context of free live music improvisation in small ensembles. The following video shows laptop trios composed of both non-musicians and musicians, who were able to play short pieces together after a brief introduction to the tool:
Example 2 - Crowd Noises by The Puppets
The following excerpt was produced during an improvisation involving a mixed ensemble of Playsound players and other instrumentalists. The excerpt was in turn uploaded to Freesound forming an iterative creative loop between sound producers and consumers:
Example 3 - Adaptation of Gertrude Stein's poem Tender Buttons by Ariane Stolfi
Playsound.space was used by Ariane Stolfi to generate a sonic interpretation of Gertrude Stein's poem Tender Buttons. The performer used excerpts of the poem as the source for semantic searches to obtain Creative Commons audio content illustrating the poem:
The following CC sounds were used in the piece: http://finetanks.com/records/playsound/gertrudestein_v1.txt
- Stolfi, A. de Souza, Ceriani, M., Turchet, L. and Barthet, M. (2018). Playsound.space: inclusive free music improvisations using Audio Commons. In: Proceedings of the International Conference on New Interfaces for Musical Expression. pp. 228-233.
Several lab activities can be organised to familiarise students with Audio Commons content and tools. We provide instruction sheets on how to use the following tools:
- Jamendo's Sound and Music Search Tool (MuSST): a web interface for accessing Audio Commons sounds and music pieces.
- Le Sound’s AudioTexture: a plugin for sound texture synthesis based on local or online audio content from Audio Commons.
- Playsound.space: a web interface to search for sounds and create music (e.g. live collaborative improvisations, soundscapes) using audio content from Freesound.