CartoonModel
Inherits from: Object : Soundscape
CartoonModel gathers information about the cartoonification model, a simplified sound-design technique for creating sounds related to physical processes and spatial cues. Concatenative synthesis is used to simulate the sound objects, while the spatialisation model affords only amplitude attenuation by listener-to-sound-object distance and stereophonic panning. Textural, physical-modelling, and procedural synthesis could define further models, implemented as new Soundscape subclasses.
Overview: Graph is the sequencing format that stores information about the vertices (sound objects), their parameters, and the overall structural relationships; Runner generates messages about the actants navigating the graph structure (this determines which vertex identifiers (vIDs) are playing); GeoListener sends information about the listener interaction; finally, CartoonModel performs the spatial and concatenative sound synthesis. The designer defines the sound cues most relevant for representing a target soundscape, collects samples to simulate those cues, and organises them in a database. To determine the relevant cues, it is recommended that the designer first analyse the soundscape.
See also: Soundscape, GeoListener, Runner, Graph
Accessing Instance and Class Variables
bufDict
An IdentityDictionary indexing the Buffers where the samples are loaded. It is used by the play and modifplay methods to map the vertex names activated by actants to the allocated sound buffers.
gain
Parameter that amplifies all the vertices (i.e. the overall soundscape loudness).
m
The VirtualUnit/meter ratio, used to adjust the amplitude-by-distance function to the distance unit of measure.
samplesPath
The path of your sample database.
offsetVertexStandardListenedArea
In meters, the area where a sound object is perceived clearly: if the listener-vertex distance exceeds perceptionArea + offsetVertexStandardListenedArea, a low-pass filter is applied in this.filter. offsetVertexStandardListenedArea is a specific attribute of each vertex, while perceptionArea is a listener attribute.
Default value is 60 meters.
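The rule above can be sketched as a simple threshold test. This is an illustration only (the function name and signature are assumptions, not part of the class API), shown in Python for clarity:

```python
def lowpass_applies(listener_vertex_distance, perception_area,
                    offset_vertex_standard_listened_area=60):
    # True when the vertex lies beyond the clearly-perceived area,
    # i.e. when the low-pass filter should engage (default offset: 60 m)
    return listener_vertex_distance > perception_area + offset_vertex_standard_listened_area
```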
name
You may need it if you have several CartoonModel instances in the same soundscape, e.g. several sound zones of the soundscape, or several sound layers as groups of sound objects to be controlled together (gain, m, ...).
Class methods
initAudio(aSamplesPath, nameList, ext)
Initialisation method. aSamplesPath is the path of your sample database. MP3 format is not supported; for a list of supported formats see the SoundFile help.
NOTE: nameList and ext are optional. If you provide just the path and nameList = nil, the system fills the IdentityDictionary bufDict with the names of the sound files via the this.loadDatabase(aPath) method.
aSamplesPath - The path of your sample database.
nameList - a list of all the names of your sample files, those labels will name the vertexes.
ext - The extension of the audio files. Default value is "wav".
// creation and init
g = CartoonModel.new(runner, geoListener);
g.initAudio(aPath);
g.initAudio(aPath,[\v1,\v2,\v3], "wav");
sendDef
Sends the SynthDef to the server.
setListenedArea(vID, val, aGraph)
Sets the offsetVertexListenedArea parameter in the vertex options vector of a graph. Arguments: the vertex ID, the value of offsetVertexListenedArea, and a Graph object.
setDy(vID, val, aGraph)
Sets the automatic normalisation parameter in the vertex options vector of a graph. The parameter is used to correct for the recording distance when it differs from 5 m. Arguments: the vertex ID, the value of dy, and a Graph object.
setGain(value)
gain = value
readGain
return gain
setVirtualUnitMeterRatio(ratio)
m = ratio
fromRecDistanceToNormalisationAmp(recordingDistance)
Takes as argument the recording distance of a sound object and returns the amp value for the automatic normalisation parameter "dy" (see the filter method). The normalisation is computed with a second-degree polynomial approximation of the law: each time the distance doubles, the amplitude changes by -6 dB.
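The exact polynomial coefficients are not documented, but the data points given under the filter method (Rec_distance 2.5 m, 5 m, 10 m mapping to dy 0.5, 1, 2) are consistent with a simple linear ratio, which is itself equivalent to halving the amplitude per doubling of distance. A hedged Python sketch of that mapping (function name assumed):

```python
def rec_distance_to_dy(recording_distance, standard_distance=5.0):
    # -6 dB per doubling of distance means the amplitude halves per
    # doubling; the documented data points (2.5 m -> 0.5, 5 m -> 1,
    # 10 m -> 2) satisfy this with a linear ratio to the 5 m standard.
    return recording_distance / standard_distance
```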
filter (aXv, aYv, aA, aB, aOffsetVertexListenedArea, aDy)
It calculates the parameters for the synthesis: amp and cutFrequency. The amplitude is computed from the point-to-point vertex-listener distance. Following the OpenAL standard, the amplitude-by-distance formula is: amp = m*dy/(dy + (rol * (d - dy))). In real life, loudness changes by approximately -6 dB each time the distance doubles: if the loudness of a source is 50 dB at 10 m, then at d = 5 m, loudness = 56 dB. In SuperCollider, each time the amp doubles this simulates a change of +6 dB; for example, evaluate this:
[0.125, 0.25, 0.5, 1, 2].ampdb;
The variable offsetListenedArea (abbreviated rol) is a roll-off that increases the perception of a vertex at a given distance; dy is a per-vertex normalisation parameter that scales a sound depending on the distance at which it was recorded. The standard Rec_distance is 5 m, where dy = 1; if Rec_distance = 2.5 m, dy = 0.5; if Rec_distance = 10 m, dy = 2. The variable m is the VirtualUnit/meter ratio. If the vertex is a non-point source, the distance is computed as a point-to-rect distance: the distance between the listener and the nearest point of the rect representing the area of the non-point vertex. The cutFrequency is computed as: exp(10 - (7 * ((d - (m*perceptionArea*2)) / (m*offsetListenedArea + m*perceptionArea))));
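The two formulas above can be sketched as plain arithmetic. This is an illustration in Python, not part of the class: the function name, the defaults for perceptionArea and offsetListenedArea, and the parameter order are all assumptions.

```python
import math

def filter_params(xv, yv, a, b, rol, dy,
                  m=1.0, perception_area=100.0, offset_listened_area=60.0):
    # point-to-point listener-vertex distance (non-point sources would
    # use a point-to-rect distance instead, as described above)
    d = math.hypot(xv - a, yv - b)
    # OpenAL-style inverse-distance attenuation: amp = m*dy/(dy + rol*(d - dy))
    amp = m * dy / (dy + rol * (d - dy))
    # low-pass cutoff as documented
    cut = math.exp(10 - (7 * ((d - (m * perception_area * 2))
                              / (m * offset_listened_area + m * perception_area))))
    return amp, cut
```

Note that when d equals dy the roll-off term vanishes and amp reduces to m, the reference level at the normalised recording distance.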
aXv - x coordinate of a vertex.
aYv - y coordinate of a vertex.
aA - x Listener coordinate.
aB - y Listener coordinate.
aOffsetVertexListenedArea - vertex roll off.
aDy - The distance of recording if different from the standard 5m. From aDy the filter method computes "dy", that is the normalisation parameter.
play (message)
The play method takes message as an argument; message is the vertex activated by an actant. Play calls filter and GeoListener.calculatepanning, takes the resulting amp, pan, and cutFrequency values, and allocates a new Synth with those parameters. It stores the activated Synth in the synthplaying IdentityDictionary.
modifplay (message)
The modifplay method takes message as an argument; message is the new position and orientation of the listener. The method calls the methods that calculate amp, cutFrequency, and pan for all the active vertexes. Then it iterates over all the playing Synths (the synthplaying IdentityDictionary), modifying the synth parameters.
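The play/modifplay bookkeeping described above can be sketched as a dictionary of active voices keyed by vertex identifier. This is a hypothetical Python illustration of the flow; the record shape is a stand-in, not the SuperCollider Synth class:

```python
# hypothetical stand-in for the synthplaying IdentityDictionary
synthplaying = {}

def play(vid, amp, pan, cut_frequency):
    # allocate a "synth" and remember it under its vertex identifier
    synthplaying[vid] = {"amp": amp, "pan": pan, "cutFrequency": cut_frequency}

def modifplay(new_params):
    # new_params: vID -> recomputed (amp, pan, cutFrequency) after the
    # listener moved; update every active voice in place
    for vid, (amp, pan, cut) in new_params.items():
        if vid in synthplaying:
            synthplaying[vid].update(amp=amp, pan=pan, cutFrequency=cut)
```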