Encodes signals in a variety of configurations to a first order Ambisonic signal (B-format). PanB is SuperCollider's inbuilt equivalent.
in |
The input signal, an array: [in0, in1, ... inN] |
encoder |
FoaEncoderMatrix or FoaEncoderKernel instance. |
mul |
Output will be multiplied by this value. |
add |
This value will be added to the output. |
A B-format signal as an array of channels: [w, x, y, z]
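A minimal usage sketch (assuming the ATK quark is installed and the server is booted; the encoder shown is one of several options illustrated below):

```supercollider
// encode a mono source to B-format -- a sketch, not a complete example
~encoder = FoaEncoderMatrix.newOmni;    // omnidirectional encoder

// result is a B-format array of channels: [w, x, y, z]
{ FoaEncode.ar(PinkNoise.ar, ~encoder) }.play;
```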
The examples below are intended to briefly illustrate some of the first order encoding options made available in the Ambisonic Toolkit. The user is encouraged to carefully review the features of FoaEncoderMatrix and FoaEncoderKernel to gain a deeper understanding of the flexibility of these tools.
Available encoders include monophonic (as an omnidirectional soundfield, planewave or frequency spreading), stereophonic, and varieties of pantophonic (2D surround) and periphonic (3D surround). Microphone array encoding is also supported.
As the Ambisonic technique is a hierarchical system, numerous options for playback are possible. These include two channel stereo, two channel binaural, pantophonic and full 3D periphonic. With the examples below, we'll take advantage of this by first choosing a suitable decoder with which to audition.
Choose a decoder suitable for your system, as illustrated here. You'll end up defining ~decoder
and ~renderDecode
.
Encoded as an omnidirectional soundfield, PinkNoise is used as the example sound source. In a well-aligned, damped studio environment, this usually sounds "in the head". FoaPush is used to "push" the omnidirectional soundfield so that it becomes a planewave (infinite distance, in an anechoic environment) arriving from some direction.
The soundfield is controlled by MouseX and MouseY, where MouseX specifies the incident azimuth angle (pi to -pi; left to right of display) and MouseY the FoaPush angle (0 to pi/2; bottom to top of display). With the mouse at the bottom of the display, the soundfield remains omnidirectional. Placed at the top of the display, the soundfield becomes directional, and varying left/right position will vary the incident azimuth of the resulting planewave.
If you haven't already chosen a ~decoder
and defined ~renderDecode
, do so now.
// ------------------------------------------------------------
// omni encoder
// mono pink noise source

// define encoder matrix
~encoder = FoaEncoderMatrix.newOmni

// inspect
~encoder.kind
~encoder.numChannels
~encoder.dirChannels

(
{
    var sig;            // audio signal
    var angle, azim;    // angle and azimuth control

    // display encoder and decoder
    "Ambisonic encoding via % encoder".format(~encoder.kind).postln;
    "Ambisonic decoding via % decoder".format(~decoder.kind).postln;

    // angle ---> top       = push to plane wave
    //            bottom    = omni-directional
    angle = MouseY.kr(pi/2, 0);

    // azimuth -> hard left     = back
    //            centre        = front
    //            hard right    = back
    azim = MouseX.kr(pi, -pi);

    // ------------------------------------------------------------
    // test sig
    sig = PinkNoise.ar;    // mono pink noise

    // ------------------------------------------------------------
    // encode
    sig = FoaEncode.ar(sig, ~encoder);

    // ------------------------------------------------------------
    // transform
    sig = FoaTransform.ar(sig, 'push', angle, azim);

    // ------------------------------------------------------------
    // decode (via ~renderDecode)
    ~renderDecode.value(sig, ~decoder)

}.scope;
)
// ------------------------------------------------------------
Encoded as a frequency spread soundfield, PinkNoise is used as the example sound source. This sounds spread across the soundfield, with the various frequency components appearing in various places. FoaPush is used to "push" the spread soundfield so that it becomes a planewave (infinite distance, in an anechoic environment) arriving from some direction.
The soundfield is controlled by MouseX and MouseY, where MouseX specifies the incident azimuth angle (pi to -pi; left to right of display) and MouseY the FoaPush angle (0 to pi/2; bottom to top of display). With the mouse at the bottom of the display, the soundfield remains spread. Placed at the top of the display, the soundfield becomes directional, and varying left/right position will vary the incident azimuth of the resulting planewave.
If you haven't already chosen a ~decoder
and defined ~renderDecode
, do so now.
// ------------------------------------------------------------
// frequency spreading encoder
// mono pink noise source

// define encoder kernel
~encoder = FoaEncoderKernel.newSpread

// inspect
~encoder.kind
~encoder.numChannels
~encoder.dirChannels

(
{
    var sig;            // audio signal
    var angle, azim;    // angle and azimuth control

    // display encoder and decoder
    "Ambisonic encoding via % encoder".format(~encoder.kind).postln;
    "Ambisonic decoding via % decoder".format(~decoder.kind).postln;

    // angle ---> top       = push to plane wave
    //            bottom    = omni-directional
    angle = MouseY.kr(pi/2, 0);

    // azimuth -> hard left     = back
    //            centre        = front
    //            hard right    = back
    azim = MouseX.kr(pi, -pi);

    // ------------------------------------------------------------
    // test sig
    sig = PinkNoise.ar;    // mono pink noise

    // ------------------------------------------------------------
    // encode
    sig = FoaEncode.ar(sig, ~encoder);

    // ------------------------------------------------------------
    // transform
    sig = FoaTransform.ar(sig, 'push', angle, azim);

    // ------------------------------------------------------------
    // decode (via ~renderDecode)
    ~renderDecode.value(sig, ~decoder)

}.scope;
)
// ------------------------------------------------------------

// free kernel
~encoder.free
Encoded as a frequency diffused soundfield, PinkNoise is used as the example sound source. This sounds diffused across the soundfield, with the various frequency components appearing in various places, with various phases. FoaPush is used to "push" the diffused soundfield so that it becomes a planewave (infinite distance, in an anechoic environment) arriving from some direction.
The soundfield is controlled by MouseX and MouseY, where MouseX specifies the incident azimuth angle (pi to -pi; left to right of display) and MouseY the FoaPush angle (0 to pi/2; bottom to top of display). With the mouse at the bottom of the display, the soundfield remains spread. Placed at the top of the display, the soundfield becomes directional, and varying left/right position will vary the incident azimuth of the resulting planewave.
If you haven't already chosen a ~decoder
and defined ~renderDecode
, do so now.
// ------------------------------------------------------------
// frequency diffusion encoder
// mono pink noise source

// define encoder kernel
~encoder = FoaEncoderKernel.newDiffuse

// inspect
~encoder.kind
~encoder.numChannels
~encoder.dirChannels

(
{
    var sig;            // audio signal
    var angle, azim;    // angle and azimuth control

    // display encoder and decoder
    "Ambisonic encoding via % encoder".format(~encoder.kind).postln;
    "Ambisonic decoding via % decoder".format(~decoder.kind).postln;

    // angle ---> top       = push to plane wave
    //            bottom    = omni-directional
    angle = MouseY.kr(pi/2, 0);

    // azimuth -> hard left     = back
    //            centre        = front
    //            hard right    = back
    azim = MouseX.kr(pi, -pi);

    // ------------------------------------------------------------
    // test sig
    sig = PinkNoise.ar;    // mono pink noise

    // ------------------------------------------------------------
    // encode
    sig = FoaEncode.ar(sig, ~encoder);

    // ------------------------------------------------------------
    // transform
    sig = FoaTransform.ar(sig, 'push', angle, azim);

    // ------------------------------------------------------------
    // decode (via ~renderDecode)
    ~renderDecode.value(sig, ~decoder)

}.scope;
)
// ------------------------------------------------------------

// free kernel
~encoder.free
Here we encode four channels of decorrelated PinkNoise as a decorrelated soundfield, resulting in a maximally diffuse soundfield. FoaPush is used to "push" the soundfield so that it becomes a planewave (infinite distance, in an anechoic environment) arriving from some direction. This technique gives the opportunity to continuously modulate between a directional and a diffuse soundfield.
The soundfield is controlled by MouseX and MouseY, where MouseX specifies the incident azimuth angle (pi to -pi; left to right of display) and MouseY the FoaPush angle (0 to pi/2; bottom to top of display). With the mouse at the bottom of the display, the soundfield remains omnidirectional. Placed at the top of the display, the soundfield becomes directional, and varying left/right position will vary the incident azimuth of the resulting planewave.
If you haven't already chosen a ~decoder
and defined ~renderDecode
, do so now.
// ------------------------------------------------------------
// A to B encoder
// decorrelated pink noise source

// define encoder matrix
~encoder = FoaEncoderMatrix.newAtoB

// inspect
~encoder.kind
~encoder.numChannels
~encoder.dirChannels

(
{
    var sig;            // audio signal
    var angle, azim;    // angle and azimuth control

    // display encoder and decoder
    "Ambisonic encoding via % encoder".format(~encoder.kind).postln;
    "Ambisonic decoding via % decoder".format(~decoder.kind).postln;

    // angle ---> top       = push to plane wave
    //            bottom    = omni-directional
    angle = MouseY.kr(pi/2, 0);

    // azimuth -> hard left     = back
    //            centre        = front
    //            hard right    = back
    azim = MouseX.kr(pi, -pi);

    // ------------------------------------------------------------
    // test sig
    sig = -3.dbamp * PinkNoise.ar([1, 1, 1, 1]);    // 4 channels decorrelated pink noise

    // ------------------------------------------------------------
    // encode
    sig = FoaEncode.ar(sig, ~encoder);

    // ------------------------------------------------------------
    // transform
    sig = FoaTransform.ar(sig, 'push', angle, azim);

    // ------------------------------------------------------------
    // decode (via ~renderDecode)
    ~renderDecode.value(sig, ~decoder)

}.scope;
)
// ------------------------------------------------------------
This example is somewhat unconventional as regards the literature. Four microphones (omnis) are placed around the performer in a tetrahedron. This is then matrixed into B-format.
As the performer rotates and moves about, the image shifts through the sound-scene. In a compositional context, FoaPush could be used to control the soundfield.
If you haven't already chosen a ~decoder
and defined ~renderDecode
, do so now.
// ------------------------------------------------------------
// A to B encoder
// A-format soundfile read from disk

// define encoder matrix
~encoder = FoaEncoderMatrix.newAtoB('flrd')    // for Thomas
~encoder = FoaEncoderMatrix.newAtoB('flr')     // for Cross

// inspect
~encoder.kind
~encoder.numChannels
~encoder.dirChannels.raddeg

// read a whole sound into memory
// remember to free the buffer later!
// (boot the server, if you haven't!)
~sndbuf = Buffer.read(s, Atk.userSoundsDir ++ "/a-format/Thomas_Mackay.wav")
~sndbuf = Buffer.read(s, Atk.userSoundsDir ++ "/a-format/Cross_Tenor_Sax.wav")

(
{
    var sig;    // audio signal

    // display encoder and decoder
    "Ambisonic encoding via % encoder".format(~encoder.kind).postln;
    "Ambisonic decoding via % decoder".format(~decoder.kind).postln;

    // ------------------------------------------------------------
    // test sig
    sig = PlayBuf.ar(~sndbuf.numChannels, ~sndbuf, BufRateScale.kr(~sndbuf), doneAction: 2);    // soundfile

    // ------------------------------------------------------------
    // encode
    sig = FoaEncode.ar(sig, ~encoder);

    // ------------------------------------------------------------
    // decode (via ~renderDecode)
    ~renderDecode.value(sig, ~decoder)

}.scope;
)

// free buffer
~sndbuf.free
// ------------------------------------------------------------
In this example we first encode a single channel of PinkNoise into a stereophonic signal with Pan2. FoaZoom is then used to balance the soundfield across the x-axis (front/back).
The soundfield is controlled by MouseX and MouseY, where MouseX specifies the left to right position of the stereo panned source and MouseY the FoaZoom front to back position (distortion angle). Moving the mouse in a circular motion results in a circular motion of the sound.
If you haven't already chosen a ~decoder
and defined ~renderDecode
, do so now.
// ------------------------------------------------------------
// stereo encoder
// stereo panned mono pink noise source

// define encoder matrix
~encoder = FoaEncoderMatrix.newStereo

// inspect
~encoder.kind
~encoder.numChannels
~encoder.dirChannels.raddeg

(
{
    var sig;            // audio signal
    var angle, azim;    // angle and azimuth control

    // display encoder and decoder
    "Ambisonic encoding via % encoder".format(~encoder.kind).postln;
    "Ambisonic decoding via % decoder".format(~decoder.kind).postln;

    // angle ---> top       = zoom to plane wave at front
    //            bottom    = zoom to plane wave at back
    angle = MouseY.kr(pi/2, pi.neg/2);

    // azimuth -> hard left     = back
    //            centre        = front
    //            hard right    = back
    azim = MouseX.kr(pi, -pi);

    // ------------------------------------------------------------
    // test sig
    sig = PinkNoise.ar;    // mono pink noise

    // ------------------------------------------------------------
    // pan (encode) to stereo
    sig = Pan2.ar(sig, azim.neg/pi);

    // ------------------------------------------------------------
    // encode
    sig = FoaEncode.ar(sig, ~encoder);

    // ------------------------------------------------------------
    // transform
    sig = FoaTransform.ar(sig, 'zoom', angle);

    // ------------------------------------------------------------
    // decode (via ~renderDecode)
    ~renderDecode.value(sig, ~decoder)

}.scope;
)
// ------------------------------------------------------------
For this example we'll look at encoding stereo soundfiles.
The stereo encoder places the left channel at +pi/4 and the right at -pi/4. Compare to the Super Stereo encoder below.
If you haven't already chosen a ~decoder
and defined ~renderDecode
, do so now.
// ------------------------------------------------------------
// stereo encoder
// stereo soundfile read from disk

// define encoder matrix
~encoder = FoaEncoderMatrix.newStereo(pi/4)

// inspect
~encoder.kind
~encoder.numChannels
~encoder.dirChannels.raddeg

// read a whole sound into memory
// remember to free the buffer later!
// (boot the server, if you haven't!)
~sndbuf = Buffer.read(s, Atk.userSoundsDir ++ "/stereo/The_City_Waites-The_Downfall.wav")
~sndbuf = Buffer.read(s, Atk.userSoundsDir ++ "/stereo/The_City_Waites-An_Old.wav")
~sndbuf = Buffer.read(s, Atk.userSoundsDir ++ "/stereo/Aurora_Surgit-Lux_Aeterna.wav")
~sndbuf = Buffer.read(s, Atk.userSoundsDir ++ "/stereo/Aurora_Surgit-Dies_Irae.wav")

(
{
    var sig;    // audio signal

    // display encoder and decoder
    "Ambisonic encoding via % encoder".format(~encoder.kind).postln;
    "Ambisonic decoding via % decoder".format(~decoder.kind).postln;

    // ------------------------------------------------------------
    // test sig
    sig = PlayBuf.ar(~sndbuf.numChannels, ~sndbuf, BufRateScale.kr(~sndbuf), doneAction: 2);    // soundfile

    // ------------------------------------------------------------
    // encode
    sig = FoaEncode.ar(sig, ~encoder);

    // ------------------------------------------------------------
    // decode (via ~renderDecode)
    ~renderDecode.value(sig, ~decoder)

}.scope;
)

// free buffer
~sndbuf.free
// ------------------------------------------------------------
Super Stereo is the classic Ambisonic method to encode stereophonic files, and is considered to be optimal for frontal stereo encoding.
If you haven't already chosen a ~decoder
and defined ~renderDecode
, do so now.
// ------------------------------------------------------------
// super stereo encoder
// stereo soundfile read from disk

// define encoder kernel
~encoder = FoaEncoderKernel.newSuper

// inspect
~encoder.kind
~encoder.numChannels
~encoder.dirChannels.raddeg

// read a whole sound into memory
// remember to free the buffer later!
// (boot the server, if you haven't!)
~sndbuf = Buffer.read(s, Atk.userSoundsDir ++ "/stereo/The_City_Waites-The_Downfall.wav")
~sndbuf = Buffer.read(s, Atk.userSoundsDir ++ "/stereo/The_City_Waites-An_Old.wav")
~sndbuf = Buffer.read(s, Atk.userSoundsDir ++ "/stereo/Aurora_Surgit-Lux_Aeterna.wav")
~sndbuf = Buffer.read(s, Atk.userSoundsDir ++ "/stereo/Aurora_Surgit-Dies_Irae.wav")

(
{
    var sig;    // audio signal

    // display encoder and decoder
    "Ambisonic encoding via % encoder".format(~encoder.kind).postln;
    "Ambisonic decoding via % decoder".format(~decoder.kind).postln;

    // ------------------------------------------------------------
    // test sig
    sig = PlayBuf.ar(~sndbuf.numChannels, ~sndbuf, BufRateScale.kr(~sndbuf), doneAction: 2);    // soundfile

    // ------------------------------------------------------------
    // encode
    sig = FoaEncode.ar(sig, ~encoder);

    // ------------------------------------------------------------
    // decode (via ~renderDecode)
    ~renderDecode.value(sig, ~decoder)

}.scope;
)

// free kernel & buffer
~encoder.free
~sndbuf.free
// ------------------------------------------------------------
Ambisonic UHJ is the stereo-compatible format for Ambisonics.
If you haven't already chosen a ~decoder
and defined ~renderDecode
, do so now.
// ------------------------------------------------------------
// ambisonic uhj stereo encoder
// stereo soundfile read from disk

// define encoder kernel
~encoder = FoaEncoderKernel.newUHJ

// inspect
~encoder.kind
~encoder.numChannels
~encoder.dirChannels.raddeg

// read a whole sound into memory
// remember to free the buffer later!
// (boot the server, if you haven't!)
~sndbuf = Buffer.read(s, Atk.userSoundsDir ++ "/uhj/Palestrina-O_Bone.wav")
~sndbuf = Buffer.read(s, Atk.userSoundsDir ++ "/uhj/Gabrieli-Canzon_Duodecimi.wav")
~sndbuf = Buffer.read(s, Atk.userSoundsDir ++ "/uhj/Cante_Flamenco-Alegrias.wav")
~sndbuf = Buffer.read(s, Atk.userSoundsDir ++ "/uhj/Waldteufel-The_Skaters.wav")

(
{
    var sig;    // audio signal

    // display encoder and decoder
    "Ambisonic encoding via % encoder".format(~encoder.kind).postln;
    "Ambisonic decoding via % decoder".format(~decoder.kind).postln;

    // ------------------------------------------------------------
    // test sig
    sig = PlayBuf.ar(~sndbuf.numChannels, ~sndbuf, BufRateScale.kr(~sndbuf), doneAction: 2);    // soundfile

    // ------------------------------------------------------------
    // encode
    sig = FoaEncode.ar(sig, ~encoder);

    // ------------------------------------------------------------
    // decode (via ~renderDecode)
    ~renderDecode.value(sig, ~decoder)

}.scope;
)

// free kernel & buffer
~encoder.free
~sndbuf.free
// ------------------------------------------------------------
The ZoomH2 is a convenient, portable handheld recorder. The device records only horizontal surround (pantophonic), so no height information is captured.
As a relatively inexpensive piece of equipment, the imaging of the ZoomH2 isn't always as consistent as we'd prefer. To remedy this, the Y gain is tweaked to widen the image, and dominance is applied to stabilise the front.
If you haven't already chosen a ~decoder
and defined ~renderDecode
, do so now.
// ------------------------------------------------------------
// zoomH2 encoder
// zoomH2 soundfile read from disk

// define encoder and xform matrices
~encoder = FoaEncoderMatrix.newZoomH2(k: 1.7378)
~xformer = FoaXformerMatrix.newDominateX(3.0)

// inspect
~encoder.kind
~encoder.numChannels
~encoder.dirChannels.raddeg

// read a whole sound into memory
// remember to free the buffer later!
// (boot the server, if you haven't!)
~sndbuf = Buffer.read(s, Atk.userSoundsDir ++ "/zoomh2/Anderson-Waltz.wav")
~sndbuf = Buffer.read(s, Atk.userSoundsDir ++ "/zoomh2/Anderson-Steam.wav")
~sndbuf = Buffer.read(s, Atk.userSoundsDir ++ "/zoomh2/Anderson-Stape_Silver.wav")
~sndbuf = Buffer.read(s, Atk.userSoundsDir ++ "/zoomh2/Anderson-St_Peter_&_St_Paul.wav")

(
{
    var sig;    // audio signal

    // display encoder and decoder
    "Ambisonic encoding via % encoder".format(~encoder.kind).postln;
    "Ambisonic decoding via % decoder".format(~decoder.kind).postln;

    // ------------------------------------------------------------
    // test sig
    sig = PlayBuf.ar(~sndbuf.numChannels, ~sndbuf, BufRateScale.kr(~sndbuf), doneAction: 2);    // soundfile

    // ------------------------------------------------------------
    // encode
    sig = FoaEncode.ar(sig, ~encoder);

    // ------------------------------------------------------------
    // xform
    sig = FoaXform.ar(sig, ~xformer);

    // ------------------------------------------------------------
    // decode (via ~renderDecode)
    ~renderDecode.value(sig, ~decoder)

}.scope;
)

// free buffer
~sndbuf.free
// ------------------------------------------------------------
As described here, the ZoomH2 encoder reverses the labels for front and back of the ZoomH2. This is done to favour the use of the recorder as a roving, hand-held device, with the display facing the operator.
If one wishes to respect the labelled orientation of the device, as Courville does in the example below, we'll need either to adjust the angles argument or to apply FoaXform: *newMirrorX. For this example, we'll set angles = [3/4*pi, pi/3]
, which are the angles specified in the ZoomH2 documentation.
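The alternative mentioned above, mirroring rather than adjusting the angles argument, can be sketched as follows (a sketch only; it encodes with the default angles and then swaps front and back with a mirror transform):

```supercollider
// a sketch of the newMirrorX alternative
~encoder = FoaEncoderMatrix.newZoomH2(k: 1.7378);    // default (reversed) angles
~xformer = FoaXformerMatrix.newMirrorX;              // mirror front/back

// then, inside the synth function:
// sig = FoaEncode.ar(sig, ~encoder);
// sig = FoaXform.ar(sig, ~xformer);    // respect the device's labelled orientation
```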
As a relatively inexpensive piece of equipment, the imaging of the ZoomH2 isn't always as consistent as we'd prefer. To remedy this, the Y gain is tweaked to widen the image.
If you haven't already chosen a ~decoder
and defined ~renderDecode
, do so now.
// ------------------------------------------------------------
// zoomH2 encoder
// zoomH2 soundfile read from disk

// define encoder matrix
~encoder = FoaEncoderMatrix.newZoomH2([3/4*pi, pi/3], 1.7378)

// inspect
~encoder.kind
~encoder.numChannels
~encoder.dirChannels.raddeg

// read a whole sound into memory
// remember to free the buffer later!
// (boot the server, if you haven't!)
~sndbuf = Buffer.read(s, Atk.userSoundsDir ++ "/zoomh2/Courville-Dialogue.wav")

(
{
    var sig;    // audio signal

    // display encoder and decoder
    "Ambisonic encoding via % encoder".format(~encoder.kind).postln;
    "Ambisonic decoding via % decoder".format(~decoder.kind).postln;

    // ------------------------------------------------------------
    // test sig
    sig = PlayBuf.ar(~sndbuf.numChannels, ~sndbuf, BufRateScale.kr(~sndbuf), doneAction: 2);    // soundfile

    // ------------------------------------------------------------
    // encode
    sig = FoaEncode.ar(sig, ~encoder);

    // ------------------------------------------------------------
    // decode (via ~renderDecode)
    ~renderDecode.value(sig, ~decoder)

}.scope;
)

// free buffer
~sndbuf.free
// ------------------------------------------------------------
The pantophonic encoder may be used to transcode from one format to another. This example transcodes an octophonic recording to the decoder you've chosen.
If you haven't already chosen a ~decoder
and defined ~renderDecode
, do so now.
// ------------------------------------------------------------
// pantophonic (8-channel) encoder
// pantophonic (8-channel) soundfile read from disk

// define encoder matrix
~encoder = FoaEncoderMatrix.newPanto(8, 'flat')     // choose for Mackay
~encoder = FoaEncoderMatrix.newPanto(8, 'point')    // choose for Young

// inspect
~encoder.kind
~encoder.numChannels
~encoder.dirChannels.raddeg

// read a whole sound into memory
// remember to free the buffer later!
// (boot the server, if you haven't!)
~sndbuf = Buffer.read(s, Atk.userSoundsDir ++ "/multichannel/Mackay-Augustines_Message.wav")
~sndbuf = Buffer.read(s, Atk.userSoundsDir ++ "/multichannel/Young-Allting_Runt.wav")

(
{
    var sig;    // audio signal

    // display encoder and decoder
    "Ambisonic encoding via % encoder".format(~encoder.kind).postln;
    "Ambisonic decoding via % decoder".format(~decoder.kind).postln;

    // ------------------------------------------------------------
    // test sig
    sig = PlayBuf.ar(~sndbuf.numChannels, ~sndbuf, BufRateScale.kr(~sndbuf), doneAction: 2);    // soundfile

    // ------------------------------------------------------------
    // encode
    sig = FoaEncode.ar(sig, ~encoder);

    // ------------------------------------------------------------
    // decode (via ~renderDecode)
    ~renderDecode.value(sig, ~decoder)

}.scope;
)

// free buffer
~sndbuf.free
// ------------------------------------------------------------
The directions encoder may be used to transcode from one format to another. This example transcodes a periphonic 12-channel recording to the decoder you've chosen.
If you haven't already chosen a ~decoder
and defined ~renderDecode
, do so now.
// ------------------------------------------------------------
// periphonic (12-channel) encoder

// define encoder matrix
~directions = [
    [ 22.5, 0 ], [ -22.5, 0 ], [ 67.5, 0 ], [ -67.5, 0 ],
    [ 112.5, 0 ], [ -112.5, 0 ], [ 157.5, 0 ], [ -157.5, 0 ],
    [ 45, 45 ], [ -45, 45 ], [ -135, 45 ], [ 135, 45 ]
].degrad
~encoder = FoaEncoderMatrix.newDirections(~directions)

// inspect
~encoder.kind
~encoder.numChannels
~encoder.dirChannels.raddeg

// read a whole sound into memory
// remember to free the buffer later!
// (boot the server, if you haven't!)
~sndbuf = Buffer.read(s, Atk.userSoundsDir ++ "/multichannel/Wilson-Bose.wav")

(
{
    var sig;    // audio signal

    // display encoder and decoder
    "Ambisonic encoding via % encoder".format(~encoder.kind).postln;
    "Ambisonic decoding via % decoder".format(~decoder.kind).postln;

    // ------------------------------------------------------------
    // test sig
    sig = PlayBuf.ar(~sndbuf.numChannels, ~sndbuf, BufRateScale.kr(~sndbuf), doneAction: 2);    // soundfile

    // ------------------------------------------------------------
    // encode
    sig = FoaEncode.ar(sig, ~encoder);

    // ------------------------------------------------------------
    // decode (via ~renderDecode)
    ~renderDecode.value(sig, ~decoder)

}.scope;
)

// free buffer
~sndbuf.free
// ------------------------------------------------------------