
SoDA

An application kit for networked dance performance

Abstract

A component-based distributed system, SoDA enables networked dance practice and computer-assisted choreography for virtual interaction between human and non-human performers (including third-party software such as Wekinator, INScore, MaxMSP, etc.). Using SoDA, participants in a networked rehearsal receive aural and visual cues in the form of sounds, verbal cues, and visual displays that correlate to specific actions or movements. Cues regarding relative positioning and movement quality (direction, weight, speed and flow as per Laban movement theory) as well as pre-composed sequences or routines are communicated via webcam, microphone input, or shared video playback. Through such cues, the system fosters co-creation in a networked performance space as developed by the ongoing artistic research project “Social D[ist]ancing: Development of a networked artistic practice out of confinement” at the University of Music and Performing Arts Vienna, which is working to leverage ML and integrate computer capabilities within networked music and dance practices.

The SoDA kit consists of a set of applications that communicate through a custom message broker application (AMEX) on a remote server. SoDA works in parallel with video conferencing applications (Jitsi, Zoom, etc.) to annotate, document and enhance the performer’s experience. A SoDA Node exchanges and translates messages between SoDA components and third-party applications such as INScore (GRAME) or Wekinator (Rebecca Fiebrink). An optional SoDA Point of View (PoV) component monitors temporal communication parameters (latency, clock offset) relative to a designated ‘central observer’, adding artificial delays where necessary to fine-tune the perceived synchrony of actions at that particular node.
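The delay-equalisation idea behind the PoV component can be illustrated with a short sketch: pad every node's playback so that all nodes align with the slowest path to the central observer. The function name and data layout below are illustrative assumptions, not SoDA's actual API.

```python
# Hypothetical sketch of PoV-style delay equalisation (not SoDA's real API):
# each node adds enough artificial delay to match the slowest node.

def artificial_delays(latencies_ms):
    """Given measured one-way latencies (node -> central observer) in ms,
    return the extra delay each node should add so that actions are
    perceived as synchronous at the observer."""
    slowest = max(latencies_ms.values())
    return {node: slowest - lat for node, lat in latencies_ms.items()}

delays = artificial_delays({"alice": 40.0, "bob": 95.0, "carol": 60.0})
# "bob" sits on the slowest path, so bob adds no extra delay
```

The node on the slowest path ends up with a zero artificial delay, while faster paths are padded by the difference.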

SoDA Nodes exchange messages via TCP across the network, but communication between the Nodes and other SoDA components as well as third-party software (typically running alongside each other on the same machine) uses Open Sound Control (OSC) over UDP. This network design gives the SoDA mesh flexibility while remaining highly scalable: the purpose-built AMEX application running on the server allows a large number of SoDA nodes to interact seamlessly in real time.
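To make the local OSC-over-UDP leg concrete, here is a minimal sketch that encodes an OSC message by hand and sends it over UDP. The address pattern `/soda/cue` and the port are illustrative assumptions, not documented SoDA addresses; in practice a library such as python-osc would be used instead of hand-rolled encoding.

```python
import socket
import struct

def osc_pad(b: bytes) -> bytes:
    """NUL-terminate and pad an OSC string to a multiple of 4 bytes."""
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, *ints) -> bytes:
    """Encode an OSC message carrying int32 arguments (big-endian)."""
    msg = osc_pad(address.encode())
    msg += osc_pad(("," + "i" * len(ints)).encode())  # type tag string
    for v in ints:
        msg += struct.pack(">i", v)
    return msg

# The address pattern and port below are illustrative, not SoDA's actual ones.
packet = osc_message("/soda/cue", 3)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(packet, ("127.0.0.1", 9000))
sock.close()
```

Because UDP is connectionless, the send succeeds even if no local component is listening yet, which suits the loosely coupled mesh described above.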

Additional SoDA components include SoD4L, a MaxForLive device that serves as a bridge between Ableton Live and the SoDA mesh, and SoDATA, a standalone application used for sequence recall and choreography annotation in different human-readable formats.

Using the SoDA kit enables dance practitioners to harness the immediacy and scalability of computer systems for artistic experimentation, and allows them to tap into the potential of AI in a networked, distributed environment.

Credits

Adrián Artacho is currently a PhD candidate at the University of Music and Performing Arts of Vienna, researching the use of technology to enhance performance capabilities. He is also an active performer of live electronics either solo or in different configurations. As a composer, his interest in cross-media projects and dance in particular has led him to regularly collaborate with choreographers and to become founder of the dance companies Tanz.Labor.Labyrinth and SyncLab Tanzkollektiv. (adrian@neuesatelier.org)

Oscar Medina Duarte has many years of experience working on technological projects in the safety-critical domain (aviation, railway, …). His great interest in the performing arts brought him to NeuesAtelier, where he works as a technologist and software developer. His main interest is the effects of the pervasive technification of society and the role of the performing arts in a post-pandemic world. (oscar@neuesatelier.org)

Links

Here is a link to a video showcasing the use of the SoDA system by a team of artists-researchers in the context of the research project “Social D[ist]ancing: Development of a networked artistic practice out of confinement” at the University of Music and Performing Arts Vienna.

Here is the project’s public repository, including instructions for the installation and use of the SoDA kit:

https://bitbucket.org/AdrianArtacho/soda-node/

Acknowledgements

The authors would like to thank Hanne Pilgrim, Mariama Diagne, Benedikt Berner, Katharina Püschel, Dalma Sarnyai, Magdalena Eidenhammer, Maximilian Resch and Maria Solberger for their participation in the evaluation and testing of the applications. It was their invaluable feedback that ultimately informed the design of the software. This work is supported by the Artistic Research Center and the Research Support department of the University of Music and Performing Arts Vienna.

Modules

| Name | Description |
| --- | --- |
| sod4l | SoDA for Live (Max For Live device) |
| soda-node | Basic SoDA network unit |
| soda-pov | SoDA point of view (latency management) |
| sodapink | |
| sodata | Sequence recall and choreography annotation |
| soundcues | Aural cues |
| videogrid | Video collage tool |
| zoomkeeper | Zoom videoconference tool |

SyncLab

The Midi2json device works in combination with the Pixel2Midi device to synchronise events across performers.

Here is a description of the information workflow:

repom2p2m

Sheets as 'repositories'

These devices use the download-sheet abstraction, which downloads the contents of a Google spreadsheet into the patch. In a way, the online sheet works as a human-editable repository that keeps the data consistent across the multiple devices that read from it.
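The 'sheet as repository' idea can be sketched outside Max as well: a published Google spreadsheet can be fetched as TSV (via an export URL such as `.../export?format=tsv`) and parsed into rows. The column names below are illustrative, not the actual MORPH schema.

```python
import csv
import io

# Sketch of the 'sheet as repository' idea: a published spreadsheet is
# fetched as TSV text and parsed into row dictionaries. The column names
# here are made up for illustration, not the real MORPH schema.

def parse_morph_tsv(text: str):
    """Parse a TSV export into a list of row dictionaries."""
    return list(csv.DictReader(io.StringIO(text), delimiter="\t"))

sample = "cue\trendering\n01\tdownbeat pulse\n02\tprogress bar\n"
rows = parse_morph_tsv(sample)
# rows[0] -> {"cue": "01", "rendering": "downbeat pulse"}
```

Every device that parses the same sheet sees the same rows, which is what keeps the data consistent across devices.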

MORPHology. This spreadsheet codifies the specific way the cue is rendered on the performer's side. The device reads (and stores) a MORPH .tsv database of the different realisations of a cue. Some examples include:

  • Use the downbeat to mark the pulse, and the horizontal bar to mark when a cue arrives. Perhaps 02 with the same cadence as the established pulse, and 20 for the extent of the note (as a makeshift progress bar)

  • Use two vertical regions (left and right), where left is the current action, and right is the upcoming one. The transition onto the new action could entail a couple of normal beats followed by a downbeat (or vice versa).

    • This requires a wildcard for the time of the previous cue
    • The flickering of inverse images could happen immediately before (together with the beats), or perhaps the region on the right could have a different color.
    • The different (derivative) image files should be named in a way that is unambiguous... I just haven't thought enough about it yet.
  • Use the regular beat to convey the pulse for each individual performer (perhaps the step tempo) and reserve the horizontal bar for the length of the action.

  • Three regions (columns), the ones on the sides are very skinny, and they flicker to indicate that the performer should turn left/right.

  • Two vertical regions, one for the content (score, text) and another region for the movement in space, like turning, up/down... etc.

  • Two horizontal regions, one for the content, and another one (above, below, etc.) for the intensity (dynamic, articulation, technique...)

  • Use different font sizes in the score (perhaps font size is also manageable within polytempo, which would make it rather flexible) to convey intensity (dynamics + articulation). Perhaps color (or stage directions, 'acotaciones') for a specific emotion.

  • Use horizontal bar as a progress bar for the length of the note. Without warning, with two beat warnings... etc.

  • Have noteoff as a category of pseudocode that gets executed when the current note ends.

Pseudocode

Wildcards can be used to insert contextual values. The MaxMSP code uses $ for wildcards; in pseudocode, however, the symbol ~ may also be used (~1 = $1).

| Wildcard | Value |
| --- | --- |
| $1 | Action ID |
| $2 | Current time (float, in seconds) |
| $3 | Derivative action (+1, -1, etc.) |
| $4 | Derivative time (float, in seconds) |
| $5 | Time of the previous noteoff |
| $6 | Time of the previous noteon |

Time can be expressed using b and B for beat and bar respectively. So 1b would be equal to the length of one beat at the current tempo.
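The b/B notation can be sketched as a small conversion function. The source only defines b and B, so treating a plain number as seconds and a bar as beats_per_bar beats are assumptions made here for illustration.

```python
def notation_to_seconds(token: str, bpm: float, beats_per_bar: int = 4) -> float:
    """Convert '1.5b' (beats) or '2B' (bars) to seconds at the given tempo.
    Plain numbers are taken as seconds, and a bar is assumed to last
    beats_per_bar beats; both are illustrative assumptions."""
    beat = 60.0 / bpm  # length of one beat in seconds
    if token.endswith("B"):
        return float(token[:-1]) * beat * beats_per_bar
    if token.endswith("b"):
        return float(token[:-1]) * beat
    return float(token)

# At 120 bpm one beat lasts 0.5 s, so "1.5b" resolves to 0.75 s.
```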

Pseudocode examples

| Pseudocode | What it does |
| --- | --- |
| image $2 sectionID $1 regionID 1 | OPTION+SPACE (ALT+SPACE) |
| before 3.0 image $4 sectionID $1 regionID 1 | image $1 shown after 3.0 seconds |
| before 1.5b image $4 sectionID $1 regionID 1 | image $1 shown 1.5x the length of a beat before the action's time |
| text 3 "Whichever text you like" | |
| name Promenade | |
| addRegion 1 rect 0 0 1 0.5 | |
| loadImage 1 url mussorgsky.png | |
| addSection 3 imageID 1 rect 0 0 1 0.5 | |
| marker $2 value $1 | |
| beat $2 duration 1 pattern 21 cue 1 | |
| loadAudio 5 url freejazz.wav | analogous to loadImage (?) |
| audio 5 | analogous to image (?) |
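Wildcard resolution in lines like the ones above can be sketched with a small substitution function. The context values are illustrative; only the $n / ~n equivalence comes from the text.

```python
import re

def resolve_wildcards(line: str, context: dict) -> str:
    """Replace $n (or the equivalent ~n) wildcards in a pseudocode line
    with contextual values, following the wildcard table above."""
    def sub(match):
        return str(context[int(match.group(1))])
    return re.sub(r"[$~](\d+)", sub, line)

# Illustrative context: $1 = action ID, $2 = current time in seconds
ctx = {1: 7, 2: 12.25}
resolve_wildcards("image $2 sectionID $1 regionID 1", ctx)
# -> "image 12.25 sectionID 7 regionID 1"
resolve_wildcards("image ~2 sectionID ~1 regionID 1", ctx)  # same result
```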

To-Do

  • The color panel behind "color" is slightly larger than the text...

  • Is it possible to write in different colors in textedit? Maybe there are alternatives where this is possible, like [lcd]...

  • make also the url for the [download-image] object settable via clientwindow.

  • How to store pseudocode in the spreadsheets, so that I can experiment more flexibly with the morphology

  • I could write pseudocode using natural language for the colors.

  • Have (for every piece) a separate spreadsheet, for ACTION. Inside: pseudocode ("TEXT hola ue tal" "VIBRATE vibration signal" "IMAGE specify image name..." "COLOR maroon")

  • Use image_128.png as a convention for the COVER page (it is an id unlikely to be used, regardless of the bank.)

  • deal with the "multiple files in your search path" error

TesserAkt

Tesser_logo

The TesserAkt environment is a collection of MaxForLive devices designed for real-time midi manipulation. These devices were developed in the context of the Fraktale Lab, within the artistic research project Atlas of Smooth Spaces (FWF 640) at the University of Music and Performing Arts Vienna.

Credits and access

This environment is developed and maintained by Adrian Artacho. The online repository can be found here:

https://bitbucket.org/AdrianArtacho/tesserakt/


| Name | Description |
| --- | --- |
| Tesser_cmd | Launches a function based on midi input |
| tesser_automidi | Similar to 'autotune'; reshapes a midi input |
| tesser_block | Blocks CC/midinotes dynamically |
| tesser_buffer | Saves and recalls bits of audio |
| tesser_cc2note | Converts CC input into midinotes |
| tesser_cc2params | Maps CC input onto ranges and parameters |
| tesser_cc2signal | Creates a signal-structured stream of midi |
| tesser_chains | Renames midinotes based on lists |
| tesser_clip2cc | Translates midinotes to CC values |
| tesser_clips | Launches clips via midinotes or CC |
| tesser_cue | Aural warnings to the performer |
| tesser_delay | Takes a midi input and delays it by an amount of time |
| tesser_dynamic | Manipulates note velocity in different ways |
| tesser_fade | Fades in/out (increases/reduces midi velocities) |
| tesser_fractal | Fractal video manipulation |
| tesser_function | Manipulates midi input based on a function |
| tesser_funnel | Maps differently sized lists of midi IN/OUT values |
| tesser_gate | Opens/closes the midi stream dynamically |
| tesser_gesture | Extracts a gesture from a stream of midi values |
| tesser_inscore | Interfaces with INScore (midi input) |
| tesser_livescore | Score display of midi |
| tesser_mirror | Mirrors midi values dynamically based on a 'center' |
| tesser_mutate | Introduces mutation into a given midi sequence |
| tesser_note2cc | Converts midinotes into CC |
| tesser_pedal | Specific midi keyboard pedal interface |
| tesser_pgch | Generates program change messages based on midi |
| tesser_ramp | Generates a ramp of values over a given time |
| tesser_ranges | Allows/blocks specific midi ranges dynamically |
| tesser_recall | Saves and recalls midi sequences dynamically |
| tesser_route | Routes midi input dynamically |
| tesser_scale | Scales midi input dynamically |
| tesser_signal2midi | Takes in a signal (audio) and converts it to midi |
| tesser_threshold | Allows/blocks midi input based on threshold values |
| tesser_videoloop | Live capture looping |
| tesser_visuals | Produces visuals (Max/jitter) based on midi |

SoDA node

SoDA application kit

This code repository also hosts the documentation and help files for the set of applications included in the Social Distancing Applications kit (SoDA), developed by Adrián Artacho. The standalone programs have been written in MaxMSP, while the message broker (AMEX) is a C# program written by Oscar Medina Duarte. The built standalone for the SoDA node can be downloaded here.

Required externals:

From the CNMAT package:

  • OSC-route.mxo

Computer Vision for Jitter (Jean-Marc Pelletier)

  • cv.jit.resize.mxo
  • cv.jit.track.mxo

Probabilistic Models by Jules Francoise:

  • mubu.mxo
  • mubu.gmm.mxo
  • mubu.hhmm.mxo

From the Ejies library:

  • zsa.descriptors.mxo

SoDA ecosystem

The SoDA ecosystem is composed of a network of SoDA node applications plus standalone components that add features to the system. The local standalone components communicate with the SoDA node by means of OSC messages, and the individual nodes exchange messages via TCP with the AMEX message broker application, usually running on a remote server.

  • OSC messages (Description of the SoDA message ecosystem)
  • AMEX.md (Message broker program running in the remote server)

Standalone applications

  • SoDA node is the main application you will need to participate in a networked performance session.

  • SoDA pov (This application monitors and corrects latency problems)
  • SoDATA

Max for Live devices

  • SoDA for Live (SoD4L) is a M4L object that acts as a bridge between Ableton Live and the SoDA-node standalone.

SoDA node

These are the instructions for the installation and use of the SoDA-node application. The installation process differs between Mac and Windows operating systems.

Download SoDA node

The toolkit relies heavily on the MuBu library developed by the ISMM team at IRCAM. This is a free Max/MSP library (you only need to register with the IRCAM forum: http://forumnet.ircam.fr/product/mubu-en/).

SoDA-node v.1.5 Standalones (2021.03.26)

For Mac OS:

: SoDA-node-standalone-v1.5_Max8-MacOS-10-13-6

For Windows:

: SoDA-node-standalone-v1.5_Max8-Windows7

Note for Windows users: you need to install the Visual Studio 2015 Redistributable Package for MuBu to work! Once downloaded, add the MuBu library into the ext-libs folder within the GST.

Last step: in MaxMSP, add the whole toolkit to your file preferences:

Options > File Preferences

Installation on Mac operating systems

  1. First you will need to download the appropriate version from the Downloads page to your Desktop.
  2. Uncompress the .zip file by double clicking on it.
  3. Move the uncompressed file (SoDA-node.app) to your Applications folder (optional)
  4. Launch it as you would any other application and enjoy!

→ (error) If your computer complains that the application is broken, corrupted, or from an unknown developer, you can fix this with a single line in your computer's Terminal:

  1. Open your Terminal application (⌘CMD + SPACEBAR to open Spotlight, then type terminal)
  2. Write the following:

xattr -cr /Applications/SoDA-node.app

Installation on Windows operating systems

  1. First you will need to download the appropriate version to your Desktop.
  2. Uncompress the .zip file by double clicking on it.
  3. Launch it as you would any other application and enjoy!

Using the SoDA-node application (both operating systems)

Using the program is pretty straightforward. Once the remote server is on (which can also be done from the SoDA pov application),
hit the connect button:

repo:SODA:node-neutral

Then, the application should show the status "connected" and the toggle "tcp on".

repo:SODA:tcp-on

Now, select your participant ID:

repo:SODA:select-pid

If you are going to use audio for the session, make sure to drop the audio file (.wav, .mp3, etc.) into the rectangle on the right.

repo:SODA:drop-audiofile

You will know that a connection is established with the PoV application when your internal SID number appears near your Participant ID. You will also see a red blink in the top right corner whenever you are 'pinged' by the PoV application.

repo:SODA:internal-sid

Set your volume level as you like with the horizontal slider, but make sure to leave the audio ON (the speaker symbol on the left) if you are going to be working with audio.


OSC messages

soda-message-ecosystem

Quick message reference (pdf):

quick-reference


TransForms

TransForms

repo:TRAN:standalone-neutral

Here is the link to the live.device (OS-agnostic):

TransForms M4L device

[comment]: # ( TransForms standalone (MacOS) )

(*) In order to have polyphonic Pitch Bend, the Max For Live device needs to be used together with additional routing devices.

work in progress... - - -

OSC messages to & from TransForms App

OSC messages from the PolytempoNetwork App to the TransForms App use port 22574; these are the ones implemented so far:

TransForms app (OSC port IN 22574)

repo:TRAN:osc-reference

Tesser_block

This Patch is part of the TESSER environment.

TTESS:Logo

Tesser_block

This device allows you to selectively block different kinds of midi data.

TESS:block

Usage

The settings can be stored with the device/set. They can also be automated.
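Conceptually, a 'block' device acts as a filter on the midi stream. The sketch below models this outside Max: events are illustrative (kind, data1, data2) tuples, not Live's actual midi representation.

```python
# Illustrative sketch of what a 'block' device does conceptually:
# drop note or CC events from a stream depending on two toggles.
# The (kind, data1, data2) tuples are an assumption made here; the real
# device operates on Live's midi stream inside Max for Live.

def block_filter(events, block_notes=False, block_cc=False):
    """Pass through only the events whose kind is not blocked."""
    blocked = set()
    if block_notes:
        blocked.add("note")
    if block_cc:
        blocked.add("cc")
    return [ev for ev in events if ev[0] not in blocked]

stream = [("note", 60, 100), ("cc", 1, 64), ("note", 64, 90)]
block_filter(stream, block_cc=True)
# -> [("note", 60, 100), ("note", 64, 90)]
```

Because the toggles are plain parameters, storing them with the set or automating them (as the device allows) simply changes which kinds pass through over time.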

Credits

This device is a derivative patch largely taken from AbletonDrummer's object.


To-Do

  • document device
  • Add feature to print in maxwindow specific kinds of cc input
  • Extended feature: block specific notes which are set using the gesture CC. This is practical, for example, if you are playing the keyboard and want to block the area where the hands are from playing new notes, etc.

Tesser_AutoMidi

This Patch is part of the TESSER environment.

TTESS:Logo

Tesser_AutoMidi

This device is analogous to an autotune effect, only with midi. It takes midinotes and allows you to substitute the entered midinote number and/or velocity with alternative CC values entered via CC86 and CC87.

TESS:automidi

View of the device. Click here to edit.

Usage

Here is a full description of the functions associated to CC messages within the TESSER environment.

override pitch The entered pitch will be substituted by the last entered CC86 value.

override velocity The entered velocity will be substituted by the last entered CC87 value.

auto AllNotesOff When this toggle is on, a note off (vel = 0) will cause a 0 123 (All Notes Off) message to be sent to all 16 midi channels.
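The behaviour described above can be sketched as a small state machine: remember the last CC86/CC87 values and substitute them on incoming notes, and emit CC 123 (value 0) on all 16 channels for All Notes Off. The class and event modelling are illustrative assumptions; the real device operates on Live's midi stream.

```python
# Sketch of the AutoMidi behaviour (illustrative modelling, not the
# device's actual implementation).

class AutoMidi:
    def __init__(self, override_pitch=False, override_velocity=False):
        self.override_pitch = override_pitch
        self.override_velocity = override_velocity
        self.cc86 = None  # last received CC86 value (pitch override)
        self.cc87 = None  # last received CC87 value (velocity override)

    def on_cc(self, controller, value):
        if controller == 86:
            self.cc86 = value
        elif controller == 87:
            self.cc87 = value

    def on_note(self, pitch, velocity):
        """Return the (possibly substituted) pitch and velocity."""
        if self.override_pitch and self.cc86 is not None:
            pitch = self.cc86
        if self.override_velocity and self.cc87 is not None:
            velocity = self.cc87
        return pitch, velocity

def all_notes_off():
    """CC 123 with value 0 on all 16 channels (the '0 123' message above)."""
    return [(channel, 123, 0) for channel in range(16)]

am = AutoMidi(override_pitch=True)
am.on_cc(86, 72)
am.on_note(60, 100)  # pitch 60 is replaced by the stored CC86 value 72
```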


To-Do

  • document
  • How does a gesture get stored, and then recalled one by one in a loop? The device should accept (and store) a gesture string, possibly via CC90