Welcome to my website. At the moment this is still a work in progress…
Lights & sound installation The rocket that bumped off the ceiling, by Max Windisch-Spork and Adrian Artacho.
Max For Live Devices
Here is the online repository for all the M4L devices developed during the work on the installation.
Midilights are controlled using the DMXIS interface.
Lowest note (bank) is C-2 (Ch15?); Ch16 for Presets?
An application kit for networked dance performance
A component-based distributed system, SoDA enables networked dance practice and computer-assisted choreography for virtual interaction between human and non-human performers (including third-party software such as Wekinator, INScore, MaxMSP, etc.). Using SoDA, participants in a networked rehearsal receive aural and visual cues in the form of sounds, verbal cues, and visual displays that correlate to specific actions or movements. Cues regarding relative positioning and movement quality (direction, weight, speed and flow, as per Laban movement theory) as well as pre-composed sequences or routines are communicated via webcam, microphone input, or shared video playback. Through such cues, the system fosters co-creation in a networked performance space as developed by the ongoing artistic research project “Social D[ist]ancing: Development of a networked artistic practice out of confinement” (presented as “SoDA - An application kit for networked dance performance” at NIME 2021) at the University of Music and Performing Arts Vienna, which is working to leverage machine learning and integrate computer capabilities within networked music and dance practices.
The SoDA kit consists of a set of applications that communicate through a custom message broker application (AMEX) on a remote server. SoDA works in parallel to video conferencing applications (Jitsi, Zoom, etc.) to annotate, document and enhance the performer’s experience. A SoDA Node exchanges and translates messages between SoDA components and third-party applications such as INScore (GRAME) or Wekinator (Rebecca Fiebrink). An optional SoDA Point of View (PoV) component monitors temporal communication parameters (latency, clock offset) relative to a designated ‘central observer’, adding artificial delays where necessary to fine-tune the perceived synchrony of actions at that particular node.
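The text does not spell out the PoV delay algorithm, but the idea of equalising perceived synchrony can be sketched as follows. This is a minimal illustration, assuming one-way latencies to the central observer have already been measured; the function name and the millisecond units are my own, not part of SoDA:

```python
def artificial_delays(latencies_ms):
    """Given measured one-way latencies (in ms) from each node to the
    designated 'central observer', return the artificial delay each node
    should add so actions are perceived as simultaneous at the observer.
    The slowest node adds no delay; faster ones wait out the difference."""
    slowest = max(latencies_ms.values())
    return {node: slowest - lat for node, lat in latencies_ms.items()}

# Example: three nodes with different network paths to the observer.
delays = artificial_delays({"vienna": 20, "berlin": 45, "tokyo": 120})
# 'tokyo' has the slowest path, so it adds no artificial delay.
```

The same arithmetic would have to be re-run whenever the monitored latencies drift, which is presumably why PoV monitors them continuously.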
SoDA Nodes exchange messages via TCP across the network, but communication between the Nodes and other SoDA components, as well as third-party software typically running alongside them on the same machine, uses Open Sound Control (OSC) over UDP. This network design makes the SoDA mesh flexible as well as highly scalable: the purpose-built AMEX application running on the server allows a large number of SoDA Nodes to interact seamlessly in real time.
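For illustration, an OSC message like the ones the local components exchange over UDP can be assembled with nothing but the standard library. This is a sketch of OSC 1.0 encoding; the address `/soda/cue` and port 9000 are made-up examples, not SoDA's actual namespace:

```python
import socket
import struct

def osc_string(s: str) -> bytes:
    """NUL-terminate and pad to a multiple of 4 bytes, per OSC 1.0."""
    b = s.encode("ascii") + b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, *args) -> bytes:
    """Encode an OSC message with int, float and string arguments."""
    typetags, payload = ",", b""
    for a in args:
        if isinstance(a, float):
            typetags += "f"
            payload += struct.pack(">f", a)  # big-endian float32
        elif isinstance(a, int):
            typetags += "i"
            payload += struct.pack(">i", a)  # big-endian int32
        else:
            typetags += "s"
            payload += osc_string(str(a))
    return osc_string(address) + osc_string(typetags) + payload

# UDP is connectionless, so sending to a local component is one call.
msg = osc_message("/soda/cue", 1, 0.5)
socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(msg, ("127.0.0.1", 9000))
```

In practice a library such as python-osc would do this encoding; the point here is only that the on-the-wire format is simple enough to keep the local mesh lightweight.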
Additional SoDA components include SoD4L, a MaxForLive device that serves as a bridge between Ableton Live and the SoDA mesh, and SoDATA, a standalone application used for sequence recall and choreography annotation in different human-readable formats.
Using the SoDA kit enables dance practitioners to harness the immediacy and scalability of computer systems for artistic experimentation, and allows them to tap into the potential of AI in a networked, distributed environment.
Adrián Artacho is currently a PhD candidate at the University of Music and Performing Arts Vienna, researching the use of technology to enhance performance capabilities. He is also an active performer of live electronics, either solo or in different configurations. As a composer, his interest in cross-media projects, and in dance in particular, has led him to collaborate regularly with choreographers and to found the dance companies Tanz.Labor.Labyrinth and SyncLab Tanzkollektiv. (email@example.com)
Oscar Medina Duarte has many years of experience working on technological projects in safety-critical domains (aviation, railway, …). His great interest in the performing arts brought him to an active role at NeuesAtelier, where he works as a technologist and software developer. As a technologist, his main interest is in the effects of the pervasive technification of society and the role of the performing arts in a post-pandemic world. (firstname.lastname@example.org)
Here is a link to a video showcasing the use of the SoDA system by a team of artists-researchers in the context of the research project “Social D[ist]ancing: Development of a networked artistic practice out of confinement” at the University of Music and Performing Arts Vienna.
Here is the project public repository, including instructions for the installation and use of the Soda kit:
The authors would like to thank Hanne Pilgrim, Mariama Diagne, Benedikt Berner, Katharina Püschel, Dalma Sarnyai, Magdalena Eidenhammer, Maximilian Resch and Maria Solberger for their participation in the evaluation and testing of the applications. It was their invaluable feedback that ultimately informed the design of the software. This work is supported by the Artistic Research Center and the Research Support department of the University of Music and Performing Arts Vienna.
| Component | Description |
| --- | --- |
| sod4l | SoDA for Live (Max For Live device) |
| soda-node | Basic SoDA network unit |
| soda-pov | SoDA - point of view (latency management) |
| videogrid | Video collage tool |
| zoomkeeper | Zoom videoconference tool |
The Midi2json device works in combination with the Pixel2Midi device to synchronise events across performers.
Here is a description of the information workflow:
Sheets as 'repositories'
These devices use the download-sheet abstraction, which downloads the contents of a Google spreadsheet into the patch. In a way, the online sheet works as a human-editable repository that keeps the data consistent across the multiple devices that read from it.
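As a sketch of the same idea outside Max: a sheet published to the web can be fetched as TSV and parsed into a lookup table. The export URL pattern is Google's; the column names below are invented for illustration:

```python
import csv
import io

# A published sheet can be downloaded as TSV from an export URL of the form
#   https://docs.google.com/spreadsheets/d/<SHEET_ID>/export?format=tsv
# (fetch it with urllib.request.urlopen; omitted here to keep the sketch offline).

def parse_sheet(tsv_text: str) -> dict:
    """Turn the sheet's TSV contents into {first-column value: row dict},
    so every device reading the sheet sees the same data."""
    rows = csv.reader(io.StringIO(tsv_text), delimiter="\t")
    header = next(rows)
    return {row[0]: dict(zip(header[1:], row[1:])) for row in rows if row}

# Invented example rows, mimicking a MORPH-style cue database.
sample = "cue\tcommand\nintro\tIMAGE image_128.png\nwalk\tCOLOR maroon\n"
db = parse_sheet(sample)
```

Keying the dictionary on the first column mirrors how the sheet is used as a shared, human-editable source of truth.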
MORPHology. This spreadsheet codifies the specific way a cue is rendered on the performer's side. The device reads (and stores) a MORPH .tsv database of the different realisations of a cue. Some examples include:
Use the downbeat to mark the pulse, and the horizontal bar to mark when a cue arrives. Perhaps with the same cadence as the established pulse, and for the extent of the note (as a makeshift progress bar).
Use two vertical regions (left and right), where left is the current action, and right is the upcoming one. The transition onto the new action could entail a couple of normal beats followed by a downbeat (or vice versa).
- This requires a wildcard for the time of the previous cue
- The flickering of inverse images could happen immediately before (together with the beats), or perhaps the region on the right could have a different color.
- The different (derivative) image files should be named in a way that is unambiguous... I just haven't thought enough about it yet.
Use the regular beat to convey the pulse for each individual performer (perhaps the step tempo) and reserve the horizontal bar for the length of the action.
Three regions (columns), the ones on the sides are very skinny, and they flicker to indicate that the performer should turn left/right.
Two vertical regions, one for the content (score, text) and another region for the movement in space, like turning, up/down... etc.
Two horizontal regions, one for the content, and another one (above, below, etc.) for the intensity (dynamics, articulation, technique...)
Use different font sizes in the score (perhaps font size is also manageable within polytempo, which would make it rather flexible) to convey intensity (dynamics + articulation). Perhaps color (or stage directions, 'acotaciones') for a specific emotion.
Use horizontal bar as a progress bar for the length of the note. Without warning, with two beat warnings... etc.
Have noteoff as a category of pseudocode that gets executed when the current note ends.
Wildcards can be used to insert contextual values. The MaxMSP code uses $ for wildcards; in pseudocode the symbol ~ may also be used.
| Wildcard | Meaning |
| --- | --- |
| | Current time (float, in seconds) |
| | Derivative action (+1, -1... etc.) |
| | Derivative time (float, in seconds) |
| | Time of the previous noteoff |
| | Time of the previous noteon |
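A minimal sketch of how ~ wildcards could be expanded before the pseudocode is executed. The wildcard names in the context dict are hypothetical, since the actual tokens are left unspecified above; $ placeholders are left untouched for MaxMSP:

```python
import re

def expand_wildcards(pseudocode: str, context: dict) -> str:
    """Replace each ~name token with its contextual value; leave unknown
    tokens (and MaxMSP's $ wildcards) exactly as they are."""
    return re.sub(
        r"~(\w+)",
        lambda m: str(context.get(m.group(1), m.group(0))),
        pseudocode,
    )

# 'now' and 'dtime' are invented wildcard names for current time and
# derivative time from the table above.
line = expand_wildcards("IMAGE $1 at ~now for ~dtime", {"now": 12.5, "dtime": 3.0})
```

Leaving unknown tokens untouched means a typo in the pseudocode stays visible instead of silently becoming an empty string.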
Time can be expressed using b and B for beat and bar respectively. So 1b would be equal to the length of a beat given the current tempo.
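Under the reading that 1b means one beat and 1B one bar, converting such tokens to seconds is straightforward. A sketch; the tempo and metre arguments are assumptions, not values taken from the devices:

```python
def token_to_seconds(token: str, bpm: float = 120.0, beats_per_bar: int = 4) -> float:
    """Convert a time token like '1b' (beats) or '2B' (bars) to seconds,
    given the current tempo. Plain numbers are taken as seconds."""
    if token.endswith("b"):
        return float(token[:-1]) * 60.0 / bpm
    if token.endswith("B"):
        return float(token[:-1]) * 60.0 / bpm * beats_per_bar
    return float(token)

# At 120 BPM one beat lasts half a second.
length = token_to_seconds("1b", bpm=120)
```

Tying the conversion to the current tempo is what makes the notation usable across pieces with different pulses.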
| Pseudocode | What it does |
| --- | --- |
| | image $1 shown after 3.0 seconds |
| | image $1 shown 1.5x the length of a beat before the action's time |
| | ? analogous to loadImage |
| | ? analogous to image |
The color panel behind "color" is slightly larger than the text...
Is it possible to write in different colors in [textedit]? Maybe there are alternatives where this is possible, like [lcd]...
Also make the URL for the [download-image] object settable via clientwindow.
How to store pseudocode in the spreadsheets, so that I can more flexibly experiment with the morphology
I could write pseudocode using natural language for the colors.
Have (for every piece) a separate spreadsheet for ACTION. Inside: pseudocode ("TEXT hola qué tal", "VIBRATE vibration signal", "IMAGE specify image name...", "COLOR maroon").
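Storing each cue as a "VERB argument" string, as in the examples above, makes the dispatch trivial. A sketch; the handler behaviour is a placeholder, and only the four verbs from the example are known:

```python
def parse_cue(line: str) -> tuple:
    """Split a cue string into its command verb and argument,
    e.g. 'COLOR maroon' -> ('COLOR', 'maroon')."""
    verb, _, arg = line.partition(" ")
    return verb, arg

# The four verbs that appear in the spreadsheet example above.
KNOWN_VERBS = {"TEXT", "VIBRATE", "IMAGE", "COLOR"}

def dispatch(line: str) -> str:
    """Render a cue as a human-readable action (stand-in for the real
    device behaviour, which is not specified here)."""
    verb, arg = parse_cue(line)
    if verb not in KNOWN_VERBS:
        return f"ignored: {line}"
    return f"{verb.lower()} -> {arg}"
```

Keeping the verb set explicit means a malformed row in the sheet is ignored rather than misinterpreted.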
Use image_128.png as a convention for the COVER page (it is an id unlikely to be used, regardless of the bank.)
Deal with the "multiple files in your search path" error.
In this artistic research project (AR 640) we explore how to notate, communicate and compose space phenomena across audio-corporeal artistic practices. We investigate these in four disciplines: dance, rhythmics, choir conducting and direct sound. They share an alertness for and a certain tacit knowledge about space. In stark contrast to musical or movement notations, one finds that notated spaces are rather scarce in the audio-corporeal practices even though space unites them. We argue that this lacuna will be bridged by working on an atlas of space qualities. Rather than communicating merely the metric measures of spaces without the performer, we are concerned instead with emergent spatial qualities of smooth spaces that complement the performer, that exist outside of but not without the performer.
An ecological approach to gesture segmentation in choreomusical research
Brainchild is a trio which brings together three different, but unexpectedly compatible instruments. While the saxophone is associated with jazz, the cello with the classics, and computer music with the cutting edge of modern music, the three meld together into a chamber music ensemble unlike any of its individual elements.
In the 30-minute stage piece “Herschel und das unsichtbare Ende des Regenbogens” (Herschel and the Invisible End of the Rainbow), the audience is introduced, in an amusing way, to the discovery of infrared radiation by the Herschel siblings in the year 1800. The piece was commissioned in 2017 and performed several times in 2018 and 2019.
Institute for Astrophysics, University of Vienna | Science communication project WKP 100