SPAT Revolution Technical Articles Archives - FLUX:: Immersive
https://www.flux.audio/category/spat-revolution-tech/

Introducing Relative OSC: Simplifying Audio Parameter Controls
https://www.flux.audio/2025/07/25/introducing-relative-osc-simplifying-audio-parameter-controls/
Fri, 25 Jul 2025

With the release of version 25.01, SPAT Revolution introduces an exciting feature: support for Relative OSC messages. This functionality offers the capability to adjust parameter values dynamically, without needing to know their current value.


Absolute vs. Relative OSC messages

Traditionally, OSC messages are absolute: you must specify the exact desired value for a parameter, which requires knowing its current state. In contrast, Relative OSC messages allow you to define an offset rather than a specific value. This means you can adjust parameters incrementally, such as increasing room gain by +3 dB or decreasing source distance by 2 m. This flexibility is especially valuable when you don't have bidirectional integration, since the parameter's current value is then unknown, and it can simplify interaction with SPAT Revolution.


How Relative OSC messages work

Relative OSC messages follow the same structure as standard OSC messages, with the addition of a path keyword. For instance, a single relative message can offset the room gain by +3 dB.

This consistency ensures that the OSC path structure remains unchanged, allowing you to use wildcards to target multiple objects or to add interpolation timing. For example, a single relative message can offset the distance of all sources by 2 meters over 5 seconds.
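To make the idea concrete, here is a minimal Python sketch that builds such message pairs. The '/relative' address suffix and the parameter paths used here are illustrative assumptions, not SPAT Revolution's documented syntax; consult the OSC table for the real keyword and addresses.

```python
def relative_osc(path, offset, interp_time_s=None):
    """Build an (address, args) pair for a relative OSC message.
    NOTE: the '/relative' suffix and the paths below are hypothetical
    placeholders; check the SPAT Revolution OSC table for real syntax."""
    args = [offset] if interp_time_s is None else [offset, interp_time_s]
    return (path + "/relative", args)

# Offset the room gain by +3 dB (hypothetical address)
room_gain_up = relative_osc("/room/1/gain", 3.0)
# Offset the distance of all sources by 2 m over 5 s, via a wildcard
all_sources_out = relative_osc("/source/*/dist", 2.0, interp_time_s=5.0)
# Each pair could then be sent with an OSC client, for example
# python-osc's SimpleUDPClient.send_message(address, args).
```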


Getting started with Relative OSC messages

To assist you in leveraging this new feature, we provide a QLab 5 template with examples of Relative OSC messages. This resource will guide you through your initial steps and help you explore the full potential of Relative OSC.

You can find all available controls in the updated OSC table, which includes all available OSC commands in SPAT Revolution.

QLab 5 Relative OSC Template

We encourage you to try out this new feature and share your template and use cases with us!




Navigating SPAT Revolution – From Studio Creation to Live Deployment
https://www.flux.audio/2023/10/20/navigating-spat-revolution-from-studio-creation-to-live-deployment/
Fri, 20 Oct 2023

In the realm of audio production, the transition from the controlled studio environment to the wide variety of live deployment scenarios poses a pivotal challenge. This challenge becomes particularly significant when working with SPAT Revolution and its ability to deliver to various speaker arrangement formats using various techniques. A paramount concern is ensuring a seamless translation of the carefully designed spatial composition, including positioning and automation, from the studio to a live environment.

Defining the reproduction strategy (Speaker system & panning technique)

While this article does not aim to delve deep into system design, it is worth discussing a significant challenge behind the 'From Studio to Live' journey. In essence, content creation takes place in an ideal monitoring 'sweet spot', while live deployment invariably involves compromises and a spread-out audience coverage area. Notably, this discussion touches on the reproduction strategy to adopt, where the position-based family of panning techniques, such as DBAP, KNN, and the more sophisticated WFS, is well suited to this live reality.

It is tempting to suggest, ‘Let’s surround our audience with a large number of speakers,’ and indeed, increasing the number of loudspeakers has the potential to enhance ‘resolution’, or, if you prefer, the accuracy of soundscape reproduction. However, for everyone to fully experience this heightened accuracy, we must ensure that each loudspeaker or loudspeaker array offers adequate coverage and sound pressure level (SPL). Ultimately, a reproduction strategy aligned with the artistic intent but conditional to a capable system stands as the key to achieving success.

The Role of Transcoders in SPAT Revolution

Questions often arise about using transcoders, such as master transcoders, in SPAT Revolution to convert between speaker arrangements. It's crucial to note that these transcoders primarily facilitate the transcoding of ambisonic stream formats to channel-based speaker setups. Channel-based to channel-based transcoding serves as a matrix tool only, not a medium for down/up mixing from one speaker setup to another.

The construction of a SPAT Revolution soundscape

Understanding the components of a SPAT Revolution soundscape reveals the essential elements for a successful transition across systems:

Source Object (Position and parameters)

The foundation of the soundscape you are building is about manipulating all the source object properties. The efficacy of panning techniques and the reproduction system determines the experiential outcome. Adapting speaker arrangements and techniques in SPAT Revolution facilitates easy migration to diverse systems. This can be pretty much done ‘on the fly’.

Source Object Attenuation Model

The model, relying on source distance from the center reference point, influences amplitude and air absorption simulations. When employed, the protection zone establishes a pivotal threshold where processing initiates the 'simulated' distance attenuation, influencing both amplitude and spectrum in the case of air absorption (a minor high-frequency roll-off). This simulation aims to replicate our natural hearing experience in relation to distance. Overlooking this aspect while 'scaling' a mix can notably affect the resulting mix quality.

By default, SPAT's standard normalized arrangements set the distance threshold at 2 meters, aligned with the speaker line, and it is typically extended to the farthest speaker in larger or non-normalized systems. Thus, this distance, spanning from the center reference point (0) to the protection zone's boundary, defines the threshold. To illustrate: if the threshold is set at 10 meters and your source is 20 meters away, the distance has doubled, resulting in a 6 dB loss per the inverse-square law. The drop factor ratio, a key source parameter, governs this 6 dB reduction. Notably, this concept is spherical in nature and isn't contingent on the speaker setup's geometry. Consequently, appropriately scaling this factor in tandem with session setup alterations proves pivotal. This adjustment will impact the 'direct/reverb omni ratio' when utilizing SPAT's room reverb engine.
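The arithmetic above can be sketched as a small function. This is a simplified reading of the attenuation model (ignoring air absorption), assuming a drop factor of 1 equals 6 dB per doubling of distance; it is an illustration, not SPAT's actual implementation:

```python
import math

def distance_attenuation_db(distance_m, threshold_m=2.0, drop_factor=1.0):
    # No simulated attenuation inside the protection zone
    if distance_m <= threshold_m:
        return 0.0
    # Beyond the threshold: 6 dB (scaled by the drop factor) per doubling
    return -6.0 * drop_factor * math.log2(distance_m / threshold_m)

# Threshold at 10 m, source at 20 m: distance doubled, hence -6 dB
print(distance_attenuation_db(20.0, threshold_m=10.0))  # -6.0
```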

The Room's Reverberation Model

When applied to some or all audio source objects, the ‘room’ reverb settings play a central role in tailoring the mix to achieve the desired room ambiance. This adjustment typically occurs on-site, allowing you to mold the mix’s sonic character. However, challenges arise when transitioning from a rather acoustically inert or ‘dead’ room to a space with pronounced reverberation. In such instances, careful consideration is needed to either harmonize the reverb model with the new environment or judiciously omit or limit its use. A good starting point is typically changing the first reflections settings (Early and Cluster) and the overall reverb gain. Relocating from a studio setting to an open outdoor space presents a comparatively simpler scenario, as it offers a higher degree of control over acoustic characteristics.

Bringing It All Together

Now, having meticulously crafted your mix or artistic creation within the confines of the studio, the pivotal moment arrives: the transition to the live venue. Throughout the creative process, you've been fine-tuning your mix on a specific speaker monitoring setup, utilizing a set protection zone that emulates distance, and potentially crafting snapshots and automation using tools like DAWs or remote control applications such as QLab. As you establish a new speaker arrangement tailored to the venue, the transformation begins. The instant you apply this new arrangement to your mix session, SPAT Revolution springs into action, automatically initiating a scaling process. An alert marked 'NEW SPEAKER ARRANGEMENT' appears, advising you that the new arrangement's speaker distance differs from the previous one. Consider this: if your studio's farthest speaker sat at a distance of 5 meters, and the new venue configuration extends it to 25 meters, SPAT Revolution will propose a scaling factor of 5. When accepted and applied, this factor cascades across multiple facets:

  • Each current source’s distance parameter
  • The protection zone, integral to the attenuation model
  • A distance scaling factor within the room output section dynamically impacting recalled snapshots, plugin-based automation, and incoming OSC messages.
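The cascade above can be sketched in a few lines, using the 5 m to 25 m example; the numbers and helper names are hypothetical, illustrating only the proportional scaling principle:

```python
def scaling_factor(old_farthest_m, new_farthest_m):
    # SPAT proposes the ratio between the new and old farthest speaker
    return new_farthest_m / old_farthest_m

def apply_scaling(source_distances_m, protection_zone_m, factor):
    # The factor cascades to every source distance and the protection zone
    return [d * factor for d in source_distances_m], protection_zone_m * factor

factor = scaling_factor(5.0, 25.0)  # studio 5 m farthest -> venue 25 m
distances, zone = apply_scaling([2.0, 4.5], 2.0, factor)
print(factor, distances, zone)  # 5.0 [10.0, 22.5] 10.0
```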

And there you have it – a seamless integration of the new arrangement. This methodology seamlessly synchronizes with direction-based techniques, as it fundamentally maintains the consistency of angles that these techniques rely upon. Additionally, the protection zone, enriching the mix with its attenuation model, undergoes appropriate scaling. Consequently, all automation, snapshots, and remote communications harmoniously adapt to the new system’s scale, thanks to the dynamic scaling fostered by the distance scaling factor within the room output section.

Navigating Challenges and Conclusion

While the automated scaling works seamlessly with direction-based techniques, such as vector-based methods, it might not serve as a panacea for position-based approaches such as DBAP or the Wave Field Synthesis reproduction approach. The intricacies arise from the unique geometry of the speaker arrangement, which is more complex than a mere distance-based scale factor can capture. The alteration in the spatial relationship between the speakers and sources, as dictated by the new setup, can yield substantially different outcomes. It's essential to note that this doesn't imply a breakdown, but it does signify a distinct rendition of the audio.

In such cases, it may be strategic to make the pre-production environment and in-studio monitoring system more closely aligned, in reproduction technique and speaker setup geometry, with the future venue system, for example by using a scaled-down version of it.

Ambisonics: How to Handle Subwoofers and Low Frequency Effect
https://www.flux.audio/2023/10/20/ambisonics-how-to-handle-subwoofers-and-low-frequency-effect/
Fri, 20 Oct 2023

In this article, we will look at different methods to handle subwoofer and bass extension when dealing with Ambisonics in SPAT Revolution. Specifically, we will investigate solutions at the reproduction stage as well as the creation stage.


LFE vs Bass Management

Initially, it's important to clarify that we shouldn't confuse the primary role of the LFE (Low Frequency Effect) channel with the function of subwoofers. While the LFE bus is specifically used for low-frequency effects, subwoofers not only handle these effects but also assist with bass management, complementing main speakers whose low-frequency range is limited. Whether in a mixing/monitoring system, an in-venue system, or even certain home systems, there should be provisions to accommodate this. Additionally, some of the methods mentioned below are tailored for unique subwoofer content, deviating from traditional approaches.

Ambisonic Generalities

Ambisonics is often described as a scene-based spatialization technique. Audio channels do not refer to a specific speaker in space, but rather to a part of the space. To make it work, ambisonics has to be decoded to a certain speaker layout. So if you are creating an ambisonic format mix (encoding) or simply want to audition an ambisonic source from a recording, it will need to be decoded to a layout.

Ambisonic panners rarely give access to any kind of aux send for feeding a Low Frequency Effect bus. Equivalently, ambisonic decoders rarely support subwoofers or bass management directly. This article discusses three ways to handle such use cases in SPAT Revolution.
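At its core, the decoding step described above is a matrix operation: each speaker feed is a weighted sum of the ambisonic channels. A toy sketch follows; the 2-speaker matrix is made up for illustration and is not a real decoder design:

```python
def decode(ambi_frame, decoder_matrix):
    # One output sample per speaker: the dot product of a matrix row
    # with the ambisonic channel frame
    return [sum(w * s for w, s in zip(row, ambi_frame)) for row in decoder_matrix]

# A 1st-order frame (ACN ordering: W, Y, Z, X) and a toy 2-speaker matrix
feeds = decode([1.0, 0.5, 0.0, 0.25],
               [[0.5, 0.5, 0.0, 0.0],
                [0.5, -0.5, 0.0, 0.0]])
print(feeds)  # [0.75, 0.25]
```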

Reproduction Workflows

Reproduction workflows assume that you have an ambisonic mix you want to decode on an existing speaker layout. We are then looking for a bass management solution over the ambisonic stream.

Omni Sub Send

The initial case we'll address involves scenarios where spatialization in the subwoofer isn't desired or necessary. In these situations, we can directly route the W channel from the ambisonic stream to the subwoofer channel.

Extracting the W channel to feed a subwoofer

By following the setup shown in the screenshot above, you can then change the send level to the subwoofers by using the gain of the ‘W to Sub’ master block.
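Conceptually, the 'W to Sub' routing amounts to taking channel 0 of the stream (W, the omnidirectional component, in ACN ordering) and applying a gain. The sketch below illustrates the idea, not SPAT's implementation:

```python
def w_to_sub(ambi_frames, sub_gain=1.0):
    # W is the omnidirectional component: channel 0 of each frame (ACN)
    return [frame[0] * sub_gain for frame in ambi_frames]

frames = [(0.5, 0.1, -0.2, 0.3), (1.0, 0.0, 0.0, 0.0)]  # 1st-order frames
print(w_to_sub(frames, sub_gain=0.5))  # [0.25, 0.5]
```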

You can download this template to test this approach.


Specific decoding for subwoofer

If you wish to have a sense of spatialization in the low end, you can use a second ambisonic decoder that specifically targets the speaker layout formed by the subwoofers. Here, it is often preferable to use the 'in-phase' decoding strategy, as it ensures that no out-of-phase signal will be generated. It comes at the price of poorer sound localization, which should not be much of an issue in this frequency region.

InPhase decoding mode for low-end content

See this template for further information.
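For the curious: in-phase decoding guarantees no out-of-phase content because its per-order weights are all non-negative and taper to zero at the highest order. Assuming the classic 2D formulation from the ambisonics literature (which may differ from SPAT's internals), the weights are g_m = (N!)^2 / ((N+m)!(N-m)!):

```python
from math import factorial

def in_phase_weights_2d(order):
    n = order
    # g_m = (N!)^2 / ((N+m)! (N-m)!), for m = 0..N; all non-negative
    return [factorial(n) ** 2 / (factorial(n + m) * factorial(n - m))
            for m in range(n + 1)]

print(in_phase_weights_2d(1))  # [1.0, 0.5]
print(in_phase_weights_2d(2))  # higher orders progressively attenuated
```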

Creation workflow


Per-Source LFE Send

It is possible to create an 'LFE send' kind of workflow that is independent for each source. The idea is to create a second room dedicated to the subwoofers. As a first approach, this room should be set to mono and have its reverb muted. You can then use the room-specific gain as a 'send to LFE' control.

Example with a mono room acting as an LFE bus


To easily see the gain parameter related to the LFE room, you can use the search field in the parameters section. For example, 'strict:RoomGain2' only displays the specific gain of the second room.

Example of control filters to only display the gain of the second room

Download this template to get more detail on this setup.

To extend this concept, you could use a second HOA room instead of a channel-based mono room. Such an ambisonic room should be set in 2D mode, as a 3D subwoofer arrangement seems like a non-existent use case, and it may also use a lower order than the main room. This will then give you two ambisonic streams: one dedicated to full-range speakers, and one only to subwoofers.

Dedicated Ambisonic Room for LFE management. Note the use of a dedicated HOA output for recording purposes.

See this template for further details.

It is also possible to combine both a dedicated room for LFE content and a bass management strategy to cover most of the playback situations you may encounter.

Conclusion

It is possible to handle subwoofers in Ambisonics reproduction by:

  • Extracting the W channel and sending it to the subwoofers
  • Using a second ambisonic decoder to target only the arrangement of subwoofers.

If you are at the content creation stage, and are already working with a HOA room, you could use a second HOA room to replicate a ‘send to LFE’ type of workflow.

Reporting latency for delay compensation in SPAT Revolution
https://www.flux.audio/2022/10/18/reporting-latency-for-delay-compensation-in-spat-revolution/
Tue, 18 Oct 2022

This article follows a generic article on the Delay and Compensation mechanism.

As mentioned in different articles, when using audio devices to route to/from SPAT Revolution, Pro Tools handles the needed delay compensation based on your routing and plugin usage. That being said, when you extract the audio via the SPAT plugin's Local Audio Path (LAP), delay compensation is not taken into account with respect to the other objects in your session. In Pro Tools, delay compensation happens down the line, between tracks and the final bus they feed.

The problem is simple: you insert a SPAT Send plugin in LAP mode on a strip after other plugins that may introduce latency. To address this, the SPAT Revolution 22.9 update includes a solution that lets you report the strip's latency (in the plugin) and have SPAT Revolution perform the required latency compensation.

Time for a little operation – Delay compensation mechanism in SPAT Revolution!

In the latest plugin interface, a new Input Delay field is available in the SPAT Send plugin. It provides the ability to report the latency of the audio object in samples. Once declared, when these audio sources are connected in SPAT Revolution, a delay compensation mechanism applies the needed delay to each object input in SPAT Revolution, ensuring they are aligned as they would be within typical DAW routing.

Reporting Input delay in SPAT Send, SPAT Revolution delay compensating all other objects
SPAT Send plugin, SPAT Revolution Input Delay

Although this operation is manual, it simply means reporting the delay information of the track itself in the dedicated field of the plugin. Below are three (3) SPAT object aux tracks, one of which incurs a delay because of a latency-inducing plugin. You can simply open the user interface and report the track delay.
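The alignment principle can be sketched as follows. This illustrates the idea (delay every input to match the worst reported latency), not SPAT Revolution's actual code, and the track names and sample counts are hypothetical:

```python
def compensation_delays(reported_latency_samples):
    # Delay each input by the gap between the largest reported latency
    # and its own, so all objects end up time-aligned
    worst = max(reported_latency_samples.values())
    return {name: worst - lat for name, lat in reported_latency_samples.items()}

# Three object aux tracks; one sits behind a 128-sample look-ahead plugin
print(compensation_delays({"Obj 1": 0, "Obj 2": 0, "Obj 3": 128}))
# {'Obj 1': 128, 'Obj 2': 128, 'Obj 3': 0}
```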

Need more information on Pro Tools integration? It can be found in the Pro Tools section of the SPAT Revolution User Guide.

Using Pro Tools routing folders with SPAT Revolution
https://www.flux.audio/2022/10/17/using-pro-tools-routing-folders-with-spat-revolution/
Mon, 17 Oct 2022


In a Previous Tech Article, we’ve covered the basics of integrating SPAT Revolution into the Pro Tools environment. At the base was the use of the SPAT Revolution send plugin and the Local Audio Path (LAP) mode to route your audio to your SPAT Revolution rendering engine. 

As the session and routing requirements grow, a few things must be kept in mind. Keeping things organized in the session and adopting an object-based workflow becomes ideal. 

Many advantages come from adopting an object track-based workflow and taking advantage of routing folders (Pro Tools 2020.3 and above) and the newest routing possibilities if you have Pro Tools 2022.9 or later (the new AUX I/O). An article on Pro Tools 2022.9 AUX I/O covers this part of the subject.

Below are two examples of routing, the first one using the Local Audio Path mechanism, the second using actual audio devices.

Adopting an Object oriented workflow and using aux / routing folders with LAP
Adopting an Object oriented workflow and using aux / routing folders with audio devices / AUX I/O

Using routing folders (aux tracks) as your SPAT send objects

  • Allows you to sum multiple audio tracks into the same audio object
  • These auxes become your SPAT Revolution objects
  • Using the routing folders (auxes) simplifies routing and organization and keeps things clean
  • Since Pro Tools inserts are pre-fader only, using the SPAT Send plugin (LAP mode) to extract audio directly on audio tracks loses the ability to automate fader functions

The newest templates in the Pro Tools section of the SPAT Revolution User Guide are adopting exactly this workflow. Both using external audio bridging solutions or the Local Audio Path (LAP) feature of the SPAT Send plugin.

Pro Tools Routing folder as your SPAT Objects routed to your Bridge to SPAT

As mentioned above, the routing folder track is at its base an aux track. Routing folders come with the advantage of keeping things organized. When you create a routing folder, it does a few things for you: it creates the aux patched to a specific bus for input. In the example above, this is the "SPAT Mono Object 1" audio bus, to which you can route any audio track. The key takeaway is that, once done, the only thing left is to route this audio object routing folder to the desired audio output, or to use the SPAT Send plugin with Local Audio Path on it.

Moving your audio tracks to the routing folders

If you have audio tracks that you want to send to a routing folder, you can simply right-click on that actual track and choose the Move to function.

You can then move to any previously created routing folders or create one on the fly.

Moving an audio track to an existing routing folder or creating a new one


This is of course possible with a group of audio tracks as well, for example a group that you want to 'declare' as a single audio object to SPAT Revolution. The last step is to assign the SPAT Send plugin to it and either activate the Local Audio Path (LAP) or patch the output to the desired audio bridge output.

Rapidly creating a routing folder from 4 audio tracks in Pro Tools

For example, in the above animation, you have 4 audio tracks to declare as an object. Select them, right-click, and choose 'Move to new folder' (or simply press Shift+Command+Option+N on a Mac or Shift+Control+Alt+N on Windows) with routing. Give the folder a name (such as your object name) and you are done.

Instantiating the SPAT Send plugin on the Routing folder 

The last step is to insert the SPAT Send plugin on the actual routing folder(s). This exposes all the SPAT Revolution source parameters to Pro Tools for writing your automation. If you want to use the Local Audio Path (LAP) option rather than actual audio I/O devices to route to SPAT Revolution, simply enable the mode in the plugin interface. You can hold the Ctrl key while enabling it to apply the change to all instances of the plugin in your session.

Inserting the SPAT Send plugin and optionally enabling the Local Audio Path (LAP) mode

Routing using Local Audio Path (LAP)

If you plan to use the Local Audio Path (LAP) function to route the audio to SPAT Revolution, you have to make sure to follow the proper routing of those tracks to maintain good sync.

This routing has all objects routed to a SPATSync bus (namely, a dummy bus). This is explained in the Pro Tools integration to SPAT Revolution article and throughout the Pro Tools section of the SPAT Revolution User Guide.

Ideally, use the provided templates as a starting point to understand this important routing well. It involves routing all your SPAT objects to a single SPATSync bus and making sure the SPAT Revolution render return track(s) in Pro Tools, using the SPAT Return plugin, are patched to this bus as an input. With that, good sync is maintained.

Pro Tools integration to SPAT Revolution (2022)
https://www.flux.audio/2022/10/14/pro-tools-integration-to-spat-revolution-2022/
Fri, 14 Oct 2022


This article examines the basics of using SPAT Revolution with a Pro Tools workstation, updated for the most recent Pro Tools features.

Ultimately, what we have going is one of the following:

  1. The actual spatial mix of all the Pro Tools audio tracks (audio objects) is rendered in SPAT Revolution, and the resulting stream(s) (the renders) are returned to Pro Tools for monitoring and bouncing.

  2. Only a selected portion of the mix is rendered by SPAT Revolution, such as when you are creating some bed elements in SPAT Revolution (Stereo, 7.1 or 7.1.2) for a much larger session. For example, such a bed can later be declared as the channel-based 7.1.2 bed in a Dolby workflow.

  3. We are in an in-line workflow where Pro Tools audio tracks (audio objects) are rendered by SPAT Revolution, which in turn outputs to an audio system.
In each case, we are looking at using SPAT Revolution with a Pro Tools environment to render, highlighting the need for audio routing and good practice to maintain synchronization.

Insert vs. Inline workflow with Pro Tools

 Various workflows are possible:

  • Single computer
    • SPAT Local Audio Path mechanism (LAP)
    • Audio bridging solution 
      • Audio aggregate devices
      • Pro Tools AUX I/O (PT 2022.9 and up)
  • Dual Computer
    • AoIP (AVB, AES67, Dante), MADI, Soundgrid
    • Other high channel-count audio interfaces

In some workflows, we need to make sure that the latency (produced by some processing plugins) on the audio tracks/objects sending to SPAT Revolution is properly compensated. While latency is well handled by Pro Tools when routing to actual audio devices, some use cases of the FLUX:: Local Audio Path mechanism (LAP) may require specific attention to delay compensation. The generic article on the Delay and Compensation mechanism in SPAT Revolution covers the basics, while the article Reporting delay for compensation in Pro Tools goes into the details.

Simplicity with the SPAT plugin and Local Audio Path (LAP)

*Single Computer workstation using SPAT Send & LAP* 

Integrating Pro Tools with SPAT Revolution can be as simple as adding the SPAT Send plugin to multiple audio tracks, which become your SPAT source objects. From this point, a multichannel track or aux input is added where the SPAT Return plugin handles returning the SPAT Revolution render to Pro Tools. This render can be a 7.1.2 bed, a larger HOA 3rd-order scene, or a binaural mix from SPAT Revolution. You can also consider doing simultaneous renders with the 'multi-room' environment.

Single format basic example of routing when using Local Audio Path

Multiple simultaneous renders using Local Audio Path

One of the key takeaways from the above pictures is that the audio track outputs (where the SPAT plugin extracts the actual audio) need to be routed to a common bus. The same applies to the return render(s): they must all have that common bus as an input. More on this below.

The Local Audio Path (LAP) feature of the plugin provides a simple solution for the audio integration between SPAT Revolution and Pro Tools, as well as for the SPAT object metadata, passing all parameters to Pro Tools for automation. Once you enable the audio path, the Pro Tools source simply appears in SPAT Revolution and is ready to be connected in the SPAT Revolution environment.

Enabling LAP on the SPAT Send plugin instantaneously generates the input source in SPAT Revolution

The FLUX:: Local AudioPipe (LAP) technology, residing inside the SPAT plugin suite, is used to extract and declare your audio elements (audio tracks/buses becoming objects) to/from the object-based mixing/rendering SPAT Revolution application. 

This extraction, happening from the plugin insert, can happen:

  1. Directly on the audio track
  2. On an aux bus (an object bus)
  3. Using routing folder track as object track / bus.
Inserting the SPAT Send plugin on Routing Folder track and optionally enabling local audio path (LAP)

While someone may be tempted to simply extract on the audio track, this comes with some caveats:

  • Pro Tools only has pre-fader inserts, meaning the signal being extracted does not take into account your track automation/controls (volume, mute, solo, …).
  • Delay/latency generated by plugins on the insert chain isn't compensated, as any compensation mechanism happens down the line, between tracks and the buses they feed.

One good way to deal with the pre-fader reality is the use of aux tracks to keep the object-based workflow organized and routed in Pro Tools. While it means an extra layer in a DAW session, aux tracks do the trick to counter the problem of pre-fader inserts. They essentially become the audio objects that are extracted and declared to SPAT Revolution. This also means that an audio object need not be a single audio track element; it can be multiple tracks playing at the same or different times, a sum of multiple audio elements.

Adopting an Object oriented workflow and using aux / routing folders to do so

Thanks to routing folder tracks in Pro Tools (2020.3 and above), which are ultimately aux tracks, we can use a nesting system to keep large sessions organized. All audio elements remain under an object folder, and we use the auto-routing capabilities of these folders, reducing the patching/management steps. The article Using Pro Tools routing folders with SPAT Revolution dives into this conversation.

To ensure proper synchronization for this integration, you have to follow the proper routing of those tracks through a common bus. We often refer to this common bus as the SPATSync bus, something we referred to in the past as the “Dummy bus.”

It involves routing all your SPAT Objects to a single SPATSync bus and making sure the SPAT Revolution return track(s) in Pro Tools, hosting the SPAT Return plugin, are patched to this bus as an input.

This is explained in the Pro Tools section of the SPAT Revolution User Guide. Ideally, use the provided templates as a starting point to understand this important routing well. With that, good sync is maintained.

AVID Pro Tools Templates for SPAT Revolution

            SPATSync bus

Audio Bridging solution

Single computer inline workflow with audio bridge solution

A second way to deal with single-computer integration is to rely on an audio bridge device between the applications, most commonly seen in macOS environments. If this audio bridge device is used to send to SPAT Revolution while SPAT Revolution uses a different audio output device (for monitoring, feeding an audio system, or sending the render into another application), this solution works.

Thanks to the recently added support for separate input and output audio devices in SPAT Revolution, you can now use audio bridging for input while using your audio interface for SPAT Revolution output. One challenge remains if you need to return the render to Pro Tools for bouncing/monitoring: when the Pro Tools playback engine is set to the audio bridge device, it provides an I/O solution to SPAT Revolution, but you have no way to actually send your monitoring output to an audio device. To the rescue comes the use of aggregate devices, available natively in macOS or with some specific drivers in Windows.

Example of an aggregate device containing an audio bridge device for your in-between-applications routing, with a total count of 112 x 116 I/O

From the example above, you can route the SPAT Objects on channels 1-64 of your aggregate device (the BlackHole 64-channel device, for example) and return from SPAT Revolution on some of these 64 channels, while using channels 65-80 to route your monitoring buses (in the above case, to a Merging Technologies AES67 driver).
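To reason about where a given aggregate-device channel actually lands, a small helper can map it back to its sub-device. This is an illustrative sketch only: it assumes the bridge (e.g. a 64-channel BlackHole device) is listed first in the aggregate, followed by the hardware/AES67 driver; the names and sizes are placeholders to adapt to your own setup.

```python
def aggregate_channel(ch, bridge_channels=64):
    """Map a 1-based aggregate-device channel to (sub-device, local channel).

    Assumes the audio bridge is the first sub-device in the aggregate,
    followed by the hardware driver -- adjust to match your own aggregate.
    """
    if 1 <= ch <= bridge_channels:
        return ("bridge", ch)                      # in-between-applications routing
    return ("hardware", ch - bridge_channels)      # monitoring buses, etc.

# Channel 65 of the aggregate is the first channel of the hardware driver:
device, local = aggregate_channel(65)
```

With such a layout, channels 1-64 stay inside the bridge for application-to-application routing, while 65 and up address the physical outputs.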

Single computer insert workflow with Audio Bridge or AUX I/O Routing

Thanks to the new AUX I/O system in Pro Tools 2022.9, this is simplified and new routing options are possible. Use cases include working with an HDX system as a playback engine and, overall, simplifying work with many audio devices. The article Pro Tools 2022.9 AUX I/O covers this part of the subject.

The last part of this integration is to configure the OSC connection in SPAT Revolution. This is already pre-configured in the SPAT plugin suite when the LAP is not enabled. More on the subject in the OSC Connectivity section below.

Dual Computer workstation

Dual computer inline workflow with AoIP Routing

Recommended for larger studio projects (with heavy plugin processing or video tracks) or for real-time/live production, the dual-computer scenario simply uses one computer for the Pro Tools playback while a second dedicated computer handles SPAT Revolution real-time rendering. The routing mechanism in this case relies on a high-channel-count audio interface, such as what is possible with AoIP (AVB, AES67, Dante), MADI, SoundGrid and the like.

With such audio routing, you can also adopt an insert workflow and return the renders from SPAT to Pro Tools for bouncing and monitoring distribution.

Dual computer insert workflow with AoIP Routing

OSC Connectivity

SPAT Send and SPAT OSC connection using a network

While the audio is handled by a high-channel-count audio interface, the SPAT plugin suite still applies, which also means you can move from a single-computer to a dual-computer workflow easily. By default, when the plugins aren't using the FLUX:: LAP feature, they send/receive the automation over the network using OSC commands to SPAT Revolution.

By default, they use the local loopback address 127.0.0.1, which is what we need for a single-computer configuration and when using an audio bridging solution. In the case of dual computers, you simply have the bidirectional messages transit over the network interfaces of a common network between both computers. In some use cases, when using a virtual sound card (such as DVS) on the network, we recommend having two network interfaces: one for the audio, the second for the control.

The post Pro Tools integration to SPAT Revolution (2022) appeared first on FLUX:: Immersive.

Pro Tools 2022.9 AUX I/O routing with SPAT Revolution https://www.flux.audio/2022/10/13/pro-tools-2022-9-aux-i-o-routing-with-spat-revolution/ Thu, 13 Oct 2022 13:55:12 +0000 https://www.flux.audio/?p=23152 Welcome to AUX I/O in Pro Tools 2022.9 Routing audio between applications and tools, such as with external renderers and workstations, can be challenging to users depending on their configuration. Thanks to Pro Tools 2022.9, new I/O routing options are possible with the AUX I/O feature.  macOS users have come to rely on Core Audio […]

The post Pro Tools 2022.9 AUX I/O routing with SPAT Revolution appeared first on FLUX:: Immersive.


Welcome to AUX I/O in Pro Tools 2022.9

Routing audio between applications and tools, such as with external renderers and workstations, can be challenging to users depending on their configuration. Thanks to Pro Tools 2022.9, new I/O routing options are possible with the AUX I/O feature. 

New Aux I/O feature in Pro Tools I/O Setup

macOS users have come to rely on Core Audio aggregate devices to solve this challenge. Creating such a device allows them to aggregate multiple audio interfaces (physical or virtual) and use them as a single entity. That said, what about the scenario of HDX-based playback hardware that can't be aggregated and needs to be used as the Pro Tools playback engine? What about which audio bridge solution to use reliably between applications? What about preventing digital audio loops, which are prone to happen on patch errors when using an audio bridge?

The new feature proposed by Avid Technology addresses those questions.

New Pro Tools Audio bridge

At the heart of AUX I/O is the Pro Tools Audio Bridge. These new virtual Core Audio devices are installed with Pro Tools 2022.9. Together with AUX I/O, they offer flexible and simultaneous audio routing in and out of Pro Tools.

Audio bridge configurations in 2, 6, 16, 32 and 64 channel versions are available for flexibility and separation when routing to various applications simultaneously on the same computer.

You can use these new devices in the AUX I/O setup, in Audio MIDI Setup, in the Sound preferences, and as inputs and outputs for other applications such as the SPAT Revolution rendering engine.

Routing to SPAT Revolution with AUX IO

The steps to get this done are pretty simple: access the AUX I/O section of the I/O Setup, select the In device, select the Out device, and you are set. These new inputs will appear in the Input section; the same goes for the outputs. Take the time to give them a unique name, such as “Bridge to/from SPAT Revolution.”

Creating a bridge for inputs and outputs to SPAT Revolution

You simply need to choose these virtual audio interfaces in the SPAT Revolution Hardware I/O setup to complete this configuration.

Pro Tools Audio Bridge 64 as input, and 32 as the output of SPAT Revolution Hardware IO

Pro Tools versions and AUX I/O capabilities.

Pro Tools Ultimate has unlimited I/O capabilities, whereas Pro Tools Studio gets unlimited inputs and only 32 outputs (which would then be the maximum number of objects you can send to SPAT Revolution to render). Be reminded that the virtual audio bridge devices installed with Pro Tools receive a maximum of 64 channels; other solutions need to be used for a higher channel count.

Structure for your Pro Tools session and your audio objects.

Last but not least is your Pro Tools session and how you take advantage of the newly created audio routes in Pro Tools. Thanks to routing folders, introduced in Pro Tools 2020.3, there is a very nice way to keep the object-based workflow organized and routed in Pro Tools. Session templates using this mechanism are available in the Pro Tools section of the SPAT Revolution User Guide. Furthermore, the article Using Pro Tools routing folders with SPAT Revolution dives into this topic.

Pro Tools Routing folder as your SPAT Objects routed to your Bridge to SPAT

SPAT Revolution 22.09 – The Anatomy of the SPAT Revolution Snapshot system – Advanced https://www.flux.audio/2022/09/23/spat-revolution-22-09-the-anatomy-of-the-spat-revolution-snapshot-system-advanced/ Fri, 23 Sep 2022 15:11:42 +0000 https://www.flux.audio/?p=23048 In the previous article we talked about the basic snapshots operation : how to create, recall one and how we can manage them into a list in the snapshot page. We also discussed the recall time that can be used to create complex movements. Today we will examine the update mechanism and the version history […]

The post SPAT Revolution 22.09 – The Anatomy of the SPAT Revolution Snapshot system – Advanced appeared first on FLUX:: Immersive.

In the previous article we talked about basic snapshot operations: how to create and recall a snapshot, and how to manage them in a list on the snapshot page. We also discussed the recall time, which can be used to create complex movements. Today we will examine the update mechanism and the version history system. We will also discuss how to apply one change to many snapshots.


Editing snapshots, restore previous states and recovery from mistakes

Building a snapshot list when working on a live show is one step; being able to edit and update it as the show is built is another.

Very simply, in SPAT Revolution, you can update a snapshot. In practice, this means replacing the content stored in the snapshot with the content of the current sound scene.

To update a snapshot, you can either use the button at the bottom of the user interface (available in any context) or use the button on the snapshot page. The first updates the snapshot that is currently recalled; the one in the item list updates the selected (highlighted) snapshot.

In combination with the update feature, we also provide a versioning system. At the top right of the snapshot page sits a “Version history” panel. Select a snapshot and this panel will display up to ten previous versions.

This versioning system can serve several purposes:

  • Doing quick A/B testing to compare two sound sceneries.
  • Modifying content night after night on a tour, while being able to revert to a previous state if needed.
  • Recovering from a mistake!


Updating values across several snapshots

Imagine that, halfway through the creation of all of your snapshots, you start to wonder if your sources are not a bit too far away. If you are working on a big session with dozens of snapshots, editing each of them manually is a huge waste of time.

To tackle this use case, we have designed a propagation system. In simple terms, when you change anything in a sound scene, you can choose to propagate this change to any number of snapshots. This propagation can be done in two ways. It can be absolute: the new position of a source will be the same in all propagated snapshots. The other option is to propagate using trim values: the offset of the new position of the source is applied to all selected snapshots.
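The two propagation modes can be sketched in a few lines of Python. This is an illustrative model only (the parameter keys and data structures are hypothetical, not SPAT's internals): absolute writes the same value into every target snapshot, while relative shifts each stored value by the offset that was applied to the live scene.

```python
def propagate(scene, snapshots, key, new_value, targets, mode="absolute"):
    """Illustrative model of the propagation system (not SPAT's actual code).

    scene:     the live sound scene, {parameter_key: value}
    snapshots: stored snapshots, {snapshot_name: {parameter_key: value}}
    """
    delta = new_value - scene[key]   # offset relative to the current scene
    scene[key] = new_value
    for name in targets:
        if mode == "absolute":
            snapshots[name][key] = new_value       # same value everywhere
        else:  # "relative": shift each stored value by the same offset
            snapshots[name][key] += delta

# Move a source 1 m closer and propagate that -1 m offset to two snapshots:
scene = {"src1.distance": 4.0}
snaps = {"S1": {"src1.distance": 4.0}, "S2": {"src1.distance": 7.0}}
propagate(scene, snaps, "src1.distance", 3.0, ["S1", "S2"], mode="relative")
```

In relative mode each snapshot keeps its own character (S2 stays 3 m further away than S1); in absolute mode both would end up at exactly 3.0 m.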

To use the propagation system, click on the “Propagate” button at the bottom of the user interface, or use the one on the snapshot page. Doing so opens a pop-up window that lets you select which snapshots to update and choose between the absolute and relative modes.

Relative recall

Imagine a scenario where, on a tour, for one night, the singer of a band wants to preserve his voice, so you apply a gain offset of a few decibels. You could choose to propagate this gain offset to all your snapshots, but you would have to reverse the action for the next night. In this case, the ideal choice is our “Relative recall” mode.

This mode can be activated by clicking on the “Relative recall” button at the bottom of the user interface or on the snapshot page. Once the mode is engaged, every change you make in the session is preserved relatively on the next snapshot recall. This means you could add three decibels of gain on the singer's source, activate relative recall and go through all the snapshots as usual. Even if the input gain of this source is modified by some snapshot, the three-decibel offset is preserved as long as relative recall is active. This can also be an easy way to accommodate varying acoustics on a tour.


Recalling snapshots from remote control

When doing live shows, we quickly need to synchronize sound, video, musicians' presets and more. There are popular apps designed for this, such as QLab. To keep things open in SPAT Revolution, we have implemented an OSC grammar for our snapshot system.

You recall a snapshot using this message:

/snapshot/recall index, time, RecallEffectiveSelection, RecallActualSelection, EnableSourcesRecall, EnableRoomsRecall, EnableMastersRecall

Fear not, we will explain everything in this OSC message. /snapshot/recall is the OSC address of our message; everything that follows is an argument. An argument is a value sent to dictate a particular behavior. In this case we can pass up to seven arguments:

  • index: the index of the snapshot to recall
  • time: the recall time, in seconds
  • RecallEffectiveSelection: recalls the selection of sources stored in the snapshot
  • RecallActualSelection: recalls the snapshot only for the currently selected sources
  • EnableSourcesRecall: recalls source parameters
  • EnableRoomsRecall: recalls room parameters
  • EnableMastersRecall: recalls master parameters

A few comments on all of that:

  • Only the first two arguments are mandatory!
  • The index follows the OSC index defined in the snapshot list of the snapshot page. These indexes do not follow the order of the snapshots in the list, which means reorganizing your snapshots does not break the OSC messages.

Here are two examples of snapshot recalls via OSC:

/snapshot/recall 3, 1.5

This message recalls snapshot three in one and a half seconds. It follows the options defined in the snapshot list with regard to which parameters to recall.

/snapshot/recall 1, 2, false, false, true, true, false

This message recalls snapshot one in two seconds. It overrides the snapshot preferences and recalls source and room parameters.
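For remote control from a script, the recall messages above can be assembled with nothing but the Python standard library. The encoder below is a minimal sketch of the OSC 1.0 wire format (null-padded address, type-tag string, big-endian arguments; booleans travel as type tags only), not a full OSC implementation:

```python
import struct

def osc_message(address, *args):
    """Minimal OSC 1.0 encoder (illustration only; supports int, float, bool)."""
    def pad(raw):
        # OSC strings are null-terminated and padded to a 4-byte boundary
        return raw + b"\x00" * (4 - len(raw) % 4)
    tags, payload = ",", b""
    for arg in args:
        if isinstance(arg, bool):        # booleans are carried by the tag alone
            tags += "T" if arg else "F"
        elif isinstance(arg, int):
            tags += "i"
            payload += struct.pack(">i", arg)
        elif isinstance(arg, float):
            tags += "f"
            payload += struct.pack(">f", arg)
    return pad(address.encode()) + pad(tags.encode()) + payload

# /snapshot/recall 3, 1.5 -- recall snapshot index 3 over 1.5 seconds
msg = osc_message("/snapshot/recall", 3, 1.5)
# The second example above, with the five boolean overrides:
msg2 = osc_message("/snapshot/recall", 1, 2.0, False, False, True, True, False)
```

The resulting bytes can then be sent over UDP, for example with `socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(msg, ("127.0.0.1", port))`, where the host and port must match the OSC connection configured in SPAT Revolution (both are assumptions here, not documented defaults).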

Well, this is it for our snapshot journey. We hope that these two articles made all of these features clear enough for you to integrate them seamlessly into your next immersive audio creations.


SPAT Revolution 22.09 – The Anatomy of the SPAT Revolution Snapshot system – Introduction https://www.flux.audio/2022/09/23/spat-revolution-22-09-the-anatomy-of-the-spat-revolution-snapshot-system-introduction/ Fri, 23 Sep 2022 14:57:00 +0000 https://www.flux.audio/?p=23045 Since the 22.09 release, SPAT Revolution offers a whole snapshot system to handle complex scenery changes for both live and studio production situations. The Big picture Our snapshot system is both very simple and very powerful. The main idea is that a snapshot stores everything (source parameters, room properties, everything related to mixing parameters), and […]

The post SPAT Revolution 22.09 – The Anatomy of the SPAT Revolution Snapshot system – Introduction appeared first on FLUX:: Immersive.

Since the 22.09 release, SPAT Revolution offers a complete snapshot system to handle complex scenery changes for both live and studio production situations.

The Big picture

Our snapshot system is both very simple and very powerful. The main idea is that a snapshot stores everything (source parameters, room properties, everything related to mixing parameters), and only the recalls are selective.

Important: setup configurations are not stored in snapshots; you cannot create blocks or change stream type properties with them.

Now that we have these principles in mind, let's dive deeper!

Creating a snapshot

Snapshot creation can be handled in many ways, the simplest being to use the snapshot toolbar at the bottom of the graphical user interface.

Hit the “New snapshot” button, give it a name, and you have your first snapshot.

This creation process can also be done using the shortcut Shift+Spacebar (on both Mac and PC).

Just for the sake of the exercise, move some sources and create a second snapshot with the shortcut.

Navigating through snapshots

At this stage, you should have two snapshots registered in the session. You can use the “Previous” and “Next” buttons to move between them. You can also click on the “Current” button to display a list of all the snapshots available in the session. Clicking on one of the items in the list recalls that snapshot.

Important: beware, if you have changed anything in a snapshot before recalling a new one, those changes will be lost. Remember to update the snapshot by clicking on “Update current” to save all the changes.

Now you know how to create, recall, update and navigate through scenes. But there is a lot more to it.

The brain of the system: The snapshot page

The snapshot page is both a monitoring and an editing tool for all the snapshots in your SPAT Revolution session. Snapshots are displayed as items in a list, with all their properties in columns. They can be reordered, deleted and recalled from here.

Changing global recall options 

By default, the recall properties of all snapshots are common and follow the global options. The global options only recall source properties, with no integration (or interpolation) time. This means that when you recall a snapshot, reverb parameters stay identical and the source parameters are set instantaneously.

You can change the time value and which parameters are recalled in the “global options” top line of the list.

While the Sources category handles all source parameters, Rooms covers all the room properties, including the reverb and output sections, and Masters strictly deals with the Master output modules.

Creating movement with the timing option

The timing option can be a very powerful tool to create complex movements. For example, set five seconds in the global options' timing, go back to the 3D view and recall snapshots. If source positions differ, they will take five seconds to reach their new positions.
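Conceptually, the timing option interpolates parameters from their current values to the snapshot's values over the recall time. A linear sketch of that behavior (SPAT's actual interpolation curve may differ) looks like this:

```python
def recall_position(start, end, elapsed, recall_time):
    """Position of a source `elapsed` seconds into a timed snapshot recall."""
    if recall_time <= 0:
        return end                                     # instantaneous recall
    a = min(max(elapsed / recall_time, 0.0), 1.0)      # clamp progress to [0, 1]
    return tuple(s + (e - s) * a for s, e in zip(start, end))

# Halfway through a five-second recall, a source moving from (0,0,0) to (10,0,0):
pos = recall_position((0.0, 0.0, 0.0), (10.0, 0.0, 0.0), 2.5, 5.0)
```

Once the elapsed time exceeds the recall time, the source simply sits at its new position.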

Overriding Global options

Of course, you are not forced to use the same recall parameters for all your snapshots. Each of them can have specific options, including a custom recall time and a custom recall scope (source, room or master parameters).

In the screenshot above, “Snapshot 1” will recall source parameters with an interpolation time of two seconds, while “Snapshot 2” will recall source and master parameters with an interpolation of three seconds. Neither follows the global options, because that option is unchecked for both.

Advanced snapshot management

In the following article, we go one step further and talk about version history, parameter propagation, relative recall and OSC remote control.

The anatomy of the SPAT Revolution Snapshot system – Advanced


SPAT Revolution 22.09 – Using the new input Delay and Compensation feature https://www.flux.audio/2022/09/23/spat-revolution-22-09-using-the-new-input-delay-and-compensation-feature/ Fri, 23 Sep 2022 14:52:16 +0000 https://www.flux.audio/?p=23036 Since the SPAT Revolution 22.09 update, each and every input block now features a delay line. There are several technical usages where such additions can make your life much easier. How to apply a delay on a particular input ? This new option is available by selecting an input block in the SPAT’s setup page. […]

The post SPAT Revolution 22.09 – Using the new input Delay and Compensation feature appeared first on FLUX:: Immersive.

Since the SPAT Revolution 22.09 update, each and every input block features a delay line. There are several technical usages where such an addition can make your life much easier.

How to apply a delay on a particular input?

This new option is available by selecting an input block on the SPAT setup page. It appears in the inspector at the right of the graphical user interface. Under “Input delay,” you can simply enter a value, by default in samples, to delay the signal entering SPAT via this input block.

Tip: you can use the newly released Items page to access the delay value of all inputs in a very convenient way.

How to change the delay unit?

Using samples as a delay unit is not the most obvious choice for all use cases. In the SPAT Revolution preferences, it is possible to change this unit to one that makes more sense, depending on your usage.

Navigate to the Preferences page by clicking on the “Preferences” button located at the top right corner of the graphical user interface. Then, in the first panel, named “Global,” you will be able to choose a delay unit between meters and milliseconds. Imperial units are available if you uncheck the option labeled “Use metric system.”
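The three delay units relate to each other through the sample rate and the speed of sound. A quick sketch of the conversions, assuming a 48 kHz sample rate and ~343 m/s (both values are assumptions to adapt to your session and conditions):

```python
SPEED_OF_SOUND = 343.0  # m/s, approximate at 20 degrees Celsius (assumption)

def ms_to_samples(ms, sample_rate=48000):
    """Milliseconds of delay expressed in samples at the given sample rate."""
    return round(ms * sample_rate / 1000.0)

def meters_to_ms(meters, c=SPEED_OF_SOUND):
    """Acoustic propagation distance expressed as a delay in milliseconds."""
    return meters * 1000.0 / c

def meters_to_samples(meters, sample_rate=48000, c=SPEED_OF_SOUND):
    """Acoustic propagation distance expressed as a delay in samples."""
    return round(meters / c * sample_rate)
```

For instance, a 3.43 m path difference corresponds to about 10 ms, i.e. 480 samples at 48 kHz, whichever unit you pick in the preferences.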

Input delay in SPAT Revolution send plug-in

SPAT Revolution send plug-ins also feature a new option, labeled “Input delay,” designed to report latency to the application, which will then engage the delay compensation engine. If you happen to have other FX above the SPAT Revolution send in your signal chain that induce latency, you can report this latency directly in the “Input delay” setting.

When to use these delay lines?

In live applications where you have to deal with acoustic sources, it can be more than useful to time-align your different microphones to reduce comb filtering and improve the overall timbre of the instruments. In this use case, using the distance (meter) delay unit may make more sense. For example, if you have a severe tone issue on a guitar amp miked with two microphones, measure the distance between each microphone and the guitar cab and enter the difference in the input delay of the microphone closest to the speaker. This should remove the comb filter effect caused by the two microphones being summed together.

In a studio application, depending on the DAW you are using, the FLUX:: Local AudioPipe technology used to send the audio stream from the DAW to SPAT Revolution may bypass the latency compensation mechanism of the DAW. There are two ways to handle this issue: use as many zero-latency plug-ins as possible, or report the latency of the plug-ins in the SPAT Revolution send plug-in.

For example, if you use a compressor with look-ahead in front of the SPAT send plug-in that adds 100 samples of latency to your signal, report these 100 samples in the SPAT send plug-in and SPAT Revolution will automatically delay every other input by 100 samples.
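The principle behind this automatic compensation can be sketched as follows: given the latency reported per input, every input is padded up to the slowest chain so that all inputs stay aligned. This is an illustrative model of the idea, not SPAT's actual implementation, and the input names are hypothetical:

```python
def compensation_delays(reported_latency):
    """Pad every input up to the slowest reported chain so all stay aligned.

    reported_latency: {input_name: latency in samples, as reported via the
    "Input delay" setting of each SPAT send plug-in}.
    """
    worst = max(reported_latency.values())
    return {name: worst - latency for name, latency in reported_latency.items()}

# A compressor with 100 samples of look-ahead sits before the send on "vox":
delays = compensation_delays({"vox": 100, "gtr": 0, "keys": 0})
```

The already-late "vox" input needs no extra delay, while every other input is held back by 100 samples, matching the behavior described above.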

Which plug-in types tend to add latency?

Stock plug-ins tend to favor a zero-latency approach, but double-check with your specific tools.

  • Linear-phase equalizers are the most likely to add lots of latency to the signal. In the same fashion, most plug-ins using oversampling also add latency.
  • Minimum-phase equalizers may add latency if they use an anti-cramping strategy in the high-frequency range.
  • Dynamics processing usually doesn't add latency to the signal, unless it uses some kind of look-ahead.
  • Also, beware of analog emulations/simulations: as they mainly try to reproduce analog non-linearities, they may involve oversampling and other processing that adds latency. Sometimes, developers offer a zero-latency mode in such plug-ins.

