Issue2082: move real-time effect calculation to producer side...
... of the playback RingBuffer.

Real-time effect calculation may make calls to unknown foreign code in plug-ins.
Can we really trust unknown code to follow the constraints of a low-latency
thread (such as not allocating memory or locking a mutex)?  Rather, let's do it
in the thread that must maintain high throughput but can tolerate more variance
in processing times of batches of data.
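
As a minimal sketch of that division of labour, with hypothetical names rather
than this commit's actual classes: the producer thread may allocate, lock, and
call plug-in code before publishing samples with a single release store, while
the audio callback only copies.

#include <algorithm>
#include <atomic>
#include <cstddef>
#include <cstring>
#include <vector>

struct SketchRingBuffer {
   std::vector<float> data;            // fixed capacity, allocated up front
   size_t writeCursor = 0;             // used by the producer thread only
   std::atomic<size_t> published{0};   // samples made visible to the consumer
};

// Producer thread: tolerant of variance, so plug-in code may run here.
void ProduceBlock(SketchRingBuffer &rb, float *mixed, size_t len,
                  void (*pluginProcess)(float*, size_t))
{
   pluginProcess(mixed, len);          // unknown foreign code is acceptable here
   std::memcpy(rb.data.data() + rb.writeCursor, mixed, len * sizeof(float));
   rb.writeCursor += len;
   // "Flush": one release store makes the new samples visible to the callback.
   rb.published.store(rb.writeCursor, std::memory_order_release);
}

// Audio callback: no allocation, no locks, no plug-in calls; just a copy.
// (Wrap-around of the ring is omitted to keep the sketch short.)
void AudioCallback(const SketchRingBuffer &rb, size_t readCursor,
                   float *out, size_t len)
{
   const size_t avail =
      rb.published.load(std::memory_order_acquire) - readCursor;
   const size_t n = std::min(avail, len);
   std::memcpy(out, rb.data.data() + readCursor, n * sizeof(float));
   std::fill(out + n, out + len, 0.0f);  // pad any shortfall with silence
}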

Note that the "drop quickly" logic on the consumer side is only about the
"micro-fades" that last milliseconds, and smooth the transition to silence when
stopping.  Whether those few microseconds are now realtime-transformed, where
they weren't before -- matters little.

Note that our own code does some mutex locking in
RealtimeEffectManager::ProcessScope, when constructing and destroying it and in
Process(); the lock is not held for the duration of the scope.  Some way to
avoid this locking needs to be found later.
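
For illustration only, a minimal sketch of the locking pattern described above
(SketchProcessingScope is a hypothetical stand-in, not RealtimeEffectManager's
real interface): the mutex is taken briefly in the constructor, in each
Process() call, and in the destructor, never for the whole lifetime of the
scope.

#include <cstddef>
#include <mutex>

class SketchProcessingScope {
public:
   explicit SketchProcessingScope(std::mutex &m) : mMutex{ m } {
      std::lock_guard<std::mutex> guard{ mMutex };  // brief lock at construction
      // ... latch the current effect settings ...
   }
   void Process(float **buffers, size_t len) {
      std::lock_guard<std::mutex> guard{ mMutex };  // brief lock per block
      // ... run realtime effects in-place over buffers[0..nChannels)[0..len) ...
   }
   ~SketchProcessingScope() {
      std::lock_guard<std::mutex> guard{ mMutex };  // brief lock at destruction
      // ... release per-playback state ...
   }
private:
   std::mutex &mMutex;
};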

This will make playback much more laggy in responding to changes of the
realtime effect settings.  This will be fixed later.

(cherry picked from commit 713ced2;
modified for Tenacity)
Signed-off-by: Avery King <[email protected]>
Paul-Licameli authored and generic-pers0n committed Aug 27, 2024
1 parent a39c797 commit 365e164
Showing 2 changed files with 93 additions and 40 deletions.
132 changes: 92 additions & 40 deletions src/AudioIO.cpp
@@ -1591,15 +1591,75 @@ void AudioIO::FillPlayBuffers()
frames, available );
} while (available && !done);

   /* The flushing of all the Puts to the RingBuffers is lifted out of the loop.
      It's only here that a release is done on the atomic variable that
      indicates the readiness of sample data to the consumer. That atomic
      also synchronizes the use of the TimeQueue.
    */
   // Do any realtime effect processing, more efficiently in at most
   // two buffers per track, after all the little slices have been written.
   TransformPlayBuffers();

   /* The flushing of all the Puts to the RingBuffers is lifted out of the
      do-loop above, and also after transformation of the stream for realtime
      effects.
      It's only here that a release is done on the atomic variable that
      indicates the readiness of sample data to the consumer. That atomic
      also synchronizes the use of the TimeQueue.
    */
   for (size_t i = 0; i < std::max(size_t{1}, mPlaybackTracks.size()); ++i)
      mPlaybackBuffers[i]->Flush();
}

void AudioIO::TransformPlayBuffers()
{
   // Transform written but un-flushed samples in the RingBuffers in-place.

   // Avoiding std::vector
   auto pointers =
      static_cast<float**>(alloca(mNumPlaybackChannels * sizeof(float*)));

   std::optional<RealtimeEffects::ProcessingScope> pScope;
   if (mpTransportState && mpTransportState->mpRealtimeInitialization)
      pScope.emplace(
         *mpTransportState->mpRealtimeInitialization, mOwningProject);
   const auto numPlaybackTracks = mPlaybackTracks.size();
   for (unsigned t = 0; t < numPlaybackTracks; ++t) {
      const auto vt = mPlaybackTracks[t].get();
      if ( vt->IsLeader() ) {
         // vt is mono, or is the first of its group of channels
         const auto nChannels = std::min<size_t>(
            mNumPlaybackChannels, TrackList::Channels(vt).size());

         // Loop over the blocks of unflushed data, at most two
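         // (the unflushed region of a circular buffer can wrap past its end,
         // splitting it into two contiguous pieces)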
         for (unsigned iBlock : {0, 1}) {
            size_t len = 0;
            size_t iChannel = 0;
            for (; iChannel < nChannels; ++iChannel) {
               const auto pair =
                  mPlaybackBuffers[t + iChannel]->GetUnflushed(iBlock);
               // Playback RingBuffers have float format: see AllocateBuffers
               pointers[iChannel] = reinterpret_cast<float*>(pair.first);
               // The lengths of corresponding unflushed blocks should be
               // the same for all channels
               if (len == 0)
                  len = pair.second;
               else
                  assert(len == pair.second);
            }

            // Are there more output device channels than channels of vt?
            // Such as when a mono track is processed for stereo play?
            // Then supply some non-null fake input buffers, because the
            // various ProcessBlock overrides of effects may crash without it.
            // But it would be good to find the fixes to make this unnecessary.
            float **scratch = &mScratchPointers[mNumPlaybackChannels + 1];
            while (iChannel < mNumPlaybackChannels)
               pointers[iChannel++] = *scratch++;

            if (len && pScope)
               pScope->Process(vt, &pointers[0], mScratchPointers.data(), len);
         }
      }
   }
}

void AudioIO::DrainRecordBuffers()
{
if (mRecordingException || mCaptureTracks.empty())
@@ -1997,9 +2057,6 @@ bool AudioIoCallback::FillOutputBuffers(

{
auto pProject = mOwningProject.lock();
std::optional<RealtimeEffects::ProcessingScope> pScope;
if (mpTransportState && mpTransportState->mpRealtimeInitialization)
pScope.emplace( *mpTransportState->mpRealtimeInitialization, mOwningProject );

bool selected = false;
int group = 0;
Expand Down Expand Up @@ -2099,39 +2156,34 @@ bool AudioIoCallback::FillOutputBuffers(
// Last channel of a track seen now
len = mMaxFramesOutput;

// Do realtime effects
if( !dropQuickly && len > 0 ) {
if (pScope)
pScope->Process(mTrackChannelsBuffer[0], mAudioScratchBuffers.data(), mScratchPointers.data(), len);
// Mix the results with the existing output (software playthrough) and
// apply panning. If post panning effects are desired, the panning would
// need to be split out from the mixing and applied in a separate step.
for (auto c = 0; c < chanCnt; ++c)
// Mix the results with the existing output (software playthrough) and
// apply panning. If post panning effects are desired, the panning would
// need to be split out from the mixing and applied in a separate step.
for (auto c = 0; c < chanCnt; ++c)
{
// Our channels aren't silent. We need to pass their data on.
//
// Note that there are two kinds of channel count.
// c and chanCnt are counting channels in the Tracks.
// chan (and numPlayBackChannels) is counting output channels on the device.
// chan = 0 is left channel
// chan = 1 is right channel.
//
// Each channel in the tracks can output to more than one channel on the device.
// For example mono channels output to both left and right output channels.
if (len > 0) for (int c = 0; c < chanCnt; c++)
{
// Our channels aren't silent. We need to pass their data on.
//
// Note that there are two kinds of channel count.
// c and chanCnt are counting channels in the Tracks.
// chan (and numPlayBackChannels) is counting output channels on the device.
// chan = 0 is left channel
// chan = 1 is right channel.
//
// Each channel in the tracks can output to more than one channel on the device.
// For example mono channels output to both left and right output channels.
if (len > 0) for (int c = 0; c < chanCnt; c++)
{
vt = mTrackChannelsBuffer[c];

if (vt->GetChannelIgnoringPan() == Track::LeftChannel ||
vt->GetChannelIgnoringPan() == Track::MonoChannel )
AddToOutputChannel( 0, outputMeterFloats, outputFloats,
mAudioScratchBuffers[c], drop, len, *vt);

if (vt->GetChannelIgnoringPan() == Track::RightChannel ||
vt->GetChannelIgnoringPan() == Track::MonoChannel )
AddToOutputChannel( 1, outputMeterFloats, outputFloats,
mAudioScratchBuffers[c], drop, len, *vt);
}
vt = mTrackChannelsBuffer[c];

if (vt->GetChannelIgnoringPan() == Track::LeftChannel ||
vt->GetChannelIgnoringPan() == Track::MonoChannel )
AddToOutputChannel( 0, outputMeterFloats, outputFloats,
mAudioScratchBuffers[c], drop, len, *vt);

if (vt->GetChannelIgnoringPan() == Track::RightChannel ||
vt->GetChannelIgnoringPan() == Track::MonoChannel )
AddToOutputChannel( 1, outputMeterFloats, outputFloats,
mAudioScratchBuffers[c], drop, len, *vt);
}
}

1 change: 1 addition & 0 deletions src/AudioIO.h
@@ -529,6 +529,7 @@ class TENACITY_DLL_API AudioIO final

//! First part of TrackBufferExchange
void FillPlayBuffers();
void TransformPlayBuffers();

//! Second part of TrackBufferExchange
void DrainRecordBuffers();
