A Python library for audio data augmentation. Inspired by albumentations. Useful for deep learning. Runs on CPU. Supports mono audio and multichannel audio. Can be integrated in training pipelines in e.g. Tensorflow/Keras or Pytorch. Has helped people get world-class results in Kaggle competitions. Is used by companies making next-generation audio products.
Need a Pytorch-specific alternative with GPU support? Check out torch-audiomentations!
pip install audiomentations
Some features have extra dependencies. These extra Python package dependencies can be installed by running:
pip install audiomentations[extras]
Feature | Extra dependencies |
---|---|
LoudnessNormalization | pyloudnorm |
Mp3Compression | ffmpeg and [pydub or lameenc] |
RoomSimulator | pyroomacoustics |
Note: ffmpeg can be installed via e.g. conda or from the official ffmpeg download page.
from audiomentations import Compose, AddGaussianNoise, TimeStretch, PitchShift, Shift
import numpy as np
augment = Compose([
    AddGaussianNoise(min_amplitude=0.001, max_amplitude=0.015, p=0.5),
    TimeStretch(min_rate=0.8, max_rate=1.25, p=0.5),
    PitchShift(min_semitones=-4, max_semitones=4, p=0.5),
    Shift(min_fraction=-0.5, max_fraction=0.5, p=0.5),
])
# Generate 2 seconds of dummy audio for the sake of example
samples = np.random.uniform(low=-0.2, high=0.2, size=(32000,)).astype(np.float32)
# Augment/transform/perturb the audio data
augmented_samples = augment(samples=samples, sample_rate=16000)
Check out the source code at audiomentations/augmentations/ to see the waveform transforms you can apply, and what arguments they have.
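Since the transforms take and return plain NumPy arrays, a pipeline like the one above can be applied on the fly during training. Below is a minimal, hypothetical sketch of how this could look inside a PyTorch Dataset (the dataset class and the in-memory list of waveforms are illustrative assumptions, not part of audiomentations):

```python
import numpy as np
from torch.utils.data import Dataset

from audiomentations import AddGaussianNoise, Compose


class AugmentedAudioDataset(Dataset):
    """Hypothetical dataset that augments waveforms on the fly."""

    def __init__(self, waveforms, sample_rate=16000):
        # waveforms: list of 1D float32 numpy arrays with values in [-1, 1]
        self.waveforms = waveforms
        self.sample_rate = sample_rate
        self.augment = Compose([AddGaussianNoise(p=0.5)])

    def __len__(self):
        return len(self.waveforms)

    def __getitem__(self, index):
        samples = self.waveforms[index]
        # Parameters are re-randomized on every call, so each epoch sees new perturbations
        return self.augment(samples=samples, sample_rate=self.sample_rate)
```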
from audiomentations import SpecCompose, SpecChannelShuffle, SpecFrequencyMask
import numpy as np
augment = SpecCompose(
    [
        SpecChannelShuffle(p=0.5),
        SpecFrequencyMask(p=0.5),
    ]
)
# Example spectrogram with 1025 frequency bins, 256 time steps and 2 audio channels
spectrogram = np.random.random((1025, 256, 2))
# Augment/transform/perturb the spectrogram
augmented_spectrogram = augment(spectrogram)
See audiomentations/spec_augmentations/spectrogram_transforms.py for spectrogram transforms.
Some of the following waveform transforms can be visualized (for a better understanding) with the audio-transformation-visualization GUI (made by phrasenmaeher), where you can upload your own input wav file.
Added in v0.9.0
Mix in another sound, e.g. a background noise. Useful if your original sound is clean and you want to simulate an environment where background noise is present.
Can also be used for mixup, as in https://arxiv.org/pdf/1710.09412.pdf
A folder of (background noise) sounds to be mixed in must be specified. These sounds should ideally be at least as long as the input sounds to be transformed. Otherwise, the background sound will be repeated, which may sound unnatural.
Note that the gain of the added noise is relative to the amount of signal in the input. This implies that if the input is completely silent, no noise will be added.
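Below is a minimal usage sketch of AddBackgroundNoise. The folder path is a placeholder, and the SNR parameter names (min_snr_in_db, max_snr_in_db) are assumptions that may differ between versions, so check the transform's documentation:

```python
import numpy as np
from audiomentations import AddBackgroundNoise

# Dummy input signal for the sake of example
samples = np.random.uniform(low=-0.2, high=0.2, size=(32000,)).astype(np.float32)

transform = AddBackgroundNoise(
    sounds_path="/path/to/folder_with_background_noises",  # placeholder path
    min_snr_in_db=3.0,   # assumed parameter name; a lower SNR means louder noise
    max_snr_in_db=30.0,  # assumed parameter name
    p=1.0,
)
augmented_samples = transform(samples=samples, sample_rate=16000)
```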
There are several publicly available datasets that can be downloaded and used as background noise.
Added in v0.1.0
Add Gaussian noise to the samples
Added in v0.7.0
Add Gaussian noise to the input. A random Signal-to-Noise Ratio (SNR) will be picked uniformly in the decibel scale. This aligns with human hearing, which is more logarithmic than linear.
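A minimal usage sketch; the SNR bounds are illustrative example values, and the parameter names min_snr_in_db/max_snr_in_db are the ones referred to in the changelog further down:

```python
import numpy as np
from audiomentations import AddGaussianSNR

samples = np.random.uniform(low=-0.2, high=0.2, size=(32000,)).astype(np.float32)

# SNR is picked uniformly in dB between the two bounds; a lower SNR means more noise
transform = AddGaussianSNR(min_snr_in_db=5.0, max_snr_in_db=40.0, p=1.0)
augmented_samples = transform(samples=samples, sample_rate=16000)
```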
Added in v0.7.0
Convolve the audio with a random impulse response. Impulse responses can be created using e.g. http://tulrich.com/recording/ir_capture/
Some datasets of impulse responses are publicly available:
- EchoThief containing 115 impulse responses acquired in a wide range of locations.
- The MIT McDermott dataset containing 271 impulse responses acquired in everyday places.
Impulse responses are represented as wav files in the given ir_path.
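A minimal usage sketch of ApplyImpulseResponse, assuming a local folder of impulse response wav files (the path below is a placeholder):

```python
import numpy as np
from audiomentations import ApplyImpulseResponse

samples = np.random.uniform(low=-0.2, high=0.2, size=(32000,)).astype(np.float32)

# ir_path points to a folder containing impulse response wav files
transform = ApplyImpulseResponse(ir_path="/path/to/impulse_responses", p=1.0)
augmented_samples = transform(samples=samples, sample_rate=16000)
```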
Added in v0.9.0
Mix in various (bursts of overlapping) sounds with random pauses between. Useful if your original sound is clean and you want to simulate an environment where short noises sometimes occur.
A folder of (noise) sounds to be mixed in must be specified.
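A minimal usage sketch of AddShortNoises; the folder path is a placeholder, and the sounds_path parameter name is an assumption that may differ between versions:

```python
import numpy as np
from audiomentations import AddShortNoises

samples = np.random.uniform(low=-0.2, high=0.2, size=(32000,)).astype(np.float32)

# sounds_path points to a folder of short noise recordings (placeholder path)
transform = AddShortNoises(sounds_path="/path/to/short_noises", p=1.0)
augmented_samples = transform(samples=samples, sample_rate=16000)
```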
Added in v0.18.0, updated in v0.21.0
Apply band-pass filtering to the input audio. Filter steepness (6/12/18... dB/octave) is parametrized. Can also be set for zero-phase filtering (will result in a 6 dB drop at the cutoffs).
Added in v0.21.0
Apply band-stop filtering to the input audio. Also known as notch filter or band reject filter. It relates to the frequency mask idea in the SpecAugment paper. This transform is similar to FrequencyMask, but has overhauled default parameters and parameter randomization - the center frequency gets picked in mel space, so it is more aligned with human hearing, which is not linear. Filter steepness (6/12/18... dB/octave) is parametrized. Can also be set for zero-phase filtering (will result in a 6 dB drop at the cutoffs).
Added in v0.17.0
Clip audio by specified values. e.g. set a_min=-1.0 and a_max=1.0 to ensure that no samples in the audio exceed that extent. This can be relevant for avoiding integer overflow or underflow (which results in unintended wrap distortion that can sound horrible) when exporting to e.g. 16-bit PCM wav.
Another way of ensuring that all values stay between -1.0 and 1.0 is to apply PeakNormalization.
This transform is different from ClippingDistortion in that it takes fixed values for clipping instead of clipping a random percentile of the samples. Arguably, this transform is not very useful for data augmentation. Instead, think of it as a very cheap and harsh limiter (for samples that exceed the allotted extent) that can sometimes be useful at the end of a data augmentation pipeline.
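A minimal sketch of the "cheap and harsh limiter" use described above, with the a_min/a_max values from the description:

```python
import numpy as np
from audiomentations import Clip

# Dummy signal that intentionally exceeds the [-1, 1] range
samples = np.random.uniform(low=-1.5, high=1.5, size=(32000,)).astype(np.float32)

# Hard-limit every sample to [-1.0, 1.0]
transform = Clip(a_min=-1.0, a_max=1.0, p=1.0)
clipped_samples = transform(samples=samples, sample_rate=16000)
```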
Added in v0.8.0
Distort signal by clipping a random percentage of points
The percentage of points that will be clipped is drawn from a uniform distribution between the two input parameters min_percentile_threshold and max_percentile_threshold. If for instance 30% is drawn, the samples are clipped if they're below the 15th or above the 85th percentile.
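A minimal sketch using the min_percentile_threshold/max_percentile_threshold parameters mentioned above (the bounds are example values):

```python
import numpy as np
from audiomentations import ClippingDistortion

samples = np.random.uniform(low=-0.5, high=0.5, size=(32000,)).astype(np.float32)

# Between 0% and 40% of the points get clipped, drawn uniformly per call
transform = ClippingDistortion(
    min_percentile_threshold=0, max_percentile_threshold=40, p=1.0
)
augmented_samples = transform(samples=samples, sample_rate=16000)
```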
Added in v0.7.0
Mask some frequency band on the spectrogram. Inspired by https://arxiv.org/pdf/1904.08779.pdf
Added in v0.11.0
Multiply the audio by a random amplitude factor to reduce or increase the volume. This technique can help a model become somewhat invariant to the overall gain of the input audio.
Warning: This transform can return samples outside the [-1, 1] range, which may lead to clipping or wrap distortion, depending on what you do with the audio in a later stage. See also https://en.wikipedia.org/wiki/Clipping_(audio)#Digital_clipping
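A minimal sketch of Gain; the gain bounds are illustrative and the parameter names min_gain_in_db/max_gain_in_db are assumptions that may differ between versions. It also shows one simple way to guard against the out-of-range values mentioned in the warning:

```python
import numpy as np
from audiomentations import Gain

samples = np.random.uniform(low=-0.5, high=0.5, size=(32000,)).astype(np.float32)

# The gain is picked randomly in dB between the two (assumed) bounds
transform = Gain(min_gain_in_db=-12.0, max_gain_in_db=12.0, p=1.0)
augmented_samples = transform(samples=samples, sample_rate=16000)

# Guard against values outside [-1, 1] after a positive gain, e.g. with np.clip
augmented_samples = np.clip(augmented_samples, -1.0, 1.0)
```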
Added in v0.22.0
Gradually change the volume up or down over a random time span. Also known as fade in and fade out. The fade works on a logarithmic scale, which is natural to human hearing.
Added in v0.18.0, updated in v0.21.0
Apply high-pass filtering to the input audio with parametrized filter steepness (6/12/18... dB/octave). Can also be set for zero-phase filtering (will result in a 6 dB drop at the cutoff).
Added in v0.21.0
A high shelf filter is a filter that either boosts (increases amplitude) or cuts
(decreases amplitude) frequencies above a certain center frequency. This transform
applies a high-shelf filter at a specific center frequency in hertz.
The gain at the Nyquist frequency is controlled by {min,max}_gain_db
(note: can be positive or negative!).
Filter coefficients are taken from the W3 Audio EQ Cookbook
Added in v0.18.0, updated in v0.21.0
Apply low-pass filtering to the input audio with parametrized filter steepness (6/12/18... dB/octave). Can also be set for zero-phase filtering (will result in a 6 dB drop at the cutoff).
Added in v0.21.0
A low shelf filter is a filter that either boosts (increases amplitude) or cuts
(decreases amplitude) frequencies below a certain center frequency. This transform
applies a low-shelf filter at a specific center frequency in hertz.
The gain at DC frequency is controlled by {min,max}_gain_db
(note: can be positive or negative!).
Filter coefficients are taken from the W3 Audio EQ Cookbook
Added in v0.12.0
Compress the audio using an MP3 encoder to lower the audio quality. This may help machine learning models deal with compressed, low-quality audio.
This transform depends on either lameenc or pydub/ffmpeg.
Note that bitrates below 32 kbps are only supported for low sample rates (up to 24000 Hz).
Note: When using the lameenc backend, the output may be slightly longer than the input due to the fact that the LAME encoder inserts some silence at the beginning of the audio.
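A minimal sketch of Mp3Compression; the bitrate bounds (in kbps) and the backend parameter name are assumptions, so check the transform's documentation and make sure the chosen backend's dependencies are installed:

```python
import numpy as np
from audiomentations import Mp3Compression

samples = np.random.uniform(low=-0.2, high=0.2, size=(32000,)).astype(np.float32)

# "pydub" requires ffmpeg; "lameenc" is the alternative backend (assumed parameter name)
transform = Mp3Compression(min_bitrate=32, max_bitrate=128, backend="pydub", p=1.0)
augmented_samples = transform(samples=samples, sample_rate=16000)
```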
Added in v0.14.0
Apply a constant amount of gain to match a specific loudness. This is an implementation of ITU-R BS.1770-4.
Warning: This transform can return samples outside the [-1, 1] range, which may lead to clipping or wrap distortion, depending on what you do with the audio in a later stage. See also https://en.wikipedia.org/wiki/Clipping_(audio)#Digital_clipping
Added in v0.6.0
Apply a constant amount of gain, so that the highest signal level present in the sound becomes 0 dBFS, i.e. the loudest level allowed if all samples must be between -1 and 1. Also known as peak normalization.
Added in v0.23.0
Apply padding to the audio signal - take a fraction of the end or the start of the audio and replace that part with padding. This can be useful when preparing ML models that expect a constant input length to handle padded inputs.
Added in v0.21.0
Add a biquad peaking filter transform
Added in v0.4.0
Pitch shift the sound up or down without changing the tempo
Added in v0.11.0
Flip the audio samples upside-down, reversing their polarity. In other words, multiply the waveform by -1, so negative values become positive, and vice versa. The result will sound the same compared to the original when played back in isolation. However, when mixed with other audio sources, the result may be different. This waveform inversion technique is sometimes used for audio cancellation or obtaining the difference between two waveforms. However, in the context of audio data augmentation, this transform can be useful when training phase-aware machine learning models.
Added in v0.8.0
Resample signal using librosa.core.resample
To do downsampling only, set both the minimum and the maximum sampling rate lower than the original sampling rate; to do upsampling only, set both higher than the original sampling rate.
Added in v0.18.0
Reverse the audio. Also known as time inversion. Inversion of an audio track along its time axis relates to the random flip of an image, which is an augmentation technique that is widely used in the visual domain. This can be relevant in the context of audio classification. It was successfully applied in the paper AudioCLIP: Extending CLIP to Image, Text and Audio.
Added in v0.23.0
A ShoeBox Room Simulator. Simulates a cuboid of parametrized size and average surface absorption coefficient. It also includes a source and microphones in parametrized locations.
Use it when you want a ton of synthetic room impulse responses with specific characteristics, or simply to quickly add reverb for augmentation purposes.
Added in v0.24.0
Adjust the volume of different frequency bands. This transform is a 7-band parametric equalizer - a combination of one low shelf filter, five peaking filters and one high shelf filter, all with randomized gains, Q values and center frequencies.
Because this transform changes the timbre, but keeps the overall "class" of the sound the same (depending on application), it can be used for data augmentation to make ML models more robust to various frequency spectrums. Many things can affect the spectrum, like room acoustics, any objects between the microphone and the sound source, microphone type/model and the distance between the sound source and the microphone.
The seven bands have center frequencies picked in the following ranges (min-max): 42-95 Hz, 91-204 Hz, 196-441 Hz, 421-948 Hz, 909-2045 Hz, 1957-4404 Hz and 4216-9486 Hz.
Added in v0.5.0
Shift the samples forwards or backwards, with or without rollover
Added in v0.19.0
Apply tanh (hyperbolic tangent) distortion to the audio. This technique is sometimes used for adding distortion to guitar recordings. The tanh() function can give a rounded "soft clipping" kind of distortion, and the distortion amount is proportional to the loudness of the input and the pre-gain. Tanh is symmetric, so the positive and negative parts of the signal are squashed in the same way. This transform can be useful as data augmentation because it adds harmonics. In other words, it changes the timbre of the sound.
See this page for examples: http://gdsp.hf.ntnu.no/lessons/3/17/
Added in v0.7.0
Make a randomly chosen part of the audio silent. Inspired by https://arxiv.org/pdf/1904.08779.pdf
Added in v0.2.0
Time stretch the signal without changing the pitch
Added in v0.7.0
Trim leading and trailing silence from an audio signal using librosa.effects.trim
Added in v0.13.0
Shuffle the channels of a multichannel spectrogram. This can help combat positional bias.
Added in v0.13.0
Mask a set of frequencies in a spectrogram, à la Google AI SpecAugment. This type of data augmentation has proved to make speech recognition models more robust.
The masked frequencies can be replaced with either the mean of the original values or a given constant (e.g. zero).
Compose applies the given sequence of transforms when called, optionally shuffling the sequence for every call.
Same as Compose, but for spectrogram transforms
OneOf randomly picks one of the given transforms when called, and applies that transform.
SomeOf randomly picks several of the given transforms when called, and applies those transforms.
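A minimal composition sketch, assuming OneOf accepts a list of transforms in the same way Compose does (SomeOf is used analogously):

```python
import numpy as np
from audiomentations import AddGaussianNoise, Compose, OneOf, PitchShift, TimeStretch

samples = np.random.uniform(low=-0.2, high=0.2, size=(32000,)).astype(np.float32)

augment = Compose([
    AddGaussianNoise(p=0.5),
    # Exactly one of the two transforms below is picked and applied per call
    OneOf([
        TimeStretch(min_rate=0.8, max_rate=1.25, p=1.0),
        PitchShift(min_semitones=-4, max_semitones=4, p=1.0),
    ]),
])
augmented_samples = augment(samples=samples, sample_rate=16000)
```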
- Some transforms do not support multichannel audio yet. See Multichannel audio
- Expects the input dtype to be float32, with values between -1 and 1.
- The code runs on CPU, not GPU. For a GPU-compatible version, check out pytorch-audiomentations
- Multiprocessing is not officially supported yet. See also #46
Contributions are welcome!
As of v0.22.0, all transforms except AddBackgroundNoise and AddShortNoises support not only mono audio (1-dimensional numpy arrays), but also stereo audio, i.e. 2D arrays with shape like (num_channels, num_samples)
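A minimal stereo sketch using the (num_channels, num_samples) shape described above; Gain is one of the transforms that officially supports multichannel audio according to the changelog:

```python
import numpy as np
from audiomentations import Gain

# Stereo audio: shape (num_channels, num_samples), float32, values in [-1, 1]
stereo_samples = np.random.uniform(low=-0.2, high=0.2, size=(2, 32000)).astype(np.float32)

transform = Gain(p=1.0)
augmented_stereo = transform(samples=stereo_samples, sample_rate=16000)
```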
- Guard against invalid params in TimeMask
- Add SevenBandParametricEQ transform
- Add optional noise_transform in AddShortNoises
- Add .aac and .aif to the list of recognized audio filename endings
- Show warning if top_db and/or p in Trim are not specified because their default values will change in a future version
- Fix filter instability bug related to center freq above nyquist freq in LowShelfFilter and HighShelfFilter
- Add Padding transform
- Add RoomSimulator transform for simulating shoebox rooms using pyroomacoustics
- Add parameter signal_gain_in_db_during_noise in AddShortNoises
- Not specifying a value for leave_length_unchanged in AddImpulseResponse now emits a warning, as the default value will change from False to True in a future version.
- Remove the deprecated AddImpulseResponse alias. Use ApplyImpulseResponse instead.
- Remove support for the legacy parameters min_SNR and max_SNR in AddGaussianSNR
- Remove useless default path value in AddBackgroundNoise, AddShortNoises and ApplyImpulseResponse
- Implement GainTransition
- Add support for librosa 0.9
- Add support for stereo audio in Mp3Compression, Resample and Trim
- Add "relative_to_whole_input" option for the noise_rms parameter in AddShortNoises
- Add optional noise_transform in AddBackgroundNoise
- Improve speed of PitchShift by 6-18% when the input audio is stereo
- Remove support for librosa<=0.7.2
- Add support for multichannel audio in ApplyImpulseResponse, BandPassFilter, HighPassFilter and LowPassFilter
- Add BandStopFilter (similar to FrequencyMask, but with overhauled defaults and parameter randomization behavior), PeakingFilter, LowShelfFilter and HighShelfFilter
- Add parameter add_all_noises_with_same_level in AddShortNoises
- Change BandPassFilter, LowPassFilter and HighPassFilter to use scipy's butterworth filters instead of pydub. Now they have parametrized roll-off. Filters are now steeper than before by default - set min_rolloff=6, max_rolloff=6 to get the old behavior. They also support zero-phase filtering now. And they're at least ~25x faster than before!
- Remove optional wavio dependency for audio loading
- Implement OneOf and SomeOf for applying one of or some of many transforms. Transforms are randomly chosen every call. Inspired by augly. Thanks to Cangonin and iver56.
- Add a new argument apply_to_children (bool) in randomize_parameters, freeze_parameters and unfreeze_parameters in Compose and SpecCompose.
- Insert three new parameters in AddBackgroundNoise: noise_rms (defaults to "relative", which is the old behavior), min_absolute_rms_in_db and max_absolute_rms_in_db. This may be a breaking change if you used AddBackgroundNoise with positional arguments in earlier versions of audiomentations! Please use keyword arguments to be on the safe side - it should be backwards compatible then.
- Remove global pydub import which was accidentally introduced in v0.18.0. pydub is considered an optional dependency and is imported only on demand now.
- Implement TanhDistortion. Thanks to atamazian and iver56.
- Add a noise_rms parameter to AddShortNoises. It defaults to "relative", which is the old behavior. "absolute" allows for adding loud noises to parts that are relatively silent in the input.
- Implement BandPassFilter, HighPassFilter, LowPassFilter and Reverse. Thanks to atamazian.
- Add a fade option in Shift for eliminating unwanted clicks
- Add support for 32-bit int wav loading with scipy>=1.6
- Add support for float64 wav files. However, the use of this format is discouraged, since float32 is more than enough for audio in most cases.
- Implement Clip. Thanks to atamazian.
- Add some parameter sanity checks in AddGaussianNoise
- Officially support librosa 0.8.1
- Rename AddImpulseResponse to ApplyImpulseResponse. The former will still work for now, but give a warning.
- When looking for audio files in AddImpulseResponse, AddBackgroundNoise and AddShortNoises, follow symlinks by default.
- When using the new parameters min_snr_in_db and max_snr_in_db in AddGaussianSNR, SNRs will be picked uniformly in the decibel scale instead of in the linear amplitude ratio scale. The new behavior aligns more with human hearing, which is not linear.
- Avoid division by zero in AddImpulseResponse when input is digital silence (all zeros)
- Fix inverse SNR characteristics in AddGaussianSNR. It will continue working as before unless you switch to the new parameters min_snr_in_db and max_snr_in_db. If you use the old parameters, you'll get a warning.
- Implement SpecCompose for applying a pipeline of spectrogram transforms. Thanks to omerferhatt.
- Fix a bug in SpecChannelShuffle where it did not support more than 3 audio channels. Thanks to omerferhatt.
- Limit scipy version range to >=1.0,<1.6 to avoid issues with loading 24-bit wav files. Support for scipy>=1.6 will be added later.
- Add an option leave_length_unchanged to AddImpulseResponse
- Fix picklability of instances of AddImpulseResponse, AddBackgroundNoise and AddShortNoises
- Implement LoudnessNormalization
- Implement randomize_parameters in Compose. Thanks to SolomidHero.
- Add multichannel support to AddGaussianNoise, AddGaussianSNR, ClippingDistortion, FrequencyMask, PitchShift, Shift, TimeMask and TimeStretch
- Lay the foundation for spectrogram transforms. Implement SpecChannelShuffle and SpecFrequencyMask.
- Configurable LRU cache for transforms that use external sound files. Thanks to alumae.
- Officially add multichannel support to Normalize
- Show a warning if a waveform had to be resampled after loading it. This is because resampling is slow. Ideally, files on disk should already have the desired sample rate.
- Correctly find audio files with upper case filename extensions.
- Fix a bug where AddBackgroundNoise crashed when trying to add digital silence to an input. Thanks to juheeuu.
- Speed up AddBackgroundNoise, AddShortNoises and AddImpulseResponse by loading wav files with scipy or wavio instead of librosa.
- Implement Mp3Compression
- Officially support multichannel audio in Gain and PolarityInversion
- Add m4a and opus to the list of recognized audio filename extensions
- Expand range of supported librosa versions
- Python <= 3.5 is no longer officially supported, since Python 3.5 has reached end-of-life
- Breaking change: Internal util functions are no longer exposed directly. If you were doing e.g. from audiomentations import calculate_rms, you now have to do from audiomentations.core.utils import calculate_rms
- Implement Gain and PolarityInversion. Thanks to Spijkervet for the inspiration.
- Improve the performance of AddBackgroundNoise and AddShortNoises by optimizing the implementation of calculate_rms.
- Improve compatibility of output files written by the demo script. Thanks to xwJohn.
- Fix division by zero bug in Normalize. Thanks to ZFTurbo.
- AddImpulseResponse, AddBackgroundNoise and AddShortNoises now support aiff files in addition to flac, mp3, ogg and wav
- Breaking change: AddImpulseResponse, AddBackgroundNoise and AddShortNoises now include subfolders when searching for files. This is useful when your sound files are organized in subfolders.
- Fix filter instability bug in FrequencyMask. Thanks to kvilouras.
- Remember randomized/chosen effect parameters. This allows for freezing the parameters and applying the same effect to multiple sounds. Use transform.freeze_parameters() and transform.unfreeze_parameters() for this.
- Implement transform.serialize_parameters(). Useful for when you want to store metadata on how a sound was perturbed.
- Add a rollover parameter to Shift. This allows for introducing silence instead of a wrapped part of the sound.
- Add support for flac in AddImpulseResponse
- Implement AddBackgroundNoise transform. Useful for when you want to add background noise to all of your sound. You need to give it a folder of background noises to choose from.
- Implement AddShortNoises. Useful for when you want to add (bursts of) short noise sounds to your input audio.
- Disregard non-audio files when looking for impulse response files
- Switch to a faster convolve implementation. This makes AddImpulseResponse significantly faster.
- Expand supported range of librosa versions
- Fix a bug in ClippingDistortion where the min_percentile_threshold was not respected as expected.
- Improve handling of empty input
- Add shuffle parameter in Composer
- Add Resample transformation
- Add ClippingDistortion transformation
- Add fade parameter to TimeMask

Thanks to askskro

New transforms:
- AddGaussianSNR
- AddImpulseResponse
- FrequencyMask
- TimeMask
- Trim

Thanks to karpnv
- Implement peak normalization
- Implement Shift transform
- Ensure p is within bounds
- Implement PitchShift transform
- Fix output dtype of AddGaussianNoise
- Implement leave_length_unchanged in TimeStretch
- Add TimeStretch transform
- Parametrize AddGaussianNoise
- Initial release. Includes only one transform: AddGaussianNoise
Install the dependencies specified in requirements.txt
Format the code with black
Run the tests with pytest
Run the demo script with python -m demo.demo
Audiomentations isn't the only python library that can do various types of audio data augmentation/degradation! Here's an overview:
- audio-degradation-toolbox
- audio_degrader
- audiomentations
- AugLy
- kapre
- muda
- nlpaug
- pedalboard
- pydiogment
- python-audio-effects
- sigment
- SpecAugment
- spec_augment
- teal
- torch-audiomentations
- torchaudio-augmentations
- WavAugment
Thanks to Nomono for backing audiomentations.
Thanks to all contributors who help improve audiomentations.