diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json
index d95421b..17b0826 100644
--- a/dev/.documenter-siteinfo.json
+++ b/dev/.documenter-siteinfo.json
@@ -1 +1 @@
-{"documenter":{"julia_version":"1.10.0","generation_timestamp":"2023-12-28T18:48:57","documenter_version":"1.2.1"}}
\ No newline at end of file
+{"documenter":{"julia_version":"1.9.4","generation_timestamp":"2023-12-28T19:13:22","documenter_version":"1.2.1"}}
\ No newline at end of file
diff --git a/dev/acd/index.html b/dev/acd/index.html
index 2fee5fd..df962ee 100644
--- a/dev/acd/index.html
+++ b/dev/acd/index.html
@@ -28,4 +28,4 @@
y = rand(rng, 3000)
z = rand(rng, 10000)
a = [Sound(x, 8000), Sound(y, 8000), Sound(z, 8000)]
-distinctiveness(a[1], a[2:3])
45165.66072314727
The number is effectively an index of how acoustically unique a word is in a language.
Calculate the acoustic distance between s1 and s2, using the version of dynamic time warping specified by method and dist as the interior distance function. Using method=:dtw uses vanilla dynamic time warping, while method=:fastdtw uses the fast dtw approximation. Note that this is not a true mathematical distance metric, because dynamic time warping does not necessarily satisfy the triangle inequality, nor does it guarantee the identity of indiscernibles. Example calls are sketched after the argument list below.
Args
s1 Features-by-time array of first sound to compare
s2 Features-by-time array of second sound to compare
method (keyword) Which method of dynamic time warping to use
dist (keyword) Any distance function implementing the SemiMetric interface from the Distances package
dtwradius (keyword) Maximum warping radius for vanilla dynamic time warping; if no value is passed, no warping constraint is used; argument unused when method=:fastdtw
fastradius (keyword) The radius to use for the fast dtw method; argument unused when method=:dtw
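As a minimal sketch of calls under the argument list above, the following uses random matrices as stand-ins for real features-by-time data; SqEuclidean is one SemiMetric from the Distances package that can serve as the interior distance.
using Distances  # provides SqEuclidean, one possible interior distance
s1 = rand(13, 40)  # 13 features by 40 frames; stand-in for real acoustic features
s2 = rand(13, 55)  # a second, longer sequence
acdist(s1, s2, method=:dtw)                                   # vanilla dynamic time warping
acdist(s1, s2, method=:fastdtw, fastradius=10)                # fast dtw approximation
acdist(s1, s2, method=:dtw, dist=SqEuclidean(), dtwradius=5)  # constrained warping with an explicit interior distance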
Convert s1 and s2 to a frequency representation specified by rep, then calculate acoustic distance between s1 and s2. Currently only :mfcc is supported for rep, using defaults from the MFCC package except that the first coefficient for each frame is removed and replaced with the sum of the log energy of the filterbank in that frame, as is standard in ASR.
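A sketch of the Sound-based form, in the style of the distinctiveness example near the top of this page; passing :mfcc positionally as rep is an assumption about the calling convention, since this text only names the argument.
using Random
rng = MersenneTwister(50)
s1 = Sound(rand(rng, 8000), 8000)  # one second of noise at 8 kHz; stand-in for recorded speech
s2 = Sound(rand(rng, 4000), 8000)  # half a second of noise
acdist(s1, s2, :mfcc, method=:dtw)  # convert both Sounds to MFCCs, then compare with vanilla dtw; :mfcc placement assumed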
Return a sequence representing the average of the sequences in S, using the dba method for sequence averaging. Supports method=:dtw for vanilla dtw and method=:fastdtw for the fast dtw approximation when performing the sequence comparisons. With center=:medoid, the medoid of S is used as the initial center, and with center=:rand, a random element of S is used as the initial center. Example calls are sketched after the argument list below.
Args
S An array of sequences to average
method (keyword) The method of dynamic time warping to use
dist (keyword) Any distance function implementing the SemiMetric interface from the Distances package
radius (keyword) The radius to use for the fast dtw method; argument unused when method=:dtw
center (keyword) The method used to select the initial center of the sequences in S
dtwradius (keyword) How far a time step can be mapped when comparing sequences; passed directly to the DTW function from DynamicAxisWarping; if set to nothing, the length of the longest sequence is used, effectively removing the radius restriction
progress Whether to show the progress output from dba
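As a minimal sketch of averaging under the argument list above, the following uses random matrices as stand-ins for real features-by-time sequences.
S = [rand(13, n) for n in (40, 55, 62)]  # three features-by-time matrices of different lengths
avgseq(S, method=:dtw, center=:medoid)               # vanilla dtw comparisons, medoid as the initial center
avgseq(S, method=:fastdtw, radius=10, center=:rand)  # fast dtw approximation, random initial center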
Convert the Sound objects in S to a representation designated by rep, then find the average sequence of them. Currently only :mfcc is supported for rep, using defaults from the MFCC package except that the first coefficient for each frame is removed and replaced with the sum of the log energy of the filterbank in that frame, as is standard in ASR.
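A sketch of the Sound-based form, reusing rng from the acdist sketch above; as before, passing :mfcc positionally as rep is an assumption about the calling convention.
sounds = [Sound(rand(rng, n), 8000) for n in (8000, 12000, 16000)]  # stand-ins for recordings of one word
avgseq(sounds, :mfcc, method=:dtw, center=:medoid)  # convert each Sound to MFCCs, then average the sequences; :mfcc placement assumed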
Calculate the acoustic distinctiveness of s given the corpus corpus. The method, dist, and radius arguments are passed on to acdist. The reduction argument can be any function that reduces an iterable to a single number, such as mean, sum, or median.
For more information, see Kelley (2018, September, How acoustic distinctiveness affects spoken word recognition: A pilot study, DOI: 10.7939/R39G5GV9Q) and Kelley & Tucker (2018, Using acoustic distance to quantify lexical competition, DOI: 10.7939/r3-wbhs-kr84).
Convert s and corpus to a representation specified by rep, then calculate the acoustic distinctiveness of s given corpus. Currently only :mfcc is supported for rep, using defaults from the MFCC package except that the first coefficient for each frame is removed and replaced with the sum of the log energy of the filterbank in that frame, as is standard in ASR.
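A sketch tying these pieces together, reusing rng from the sketches above; showing reduction as a keyword and :mfcc as a positional rep are both assumptions about the exact calling convention.
using Statistics  # provides mean and median, two possible reduction functions
corpus = [Sound(rand(rng, n), 8000) for n in (3000, 6000, 10000)]  # stand-in corpus of three words
target = Sound(rand(rng, 8000), 8000)  # stand-in target word
distinctiveness(target, corpus, :mfcc, reduction=median)  # median acoustic distance from target to the corpus; argument placement assumed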
Bartelds, M., Richter, C., Liberman, M., & Wieling, M. (2020). A new acoustic-based pronunciation distance measure. Frontiers in Artificial Intelligence, 3, 39.
Mielke, J. (2012). A phonetically based metric of sound similarity. Lingua, 122(2), 145-163.
Kelley, M. C. (2018). How acoustic distinctiveness affects spoken word recognition: A pilot study. Presented at the 11th International Conference on the Mental Lexicon (Edmonton, AB). https://doi.org/10.7939/R39G5GV9Q
Kelley, M. C., & Tucker, B. V. (2018). Using acoustic distance to quantify lexical competition. University of Alberta ERA (Education and Research Archive). https://doi.org/10.7939/r3-wbhs-kr84
Petitjean, F., Ketterlin, A., & Gançarski, P. (2011). A global averaging method for dynamic time warping, with applications to clustering. Pattern Recognition, 44(3), 678–693.