Generating Adjusted Spatial Audio from Customized HRTF using 3D3A Lab HRTF Database
The main objective of this project is to generate an adjusted spatial audio file using a custom-made HRTF, taking as inputs a pair of ear images (left, right) and an audio file. The custom HRTF is based on the 3D3A Lab Head-Related Transfer Function Database [1].
Input:
- images of the subject's ears (left/right)
- azimuth and elevation values
- input audio file to be adjusted
- sampling rate of the audio file

Output:
- audio file adjusted using the custom HRTF
Clone this project and use it as your workspace.
Test function: run this to check that the project is working. It calls all the functions described below.
Azimuth = 90, Elevation = 0 (left-dominated)
Azimuth = 270, Elevation = 0 (right-dominated)
Generates an audio file adjusted using the custom HRTF.
Example:
s = getSoundCHRTF('realears/earright.png', 'realears/earleft.png', 'siren.mp3', 90, 30, 'dfeq', 96000, 1);
Saves the file using the naming format: soundfilename_AZXX_ELYY_TYPE.type
Example: siren.mp3_AZ90_EL0_DFEQ.wav
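For illustration, the naming convention can be sketched in Python (the function and parameter names here are hypothetical, not part of the repo):

```python
def output_filename(sound_path, azimuth, elevation, eq_type):
    """Build the output name, e.g. 'siren.mp3', 90, 0, 'dfeq'
    -> 'siren.mp3_AZ90_EL0_DFEQ.wav' (the source extension is kept
    in the name, matching the README's example)."""
    return f"{sound_path}_AZ{azimuth}_EL{elevation}_{eq_type.upper()}.wav"
```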
Reads all scanned left/right ear images. These images are generated by opening each .ply file from [2] in Blender, then cropping the left/right ear regions using the methods listed in [10].
Gets similar ears from the scanned-ear list:
1. Get the top N ssim() results between leftRef and leftEarScans (N = 5)
2. Get the top N ssim() results between rightRef and rightEarScans (N = 5)
3. Take the intersection of the results from steps 1 and 2
4. If the intersection is empty, take the top 2 results from each side
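The selection logic above can be sketched in Python. In the project the scores come from MATLAB's ssim(); here they are passed in as precomputed {scan index: score} dictionaries, so the sketch only shows the top-N/intersection/fallback step:

```python
def match_ears(left_scores, right_scores, n=5):
    """left_scores / right_scores map a scan index to its similarity score
    against the left/right reference ear image."""
    def top(scores, k):
        # indices of the k highest-scoring scans, best first
        return [i for i, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:k]]

    left_top, right_top = top(left_scores, n), top(right_scores, n)
    common = [i for i in left_top if i in right_top]  # intersection of steps 1 and 2
    if common:
        return common
    # fallback: no scan matched both ears, so take the top 2 from each side
    return top(left_scores, 2) + top(right_scores, 2)
```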
Gets the .sofa files corresponding to the matched indices from the similar-ear list.
Gets the personalized HRTF from hrtf_list.
Returns soundOutput, the input sound filtered with the HRTF values.
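At its core, applying an HRTF to a mono signal means convolving it with the left and right head-related impulse responses. A minimal NumPy sketch, assuming the HRIRs have already been extracted from the matched .sofa file (function and variable names are illustrative, not the repo's API):

```python
import numpy as np

def apply_hrtf(mono, hrir_left, hrir_right):
    """Convolve a mono signal with left/right HRIRs to produce a
    stereo (2-channel) spatialized output."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    # shape: (len(mono) + len(hrir) - 1, 2), columns = [left, right]
    return np.stack([left, right], axis=1)
```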