I sometimes have multiple subtitles to be aligned against the same file. Is any speed to be gained from processing them simultaneously? (I figure that at least the ffmpeg extraction would only be run once).
If the alignment quality were output (cf. #29), multiple subtitle files (in the same language, but perhaps for unknown cuts) could be supplied, and the one that actually matches the film could be picked; maybe even selected automatically into the output file.
If you then synchronize against out.srt, the result is the same as if you had synchronized against the movie itself (I implemented this more as a debugging tool, but it comes in handy here). Otherwise you cannot get speed improvements with multiple files: the algorithm has to be run for each pair of subtitles individually, and there is no shared pre-processing.
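The workflow described above might look roughly like the following sketch. It assumes the `_` placeholder (the "underscore feature" mentioned below) dumps the reference data extracted from the movie to a subtitle file, and that file names like `movie.mkv` and `out.srt` stand in for your own; the exact invocation should be checked against the tool's documentation:

```shell
#!/bin/sh
# Hypothetical sketch: extract the reference data from the movie once,
# writing it to out.srt instead of aligning a real subtitle file.
# (File names are placeholders; `_` is assumed to dump the reference.)
alass movie.mkv _ out.srt

# Align each candidate subtitle against out.srt, so the expensive
# ffmpeg extraction from the movie is not repeated per file.
for sub in candidate-a.srt candidate-b.srt; do
    alass out.srt "$sub" "synced-$sub"
done
```

The key point is that the per-pair alignment algorithm still runs once for every candidate subtitle; only the audio extraction step is shared.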
Thanks, that solves the duplicate-processing issue; the question of assessing match quality is probably better left to #29, then. I'm opening a PR to document the underscore feature, which would hopefully have put me on the right track in the first place.