This repository has been archived by the owner on Aug 3, 2021. It is now read-only.
I am trying to estimate word confidence during CTC decoding. One of the simplest approaches I could think of is to multiply the character probabilities at the time steps where each character is emitted, giving an approximate word-level score.
However, many people instead use posterior probabilities computed over lattices to obtain word confidence.
Is there a simpler way to estimate word confidence during decoding?
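For what it's worth, here is a minimal sketch of the simple approach described above: greedy CTC decoding that collapses repeats, removes blanks, and scores each word as the product of the frame probabilities of its emitted characters (accumulated in log space). The function name, the `charset` index-to-character mapping, and the use of a space character as the word delimiter are all assumptions for illustration, not part of any particular library's API.

```python
import numpy as np

def greedy_ctc_word_confidence(log_probs, charset, blank_idx=0, space_char=" "):
    """Greedy CTC decode plus a naive word-confidence estimate.

    log_probs : (T, C) array of per-frame log-probabilities.
    charset   : sequence mapping class index -> character (blank included).
    Returns (words, confidences), where each confidence is the product of
    the frame probabilities of the characters emitted for that word.
    """
    best = log_probs.argmax(axis=1)
    # Collapse repeats and drop blanks; keep the log-prob of each emitted char.
    chars, char_logps = [], []
    prev = blank_idx
    for t, k in enumerate(best):
        if k != blank_idx and k != prev:
            chars.append(charset[k])
            char_logps.append(log_probs[t, k])
        prev = k
    # Split into words at the space character; sum log-probs within a word.
    words, confs = [], []
    word, lp = [], 0.0
    for c, l in zip(chars, char_logps):
        if c == space_char:
            if word:
                words.append("".join(word))
                confs.append(float(np.exp(lp)))
            word, lp = [], 0.0
        else:
            word.append(c)
            lp += l
    if word:
        words.append("".join(word))
        confs.append(float(np.exp(lp)))
    return words, confs
```

Note this is only a crude approximation: it scores a single alignment of the greedy path rather than summing over all alignments, which is what lattice-based posterior methods do, so it tends to underestimate the true posterior.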