
concept y_pred[:, 2:, :] and downsample_factor ? #21

PythonImageDeveloper opened this issue May 6, 2019 · 0 comments

PythonImageDeveloper commented May 6, 2019

Hi,
Q1 - I don't understand why we don't use all of y_pred[:, :, :]. Why use y_pred[:, 2:, :] instead, dropping time steps 0 and 1 along axis 1?

    from keras import backend as K

    def ctc_lambda_func(args):
        # y_pred: (batch, time_steps, num_classes) softmax output of the RNN
        y_pred, labels, input_length, label_length = args
        # the 2 is critical here since the first couple outputs of the RNN
        # tend to be garbage:
        y_pred = y_pred[:, 2:, :]
        return K.ctc_batch_cost(labels, y_pred, input_length, label_length)
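For reference, here is a minimal NumPy sketch of what that slice does. The shapes are hypothetical, chosen only to illustrate that axis 1 is the time axis, so the slice removes the first two RNN time steps, not the first two samples:

```python
import numpy as np

# Hypothetical shapes for illustration: (batch, time_steps, num_classes)
batch_size, time_steps, num_classes = 4, 30, 28
y_pred = np.random.rand(batch_size, time_steps, num_classes)

# Axis 1 is the time axis, so [:, 2:, :] drops time steps 0 and 1.
trimmed = y_pred[:, 2:, :]
print(y_pred.shape)   # (4, 30, 28)
print(trimmed.shape)  # (4, 28, 28)
```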

Q2 - What does the downsample_factor parameter mean? If we change the input size, do we also need to change downsample_factor? What is the principle behind this parameter?
I also don't understand why input_length is set by multiplying np.ones((self.batch_size, 1)) by (self.img_w // self.downsample_factor - 2). What does this multiplication achieve? Would using only np.ones((self.batch_size, 1)) cause a problem?
And why subtract 2? Why 2 specifically?
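For reference, a minimal sketch of the computation the question refers to. The parameter values below are hypothetical; the assumption is that downsample_factor reflects how much the conv/pool stack shrinks the image width before the RNN, and that the - 2 mirrors the y_pred[:, 2:, :] slice in the loss:

```python
import numpy as np

# Hypothetical values for illustration only.
img_w = 128
downsample_factor = 4   # e.g. two 2x2 pooling layers: 2 * 2
batch_size = 4

# Width after the conv/pool stack: the RNN sees
# img_w // downsample_factor time steps.
time_steps = img_w // downsample_factor          # 32

# The loss slices off the first 2 time steps (y_pred[:, 2:, :]),
# so input_length is the trimmed sequence length, one value per sample.
input_length = np.ones((batch_size, 1)) * (time_steps - 2)
print(input_length.ravel())  # [30. 30. 30. 30.]
```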
