Fix batch size definition in case of parallel encoders (#279)
guillaumekln authored Nov 28, 2018
1 parent 0a2890a · commit 06eba7f
Showing 2 changed files with 2 additions and 1 deletion.
CHANGELOG.md (1 addition, 0 deletions)

```diff
@@ -17,6 +17,7 @@ OpenNMT-tf follows [semantic versioning 2.0.0](https://semver.org/). The API cov
 ### Fixes and improvements
 
 * Fix inference error when using parallel inputs and the parameter `bucket_width`
+* Fix size mismatch error when decoding from multi-source models
 
 ## [1.14.0](https://github.com/OpenNMT/OpenNMT-tf/releases/tag/v1.14.0) (2018-11-22)
```
opennmt/models/sequence_to_sequence.py (1 addition, 1 deletion)

```diff
@@ -236,7 +236,7 @@ def _build(self, features, labels, params, mode, config=None):
 
     if mode != tf.estimator.ModeKeys.TRAIN:
       with tf.variable_scope("decoder", reuse=labels is not None):
-        batch_size = tf.shape(encoder_sequence_length)[0]
+        batch_size = tf.shape(tf.contrib.framework.nest.flatten(encoder_outputs)[0])[0]
         beam_width = params.get("beam_width", 1)
         maximum_iterations = params.get("maximum_iterations", 250)
         minimum_length = params.get("minimum_decoding_length", 0)
```
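Why the change: with a multi-source model built from parallel encoders, `encoder_sequence_length` can itself be a nested structure of per-source length tensors rather than a single tensor, so `tf.shape(encoder_sequence_length)[0]` no longer yields the batch size. Flattening the (possibly nested) `encoder_outputs` and reading the leading dimension of its first tensor works for both single and parallel encoders. Below is a minimal sketch of the idea using `tf.nest.flatten`, the TensorFlow 2 successor of `tf.contrib.framework.nest.flatten`; the tensor shapes are made up for illustration.

```python
import tensorflow as tf

# Hypothetical outputs of two parallel encoders: a nested structure of
# tensors that all share the same leading (batch) dimension.
encoder_outputs = (
    tf.zeros([4, 10, 512]),  # source 1: [batch, max_time_1, depth]
    tf.zeros([4, 7, 256]),   # source 2: [batch, max_time_2, depth]
)

# nest.flatten turns an arbitrarily nested structure into a flat list of
# tensors, so the batch size can be read from the first one regardless of
# how deeply the encoder outputs are nested.
batch_size = tf.shape(tf.nest.flatten(encoder_outputs)[0])[0]

print(int(batch_size))  # 4
```

Reading the batch size from `encoder_outputs` also covers the single-encoder case, since flattening a lone tensor simply yields a one-element list.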
