Commit

Fixed critical typo in infer_request invocation
slyalin committed Dec 12, 2023
1 parent 6097bfd commit c54a466
Showing 1 changed file with 1 addition and 1 deletion.
optimum/intel/openvino/modeling_decoder.py
@@ -431,7 +431,7 @@ def forward(
         inputs['beam_idx'] = self.next_beam_idx
 
         # Run inference
-        self.request.start_async(inputs, shared_inputs=True)
+        self.request.start_async(inputs, share_inputs=True)
         self.request.wait()
         logits = torch.from_numpy(self.request.get_tensor("logits").data).to(self.device)

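For context, here is a minimal, self-contained sketch (not code from this repository) of the inference pattern the fixed line relies on, assuming OpenVINO 2023.1 or newer, where InferRequest.start_async accepts the keyword share_inputs. The model path, input names, and shapes below are hypothetical.

    # Minimal sketch, assuming OpenVINO >= 2023.1; model path and input names are hypothetical.
    import numpy as np
    import openvino as ov

    core = ov.Core()
    compiled = core.compile_model("model.xml", "CPU")   # hypothetical IR file
    request = compiled.create_infer_request()

    # Hypothetical decoder inputs; a real model defines its own input names and shapes.
    inputs = {"input_ids": np.zeros((1, 8), dtype=np.int64)}

    # share_inputs=True lets OpenVINO reuse the caller's buffers instead of copying them;
    # the misspelled keyword `shared_inputs` is not part of this signature.
    request.start_async(inputs, share_inputs=True)
    request.wait()
    logits = request.get_tensor("logits").data          # assumes an output tensor named "logits"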
