
Raise error in #scroll_batches when search backend returns a failure #916

Open
wants to merge 4 commits into master

Conversation

@tomdev
Contributor

tomdev commented Dec 15, 2023

We are running into a bug in production when performing chewy:sync.

Our search backend is intermittently returning a 200 response that contains no hits but does include backend failures; see this example response:

```json
{
  "_scroll_id": "<scroll_id>",
  "took": 1,
  "timed_out": false,
  "terminated_early": false,
  "_shards": {
    "total": 5,
    "successful": 2,
    "skipped": 0,
    "failed": 3,
    "failures": [
      {
        "shard": -1,
        "index": null,
        "reason": {
          "type": "search_context_missing_exception",
          "reason": "No search context found for id [34462229]"
        }
      },
      {
        "shard": -1,
        "index": null,
        "reason": {
          "type": "search_context_missing_exception",
          "reason": "No search context found for id [34462228]"
        }
      },
      {
        "shard": -1,
        "index": null,
        "reason": {
          "type": "search_context_missing_exception",
          "reason": "No search context found for id [34888662]"
        }
      }
    ]
  },
  "hits": {
    "total": {
      "value": 720402,
      "relation": "eq"
    },
    "max_score": 1.0,
    "hits": []
  }
}
```

`scroll_batches` currently does not take these failures into account. Because no hits are returned, the `fetched >= total` condition is never satisfied, so the loop never breaks.
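
For illustration, here is a simplified sketch (not the actual chewy implementation) of the loop shape, showing why it can never break when the response keeps returning zero hits:

```ruby
# Simplified sketch, not the actual chewy code: with an empty `hits` array,
# `fetched` never grows, so `fetched >= total` is never satisfied and the
# loop keeps issuing scroll requests forever.
def scroll_batches_sketch(client, result, scroll: '1m')
  fetched = 0
  loop do
    hits  = result.dig('hits', 'hits') || []
    total = result.dig('hits', 'total', 'value').to_i
    fetched += hits.size
    yield hits if hits.any?
    break if fetched >= total # never true while hits stay empty

    result = client.scroll(scroll: scroll, body: { scroll_id: result['_scroll_id'] })
  end
end
```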

Because of this we've experienced chewy:sync running for days instead of an hour. (Yes, we now have proper monitoring in place...)

This PR raises a `Chewy::Error` when the search backend returns failures.


Before submitting the PR make sure the following are checked:

  • The PR relates to only one subject with a clear title and description in grammatically correct, complete sentences.
  • Wrote good commit messages.
  • Commit message starts with [Fix #issue-number] (if the related issue exists).
  • Feature branch is up-to-date with master (if not - rebase it).
  • Squashed related commits together.
  • Added tests.
  • Added an entry to the changelog if the new code introduces user-observable changes. See changelog entry format for details.

@konalegi left a comment

Looks good to me; small changes are needed to the format of the error.

```diff
@@ -34,6 +34,9 @@ def scroll_batches(batch_size: Request::DEFAULT_BATCH_SIZE, scroll: Request::DEF
         scroll_id = nil
 
         loop do
+          failures = result.dig('_shards', 'failures')
+          raise Chewy::Error, failures if failures.present?
```

I think `Chewy::Error` is not the best fit here: from your example, `failures` is a hash, and `Chewy::Error` is simply a `StandardError` with a string argument. Could you create a specific error and convert the failures into a meaningful string? For instance:

```ruby
class ImportFailed < Error
  def initialize(type, import_errors)
    message = "Import failed for `#{type}` with:\n"
    import_errors.each do |action, action_errors|
      message << "  #{action.to_s.humanize} errors:\n"
      action_errors.each do |error, documents|
        message << "    `#{error}`\n"
        message << "      on #{documents.count} documents: #{documents}\n"
      end
    end
    super message
  end
end
```
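
Adapted to the scroll case, a sketch could look something like the following; the class name and message format are only an illustration, not final code:

```ruby
# Illustrative sketch: format the `_shards.failures` array from the scroll
# response into a readable message instead of passing the raw structure along.
class ScrollShardFailuresError < Chewy::Error
  def initialize(failures)
    message = "Scroll request returned #{failures.count} shard failure(s):\n"
    failures.each do |failure|
      reason = failure['reason'] || {}
      message << "  #{reason['type']}: #{reason['reason']} " \
                 "(shard #{failure['shard']}, index #{failure['index'].inspect})\n"
    end
    super(message)
  end
end

# Usage inside the scroll loop would then become:
#   failures = result.dig('_shards', 'failures')
#   raise ScrollShardFailuresError, failures if failures.present?
```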

README.md Outdated
Comment on lines 1278 to 1285
## Running specs

Make sure you're running a local Elasticsearch instance.

```
ES_PORT=9200 bundle exec rspec
```

Btw, could we remove this? I'm planning to backport the docker-compose setup with ES so it runs on the proper port: https://github.com/toptal/chewy/pull/917/files#diff-e45e45baeda1c1e73482975a664062aa56f20c03dd9d64a827aba57775bed0d3R1-R16

tomdev (Contributor Author)

Of course, done!

```diff
@@ -33,6 +33,56 @@
   let(:countries) { Array.new(3) { |i| Country.create!(rating: i + 2, name: "country #{i}") } }
 
   describe '#scroll_batches' do
+    describe 'with search backend returning failures' do
```

Btw, do you think it would be possible to provide an integration spec, where you really call ES and get a real answer?

tomdev (Contributor Author)

Do you have a suggestion for how I can set up an integration spec that reproduces this error state? Elasticsearch would have to return a total count higher than zero but then return zero hits in order to reach this code path.
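
(For illustration, a stub-based spec roughly along these lines could pin the behaviour down without a live cluster; `CountriesIndex` and stubbing `Chewy.client` are assumptions here, not necessarily what this PR ships.)

```ruby
# Sketch of a stub-based spec, not an integration spec: the low-level client is
# stubbed to report a non-zero total, zero hits and shard failures, which is
# the state that previously caused the infinite loop.
specify 'raises when the backend reports shard failures and returns no hits' do
  failure_response = {
    '_scroll_id' => 'some-scroll-id',
    '_shards' => {
      'failed' => 3,
      'failures' => [{ 'reason' => { 'type' => 'search_context_missing_exception' } }]
    },
    'hits' => { 'total' => { 'value' => 720_402 }, 'hits' => [] }
  }
  allow(Chewy.client).to receive(:search).and_return(failure_response)

  expect { CountriesIndex.all.scroll_batches { |batch| } }.to raise_error(Chewy::Error)
end
```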

@barthez
Contributor

barthez commented Dec 18, 2023

Hey @tomdev
Thanks for the PR. I tried to reproduce it by setting a low scroll time and adding extra wait time between scroll calls. Whenever the mentioned error (`search_context_missing_exception`) occurs, the ES transport gem throws the exception for me:

```
[404] {"error":{"root_cause":[{"type":"search_context_missing_exception","reason":"No search context found for id [24881178]"}],"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query","grouped":true,"failed_shards":[{"shard":-1,"index":null,"reason":{"type":"search_context_missing_exception","reason":"No search context found for id [24881178]"}}],"caused_by":{"type":"search_context_missing_exception","reason":"No search context found for id [24881178]"}},"status":404} (Elasticsearch::Transport::Transport::Errors::NotFound)
```

Could you please share your chewy version, Elasticsearch server version, and elasticsearch gem version? That could be helpful.

The issue could also be solved by increasing the scroll pointer expiration (which defaults to 1 minute) for the pluck calls that chewy:sync performs.
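
For reference, the keepalive is the `scroll:` keyword that `scroll_batches` accepts (see the diff above); something like the following, with a hypothetical index name:

```ruby
# Hypothetical index; `scroll:` is the keepalive sent with each scroll request.
# A longer window gives slow consumers more time between scroll calls before
# the server drops the search context.
CountriesIndex.all.scroll_batches(scroll: '10m', batch_size: 1000) do |hits|
  # process one batch of raw hits here
end
```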

@tomdev
Contributor Author

tomdev commented Dec 18, 2023

Hey Barthez, thanks for reproducing the issue.

Interesting to see that the response is actually a 404. In our setup we recently migrated from Elasticsearch 7.10 to OpenSearch 1.3; OpenSearch appears to return a 200 even though the "search_context_missing_exception" is being hit.

We're unable to continue using Elasticsearch as we're using the AWS OpenSearch service.

We're running:

  • gem chewy (7.3.4)
  • OpenSearch 1.3
  • gem elasticsearch (7.13.3)

We haven't identified any other issues (thus far) using chewy against OpenSearch.

An attempt to increase the scroll pointer expiration did not succeed: we set it to 10m, but we still hit the search_context_missing_exception, which in our case was not caused by an expiring scroll window but seems to be related to an internal OpenSearch error that yields the same error.

Do you think OpenSearch returning a 200 with failures should be handled in chewy, or would that belong in a dependency like ES transport? You are probably more familiar than I am with where this should be handled.

@barthez
Contributor

barthez commented Dec 19, 2023

Thanks @tomdev

Chewy does not aim to support OpenSearch. This sounds like an altered behavior of OpenSearch vs Elasticsearch.

I would suggest adding a custom exception, something like MissingHitsInScrollError, and raising it when we receive no hits while expecting some. I wouldn't parse the response, as this is too platform-dependent. Can you do that?

@tomdev
Contributor Author

tomdev commented Jan 9, 2024

I implemented MissingHitsInScrollError, raised when no hits are returned even though they are expected. This required me to change the looping behaviour when scrolling: instead of scrolling indefinitely, we now precalculate how many batched requests we should perform.

This slightly alters how the data is retrieved, and even though the test suite passes, I wanted to double-check whether anyone knows why the previous approach (loop until fetched >= total hits) was chosen. Could I be missing an edge case that was covered by the previous implementation? The specs don't indicate one.
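
Roughly, the new approach looks like this (a simplified sketch, not the exact code in this PR):

```ruby
# Simplified sketch of the precalculated-batches idea: derive the number of
# scroll requests up front from the reported total instead of looping until
# `fetched >= total`. An empty page is then unexpected and raises.
def scroll_batches_by_count(client, result, batch_size:, scroll: '1m')
  total = result.dig('hits', 'total', 'value').to_i
  batches_count = (total.to_f / batch_size).ceil

  batches_count.times do |i|
    hits = result.dig('hits', 'hits') || []
    # The PR introduces MissingHitsInScrollError for this case; a generic error
    # stands in for it in this sketch.
    raise Chewy::Error, 'expected hits in the scroll response but received none' if hits.empty?

    yield hits
    break if i == batches_count - 1

    result = client.scroll(scroll: scroll, body: { scroll_id: result['_scroll_id'] })
  end
end
```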

@tomdev
Contributor Author

tomdev commented Jan 23, 2024

We've been running this PR in production for ~2 weeks and have seen the failure (where we previously got stuck in an infinite loop) now successfully raise MissingHitsInScrollError.

@barthez
Contributor

barthez commented Feb 22, 2024

Thank you @tomdev. Sorry, I lost track of this PR. Could you please rebase and fix the conflict? I will try to merge and release it as soon as possible.

@konalegi

konalegi commented Oct 8, 2024

@tomdev is this PR still relevant? Sorry for the delay, I've been extremely busy.

@tomdev
Contributor Author

tomdev commented Oct 8, 2024

> @tomdev is this PR still relevant? Sorry for the delay, I've been extremely busy.

No worries, same here! Yes, this is still relevant to us, and we've been running this PR in production for months. It's working well for us.

I think this PR needs some rebasing and conflict resolution; I can take a look at that.

@konalegi

konalegi commented Oct 8, 2024

@tomdev Sure, thank you! Keep in mind we have moved to ES 8.x, so some extra adjustments might be needed.
