Describe the Issue
When the crawler encounters the "No Engines Left" error and the crawl job fails, a client calling the crawl endpoint synchronously through the API hangs forever.
To Reproduce
Steps to reproduce the issue:
1. Run the following code with the Python API (I only have the fetch engine turned on, so the job errors):
2. The Firecrawl job fails (which is OK for my use case), producing this error:
firecrawl-worker-1 | 2025-01-15 04:46:45 error [queue-worker:processJob]: Error: All scraping engines failed! -- Double check the URL to make sure it's not broken. If the issue persists, contact us at [email protected].
firecrawl-worker-1 | at scrapeURLLoop (/app/dist/src/scraper/scrapeURL/index.js:220:15)
firecrawl-worker-1 | at async scrapeURL (/app/dist/src/scraper/scrapeURL/index.js:258:24)
firecrawl-worker-1 | at async runWebScraper (/app/dist/src/main/runWebScraper.js:67:24)
firecrawl-worker-1 | at async startWebScraperPipeline (/app/dist/src/main/runWebScraper.js:12:12)
firecrawl-worker-1 | at async processJob (/app/dist/src/services/queue-worker.js:457:26)
firecrawl-worker-1 | at async processJobInternal (/app/dist/src/services/queue-worker.js:168:28) {"module":"queue-worker","method":"processJob","jobId":"dcfbc7aa-af37-4d30-88a8-22f76f36c07e","scrapeId":"dcfbc7aa-af37-4d30-88a8-22f76f36c07e","crawlId":"0f32eff9-809b-4cff-af4d-b3a173c0aa48","teamId":"bypass"}
firecrawl-worker-1 | 2025-01-15 16:59:57 debug [queue-worker:processJob]: Declaring job as done...
firecrawl-worker-1 | 2025-01-15 16:59:57 debug [crawl-redis:addCrawlJobDone]: Adding done crawl job to Redis...
firecrawl-worker-1 | 2025-01-15 16:59:57 debug [queue-worker:processJob]: Logging job to DB...
firecrawl-worker-1 | 2025-01-15 16:59:57 debug [crawl-redis:finishCrawl]: Marking crawl as finished.
firecrawl-worker-1 | 2025-01-15 16:59:57 debug [queue-worker:processJobInternal]: Job failed
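The reporter's reproduction snippet did not survive formatting. A minimal stand-in for the synchronous flow, assuming a self-hosted instance at `http://localhost:3002` and the v1 REST endpoints that the Python SDK wraps (the URL, port, payload, and status names here are illustrative assumptions, not taken from the report):

```python
import json
import time
import urllib.request

API_URL = "http://localhost:3002"  # assumed self-hosted instance; adjust as needed


def is_terminal(status: str) -> bool:
    """Statuses after which polling should stop (assumed set of terminal states)."""
    return status in {"completed", "failed", "cancelled"}


def crawl_sync(url: str, poll_interval: float = 2.0) -> dict:
    """Start a crawl and poll its status until it reaches a terminal state.

    This is the loop that never exits when the job dies with
    'All scraping engines failed!' but the crawl is never marked finished.
    """
    req = urllib.request.Request(
        f"{API_URL}/v1/crawl",
        data=json.dumps({"url": url}).encode(),
        headers={"Content-Type": "application/json"},
    )
    crawl_id = json.load(urllib.request.urlopen(req))["id"]
    while True:
        with urllib.request.urlopen(f"{API_URL}/v1/crawl/{crawl_id}") as resp:
            status = json.load(resp)
        if is_terminal(status.get("status", "")):
            return status
        time.sleep(poll_interval)


if __name__ == "__main__":
    print(crawl_sync("https://example.com"))
```

If the failed job never flips the crawl's status to a terminal value, `crawl_sync` (and likewise the SDK's synchronous crawl call) spins forever, which matches the observed hang.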
Expected Behavior
I would expect the client to return an error or raise an exception.
Additional Context
The Firecrawl app keeps working normally after the job has errored and accepts more requests, which is the behaviour I am looking for. However, the client hanging forever when its job errors, instead of raising an exception, is a problem for me.
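Until failed jobs reliably reach a terminal status server-side, a client-side deadline is one way to avoid the hang. A sketch, with `get_status` standing in for whatever status call the client makes (the function name, timeout value, and status strings are illustrative assumptions):

```python
import time
from typing import Callable


def wait_with_deadline(
    get_status: Callable[[], str],
    timeout: float = 300.0,
    poll_interval: float = 2.0,
) -> str:
    """Poll get_status() until it returns a terminal value or the deadline passes.

    Raises TimeoutError instead of hanging forever, which is the behaviour
    the report asks for when a job errors without being marked finished.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status in {"completed", "failed", "cancelled"}:
            return status
        time.sleep(poll_interval)
    raise TimeoutError(f"crawl did not reach a terminal status within {timeout}s")
```

Wrapping the SDK's status lookup in a small callable (e.g. `lambda: app.check_crawl_status(crawl_id)["status"]`, names hypothetical) turns the indefinite hang into a bounded wait with a clear error.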