Description
This pull request introduces several improvements to b0bot.
A Redis caching layer has been added to cache responses to incoming requests, which should improve performance by reducing redundant LLM calls.
Tests for the new Redis functionality have been added. The README.md and other documentation files have been updated to reflect these changes.
Additional small refactors were made to improve code consistency.
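As a rough sketch of how such a caching layer can work (the connection settings, key format, TTL, and the `get_cached_news` helper name below are assumptions for illustration, not necessarily what b0bot uses):

```python
import hashlib
import json

import redis

# Hypothetical helper; the actual module and function names in b0bot may differ.
cache = redis.Redis(host="localhost", port=6379, db=0)
CACHE_TTL_SECONDS = 3600  # assumed expiry; tune to how fresh the news needs to be

def get_cached_news(model_name, keywords, fetch_fn):
    """Return a cached response if one exists, otherwise call the LLM and cache the result."""
    # Deterministic cache key from the model and the normalized query.
    digest = hashlib.sha256(keywords.strip().lower().encode()).hexdigest()
    key = f"news:{model_name}:{digest}"

    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)  # cache hit: no LLM call needed

    response = fetch_fn(keywords)  # cache miss: trigger the LLM
    cache.setex(key, CACHE_TTL_SECONDS, json.dumps(response))
    return response
```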
The initialization of `HuggingFaceEndpoint` was updated by replacing the previous `repo_id` and `token` parameters with `model=self.model_name`.
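A minimal sketch of the updated initialization, assuming `langchain_huggingface`'s `HuggingFaceEndpoint`; the concrete model identifier and generation settings below are illustrative, not part of this PR:

```python
import os

from langchain_huggingface import HuggingFaceEndpoint

# Illustrative model identifier; b0bot resolves this per endpoint (e.g. /mistralai/...).
model_name = "mistralai/Mistral-7B-Instruct-v0.2"

# Previously the endpoint was constructed with explicit repo_id/token arguments.
# After this change the model is passed via `model=`, and the Hugging Face API token
# is expected to be available in the environment (HUGGINGFACEHUB_API_TOKEN).
assert os.getenv("HUGGINGFACEHUB_API_TOKEN"), "set HUGGINGFACEHUB_API_TOKEN first"

llm = HuggingFaceEndpoint(
    model=model_name,
    temperature=0.7,       # illustrative generation settings
    max_new_tokens=256,
)
```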
Related Issue:
This PR addresses the recently added issue #33 - Implement Redis Caching for Faster News Retrieval.
Motivation and Context:
The main motivation for this change is to cache external requests using Redis. This enhancement should reduce latency and lower the load on our external services. The LLM configuration update also aligns with our current project standards and simplifies maintenance.
How Has This Been Tested?:
Logs with comments
Client requests the homepage. No LLM processing is needed.
127.0.0.1 - - [26/Feb/2025 07:48:56] "GET / HTTP/1.1" 200 -
Client requests the favicon.
127.0.0.1 - - [26/Feb/2025 07:48:57] "GET /favicon.ico HTTP/1.1" 204 -
Client makes a request for "red hat".
Since this is the first time this detailed query is received, no cached response exists.
The system triggers the LLM to fetch and process the request, and the resulting response is cached.
127.0.0.1 - - [26/Feb/2025 07:48:59] "GET /mistralai/news_keywords?keywords=red+hat HTTP/1.1" 200 -
Client makes another request for "red hat".
The system recognizes that a cached response exists for this query and serves it from the cache,
thereby avoiding a redundant LLM call.
127.0.0.1 - - [26/Feb/2025 07:50:07] "GET /mistralai/news_keywords?keywords=red+hat HTTP/1.1" 200 -
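Roughly, the logged behaviour maps onto the request handler like this, reusing the `get_cached_news` helper sketched above (the route and function names here are assumptions; b0bot's actual handler may differ):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def run_llm(model, keywords):
    """Placeholder for b0bot's actual LLM-backed news retrieval."""
    raise NotImplementedError

@app.route("/<model>/news_keywords")
def news_keywords(model):
    keywords = request.args.get("keywords", "")

    # First "red hat" request: cache miss, the LLM is invoked and the result is cached.
    # Second "red hat" request: cache hit, the stored response is returned directly.
    response = get_cached_news(model, keywords, fetch_fn=lambda kw: run_llm(model, kw))
    return jsonify(response)
```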
Screenshots (if appropriate):



Below are the screenshots taken during testing, showing the responses for the cached queries.
Future Improvement Note:
In the future, we could implement an agentic caching mechanism.
This mechanism would analyze incoming requests (e.g., "red hat hackers latest news" vs. "red hat")
to determine whether an existing cached response can be reused for a similar context,
making external LLM calls only when necessary; a rough sketch of this idea follows below.
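One possible shape for this, assuming sentence-embedding similarity over previously cached queries (the model, threshold, and helper name are hypothetical and not part of this PR):

```python
from sentence_transformers import SentenceTransformer, util

# Hypothetical future mechanism: reuse a cached answer when a new query is close enough
# in meaning to one already answered (e.g. "red hat" vs. "red hat hackers latest news").
encoder = SentenceTransformer("all-MiniLM-L6-v2")
SIMILARITY_THRESHOLD = 0.85  # illustrative; would need tuning against real queries

def find_reusable_query(query, cached_queries):
    """Return the cached query most similar to `query`, if it clears the threshold."""
    if not cached_queries:
        return None
    query_vec = encoder.encode(query, convert_to_tensor=True)
    cached_vecs = encoder.encode(cached_queries, convert_to_tensor=True)
    scores = util.cos_sim(query_vec, cached_vecs)[0]
    best = int(scores.argmax())
    return cached_queries[best] if float(scores[best]) >= SIMILARITY_THRESHOLD else None
```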
Types of changes:
Checklist: