Make visual features configurable by the user #14
base: master
Conversation
libs/community/langchain_community/tools/azure_ai_services/image_analysis.py
Looks good! Can we add a basic test for the changes? I don't think we need a full example notebook for this one.
azure_ai_services_key (Optional[str]): The API key for Azure AI Services.
azure_ai_services_endpoint (Optional[str]): The endpoint URL for Azure AI Services.
visual_features Any: The visual features to analyze in the image, can be set as
    either strings or VisualFeatures.
It might be helpful to show where this type comes from
Suggested change:
- either strings or VisualFeatures.
+ either strings or ~azure.ai.vision.imageanalysis.models.VisualFeatures.
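To make the "strings or `VisualFeatures`" contract concrete, here is a small, self-contained sketch of the kind of normalization such a change needs. The enum below is a hypothetical stand-in for `azure.ai.vision.imageanalysis.models.VisualFeatures` (reduced to a few members), and `normalize_features` is an illustrative helper, not the PR's actual code.

```python
from enum import Enum

# Hypothetical stand-in for azure.ai.vision.imageanalysis.models.VisualFeatures,
# reduced to a few members for illustration.
class VisualFeatures(str, Enum):
    CAPTION = "caption"
    TAGS = "tags"
    READ = "read"

def normalize_features(features):
    # Accept plain strings or VisualFeatures members; return VisualFeatures members.
    return [f if isinstance(f, VisualFeatures) else VisualFeatures(f) for f in features]
```

With this shape, `visual_features=["caption", "tags"]` and `visual_features=[VisualFeatures.CAPTION, VisualFeatures.TAGS]` behave identically.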
…n-ai#28161) - [x] **PR title**: "langchain: add missing punctuation in react_single_input.py" - [x] **PR message**: - **Description:** Add missing single quote to line 12: "Invalid Format: Missing 'Action:' after 'Thought:"
…ai#28165) ### **Description** Fixed a grammatical error in the documentation section about the delegation to synchronous methods to improve readability and clarity. ### **Issue** No associated issue. ### **Dependencies** No additional dependencies required. ### **Twitter handle** N/A --------- Co-authored-by: ccurme <[email protected]>
…8142) - Add to embedding model tabs - Add tabs for vector stores - Add "hello world" examples in integration landing pages using tabs
…i#28189) Add missing streaming concept
**Description**: removed the redundant phrase (typo)
- Update fake embeddings to deterministic fake embeddings - Fix indentation
If you run `make build start` in one tab, then start editing files, you can efficiently rebuild notebooks with `make generate-files md-sync render`.
…8190) Add other langgraph packages to sys_info output
Thank you for contributing to LangChain! - [ ] **PR title**: "package: description" - Where "package" is whichever of langchain, community, core, etc. is being modified. Use "docs: ..." for purely docs changes, "infra: ..." for CI changes. - Example: "community: add foobar LLM" - [ ] **PR message**: ***Delete this entire checklist*** and replace with - **Description:** a description of the change - **Issue:** the issue # it fixes, if applicable - **Dependencies:** any dependencies required for this change - **Twitter handle:** if your PR gets announced, and you'd like a mention, we'll gladly shout you out! - [ ] **Add tests and docs**: If you're adding a new integration, please include 1. a test for the integration, preferably unit tests that do not rely on network access, 2. an example notebook showing its use. It lives in `docs/docs/integrations` directory. - [ ] **Lint and test**: Run `make format`, `make lint` and `make test` from the root of the package(s) you've modified. See contribution guidelines for more: https://python.langchain.com/docs/contributing/ Additional guidelines: - Make sure optional dependencies are imported within a function. - Please do not add dependencies to pyproject.toml files (even optional ones) unless they are required for unit tests. - Most PRs should not touch more than one package. - Changes should be backwards compatible. - If you are adding something to community, do not re-import it in langchain. If no one reviews your PR within a few days, please @-mention one of baskaryan, efriis, eyurtsev, ccurme, vbarda, hwchase17.
…28227) PR Title: `docs: fix typo in migration guide` PR Message: - **Description**: This PR fixes a small typo in the "How to Migrate from Legacy LangChain Agents to LangGraph" guide. "In this cases" -> "In this case" - **Issue**: N/A (no issue linked for this typo fix) - **Dependencies**: None - **Twitter handle**: N/A
…ngchain-ai#28228) # Description - adding stopReason to response_metadata for stream and astream calls - excluding the required-input validation for NCP_APIGW_API_KEY - removing the warning: Field "model_name" has conflict with protected namespace "model_". cc. @vbarda
Description:
* I'm planning to add `Document.id` support to the Chroma VectorStore, but first I wanted to make sure all the integration tests were passing. They weren't. This PR fixes the broken tests.
* I found 2 issues:
  * This change (from a year ago, exactly :) ) for supporting multi-modal embeddings: https://docs.trychroma.com/deployment/migration#migration-to-0.4.16---november-7,-2023
  * This change langchain-ai#27827 due to an update in the chroma client.

Also ran `format` and `lint` on the changes. Note: I am not a member of the Chroma team.
…ng error when async embed (langchain-ai#27953)

This change refines the handling of `_model_kwargs` in POST requests. Instead of nesting `_model_kwargs` as a dictionary under the `parameters` key, it is now directly unpacked and merged into the request's JSON payload. This ensures that the model parameters are passed correctly and avoids unnecessary nesting. E.g.:

```python
import asyncio
from langchain_huggingface.embeddings import HuggingFaceEndpointEmbeddings

embedding_input = ["This input will get multiplied" * 10000]

embeddings = HuggingFaceEndpointEmbeddings(
    model="http://127.0.0.1:8081/embed",
    model_kwargs={"truncate": True},
)

# Truncated parameters in synchronized methods are handled correctly
embeddings.embed_documents(texts=embedding_input)

# The truncate parameter is not handled correctly in the asynchronous method,
# and 413 Request Entity Too Large is returned.
asyncio.run(embeddings.aembed_documents(texts=embedding_input))
```

Co-authored-by: af su <[email protected]>
Co-authored-by: Erick Friis <[email protected]>
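The payload change described above can be sketched in isolation. The key names (`"inputs"`, `"parameters"`) are assumptions for illustration, not verified against the HuggingFace endpoint API; the point is the shape difference between nesting and unpacking.

```python
# Illustrative payload shapes only; key names are assumptions for the sketch.
model_kwargs = {"truncate": True}
texts = ["This input will get truncated"]

# Before the fix: kwargs nested as a dict under a "parameters" key
payload_before = {"inputs": texts, "parameters": model_kwargs}

# After the fix: kwargs unpacked directly into the JSON payload
payload_after = {"inputs": texts, **model_kwargs}
```

Unpacking puts `truncate` at the top level of the request body, which is where the server reads it from.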
…in-ai#28260) # What problem are we fixing? Currently documents loaded using `O365BaseLoader` fetch source from `file.web_url` (where `file` is `<class 'O365.drive.File'>`). This works well for `.pdf` documents. Unfortunately, office documents (`.xlsx`, `.docx` ...) pass their `web_url` in the following format: `https://sharepoint_address/sites/path/to/library/root/Doc.aspx?sourcedoc=%XXXXXXXX-1111-1111-XXXX-XXXXXXXXXX%7D&file=filename.xlsx&action=default&mobileredirect=true` This obfuscates the path to the file. This PR utilizes the parent folder's path and file name to reconstruct the actual location of the file. Knowing the file's location can be crucial for some RAG applications (the path to the file can carry information we don't want to lose). @vbarda Could you please look at this one? I'm @-mentioning you since we've already closed some PRs together :-) Co-authored-by: Erick Friis <[email protected]>
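The reconstruction idea above reduces to a simple join. The helper below is hypothetical (not the loader's actual code) and only illustrates combining the parent folder's path with the file name instead of relying on the obfuscated `web_url`.

```python
def reconstruct_path(parent_path: str, file_name: str) -> str:
    # Hypothetical helper: join the parent folder's path with the file
    # name, tolerating a trailing slash on the folder path.
    return f"{parent_path.rstrip('/')}/{file_name}"
```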
set open_browser to false to resolve "could not locate runnable browser" error while default browser is None Co-authored-by: Erick Friis <[email protected]>
## Description First of all, thanks for the great framework that is LangChain! At [Linkup](https://www.linkup.so/) we're working on an API to connect LLMs and agents to the internet and our partner sources. We'd be super excited to see our API integrated in LangChain! This essentially consists of adding a LangChain retriever and tool, which is done in our own [package](https://pypi.org/project/langchain-linkup/). Here we're simply following the [integration documentation](https://python.langchain.com/docs/contributing/how_to/integrations/) and updating the documentation of LangChain to mention the Linkup integration. We do have tests (both unit & integration) in our [source code](https://github.com/LinkupPlatform/langchain-linkup), and tried to follow the [integration documentation](https://python.langchain.com/docs/contributing/how_to/integrations/) as closely as possible, which specifically requests to focus on documentation changes for an integration PR, so I'm not adding tests here, even though the PR checklist seems to suggest so. Feel free to correct me if I got this wrong! By the way, we would be thrilled to be mentioned in the list of providers which have standalone packages [here](https://langchain-git-fork-linkupplatform-cj-doc-langchain.vercel.app/docs/integrations/providers/), is there something in particular for us to do for that? 🙂 ## Twitter handle Linkup_platform
…angchain-ai#28421) - **Description:** I realized the invocation parameters were not being passed into `_generate`, so I added those in, but then realized that the parameters contained some old fields designed for an older openai client, which I removed. Parameters work fine now. - **Issue:** Fixes langchain-ai#28229 - **Dependencies:** No new dependencies. - **Twitter handle:** @arch_plane Co-authored-by: Erick Friis <[email protected]>
- **PR title**: "core: google docstring parsing fix"
- **Description:** Added a solution for invalid parsing of Google docstrings such as:

      Args:
          net_annual_income (float): The user's net annual income (in current year dollars).

- **Issue:** Previous code would return arg = "net_annual_income (float)", which would cause an exception in `_validate_docstring_args_against_annotations`.
- **Dependencies:** None

Co-authored-by: Erick Friis <[email protected]>
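An illustrative parser for the case above (not the actual langchain-core implementation): take the text before the first colon and strip a trailing `(type)` annotation, so `"net_annual_income (float)"` yields the bare argument name.

```python
import re

def parse_arg_name(line: str) -> str:
    # Illustrative only: split off the description at the first colon,
    # then strip a trailing "(type)" annotation from the arg name.
    name = line.split(":", 1)[0].strip()
    return re.sub(r"\s*\([^)]*\)$", "", name)
```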
…i#28173) **Description:** Adding documentation for the new SQL Server Vector Store package. Changed files:
- Added new vector store notebook - docs\docs\integrations\vectorstores\sqlserver.ipynb
- FeatureTables.js - docs\src\theme\FeatureTables.js
- Microsoft.mdx - docs\docs\integrations\providers\microsoft.mdx

Detailed documentation on API - https://python.langchain.com/api_reference/sqlserver/index.html --------- Co-authored-by: Erick Friis <[email protected]>
…nts via LLMGraphTransformer (langchain-ai#27949) - [x] **PR title**: "community: Kuzu - Add graph documents via LLMGraphTransformer" - This PR adds a new method `add_graph_documents` to use the `GraphDocument`s extracted by `LLMGraphTransformer` and store in a Kùzu graph backend. - This allows users to transform unstructured text into a graph that uses Kùzu as the graph store. - [x] **Lint and test**: Run `make format`, `make lint` and `make test` from the root of the package(s) you've modified. See contribution guidelines for more: https://python.langchain.com/docs/contributing/ --------- Co-authored-by: pookam90 <[email protected]> Co-authored-by: Pooja Kamath <[email protected]> Co-authored-by: hsm207 <[email protected]> Co-authored-by: Erick Friis <[email protected]>
…f AzureSearch (langchain-ai#28376) **Description** This PR updates the `as_retriever` method in the `AzureSearch` to ensure that the `search_type` parameter defaults to 'similarity' when not explicitly provided. Previously, if the `search_type` was omitted, it did not default to any specific value. So it was inherited from `AzureSearchVectorStoreRetriever`, which defaults to 'hybrid'. This change ensures that the intended default behavior aligns with the expected usage. **Issue** No specific issue was found related to this change. **Dependencies** No new dependencies are introduced with this change. --------- Co-authored-by: prrao87 <[email protected]> Co-authored-by: Erick Friis <[email protected]>
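The default-parameter fix above boils down to one pattern. The helper below is a hypothetical sketch of the behavior change, not the actual `AzureSearch.as_retriever` code: default `search_type` to `'similarity'` only when the caller does not pass one.

```python
def retriever_kwargs_with_default(kwargs: dict) -> dict:
    # Hypothetical sketch: apply the 'similarity' default without
    # overriding an explicitly provided search_type.
    out = dict(kwargs)
    out.setdefault("search_type", "similarity")
    return out
```

`setdefault` is the key choice: an explicit `search_type="hybrid"` still wins, so existing callers are unaffected.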
Docs on how to do hybrid search with Weaviate are covered [here](https://python.langchain.com/docs/integrations/vectorstores/weaviate/) @efriis --------- Co-authored-by: pookam90 <[email protected]> Co-authored-by: Pooja Kamath <[email protected]> Co-authored-by: Erick Friis <[email protected]>
- Bump minimum version of `ollama` to 0.4.4 (which also addresses langchain-ai#28607). - Support recently-released [structured output](https://ollama.com/blog/structured-outputs) feature. This can be accessed by calling `.with_structured_output` with `method="json_schema"` (choice of name [mirrors](https://python.langchain.com/api_reference/openai/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html#langchain_openai.chat_models.base.ChatOpenAI.with_structured_output) what we have for OpenAI's structured output feature). `ChatOllama` previously implemented `.with_structured_output` via the [base implementation](https://github.com/langchain-ai/langchain/blob/ec9b41431ea8bccccc3b9f1679c1e1e16c26e23e/libs/core/langchain_core/language_models/chat_models.py#L1117).
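A hedged usage sketch for the feature above. Only the schema-derivation part is executed here (plain pydantic); the `ChatOllama` usage is shown in comments because it assumes this release of `langchain-ollama` and a running Ollama server.

```python
from pydantic import BaseModel

class Joke(BaseModel):
    setup: str
    punchline: str

# The JSON schema that Ollama's structured-output mode would constrain
# generations to match.
schema = Joke.model_json_schema()

# With a running Ollama server (not executed here), usage would look like:
# from langchain_ollama import ChatOllama
# llm = ChatOllama(model="llama3.1")
# structured_llm = llm.with_structured_output(Joke, method="json_schema")
# structured_llm.invoke("Tell me a joke about programming")
```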
**Description:** - **Memgraph** no longer relies on `Neo4jGraphStore` but **implements `GraphStore`**, just like other graph databases. - **Memgraph** no longer relies on `GraphQAChain`, but implements `MemgraphQAChain`, just like other graph databases. - The refresh schema procedure has been updated to try using `SHOW SCHEMA INFO`. The fallback uses Cypher queries (a combination of schema and Cypher) → **LangChain integration no longer relies on MAGE library**. - The **schema structure** has been reformatted. Regardless of the procedures used to get schema, schema structure is the same. - The `add_graph_documents()` method has been implemented. It transforms `GraphDocument` into Cypher queries and creates a graph in Memgraph. It implements the ability to use `baseEntityLabel` to improve speed (`baseEntityLabel` has an index on the `id` property). It also implements the ability to include sources by creating a `MENTIONS` relationship to the source document. - Jupyter Notebook for Memgraph has been updated. - **Issue:** / - **Dependencies:** / - **Twitter handle:** supe_katarina (DX Engineer @ Memgraph) Closes langchain-ai#25606
fix langchain-ai#12082
…rser (langchain-ai#28382) **Description**: This PR modifies the doc_intelligence.py parser in the community package to include all metadata returned by the Azure Doc Intelligence API in the Document object. Previously, only the parsed content (markdown) was retained, while other important metadata such as bounding boxes (bboxes) for images and tables was discarded. These image bboxes are crucial for supporting use cases like multi-modal RAG workflows when using Azure Doc Intelligence. The change ensures that all information returned by the Azure Doc Intelligence API is preserved by setting the metadata attribute of the Document object to the entire result returned by the API, rather than an empty dictionary. This extends the parser's utility for complex use cases without breaking existing functionality. **Issue**: This change does not address a specific issue number, but it resolves a critical limitation in supporting multimodal workflows when using the LangChain wrapper for the Azure API. **Dependencies**: No additional dependencies are required for this change. --------- Co-authored-by: jmohren <[email protected]>
Resolves langchain-ai#28543

Ollama recently [released](https://github.com/ollama/ollama/releases/tag/v0.4.6) support for streaming tool calls. Previously we would override the `stream` parameter if tools were passed in.

Covered in standard tests here: https://github.com/langchain-ai/langchain/blob/c1d348e95d0f89f8a7322d3458e5a39a73de63bf/libs/standard-tests/langchain_tests/integration_tests/chat_models.py#L893-L897

Before, the test generates one message chunk:

```python
[
    AIMessageChunk(
        content='',
        additional_kwargs={},
        response_metadata={
            'model': 'llama3.1',
            'created_at': '2024-12-10T17:49:04.468487Z',
            'done': True,
            'done_reason': 'stop',
            'total_duration': 525471208,
            'load_duration': 19701000,
            'prompt_eval_count': 170,
            'prompt_eval_duration': 31000000,
            'eval_count': 17,
            'eval_duration': 473000000,
            'message': Message(
                role='assistant',
                content='',
                images=None,
                tool_calls=[
                    ToolCall(
                        function=Function(name='magic_function', arguments={'input': 3})
                    )
                ]
            )
        },
        id='run-552bbe0f-8fb2-4105-ada1-fa38c1db444d',
        tool_calls=[
            {
                'name': 'magic_function',
                'args': {'input': 3},
                'id': 'b0a4dc07-7d7a-487b-bd7b-ad062c2363a2',
                'type': 'tool_call',
            },
        ],
        usage_metadata={'input_tokens': 170, 'output_tokens': 17, 'total_tokens': 187},
        tool_call_chunks=[
            {
                'name': 'magic_function',
                'args': '{"input": 3}',
                'id': 'b0a4dc07-7d7a-487b-bd7b-ad062c2363a2',
                'index': None,
                'type': 'tool_call_chunk',
            }
        ]
    )
]
```

After, it generates two (tool call in one, response metadata in another):

```python
[
    AIMessageChunk(
        content='',
        additional_kwargs={},
        response_metadata={},
        id='run-9a3f0860-baa1-4bae-9562-13a61702de70',
        tool_calls=[
            {
                'name': 'magic_function',
                'args': {'input': 3},
                'id': '5bbaee2d-c335-4709-8d67-0783c74bd2e0',
                'type': 'tool_call',
            },
        ],
        tool_call_chunks=[
            {
                'name': 'magic_function',
                'args': '{"input": 3}',
                'id': '5bbaee2d-c335-4709-8d67-0783c74bd2e0',
                'index': None,
                'type': 'tool_call_chunk',
            },
        ],
    ),
    AIMessageChunk(
        content='',
        additional_kwargs={},
        response_metadata={
            'model': 'llama3.1',
            'created_at': '2024-12-10T17:46:43.278436Z',
            'done': True,
            'done_reason': 'stop',
            'total_duration': 514282750,
            'load_duration': 16894458,
            'prompt_eval_count': 170,
            'prompt_eval_duration': 31000000,
            'eval_count': 17,
            'eval_duration': 464000000,
            'message': Message(role='assistant', content='', images=None, tool_calls=None),
        },
        id='run-9a3f0860-baa1-4bae-9562-13a61702de70',
        usage_metadata={'input_tokens': 170, 'output_tokens': 17, 'total_tokens': 187}
    ),
]
```
…langchain-ai#28645) Description: The multimodal (Tongyi) response format `"message": {"role": "assistant", "content": [{"text": "图像"}]}` is not compatible with LangChain. Dependencies: No --------- Co-authored-by: Erick Friis <[email protected]>
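The incompatibility above comes from Tongyi returning content as a list of `{"text": ...}` parts where LangChain expects a plain string. A hedged sketch of the flattening idea (illustrative helper, not the actual integration code):

```python
def normalize_content(content):
    # Flatten a list-of-dicts content payload (e.g. [{"text": "图像"}])
    # into a single string; pass plain strings through unchanged.
    if isinstance(content, list):
        return "".join(part.get("text", "") for part in content)
    return content
```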
…ry Operators (langchain-ai#28207)

## Description
We are submitting as a team of four for a project. Other team members are @RuofanChen03, @LikeWang10067, @TANYAL77.

This pull request expands the filtering capabilities of the FAISS vectorstore by adding MongoDB-style query operators as indicated below, while including comprehensive testing for the added functionality.

- $eq (equals)
- $neq (not equals)
- $gt (greater than)
- $lt (less than)
- $gte (greater than or equal)
- $lte (less than or equal)
- $in (membership in list)
- $nin (not in list)
- $and (all conditions must match)
- $or (any condition must match)
- $not (negation of condition)

## Issue
This closes langchain-ai#26379.

## Sample Usage
```python
import faiss
import asyncio
from langchain_community.vectorstores import FAISS
from langchain.schema import Document
from langchain_huggingface import HuggingFaceEmbeddings

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")

documents = [
    Document(page_content="Process customer refund request", metadata={"schema_type": "financial", "handler_type": "refund"}),
    Document(page_content="Update customer shipping address", metadata={"schema_type": "customer", "handler_type": "update"}),
    Document(page_content="Process payment transaction", metadata={"schema_type": "financial", "handler_type": "payment"}),
    Document(page_content="Handle customer complaint", metadata={"schema_type": "customer", "handler_type": "complaint"}),
    Document(page_content="Process invoice payment", metadata={"schema_type": "financial", "handler_type": "payment"}),
]

async def search(vectorstore, query, schema_type, handler_type, k=2):
    schema_filter = {"schema_type": {"$eq": schema_type}}
    handler_filter = {"handler_type": {"$eq": handler_type}}
    combined_filter = {
        "$and": [
            schema_filter,
            handler_filter,
        ]
    }
    base_retriever = vectorstore.as_retriever(
        search_kwargs={"k": k, "filter": combined_filter}
    )
    return await base_retriever.ainvoke(query)

async def main():
    vectorstore = FAISS.from_texts(
        texts=[doc.page_content for doc in documents],
        embedding=embeddings,
        metadatas=[doc.metadata for doc in documents],
    )

    def printt(title, documents):
        print(title)
        if not documents:
            print("\tNo documents found.")
            return
        for doc in documents:
            print(f"\t{doc.page_content}. {doc.metadata}")

    printt("Documents:", documents)
    printt('\nquery="process payment", schema_type="financial", handler_type="payment":',
           await search(vectorstore, query="process payment", schema_type="financial", handler_type="payment", k=2))
    printt('\nquery="customer update", schema_type="customer", handler_type="update":',
           await search(vectorstore, query="customer update", schema_type="customer", handler_type="update", k=2))
    printt('\nquery="refund process", schema_type="financial", handler_type="refund":',
           await search(vectorstore, query="refund process", schema_type="financial", handler_type="refund", k=2))
    printt('\nquery="refund process", schema_type="financial", handler_type="foobar":',
           await search(vectorstore, query="refund process", schema_type="financial", handler_type="foobar", k=2))
    print()

if __name__ == "__main__":
    asyncio.run(main())
```

## Output
```
Documents:
	Process customer refund request. {'schema_type': 'financial', 'handler_type': 'refund'}
	Update customer shipping address. {'schema_type': 'customer', 'handler_type': 'update'}
	Process payment transaction. {'schema_type': 'financial', 'handler_type': 'payment'}
	Handle customer complaint. {'schema_type': 'customer', 'handler_type': 'complaint'}
	Process invoice payment. {'schema_type': 'financial', 'handler_type': 'payment'}

query="process payment", schema_type="financial", handler_type="payment":
	Process payment transaction. {'schema_type': 'financial', 'handler_type': 'payment'}
	Process invoice payment. {'schema_type': 'financial', 'handler_type': 'payment'}

query="customer update", schema_type="customer", handler_type="update":
	Update customer shipping address. {'schema_type': 'customer', 'handler_type': 'update'}

query="refund process", schema_type="financial", handler_type="refund":
	Process customer refund request. {'schema_type': 'financial', 'handler_type': 'refund'}

query="refund process", schema_type="financial", handler_type="foobar":
	No documents found.
```

---------

Co-authored-by: ruofan chen <[email protected]>
Co-authored-by: RickyCowboy <[email protected]>
Co-authored-by: Shanni Li <[email protected]>
Co-authored-by: RuofanChen03 <[email protected]>
Co-authored-by: Like Wang <[email protected]>
…t Error for `MistralAIEmbeddings` (langchain-ai#27818) - **Description:** In the event of a Rate Limit Error from the MistralAI server, the response JSON raises a KeyError. To address this, a simple retry mechanism has been implemented to handle cases where the request limit is exceeded. - **Issue:** langchain-ai#27790 --------- Co-authored-by: Eugene Yurtsev <[email protected]>
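The retry mechanism described above can be sketched generically. This is illustrative only, not the actual `MistralAIEmbeddings` code: retry on a rate-limit (HTTP 429) response with exponential backoff, and give up after a fixed number of attempts.

```python
import time

def call_with_retry(send, max_retries=3, base_delay=1.0):
    # Generic retry loop for rate-limited responses (illustrative only).
    for attempt in range(max_retries):
        response = send()
        if response.get("status") != 429:
            return response
        time.sleep(base_delay * (2 ** attempt))  # exponential backoff
    raise RuntimeError("Rate limit retries exhausted")

# Demo with a fake transport that is rate-limited twice, then succeeds.
calls = {"n": 0}

def fake_send():
    calls["n"] += 1
    if calls["n"] < 3:
        return {"status": 429}
    return {"status": 200, "data": [0.1, 0.2]}

result = call_with_retry(fake_send, base_delay=0)
```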
I've completed Yurika's investigation into making the visual features configurable. It's a relatively small change, but it allows both a default set of visual features and passing visual features in as a parameter.