
WebLLM Example Code does not Work in Google Chrome #5771

Closed
5 tasks done
radiantone opened this issue Jun 15, 2024 · 5 comments
Labels
auto:bug Related to a bug, vulnerability, unexpected error with an existing feature

Comments

@radiantone (Contributor)

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain.js documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain.js rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

I tried the example code from the LangChain docs and it throws an exception in Google Chrome.
web-llm 0.2.46

import { ChatWebLLM } from "@langchain/community/chat_models/webllm";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatWebLLM({
  model: "Phi-3-mini-4k-instruct-q4f16_1-MLC",
  chatOptions: {
    temperature: 0.5,
  },
});
const response = await model.invoke([
  new HumanMessage({ content: "What is 1 + 1?" }),
]);

Error Message and Stack Trace (if applicable)

Error: Model not loaded before calling chatCompletion(). Please ensure you have called `MLCEngine.reload(model)` to load the model before initiating chat operations, or initialize your engine using `CreateMLCEngine()` with a valid model configuration.
    at MLCEngine.<anonymous> (chunk-ZOKLVKAW.js?v=427e5d1f:13082:15)
    at Generator.next (<anonymous>)
    at chunk-ZOKLVKAW.js?v=427e5d1f:978:67
    at new Promise (<anonymous>)
    at __awaiter (chunk-ZOKLVKAW.js?v=427e5d1f:960:10)
    at MLCEngine.chatCompletion (chunk-ZOKLVKAW.js?v=427e5d1f:13080:12)
    at Completions.create (chunk-ZOKLVKAW.js?v=427e5d1f:11815:24)
    at ChatWebLLM._streamResponseChunks (@langchain_community_chat_models_webllm.js?v=427e5d1f:102:55)
    at _streamResponseChunks.next (<anonymous>)
    at ChatWebLLM._call (@langchain_community_chat_models_webllm.js?v=427e5d1f:125:22)

Description

WebLLM Example Code does not Work in Google Chrome

System Info

[email protected] | MIT | deps: 16 | versions: 277
Typescript bindings for langchain
https://github.com/langchain-ai/langchainjs/tree/main/langchain/

keywords: llm, ai, gpt3, chain, prompt, prompt engineering, chatgpt, machine learning, ml, openai, embeddings, vectorstores

dist
.tarball: https://registry.npmjs.org/langchain/-/langchain-0.2.5.tgz
.shasum: fc23848c20244a1d0a3dd5dd4663af9a41017ccb
.integrity: sha512-H5WL0NanCdQ+tzoeEt7Fyz9YGdR3wbfDvfQrJvxAO95istKo5JraRh24dzyvqxM9439xwRMNaMIpMwsyqtWDtQ==
.unpackedSize: 4.0 MB

dependencies:
@langchain/core: ~0.2.0
@langchain/openai: ~0.1.0
@langchain/textsplitters: ~0.0.0
binary-extensions: ^2.2.0
js-tiktoken: ^1.0.12
js-yaml: ^4.1.0
jsonpointer: ^5.0.1
langchainhub: ~0.0.8
langsmith: ~0.1.30
ml-distance: ^4.0.0
openapi-types: ^12.1.3
p-retry: 4
uuid: ^9.0.0
yaml: ^2.2.1
zod-to-json-schema: ^3.22.3
zod: ^3.22.4

maintainers:

dist-tags:
latest: 0.2.5
next: 0.2.3-rc.0

published a week ago by jacoblee93 [email protected]

@dosubot dosubot bot added the auto:bug Related to a bug, vulnerability, unexpected error with an existing feature label Jun 15, 2024

dosubot bot commented Jun 15, 2024

To resolve the issue with the WebLLM example code in Google Chrome, where the code throws an exception indicating that the model is not loaded before calling chatCompletion(), you need to ensure that the model is properly loaded before invoking the chatCompletion() method. This can be done by calling the reload method on the engine instance and awaiting its completion before proceeding.

Here is an example of how you can modify the code to ensure the model is loaded:

import { ChatWebLLM } from "@langchain/community/chat_models/webllm";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatWebLLM({
  model: "Phi-3-mini-4k-instruct-q4f16_1-MLC",
  chatOptions: {
    temperature: 0.5,
  },
});

await model.engine.reload(model.model); // Ensure the model is loaded

const response = await model.invoke([
  new HumanMessage({ content: "What is 1 + 1?" }),
]);

console.log(response);

/*
AIMessage {
  content: ' 2\n',
}
*/

In this example, the reload method is called on the engine instance of the ChatWebLLM model to ensure the model is loaded before any chat operations are performed. This should resolve the issue of the model not being loaded before calling chatCompletion().

Additionally, ensure that you are running the code in a suitable web environment (such as a web worker).

To continue talking to Dosu, mention @dosu.

@jacoblee93 (Collaborator)

I suspect this is related to #5776.

We will lock the peer dep to a specific version from here on out.

@Neet-Nestor (Contributor)

@jacoblee93 Actually, this specific issue is not related to the webllm version but to wrong sample code in the documentation. As the Dosubot suggested above, either model.engine.reload or the wrapper model.initialize() needs to be called before calling completion.
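For reference, a minimal sketch of the initialize() approach. This assumes a WebGPU-capable browser (it will not run in plain Node.js), and the progress-callback shape is taken from the LangChain.js WebLLM docs, so treat it as illustrative rather than authoritative:

```typescript
import { ChatWebLLM } from "@langchain/community/chat_models/webllm";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatWebLLM({
  model: "Phi-3-mini-4k-instruct-q4f16_1-MLC",
  chatOptions: { temperature: 0.5 },
});

// initialize() wraps engine.reload() under the hood: it downloads and loads
// the model weights, and must complete before invoke() is called.
await model.initialize((progress) => console.log(progress.text));

const response = await model.invoke([
  new HumanMessage({ content: "What is 1 + 1?" }),
]);
console.log(response.content);
```

Either this or the explicit model.engine.reload(model.model) call works; initialize() is just the documented wrapper.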

@jacoblee93 (Collaborator)

Ah, I see. Thank you! That PR should fix the example as well, then.

@radiantone (Contributor, Author)

Thank you all. I will make all the updates described here and try again.

@dosubot dosubot bot added the stale Issue has not had recent activity or appears to be solved. Stale issues will be automatically closed label Oct 4, 2024
@dosubot dosubot bot closed this as not planned Won't fix, can't repro, duplicate, stale Oct 11, 2024
@dosubot dosubot bot removed the stale Issue has not had recent activity or appears to be solved. Stale issues will be automatically closed label Oct 11, 2024
Projects
None yet
Development

No branches or pull requests

3 participants