Errors on vite.js project #142

Open
Raf-sns opened this issue Sep 9, 2024 · 13 comments

@Raf-sns commented Sep 9, 2024

Hi,
I'm trying to use ollama-js in a vite.js project.
What I did:
initialize a new vite.js project:
npm create vite@latest
Project name: test-ollama
Selected: Vanilla
Select a variant: JavaScript
cd test-ollama
npm i ollama
npm install
npm run dev

In main.js:

import './style.css'
import javascriptLogo from './javascript.svg'
import viteLogo from '/vite.svg'
import ollama from 'ollama/browser'

const response = await ollama.chat({
  model: 'llama3.1',
  messages: [{ role: 'user', content: 'Why is the sky blue?' }],
})
console.log(response.message.content)

Error in console :
POST http://127.0.0.1:11434/api/chat net::ERR_CONNECTION_REFUSED

What I tried:

import './style.css'
import javascriptLogo from './javascript.svg'
import viteLogo from '/vite.svg'
import { Ollama } from 'ollama'
import ollama from 'ollama/browser'

const testOllama = new Ollama({ host: 'http://127.0.0.1:5173' })
const response = await testOllama.chat({
  model: 'llama3.1',
  messages: [{ role: 'user', content: 'Why is the sky blue?' }],
})
console.log(response.message.content)

Error in console :
POST http://127.0.0.1:5173/api/chat 404 (Not Found)

I don't really understand why I can't connect to Ollama.

Additionally, I downloaded the 8B version of Llama separately
-> Meta-Llama-3.1-8B
I don't understand how to point Ollama at that local copy.
My end goal is to serve a fine-tuned version of llama 8B on a website.
Thank you for your answers, kind regards,
Raf

@hopperelec (Contributor)

ollama-js is just a library for interacting with the Ollama API, which you need to be hosting separately. The reason you get ERR_CONNECTION_REFUSED with the first snippet is that nothing is listening on that port (presumably because you're not running Ollama). The reason you get a 404 with the second snippet is that your Vite project doesn't have an /api/chat route. You can download Ollama from here.
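For reference, once the Ollama server is running locally (by default it listens on http://127.0.0.1:11434), the kind of code in your first snippet should work unchanged. A minimal sketch, assuming the llama3.1 model has already been pulled and that any explicit host points at the Ollama port rather than at Vite's dev server:

import { Ollama } from 'ollama/browser'

// Target the Ollama server itself (default port 11434), not the Vite dev server (port 5173).
const client = new Ollama({ host: 'http://127.0.0.1:11434' })

const response = await client.chat({
  model: 'llama3.1',
  messages: [{ role: 'user', content: 'Why is the sky blue?' }],
})
console.log(response.message.content)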

@Raf-sns (Author) commented Sep 9, 2024

OK, thank you for your response.
What I understand is that Ollama only interacts with a locally hosted version?
I'm going to try to download the library but I don't think it will match my project...
Thank you for your response and suggestions.
Sincerely,
Raf

@hopperelec (Contributor)

What I understand is that Ollama only interacts with a locally hosted version?

The purpose of Ollama is to locally host LLMs, yes

I'm going to try to download the library

Ollama isn't a library, ollama-js is

but I don't think it will match my project...

You can still use your ollama-js code to interact with the Ollama API however you like; the API doesn't limit what you can build with it.
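As a side note (a sketch, not code from this thread): the same default client exposes the rest of the API as well, for example listing the models the local server currently has:

import ollama from 'ollama/browser'

// Ask the local Ollama server which models it has available locally.
const { models } = await ollama.list()
console.log(models.map((m) => m.name))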

@Raf-sns (Author) commented Sep 9, 2024

Thank you for your response.
I installed ollama, reviewed the documentation but still don't understand how to link ollama with the version of llama 3.1 (8B) that I downloaded to my system.
I tried:
-> ollama run llama3.1
nothing happens
-> ollama serve
When I open http://127.0.0.1:11434/ in my browser I get "Ollama is running"
But after that???
-> I tried again:
ollama run llama3.1
But it seems to download llama-3.1 8B which I already have on my PC.
-> I reviewed the docs, but I still don't understand how to make Ollama point to my version of llama-3.1 8B which I have already downloaded.
MMmm! The documentation is very incomplete...

@hopperelec (Contributor)

I installed ollama, reviewed the documentation but still don't understand how to link ollama with the version of llama 3.1 (8B) that I downloaded to my system.

But it seems to download llama-3.1 8B which I already have on my PC.

Ollama is designed for you to download models via Ollama, not via external sites. You might be able to use the model you have already downloaded by creating a Modelfile, which is documented here

@Raf-sns (Author) commented Sep 9, 2024

no, I let it go:
ollama run llama3.1
I don't even know where it downloads Llama.
Either I use the official repo or nothing.
The documentation is not clear enough on what each function does.
And I couldn't use it on the web anyway.
Too bad; it looked promising, but I've had enough of a headache for today trying to understand something that is inherently unclear.
Thank you very much for your support!
Sincerely,
Raphael.

@hopperelec (Contributor) commented Sep 9, 2024

I don't even know where it downloads Llama.

Not sure about other OSs, but on Windows it downloads to %HOMEPATH%/.ollama/models/blobs. However, the purpose of Ollama is to completely abstract all this away from you; I think you're over-complicating it a lot. ollama run will download the model if you don't already have it, then start a conversation directly inside the terminal. Once it finishes downloading and loading the model into memory, it should say "Send a message (/? for help)".
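(As an aside, a sketch of the programmatic equivalent: the download step that ollama run performs first can also be triggered from ollama-js via its pull call, so a client can fetch a model the server doesn't have yet.)

import ollama from 'ollama/browser'

// Ask the local Ollama server to download the model; roughly the first thing `ollama run` does.
const progress = await ollama.pull({ model: 'llama3.1', stream: false })
console.log(progress.status)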

And I couldn't use it on the web anyway.

If you mean through Ollama: Ollama doesn't have a built-in web UI; you need a third-party one such as open-webui. All Ollama does is provide an API for interfacing with models created via Ollama.

The documentation is not clear enough on what each function does.

Would you mind linking me to what you are referring to as the documentation? The docs seem perfectly clear to me.

@Raf-sns (Author) commented Sep 9, 2024

Hi,
Here is the link to the first page of the documentation:
ollama

Follow step 1:
Install #Linux
And so where, what, what size is it?

Follow step 2:
ollama run llama3.1
Nothing happens the first time.
ollama serve:
When it is started, nothing clearly shows which URL to go to.
then
ollama run llama3.1
Otherwise, as I said, I had to fill in forms to download Llama-3.1 8B and here I'm downloading it directly without signing anything?
Seriously, what am I actually downloading?
The truth is that I don't know anything about it and that's unacceptable, don't you agree with me?

Secondly, what's the point of CREATE, PULL, PUSH?
I'm not an AI professional; I'm trying to grapple with a world that speaks its own language, and I'm not a novice dev.

"GGUF", but what is that???

Modelfile: ‘You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.’
How do I turn this into a context that will take thousands of pages from my client, a doctor of psychology, so that it can analyse his theses and an AI can extract concepts that he wouldn't have developed himself as a doctor?
How are visitors to his website going to be able to understand the highly technical concepts he develops in his texts?
They're not going to ask Mario ^_^!

I think I'll tell my client that his aspirations in terms of AI are doomed to failure, especially as he doesn't have the colossal resources (and neither do I) to embark on this adventure.
Thank you all the same for your help.
Maybe it's me who's not up to it.
Frankly, the things explained here are like a thick black fog to me, so I'll pass!

All the best,
Raf.

@remon-nashid commented Sep 9, 2024

@Raf-sns this has nothing to do with ollama-js; you just need to familiarize yourself with Ollama itself, and fortunately it's dead simple: https://github.com/ollama/ollama/blob/main/README.md#quickstart

@hopperelec (Contributor) commented Sep 9, 2024

Follow step 1:
Install #Linux
And so where, what, what size is it?

What are you referring to here? Linux isn't required to use Ollama, and nothing in the Ollama docs, or the README I assume you're reading, mentions installing Linux

When it is started, nothing clearly shows which URL to go to.

As I keep explaining, Ollama is just an API; it does not have a built-in web UI, so there is no URL to go to. You interact with it either directly from the terminal, which is what ollama run does, or using third-party tools such as ollama-js, which you created this issue on, or open-webui, which I already linked to.
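To make "just an API" concrete, here is a sketch (assuming the server is running on its default port) of calling the chat endpoint that ollama-js wraps directly with fetch:

// Call the Ollama REST API directly; ollama-js is essentially a wrapper around requests like this.
const res = await fetch('http://127.0.0.1:11434/api/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'llama3.1',
    messages: [{ role: 'user', content: 'Why is the sky blue?' }],
    stream: false,
  }),
})
const data = await res.json()
console.log(data.message.content)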

and here I'm downloading it directly without signing anything?

Correct

Seriously, what am I actually downloading?

The llama3.1 model

Secondly, what's the point of CREATE, PULL, PUSH?

This is clearly explained in the docs, and you shouldn't need to worry about these anyway for basic usage.

"GGUF", but what is that???

It's a file format used for storing compressed (quantized) AI models

Modelfile: ‘You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.’
How do I turn this into a context that will take thousands of pages from my client, a doctor of psychology, so that it can analyse his theses and an AI can extract concepts that he wouldn't have developed himself as a doctor?
How are visitors to his website going to be able to understand the highly technical concepts he develops in his texts?
They're not going to ask Mario ^_^!

That Modelfile, or rather system message, is just an example. The point of that example is to show that you can set it to whatever you want!
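For example (an illustrative sketch, not code from this thread, with made-up prompt text): the same idea can be expressed at request time through ollama-js with a system message, without writing a Modelfile at all:

import ollama from 'ollama/browser'

// A system message set per request plays the same role as the SYSTEM line in a Modelfile.
const response = await ollama.chat({
  model: 'llama3.1',
  messages: [
    { role: 'system', content: 'You explain technical psychology concepts in plain language.' },
    { role: 'user', content: 'Summarise the key concepts in this passage: ...' },
  ],
})
console.log(response.message.content)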

@hopperelec (Contributor)

You seem to be assuming that Ollama does a lot of things it never claimed to do, and then getting frustrated because you can't figure out how to make it do those things. Please just understand that all Ollama does on its own is provide an API for other tools such as ollama-js and open-webui to interface with, as well as some terminal commands for downloading, creating and testing models. Once you have Ollama installed and running, the Vite project you mentioned at the start of this issue should work as expected, and you can go from there. If all you want to do is interact with llama-3.1, then you do not need to worry about any of the terminal commands, because the third-party tools you would traditionally use to interact with the Ollama API will handle all of that for you, including your JS code.

@Raf-sns (Author) commented Sep 9, 2024

Hi,
"What are you referring to here? Linux isn't required to use Ollama, and nothing in the Ollama docs, or the README I assume you're reading, mentions installing Linux"

Maybe, but it's my system, and Linux is mentioned in the two links below: https://ollama.com/download
and https://github.com/ollama/ollama#linux

-> I don't want to argue. You asked me what I found unclear in the explanations, and I simply answered you, with my resentment; otherwise, I would not have done it.

Regards,
Raf

@hopperelec (Contributor)

and Linux is mentioned in the two links below: https://ollama.com/download and https://github.com/ollama/ollama#linux

OK, I thought you were saying that step 1 to using Ollama was to install Linux itself; I'm guessing you meant installing the Linux version of Ollama.
