Workaround for ChatRTX on Linux #84
Comments
NVIDIA, a 2.83 trillion USD company, with big "product placement" on their website for ChatRTX ("Download here!!") ... and then it's Windows ONLY??? How sad is that.
Found this: it seems like the solution is to either run it from a VM or to use Wine.
It seems like that website is not an official NVIDIA site. I've tried running the official Windows version with Wine but have encountered issues.
For Linux we can use Alpaca (while waiting for an official NVIDIA solution): https://github.com/Jeffser/Alpaca. You will have different models to choose from, like Mistral NeMo (trained by NVIDIA) or Llama ChatQA (also trained by NVIDIA). I recommend using a local Ollama install instead of the Ollama bundled with Alpaca:

1. Install Ollama.
2. Open a terminal and start the Ollama server (e.g. `ollama serve`).
3. While Ollama is running in that terminal (and without closing it), open Alpaca / Preferences and set "URL of Remote Instance" to http://127.0.0.1:11434.

For me it works better that way. You can always use the bundled Ollama in Alpaca instead of running a local Ollama instance.
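As a quick way to confirm that the local Ollama instance Alpaca points at is actually serving on 127.0.0.1:11434, you can hit Ollama's `/api/generate` endpoint directly. A minimal sketch using only the Python standard library; it assumes you have already pulled a model locally (here `mistral-nemo`, via `ollama pull mistral-nemo` — substitute any model you have):

```python
# Sanity-check the local Ollama server that Alpaca will connect to.
# Assumes `ollama serve` is running and a model has been pulled.
import json
import urllib.request

OLLAMA_URL = "http://127.0.0.1:11434/api/generate"

payload = json.dumps({
    "model": "mistral-nemo",         # any model you have pulled locally
    "prompt": "Say hello in one sentence.",
    "stream": False,                 # return one JSON object, not a stream
}).encode("utf-8")

req = urllib.request.Request(
    OLLAMA_URL,
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.loads(resp.read())

print(body["response"])
```

If this prints a reply, Alpaca's "URL of Remote Instance" setting will work with the same address.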
Hi NVIDIA staff,
Congrats on the great work you are doing.
Since TensorFlow, TensorRT-LLM, LlamaIndex, and FAISS are all libraries that can be used without problems under Linux, is there any workaround to get ChatRTX running on Linux, even if it is through the terminal, without a UI?
It would be great if you could guide us on how to use ChatRTX from the Linux terminal.
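For anyone experimenting while waiting, the retrieval half of a ChatRTX-style RAG pipeline can already be assembled on Linux with FAISS alone. A minimal sketch, assuming `faiss-cpu` and `numpy` are installed; the embeddings here are random placeholders, and a real setup would produce them with an embedding model:

```python
# Sketch of the FAISS retrieval step in a ChatRTX-style RAG pipeline.
# Embeddings are random placeholders standing in for a real embedding model.
import numpy as np
import faiss

dim = 384                            # typical dimension for small embedding models
docs = ["doc one ...", "doc two ...", "doc three ..."]

rng = np.random.default_rng(0)
doc_vecs = rng.standard_normal((len(docs), dim)).astype("float32")

index = faiss.IndexFlatL2(dim)       # exact L2 search, fine for small corpora
index.add(doc_vecs)                  # store the document vectors

query_vec = rng.standard_normal((1, dim)).astype("float32")
distances, ids = index.search(query_vec, 2)   # top-2 nearest documents
for rank, doc_id in enumerate(ids[0]):
    print(f"{rank + 1}. {docs[doc_id]} (distance {distances[0][rank]:.3f})")
```

The retrieved passages would then be fed as context to a local LLM (e.g. one served by TensorRT-LLM), which is essentially what ChatRTX does on Windows.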