
Bump llama-cpp-python to 0.2.7 #65

Merged: 1 commit into nixified-ai:master on Nov 19, 2023

Conversation

7omb (Contributor) commented Nov 19, 2023

Upgrade llama-cpp-python to version 0.2.7, which is required by the currently used version 1.7 of text-generation-webui. This fixes issue #61. Updating flake.lock is necessary to get a matching version of scikit-build-core.
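For context, a bump like this typically amounts to repinning the package source. Below is a minimal sketch in the callPackage style, not the actual diff from this PR; the attribute names are assumptions and the hash is left as a placeholder:

```nix
# Minimal sketch of bumping llama-cpp-python to 0.2.7, assuming it is
# packaged with buildPythonPackage. Not the actual diff from this PR.
{ fetchFromGitHub, llama-cpp-python }:

llama-cpp-python.overridePythonAttrs (old: rec {
  version = "0.2.7";
  src = fetchFromGitHub {
    owner = "abetlen";
    repo = "llama-cpp-python";
    rev = "v${version}";
    hash = "";  # placeholder; Nix reports the correct hash on the first build attempt
  };
})
```

The flake.lock refresh mentioned above would come from something like `nix flake lock --update-input nixpkgs` (assuming scikit-build-core is provided by the nixpkgs input), which repins that single input without touching the others.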

MatthewCroughan (Member) commented

I don't currently have the time to implement it, but a VM test checking that nixpkgs updates don't break anything would make this easier. I just tested invokeai on nvidia and saw that it was not impacted by this PR, so I'm going to merge it.
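As a hedged sketch of what such a VM test could look like: in the NixOS test framework, merely referencing the package forces it to build, so even a trivial test script catches build breakage from an update. The `invokeai` argument below is an assumption standing in for whatever package attribute the flake actually exposes:

```nix
# Minimal sketch of a build-smoke VM test; `invokeai` is a placeholder
# for the flake's actual package attribute.
{ pkgs, invokeai }:

pkgs.nixosTest {
  name = "invokeai-build-check";
  nodes.machine = { ... }: {
    # Putting the package in the system closure forces it to build.
    environment.systemPackages = [ invokeai ];
  };
  testScript = ''
    machine.wait_for_unit("multi-user.target")
  '';
}
```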

MatthewCroughan merged commit 0709150 into nixified-ai:master on Nov 19, 2023
wozeparrot commented

This possibly broke the invokeai build for both nvidia and amd: https://hercules-ci.com/github/nixified-ai/flake/jobs/318.

Reproduced locally as well.

7omb (Contributor, Author) commented Nov 20, 2023

Sorry that I missed that :(

7omb deleted the bump-llama-cpp-python branch on November 20, 2023 at 18:10