+++
title = "Artificial"
summary = "How to get Llama up and running?"
date = 2024-12-18T08:10:34+01:00
draft = false
tags = ['AI',]
+++
I've watched a lot about **AI**, but I only saw random results until I decided to go as low-level as possible.

For starters, grab a copy of [llama.cpp](https://github.com/ggerganov/llama.cpp.git) with `git clone https://github.com/ggerganov/llama.cpp.git`.
Then compile it (you'll need [CMake](https://cmake.org/)) with `cd llama.cpp/; cmake -B build; cmake --build build --config Release`.
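Put together, the whole setup is just a handful of commands (a minimal sketch assuming a working C/C++ toolchain and CMake are already installed):

```bash
# Fetch the llama.cpp sources
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp

# Configure and build the Release binaries; the tools end up in ./build/bin/
cmake -B build
cmake --build build --config Release
```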

Now just pick any **GGUF** model from [Hugging Face](https://huggingface.co/models?library=gguf), download it, and run it with `./build/bin/llama-cli -m your_model.gguf -p "I believe the meaning of life is" -n 128`.
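A model file can also be fetched from the command line with `huggingface-cli` (part of the `huggingface_hub` Python package); the repository and file names below are placeholders, so swap in whichever GGUF model you chose:

```bash
# Placeholder repository and file name -- replace with a real GGUF model of your choice
huggingface-cli download some-org/some-model-GGUF some-model-Q4_K_M.gguf --local-dir .

# One-shot completion: -m picks the model, -p the prompt, -n the number of tokens to generate
./build/bin/llama-cli -m some-model-Q4_K_M.gguf -p "I believe the meaning of life is" -n 128
```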

Or, if your computer can take it, talk with it interactively using `./build/bin/llama-cli -m your_model.gguf -cnv -n 128`.
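In conversation mode a couple of extra flags can make things smoother on modest hardware; the values here are just starting points, not requirements:

```bash
# Conversation mode with a 2048-token context window and a slightly lower sampling temperature
./build/bin/llama-cli -m your_model.gguf -cnv -c 2048 --temp 0.7 -n 128
```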
