
v0.0.1 Generation Quality Improvements

Pre-release
@mdegans released this 24 May 18:37 · 6b36ea2

This release improves the quality of generation by:

  • Changing the default sampling settings for LLaMA from greedy to locally typical sampling (see the sketch after this list).
  • Updating drama_llama and llama.cpp for BPE tokenizer changes. This will require regenerating any models. See the linked issue for scripts.
  • Changing OpenAI sampling settings to better suit story generation.
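
For context, locally typical sampling keeps the tokens whose surprisal is closest to the entropy of the next-token distribution, rather than always taking the most probable token as greedy decoding does. The sketch below is a minimal, self-contained illustration of that idea in Rust; it is not the drama_llama or llama.cpp API, and the function name, the `typical_p` threshold, and the toy distribution are illustrative assumptions.

```rust
// Minimal sketch of locally typical sampling over a probability
// distribution (illustrative only; not the drama_llama API).
// Tokens are kept in order of |surprisal - entropy| until their
// cumulative probability reaches `typical_p`, then one token is
// drawn from that reduced set.
fn locally_typical_sample(probs: &[f32], typical_p: f32, rng_uniform: f32) -> usize {
    // Entropy of the full distribution: H = -sum(p * ln p).
    let entropy: f32 = probs
        .iter()
        .filter(|&&p| p > 0.0)
        .map(|&p| -p * p.ln())
        .sum();

    // Rank tokens by how close their surprisal (-ln p) is to H.
    let mut ranked: Vec<(usize, f32)> = probs
        .iter()
        .enumerate()
        .filter(|(_, &p)| p > 0.0)
        .map(|(i, &p)| (i, p))
        .collect();
    ranked.sort_by(|a, b| {
        let da = (-(a.1.ln()) - entropy).abs();
        let db = (-(b.1.ln()) - entropy).abs();
        da.partial_cmp(&db).unwrap()
    });

    // Keep the smallest "locally typical" set with mass >= typical_p.
    let mut kept = Vec::new();
    let mut mass = 0.0;
    for (i, p) in ranked {
        kept.push((i, p));
        mass += p;
        if mass >= typical_p {
            break;
        }
    }

    // Sample within the kept set, proportional to each token's
    // probability (rng_uniform is expected to be in [0, 1)).
    let mut target = rng_uniform * mass;
    for (i, p) in &kept {
        target -= p;
        if target <= 0.0 {
            return *i;
        }
    }
    kept.last().unwrap().0
}

fn main() {
    // Toy next-token distribution; with typical_p = 0.95 several
    // "average-surprisal" tokens survive instead of only the argmax.
    let probs = [0.5, 0.2, 0.15, 0.1, 0.05];
    let token = locally_typical_sample(&probs, 0.95, 0.42);
    println!("sampled token index: {token}");
}
```

The switch away from greedy decoding is what typically avoids the repetitive, degenerate continuations that hurt longer story generations, which is the motivation behind this default change.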

Known Issues:

  • This release has some crashes that are fixed in v0.0.2.