Releases: dottxt-ai/outlines

Outlines v0.1.5

22 Nov 16:21

Full Changelog: 0.1.4...0.1.5

Outlines v0.1.4

18 Nov 22:05

What's Changed

  • Bump to outlines-core=0.1.17 for python 3.12-3.13 support by @mgoin in #1273

Full Changelog: 0.1.3...0.1.4

Outlines v0.1.3

10 Nov 10:01

Full Changelog: 0.1.2...0.1.3

Outlines v0.1.2

08 Nov 15:22

Full Changelog: 0.1.1...0.1.2

Outlines v0.1.1

15 Oct 12:51

The 0.1.0 release included a version of outlines-core for which wheels were not available, causing errors for many users who don't have a Rust compiler installed. We fixed this in outlines-core, but changes to the interface were pushed in the meantime, so we had to account for these before cutting this new release.

What's Changed

  • Logits processors: Update inplace, with batch operation by @lapp0 in #1192
  • Fix Broken Docs Links by @lapp0 in #1195
  • use dottxt-ai/outlines not outlines-dev/outlines in mkdocs by @lapp0 in #1194
  • Add docs on serving with LM Studio by @cpfiffer in #1205
  • Compatibility updates for next outlines-core release by @lapp0 in #1204

Full Changelog: 0.1.0...0.1.1

Outlines v0.1.0

07 Oct 14:01

⚑ Performance Improvements

  • Outlines Core: Enjoy faster FSM index construction with a new implementation (#1175).
  • 98% Reduction in Runtime Overhead: Achieved by storing FSM token masks as tensors (#1013).

πŸ’‘ Enhancements

  • Unified Logits Processors: All models now use the shared outlines.processors, completed by extending the integration to llama-cpp, vLLM, and ExLlamaV2.
  • Custom Regex Parsers: Simplify the implementation of custom Guide classes with Regex Parser support (#1039).
  • Qwen-style Byte Tokenizer Support: Now compatible with Qwen-style byte tokenizers (#1153).
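The unified-processors design above boils down to one pattern: a logits processor masks out every token the current structured-generation state disallows, so sampling can only pick valid continuations. The sketch below is an illustrative stand-in using plain Python lists, not outlines' actual implementation (which operates on tensors).

```python
# Minimal sketch of the logits-processor pattern behind structured generation:
# tokens not allowed by the current FSM state get their logit set to -inf,
# so any sampling strategy can only select allowed tokens.
NEG_INF = float("-inf")

def mask_logits(logits, allowed_token_ids):
    """Return a copy of `logits` with disallowed tokens masked to -inf."""
    allowed = set(allowed_token_ids)
    return [x if i in allowed else NEG_INF for i, x in enumerate(logits)]

# Toy vocabulary of 5 tokens; only tokens 1 and 3 are valid next steps.
logits = [0.2, 1.5, -0.3, 0.9, 0.0]
masked = mask_logits(logits, allowed_token_ids={1, 3})
best = max(range(len(masked)), key=masked.__getitem__)  # greedy pick
```

Because the mask is applied to the logits rather than to the decoded text, the same processor can be plugged into any backend that exposes a logits hook, which is what makes sharing it across llama-cpp, vLLM, and ExLlamaV2 possible.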

πŸ› Bug Fixes

  • CFG Beta: Fixed a large number of bugs to enable a beta version of grammar-based generation using Lark (#1067).
  • Fixed incorrect argument order breaking some models in models.transformers_vision (#1077).
  • Resolved OpenAI fallback tokenizer issue (#1046).
  • Option to disable tqdm bars during inference with vLLM (#1004).
  • models.llamacpp no longer includes implicit max_tokens (#996).
  • Fixed whitespace handling for models.mlxlm (#1003).
  • models.mamba now works and supports structured generation (#1040).
  • Resolved pad_token_id reset issue in TransformerTokenizer (#1068).
  • Fixed outlines.generate generator reuse causing runtime errors (#1160).

⚠️ Breaking Changes

  • outlines.integrations is now deprecated: #1061

Full Changeset


Outlines v0.0.46

22 Jun 15:23

What's Changed

  • Adding MLXLM, VLLM classes to LogitsGenerator type by @parkervg in #970
  • Fix samplers documentation by @jrinder42 in #980
  • Ensure regex matches valid JSON for "const" and "enum" with booleans, nulls, and strings by @mwootten in #972
  • Add link to docs of Multimodal Structured Generation for CVPR 2nd MMFM Challenge by @leloykun in #960
  • Fix Hugging Face Hub model ID in example code by @davanstrien in #988
  • Allow escaped strings in json_schema.py by @lapp0 in #991
  • Fix use of os.environ in documentation by @rlouf in #993
  • fix pattern-string in json_schema.py by removing anchors by @lapp0 in #995
  • Fix Incorrect Token Normalization Method for LlamaCppTokenizer by @lapp0 in #992

Full Changelog: 0.0.45...0.0.46

Outlines v0.0.45

17 Jun 19:34

Full Changelog: 0.0.44...0.0.45

Outlines v0.0.44

14 Jun 10:43

What's Changed

  • Fix null byte \x00 issue in byte level fsm resulting in KeyError in BetterFSM::FSMInfo by @lapp0 in #930
  • Correct link for llamacpp library by @alonsosilvaallende in #949
  • Add statement regarding OS vs closed models by @rlouf in #950
  • Support min/max number of digits for numbers in JSON Schema by @smagnan in #932
  • Fix/extend re replacement seq by @saattrupdan in #948
  • Update docker ENTRYPOINT to ensure proper argument handling by @shashankmangla in #962
  • Add cerebrium as deployment option in documentation by @rlouf in #963
  • Add link to TGI documentation by @rlouf in #964
  • Introduce outlines.models.mlxlm by @lapp0 in #956
  • Update the documentation for OpenAI models by @rlouf in #951

Full Changelog: 0.0.43...0.0.44

Outlines v0.0.43

04 Jun 20:40

What's Changed

  • fix typo in docs by @eitanturok in #860
  • fix code rendering by @eitanturok in #864
  • Ignore errors caused by import warnings from huggingface_hub & pyairports by @leloykun in #866
  • Fix format in the BentoML doc by @Sherlock113 in #867
  • Hotfix for CFG Generation by @leloykun in #865
  • Localize types by @rlouf in #868
  • Add Email type by @eitanturok in #870
  • Fix installation instructions by @eitanturok in #877
  • Extract function name in get_schema_from_signature by @eitanturok in #878
  • Remove broken final state loop by @br3no in #874
  • Fixing stream stopping at wrong location by @isamu-isozaki in #898
  • Prevent Illegal Look-Around for OneOf in JSONSchema by @lapp0 in #897
  • Circumvent Broken llama.cpp Pre-Tokenizer by @lapp0 in #892
  • Add args to Jinja filters by @eitanturok in #902
  • Allow Parenthesis in STRING_INNER by @lapp0 in #899
  • Allow Objects Which are Unconstrained (No additionalProperties) in JSON Schemas by @lapp0 in #907
  • Use TQDM to track index compilation progress by @lapp0 in #915
  • Update caching and add tokenizer to create_states_mapping by @brandonwillard in #911
  • Use less problematic whitespace token by @lapp0 in #916
  • Enable Tuples / prefixItems in build_regex_from_schema() by @lapp0 in #912
  • Fix invalid regex in unconstrained arrays for json_schema.py by @lapp0 in #919
  • Allow json schema of {}, resulting in unconstrained json value by @lapp0 in #914
  • Fix llamacpp caching by making LlamaCppTokenizer an outlines Tokenizer by @lapp0 in #929
  • Fix Missing pyproject.toml Deps, Breaking Release PyPi Workflow & Add Build Wheel / SDist Check to PR Workflow by @lapp0 in #938
  • Introduce PR Benchmark Workflow by @lapp0 in #903
  • Add Documentation on Outlines Versioning and Releases by @lapp0 in #940
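Several of the JSON Schema changes above (#912's prefixItems support in particular) come down to compiling a schema into a regex over the raw output text. As a hypothetical sketch: a schema like `{"prefixItems": [{"type": "integer"}, {"type": "boolean"}]}` constrains the output to a two-element JSON array, roughly as below. The pattern is an illustrative assumption, not what build_regex_from_schema() actually emits.

```python
import json
import re

# Illustrative regex for a two-element tuple [integer, boolean] described
# by a JSON Schema's prefixItems; outlines' generated pattern differs.
tuple_pattern = re.compile(r'\[\s*-?\d+\s*,\s*(true|false)\s*\]$')

text = "[42, true]"
assert tuple_pattern.match(text)            # shape matches the schema
assert json.loads(text) == [42, True]       # and parses as valid JSON
assert not tuple_pattern.match('["x", true]')  # wrong type in slot 0
```

Constraining generation with such a pattern guarantees that the model's output both matches the tuple shape and parses as valid JSON.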

Full Changelog: 0.0.42...0.0.43