Releases · dottxt-ai/outlines
Outlines v0.1.5
Outlines v0.1.4
Outlines v0.1.3
Outlines v0.1.2
What's Changed
- Doc corrections by @cpfiffer in #1213
- Add transformers vision cookbook with atomic caption flow by @fearnworks in #1216
- docs: update llamacpp.md by @eltociear in #1231
- Earnings report cookbook by @cpfiffer in #1235
New Contributors
- @fearnworks made their first contribution in #1216
Full Changelog: 0.1.1...0.1.2
Outlines v0.1.1
The 0.1.0 release included a version of `outlines-core` for which wheels were not available, causing many errors for users who don't have a Rust compiler installed. We fixed this in `outlines-core`, but changes to the interface were pushed in the meantime, so we had to account for these before cutting this new release.
What's Changed
- Logits processors: Update inplace, with batch operation by @lapp0 in #1192
- Fix Broken Docs Links by @lapp0 in #1195
- use `dottxt-ai/outlines` not `outlines-dev/outlines` in mkdocs by @lapp0 in #1194
- Add docs on serving with LM Studio by @cpfiffer in #1205
- Compatibility updates for next `outlines-core` release by @lapp0 in #1204
Full Changelog: 0.1.0...0.1.1
Outlines v0.1.0
⚡ Performance Improvements
- Outlines Core: Enjoy faster FSM index construction with a new implementation (#1175).
- 98% Reduction in Runtime Overhead: Reduced overhead by storing FSM-token-mask as tensors. (#1013)
🚀 New Features
- Transformers Vision Models: Apply structured generation with vision + text inputs (#1052). A sketch follows this list.
- OpenAI-Compatible API Support: Use `models.openai` with any OpenAI-like API (e.g. vLLM, ChatGPT), including structured generation with `generate.json` and `generate.choice` (#1142). A sketch also follows this list.
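
The vision feature plugs into the regular generate API. Below is a minimal sketch, assuming a LLaVA-style checkpoint; the model name, image URL, prompt template, and regex are illustrative assumptions, not the project's prescribed usage:

```python
# A minimal sketch of structured vision + text generation; the model
# name, URL, and prompt format are illustrative assumptions.
from io import BytesIO

import requests
from PIL import Image
from transformers import LlavaNextForConditionalGeneration

from outlines import models, generate

model = models.transformers_vision(
    "llava-hf/llava-v1.6-mistral-7b-hf",
    model_class=LlavaNextForConditionalGeneration,
)

response = requests.get("https://example.com/photo.jpg")  # placeholder URL
image = Image.open(BytesIO(response.content)).convert("RGB")

# Constrain the caption to a short plain-text sentence.
describe = generate.regex(model, r"[A-Za-z0-9 ,.]{1,200}")
caption = describe("<image> Describe the photo briefly:", [image])
print(caption)
```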
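And a minimal sketch of the OpenAI-compatible path; the model name is an illustrative assumption, and the same call works against any OpenAI-like server:

```python
# A minimal sketch; the model name is illustrative. Works against
# OpenAI itself or any OpenAI-compatible server (e.g. vLLM).
from outlines import models, generate

model = models.openai("gpt-4o-mini")  # reads OPENAI_API_KEY from the env

# Structured generation via generate.choice, as introduced in #1142.
classify = generate.choice(model, ["positive", "negative"])
print(classify("Sentiment of 'I love this library':"))
```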
💡 Enhancements
- Unified Logits Processors: All models now use the shared `outlines.processors`, completed by adding llama-cpp, vLLM, and ExLlamaV2 to the integration; see the sketch after this list.
- Custom Regex Parsers: Simplify the implementation of custom Guide classes with Regex Parser support (#1039).
- Qwen-style Byte Tokenizer Support: Now compatible with Qwen-style byte tokenizers (#1153).
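
Because every backend now routes through the shared `outlines.processors`, the same structured-generation call works across llama-cpp, vLLM, and ExLlamaV2. A minimal sketch with the llama-cpp backend; the GGUF repo and filename are illustrative assumptions:

```python
# A minimal sketch of the shared processor path via the llama-cpp
# backend; the GGUF repo and filename are illustrative assumptions.
from outlines import models, generate

model = models.llamacpp("TheBloke/phi-2-GGUF", "phi-2.Q4_K_M.gguf")

# The same generate.regex call routes through outlines.processors
# regardless of which backend produced `model`.
iso_date = generate.regex(model, r"20[0-9]{2}-[0-9]{2}-[0-9]{2}")
print(iso_date("The ISO date for Christmas 2024 is "))
```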
🐛 Bug Fixes
- CFG Beta: Fixed a large number of bugs to enable a beta version of grammar-based generation using Lark (#1067); see the sketch after this list.
- Fixed incorrect argument order breaking some models in `models.transformers_vision` (#1077).
- Resolved OpenAI fallback tokenizer issue (#1046).
- Added an option to disable tqdm bars during inference with vLLM (#1004).
- `models.llamacpp` no longer includes an implicit `max_tokens` (#996).
- Fixed whitespace handling for `models.mlxlm` (#1003).
- `models.mamba` now works and supports structured generation (#1040).
- Resolved `pad_token_id` reset issue in `TransformerTokenizer` (#1068).
- Fixed `outlines.generate` generator reuse causing runtime errors (#1160).
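
A minimal sketch of the beta grammar-based generation; the model name and grammar are illustrative assumptions, and since the feature is beta, expect rough edges:

```python
# A minimal sketch of beta CFG generation with a Lark grammar;
# the model name and grammar are illustrative assumptions.
from outlines import models, generate

arithmetic_grammar = """
    ?start: expression
    ?expression: term (("+" | "-") term)*
    ?term: factor (("*" | "/") factor)*
    ?factor: NUMBER | "-" factor | "(" expression ")"
    %import common.NUMBER
"""

model = models.transformers("microsoft/Phi-3-mini-4k-instruct")
generator = generate.cfg(model, arithmetic_grammar)
print(generator("Write an arithmetic expression: "))
```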
⚠️ Breaking Changes
`outlines.integrations` is now deprecated: #1061
Full Changeset
- Add contributors, creation and update date by @rlouf in #1000
- Add SimToM prompt recipe by @kstathou in #1002
- Ensure `models.llamacpp` Doesn't Have Implicit `max_tokens` by @lapp0 in #996
- fix `models.mlxlm` whitespace prefix handling by @lapp0 in #1003
- Adding the option to avoid displaying tqdm bars at inference with `vllm` by @BIMM99 in #1004
- Update `pyproject.toml`, enable `mlx-lm` requirement only on darwin, disable `vllm` requirement on darwin by @lapp0 in #1005
- Add asknews example by @robcaulk in #1008
- Improve `outlines.processors`, add integration tests to test_generate.py by @lapp0 in #998
- Abridged version of the .txt article on Coding For Structured Generat… by @willkurt in #1012
- `RegexFSM`: Cache Legal-Token Mask as `torch.Tensor` to Improve Performance by @lapp0 in #1013
- JSON response format fix by @rshah713 in #1029
- Fix broken link in README.md regarding Serving with vLLM by @shinohara-rin in #1030
- Major bug & fix: Fix bug in batched multi sample generation by @JulesGM in #1025
- Update file contributors style for docs by @gtsiolis in #1034
- fix: ocd missing blank space by @talboren in #1036
- Adds support for custom regex parsers (for multimodal structured generation) by @leloykun in #1039
- Update `models.transformers` to use `SequenceGeneratorAdapter` and `OutlinesLogitsProcessors` by @lapp0 in #966
- Use `outlines.processors` for `models.llamacpp` by @lapp0 in #997
- Add QA with Citations example to the Cookbook by @alonsosilvaallende in #1042
- Fix mamba integration by making it a variant of `outlines.models.transformers` by @lapp0 in #1040
- fix PyPI name for autoawq by @davanstrien in #1045
- add fallback tokenizer by @JerryKwan in #1046
- Update cerebrium instructions by @milo157 in #1047
- Add Knowledge Graph Extraction example to the Cookbook by @alonsosilvaallende in #1049
- Correct link and add llama-cpp-python installation instructions by @alonsosilvaallende in #1051
- Introduce `outlines.models.transformers_vision` by @lapp0 in #1052
- Use `outlines.processors` and `SequenceGeneratorAdapter` for `outlines.models.vllm` by @lapp0 in #1053
- Modal documentation: fix deprecated `memory=` parameter by @Perdu in #1057
- Documentation: Fix failing Modal example by @Perdu in #1058
- Add links to the two examples by @alonsosilvaallende in #1062
- Add link to Multimodal Structured Generation (MMSG) library to docs by @leloykun in #1064
- Correct the documentation to disable caching by @alonsosilvaallende in #1069
- Fix TransformerTokenizer pad_token_id reset by @ispobock in #1068
- Fix link to `mamba` model reference by @rlouf in #1072
- More detailed mlxlm documentation by @lapp0 in #1074
- Add chain of thought example by @alonsosilvaallende in #1087
- Fix coverage `exclude_lines` setting for ellipsis by @brandonwillard in #1089
- Add ReAct agent example by @alonsosilvaallende in #1090
- Include SPDX license info in project metadata by @tiran in #1094
- Update Documentation by @lapp0 in #1063
- Make `model_class` required arg, default `processor_class` to `AutoProcessor` by @parkervg in #1077
- Fix details in the documentation by @rlouf in #1096
- Change cookbook examples: Download model weights in the hub cache folder by @alonsosilvaallende in #1097
- Correct variable name in chain-of-thought example by @cpfiffer in #1101
- Remove deprecated `outlines.integrations` by @rlouf in #1061
- Use relative coverage source paths by @brandonwillard in #1113
- Add missing CI matrix step by @brandonwillard in #1124
- Update modal example by @cpfiffer in #1111
- docs: fix typo by @billmetangmo in #1120
- Pass `text` and `images` as kwargs to VLM processor by @lapp0 in #1126
- Update `CFGGuide` to use `outlines.fsm.parsing`. Enable `generate.cfg` by @lapp0 in #1067
- Include hidden files in coverage CI upload by @brandonwillard in #1136
- Add documentation request issue template by @cpfiffer in #1138
- Set the tokenizer versions in `test_create_fsm_index_tokenizer` by @brandonwillard in #1139
- Change Outlines' logo by @rlouf in #1143
- Update logo size in documentation by @rlouf in #1144
- Improve sampler docs by @cpfiffer in #1141
- Update vllm.md by @americanthinker in #1137
- Correct pathways, update site color, front page fixes by @cpfiffer in #1146
- Change the color of the logo by @rlouf in #1155
- Remove Broken pyairports Package, Replace with airportsdata by @lapp0 in #1156
- Enable Tokenizers with Byte Tokens by @lapp0 in #1153
- Integrate OpenAI API Structured Generation by @lapp0 in #1142
- Remove link to Outlines twitter account by @rlouf in #1168
- Don't re-use logits processors in SequenceGeneratorAdapter, copy them by @lapp0 in #1160
- Fix benchmark workflow triggers by @brandonwillard in #1170
- Reuse jinja environment for a prompt by @jantrienes in #1162
- Use Faster FSM by @lapp0 in #1175
- Pin `outlines-core` version by @brandonwillard in #1187
- add missing comma in llamacpp docs by @cpfiffer in https://github.com/dottxt-ai/outl...
Outlines v0.0.46
What's Changed
- Adding `MLXLM`, `VLLM` classes to `LogitsGenerator` type by @parkervg in #970
- Fix samplers documentation by @jrinder42 in #980
- Ensure regex matches valid JSON for "const" and "enum" with booleans, nulls, and strings by @mwootten in #972
- Add link to docs of Multimodal Structured Generation for CVPR 2nd MMFM Challenge by @leloykun in #960
- Fix Hugging Face Hub model ID in example code by @davanstrien in #988
- Allow escaped strings in `json_schema.py` by @lapp0 in #991
- Fix use of `os.environ` in documentation by @rlouf in #993
- fix pattern-string in `json_schema.py` by removing anchors by @lapp0 in #995
- Fix Incorrect Token Normalization Method for `LlamaCppTokenizer` by @lapp0 in #992
New Contributors
- @parkervg made their first contribution in #970
- @jrinder42 made their first contribution in #980
- @mwootten made their first contribution in #972
- @davanstrien made their first contribution in #988
Full Changelog: 0.0.45...0.0.46
Outlines v0.0.45
What's Changed
- Fix some dependency issues and remove obsolete try-except block by @fpgmaas in #967
- Update Modal refs from stub to app by @kstathou in #974
- Mask cache Performance Optimization for vllm by @paul-grundmann in #939
- update README.md by @silviachen46 in #968
- Pin Numpy: `numpy<2.0.0`, Prevent `ModuleNotFoundError` by @lapp0 in #977
New Contributors
- @fpgmaas made their first contribution in #967
- @kstathou made their first contribution in #974
- @paul-grundmann made their first contribution in #939
- @silviachen46 made their first contribution in #968
Full Changelog: 0.0.44...0.0.45
Outlines v0.0.44
What's Changed
- Fix null byte `\x00` issue in byte level fsm resulting in `KeyError` in `BetterFSM::FSMInfo` by @lapp0 in #930
- Correct link for llamacpp library by @alonsosilvaallende in #949
- Add statement regarding OS vs closed models by @rlouf in #950
- Support min/max number of digits for numbers in JSON Schema by @smagnan in #932
- Fix/extend re replacement seq by @saattrupdan in #948
- Update docker ENTRYPOINT to ensure proper argument handling by @shashankmangla in #962
- Add cerebrium as deployment option in documentation by @rlouf in #963
- Add link to TGI documentation by @rlouf in #964
- Introduce `outlines.models.mlxlm` by @lapp0 in #956; see the sketch after this list
- Update the documentation for OpenAI models by @rlouf in #951
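
A minimal sketch of the new `mlxlm` backend; it requires macOS on Apple Silicon with the `mlx-lm` package installed, and the quantized model name is an illustrative assumption:

```python
# A minimal sketch; requires Apple Silicon + mlx-lm. The quantized
# model name is an illustrative assumption.
from outlines import models, generate

model = models.mlxlm("mlx-community/Meta-Llama-3-8B-Instruct-4bit")
generator = generate.text(model)
print(generator("Say hello in French:", max_tokens=20))
```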
New Contributors
- @alonsosilvaallende made their first contribution in #949
- @smagnan made their first contribution in #932
- @shashankmangla made their first contribution in #962
Full Changelog: 0.0.43...0.0.44
Outlines v0.0.43
What's Changed
- fix typo in docs by @eitanturok in #860
- fix code rendering by @eitanturok in #864
- Ignore errors caused by import warnings from `huggingface_hub` & `pyairports` by @leloykun in #866
- Fix format in the BentoML doc by @Sherlock113 in #867
- Hotfix for CFG Generation by @leloykun in #865
- Localize types by @rlouf in #868
- Add `Email` type by @eitanturok in #870
- Fix installation instructions by @eitanturok in #877
- Extract function name in `get_schema_from_signature` by @eitanturok in #878
- Remove broken final state loop by @br3no in #874
- Fixing stream stopping at wrong location by @isamu-isozaki in #898
- Prevent Illegal Look-Around for OneOf in JSONSchema by @lapp0 in #897
- Circumvent Broken llama.cpp Pre-Tokenizer by @lapp0 in #892
- Add args to Jinja filters by @eitanturok in #902
- Allow Parenthesis in `STRING_INNER` by @lapp0 in #899
- Allow Objects Which are Unconstrained (No `additionalProperties`) in JSON Schemas by @lapp0 in #907
- Use TQDM to track index compilation progress by @lapp0 in #915
- Update caching and add tokenizer to `create_states_mapping` by @brandonwillard in #911
- Use less problematic whitespace token by @lapp0 in #916
- Enable Tuples / prefixItems in `build_regex_from_schema()` by @lapp0 in #912; see the sketch after this list
- Fix invalid regex in unconstrained arrays for json_schema.py by @lapp0 in #919
- Allow json schema of `{}`, resulting in unconstrained json value by @lapp0 in #914
- Fix llamacpp caching by making `LlamaCppTokenizer` an outlines `Tokenizer` by @lapp0 in #929
- Fix Missing `pyproject.toml` Deps, Breaking `Release PyPi` Workflow & Add Build Wheel / SDist Check to PR Workflow by @lapp0 in #938
- Introduce PR Benchmark Workflow by @lapp0 in #903
- Add Documentation on Outlines Versioning and Releases by @lapp0 in #940
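
Several of the `json_schema.py` changes above can be exercised directly through the regex builder. A minimal sketch, assuming the helper lives at `outlines.fsm.json_schema` in this release series:

```python
# A minimal sketch; assumes build_regex_from_schema is importable from
# outlines.fsm.json_schema in this release series.
import json

from outlines.fsm.json_schema import build_regex_from_schema

# A tuple-typed array via prefixItems (#912): an integer then a string.
schema = json.dumps({
    "type": "array",
    "prefixItems": [{"type": "integer"}, {"type": "string"}],
})

regex = build_regex_from_schema(schema)
print(regex)  # a pattern matching e.g. [1, "a"], modulo whitespace handling
```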
New Contributors
- @eitanturok made their first contribution in #860
- @leloykun made their first contribution in #866
- @Sherlock113 made their first contribution in #867
- @br3no made their first contribution in #874
Full Changelog: 0.0.42...0.0.43