From 2604d3c47ba8073c8e7916c2c6b79824110d4bd0 Mon Sep 17 00:00:00 2001
From: josc146
Date: Mon, 11 Mar 2024 19:07:08 +0800
Subject: [PATCH] release v1.7.3

---
 CURRENT_CHANGE.md | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/CURRENT_CHANGE.md b/CURRENT_CHANGE.md
index eff3767d..c486eabe 100644
--- a/CURRENT_CHANGE.md
+++ b/CURRENT_CHANGE.md
@@ -2,16 +2,24 @@
 
 ### Features
 
-- allow setting tokenChunkSize of WebGPU mode
-- expose global_penalty
+- add Docker support (#291) @LonghronShen
+
+### Fixes
+
+- fix a generation exception caused by a potentially dangerous regex being passed into the stop array
+- fix the max_tokens parameter of the Chat page not being passed to the backend
+- fix the issue where penalty_decay and global_penalty are not being passed to the backend default config when running
+  the model through the client
 
 ### Improvements
 
-- improve parameters controllable range
+- prevent the 'torch' has no attribute 'cuda' error in torch_gc, so users can use CPU or WebGPU (#302)
 
 ### Chores
 
-- update defaultModelConfigs
+- bump dependencies
+- add pre-release workflow
+- dep_check.py now ignores GPUtil
 
 ## Install
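Note on the stop-array fix: the exception arose because user-supplied stop strings ended up interpreted as regular expressions, so a token containing regex metacharacters could fail at compile time mid-generation. The sketch below shows the standard defensive technique of escaping the strings before compiling them into one alternation; it is an illustrative assumption, not the verbatim RWKV-Runner code, and the helper name `build_stop_pattern` is hypothetical.

```python
import re
from typing import Optional


def build_stop_pattern(stop: list[str]) -> Optional[re.Pattern]:
    """Hypothetical helper: compile user stop strings into one safe pattern."""
    if not stop:
        return None
    # re.escape neutralises metacharacters such as '(', '*', or '[', so an
    # unbalanced token like '(unclosed' cannot raise re.error during generation.
    return re.compile("|".join(re.escape(s) for s in stop))


# Usage: truncate generated text at the earliest stop match.
pattern = build_stop_pattern(["###", "(unclosed", "\n\nUser:"])
text = "Some answer.\n\nUser: next question"
if pattern:
    match = pattern.search(text)
    if match:
        text = text[: match.start()]
print(text)  # -> "Some answer."
```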
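Note on the torch_gc improvement: the `'torch' has no attribute 'cuda'` error appears when a CPU-only or WebGPU deployment reaches an unconditional `torch.cuda` call during cleanup. A minimal sketch of the kind of guard the changelog entry implies — an assumption for illustration, as the actual RWKV-Runner implementation may differ:

```python
import gc


def torch_gc():
    """Hypothetical guard; the real torch_gc in the repo may differ."""
    gc.collect()  # always reclaim Python-level garbage first
    try:
        import torch
    except ImportError:
        return  # torch absent entirely, e.g. a pure WebGPU deployment
    # Guard the attribute access: CPU-only or stripped-down torch builds can
    # lack a usable `cuda` submodule, which is what raised the original error.
    if hasattr(torch, "cuda") and torch.cuda.is_available():
        torch.cuda.empty_cache()  # release cached GPU allocator blocks
        torch.cuda.ipc_collect()  # clean up CUDA IPC shared-memory handles
```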