Pull requests: BerriAI/litellm
- (feat) added experimental guidance function calling (#1258, opened Dec 27, 2023 by danikhan632)
- fix(utils.py): support complete_response=true for text completion streaming (#1358, opened Jan 8, 2024 by krrishdholakia)
- fix(main.py): Correctly route to /completions (if supported) when called for openai-compatible endpoints (#2595, opened Mar 20, 2024 by krrishdholakia)
- feat(main.py): support calling text completion endpoint for openai compatible providers (#2709, opened Mar 27, 2024 by krrishdholakia)
- fix: fix embedding response to return pydantic object (#2784, opened Apr 1, 2024 by krrishdholakia)
- The Spark API supports the completion method from the Litellm (#3058, opened Apr 16, 2024 by bwl0211)
- fix(main.py): support 'custom_llm_provider' in acompletion (#3121, opened Apr 18, 2024 by krrishdholakia)
- fix(router.py): check cache hits before making router.completion calls (#3227, opened Apr 22, 2024 by krrishdholakia)
- fix(main.py): use model_api_key determined from get_api_key (#3348, opened Apr 29, 2024 by nobu007)
- fix(router.py): fix default cooldown time to be 60s (#3529, opened May 8, 2024 by krrishdholakia)
- [Optimize] Optimize the code for remove time complexity in llms bedro… (#3665, opened May 16, 2024 by rkataria1000)
- fix(http_handler.py): fix async client ssl verify (#3985, opened Jun 3, 2024 by krrishdholakia)
- Added type hints for model_list parameter in RouterConfig (#4074, opened Jun 8, 2024 by AliZeeshan998)
- fix(parallel_request_limiter.py): support spend tracking caching across multiple instances (#4396, opened Jun 25, 2024 by krrishdholakia)