Releases: pkelaita/l2m2

v0.0.39

17 Dec 23:24
4e5652b

0.0.39 - December 17, 2024

Caution

This release has breaking changes! Please read the changelog carefully.

Added

  • Support for OpenAI's o1, o1-preview, and o1-mini.

Note

At the time of this release, you must be on OpenAI's usage tier 5 to use o1 and tier 1+ to use o1-preview and o1-mini.

Removed

  • gemma-7b has been removed as it has been deprecated by Groq.
  • llama-3.1-70b has been removed as it has been deprecated by both Groq and Cerebras.

v0.0.38

13 Dec 01:14
8ca3138

0.0.38 - December 12, 2024

Caution

This release has breaking changes! Please read the changelog carefully.

Added

Removed

  • Gemini 1.0 Pro is no longer supported, as it is deprecated by Google. This is a breaking change!!! Calls to Gemini 1.0 Pro will fail.

v0.0.37

10 Dec 06:14
25dec17

0.0.37 - December 9, 2024

Caution

This release has significant breaking changes! Please read the changelog carefully.

Added

  • Support for Anthropic's claude-3.5-haiku.
  • Support for provider Cerebras, offering llama-3.1-8b and llama-3.1-70b.
  • Support for Mistral's mistral-small, ministral-8b, and ministral-3b models via La Plateforme.

Changed

  • mistral-large-2 has been renamed to mistral-large, to keep up with Mistral's naming scheme. This is a breaking change!!! Calls to mistral-large-2 will fail.

Removed

  • mixtral-8x22b, mixtral-8x7b, and mistral-7b are no longer available from provider Mistral as they have been deprecated. This is a breaking change!!! Calls to mixtral-8x7b and mistral-7b will fail, and calls to mixtral-8x22b via provider Mistral will fail.

Note

The model mixtral-8x22b is still available via Groq.
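For projects that pin model ids in configuration, a small pre-flight check can surface these breaking changes before a call fails at runtime. The helper below is hypothetical (not part of l2m2) and simply encodes the rename and removals described above:

```python
# Hypothetical pre-flight check for the 0.0.37 breaking changes -- not part of l2m2.
RENAMED = {"mistral-large-2": "mistral-large"}
REMOVED_FROM_MISTRAL = {"mixtral-8x22b", "mixtral-8x7b", "mistral-7b"}


def check_model(model_id: str, provider: str) -> str:
    """Return a usable model id, or raise if the id was removed for this provider."""
    if model_id in RENAMED:
        return RENAMED[model_id]
    if provider == "mistral" and model_id in REMOVED_FROM_MISTRAL:
        if model_id == "mixtral-8x22b":
            raise ValueError("mixtral-8x22b was removed from Mistral; it is still available via Groq")
        raise ValueError(f"{model_id} was removed in l2m2 0.0.37")
    return model_id
```

For example, `check_model("mistral-large-2", "mistral")` maps to the new `mistral-large` id, while `check_model("mixtral-8x22b", "groq")` passes through unchanged since Groq still offers that model.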

v0.0.36

21 Nov 19:27
9bbb2bf

0.0.36 - November 21, 2024

Changed

  • Updated gpt-4o version from gpt-4o-2024-08-06 to gpt-4o-2024-11-20 (Announcement)

v0.0.35

23 Oct 04:20
5ed0caf

0.0.35 - October 22, 2024

Added

Changed

  • claude-3.5-sonnet now points to version claude-3-5-sonnet-latest.

v0.0.34

30 Sep 21:51
c4281af

0.0.34 - September 30, 2024

Caution

This release has breaking changes! Please read the changelog carefully.

Added

  • New supported models gemma-2-9b, llama-3.2-1b, and llama-3.2-3b via Groq.

Changed

  • In order to be more consistent with l2m2's naming scheme, the following model ids have been updated:
    • llama3-8b → llama-3-8b
    • llama3-70b → llama-3-70b
    • llama3.1-8b → llama-3.1-8b
    • llama3.1-70b → llama-3.1-70b
    • llama3.1-405b → llama-3.1-405b
  • This is a breaking change!!! Calls using the old model_ids (llama3-8b, etc.) will fail.
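For codebases with many call sites, the rename can be applied mechanically. The sketch below is a hypothetical migration helper (not part of l2m2) that maps the pre-0.0.34 ids to their new equivalents:

```python
# Hypothetical migration helper -- not part of l2m2 itself.
# Maps the pre-0.0.34 model ids to their renamed equivalents.
RENAMED_MODEL_IDS = {
    "llama3-8b": "llama-3-8b",
    "llama3-70b": "llama-3-70b",
    "llama3.1-8b": "llama-3.1-8b",
    "llama3.1-70b": "llama-3.1-70b",
    "llama3.1-405b": "llama-3.1-405b",
}


def migrate_model_id(model_id: str) -> str:
    """Return the post-0.0.34 id for a model, unchanged if it was not renamed."""
    return RENAMED_MODEL_IDS.get(model_id, model_id)
```

Ids outside the renamed set (e.g. `gpt-4o`) pass through untouched, so the helper is safe to apply across an entire configuration.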

Removed

  • Provider octoai has been removed as they have been acquired and are shutting down their cloud platform. This is a breaking change!!! Calls using the octoai provider will fail.
    • All previous OctoAI supported models (mixtral-8x22b, mixtral-8x7b, mistral-7b, llama-3-70b, llama-3.1-8b, llama-3.1-70b, and llama-3.1-405b) are still available via Mistral, Groq, and/or Replicate.

v0.0.33

11 Sep 22:14
91e36a8

0.0.33 - September 11, 2024

Changed

  • Updated gpt-4o version from gpt-4o-2024-05-13 to gpt-4o-2024-08-06.

v0.0.32

06 Aug 01:03
c28dd38

0.0.32 - August 5, 2024

Added

  • Mistral provider support via La Plateforme.

  • Mistral Large 2 model availability from Mistral.

  • Mistral 7B, Mixtral 8x7B, and Mixtral 8x22B model availability from Mistral in addition to existing providers.

Note

Versions 0.0.30 and 0.0.31 were skipped due to a packaging error and a model key typo.

v0.0.29

05 Aug 04:52
7078afd

0.0.29 - August 4, 2024

Caution

This release has breaking changes! Please read the changelog carefully.

Added

  • alt_memory and bypass_memory have been added as parameters to call and call_custom in LLMClient and AsyncLLMClient. These parameters allow you to specify alternative memory streams to use for the call, or to bypass memory entirely.

Changed

  • Previously, the LLMClient and AsyncLLMClient constructors took memory_type, memory_window_size, and memory_loading_type as arguments. Now, they take a single memory argument, while window_size and loading_type can be set on the memory object itself. These changes make the memory API far more consistent and easier to use, especially with the additions of alt_memory and bypass_memory.

Removed

  • The MemoryType enum has been removed. This is a breaking change!!! Instances of client = LLMClient(memory_type=MemoryType.CHAT) should be replaced with client = LLMClient(memory=ChatMemory()), and so on.
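The semantics of the new API can be illustrated with a toy sketch. This is NOT l2m2's actual implementation — class names mirror the changelog, but the bodies are a minimal stand-in showing how a memory object carries its own window_size and how alt_memory / bypass_memory select the stream per call:

```python
from collections import deque


class ChatMemory:
    """Toy stand-in for l2m2's ChatMemory -- window_size lives on the memory object."""

    def __init__(self, window_size=40):
        self.messages = deque(maxlen=window_size)

    def add(self, role, text):
        self.messages.append((role, text))


class LLMClient:
    """Toy stand-in for l2m2's LLMClient, illustrating the new memory parameter."""

    def __init__(self, memory=None):
        # Replaces the old memory_type=MemoryType.CHAT constructor argument.
        self.memory = memory

    def call(self, prompt, alt_memory=None, bypass_memory=False):
        # Select the memory stream for this call: none, an alternative,
        # or the client's default.
        memory = None if bypass_memory else (alt_memory or self.memory)
        context = list(memory.messages) if memory else []
        if memory is not None:
            memory.add("user", prompt)
        return f"[{len(context)} context messages] echo: {prompt}"
```

Under this shape, `client = LLMClient(memory=ChatMemory(window_size=10))` replaces the old `memory_type` and `memory_window_size` constructor arguments, and a scratch `ChatMemory()` can be passed as `alt_memory` for a one-off conversation without touching the client's default stream.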

v0.0.28

04 Aug 03:23
d317e3e
[client] add default provider activation