
Releases: devoxx/DevoxxGenieIDEAPlugin

v0.2.16

02 Sep 09:45
  • Feat #245 : Always show execution time
  • Feat #242 : Add LMStudio Model Selection
  • Fix #251 : Run the LMStudio check every time the LLM provider is changed
  • Feat #234 : Reuse the LMStudio token usage in the response
  • Fix #249 : Token cost calculation shows consistent results after switching projects
  • Feat #256 : "Shift+Enter" submits the prompt
  • Feat #263 : Clear the prompt when the response is returned
  • Feat #261 : Support deepseek.com as an LLM provider (see the sketch below)
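
DeepSeek's chat API is OpenAI-compatible, so a direct call looks like any OpenAI-style request. A minimal sketch in Python (the base URL and `deepseek-chat` model name follow DeepSeek's public documentation; the API key is read from an environment variable of your choosing):

```python
import os
import requests

# DeepSeek's OpenAI-compatible chat completions endpoint.
resp = requests.post(
    "https://api.deepseek.com/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['DEEPSEEK_API_KEY']}"},
    json={
        "model": "deepseek-chat",
        "messages": [{"role": "user", "content": "Explain this stack trace."}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```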

v0.2.15

21 Aug 14:20
  • Feat #219 : Mention how many files are used when calculating total tokens
  • Feat #221 : Add multiple selected files using right-click
  • Fix #232 : "Add full project to prompt" doesn't include the attached project content tokens in calculation
  • Feat #228 : Show execution time even when no token usage is provided


v0.2.14

17 Aug 15:01

Fix #217 : Prompting local LLMs throws an exception

v0.2.13

16 Aug 10:35
  • Feat #209 : Upgraded to LangChain4j 0.33.0
  • Fix #211 : Class initialization must not depend on services
  • Feat #213 : Show input/output tokens and cost per request in the footer of the response


v0.2.12

14 Aug 07:03
  • Fix #203 : Google WebSearch is broken
  • Feat #199 : Show execution time of prompt enhancement
  • Fix : Correct mapping on the Token, Cost and Context Window settings page
  • Fix #202 : Update the Gradle IntelliJ build file so the plugin can be installed on other IntelliJ products

v0.2.10

05 Aug 07:38
  • Fix #184 : Increase the input panel's minimum/preferred height
  • Feat #186 : Support the local llama.cpp HTTP server (see the sketch after this list)
  • Feat #191 : Add Google model gemini-1.5-pro-exp-0801
  • Fix #181 : Last selected LLM provider was no longer persisted; fixed by @mydeveloperplanet
  • Feat #181 : Support multiple projects with different LLM providers & language models
  • Fix #190 : Scroll the output panel to the bottom when new output is added
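
For the llama.cpp HTTP server item above, here is a minimal sketch of what a request to it looks like, assuming the server was started along the lines of `llama-server -m model.gguf --port 8080` and is serving its OpenAI-compatible chat endpoint (host, port and prompt are placeholders, not values taken from this release):

```python
import requests

# llama.cpp's bundled HTTP server exposes an OpenAI-compatible chat endpoint;
# localhost:8080 matches its common defaults but is an assumption here.
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={"messages": [{"role": "user", "content": "Summarize this class."}]},
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```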

v0.2.9

26 Jul 17:15

Fix #183 : Allow a remote Ollama instance to be used
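
A remote instance is just Ollama's normal HTTP API reached on another host. A minimal sketch using Ollama's documented /api/generate endpoint (the host address and model name below are placeholders):

```python
import requests

# Ollama listens on port 11434 by default; point the URL at the remote machine.
OLLAMA_URL = "http://192.168.1.50:11434"

resp = requests.post(
    f"{OLLAMA_URL}/api/generate",
    json={"model": "llama3.1", "prompt": "Explain this regex.", "stream": False},
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])  # non-streaming responses carry the full text here
```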

v0.2.8

24 Jul 18:09

Support for Exo, which allows you to run a local LLM cluster with Llama 3.1 (8B, 70B and 405B) on your Apple Silicon computers (see the sketch below).
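
Exo advertises a ChatGPT-compatible API for the cluster, so querying it should look roughly like any OpenAI-style call. A sketch under that assumption; the host, port and model identifier here are guesses, not values confirmed by this release:

```python
import requests

# Assumed Exo endpoint on the cluster head node; adjust host/port to your setup.
resp = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "llama-3.1-8b",  # hypothetical id; use the size your cluster runs
        "messages": [{"role": "user", "content": "Hello from DevoxxGenie"}],
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```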


v0.2.7

23 Jul 17:52
  • Show the context window for downloaded Ollama models
  • Also allow token calculation + "Add full project" for Ollama models 🔥


v0.2.6

22 Jul 09:48
  • Renamed the Gemini LLM provider to Google
  • Increased the Gemini 1.5 Pro context window to 2M tokens
  • Sort LLM providers and model names alphabetically in the combobox
  • Refactored LLM cost calculation