- Support for the OpenAI models in Azure
- Added `--show-repo-map`
- Improved output when retrying connections to the OpenAI API
- Redacted api key from `--verbose` output
- Bugfix: recognize and add files in subdirectories mentioned by user or GPT
- Benchmarked at 53.8% for gpt-3.5-turbo/whole
- Added `--dark-mode` and `--light-mode` to select colors optimized for terminal background
- Install docs link to NeoVim plugin by @joshuavial
- Reorganized the `--help` output
- Bugfix/improvement to the whole edit format, which may improve code editing for GPT-3.5
- Bugfix and tests around git filenames with unicode characters
- Bugfix so that aider throws an exception when OpenAI returns InvalidRequest
- Bugfix/improvement to /add and /drop to recurse selected directories
- Bugfix for live diff output when using "whole" edit format
- Disabled general availability of gpt-4 (it's rolling out, not 100% available yet)
- Ask to create a git repo if none found, to better track GPT's code changes
- Glob wildcards are now supported in `/add` and `/drop` commands
- Pass `--encoding` into ctags, require it to return `utf-8`
- More robust handling of filepaths, to avoid 8.3 windows filenames
- Added FAQ
- Marked GPT-4 as generally available
- Bugfix for live diffs of whole coder with missing filenames
- Bugfix for chats with multiple files
- Bugfix in editblock coder prompt
- Benchmark comparing code editing in GPT-3.5 and GPT-4
- Improved Windows support:
  - Fixed bugs related to path separators in Windows
  - Added a CI step to run all tests on Windows
- Improved handling of Unicode encoding/decoding
  - Explicitly read/write text files with utf-8 encoding by default (mainly benefits Windows)
  - Added `--encoding` switch to specify another encoding
  - Gracefully handle decoding errors (see the sketch after this list)
- Added `--code-theme` switch to control the pygments styling of code blocks (by @kwmiebach)
- Better status messages explaining the reason when ctags is disabled
- Fixed a bug to allow aider to edit files that contain triple backtick fences.
- Fixed a bug in the display of streaming diffs in GPT-3.5 chats
- Graceful handling of context window exhaustion, including helpful tips.
- Added `--message` to give GPT that one instruction and then exit after it replies and any edits are performed.
- Added `--no-stream` to disable streaming GPT responses.
  - Non-streaming responses include token usage info.
  - Enables display of cost info based on OpenAI advertised pricing.
- Coding competence benchmarking tool against a suite of programming tasks based on Exercism's python repo.
- Major refactor in preparation for supporting new function calls api.
- Initial implementation of a function based code editing backend for 3.5.
- Initial experiments show that using functions makes 3.5 less competent at coding.
- Limit automatic retries when GPT returns a malformed edit response.
- Support for `gpt-3.5-turbo-16k`, and all OpenAI chat models
- Improved ability to correct when gpt-4 omits leading whitespace in code edits
- Added `--openai-api-base` to support API proxies, etc.
- Added support for `gpt-3.5-turbo` and `gpt-4-32k`.
- Added `--map-tokens` to set a token budget for the repo map, along with a PageRank based algorithm for prioritizing which files and identifiers to include in the map (see the sketch after this list).
- Added in-chat command `/tokens` to report on context window token usage.
- Added in-chat command `/clear` to clear the conversation history.
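
As a footnote to the Unicode handling items above: the utf-8-by-default reads with a graceful fallback on decode errors can be pictured with a minimal sketch like the one below. It is illustrative only, not aider's actual code; the `read_text_safely` helper is a hypothetical name used just for this example.

```python
# Minimal sketch of utf-8-by-default reading with graceful decode-error
# handling; illustrative only, not aider's implementation.
from pathlib import Path


def read_text_safely(path, encoding="utf-8"):
    """Read a text file, defaulting to utf-8.

    If the bytes cannot be decoded, substitute the offending characters
    instead of raising, so one bad file does not crash the chat session.
    """
    data = Path(path).read_bytes()
    try:
        return data.decode(encoding)
    except UnicodeDecodeError:
        return data.decode(encoding, errors="replace")
```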
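
The `--map-tokens` entry above mentions a PageRank based prioritization of files and identifiers for the repo map. A rough sketch of that general idea is shown below, assuming networkx as a dependency and simple dictionary inputs; it is not aider's implementation, and the `rank_files` helper and its parameters are made up for illustration.

```python
# Sketch of PageRank-style prioritization: link each referencing file to
# the files that define the identifiers it uses, rank with PageRank, then
# keep the top-ranked files that fit the token budget. Illustrative only.
import networkx as nx


def rank_files(definitions, references, tokens_per_file, token_budget):
    """definitions: {identifier: file that defines it}
    references: {file: [identifiers it references]}
    tokens_per_file: {file: token cost of its map entry}
    """
    graph = nx.DiGraph()
    for ref_file, idents in references.items():
        for ident in idents:
            def_file = definitions.get(ident)
            if def_file and def_file != ref_file:
                # Heavily referenced files accumulate rank.
                graph.add_edge(ref_file, def_file)

    ranked = sorted(nx.pagerank(graph).items(), key=lambda kv: kv[1], reverse=True)

    selected, used = [], 0
    for path, _score in ranked:
        cost = tokens_per_file.get(path, 0)
        if used + cost <= token_budget:
            selected.append(path)
            used += cost
    return selected
```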