
Commit

Merge remote-tracking branch 'origin/main' into feature_add_modelclient_notebook
fm1320 committed Nov 26, 2024
2 parents c35b5aa + 53f537c commit fcf85aa
Showing 72 changed files with 11,879 additions and 10,171 deletions.
7 changes: 2 additions & 5 deletions .github/ISSUE_TEMPLATE/1_bug_report.yaml
@@ -1,6 +1,6 @@
name: Report a bug
description: Any errors that you encounter.
labels: ['needs triage', 'bug']
labels: ['bug']
body:
- type: markdown
attributes:
@@ -71,10 +71,7 @@ body:
Please provide details about your environment, including the following:
- OS (e.g., Linux, Windows, macOS)
value: |
<details>
<summary>Current environment</summary>
</details>
- OS: [e.g., Linux, Windows, macOS]
validations:
required: false

2 changes: 1 addition & 1 deletion .github/ISSUE_TEMPLATE/2_suggest_improvement.yaml
@@ -1,6 +1,6 @@
name: Improvement suggestion
description: Suggest an improvement, a code refactor, or deprecation
labels: ['needs triage', 'refactor']
labels: ['[adalflow] improvement']
body:
- type: textarea
attributes:
2 changes: 1 addition & 1 deletion .github/ISSUE_TEMPLATE/3_feature_request.yaml
@@ -1,6 +1,6 @@
name: Feature request
description: Propose a feature for this project
labels: ["needs triage", "feature"]
labels: ["[adalflow] new feature request"]
body:
- type: textarea
attributes:
2 changes: 1 addition & 1 deletion .github/ISSUE_TEMPLATE/4_documenting.yaml
@@ -1,6 +1,6 @@
name: Typos and doc fixes
description: Tell us how we can improve our documentation and Google Colab/ipynb notebooks.
labels: ["needs triage", "docs"]
labels: ["documentation"]
body:
- type: textarea
attributes:
4 changes: 2 additions & 2 deletions .github/ISSUE_TEMPLATE/5_suggest_integration.yaml
@@ -1,6 +1,6 @@
name: Feature request
name: New integration proposal
description: Propose a new integration for this project, such as a db, retriever, or model_client. We highly recommend finding a POC from the provider team to work together on this.
labels: ['needs triage', 'feature']
labels: ['[adalflow] integration']
body:
- type: textarea
attributes:
32 changes: 32 additions & 0 deletions .github/ISSUE_TEMPLATE/6_suggest_usecases_benchmarks.yaml
@@ -0,0 +1,32 @@
name: Suggest use cases and benchmarks
description: Propose new use cases that AdalFlow should support or benchmarks that we should compare against
labels: ["new use cases/benchmarks"]
body:
- type: textarea
attributes:
label: Description & Motivation
description: A clear and concise description of the new use case or benchmark proposal
placeholder: |
Please outline the motivation for the proposal.
- type: textarea
attributes:
label: Pitch
description: A clear and concise description of what you want to happen.
validations:
required: false

- type: textarea
attributes:
label: Alternatives
description: A clear and concise description of any alternative solutions or features you've considered, if any.
validations:
required: false

- type: textarea
attributes:
label: Additional context
description: Add any other context or screenshots about the feature request here.
validations:
required: false
3 changes: 3 additions & 0 deletions .github/ISSUE_TEMPLATE/config.yml
@@ -1,5 +1,8 @@
blank_issues_enabled: false
contact_links:
- name: 👍 Upvote an issue
url: https://github.com/SylphAI-Inc/AdalFlow/issues
about: You should upvote an issue if it is important to you.
- name: 💬 Chat with us
url: https://discord.gg/ezzszrRZvT
about: Live chat with experts, engineers, and users in our Discord community.
38 changes: 38 additions & 0 deletions .github/PULL_REQUEST_TEMPLATE.md
@@ -0,0 +1,38 @@
## What does this PR do?

<!--
Please include a summary of the change and which issue is fixed.
Please also include relevant motivation and context.
List any dependencies that are required for this change.
If we didn't discuss your PR in a GitHub issue, there's a high chance it will not be merged.
The following line links the related issue to the PR (https://docs.github.com/en/free-pro-team@latest/github/managing-your-work-on-github/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword)
-->

Fixes #\<issue_number>

<!-- Does your PR introduce any breaking changes? If yes, please list them. -->

<details>
<summary><b>Before submitting</b></summary>

- [ ] Was this **discussed/agreed** via a GitHub issue? (not for typos and docs)
- [ ] Did you read the [contributor guideline](https://adalflow.sylph.ai/contributor/index.html)?
- [ ] Did you make sure your **PR does only one thing**, instead of bundling different changes together?
- [ ] Did you make sure to **update the documentation** with your changes? (if necessary)
- [ ] Did you write any **new necessary tests**? (not for typos and docs)
- [ ] Did you verify new and **existing tests pass** locally with your changes?
- [ ] Did you list all the **breaking changes** introduced by this pull request?


</details>


<!--
Did you have fun?
Make sure you had fun coding 🙃
-->
13 changes: 12 additions & 1 deletion .pre-commit-config.yaml
@@ -14,13 +14,24 @@ repos:
hooks:
- id: black
args: ['--line-length=88']
exclude: ^docs/|.*\.(json|yaml|md|txt)$

- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.4.2
hooks:
# Run the linter.
- id: ruff
args: ['--fix', '--extend-ignore=E402']
args: ['--fix']
exclude: ^docs/|.*\.(json|yaml|md|txt)$

# Add local hooks to run custom commands
- repo: local
hooks:
- id: run-make-format
name: Run Make Format
entry: make format
language: system
pass_filenames: false
# - repo: https://github.com/pycqa/flake8
# rev: 4.0.1
# hooks:
51 changes: 51 additions & 0 deletions Makefile
@@ -0,0 +1,51 @@
# Define variables for common directories and commands
PYTHON = poetry run
SRC_DIR = .

# Default target: Show help
.PHONY: help
help:
@echo "Available targets:"
@echo " setup Install dependencies and set up pre-commit hooks"
@echo " format Run Black and Ruff to format the code"
@echo " lint Run Ruff to check code quality"
@echo " test Run tests with pytest"
@echo " precommit Run pre-commit hooks on all files"
@echo " clean Clean up temporary files and build artifacts"

# Install dependencies and set up pre-commit hooks
.PHONY: setup
setup:
poetry install
poetry run pre-commit install

# Format code using Black and Ruff
.PHONY: format
format:
$(PYTHON) black $(SRC_DIR)
git ls-files | xargs pre-commit run black --files

# Run lint checks using Ruff
.PHONY: lint
lint:
$(PYTHON) ruff check $(SRC_DIR)

# Run all pre-commit hooks on all files
.PHONY: precommit
precommit:
$(PYTHON) pre-commit run --all-files

# Run tests
.PHONY: test
test:
$(PYTHON) pytest

# Clean up temporary files and build artifacts
.PHONY: clean
clean:
rm -rf .pytest_cache
rm -rf .mypy_cache
rm -rf __pycache__
rm -rf build dist *.egg-info
find . -type d -name "__pycache__" -exec rm -r {} +
find . -type f -name "*.pyc" -delete
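
A typical local workflow with these targets (assuming Poetry and the project's dev dependencies are installed) might look like:

```bash
# One-time setup: install dependencies and the pre-commit hooks
make setup

# Before committing: format, lint, and run the test suite
make format
make lint
make test

# Optionally run every pre-commit hook across the repository
make precommit
```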
39 changes: 30 additions & 9 deletions README.md
@@ -76,8 +76,21 @@ For AI researchers, product teams, and software engineers who want to learn the



# Quick Start


Install AdalFlow with pip:

```bash
pip install adalflow
```

Please refer to the [full installation guide](https://adalflow.sylph.ai/get_started/installation.html) for more details.


* Try the [Building Quickstart](https://colab.research.google.com/drive/1TKw_JHE42Z_AWo8UuRYZCO2iuMgyslTZ?usp=sharing) in Colab to see how AdalFlow builds task pipelines, including chatbots, RAG, agents, and structured output.
* Try the [Optimization Quickstart](https://colab.research.google.com/github/SylphAI-Inc/AdalFlow/blob/main/notebooks/qas/adalflow_object_count_auto_optimization.ipynb) to see how AdalFlow can optimize the task pipeline.


# Why AdalFlow

@@ -111,6 +124,8 @@ Here is an optimization demonstration on a text classification task:

Among all libraries, AdalFlow achieved the highest accuracy with manual prompting (starting at 82%) and the highest accuracy after optimization.



Further reading: [Optimize Classification](https://adalflow.sylph.ai/use_cases/classification.html)

## Light, Modular, and Model-Agnostic Task Pipeline
@@ -127,6 +142,14 @@ You have full control over the prompt template, the model you use, and the outpu
<img src="https://raw.githubusercontent.com/SylphAI-Inc/LightRAG/main/docs/source/_static/images/AdalFlow_task_pipeline.png" alt="AdalFlow Task Pipeline">
</p>

Many providers and models are accessible via the same interface:

<p align="center">
<img src="https://raw.githubusercontent.com/SylphAI-Inc/LightRAG/main/docs/source/_static/images/multi-providers.png" alt="AdalFlow Model Providers">
</p>

[All available model providers](https://adalflow.sylph.ai/apis/components/components.model_client.html)
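
As a minimal sketch of what the shared interface looks like (assumptions: the `Generator` and model-client imports below, the default prompt template's `input_str` key, and API keys provided via environment variables):

```python
from adalflow.core import Generator
from adalflow.components.model_client import AnthropicAPIClient, OpenAIClient

# Assumes OPENAI_API_KEY / ANTHROPIC_API_KEY are set in the environment.
# The Generator interface stays the same; only model_client and model_kwargs change.
gpt_qa = Generator(
    model_client=OpenAIClient(),
    model_kwargs={"model": "gpt-3.5-turbo"},
)
claude_qa = Generator(
    model_client=AnthropicAPIClient(),
    model_kwargs={"model": "claude-3-opus-20240229", "max_tokens": 512},
)

question = {"input_str": "What is AdalFlow?"}
print(gpt_qa(prompt_kwargs=question).data)
print(claude_qa(prompt_kwargs=question).data)
```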


<!-- LLMs are like water; they can be shaped into anything, from GenAI applications such as chatbots, translation, summarization, code generation, and autonomous agents to classical NLP tasks like text classification and named entity recognition. They interact with the world beyond the model’s internal knowledge via retrievers, memory, and tools (function calls). Each use case is unique in its data, business logic, and user experience.
@@ -192,15 +215,6 @@ Just define it as a ``Parameter`` and pass it to AdalFlow's ``Generator``.

</p>

# Quick Install

Install AdalFlow with pip:

```bash
pip install adalflow
```

Please refer to the [full installation guide](https://adalflow.sylph.ai/get_started/installation.html) for more details.



@@ -224,6 +238,13 @@ AdalFlow full documentation available at [adalflow.sylph.ai](https://adalflow.sy

AdalFlow is named in honor of [Ada Lovelace](https://en.wikipedia.org/wiki/Ada_Lovelace), the pioneering female mathematician who first recognized that machines could go beyond mere calculations. As a team led by a female founder, we aim to inspire more women to pursue careers in AI.

# Community & Contributors

AdalFlow is a community-driven project, and we welcome everyone to join us in building the future of LLM applications.

Join our [Discord](https://discord.gg/ezzszrRZvT) community to ask questions, share your projects, and get updates on AdalFlow.

To contribute, please read our [Contributor Guide](https://adalflow.sylph.ai/contributor/index.html).

# Contributors

3 changes: 0 additions & 3 deletions SETUP.md

This file was deleted.

5 changes: 5 additions & 0 deletions adalflow/CHANGELOG.md
@@ -1,3 +1,8 @@

## [0.2.6] - 2024-11-25
### Improved
- Add a default `max_tokens=512` to the `AnthropicAPIClient` to avoid an error when the user does not provide `max_tokens` in `model_kwargs`.

## [0.2.5] - 2024-10-28

### Fixed
15 changes: 14 additions & 1 deletion adalflow/PACKAGING.md
@@ -1,4 +1,4 @@
#Poetry Packaging Guide
# Poetry Packaging Guide
## Development

To install optional dependencies, use the following command:
@@ -27,3 +27,16 @@ Better to use a colab to update the whl file and test the installation.
```bash
pip install "dist/adalflow-0.1.0b1-py3-none-any.whl[openai,groq,faiss]"
```


## Update the version

1. Update the version in `pyproject.toml`
2. Add the version number in `adalflow/__init__.py`
3. Build the package
4. Test the package locally
5. Push the changes to the repository
6. Be sure to run `poetry lock --no-update` in the root directory (project level) to update the lock file for other directories such as `tutorials`, `use_cases`, `benchmarks`, etc.
7. Update the `CHANGELOG.md` file with the new version number and the changes made in the new version.
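
As a rough sketch, the steps above might translate into the following shell commands (the version number, extras, and paths are illustrative and depend on your local setup):

```bash
# Steps 1-2: bump the version in pyproject.toml and adalflow/__init__.py by hand, then:
cd adalflow
poetry build    # step 3: build the sdist and wheel into dist/

# Step 4: test the wheel locally (version and extras are illustrative)
pip install "dist/adalflow-0.2.6-py3-none-any.whl[openai,groq,faiss]"

# Step 6: from the project root, refresh the lock file used by tutorials/, use_cases/, benchmarks/, etc.
cd ..
poetry lock --no-update
```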

## TODO: we need to automate the version update process. Help is appreciated.
2 changes: 1 addition & 1 deletion adalflow/adalflow/__init__.py
@@ -1,4 +1,4 @@
__version__ = "0.2.5"
__version__ = "0.2.6"

from adalflow.core.component import Component, fun_to_component
from adalflow.core.container import Sequential
13 changes: 10 additions & 3 deletions adalflow/adalflow/components/model_client/anthropic_client.py
@@ -15,7 +15,8 @@
anthropic = safe_import(
OptionalPackages.ANTHROPIC.value[0], OptionalPackages.ANTHROPIC.value[1]
)
import anthropic

# import anthropic
from anthropic import (
RateLimitError,
APITimeoutError,
@@ -43,7 +44,10 @@ class AnthropicAPIClient(ModelClient):
Visit https://docs.anthropic.com/en/docs/intro-to-claude for more api details.
Ensure "max_tokens" are set.
Note:
As the Anthropic API requires users to set max_tokens, we set a default value of 512 for max_tokens.
You can override this value by passing max_tokens in model_kwargs.
Reference: 8/1/2024
- https://docs.anthropic.com/en/docs/about-claude/models
@@ -63,6 +67,7 @@ def __init__(
self.chat_completion_parser = (
chat_completion_parser or get_first_message_content
)
self.default_max_tokens = 512

def init_sync_client(self):
api_key = self._api_key or os.getenv("ANTHROPIC_API_KEY")
@@ -115,6 +120,8 @@ def convert_inputs_to_api_kwargs(
api_kwargs["messages"] = [
{"role": "user", "content": input},
]
if "max_tokens" not in api_kwargs:
api_kwargs["max_tokens"] = self.default_max_tokens
# if input and input != "":
# api_kwargs["system"] = input
else:
@@ -167,4 +174,4 @@ async def acall(
elif model_type == ModelType.LLM:
return await self.async_client.messages.create(**api_kwargs)
else:
raise ValueError(f"model_type {model_type} is not supported")
raise ValueError(f"model_type {model_type} is not supported")
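
With this change, `max_tokens` defaults to 512 when it is not supplied. A minimal sketch of overriding the default via `model_kwargs` (assuming the `Generator`/`AnthropicAPIClient` imports shown above and an `ANTHROPIC_API_KEY` in the environment):

```python
from adalflow.core import Generator
from adalflow.components.model_client import AnthropicAPIClient

# max_tokens passed here overrides the client's new default of 512;
# omit it and the client fills in 512 automatically.
generator = Generator(
    model_client=AnthropicAPIClient(),
    model_kwargs={"model": "claude-3-opus-20240229", "max_tokens": 1024},
)
output = generator(prompt_kwargs={"input_str": "Summarize AdalFlow in one sentence."})
print(output.data)
```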