Merge branch 'master' into fadilp/fix-tongyi-api-key
eyurtsev committed Dec 5, 2023
2 parents dfc5b2a + f758c8a commit 8e1199a
Showing 163 changed files with 5,635 additions and 2,307 deletions.
23 changes: 21 additions & 2 deletions .github/CONTRIBUTING.md
@@ -72,9 +72,10 @@ tell Poetry to use the virtualenv python environment (`poetry config virtualenvs

### Core vs. Experimental

-This repository contains two separate projects:
+This repository contains three separate projects:
- `langchain`: core langchain code, abstractions, and use cases.
-- `langchain.experimental`: see the [Experimental README](https://github.com/langchain-ai/langchain/tree/master/libs/experimental/README.md) for more information.
+- `langchain_core`: contains interfaces for key abstractions as well as logic for combining them in chains (LCEL).
+- `langchain_experimental`: see the [Experimental README](https://github.com/langchain-ai/langchain/tree/master/libs/experimental/README.md) for more information.

Each of these has its own development environment. Docs are run from the top-level makefile, but development
is split across separate test & release flows.
@@ -128,6 +129,24 @@ make docker_tests

There are also [integration tests and code-coverage](https://github.com/langchain-ai/langchain/tree/master/libs/langchain/tests/README.md) available.

### Only develop langchain_core or langchain_experimental

If you are only developing `langchain_core` or `langchain_experimental`, you can simply install the dependencies for the respective projects and run tests:

```bash
cd libs/core
poetry install --with test
make test
```

Or:

```bash
cd libs/experimental
poetry install --with test
make test
```

### Formatting and Linting

Run these locally before submitting a PR; the CI system will run them as well.
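
As a sketch of what that looks like locally, assuming the `format` and `lint` targets that each library's Makefile provides:

```bash
# Sketch: pre-PR checks, run from the library you are working on.
cd libs/langchain
make format  # auto-format the codebase
make lint    # run the same linters CI runs
```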
12 changes: 5 additions & 7 deletions docs/.local_build.sh
@@ -9,17 +9,15 @@ SCRIPT_DIR="$(cd "$(dirname "$0")"; pwd)"
cd "${SCRIPT_DIR}"

mkdir -p ../_dist
-rsync -ruv . ../_dist
+rsync -ruv --exclude node_modules . ../_dist
cd ../_dist
poetry run python scripts/model_feat_table.py
poetry run nbdoc_build --srcdir docs --pause 0
-mkdir docs/templates
-cp ../templates/docs/INDEX.md docs/templates/index.md
cp ../cookbook/README.md src/pages/cookbook.mdx
cp ../.github/CONTRIBUTING.md docs/contributing.md
+mkdir -p docs/templates
+cp ../templates/docs/INDEX.md docs/templates/index.md
wget https://raw.githubusercontent.com/langchain-ai/langserve/main/README.md -O docs/langserve.md
poetry run python scripts/generate_api_reference_links.py
-yarn install
-yarn start

+yarn

+quarto preview docs
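
As a usage sketch, assuming Poetry, Yarn, and Quarto are installed and the docs dependencies are set up (the script changes into its own directory, so it can be run from the repository root):

```bash
# Sketch: build and preview the docs locally.
bash docs/.local_build.sh
```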
47 changes: 25 additions & 22 deletions docs/docs/expression_language/why.ipynb
@@ -10,15 +10,16 @@
"title: Why use LCEL\n",
"---\n",
"\n",
"import { ColumnContainer, Column } from '@theme/Columns';"
"import { ColumnContainer, Column } from \\\"@theme/Columns\\\";"
]
},
{
"cell_type": "markdown",
"id": "919a5ae2-ed21-4923-b98f-723c111bac67",
"metadata": {},
"source": [
-":::tip We recommend reading the LCEL [Get started](/docs/expression_language/get_started) section first.\n",
+":::tip \n",
+"We recommend reading the LCEL [Get started](/docs/expression_language/get_started) section first.\n",
":::"
]
},
@@ -62,11 +63,12 @@
"In the simplest case, we just want to pass in a topic string and get back a joke string:\n",
"\n",
"<ColumnContainer>\n",
"\n",
"<Column>\n",
"\n",
"#### Without LCEL\n",
"\n",
"<div style={{ zoom: \"80%\" }}>"
"<div style=\"zoom:80%\">"
]
},
{
@@ -76,6 +78,7 @@
"metadata": {},
"outputs": [],
"source": [
"\n",
"from typing import List\n",
"\n",
"import openai\n",
@@ -111,7 +114,7 @@
"\n",
"#### LCEL\n",
"\n",
"<div style={{ zoom: \"80%\" }}>"
"<div style=\"zoom:80%\">"
]
},
{
@@ -156,7 +159,7 @@
"\n",
"#### Without LCEL\n",
"\n",
"<div style={{ zoom: \"80%\" }}>"
"<div style=\"zoom:80%\">"
]
},
{
@@ -201,7 +204,7 @@
"\n",
"#### LCEL\n",
"\n",
"<div style={{ zoom: \"80%\" }}>"
"<div style=\"zoom:80%\">"
]
},
{
@@ -233,7 +236,7 @@
"\n",
"#### Without LCEL\n",
"\n",
"<div style={{ zoom: \"80%\" }}>"
"<div style=\"zoom:80%\">"
]
},
{
@@ -265,7 +268,7 @@
"\n",
"#### LCEL\n",
"\n",
"<div style={{ zoom: \"80%\" }}>"
"<div style=\"zoom:80%\">"
]
},
{
@@ -296,7 +299,7 @@
"\n",
"#### Without LCEL\n",
"\n",
"<div style={{ zoom: \"80%\" }}>"
"<div style=\"zoom:80%\">"
]
},
{
@@ -337,7 +340,7 @@
"\n",
"#### LCEL\n",
"\n",
"<div style={{ zoom: \"80%\" }}>\n",
"<div style=\"zoom:80%\">\n",
"\n",
"```python\n",
"chain.ainvoke(\"ice cream\")\n",
@@ -362,7 +365,7 @@
"\n",
"#### Without LCEL\n",
"\n",
"<div style={{ zoom: \"80%\" }}>"
"<div style=\"zoom:80%\">"
]
},
{
@@ -398,7 +401,7 @@
"\n",
"#### LCEL\n",
"\n",
"<div style={{ zoom: \"80%\" }}>"
"<div style=\"zoom:80%\">"
]
},
{
@@ -439,7 +442,7 @@
"\n",
"#### Without LCEL\n",
"\n",
"<div style={{ zoom: \"80%\" }}>"
"<div style=\"zoom:80%\">"
]
},
{
@@ -481,7 +484,7 @@
"\n",
"#### LCEL\n",
"\n",
"<div style={{ zoom: \"80%\" }}>"
"<div style=\"zoom:80%\">"
]
},
{
@@ -522,7 +525,7 @@
"\n",
"#### Without LCEL\n",
"\n",
"<div style={{ zoom: \"80%\" }}>"
"<div style=\"zoom:80%\">"
]
},
{
@@ -607,7 +610,7 @@
"\n",
"#### With LCEL\n",
"\n",
"<div style={{ zoom: \"80%\" }}>"
"<div style=\"zoom:80%\">"
]
},
{
@@ -677,7 +680,7 @@
"\n",
"We'll `print` intermediate steps for illustrative purposes\n",
"\n",
"<div style={{ zoom: \"80%\" }}>"
"<div style=\"zoom:80%\">"
]
},
{
@@ -711,7 +714,7 @@
"#### LCEL\n",
"Every component has built-in integrations with LangSmith. If we set the following two environment variables, all chain traces are logged to LangSmith.\n",
"\n",
"<div style={{ zoom: \"80%\" }}>"
"<div style=\"zoom:80%\">"
]
},
{
@@ -757,7 +760,7 @@
"#### Without LCEL\n",
"\n",
"\n",
"<div style={{ zoom: \"80%\" }}>"
"<div style=\"zoom:80%\">"
]
},
{
@@ -804,7 +807,7 @@
"\n",
"#### LCEL\n",
"\n",
"<div style={{ zoom: \"80%\" }}>"
"<div style=\"zoom:80%\">"
]
},
{
@@ -845,7 +848,7 @@
"\n",
"#### Without LCEL\n",
"\n",
"<div style={{ zoom: \"80%\" }}>"
"<div style=\"zoom:80%\">"
]
},
{
@@ -1029,7 +1032,7 @@
"\n",
"#### LCEL\n",
"\n",
"<div style={{ zoom: \"80%\" }}>"
"<div style=\"zoom:80%\">"
]
},
{
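
Since most code cells above are collapsed in this view, here is a minimal sketch of the kind of LCEL chain the notebook compares against the hand-rolled version, assuming the import paths available around this commit and an `OPENAI_API_KEY` in the environment:

```python
# A minimal LCEL chain: prompt -> chat model -> string output parser.
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_template("Tell me a short joke about {topic}")
model = ChatOpenAI(model="gpt-3.5-turbo")
output_parser = StrOutputParser()

# The `|` operator composes runnables into a single chain (LCEL).
chain = prompt | model | output_parser

# The composed chain gets sync, batch, async, and streaming methods for free:
print(chain.invoke({"topic": "ice cream"}))
# chain.batch([{"topic": "ice cream"}, {"topic": "spaghetti"}])
# await chain.ainvoke({"topic": "ice cream"})  # inside an async context
```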
2 changes: 1 addition & 1 deletion docs/docs/guides/debugging.md
@@ -12,7 +12,7 @@ Platforms with tracing capabilities like [LangSmith](/docs/langsmith/) and [Wand

For anyone building production-grade LLM applications, we highly recommend using a platform like this.

-![LangSmith run](/img/run_details.png)
+![LangSmith run](../../static/img/run_details.png)

## `set_debug` and `set_verbose`

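
The `set_debug` and `set_verbose` content is truncated in this view; as a minimal sketch of the two toggles, using the `langchain.globals` import path available around this commit:

```python
# Sketch: globally toggle LangChain's debug / verbose logging.
from langchain.globals import set_debug, set_verbose

set_debug(True)    # log detailed inputs/outputs for every component
set_verbose(True)  # or: coarser, human-readable logging of chain steps
```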
6 changes: 3 additions & 3 deletions docs/docs/guides/local_llms.ipynb
@@ -32,7 +32,7 @@
"1. `Base model`: What is the base-model and how was it trained?\n",
"2. `Fine-tuning approach`: Was the base-model fine-tuned and, if so, what [set of instructions](https://cameronrwolfe.substack.com/p/beyond-llama-the-power-of-open-llms#%C2%A7alpaca-an-instruction-following-llama-model) was used?\n",
"\n",
"![Image description](/img/OSS_LLM_overview.png)\n",
"![Image description](../../static/img/OSS_LLM_overview.png)\n",
"\n",
"The relative performance of these models can be assessed using several leaderboards, including:\n",
"\n",
@@ -55,15 +55,15 @@
"\n",
"In particular, see [this excellent post](https://finbarr.ca/how-is-llama-cpp-possible/) on the importance of quantization.\n",
"\n",
"![Image description](/img/llama-memory-weights.png)\n",
"![Image description](../../static/img/llama-memory-weights.png)\n",
"\n",
"With less precision, we radically decrease the memory needed to store the LLM in memory.\n",
"\n",
"In addition, we can see the importance of GPU memory bandwidth [sheet](https://docs.google.com/spreadsheets/d/1OehfHHNSn66BP2h3Bxp2NJTVX97icU0GmCXF6pK23H8/edit#gid=0)!\n",
"\n",
"A Mac M2 Max is 5-6x faster than a M1 for inference due to the larger GPU memory bandwidth.\n",
"\n",
"![Image description](/img/llama_t_put.png)\n",
"![Image description](../../static/img/llama_t_put.png)\n",
"\n",
"## Quickstart\n",
"\n",
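
The memory savings from quantization are easy to sanity-check with back-of-the-envelope arithmetic; a sketch counting weights only (real runtimes add KV-cache and activation overhead):

```python
# Approximate memory needed just to hold model weights, by precision.
def weight_memory_gb(n_params_billions: float, bits_per_param: int) -> float:
    return n_params_billions * 1e9 * bits_per_param / 8 / 1e9

for bits in (32, 16, 8, 4):
    print(f"7B parameters at {bits}-bit: {weight_memory_gb(7, bits):.1f} GB")
# 32-bit: 28.0 GB, 16-bit: 14.0 GB, 8-bit: 7.0 GB, 4-bit: 3.5 GB
```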