
Commit

ci[minor]: add lint/format to ci (#169)
hinthornw authored Apr 15, 2024
2 parents 1f9302e + 4f4ed69 commit f253278
Showing 13 changed files with 357 additions and 227 deletions.
57 changes: 57 additions & 0 deletions .eslintrc.js
Original file line number Diff line number Diff line change
@@ -0,0 +1,57 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* This source code is licensed under the MIT license found in the
* LICENSE file in the root directory of this source tree.
*
* @format
*/

const OFF = 0;
const WARNING = 1;
const ERROR = 2;

module.exports = {
root: true,
env: {
browser: true,
commonjs: true,
jest: true,
node: true,
},
parser: "@babel/eslint-parser",
parserOptions: {
allowImportExportEverywhere: true,
},
extends: ["airbnb", "prettier"],
plugins: ["react-hooks", "header"],
ignorePatterns: [
"build",
"docs/api",
"node_modules",
"docs/_static",
"static",
],
rules: {
// Ignore certain webpack alias because it can't be resolved
"import/no-unresolved": [
ERROR,
{ ignore: ["^@theme", "^@docusaurus", "^@generated"] },
],
"import/extensions": OFF,
"react/jsx-filename-extension": OFF,
"react-hooks/rules-of-hooks": ERROR,
"react/prop-types": OFF, // PropTypes aren't used much these days.
"react/function-component-definition": [
WARNING,
{
namedComponents: "function-declaration",
unnamedComponents: "arrow-function",
},
],
"no-unused-vars": WARNING,
"import/prefer-default-export": WARNING,
"react/jsx-props-no-spreading": OFF,
"no-empty-pattern": WARNING,
},
};
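The config above keys every rule off three numeric severity constants. As a quick sketch of what those numbers mean to ESLint (this helper is illustrative, not part of the commit):

```javascript
// Illustration only: how the OFF/WARNING/ERROR constants in .eslintrc.js
// map to ESLint's severity levels and their string equivalents.
const SEVERITY_NAMES = { 0: "off", 1: "warn", 2: "error" };

function severityName(level) {
  return SEVERITY_NAMES[level];
}

console.log(severityName(2)); // "error", e.g. "react-hooks/rules-of-hooks" above
```

ESLint accepts either form in a config, so `"no-unused-vars": WARNING` and `"no-unused-vars": "warn"` are equivalent; the constants just make the intent readable.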
52 changes: 52 additions & 0 deletions .github/workflows/format-lint.yml
@@ -0,0 +1,52 @@
# This workflow does a clean installation of node dependencies (with caching) and runs the lint and format checks
# For more information see: https://docs.github.com/en/actions/automating-builds-and-tests/building-and-testing-nodejs

name: CI (Lint/Format)

on:
push:
branches: ["main"]
pull_request:
workflow_dispatch: # Allows triggering the workflow manually in GitHub UI


# If another push to the same PR or branch happens while this workflow is still running,
# cancel the earlier run in favor of the next run.
#
# There's no point in testing an outdated version of the code. GitHub only allows
# a limited number of job runners to be active at the same time, so it's better to cancel
# pointless jobs early so that more useful jobs can run sooner.
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true

jobs:
lint:
name: Check linting
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Use Node.js 18.x
uses: actions/setup-node@v3
with:
node-version: 18.x
cache: "yarn"
- name: Install dependencies
run: yarn install --immutable --mode=skip-build
- name: Check linting
run: yarn run lint

format:
name: Check formatting
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Use Node.js 18.x
uses: actions/setup-node@v3
with:
node-version: 18.x
cache: "yarn"
- name: Install dependencies
run: yarn install --immutable --mode=skip-build
- name: Check formatting
run: yarn run format:check
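The `concurrency` block above can be read as computing a group key from the workflow name and the git ref; runs that share a key cancel the in-progress predecessor. A small sketch of that keying (illustrative only, not GitHub's actual implementation):

```javascript
// Sketch of the concurrency group key: ${{ github.workflow }}-${{ github.ref }}.
// Runs computing the same key share a group, and with cancel-in-progress: true
// a new run in the group cancels the one still running.
function concurrencyGroup(workflow, ref) {
  return `${workflow}-${ref}`;
}

// Two pushes to the same PR land in the same group...
const first = concurrencyGroup("CI (Lint/Format)", "refs/pull/169/merge");
const second = concurrencyGroup("CI (Lint/Format)", "refs/pull/169/merge");
console.log(first === second); // true: the earlier run gets cancelled

// ...while a push to main lands in a separate group and runs independently.
const main = concurrencyGroup("CI (Lint/Format)", "refs/heads/main");
console.log(first === main); // false
```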
35 changes: 17 additions & 18 deletions docs/evaluation/faq/evaluator-implementations.mdx
@@ -53,10 +53,10 @@ Three QA evaluators you can load are: `"qa"`, `"context_qa"`, `"cot_qa"`. Based
- The `"qa"` evaluator ([reference](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.QAEvalChain.html#langchain-evaluation-qa-eval-chain-qaevalchain)) instructs an LLM to directly grade a response as "correct" or "incorrect" based on the reference answer.
- The `"context_qa"` evaluator ([reference](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.ContextQAEvalChain.html#langchain.evaluation.qa.eval_chain.ContextQAEvalChain)) instructs the LLM chain to use reference "context" (provided through the example outputs) in determining correctness. This is useful if you have a larger corpus of grounding docs but don't have ground truth answers to a query.
- The `"cot_qa"` evaluator ([reference](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.CotQAEvalChain.html#langchain.evaluation.qa.eval_chain.CotQAEvalChain)) is similar to the "context_qa" evaluator, except it instructs the LLMChain to use chain of thought "reasoning" before determining a final verdict. This tends to lead to responses that better correlate with human labels, for a slightly higher token and runtime cost.
{" "}
<CodeTabs
tabs={[
PythonBlock(`from langsmith import Client

<CodeTabs
tabs={[
PythonBlock(`from langsmith import Client
from langsmith.evaluation import LangChainStringEvaluator, evaluate\n
qa_evaluator = LangChainStringEvaluator("qa")
context_qa_evaluator = LangChainStringEvaluator("context_qa")
@@ -68,17 +68,16 @@ evaluate(
evaluators=[qa_evaluator, context_qa_evaluator, cot_qa_evaluator],
metadata={"revision_id": "the version of your pipeline you are testing"},
)`),
]}
groupId="client-language"
/>
You can customize the evaluator by specifying the LLM used to power its LLM
chain or even by customizing the prompt itself. Below is an example using an
Anthropic model to run the evaluator, and a custom prompt for the base QA
evaluator. Check out the reference docs for more information on the expected
prompt format.
<CodeTabs
tabs={[
PythonBlock(`from langchain.chat_models import ChatAnthropic
]}
groupId="client-language"
/>
You can customize the evaluator by specifying the LLM used to power its LLM chain
or even by customizing the prompt itself. Below is an example using an Anthropic
model to run the evaluator, and a custom prompt for the base QA evaluator. Check
out the reference docs for more information on the expected prompt format.
<CodeTabs
tabs={[
PythonBlock(`from langchain.chat_models import ChatAnthropic
from langchain_core.prompts.prompt import PromptTemplate
from langsmith.evaluation import LangChainStringEvaluator\n
_PROMPT_TEMPLATE = """You are an expert professor specialized in grading students' answers to questions.
@@ -104,9 +103,9 @@ evaluate(
evaluators=[qa_evaluator, context_qa_evaluator, cot_qa_evaluator],
)
`),
]}
groupId="client-language"
/>
]}
groupId="client-language"
/>

## Criteria Evaluators (No Labels)

1 change: 1 addition & 0 deletions docs/tracing/faq/logging_and_viewing.mdx
@@ -52,6 +52,7 @@ Additionally, you will need to set `LANGCHAIN_TRACING_V2='true'` if you plan to

- LangChain (Python or JS)
- `@traceable` decorator or `wrap_openai` method in the Python SDK

:::

<CodeTabs
64 changes: 33 additions & 31 deletions src/components/ClientInstallation.js
@@ -1,34 +1,36 @@
import React from "react";
import { CodeTabs } from "./InstructionsWithCode";

export const ClientInstallationCodeTabs = () => (
<CodeTabs
groupId="client-language"
tabs={[
{
value: "python",
label: "pip",
language: "bash",
content: `pip install -U langsmith`,
},
{
value: "typescript",
label: "yarn",
language: "bash",
content: `yarn add langsmith`,
},
{
value: "npm",
label: "npm",
language: "bash",
content: `npm install -S langsmith`,
},
{
value: "pnpm",
label: "pnpm",
language: "bash",
content: `pnpm add langsmith`,
},
]}
/>
);
export function ClientInstallationCodeTabs() {

Check warning on line 4 in src/components/ClientInstallation.js (GitHub Actions / Check linting): Prefer default export on a file with single export
return (
<CodeTabs
groupId="client-language"
tabs={[
{
value: "python",
label: "pip",
language: "bash",
content: `pip install -U langsmith`,
},
{
value: "typescript",
label: "yarn",
language: "bash",
content: `yarn add langsmith`,
},
{
value: "npm",
label: "npm",
language: "bash",
content: `npm install -S langsmith`,
},
{
value: "pnpm",
label: "pnpm",
language: "bash",
content: `pnpm add langsmith`,
},
]}
/>
);
}
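The rewrite above follows the `react/function-component-definition` rule added in `.eslintrc.js`, which asks for named components written as function declarations rather than arrow functions. A minimal non-React sketch of the two forms (illustrative only):

```javascript
// The style the rule prefers for named components: a function declaration.
function NamedComponent() {
  return null;
}

// The style the diff replaces: an arrow function assigned to a const.
const ArrowComponent = () => null;

// Both carry a .name (the arrow's is inferred from its binding), but only
// the declaration is hoisted to the top of its scope.
console.log(NamedComponent.name); // "NamedComponent"
console.log(ArrowComponent.name); // "ArrowComponent"
```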
72 changes: 37 additions & 35 deletions src/components/Hub.js
@@ -10,39 +10,41 @@ import {
ShellBlock,

Check warning on line 10 in src/components/Hub.js (GitHub Actions / Check linting): 'ShellBlock' is defined but never used
} from "./InstructionsWithCode";

export const HubInstallationCodeTabs = () => (
<CodeTabs
groupId="client-language"
tabs={[
{
value: "python",
label: "pip",
language: "bash",
content: `pip install -U langchain langchainhub`,
},
{
value: "typescript",
label: "yarn",
language: "bash",
content: `yarn add langchain`,
},
{
value: "npm",
label: "npm",
language: "bash",
content: `npm install -S langchain`,
},
{
value: "pnpm",
label: "pnpm",
language: "bash",
content: `pnpm add langchain`,
},
]}
/>
);
export function HubInstallationCodeTabs() {
return (
<CodeTabs
groupId="client-language"
tabs={[
{
value: "python",
label: "pip",
language: "bash",
content: `pip install -U langchain langchainhub`,
},
{
value: "typescript",
label: "yarn",
language: "bash",
content: `yarn add langchain`,
},
{
value: "npm",
label: "npm",
language: "bash",
content: `npm install -S langchain`,
},
{
value: "pnpm",
label: "pnpm",
language: "bash",
content: `pnpm add langchain`,
},
]}
/>
);
}

export const HubPullCodeTabs = ({}) => {
export function HubPullCodeTabs() {
const pyBlock = `from langchain import hub
# pull a chat prompt
@@ -94,9 +96,9 @@ console.log(result);`;
</TabItem>
</Tabs>
);
};
}

export const HubPushCodeTabs = ({}) => {
export function HubPushCodeTabs() {
const pyBlock = `from langchain import hub
from langchain.prompts.chat import ChatPromptTemplate
@@ -131,4 +133,4 @@ await hub.push("<handle>/my-first-prompt", prompt);`;
</TabItem>
</Tabs>
);
};
}
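The same diff also swaps `({}) =>` for `()` on `HubPullCodeTabs` and `HubPushCodeTabs`, which clears the `no-empty-pattern` warning enabled in `.eslintrc.js`. A plain-JS sketch of why the empty pattern is worth removing (illustrative only):

```javascript
// An empty object pattern binds nothing, but it still destructures its
// argument, so calling the function with no argument throws.
const withEmptyPattern = ({}) => "ok"; // what no-empty-pattern flags
const withoutParams = () => "ok";      // the replacement

console.log(withoutParams());      // "ok"
console.log(withEmptyPattern({})); // "ok", but only if an argument is passed

let threw = false;
try {
  withEmptyPattern(); // destructuring undefined throws a TypeError
} catch (e) {
  threw = e instanceof TypeError;
}
console.log(threw); // true
```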
