remove attachment prefix #1324

Merged (2 commits) on Dec 11, 2024
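In short: the API returns example attachments under keys prefixed with "attachment." (for example "attachment.image"), and this PR makes both the JS and Python clients strip that prefix before exposing attachments, so callers address them by bare name ("image"). Below is a minimal TypeScript sketch of that key rewrite, distilled from the reducer change in js/src/client.ts; the names ATTACHMENT_PREFIX, AttachmentInfo, and stripAttachmentPrefix are illustrative and do not appear in the diff.

const ATTACHMENT_PREFIX = "attachment."; // assumed constant; the diff inlines the string literal

interface AttachmentInfo {
  presigned_url: string;
}

function stripAttachmentPrefix(
  attachmentUrls: Record<string, AttachmentInfo>
): Record<string, AttachmentInfo> {
  return Object.entries(attachmentUrls).reduce((acc, [key, value]) => {
    // Before this PR the raw key was kept:  acc[key]               -> "attachment.image"
    // After it the prefix is sliced off:    acc[key.slice(...)]    -> "image"
    acc[key.slice(ATTACHMENT_PREFIX.length)] = {
      presigned_url: value.presigned_url,
    };
    return acc;
  }, {} as Record<string, AttachmentInfo>);
}

// Example: { "attachment.image": { presigned_url: "https://..." } }
// becomes  { image: { presigned_url: "https://..." } }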
4 changes: 2 additions & 2 deletions js/src/client.ts
@@ -1,4 +1,4 @@
import * as uuid from "uuid";

[Check notice (GitHub Actions / benchmark) on line 1 in js/src/client.ts: Benchmark results]

create_5_000_run_trees: Mean +- std dev: 688 ms +- 96 ms (flagged unstable: std dev is 14% of the mean)
create_10_000_run_trees: Mean +- std dev: 1.40 sec +- 0.23 sec (flagged unstable: std dev is 17% of the mean)
create_20_000_run_trees: Mean +- std dev: 1.44 sec +- 0.19 sec (flagged unstable: std dev is 14% of the mean)
dumps_class_nested_py_branch_and_leaf_200x400: Mean +- std dev: 702 us +- 12 us
dumps_class_nested_py_leaf_50x100: Mean +- std dev: 24.9 ms +- 0.3 ms
dumps_class_nested_py_leaf_100x200: Mean +- std dev: 104 ms +- 2 ms
dumps_dataclass_nested_50x100: Mean +- std dev: 25.3 ms +- 0.4 ms
dumps_pydantic_nested_50x100: Mean +- std dev: 71.5 ms +- 17.3 ms (flagged unstable: std dev is 24% of the mean)
dumps_pydanticv1_nested_50x100: Mean +- std dev: 198 ms +- 3 ms

For the unstable runs, pyperf suggests rerunning with more runs/values/loops and running 'python -m pyperf system tune' to reduce system jitter.

[Check notice (GitHub Actions / benchmark) on line 1 in js/src/client.ts: Comparison against main]

+-----------------------------------------------+----------+------------------------+
| Benchmark                                     | main     | changes                |
+===============================================+==========+========================+
| dumps_pydanticv1_nested_50x100                | 218 ms   | 198 ms: 1.10x faster   |
+-----------------------------------------------+----------+------------------------+
| create_5_000_run_trees                        | 713 ms   | 688 ms: 1.04x faster   |
+-----------------------------------------------+----------+------------------------+
| dumps_class_nested_py_leaf_50x100             | 25.3 ms  | 24.9 ms: 1.02x faster  |
+-----------------------------------------------+----------+------------------------+
| dumps_dataclass_nested_50x100                 | 25.5 ms  | 25.3 ms: 1.01x faster  |
+-----------------------------------------------+----------+------------------------+
| dumps_class_nested_py_leaf_100x200            | 104 ms   | 104 ms: 1.01x faster   |
+-----------------------------------------------+----------+------------------------+
| create_10_000_run_trees                       | 1.41 sec | 1.40 sec: 1.00x faster |
+-----------------------------------------------+----------+------------------------+
| dumps_class_nested_py_branch_and_leaf_200x400 | 699 us   | 702 us: 1.00x slower   |
+-----------------------------------------------+----------+------------------------+
| create_20_000_run_trees                       | 1.40 sec | 1.44 sec: 1.03x slower |
+-----------------------------------------------+----------+------------------------+
| dumps_pydantic_nested_50x100                  | 65.2 ms  | 71.5 ms: 1.10x slower  |
+-----------------------------------------------+----------+------------------------+
| Geometric mean                                | (ref)    | 1.00x faster           |
+-----------------------------------------------+----------+------------------------+

import { AsyncCaller, AsyncCallerParams } from "./utils/async_caller.js";
import {
@@ -423,7 +423,7 @@
// If there is an item on the queue we were unable to pop,
// just return it as a single batch.
if (popped.length === 0 && this.items.length > 0) {
const item = this.items.shift()!;

[Check warning (GitHub Actions / Check linting) on line 426: Forbidden non-null assertion]
popped.push(item);
poppedSizeBytes += item.size;
this.sizeBytes -= item.size;
@@ -846,7 +846,7 @@
if (this._serverInfo === undefined) {
try {
this._serverInfo = await this._getServerInfo();
} catch (e) {

[Check warning (GitHub Actions / Check linting) on line 849: 'e' is defined but never used. Allowed unused args must match /^_/u]
console.warn(
`[WARNING]: LangSmith failed to fetch info on supported operations. Falling back to batch operations and default limits.`
);
@@ -1564,7 +1564,7 @@
treeFilter?: string;
isRoot?: boolean;
dataSourceType?: string;
}): Promise<any> {

[Check warning (GitHub Actions / Check linting) on line 1567: Unexpected any. Specify a different type]
let projectIds_ = projectIds || [];
if (projectNames) {
projectIds_ = [
@@ -1852,7 +1852,7 @@
`Failed to list shared examples: ${response.status} ${response.statusText}`
);
}
return result.map((example: any) => ({

[Check warning (GitHub Actions / Check linting) on line 1855: Unexpected any. Specify a different type]
...example,
_hostUrl: this.getHostUrl(),
}));
@@ -1989,7 +1989,7 @@
}
// projectId querying
return true;
} catch (e) {

[Check warning (GitHub Actions / Check linting) on line 1992: 'e' is defined but never used. Allowed unused args must match /^_/u]
return false;
}
}
@@ -2735,7 +2735,7 @@
// add attachments back to the example
example.attachments = Object.entries(attachment_urls).reduce(
(acc, [key, value]) => {
acc[key] = {
acc[key.slice("attachment.".length)] = {
presigned_url: value.presigned_url,
};
return acc;
@@ -2832,7 +2832,7 @@
if (attachment_urls) {
example.attachments = Object.entries(attachment_urls).reduce(
(acc, [key, value]) => {
acc[key] = {
acc[key.slice("attachment.".length)] = {
presigned_url: value.presigned_url,
};
return acc;
@@ -3364,7 +3364,7 @@
async _logEvaluationFeedback(
evaluatorResponse: EvaluationResult | EvaluationResults,
run?: Run,
sourceInfo?: { [key: string]: any }

[Check warning (GitHub Actions / Check linting) on line 3367: Unexpected any. Specify a different type]
): Promise<[results: EvaluationResult[], feedbacks: Feedback[]]> {
const evalResults: Array<EvaluationResult> =
this._selectEvalResults(evaluatorResponse);
@@ -3403,7 +3403,7 @@
public async logEvaluationFeedback(
evaluatorResponse: EvaluationResult | EvaluationResults,
run?: Run,
sourceInfo?: { [key: string]: any }

[Check warning (GitHub Actions / Check linting) on line 3406: Unexpected any. Specify a different type]
): Promise<EvaluationResult[]> {
const [results] = await this._logEvaluationFeedback(
evaluatorResponse,
@@ -3853,7 +3853,7 @@

public async createCommit(
promptIdentifier: string,
object: any,

[Check warning (GitHub Actions / Check linting) on line 3856: Unexpected any. Specify a different type]
options?: {
parentCommitHash?: string;
}
@@ -4071,7 +4071,7 @@
isPublic?: boolean;
isArchived?: boolean;
}
): Promise<Record<string, any>> {

[Check warning (GitHub Actions / Check linting) on line 4074: Unexpected any. Specify a different type]
if (!(await this.promptExists(promptIdentifier))) {
throw new Error("Prompt does not exist, you must create it first.");
}
@@ -4082,7 +4082,7 @@
throw await this._ownerConflictError("update a prompt", owner);
}

const payload: Record<string, any> = {};

[Check warning (GitHub Actions / Check linting) on line 4085: Unexpected any. Specify a different type]

if (options?.description !== undefined)
payload.description = options.description;
32 changes: 13 additions & 19 deletions js/src/tests/client.int.test.ts
@@ -1410,24 +1410,24 @@ test("update examples multipart", async () => {
let updatedExample = await client.readExample(exampleId);
expect(updatedExample.inputs.text).toEqual("hello world2");
expect(Object.keys(updatedExample.attachments ?? {}).sort()).toEqual(
["attachment.bar", "attachment.test_file"].sort()
["bar", "test_file"].sort()
);
expect(updatedExample.metadata).toEqual({ bar: "foo" });
let attachmentData: Uint8Array | undefined = updatedExample.attachments?.[
"attachment.test_file"
"test_file"
].presigned_url
? new Uint8Array(
(await fetch(
updatedExample.attachments?.["attachment.test_file"].presigned_url
updatedExample.attachments?.["test_file"].presigned_url
).then((res) => res.arrayBuffer())) as ArrayBuffer
)
: undefined;
expect(attachmentData).toEqual(new Uint8Array(fs.readFileSync(pathname)));
attachmentData = updatedExample.attachments?.["attachment.bar"].presigned_url
attachmentData = updatedExample.attachments?.["bar"].presigned_url
? new Uint8Array(
(await fetch(
updatedExample.attachments?.["attachment.bar"].presigned_url
).then((res) => res.arrayBuffer())) as ArrayBuffer
(await fetch(updatedExample.attachments?.["bar"].presigned_url).then(
(res) => res.arrayBuffer()
)) as ArrayBuffer
)
: undefined;
expect(attachmentData).toEqual(new Uint8Array(fs.readFileSync(pathname)));
@@ -1443,14 +1443,11 @@ test("update examples multipart", async () => {
await client.updateExamplesMultipart(dataset.id, [exampleUpdate4]);
updatedExample = await client.readExample(exampleId);
expect(updatedExample.metadata).toEqual({ foo: "bar" });
expect(Object.keys(updatedExample.attachments ?? {})).toEqual([
"attachment.test_file2",
]);
attachmentData = updatedExample.attachments?.["attachment.test_file2"]
.presigned_url
expect(Object.keys(updatedExample.attachments ?? {})).toEqual(["test_file2"]);
attachmentData = updatedExample.attachments?.["test_file2"].presigned_url
? new Uint8Array(
(await fetch(
updatedExample.attachments?.["attachment.test_file2"].presigned_url
updatedExample.attachments?.["test_file2"].presigned_url
).then((res) => res.arrayBuffer())) as ArrayBuffer
)
: undefined;
@@ -1471,14 +1468,11 @@ test("update examples multipart", async () => {
foo: "bar",
dataset_split: ["foo", "bar"],
});
expect(Object.keys(updatedExample.attachments ?? {})).toEqual([
"attachment.test_file",
]);
attachmentData = updatedExample.attachments?.["attachment.test_file"]
.presigned_url
expect(Object.keys(updatedExample.attachments ?? {})).toEqual(["test_file"]);
attachmentData = updatedExample.attachments?.["test_file"].presigned_url
? new Uint8Array(
(await fetch(
updatedExample.attachments?.["attachment.test_file"].presigned_url
updatedExample.attachments?.["test_file"].presigned_url
).then((res) => res.arrayBuffer())) as ArrayBuffer
)
: undefined;
85 changes: 40 additions & 45 deletions js/src/tests/evaluate_attachments.int.test.ts
@@ -34,19 +34,19 @@ test("evaluate can handle examples with attachments", async () => {
config?: TargetConfigT
) => {
// Verify we receive the attachment data
if (!config?.attachments?.["attachment.image"]) {
if (!config?.attachments?.["image"]) {
throw new Error("Image attachment not found");
}
const expectedData = new Uint8Array(
Buffer.from("fake image data for testing")
);
const attachmentData: Uint8Array | undefined = config?.attachments?.[
"attachment.image"
"image"
].presigned_url
? new Uint8Array(
(await fetch(
config?.attachments?.["attachment.image"].presigned_url
).then((res) => res.arrayBuffer())) as ArrayBuffer
(await fetch(config?.attachments?.["image"].presigned_url).then(
(res) => res.arrayBuffer()
)) as ArrayBuffer
)
: undefined;
if (!arraysEqual(attachmentData ?? new Uint8Array(), expectedData)) {
@@ -57,16 +57,15 @@

const customEvaluator = async ({ attachments }: { attachments?: any }) => {
expect(attachments).toBeDefined();
expect(attachments?.["attachment.image"]).toBeDefined();
expect(attachments?.["image"]).toBeDefined();
const expectedData = new Uint8Array(
Buffer.from("fake image data for testing")
);
const attachmentData: Uint8Array | undefined = attachments?.[
"attachment.image"
].presigned_url
const attachmentData: Uint8Array | undefined = attachments?.["image"]
.presigned_url
? new Uint8Array(
(await fetch(attachments?.["attachment.image"].presigned_url).then(
(res) => res.arrayBuffer()
(await fetch(attachments?.["image"].presigned_url).then((res) =>
res.arrayBuffer()
)) as ArrayBuffer
)
: undefined;
@@ -135,16 +134,15 @@ test("evaluate with attachments not in target function", async () => {

const customEvaluator = async ({ attachments }: { attachments?: any }) => {
expect(attachments).toBeDefined();
expect(attachments?.["attachment.image"]).toBeDefined();
expect(attachments?.["image"]).toBeDefined();
const expectedData = new Uint8Array(
Buffer.from("fake image data for testing")
);
const attachmentData: Uint8Array | undefined = attachments?.[
"attachment.image"
].presigned_url
const attachmentData: Uint8Array | undefined = attachments?.["image"]
.presigned_url
? new Uint8Array(
(await fetch(attachments?.["attachment.image"].presigned_url).then(
(res) => res.arrayBuffer()
(await fetch(attachments?.["image"].presigned_url).then((res) =>
res.arrayBuffer()
)) as ArrayBuffer
)
: undefined;
@@ -212,19 +210,19 @@ test("multiple evaluators with attachments", async () => {
config?: TargetConfigT
) => {
// Verify we receive the attachment data
if (!config?.attachments?.["attachment.image"]) {
if (!config?.attachments?.["image"]) {
throw new Error("Image attachment not found");
}
const expectedData = new Uint8Array(
Buffer.from("fake image data for testing")
);
const attachmentData: Uint8Array | undefined = config?.attachments?.[
"attachment.image"
"image"
].presigned_url
? new Uint8Array(
(await fetch(
config?.attachments?.["attachment.image"].presigned_url
).then((res) => res.arrayBuffer())) as ArrayBuffer
(await fetch(config?.attachments?.["image"].presigned_url).then(
(res) => res.arrayBuffer()
)) as ArrayBuffer
)
: undefined;
if (!arraysEqual(attachmentData ?? new Uint8Array(), expectedData)) {
@@ -235,16 +233,15 @@

const customEvaluatorOne = async ({ attachments }: { attachments?: any }) => {
expect(attachments).toBeDefined();
expect(attachments?.["attachment.image"]).toBeDefined();
expect(attachments?.["image"]).toBeDefined();
const expectedData = new Uint8Array(
Buffer.from("fake image data for testing")
);
const attachmentData: Uint8Array | undefined = attachments?.[
"attachment.image"
].presigned_url
const attachmentData: Uint8Array | undefined = attachments?.["image"]
.presigned_url
? new Uint8Array(
(await fetch(attachments?.["attachment.image"].presigned_url).then(
(res) => res.arrayBuffer()
(await fetch(attachments?.["image"].presigned_url).then((res) =>
res.arrayBuffer()
)) as ArrayBuffer
)
: undefined;
@@ -259,16 +256,15 @@

const customEvaluatorTwo = async ({ attachments }: { attachments?: any }) => {
expect(attachments).toBeDefined();
expect(attachments?.["attachment.image"]).toBeDefined();
expect(attachments?.["image"]).toBeDefined();
const expectedData = new Uint8Array(
Buffer.from("fake image data for testing")
);
const attachmentData: Uint8Array | undefined = attachments?.[
"attachment.image"
].presigned_url
const attachmentData: Uint8Array | undefined = attachments?.["image"]
.presigned_url
? new Uint8Array(
(await fetch(attachments?.["attachment.image"].presigned_url).then(
(res) => res.arrayBuffer()
(await fetch(attachments?.["image"].presigned_url).then((res) =>
res.arrayBuffer()
)) as ArrayBuffer
)
: undefined;
@@ -333,19 +329,19 @@ test("evaluate with attachments runnable target function", async () => {
await client.uploadExamplesMultipart(dataset.id, [example]);

const myFunction = async (_input: any, config?: any) => {
if (!config?.attachments?.["attachment.image"]) {
if (!config?.attachments?.["image"]) {
throw new Error("Image attachment not found");
}
const expectedData = new Uint8Array(
Buffer.from("fake image data for testing")
);
const attachmentData: Uint8Array | undefined = config?.attachments?.[
"attachment.image"
"image"
].presigned_url
? new Uint8Array(
(await fetch(
config?.attachments?.["attachment.image"].presigned_url
).then((res) => res.arrayBuffer())) as ArrayBuffer
(await fetch(config?.attachments?.["image"].presigned_url).then(
(res) => res.arrayBuffer()
)) as ArrayBuffer
)
: undefined;
if (!arraysEqual(attachmentData ?? new Uint8Array(), expectedData)) {
@@ -359,16 +355,15 @@

const customEvaluator = async ({ attachments }: { attachments?: any }) => {
expect(attachments).toBeDefined();
expect(attachments?.["attachment.image"]).toBeDefined();
expect(attachments?.["image"]).toBeDefined();
const expectedData = new Uint8Array(
Buffer.from("fake image data for testing")
);
const attachmentData: Uint8Array | undefined = attachments?.[
"attachment.image"
].presigned_url
const attachmentData: Uint8Array | undefined = attachments?.["image"]
.presigned_url
? new Uint8Array(
(await fetch(attachments?.["attachment.image"].presigned_url).then(
(res) => res.arrayBuffer()
(await fetch(attachments?.["image"].presigned_url).then((res) =>
res.arrayBuffer()
)) as ArrayBuffer
)
: undefined;
4 changes: 2 additions & 2 deletions python/langsmith/client.py
@@ -3900,7 +3900,7 @@ def read_example(
response = requests.get(value["presigned_url"], stream=True)
response.raise_for_status()
reader = io.BytesIO(response.content)
attachments[key.split(".")[1]] = {
attachments[key.removeprefix("attachment.")] = {
"presigned_url": value["presigned_url"],
"reader": reader,
}
@@ -3986,7 +3986,7 @@ def list_examples(
response = requests.get(value["presigned_url"], stream=True)
response.raise_for_status()
reader = io.BytesIO(response.content)
attachments[key.split(".")[1]] = {
attachments[key.removeprefix("attachment.")] = {
"presigned_url": value["presigned_url"],
"reader": reader,
}
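With the prefix stripped, callers look up attachments by bare name, exactly as the updated tests above do. A hedged usage sketch in TypeScript: Client.readExample and the attachments/presigned_url shape are taken from the tests in this PR, while fetchAttachmentBytes, the exampleId argument, and the attachment name "test_file" are placeholders.

import { Client } from "langsmith";

// Read one example and download the bytes of its "test_file" attachment.
async function fetchAttachmentBytes(
  client: Client,
  exampleId: string
): Promise<Uint8Array | undefined> {
  const example = await client.readExample(exampleId);
  // Keys are now bare names ("test_file"), not "attachment.test_file".
  const url = example.attachments?.["test_file"]?.presigned_url;
  if (!url) return undefined;
  const buffer = await fetch(url).then((res) => res.arrayBuffer());
  return new Uint8Array(buffer);
}

The Python client mirrors this with key.removeprefix("attachment."), which, unlike the previous key.split(".")[1], also leaves attachment names that themselves contain dots intact.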