feat(add 2.19 doc to opentronsai): upgrading documentation (#15911)
Closes [AUTH-630](https://opentrons.atlassian.net/browse/AUTH-630)



<!--
Thanks for taking the time to open a Pull Request (PR)! Please make sure
you've read the "Opening Pull Requests" section of our Contributing
Guide:


https://github.com/Opentrons/opentrons/blob/edge/CONTRIBUTING.md#opening-pull-requests

GitHub provides robust markdown to format your PR. Links, diagrams,
pictures, and videos along with text formatting make it possible to
create a rich and informative PR. For more information on GitHub
markdown, see:


https://docs.github.com/en/get-started/writing-on-github/getting-started-with-writing-and-formatting-on-github/basic-writing-and-formatting-syntax

To ensure your code is reviewed quickly and thoroughly, please fill out
the sections below to the best of your ability!
-->

# Overview
OpentronsAI currently uses Python Protocol API 2.15. This PR upgrades its indexed documentation and prompts to API 2.19.
<!--
Describe your PR at a high level. State acceptance criteria and how this
PR fits into other work. Link issues, PRs, and other relevant resources.
-->
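For protocol authors, the visible effect of this upgrade is the `apiLevel` declared in generated protocols. A minimal sketch of the header block after the bump (the name, author, and description values here are placeholders, not generated output):

```python
# Header block a generated protocol would carry after this upgrade;
# protocolName/author/description values are placeholders.
metadata = {
    "protocolName": "Example transfer",
    "author": "OpentronsAI",
    "description": "Placeholder description",
}
requirements = {"robotType": "OT-2", "apiLevel": "2.19"}
```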

## Test Plan and Hands on Testing
- Run server:
```shell
cd opentrons-ai-server
make local-run
```

- Run client:
```shell
cd opentrons-ai-client
make dev
```

Play with the UI; for example, you might type:
<img width="1351" alt="image"
src="https://github.com/user-attachments/assets/8a61b763-b21e-44e2-8da0-79f853a0cafe">


<!--
Describe your testing of the PR. Emphasize testing not reflected in the
code. Attach protocols, logs, screenshots and any other assets that
support your testing.
-->

## Changelog
- Added the indexed Python Protocol API 2.19 documentation
- Added a utility (`api/utils/create_index.py`) that shows how to create an index file
- Updated the UI so the app can be run locally
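The local-run support works by branching on `NODE_ENV`. The selection logic from `InputPrompt/index.tsx` can be sketched in Python for illustration — only the local endpoint value appears in `constants.ts` in this diff; the staging and production URLs below are stand-ins:

```python
# Illustrative mirror of the client's getEndpoint() switch.
LOCAL_END_POINT = "http://localhost:8000/api/chat/completion"  # from constants.ts
PROD_END_POINT = "<prod-endpoint>"        # stand-in, not the real URL
STAGING_END_POINT = "<staging-endpoint>"  # stand-in, not the real URL

def get_endpoint(node_env: str) -> str:
    if node_env == "production":
        return PROD_END_POINT
    if node_env == "development":
        return LOCAL_END_POINT
    return STAGING_END_POINT  # staging is the fallback for any other value
```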

<!--
List changes introduced by this PR considering future developers and the
end user. Give careful thought and clear documentation to breaking
changes.
-->

## Review requests

<!--
- What do you need from reviewers to feel confident this PR is ready to
merge?
- Ask questions.
-->

## Risk assessment
Low
<!--
- Indicate the level of attention this PR needs.
- Provide context to guide reviewers.
- Discuss trade-offs, coupling, and side effects.
- Look for the possibility, even if you think it's small, that your
change may affect some other part of the system.
- For instance, changing return tip behavior may also change the
behavior of labware calibration.
- How do your unit tests and on hands on testing mitigate this PR's
risks and the risk of future regressions?
- Especially in high risk PRs, explain how you know your testing is
enough.
-->


[AUTH-630]:
https://opentrons.atlassian.net/browse/AUTH-630?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ
Elyorcv authored Aug 20, 2024
1 parent 4b4f541 commit e6ff13c
Showing 17 changed files with 157 additions and 26 deletions.
31 changes: 25 additions & 6 deletions opentrons-ai-client/src/main.tsx
@@ -8,21 +8,40 @@ import { i18n } from './i18n'
import { App } from './App'
import {
AUTH0_DOMAIN,
LOCAL_AUTH0_DOMAIN,
PROD_AUTH0_CLIENT_ID,
STAGING_AUTH0_CLIENT_ID,
LOCAL_AUTH0_CLIENT_ID,
} from './resources/constants'

const rootElement = document.getElementById('root')

const getClientId = (): string => {
switch (process.env.NODE_ENV) {
case 'production':
return PROD_AUTH0_CLIENT_ID
case 'development':
return LOCAL_AUTH0_CLIENT_ID
default:
return STAGING_AUTH0_CLIENT_ID
}
}

const getDomain = (): string => {
return process.env.NODE_ENV === 'development'
? LOCAL_AUTH0_DOMAIN
: AUTH0_DOMAIN
}

if (rootElement != null) {
const clientId = getClientId()
const domain = getDomain()

ReactDOM.createRoot(rootElement).render(
<React.StrictMode>
<Auth0Provider
domain={AUTH0_DOMAIN}
clientId={
process.env.NODE_ENV === 'production'
? PROD_AUTH0_CLIENT_ID
: STAGING_AUTH0_CLIENT_ID
}
clientId={clientId}
domain={domain}
authorizationParams={{
redirect_uri: window.location.origin,
}}
24 changes: 19 additions & 5 deletions opentrons-ai-client/src/molecules/InputPrompt/index.tsx
@@ -18,7 +18,11 @@ import { SendButton } from '../../atoms/SendButton'
import { chatDataAtom, chatHistoryAtom, tokenAtom } from '../../resources/atoms'
import { useApiCall } from '../../resources/hooks'
import { calcTextAreaHeight } from '../../resources/utils/utils'
import { STAGING_END_POINT, PROD_END_POINT } from '../../resources/constants'
import {
STAGING_END_POINT,
PROD_END_POINT,
LOCAL_END_POINT,
} from '../../resources/constants'

import type { AxiosRequestConfig } from 'axios'
import type { ChatData } from '../../resources/types'
@@ -47,11 +51,21 @@ export function InputPrompt(): JSX.Element {
'Content-Type': 'application/json',
}

const getEndpoint = (): string => {
switch (process.env.NODE_ENV) {
case 'production':
return PROD_END_POINT
case 'development':
return LOCAL_END_POINT
default:
return STAGING_END_POINT
}
}

const url = getEndpoint()

const config = {
url:
process.env.NODE_ENV === 'production'
? PROD_END_POINT
: STAGING_END_POINT,
url,
method: 'POST',
headers,
data: {
6 changes: 6 additions & 0 deletions opentrons-ai-client/src/resources/constants.ts
@@ -13,3 +13,9 @@ export const STAGING_AUTH0_AUDIENCE = 'https://staging.opentrons.ai/api'
// auth0 for production
export const PROD_AUTH0_CLIENT_ID = 'b5oTRmfMY94tjYL8GyUaVYHhMTC28X8o'
export const PROD_AUTH0_AUDIENCE = 'https://opentrons.ai/api'

// auth0 for local
export const LOCAL_AUTH0_CLIENT_ID = 'PcuD1wEutfijyglNeRBi41oxsKJ1HtKw'
export const LOCAL_AUTH0_AUDIENCE = 'sandbox-ai-api'
export const LOCAL_AUTH0_DOMAIN = 'identity.auth-dev.opentrons.com'
export const LOCAL_END_POINT = 'http://localhost:8000/api/chat/completion'
29 changes: 23 additions & 6 deletions opentrons-ai-client/src/resources/hooks/useGetAccessToken.ts
@@ -1,22 +1,39 @@
import { useAuth0 } from '@auth0/auth0-react'
import { PROD_AUTH0_AUDIENCE, STAGING_AUTH0_AUDIENCE } from '../constants'
import {
LOCAL_AUTH0_AUDIENCE,
PROD_AUTH0_AUDIENCE,
STAGING_AUTH0_AUDIENCE,
} from '../constants'

interface UseGetAccessTokenResult {
getAccessToken: () => Promise<string>
}

export const useGetAccessToken = (): UseGetAccessTokenResult => {
const { getAccessTokenSilently } = useAuth0()
const auth0Audience =
process.env.NODE_ENV === 'production'
? PROD_AUTH0_AUDIENCE
: STAGING_AUTH0_AUDIENCE

const auth0Audience = (): string => {
switch (process.env.NODE_ENV) {
case 'production':
return PROD_AUTH0_AUDIENCE
case 'staging':
return STAGING_AUTH0_AUDIENCE
case 'development':
return LOCAL_AUTH0_AUDIENCE
default:
console.error(
'Error: NODE_ENV variable is not valid:',
process.env.NODE_ENV
)
return STAGING_AUTH0_AUDIENCE
}
}

const getAccessToken = async (): Promise<string> => {
try {
const accessToken = await getAccessTokenSilently({
authorizationParams: {
audience: auth0Audience,
audience: auth0Audience(),
},
})
return accessToken
22 changes: 16 additions & 6 deletions opentrons-ai-server/api/domain/openai_predict.py
@@ -44,7 +44,9 @@ def get_docs_all(self, query: str) -> Tuple[str, str, str]:

# define file paths for storage
example_command_path = str(ROOT_PATH / "api" / "storage" / "index" / "commands")
documentation_path = str(ROOT_PATH / "api" / "storage" / "index" / "v215")
documentation_path = str(ROOT_PATH / "api" / "storage" / "index" / "v219")
documentation_ref_path = str(ROOT_PATH / "api" / "storage" / "index" / "v219_ref")

labware_api_path = standard_labware_api

# retrieve example commands
@@ -65,15 +67,23 @@ def get_docs_all(self, query: str) -> Tuple[str, str, str]:
# retrieve documentation
storage_context = StorageContext.from_defaults(persist_dir=documentation_path)
index = load_index_from_storage(storage_context)
retriever = index.as_retriever(similarity_top_k=3)
retriever = index.as_retriever(similarity_top_k=2)
nodes = retriever.retrieve(query)
docs = "\n".join(node.text.strip() for node in nodes)
docs_v215 = f"\n{'='*15} DOCUMENTATION {'='*15}\n\n" + docs
docs = f"\n{'='*15} DOCUMENTATION {'='*15}\n\n" + docs

# retrieve reference
storage_context = StorageContext.from_defaults(persist_dir=documentation_ref_path)
index = load_index_from_storage(storage_context)
retriever = index.as_retriever(similarity_top_k=2)
nodes = retriever.retrieve(query)
docs_ref = "\n".join(node.text.strip() for node in nodes)
docs_ref = f"\n{'='*15} DOCUMENTATION REFERENCE {'='*15}\n\n" + docs_ref

# standard api names
standard_api_names = f"\n{'='*15} STANDARD API NAMES {'='*15}\n\n" + labware_api_path

return example_commands, docs_v215, standard_api_names
return example_commands, docs + docs_ref, standard_api_names

def extract_atomic_description(self, protocol_description: str) -> List[str]:
class atomic_descr(BaseModel):
@@ -126,13 +136,13 @@ def predict(self, prompt: str, chat_completion_message_params: List[ChatCompleti
if chat_completion_message_params:
messages += chat_completion_message_params

example_commands, docs_v215, standard_api_names = self.get_docs_all(prompt)
example_commands, docs_refs, standard_api_names = self.get_docs_all(prompt)

user_message: ChatCompletionMessageParam = {
"role": "user",
"content": f"QUESTION/DESCRIPTION: \n{prompt}\n\n"
f"PYTHON API V2 DOCUMENTATION: \n{example_commands}\n"
f"{pipette_type}\n{example_pcr_1}\n\n{docs_v215}\n\n"
f"{pipette_type}\n{example_pcr_1}\n\n{docs_refs}\n\n"
f"{rules_for_transfer}\n\n{standard_api_names}\n\n",
}

14 changes: 11 additions & 3 deletions opentrons-ai-server/api/domain/prompts.py
@@ -99,7 +99,7 @@ def execute_function_call(function_name: str, arguments: str) -> str:
INSTRUCTIONS:
1) All types of protocols are based on apiLevel 2.15,
1) All types of protocols are based on apiLevel 2.19,
thus prepend the following code block
`metadata` and `requirements`:
```python
Expand All @@ -110,7 +110,7 @@ def execute_function_call(function_name: str, arguments: str) -> str:
'author': '[user name]',
'description': "[what is the protocol about]"
}
requirements = {"robotType": "[Robot type]", "apiLevel": "2.15"}
requirements = {"robotType": "[Robot type]", "apiLevel": "2.19"}
```
2) See the transfer rules <<COMMON RULES for TRANSFER>> below.
@@ -126,6 +126,14 @@ def execute_function_call(function_name: str, arguments: str) -> str:
5) If the pipette is multi-channel eg., P20 Multi-Channel Gen2, please use `columns` method.
6) <<< Load trash for Flex >>>
For Flex protocols, NOT OT-2 protocols using API version 2.16 or later,
load a trash bin in slot A3:
```python
trash = protocol.load_trash_bin("A3")
```
Note that you load trash before commands.
\n\n
"""

@@ -313,7 +321,7 @@ def execute_function_call(function_name: str, arguments: str) -> str:
'author': 'chatGPT',
'description': 'Transfer reagent',
}
requirements = {"robotType": "OT-2", "apiLevel": "2.15"}
requirements = {"robotType": "OT-2", "apiLevel": "2.19"}
def run(protocol):
# labware
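The new trash-bin rule added to the prompt above (load a trash bin in slot A3 for Flex protocols before issuing commands) can be illustrated with a toy sketch. The `_StubProtocol` class below is a stand-in for the real `ProtocolContext`, used only to show call order; it is not Opentrons API code:

```python
class _StubProtocol:
    # Minimal stand-in for ProtocolContext, used only to exercise call order.
    def __init__(self):
        self.loaded = []

    def load_trash_bin(self, slot):
        self.loaded.append(("trash_bin", slot))
        return ("trash_bin", slot)

requirements = {"robotType": "Flex", "apiLevel": "2.19"}

def run(protocol):
    # Per the prompt rule: load the trash bin in slot A3 before other commands.
    trash = protocol.load_trash_bin("A3")
    return trash

ctx = _StubProtocol()
result = run(ctx)
```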

Large diffs are not rendered by default.

1 change: 1 addition & 0 deletions opentrons-ai-server/api/storage/index/v219/docstore.json


@@ -0,0 +1 @@
{"graph_dict": {}}
@@ -0,0 +1 @@
{"embedding_dict": {}, "text_id_to_ref_doc_id": {}, "metadata_dict": {}}


@@ -0,0 +1 @@
{"graph_dict": {}}
@@ -0,0 +1 @@
{"embedding_dict": {}, "text_id_to_ref_doc_id": {}, "metadata_dict": {}}
@@ -0,0 +1 @@
{"index_store/data": {"79f24de7-d1f1-40ed-a8d1-d2017863ba25": {"__type__": "vector_store", "__data__": "{\"index_id\": \"79f24de7-d1f1-40ed-a8d1-d2017863ba25\", \"summary\": null, \"nodes_dict\": {\"54d007e0-a439-4299-a911-d7d93df8f0ad\": \"54d007e0-a439-4299-a911-d7d93df8f0ad\", \"995cafc5-6e39-43bb-a23b-48d135fc5687\": \"995cafc5-6e39-43bb-a23b-48d135fc5687\", \"1a57d68f-bd5e-4b19-ac44-c09f14e5cda0\": \"1a57d68f-bd5e-4b19-ac44-c09f14e5cda0\", \"507eee74-04d1-421e-8f63-8a7ad4dede7a\": \"507eee74-04d1-421e-8f63-8a7ad4dede7a\", \"9cd46042-f358-4a40-81c6-18aee50f8df3\": \"9cd46042-f358-4a40-81c6-18aee50f8df3\", \"c808470e-fa8e-4aa1-9392-29e5e6d56841\": \"c808470e-fa8e-4aa1-9392-29e5e6d56841\", \"20d7b474-fffe-481a-a28d-3393a60f7c3d\": \"20d7b474-fffe-481a-a28d-3393a60f7c3d\", \"0c611f55-b589-4106-a22b-8cc7087cd72c\": \"0c611f55-b589-4106-a22b-8cc7087cd72c\", \"aaae4fe6-132e-47cf-8ebf-f923ede94684\": \"aaae4fe6-132e-47cf-8ebf-f923ede94684\", \"4812c02a-deae-4617-a631-3d1d2681ef6d\": \"4812c02a-deae-4617-a631-3d1d2681ef6d\", \"cfcb4e69-2296-491c-b649-15e9377ac385\": \"cfcb4e69-2296-491c-b649-15e9377ac385\", \"c94276a9-98a1-4349-84d0-b78b5e8d1b7e\": \"c94276a9-98a1-4349-84d0-b78b5e8d1b7e\", \"59145c5f-26e2-4613-8df5-e10874e81f20\": \"59145c5f-26e2-4613-8df5-e10874e81f20\", \"99a604e4-a110-4600-b68b-b11b0ad3538f\": \"99a604e4-a110-4600-b68b-b11b0ad3538f\", \"433118a8-4031-4aaf-bbdc-e0965f79997e\": \"433118a8-4031-4aaf-bbdc-e0965f79997e\", \"388e0ea2-c70d-4687-b13d-d9ea9f93d6a4\": \"388e0ea2-c70d-4687-b13d-d9ea9f93d6a4\", \"ee7b7752-6013-4fc6-90c6-cb1559acf82a\": \"ee7b7752-6013-4fc6-90c6-cb1559acf82a\", \"a4c2e637-bf95-40cb-b4de-78de8b49b6b9\": \"a4c2e637-bf95-40cb-b4de-78de8b49b6b9\", \"ebe419bd-5a6e-41f3-8c29-1522cbb5495b\": \"ebe419bd-5a6e-41f3-8c29-1522cbb5495b\", \"1abdb350-e5f4-4c81-942a-1105d3053d74\": \"1abdb350-e5f4-4c81-942a-1105d3053d74\", \"f3b48282-6282-4419-9b67-287f8516f532\": \"f3b48282-6282-4419-9b67-287f8516f532\", 
\"55820e72-350c-4d01-97be-cdc60036974a\": \"55820e72-350c-4d01-97be-cdc60036974a\", \"3712e73e-6bb3-45fb-87e3-b2539f5ac7ee\": \"3712e73e-6bb3-45fb-87e3-b2539f5ac7ee\", \"c5a03af5-c467-47c4-96ff-3b15682d23bf\": \"c5a03af5-c467-47c4-96ff-3b15682d23bf\", \"112f62c0-766e-43c9-ace3-b96969d9fadb\": \"112f62c0-766e-43c9-ace3-b96969d9fadb\", \"e39682ef-e478-472e-8f0e-da1ed1b619f8\": \"e39682ef-e478-472e-8f0e-da1ed1b619f8\", \"4e40803c-97a9-4b13-8bd8-aaa8a7840760\": \"4e40803c-97a9-4b13-8bd8-aaa8a7840760\", \"f1cf7a8c-2071-4d9a-a0bb-f6b3664fb968\": \"f1cf7a8c-2071-4d9a-a0bb-f6b3664fb968\", \"527cc4e9-d27e-46b1-9faf-1052b2b76ba0\": \"527cc4e9-d27e-46b1-9faf-1052b2b76ba0\", \"33a52153-0b96-43ff-a3fc-bc2546b70869\": \"33a52153-0b96-43ff-a3fc-bc2546b70869\", \"56eec31c-5c40-416e-81fd-7c7a6722f005\": \"56eec31c-5c40-416e-81fd-7c7a6722f005\", \"3ae87c64-eb04-41c1-ac18-e26b8acfc8a3\": \"3ae87c64-eb04-41c1-ac18-e26b8acfc8a3\", \"98338da6-5a0a-4653-89be-314b1cc2ff25\": \"98338da6-5a0a-4653-89be-314b1cc2ff25\", \"515d7962-e18d-4b3d-8b44-345f171c1de2\": \"515d7962-e18d-4b3d-8b44-345f171c1de2\", \"1062af75-9e40-499f-9e34-18c8074da7a8\": \"1062af75-9e40-499f-9e34-18c8074da7a8\", \"e559f16b-b79d-497c-83c3-1040b7290ab2\": \"e559f16b-b79d-497c-83c3-1040b7290ab2\", \"fe0c19af-46e7-46d9-8fd6-66508ffc9f88\": \"fe0c19af-46e7-46d9-8fd6-66508ffc9f88\", \"08bcf1d4-3919-4d0d-8c25-64cb6cd0f6ff\": \"08bcf1d4-3919-4d0d-8c25-64cb6cd0f6ff\", \"a33c36a6-48b6-48a9-b522-e5205e1c4b8c\": \"a33c36a6-48b6-48a9-b522-e5205e1c4b8c\", \"f0b68f74-cf73-4de3-87e3-80446f9a99d8\": \"f0b68f74-cf73-4de3-87e3-80446f9a99d8\", \"6f062ef9-2a08-4252-bfea-e3a6fec7ea21\": \"6f062ef9-2a08-4252-bfea-e3a6fec7ea21\", \"d38702d2-d706-43ce-8e66-ad161eeafb07\": \"d38702d2-d706-43ce-8e66-ad161eeafb07\", \"1fae385d-2f54-4dee-b18c-3b31945b956d\": \"1fae385d-2f54-4dee-b18c-3b31945b956d\", \"465eac93-e005-49d0-8601-0352a21c45ed\": \"465eac93-e005-49d0-8601-0352a21c45ed\"}, \"doc_id_dict\": {}, \"embeddings_dict\": {}}"}}}
47 changes: 47 additions & 0 deletions opentrons-ai-server/api/utils/create_index.py
@@ -0,0 +1,47 @@
import os
import os.path
from pathlib import Path
from typing import Any, Union

from llama_index.core import Settings as llamasettings
from llama_index.core import SimpleDirectoryReader, StorageContext, VectorStoreIndex, load_index_from_storage
from llama_index.core.indices.base import BaseIndex
from llama_index.embeddings.openai import OpenAIEmbedding

ROOT_PATH: Path = Path(Path(__file__)).parent.parent.parent
llamasettings.embed_model = OpenAIEmbedding(model_name="text-embedding-3-large", api_key=os.environ["OPENAI_API_KEY"])


def create_index(data_path: str, data_file: str, index_name: str) -> Union[BaseIndex[Any], VectorStoreIndex]:
"""
# creating index using llama-index
data_path = str(ROOT_PATH / "api" / "data")
data_docs = str(ROOT_PATH / "api" / "data" / "python_api_219_docs.md")
file_name = "v219_ref"
index = create_index(data_path, data_docs, file_name)
# if one wants to check with a prompt
query_engine: Any = index.as_query_engine()
prompt = input()
response = query_engine.query(prompt)
print(response)
Settings
- os.environ["OPENAI_API_KEY"]
"""

# check if storage already exists
PERSIST_DIR = str(ROOT_PATH / "api" / "storage" / "index" / index_name)
if not os.path.exists(PERSIST_DIR):
# load the documents and create the index
documents = SimpleDirectoryReader(data_path, [data_file]).load_data()
index = VectorStoreIndex.from_documents(documents)
# store it for later
index.storage_context.persist(persist_dir=PERSIST_DIR)
return index
else:
# load the existing index
print("Using existing index.")
storage_context = StorageContext.from_defaults(persist_dir=PERSIST_DIR)
return load_index_from_storage(storage_context)
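The create-or-load pattern in `create_index` can be sketched without the llama-index dependency. This toy version reproduces only the persistence check — the real index-building and loading calls are elided as comments, and the directory name is illustrative:

```python
import os
import tempfile

def create_or_load(persist_dir: str) -> str:
    # Same control flow as create_index: build and persist on the first
    # run, load the persisted index on subsequent runs.
    if not os.path.exists(persist_dir):
        os.makedirs(persist_dir)
        # real code: VectorStoreIndex.from_documents(...).storage_context.persist(...)
        return "created"
    # real code: load_index_from_storage(StorageContext.from_defaults(persist_dir=persist_dir))
    return "loaded"

with tempfile.TemporaryDirectory() as tmp:
    target = os.path.join(tmp, "v219_ref")
    first = create_or_load(target)   # directory absent: takes the create path
    second = create_or_load(target)  # directory present: takes the load path
```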
