diff --git a/docs/cloud/azureai/tracing/index.md b/docs/cloud/azureai/tracing/index.md
deleted file mode 100644
index 2b928968a31..00000000000
--- a/docs/cloud/azureai/tracing/index.md
+++ /dev/null
@@ -1,103 +0,0 @@
-# Tracing from local to cloud
-
-:::{admonition} Experimental feature
-This is an experimental feature, and may change at any time. Learn [more](../../../how-to-guides/faq.md#stable-vs-experimental).
-:::
-
-The prompt flow [tracing feature](../../../how-to-guides/tracing/index.md) enables users to trace LLM calls, functions, and even LLM frameworks. Moreover, with `promptflow[azure]` installed, prompt flow can also log traces to an Azure ML workspace or Azure AI project, which makes it possible to share traces with your team members.
-
-## Installing the package
-
-```console
-pip install "promptflow[azure]>=1.11.0"
-```
-
-## Set cloud destination
-
-To log traces to the cloud, you first need an [Azure ML workspace](https://learn.microsoft.com/en-us/azure/machine-learning/concept-workspace?view=azureml-api-2) or an [Azure AI project](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/create-projects). Then you can set the destination. Make sure you have logged in to the Azure CLI (`az login`; refer to the [Azure CLI doc](https://learn.microsoft.com/en-us/cli/azure/) for more information) before executing the CLI command below:
-
-```console
-pf config set trace.destination=azureml://subscriptions/<your-subscription-id>/resourcegroups/<your-resource-group-name>/providers/Microsoft.MachineLearningServices/workspaces/<your-workspace-or-project-name>
-```
-
-Fill in your own subscription ID, resource group name, and workspace or project name, and everything is ready. When you make LLM calls, run an LLM application, or execute your flow with `pf flow test` or `pf run create`, you will see an Azure portal URL link in the console:
-
-![trace-ui-portal](../../../media/cloud/azureai/tracing/portal_url.png)
-
-Click the link to view the traces in the Azure portal, and feel free to share them with your team members.
-
-![trace-ui-portal-demo](../../../media/trace/trace-ui-portal-demo.gif)
-
-## Storage
-
-Traces in an Azure ML workspace/AI project are persisted in an [Azure Cosmos DB](https://learn.microsoft.com/en-us/azure/cosmos-db/) associated with the workspace/project. It is automatically set up the first time you execute the CLI command `pf config set trace.destination` for a workspace/project.
-
-## Set different destination
-
-Prompt flow also supports logging traces from different flows to different workspaces/projects. To configure this, set the config to `azureml` via the CLI command:
-
-```console
-pf config set trace.destination=azureml
-```
-
-Then, prepare configuration files pointing to the different workspaces/projects. Prompt flow currently recognizes the [workspace configuration file](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-configure-environment?view=azureml-api-2#local-and-dsvm-only-create-a-workspace-configuration-file) `config.json`; you can create one manually or download it from the Azure portal. This JSON file contains all required information about a workspace/project:
-
-```json
-{
-    "subscription_id": "<your-subscription-id>",
-    "resource_group": "<your-resource-group-name>",
-    "workspace_name": "<your-workspace-or-project-name>"
-}
-```
-
-When `trace.destination` is set to `azureml`, prompt flow searches for a `config.json`, starting from `.azureml` under the flow folder and then moving up to parent folders until it finds one. If no `config.json` is found, an error is raised. It is recommended to place `config.json` under a folder named `.azureml` in your flow directory, which makes it easy for prompt flow to find.
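To make the search order concrete, here is a small illustrative Python sketch of the lookup described above. This is not the actual prompt flow implementation; `find_workspace_config` is a hypothetical helper name, and the exact precedence is an assumption based on the documented behavior:

```python
import json
from pathlib import Path


def find_workspace_config(flow_dir: str) -> dict:
    # Hypothetical helper mirroring the documented lookup: start at the flow
    # folder and walk up, preferring ".azureml/config.json" at each level.
    start = Path(flow_dir).resolve()
    for folder in [start, *start.parents]:
        for candidate in (folder / ".azureml" / "config.json", folder / "config.json"):
            if candidate.is_file():
                return json.loads(candidate.read_text())
    raise FileNotFoundError("No config.json found; cannot resolve trace.destination=azureml")
```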
-
-Below is an example folder structure:
-
-```
-flows
-├── flow1
-│   ├── .azureml
-│   │   └── config.json # workspace/project A
-│   ├── flow.flex.yaml
-│   ├── llm.py
-│   ├── data.jsonl
-│   ...
-├── flow2
-│   ├── .azureml
-│   │   └── config.json # workspace/project B
-│   ├── flow.dag.yaml
-│   ├── hello.py
-│   ├── data.jsonl
-└── ...
-```
-
-Then, when you execute `flow1`, traces are logged to workspace/project A, while executing `flow2` logs traces to workspace/project B.
-
-## Disable logging to cloud
-
-When you want to disable logging traces to the cloud, you can switch back to local with the CLI command below:
-
-```console
-pf config set trace.destination=local
-```
-
-`local` is the default value for `trace.destination`; with this value, no traces are logged to Azure anymore. Note that traces are still logged locally.
-
-## Disable tracing feature
-
-Use the CLI command below to disable the prompt flow tracing feature:
-
-```console
-pf config set trace.destination=none
-```
-
-Then no traces will be logged, either locally or to the cloud.
-
-
-```{toctree}
-:maxdepth: 1
-:hidden:
-
-run_tracking
-```
\ No newline at end of file
diff --git a/docs/cloud/azureai/tracing/run_tracking.md b/docs/cloud/azureai/tracing/run_tracking.md
deleted file mode 100644
index 6124b2f0ae9..00000000000
--- a/docs/cloud/azureai/tracing/run_tracking.md
+++ /dev/null
@@ -1,32 +0,0 @@
-# Run tracking
-
-:::{admonition} Experimental feature
-This is an experimental feature, and may change at any time. Learn [more](../../../how-to-guides/faq.md#stable-vs-experimental).
-:::
-
-After you follow [tracing](./index.md) to set the trace destination to cloud and successfully run a flow locally, there will be a run record in your Azure ML workspace or Azure AI project. You can view the traces in the cloud from the Azure portal, and feel free to share them with your team members.
-
-## Portal tracing run list
-
-Besides clicking the portal URL link in the console, you can also find the results under the `Tracing` -> `Runs` tab:
-
-![img](../../../media/cloud/azureai/tracing/run_tracking_list.png)
-
-As a result, you will not find these local-to-cloud runs under the `Prompt flow` -> `Runs` tab.
-
-## View with pfazure commands
-
-You can also view the details of the run record with most of the [pfazure run commands](../../../reference/pfazure-command-reference.md#pfazure-run).
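For SDK users, a minimal sketch of the equivalent lookup with the `promptflow.azure` client might look like this; it assumes the `PFClient` run operations rather than the CLI, and all angle-bracket values are placeholders for your own workspace and run:

```python
from azure.identity import DefaultAzureCredential
from promptflow.azure import PFClient

# Point the client at the same workspace/project that trace.destination uses.
pf = PFClient(
    credential=DefaultAzureCredential(),
    subscription_id="<your-subscription-id>",
    resource_group_name="<your-resource-group-name>",
    workspace_name="<your-workspace-or-project-name>",
)

run = pf.runs.get("<your-run-name>")  # one local-to-cloud run record
print(run.status)
```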
-
-## Compare with a cloud run
-
-| | Local-to-cloud run | Cloud run |
-|---|---|---|
-| Compute | Local | Automatic runtime/Compute instance |
-| Run tracking timing | After the run is completed locally | Real-time on cloud |
-
-## Next steps
-
-Learn more about:
-- [Run prompt flow in Azure AI](../run-promptflow-in-azure-ai.md)
-- [CLI reference: pfazure](../../../reference/pfazure-command-reference.md)
\ No newline at end of file
diff --git a/docs/concepts/concept-connections.md b/docs/concepts/concept-connections.md
index e8a89a20e3c..7c64e7150e4 100644
--- a/docs/concepts/concept-connections.md
+++ b/docs/concepts/concept-connections.md
@@ -15,7 +15,7 @@ Prompt flow provides a variety of pre-built connections, including Azure OpenAI,
 | [OpenAI](https://openai.com/) | LLM or Python |
 | [Cognitive Search](https://azure.microsoft.com/products/search) | Vector DB Lookup or Python |
 | [Serp](https://serpapi.com/) | Serp API or Python |
-| [Serverless](https://learn.microsoft.com/azure/ai-studio/concepts/deployments-overview#deploy-models-as-serverless-apis) | LLM or Python |
+| [Serverless](https://learn.microsoft.com/azure/ai-studio/concepts/deployments-overview) | LLM or Python |
 | Custom | Python |
 
 By leveraging connections in prompt flow, you can easily establish and manage connections to external APIs and data sources, facilitating efficient data exchange and interaction within your AI applications.
diff --git a/docs/reference/tools-reference/llm-tool.md b/docs/reference/tools-reference/llm-tool.md
index fda02204829..675286ec851 100644
--- a/docs/reference/tools-reference/llm-tool.md
+++ b/docs/reference/tools-reference/llm-tool.md
@@ -1,7 +1,7 @@
-# LLM 
+# LLM
 
 ## Introduction
-Prompt flow LLM tool enables you to leverage widely used large language models like [OpenAI](https://platform.openai.com/), [Azure OpenAI (AOAI)](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/overview), and models in [Azure AI Studio model catalog](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/model-catalog) for natural language processing. 
+The prompt flow LLM tool enables you to leverage widely used large language models like [OpenAI](https://platform.openai.com/), [Azure OpenAI (AOAI)](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/overview), and models in the [Azure AI Studio model catalog](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/model-catalog) for natural language processing.
 
 > [!NOTE]
 > The previous version of the LLM tool is now being deprecated. Please upgrade to the latest [promptflow-tools](https://pypi.org/project/promptflow-tools/) package to consume the new LLM tools.
@@ -25,7 +25,7 @@ Create OpenAI resources, Azure OpenAI resources or MaaS deployment with the LLM
 
 - **MaaS deployment**
 
-    Create MaaS deployment for models in Azure AI Studio model catalog with [instruction](https://learn.microsoft.com/azure/ai-studio/concepts/deployments-overview#deploy-models-as-serverless-apis)
+    Create a MaaS deployment for models in the Azure AI Studio model catalog by following the [instructions](https://learn.microsoft.com/azure/ai-studio/concepts/deployments-overview).
 
     You can create a serverless connection to use this MaaS deployment.
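To illustrate, a minimal sketch of creating such a serverless connection with the promptflow SDK might look like the following; this assumes the `ServerlessConnection` entity exposed by the SDK, and the endpoint and key values are placeholders for your own MaaS deployment:

```python
from promptflow.client import PFClient
from promptflow.entities import ServerlessConnection

# Placeholder endpoint and key for a MaaS (serverless API) deployment.
connection = ServerlessConnection(
    name="my_maas_connection",
    api_key="<your-api-key>",
    api_base="<your-deployment-endpoint>",
)

pf = PFClient()
pf.connections.create_or_update(connection)
print(pf.connections.get("my_maas_connection"))
```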
diff --git a/examples/tutorials/run-flow-with-pipeline/pipeline.ipynb b/examples/tutorials/run-flow-with-pipeline/pipeline.ipynb
index 5e3ee01fce2..bcbcc6906e6 100644
--- a/examples/tutorials/run-flow-with-pipeline/pipeline.ipynb
+++ b/examples/tutorials/run-flow-with-pipeline/pipeline.ipynb
@@ -136,7 +136,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "When using the `load_component` function and the flow YAML specification, your flow is automatically transformed into a __[parallel component](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-use-parallel-job-in-pipeline?view=azureml-api-2&tabs=cliv2#why-are-parallel-jobs-needed)__. This parallel component is designed for large-scale, offline, parallelized processing with efficiency and resilience. Here are some key features of this auto-converted component:\n",
+    "When using the `load_component` function and the flow YAML specification, your flow is automatically transformed into a __[parallel component](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-use-parallel-job-in-pipeline?view=azureml-api-2&tabs=cliv2)__. This parallel component is designed for large-scale, offline, parallelized processing with efficiency and resilience. Here are some key features of this auto-converted component:\n",
    "\n",
    " - Pre-defined input and output ports:\n",
    "\n",
@@ -176,7 +176,7 @@
    "## 3.1 Declare input and output\n",
    "To supply your pipeline with data, you need to declare an input using the `path`, `type`, and `mode` properties. Please note: `mount` is the default and suggested mode for your file or folder data input.\n",
    "\n",
-    "Declaring the pipeline output is optional. However, if you require a customized output path in the cloud, you can follow the example below to set the path on the datastore. For more detailed information on valid path values, refer to this documentation - [manage pipeline inputs outputs](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-manage-inputs-outputs-pipeline?view=azureml-api-2&tabs=cli#path-and-mode-for-data-inputsoutputs)."
+    "Declaring the pipeline output is optional. However, if you require a customized output path in the cloud, you can follow the example below to set the path on the datastore. For more detailed information on valid path values, refer to this documentation - [manage pipeline inputs and outputs](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-manage-inputs-outputs-pipeline?view=azureml-api-2&tabs=cli)."
    ]
   },
   {
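To ground the auto-conversion described in this notebook, here is a minimal sketch of loading a flow as a parallel component and wiring it into a pipeline job. It assumes the `azure-ai-ml` SDK with a local `config.json` for `MLClient.from_config`; the paths, the `url` column mapping, and the compute target name are illustrative, not prescribed by the tutorial:

```python
from azure.ai.ml import Input, MLClient, dsl, load_component
from azure.identity import DefaultAzureCredential

# Authenticate against the workspace described by a local config.json.
ml_client = MLClient.from_config(credential=DefaultAzureCredential())

# Loading the flow YAML auto-converts it into a parallel component.
flow_component = load_component("flows/flow1/flow.dag.yaml")


@dsl.pipeline
def flow_pipeline(flow_input: Input):
    # "data" and "flow_outputs" are the component's pre-defined ports; the
    # "${data.url}" column mapping depends on your own flow inputs.
    flow_node = flow_component(data=flow_input, url="${data.url}")
    flow_node.compute = "cpu-cluster"  # hypothetical compute target
    return {"flow_outputs": flow_node.outputs.flow_outputs}


pipeline_job = flow_pipeline(
    flow_input=Input(path="flows/flow1/data.jsonl", type="uri_file", mode="mount")
)
ml_client.jobs.create_or_update(pipeline_job, experiment_name="flow-pipeline-demo")
```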
diff --git a/examples/tutorials/tracing/autogen-groupchat/trace-autogen-groupchat.ipynb b/examples/tutorials/tracing/autogen-groupchat/trace-autogen-groupchat.ipynb
index 7dca62366a5..e8fe8566bcd 100644
--- a/examples/tutorials/tracing/autogen-groupchat/trace-autogen-groupchat.ipynb
+++ b/examples/tutorials/tracing/autogen-groupchat/trace-autogen-groupchat.ipynb
@@ -15,8 +15,6 @@
    "AutoGen offers conversable agents powered by LLM, tool or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n",
    "Please find documentation about this feature [here](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat).\n",
    "\n",
-    "This notebook is modified based on [autogen agent chat example](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_groupchat.ipynb). \n",
-    "\n",
    "**Learning Objectives** - Upon completing this tutorial, you should be able to:\n",
    "\n",
    "- Trace LLM (OpenAI) Calls and visualize the trace of your application.\n",
@@ -45,7 +43,7 @@
    "\n",
    "You can create the config file named `OAI_CONFIG_LIST.json` from the example file: `OAI_CONFIG_LIST.json.example`.\n",
    "\n",
-    "Below code use the [`config_list_from_json`](https://microsoft.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file. \n"
+    "The code below uses the [`config_list_from_json`](https://microsoft.github.io/autogen/0.2/docs/reference/oai/openai_utils/#config_list_from_json) function to load a list of configurations from an environment variable or a JSON file. \n"
    ]
   },
   {
diff --git a/examples/tutorials/tracing/langchain/trace-langchain.ipynb b/examples/tutorials/tracing/langchain/trace-langchain.ipynb
index fcd2daeeb9c..18f2e283dd8 100644
--- a/examples/tutorials/tracing/langchain/trace-langchain.ipynb
+++ b/examples/tutorials/tracing/langchain/trace-langchain.ipynb
@@ -15,7 +15,7 @@
    "The tracing capability provided by Prompt flow is built on top of [OpenTelemetry](https://opentelemetry.io/), which gives you complete observability over your LLM applications. \n",
    "There is already a rich set of OpenTelemetry [instrumentation packages](https://opentelemetry.io/ecosystem/registry/?language=python&component=instrumentation) available in the OpenTelemetry ecosystem. \n",
    "\n",
-    "In this example we will demo how to use [opentelemetry-instrumentation-langchain](https://github.com/traceloop/openllmetry/tree/main/packages/opentelemetry-instrumentation-langchain) package provided by [Traceloop](https://www.traceloop.com/) to instrument [LangChain](https://python.langchain.com/docs/get_started/quickstart) apps.\n",
+    "In this example we will demo how to use the [opentelemetry-instrumentation-langchain](https://github.com/traceloop/openllmetry/tree/main/packages/opentelemetry-instrumentation-langchain) package provided by [Traceloop](https://www.traceloop.com/) to instrument [LangChain](https://python.langchain.com/docs/tutorials/) apps.\n",
    "\n",
    "\n",
    "**Learning Objectives** - Upon completing this tutorial, you should be able to:\n",
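A condensed sketch of the instrumentation pattern this tutorial builds up to might look like this; it assumes the `promptflow-tracing`, `opentelemetry-instrumentation-langchain`, and `langchain-openai` packages, with `OPENAI_API_KEY` set in the environment, and the model name is illustrative:

```python
from langchain_openai import ChatOpenAI
from opentelemetry.instrumentation.langchain import LangchainInstrumentor
from promptflow.tracing import start_trace

# Start the prompt flow trace collector; if trace.destination points to a
# workspace/project, spans are logged to the cloud as well as locally.
start_trace()

# Route LangChain's OpenTelemetry spans into the collector started above.
LangchainInstrumentor().instrument()

llm = ChatOpenAI(model="gpt-3.5-turbo")  # reads OPENAI_API_KEY from the environment
print(llm.invoke("What is OpenTelemetry?").content)
```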