azureml-examples/cli/generative-ai/promptflow/deploy-flow
Files in this folder:

- image_build_with_reqirements/
- README.md
- command.sh
- deployment-with-docker-context-environment.yaml
- deployment.yaml
- endpoint-system-identity.yaml
- endpoint-user-identity.yaml
- model.yaml
- sample-request.json

README.md

Deploy a prompt flow to AzureML managed online endpoint

This folder contains samples for deploying a prompt flow to a managed online endpoint.

We use the sample flow basic-chat. To learn more about how to run this flow locally, please refer to the prompt flow quick start guide.

The commands to deploy the flow to a managed online endpoint are in the command.sh file.
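The deployment follows the usual managed online endpoint workflow: create the endpoint, register the flow as a model, then create the deployment. A minimal sketch using the YAML files in this folder (the exact steps and flags in command.sh may differ, and you must be logged in with the Azure CLI ml extension, with a workspace and resource group configured):

```shell
# Create the managed online endpoint (system-assigned identity variant)
az ml online-endpoint create --file endpoint-system-identity.yaml

# Register the flow folder as a model in the workspace
az ml model create --file model.yaml

# Create the deployment and route 100% of traffic to it
az ml online-deployment create --file deployment.yaml --all-traffic
```

To use a user-assigned identity instead, create the endpoint from endpoint-user-identity.yaml; to build the environment from a Docker context, use deployment-with-docker-context-environment.yaml.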

For detailed guidance on how to deploy a prompt flow to a managed online endpoint, please refer to this document.
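Once the deployment is live, you can smoke-test it with the provided sample payload. A sketch, assuming your endpoint name replaces the placeholder:

```shell
# Send a test chat request to the deployed flow using the sample payload
az ml online-endpoint invoke --name <endpoint-name> --request-file sample-request.json
```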