Argo Workflows is an open source container-native workflow engine for orchestrating parallel jobs on Kubernetes. It is implemented as a Kubernetes CRD (Custom Resource Definition).
- Define workflows where each step in the workflow is a container.
- Model multi-step workflows as a sequence of tasks or capture the dependencies between tasks using a directed acyclic graph (DAG).
- Easily run compute-intensive jobs for machine learning or data processing on Kubernetes in a fraction of the time.
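As a quick illustration, the sketch below shows what a Workflow resource can look like: two container tasks arranged as a tiny DAG. The names, image, and messages are placeholders for illustration only, not an official example.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-dag-        # each submission gets a unique generated name
spec:
  entrypoint: main
  templates:
    - name: main
      dag:
        tasks:
          - name: hello           # first container task
            template: echo
            arguments:
              parameters: [{name: message, value: "hello"}]
          - name: world           # runs only after "hello" succeeds
            dependencies: [hello]
            template: echo
            arguments:
              parameters: [{name: message, value: "world"}]
    - name: echo                  # every step is just a container
      inputs:
        parameters:
          - name: message
      container:
        image: busybox
        command: [echo, "{{inputs.parameters.message}}"]
```

The `dependencies` field is what turns the flat list of tasks into a DAG; tasks with no unmet dependencies run in parallel.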
Argo is a Cloud Native Computing Foundation (CNCF) hosted project.
Common use cases include:
- Machine Learning pipelines
- Data and batch processing
- ETL
- Infrastructure automation
- CI/CD
- Argo Workflows is the most popular workflow execution engine for Kubernetes.
- It can run 1000s of workflows a day, each with 1000s of concurrent tasks.
- Our users say it is lighter-weight, faster, more powerful, and easier to use.
- Designed from the ground up for containers without the overhead and limitations of legacy VM and server-based environments.
- Cloud agnostic: it can run on any Kubernetes cluster.
Read what people said in our latest survey.
Access the demo environment (log in using GitHub).
Many other projects use or rely on Argo Workflows. Check out our Java, Golang, and Python clients.
To install Argo Workflows, create a namespace and apply the install manifest:

```bash
kubectl create namespace argo
kubectl apply -n argo -f https://raw.githubusercontent.com/argoproj/argo-workflows/master/manifests/install.yaml
```
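With the controller installed, a first Workflow can be created in the `argo` namespace. A minimal sketch, where the image and message are arbitrary placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-
  namespace: argo
spec:
  entrypoint: main
  templates:
    - name: main
      container:
        image: busybox
        command: [echo, "hello world"]
```

Saved to a file, it can be created with, for example, `kubectl create -f hello-world.yaml`; the controller runs the container as a pod and records the outcome in the Workflow's status.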
Who uses Argo Workflows? See the official Argo Workflows user list.
Features include:
- UI to visualize and manage Workflows
- Artifact support (S3, Artifactory, Alibaba Cloud OSS, HTTP, Git, GCS, raw)
- Workflow templating to store commonly used Workflows in the cluster (see the sketch after this list)
- Archiving Workflows after they complete, for later access
- Scheduled workflows using cron (see the sketch after this list)
- Server interface with REST API (HTTP and gRPC)
- DAG or Steps based declaration of workflows
- Step level input & outputs (artifacts/parameters)
- Loops
- Parameterization
- Conditionals
- Timeouts (step & workflow level)
- Retry (step & workflow level)
- Resubmit (memoized)
- Suspend & Resume
- Cancellation
- K8s resource orchestration
- Exit Hooks (notifications, cleanup)
- Garbage collection of completed workflows
- Scheduling (affinity/tolerations/node selectors)
- Volumes (ephemeral/existing)
- Parallelism limits
- Daemoned steps
- DinD (docker-in-docker)
- Script steps
- Event emission
- Prometheus metrics
- Multiple executors
- Multiple pod and workflow garbage collection strategies
- Automatically calculated resource usage per step
- Java/Golang/Python SDKs
- Pod Disruption Budget support
- Single-sign on (OAuth2/OIDC)
- Webhook triggering
- CLI
- Out-of-the-box and custom Prometheus metrics
- Windows container support
- Embedded widgets
- Multiplex log viewer
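To make a few of these features concrete, below is a sketch of a WorkflowTemplate stored in the cluster and a CronWorkflow that runs it on a schedule; the steps also show a loop (`withItems`) and a conditional (`when`). All names, images, parameters, and the schedule are illustrative assumptions, not an official example.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate            # reusable definition stored in the cluster
metadata:
  name: report-template
spec:
  entrypoint: main
  arguments:
    parameters:
      - name: notify
        value: "false"
  templates:
    - name: main
      steps:
        - - name: extract         # loop: one step instance per item
            template: echo
            arguments:
              parameters: [{name: message, value: "{{item}}"}]
            withItems: [users, orders, events]
        - - name: maybe-notify    # conditional step
            template: echo
            arguments:
              parameters: [{name: message, value: "sending notification"}]
            when: "{{workflow.parameters.notify}} == true"
    - name: echo
      inputs:
        parameters:
          - name: message
      container:
        image: busybox
        command: [echo, "{{inputs.parameters.message}}"]
---
apiVersion: argoproj.io/v1alpha1
kind: CronWorkflow                # scheduled runs of the template above
metadata:
  name: nightly-report
spec:
  schedule: "0 2 * * *"           # every day at 02:00
  workflowSpec:
    workflowTemplateRef:
      name: report-template
```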
We host monthly community meetings where we and the community showcase demos and discuss the current and future state of the project. Feel free to join us! For community meeting information, minutes, and recordings, please see here.
Participation in the Argo Workflows project is governed by the CNCF Code of Conduct.
Community blogs and presentations:
- Awesome-Argo: A Curated List of Awesome Projects and Resources Related to Argo
- Automation of Everything - How To Combine Argo Events, Workflows & Pipelines, CD, and Rollouts
- Argo Workflows and Pipelines - CI/CD, Machine Learning, and Other Kubernetes Workflows
- Argo Ansible role: Provisioning Argo Workflows on OpenShift
- Argo Workflows vs Apache Airflow
- CI/CD with Argo on Kubernetes
- Running Argo Workflows Across Multiple Kubernetes Clusters
- Open Source Model Management Roundup: Polyaxon, Argo, and Seldon
- Producing 200 OpenStreetMap extracts in 35 minutes using a scalable data workflow
- Argo integration review
- TGI Kubernetes with Joe Beda: Argo workflow system
Project resources:
- Argo GitHub: https://github.com/argoproj
- Argo Website: https://argoproj.github.io/
- Argo Slack: click here to join
For the security policy and how to report vulnerabilities, see SECURITY.md.