This project offers tools for AI Inference, enabling developers to build Inference Gateways.
The following are some key industry terms that are important to understand for this project:
- Model: A generative AI model that has learned patterns from data and is used for inference. Models vary in size and architecture, from smaller domain-specific models to massive multi-billion parameter neural networks that are optimized for diverse language tasks.
- Inference: The process of running a generative AI model, such as a large language model or diffusion model, to generate text, embeddings, or other outputs from input data.
- Model server: A service (in our case, containerized) responsible for receiving inference requests and returning predictions from a model.
- Accelerator: Specialized hardware, such as Graphics Processing Units (GPUs), that can be attached to Kubernetes nodes to speed up computations, particularly for training and inference tasks.
And the following are more specific terms to this project:
- Scheduler: Makes decisions about which endpoint is optimal (best cost / best performance) for an inference request, based on Metrics and Capabilities from Model Serving.
- Metrics and Capabilities: Data provided by model serving platforms about performance, availability, and capabilities to optimize routing. Includes things like Prefix Cache status or LoRA Adapters availability.
- Endpoint Selector: A Scheduler combined with Metrics and Capabilities systems; together these are often referred to as an Endpoint Selection Extension (also sometimes called an "endpoint picker", or "EPP"). An illustrative sketch follows this list.
- Inference Gateway: A proxy/load-balancer which has been coupled with an Endpoint Selector. It provides optimized routing and load balancing for serving Kubernetes self-hosted generative Artificial Intelligence (AI) workloads. It simplifies the deployment, management, and observability of AI inference workloads.
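To make the relationship between these components concrete, here is a minimal, illustrative Go sketch of how an endpoint picker might filter and score model server endpoints using metrics such as queue depth, KV-cache utilization, and LoRA adapter availability. The struct fields, metric names, and scoring weights are assumptions for illustration only, not the project's actual scheduling implementation.

```go
package main

import (
	"errors"
	"fmt"
)

// endpointMetrics is a hypothetical snapshot of the "Metrics and Capabilities"
// a model server might report; field names are illustrative assumptions.
type endpointMetrics struct {
	Address        string
	QueueDepth     int      // requests waiting in the model server queue
	KVCacheUsage   float64  // fraction of KV-cache currently in use (0.0-1.0)
	ActiveAdapters []string // LoRA adapters currently loaded
}

// hasAdapter reports whether the endpoint can serve the requested LoRA adapter.
func hasAdapter(m endpointMetrics, adapter string) bool {
	if adapter == "" {
		return true // base model request
	}
	for _, a := range m.ActiveAdapters {
		if a == adapter {
			return true
		}
	}
	return false
}

// pickEndpoint filters endpoints by adapter availability, then chooses the one
// with the lowest combined load score (queue depth plus KV-cache pressure).
func pickEndpoint(endpoints []endpointMetrics, adapter string) (string, error) {
	best, bestScore := "", 0.0
	for _, m := range endpoints {
		if !hasAdapter(m, adapter) {
			continue
		}
		// Illustrative weighting: both terms contribute to predicted latency.
		score := float64(m.QueueDepth) + 10*m.KVCacheUsage
		if best == "" || score < bestScore {
			best, bestScore = m.Address, score
		}
	}
	if best == "" {
		return "", errors.New("no endpoint can serve the requested adapter")
	}
	return best, nil
}

func main() {
	pool := []endpointMetrics{
		{Address: "10.0.0.11:8000", QueueDepth: 4, KVCacheUsage: 0.9, ActiveAdapters: []string{"sql-lora-v1"}},
		{Address: "10.0.0.12:8000", QueueDepth: 1, KVCacheUsage: 0.4, ActiveAdapters: []string{"sql-lora-v1"}},
	}
	ep, err := pickEndpoint(pool, "sql-lora-v1")
	if err != nil {
		panic(err)
	}
	fmt.Println("routing request to", ep) // routing request to 10.0.0.12:8000
}
```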
For deeper insights and more advanced concepts, refer to our proposals.
This extension upgrades an ext-proc-capable proxy or gateway - such as Envoy Gateway, kGateway, or the GKE Gateway - to become an inference gateway - supporting inference platform teams self-hosting large language models on Kubernetes. This integration makes it easy to expose and control access to your local OpenAI-compatible chat completion endpoints to other workloads on or off cluster, or to integrate your self-hosted models alongside model-as-a-service providers in a higher level AI Gateway like LiteLLM, Solo AI Gateway, or Apigee.
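As an illustration of what the gateway inspects when routing, the following sketch parses the model name out of an OpenAI-compatible chat completion body; the endpoint picker uses that client-facing name when deciding where a request should go. This is a minimal, standalone example under that assumption, not code from the extension itself.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// chatCompletionRequest captures only the fields of an OpenAI-compatible
// /v1/chat/completions body that matter for routing decisions.
type chatCompletionRequest struct {
	Model    string `json:"model"`
	Messages []struct {
		Role    string `json:"role"`
		Content string `json:"content"`
	} `json:"messages"`
}

// modelNameFromBody extracts the client-requested model name, which the
// endpoint picker can then map to a base model or LoRA adapter.
func modelNameFromBody(body []byte) (string, error) {
	var req chatCompletionRequest
	if err := json.Unmarshal(body, &req); err != nil {
		return "", err
	}
	if req.Model == "" {
		return "", fmt.Errorf("request body has no model name")
	}
	return req.Model, nil
}

func main() {
	body := []byte(`{"model": "sql-assistant", "messages": [{"role": "user", "content": "SELECT ..."}]}`)
	name, err := modelNameFromBody(body)
	if err != nil {
		panic(err)
	}
	fmt.Println("client model name:", name) // client model name: sql-assistant
}
```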
The inference gateway:
- Improves the tail latency and throughput of LLM completion requests against Kubernetes-hosted model servers using an extensible request scheduling algorithm that is KV-cache and request-cost aware, avoiding evictions or queueing as load increases
- Provides Kubernetes-native declarative APIs to route client model names to use-case-specific LoRA adapters and control incremental rollout of new adapter versions, A/B traffic splitting, and safe blue-green base model and model server upgrades (see the traffic-splitting sketch after this list)
- Adds end-to-end observability around service objective attainment
- Ensures operational guardrails between different client model names, allowing a platform team to safely serve many different GenAI workloads on the same pool of shared foundation model servers for higher utilization and fewer required accelerators
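The sketch below illustrates the traffic-splitting idea referenced in the list above: a client-facing model name fans out to weighted target adapter versions, which is how an incremental LoRA rollout or A/B split could behave. The type names and weights are illustrative assumptions; the project's actual declarative API is documented on the website.

```go
package main

import (
	"fmt"
	"math/rand"
)

// targetModel is a hypothetical weighted routing target: either a base model
// or a specific LoRA adapter version served from the shared pool.
type targetModel struct {
	Name   string
	Weight int
}

// splitTraffic picks a target in proportion to its weight, e.g. a 90/10 split
// between the current adapter and a canary version during rollout.
func splitTraffic(rng *rand.Rand, targets []targetModel) string {
	total := 0
	for _, t := range targets {
		total += t.Weight
	}
	n := rng.Intn(total)
	for _, t := range targets {
		if n < t.Weight {
			return t.Name
		}
		n -= t.Weight
	}
	return targets[len(targets)-1].Name
}

func main() {
	rng := rand.New(rand.NewSource(1))
	targets := []targetModel{
		{Name: "sql-lora-v1", Weight: 90},
		{Name: "sql-lora-v2", Weight: 10}, // canary adapter version
	}
	counts := map[string]int{}
	for i := 0; i < 1000; i++ {
		counts[splitTraffic(rng, targets)]++
	}
	fmt.Println(counts) // roughly 900 v1 / 100 v2
}
```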
It currently requires a version of vLLM that supports the metrics needed to predict traffic load, as defined in the model server protocol. Support for Google's JetStream, NVIDIA Triton, text-generation-inference, and SGLang is coming soon.
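For context on that metrics dependency, here is a minimal sketch of scraping load signals from a model server's Prometheus-style /metrics endpoint. The metric names used (vllm:num_requests_waiting, vllm:gpu_cache_usage_perc) are assumptions modeled on what vLLM commonly exposes; consult the model server protocol for the authoritative set.

```go
package main

import (
	"bufio"
	"fmt"
	"net/http"
	"strconv"
	"strings"
)

// scrapeLoadSignals fetches a Prometheus-style /metrics endpoint and returns
// the gauges a scheduler might use to predict traffic load. The metric names
// below are assumptions modeled on what vLLM commonly exposes.
func scrapeLoadSignals(url string) (map[string]float64, error) {
	wanted := map[string]bool{
		"vllm:num_requests_waiting": true,
		"vllm:gpu_cache_usage_perc": true,
	}
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	signals := map[string]float64{}
	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		line := scanner.Text()
		if strings.HasPrefix(line, "#") {
			continue // skip HELP/TYPE comment lines
		}
		fields := strings.Fields(line)
		if len(fields) != 2 {
			continue
		}
		// Strip any label set, e.g. metric{model_name="..."} value.
		name := fields[0]
		if i := strings.Index(name, "{"); i >= 0 {
			name = name[:i]
		}
		if !wanted[name] {
			continue
		}
		value, err := strconv.ParseFloat(fields[1], 64)
		if err != nil {
			continue
		}
		signals[name] = value
	}
	return signals, scanner.Err()
}

func main() {
	// Hypothetical model server address; replace with a real pod endpoint.
	signals, err := scrapeLoadSignals("http://10.0.0.12:8000/metrics")
	if err != nil {
		panic(err)
	}
	fmt.Println(signals)
}
```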
This project is alpha (0.3 release). It should not be used in production yet.
Follow our Getting Started Guide to get the inference-extension up and running on your cluster!
See our website at https://gateway-api-inference-extension.sigs.k8s.io/ for detailed API documentation on leveraging our Kubernetes-native declarative APIs.
As the Inference Gateway builds towards a GA release, we will continue to expand our capabilities, namely:
- Prefix-cache aware load balancing with interfaces for remote caches
- Recommended LoRA adapter pipeline for automated rollout
- Fairness and priority between workloads within the same criticality band
- HPA support for autoscaling on aggregate metrics derived from the load balancer
- Support for large multi-modal inputs and outputs
- Support for other GenAI model types (diffusion and other non-completion protocols)
- Heterogeneous accelerators - serve workloads on multiple types of accelerator using latency and request cost-aware load balancing
- Disaggregated serving support with independently scaling pools
Follow this README to learn more about running the inference-extension end-to-end test suite on your cluster.
Our community meeting is held weekly on Thursdays at 10AM PDT (Zoom, Meeting Notes).
We currently use the #wg-serving Slack channel for communications.
Contributions are welcome; follow the dev guide to start contributing!
Participation in the Kubernetes community is governed by the Kubernetes Code of Conduct.