KEDA (Kubernetes-based Event-Driven Autoscaling) Autoscales Based on External Metrics
KEDA is an open-source component that enables Kubernetes workloads to scale based on external events or custom metrics, going beyond the CPU- and memory-based autoscaling provided by the default Kubernetes Horizontal Pod Autoscaler (HPA). The result is event-driven, dynamic scaling driven by signals such as message queue length, HTTP request rate, database connections, or custom application metrics.
One of KEDA's key components is the “ScaledObject”, the resource that defines how a workload such as a Deployment should scale in response to an external metric (Kubernetes Jobs are handled by the companion ScaledJob resource). KEDA is especially useful when your application needs to scale on real-time events from systems like Kafka, RabbitMQ, Azure Event Hubs, or AWS SQS.
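As a concrete illustration, here is a minimal sketch that creates a ScaledObject with a RabbitMQ queue-length trigger using the official Kubernetes Python client. The deployment name `worker`, the queue name `orders`, and the `RABBITMQ_HOST` environment variable are hypothetical placeholders, and the trigger metadata fields reflect recent KEDA releases, so check the scaler documentation for the version you run.

```python
from kubernetes import client, config

# Assumes kubeconfig access to a cluster where KEDA is already installed.
config.load_kube_config()

# A ScaledObject targeting a hypothetical "worker" Deployment and scaling
# on the length of a hypothetical RabbitMQ "orders" queue.
scaled_object = {
    "apiVersion": "keda.sh/v1alpha1",
    "kind": "ScaledObject",
    "metadata": {"name": "worker-scaler", "namespace": "default"},
    "spec": {
        "scaleTargetRef": {"name": "worker"},  # the Deployment to scale
        "minReplicaCount": 0,                  # scale to zero when idle
        "maxReplicaCount": 20,
        "triggers": [
            {
                "type": "rabbitmq",
                "metadata": {
                    "queueName": "orders",
                    "mode": "QueueLength",           # target messages per replica
                    "value": "50",
                    "hostFromEnv": "RABBITMQ_HOST",  # connection string from env var
                },
            }
        ],
    },
}

# ScaledObject is a custom resource, so it is created through the
# generic CustomObjectsApi rather than a typed client.
client.CustomObjectsApi().create_namespaced_custom_object(
    group="keda.sh",
    version="v1alpha1",
    namespace="default",
    plural="scaledobjects",
    body=scaled_object,
)
```

In practice the same definition is usually applied as YAML with `kubectl apply -f`; the Python form is shown here only to make the resource's structure explicit.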
KEDA Architecture Overview
- KEDA Operator: The core component that runs in the cluster, watches ScaledObjects, and manages the scaling logic (activating and deactivating workloads and driving the underlying HPA).
- KEDA Scalers: Connectors to external metric sources (such as message queues, databases, or Prometheus) that supply the metrics used in scaling decisions.
- ScaledObject: A Kubernetes custom resource, installed via a KEDA CRD, that defines which workload to scale and which triggers and thresholds drive that scaling.
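To see how these pieces fit together, the sketch below (continuing the earlier hypothetical `worker-scaler` example) inspects what the operator does with a ScaledObject: by default it creates an HPA, conventionally named `keda-hpa-<scaledobject-name>` unless overridden, whose external metrics are served by KEDA's metrics adapter, while the ScaledObject's status reports whether the trigger is active. This assumes a recent Kubernetes Python client that exposes the `autoscaling/v2` API.

```python
from kubernetes import client, config

config.load_kube_config()

# The HPA that the KEDA operator creates behind the scenes; its metric
# entries are "External" metrics fed by the KEDA metrics adapter rather
# than the usual CPU/memory resource metrics.
hpa = client.AutoscalingV2Api().read_namespaced_horizontal_pod_autoscaler(
    name="keda-hpa-worker-scaler", namespace="default"
)
print([m.type for m in hpa.spec.metrics])  # expect "External" entries

# The ScaledObject's status conditions report whether the scaler's trigger
# is currently active, i.e. whether the external metric demands replicas.
so = client.CustomObjectsApi().get_namespaced_custom_object(
    group="keda.sh",
    version="v1alpha1",
    namespace="default",
    plural="scaledobjects",
    name="worker-scaler",
)
print(so.get("status", {}).get("conditions", []))
```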