How Do I Scale Kubernetes With KEDA and Karpenter?
Introduction
Scaling Kubernetes efficiently is crucial for maintaining performance and controlling cost in cloud environments. This guide shows how to combine KEDA (Kubernetes Event-driven Autoscaling) and Karpenter to scale Kubernetes workloads dynamically. KEDA scales pods based on event sources and custom metrics, while Karpenter, an open-source node autoscaler, provisions and removes nodes to match pod demand. By integrating these tools, you can ensure that both pods and nodes scale dynamically, improving resource utilization and reducing costs.
Key Points
- KEDA: Enables event-driven autoscaling for Kubernetes workloads.
- Karpenter: Provides efficient and cost-effective node autoscaling.
- Integration: Combining KEDA and Karpenter allows for dynamic scaling of both pods and nodes based on custom metrics.
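To make the pod-scaling half concrete: KEDA feeds metric values into the Horizontal Pod Autoscaler, which computes replica counts with the standard HPA formula. The sketch below uses hypothetical values (not part of the program later in this guide) purely to illustrate that math:

```typescript
// Sketch of the HPA formula that KEDA drives with its metrics:
// desired = ceil(currentReplicas * (currentMetricValue / targetMetricValue))
function desiredReplicas(
    currentReplicas: number,
    currentMetricValue: number,
    targetMetricValue: number,
): number {
    return Math.ceil(currentReplicas * (currentMetricValue / targetMetricValue));
}

// Example: 4 replicas averaging 80% CPU against a 50% target scale out to 7.
console.log(desiredReplicas(4, 80, 50)); // 7
```

This is why a `value: "50"` CPU trigger (used later in this guide) keeps average utilization near 50%: any sustained excess pushes the ratio above 1 and adds replicas.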
Steps
Install KEDA: Deploy KEDA to your Kubernetes cluster to enable event-driven autoscaling. This involves setting up KEDA using Helm charts, which simplifies the installation process and ensures you have the latest version.
Install Karpenter: Deploy Karpenter to your Kubernetes cluster. Karpenter requires configuration to manage node scaling effectively, which is also facilitated through Helm charts.
Configure Scaling: Set up KEDA and Karpenter to work in tandem. This involves configuring a ScaledObject in KEDA for pod scaling based on specific metrics, and a Provisioner in Karpenter for node scaling.
Below is the Pulumi program to achieve this:
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// Create a Kubernetes provider from the local kubeconfig
const k8sProvider = new k8s.Provider("k8sProvider", {
    kubeconfig: process.env.KUBECONFIG,
});

// Install KEDA using Helm
const kedaNamespace = new k8s.core.v1.Namespace("keda-namespace", {
    metadata: { name: "keda" },
}, { provider: k8sProvider });

const keda = new k8s.helm.v3.Chart("keda", {
    chart: "keda",
    version: "2.4.0",
    namespace: kedaNamespace.metadata.name,
    fetchOpts: {
        repo: "https://kedacore.github.io/charts",
    },
}, { provider: k8sProvider });

// Install Karpenter using Helm
const karpenterNamespace = new k8s.core.v1.Namespace("karpenter-namespace", {
    metadata: { name: "karpenter" },
}, { provider: k8sProvider });

const karpenter = new k8s.helm.v3.Chart("karpenter", {
    chart: "karpenter",
    version: "0.5.0",
    namespace: karpenterNamespace.metadata.name,
    fetchOpts: {
        repo: "https://charts.karpenter.sh",
    },
}, { provider: k8sProvider });

// Create a KEDA ScaledObject to scale a deployment based on CPU utilization.
// It depends on the KEDA chart so the ScaledObject CRD exists first.
const scaledObject = new k8s.apiextensions.CustomResource("scaledObject", {
    apiVersion: "keda.sh/v1alpha1",
    kind: "ScaledObject",
    metadata: {
        name: "my-scaledobject",
        namespace: "default",
    },
    spec: {
        scaleTargetRef: {
            kind: "Deployment",
            name: "my-deployment",
        },
        triggers: [{
            type: "cpu",
            metadata: {
                type: "Utilization",
                value: "50",
            },
        }],
    },
}, { provider: k8sProvider, dependsOn: [keda] });

// Create a Karpenter Provisioner to manage node scaling.
// It depends on the Karpenter chart so the Provisioner CRD exists first.
const provisioner = new k8s.apiextensions.CustomResource("provisioner", {
    apiVersion: "karpenter.sh/v1alpha5",
    kind: "Provisioner",
    metadata: {
        name: "default",
    },
    spec: {
        cluster: {
            name: "my-cluster",
            endpoint: "https://my-cluster-endpoint",
        },
        ttlSecondsAfterEmpty: 30,
        limits: {
            resources: {
                cpu: 1000,
                memory: "1000Gi",
            },
        },
        provider: {
            instanceProfile: "KarpenterInstanceProfile",
        },
    },
}, { provider: k8sProvider, dependsOn: [karpenter] });
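On the node side, the `ttlSecondsAfterEmpty: 30` setting above means Karpenter reclaims a node roughly 30 seconds after its last workload pod leaves. The sketch below is a simplified illustration of that decision, not Karpenter's actual controller logic (which also accounts for daemonsets, node expiry, and drift):

```typescript
// Simplified, illustrative sketch of Karpenter's empty-node TTL check.
interface NodeState {
    podCount: number;            // workload (non-daemonset) pods on the node
    emptySinceEpochSec?: number; // when the node last became empty, if it is
}

const ttlSecondsAfterEmpty = 30; // mirrors the Provisioner spec above

function shouldTerminate(node: NodeState, nowEpochSec: number): boolean {
    if (node.podCount > 0 || node.emptySinceEpochSec === undefined) {
        return false; // node is still in use
    }
    return nowEpochSec - node.emptySinceEpochSec >= ttlSecondsAfterEmpty;
}

// A node that has been empty for 45 seconds is past the 30-second TTL.
console.log(shouldTerminate({ podCount: 0, emptySinceEpochSec: 1000 }, 1045)); // true
```

A short TTL reclaims capacity quickly after KEDA scales pods back down; a longer TTL keeps warm nodes around to absorb the next burst without a provisioning delay.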
Summary
In this guide, we demonstrated how to set up KEDA and Karpenter in a Kubernetes cluster using Pulumi. We covered the installation of both tools via Helm charts and configured them to enable dynamic scaling. A ScaledObject for KEDA scales pods based on CPU utilization, while a Provisioner for Karpenter manages node scaling. This integration allows for efficient resource management and cost savings, ensuring your Kubernetes cluster operates optimally.