Deploying Prefect to Kubernetes - Part 1
Engineering
Jul 21, 2025

Let’s walk through deploying Prefect on Kubernetes together! If you’re building resilient data pipelines or orchestrating complex workflows, Prefect is a fantastic option. When paired with Kubernetes, you get a powerhouse setup that scales beautifully.
Understanding Prefect
Prefect is a Python-based workflow orchestration framework designed to help you build, run, and monitor data pipelines and scheduled tasks. It turns any Python function into an observable, orchestratable unit of work - all thanks to a couple of handy decorators.
Here are the core concepts to know:
Flows: The main unit in Prefect, organizing tasks into a complete workflow.
Tasks: Individual functions that perform specific jobs within a flow.
Deployment: Configurations that define how, where, and when flows run.
Work pool: A group of deployment configurations with shared resources or execution requirements.
Workers: Processes that poll work pools for scheduled runs and execute them, managing execution environments and resources.
Key Prefect Components
Let’s break Prefect down into the two main components you’ll deploy:
Prefect Server
This is the heart of Prefect’s backend, managing orchestration, state tracking, and logging. It includes:
API Layer: Manages communication for starting and controlling workflows.
Database: Stores metadata, task states, configurations, and logs.
Scheduler: Triggers and schedules flow runs, handles retries, and manages dependencies.
UI and Monitoring: A web interface to visualize workflows, monitor statuses, and troubleshoot.
Prefect Workers
Workers are distributed agents that execute tasks and keep things running smoothly. They handle:
Task Execution: Running tasks and managing resources.
State Management: Updating task statuses for real-time tracking.
Concurrency and Resource Management: Optimizing resource usage by running tasks in parallel.
Scalability: Running multiple workers across various environments to support large-scale workflows.
Together, the server handles orchestration, and the workers handle task execution — a perfect match for flexible, scalable workflows!
Why Kubernetes for Prefect?
Kubernetes and Prefect are a dream team. Kubernetes adds scalability, reliability, and resource efficiency to Prefect’s workflow management. Let’s explore why it’s such a great fit:
Scalability: Dynamically scale Prefect workers based on demand. If your workloads spike, Kubernetes can spin up more pods automatically.
Resource Management: Define CPU and memory limits for each worker, preventing over-provisioning and optimizing infrastructure usage.
High Availability: Kubernetes manages replicas and replaces failed pods, ensuring uninterrupted workflow execution.
Isolation and Flexibility: Prefect workers run in isolated pods, preventing conflicts and enabling multiple concurrent workloads.
Fault Tolerance: Self-healing capabilities restart failed pods and balance workloads across nodes.
Monitoring and Logging: Kubernetes integrates seamlessly with tools like Prometheus and Datadog for real-time insights.
Let’s Deploy Prefect on Kubernetes
Getting Ready for Deployment
Before deploying Prefect on Kubernetes, make sure you have the following tools:
Kubernetes Cluster: You can quickly set one up by enabling Kubernetes in Docker Desktop, or by using a lightweight distribution such as k3d.
Helm and kubectl: We will use Helm to install the charts and kubectl to inspect the resources they create.
Postgres: Prefect Server stores its metadata in Postgres; as we will see, the Helm chart can deploy one for you.
Here’s an overview of what we’ll deploy:
Prefect Server: The orchestration engine
Prefect Worker: The task execution engine
Prefect Flows: We will deploy new flows to our Prefect server.
Step 1: Add the Helm Chart Repository
First, we need to add the Helm chart repository that contains the required Prefect charts:
$ helm repo add prefect https://prefecthq.github.io/prefect-helm
$ helm repo update
This tells Helm to fetch charts from Prefect's repository, which contains the well-maintained official Prefect charts. You can verify that the Prefect charts are available with the command below:
$ helm search repo prefect
NB: Helm makes it easy to configure your deployment through values files. You can use the default values or provide custom value files when creating a release.
Step 2: Creating a dedicated namespace (Optional)
You can optionally create a namespace to isolate all the Prefect resources that will be created:
$ kubectl create namespace prefect
Step 3: Deploying The Prefect Server
To deploy the Prefect server, we will use the custom values file below and install the chart by creating a release.
Create prefect-values.yaml in your working directory and paste the custom values below:
nameOverride: "prefect-server"
fullnameOverride: "prefect-server"
namespaceOverride: "prefect"
postgresql:
  auth:
    database: prefect-server
    username: prefect-default-user
    password: prefect-default-password
serviceAccount:
  name: "prefect-server"
Install the Prefect server chart using the customized values:
$ helm install prefect-server prefect/prefect-server -n prefect -f <working-directory>/prefect-values.yaml
‘prefect-server’ is the release name.
Pass the desired namespace to the `-n` flag.
You can omit `-f <working-directory>/prefect-values.yaml` if you’re using the default values.
Check the status of the deployment:
$ helm list -n prefect
$ kubectl get all -l app.kubernetes.io/instance=prefect-server -n prefect
This should show pods, services, and deployments related to your Prefect server installation.
Step 4: Deploying Prefect Worker
Create prefect-worker.yaml in your working directory and paste the custom values below:
nameOverride: "prefect-worker"
fullnameOverride: "prefect-worker"
namespaceOverride: "prefect"
worker:
  replicaCount: 2
  config:
    workPool: "default"
  selfHostedServerApiConfig:
    apiUrl: "http://prefect-server.prefect.svc.cluster.local:4200/api"
serviceAccount:
  name: "prefect-worker"
Install the Prefect worker chart using the customized values:
$ helm install prefect-worker prefect/prefect-worker -n prefect -f <working-directory>/prefect-worker.yaml
Scaling Prefect Workloads
Managing workloads in Kubernetes means scaling resources to match demand. Here are the key strategies:
Horizontal Scaling: Increase the number of Prefect worker pods to handle more parallel tasks.
Vertical Scaling: Allocate more CPU and memory to Prefect workers and flow-run jobs through the base job template and job variables.
Resource Scaling with Job Templates: Kubernetes enables Prefect to scale resources dynamically using base job templates and job variables. By configuring resource limits and requests in the job templates, Kubernetes can efficiently allocate the necessary resources for executing Prefect tasks based on the workload size.
Node Autoscaling: A more advanced scaling option would be to use Kubernetes’ Cluster Autoscaler to automatically adjust the number of nodes in the cluster based on worker resource requirements.
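The job-template approach can be sketched at deployment time by passing job variables. Note that `cpu_request` and `memory_limit` below are hypothetical variable names: they only take effect if you have defined matching placeholders in the work pool's base job template.

```python
# Resource-related job variables for a Kubernetes work pool deployment.
# These keys are illustrative; they must correspond to variables defined
# in your work pool's base job template.
job_variables = {
    "cpu_request": "500m",   # CPU requested per flow-run pod
    "memory_limit": "1Gi",   # memory cap per flow-run pod
}

# Passed when deploying a flow, e.g.:
# my_flow.deploy(name="...", work_pool_name="default",
#                job_variables=job_variables)
```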
Observability in Prefect on Kubernetes
Keeping an eye on your workflow is essential! Observability lets you track performance, spot issues, and optimize executions. With the Prefect Helm chart, you get a Prometheus Exporter that exposes detailed metrics like task duration, success/failure rates, and worker status.
Log collection is seamless, especially when integrating with tools like Datadog. You get centralized, searchable logs for troubleshooting and performance analysis.
Advanced Features and Security Best Practices
Running Prefect in Kubernetes unlocks powerful features, especially when paired with GitOps tools like Argo CD or Flux. To keep your setup secure:
Secrets Management: Use Kubernetes Secrets to store sensitive data and configurations.
Secure Inter-Service Communication: Encrypt traffic between Prefect services with TLS.
RBAC Configuration: Implement Role-Based Access Control for fine-grained user/service access management.
Secure Web Access: Prefect Server ships with a built-in UI that provides visibility into flow runs and deployments. Secure connections to it over the internet with TLS or another applicable encryption method.
Wrapping Up
Deploying Prefect on Kubernetes might seem complex at first, but with the right setup, it becomes a robust, scalable, and highly observable orchestration platform. Whether you’re handling small ETL tasks or large-scale machine learning pipelines, this combo has you covered.
Key Takeaways:
A 15-20 minute read providing a clear introduction to Prefect, including components like flows, tasks, and schedules.
Insight into Prefect’s advantages for data pipelines, like dependency management and failure handling.
Step-by-step deployment instructions, complete with practical examples, helping readers go from setup to execution.
Resources for further learning and next steps in mastering Prefect.
Written by Elvis Segbawu



