
Running Kubernetes in Locked Down Networks with Talos and Zarf


Hey, it's Govardhana MK 👋

Along with a use case deep dive, we round up remote job opportunities, top news, tools, and articles in the TechOps industry.

👋 Before we begin... a big thank you to today's sponsor, SUPERHUMAN AI

Find out why 1M+ professionals read Superhuman AI daily.

In 2 years you will be working for AI

Or an AI will be working for you

Here's how you can future-proof yourself:

  1. Join the Superhuman AI newsletter – read by 1M+ people at top companies

  2. Master AI tools, tutorials, and news in just 3 minutes a day

  3. Become 10X more productive using AI

Join 1,000,000+ pros at companies like Google, Meta, and Amazon who are using AI to get ahead.

IN TODAY'S EDITION

🧠 Use Case
  • Running Kubernetes in Locked Down Networks with Talos and Zarf

🚀 Top News

👀 Remote Jobs

📚️ Resources

📒 Reddit Threads

🛠️ TOOL OF THE DAY

lifeweeks.app - Your Life in Weeks. Create a map of your life where each week is a little box.

Document your life's journey like you never have before.

🧠 USE CASE

Running Kubernetes in Locked Down Networks with Talos and Zarf

Recently, I had the chance to work on a Kubernetes implementation for one of our clients: a locked down environment with zero external network connectivity.

No internet access, no curl, no pulling images from Docker Hub, no public CA certs, not even outbound DNS resolution.

Everything had to be self contained, validated, and secure.

This setup is often referred to as an air gap environment, where systems are intentionally isolated from unsecured networks, including the public internet.

These setups are common in:

  • Defense & national security deployments

  • Critical manufacturing and SCADA networks

  • High compliance banks with data residency rules

  • Research labs with sensitive IP

Most off the shelf Kubernetes tooling breaks in these environments. What we needed was a way to:

  1. Build everything in a connected environment

  2. Transfer it physically

  3. Reconstruct a fully functioning Kubernetes platform with zero external dependencies

That's where Talos and Zarf came in, two tools purpose built for exactly these kinds of scenarios.

What is Talos?

Talos is a hardened, minimal OS built only to run Kubernetes. There's no bash, SSH, or login, just an API. It gives you:

  • An immutable OS (updates via atomic reboots)

  • A gRPC API (driven through the talosctl CLI) to manage everything: disk formatting, networking, Kubernetes bootstrap, etc.

  • Native Kubernetes integration running upstream Kubernetes components
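
The machine config referenced below is generated ahead of time with talosctl; a minimal sketch (the cluster name and endpoint are illustrative assumptions):

talosctl gen config demo-cluster https://192.168.10.10:6443
# Writes controlplane.yaml, worker.yaml, and a talosconfig
# used to authenticate talosctl against the new cluster.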

You bootstrap it like this:

talosctl apply-config --insecure --nodes 192.168.10.10 --file controlplane.yaml
talosctl bootstrap --nodes 192.168.10.10

Once bootstrapped, the control plane nodes spin up kube-apiserver, etcd, the scheduler, and the controller-manager as static pods, much as you'd see in a kubeadm based cluster.
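
To sanity check the bootstrap, pull a kubeconfig straight from the Talos API and inspect those static pods (same node IP as above):

talosctl kubeconfig --nodes 192.168.10.10
# Fetches an admin kubeconfig over the Talos gRPC API (no SSH).
kubectl get pods -n kube-system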

Why Talos Made Sense

  • Zero SSH = reduced attack surface

  • Runs both control plane and worker nodes

  • Comes with built in Kubernetes bootstrap tooling

What is Zarf?

Zarf solves the missing piece: how to bring your container images, Helm charts, manifests, binaries, and everything else into the air gap.

You describe everything in a zarf.yaml file:

kind: ZarfPackageConfig
metadata:
  name: demo
  version: 1.0.0
components:
  - name: nginx
    required: true
    images:
      - nginx:alpine
    manifests:
      - name: nginx
        files:
          - nginx-deploy.yaml

Then run this on your connected machine:

zarf package create .

This does 3 things:

  1. Downloads all images, binaries, and manifests

  2. Compresses them into a .tar.zst archive

  3. Creates a zarf-package-demo-amd64-1.0.0.tar.zst file
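
In practice you would usually skip the interactive prompt and pin the target architecture so the package matches the air gap hosts; both are standard Zarf flags, shown here as a sketch:

zarf package create . --confirm -a amd64
# --confirm skips the prompt; -a pins the package architecture.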

Transfer this to the air gap system via USB or removable storage.

Inside the air gap:

zarf package deploy zarf-package-demo-amd64-1.0.0.tar.zst

Zarf unpacks the images into the local registry, applies manifests via kubectl, and sets up everything as described, with no outbound calls.
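
One step the flow assumes: the local registry Zarf pushes into comes from its init package, which is deployed once before any app packages and is built and carried across the gap the same way (the version in the comment is a placeholder):

zarf init --confirm
# Seeds the cluster with Zarf's injector and in-cluster registry,
# using a zarf-init-amd64-<version>.tar.zst found alongside it.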

Here's a simplified view of the actual flow:


Zarf Environment (Connected):

  • Define your app stack and dependencies in zarf.yaml

  • Package container images + Helm + manifests

  • Transfer .tar.zst to the air gapped side

Kubernetes Cluster (Air Gap):

  • Talos boots each node with a strict config

  • talosctl bootstrap brings up control plane

  • Zarf unpacks and deploys app workloads

  • Everything runs on an isolated cluster (no internet required)
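
A quick end to end check from the air gapped admin workstation (same control plane IP as earlier; talosctl health assumes your talosconfig context is set):

talosctl health --nodes 192.168.10.10
# Runs Talos cluster health checks over the gRPC API.
kubectl get pods -A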

Troubleshooting & Lessons Learned

  1. Double check that transitive images (pause, coredns, init containers) are listed in zarf.yaml; see the sketch after this list.

  2. Always test with a local Zarf dev deployment on a dummy cluster before transferring.

  3. Keep Talos tokens and talosconfig secure. CA rotation is strict.

  4. Zarf packages grow fast. Despite compression, plan for multi GB sizes.

  5. Add a Zarf registry for production to avoid pushing images directly into Talos containerd.
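
On points 1 and 2, Zarf's dev commands help catch gaps before anything crosses the gap; a hedged sketch, assuming a recent Zarf release:

zarf dev find-images .
# Lists every image Zarf can discover in the package's manifests and
# charts, making missing transitive images easy to spot.
zarf dev deploy .
# Builds and deploys in one step against a throwaway connected cluster.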

This is the infrastructure equivalent of shipping a sealed black box.

Looking to promote your company, product, service, or event to 40,000+ Cloud Native Professionals? Let's work together.