Why should a container have only one process?
TechOps Examples
Hey — It's Govardhana MK 👋
Along with a use case deep dive, we identify the remote job opportunities, top news, tools, and articles in the TechOps industry.
👋 Before we begin... a big thank you to today's sponsor NOTOPS
⚡ Cloud-Native Without the Learning Curve
Skip the months of trial and error. NotOps.io gives you an ideal Kubernetes and AWS setup, so you’re production-ready on day one.
🔄 Automated Day-Two Operations
Focus on innovation, not maintenance. NotOps.io automates patching, upgrades, and updates for EKS control planes, nodes, and tools like Argo CD.
👀 Why NotOps.io?
From stability to security to speed, get results from day one.
IN TODAY'S EDITION
🧠 Use Case
Why should a container have only one process?
🚀 Top News
👀 Remote Jobs
Appinio is hiring a Senior AI Engineer
Remote Location: Worldwide
Tala is hiring a Senior Cloud Infrastructure Engineer
Remote Location: India
📚️ Resources
Writer RAG tool: build production-ready RAG apps in minutes
RAG in just a few lines of code? We’ve launched a predefined RAG tool on our developer platform, making it easy to bring your data into a Knowledge Graph and interact with it using AI. With a single API call, Writer LLMs will intelligently call the RAG tool to chat with your data.
Integrated into Writer’s full-stack platform, it eliminates the need for complex vendor RAG setups, making it quick to build scalable, highly accurate AI workflows just by passing a graph ID of your data as a parameter to your RAG tool.
🛠️ TOOL OF THE DAY
aws-parallelcluster - an AWS-supported, open source cluster management tool to deploy and manage HPC clusters in the AWS cloud.
🧠 USE CASE
Why should a container have only one process?
Before we jump into the actual context of one process per container, let me first answer this:
'Can you run multiple processes in a container?' Oh yes, you can.
But, imagine these technical scenarios:
📌 A container running a MySQL database and a logging service together experiences high memory usage.
📌 Your Flask application running alongside a cron job in one container leads to overlapping log files.
📌 A Dockerized application running multiple tightly coupled processes can’t handle a clean shutdown, leaving orphaned child processes.
Running into these kinds of scenarios is not uncommon, but they should definitely be avoided.
These days you can containerize almost any application with ease, from legacy tech to modern stacks.
But how you design it for scaling, reliability, and smooth operation makes all the difference.
The Principles of Container-Based Application Design whitepaper is a fantastic guide that every cloud-native engineer should read to understand meaningful application design principles for modern infrastructure.
Among its principles, the Single Concern Principle is the one relevant here: a container should address a single concern, which in practice means running a single process per container.
Why Stick to One Process Per Container?
1. Isolation Simplifies Debugging
A single-process container means logs, resource usage, and errors are tied to one application.
Example: If your web server crashes, the logs clearly indicate the issue without interference from a database or background worker.
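A quick sketch of what that looks like day to day, assuming two single-process containers named web and db (the names are hypothetical):

```bash
# One process per container means one clean log stream per container
docker logs --tail 100 web   # only the web server's output
docker logs --tail 100 db    # only the database's output

# Resource usage is attributable to a single process as well
docker stats web db
```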
2. Scaling Becomes Granular
You can scale individual components independently.
Scenario: A container running an Nginx web server can be scaled up to handle increasing HTTP traffic, without wasting resources scaling a database bundled in the same container.
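As a minimal sketch, assuming the web tier runs in its own Kubernetes Deployment called web (the name is hypothetical), only that tier gets more replicas while the database is left alone:

```bash
# Scale the stateless web containers independently of the database
kubectl scale deployment/web --replicas=5
```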
3. Reduced Blast Radius
If a container running one process fails, it impacts only that specific functionality.
Example: If a Redis container crashes, it doesn’t bring down the web server because they're isolated.
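A minimal Docker Compose sketch of that isolation (service names and image tags are illustrative): if the cache container dies, its restart policy brings it back without ever touching the web container.

```yaml
# docker-compose.yml (sketch): one process per container,
# each service is its own failure domain
services:
  web:
    image: nginx:1.27
    restart: unless-stopped
    ports:
      - "8080:80"
  cache:
    image: redis:7
    restart: unless-stopped
```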
4. Better Resource Management
Orchestrators like Kubernetes can allocate CPU and memory more accurately when containers focus on one task.
Scenario: A memory-hungry Java application in its own container can't starve a lightweight monitoring tool running in a separate container, because each container gets its own limits.
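As a sketch (names and resource sizes are made up), each single-process container declares its own requests and limits:

```yaml
# Sketch: the JVM and the monitoring agent are separate containers,
# so the Java app's memory appetite can't starve the agent
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
    - name: java-app
      image: example/java-app:1.0      # hypothetical image
      resources:
        requests: { cpu: "500m", memory: "1Gi" }
        limits:   { cpu: "1",    memory: "2Gi" }
    - name: monitoring-agent
      image: example/agent:1.0         # hypothetical image
      resources:
        requests: { cpu: "50m",  memory: "64Mi" }
        limits:   { cpu: "100m", memory: "128Mi" }
```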
On the contrary,
What Goes Wrong with Multiple Processes in a Container?
📌 Docker monitors the main process (PID 1). If other processes spawn within the container, they might become orphaned or zombie processes.
📌 Processes sharing the same container can fight for CPU and memory, causing performance degradation.
📌 Logs from multiple processes mix together, making it difficult to trace issues.
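Here's a sketch of the anti-pattern (the app and script names are hypothetical). The shell becomes PID 1, so `docker stop` signals only the shell, and the backgrounded worker can be left behind:

```dockerfile
# Anti-pattern sketch: two processes behind a shell PID 1.
# SIGTERM reaches /bin/sh, not the children; worker output and
# app output also interleave in a single log stream.
FROM python:3.12-slim
WORKDIR /app
COPY . .
CMD python worker.py & python app.py
```

For the zombie-reaping part specifically, `docker run --init` injects a minimal init process as PID 1, but it doesn't solve the resource contention or the mixed logs.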
When running multiple processes in one container is genuinely unavoidable, evaluate the trade-offs carefully and proceed with caution:
Use a Process Manager: Tools like supervisord can supervise multiple processes within the container (see the supervisord sketch after this list).
Handle Signals Properly: Forward SIGTERM to child processes and clean them up on shutdown; SIGKILL cannot be trapped, so cleanup has to finish within the stop grace period (a trap-based entrypoint sketch follows).
Plan for Failures: Implement health checks for all processes to catch individual failures early.
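A minimal supervisord sketch (program names and commands are hypothetical), run as the container's single entrypoint so it can supervise and restart both processes:

```ini
; supervisord.conf (sketch)
[supervisord]
nodaemon=true                    ; stay in the foreground as PID 1

[program:web]
command=gunicorn app:app --bind 0.0.0.0:8000
autorestart=true
stdout_logfile=/dev/fd/1         ; send output to the container log
stdout_logfile_maxbytes=0

[program:scheduler]
command=python scheduler.py
autorestart=true
stdout_logfile=/dev/fd/1
stdout_logfile_maxbytes=0
```

And a trap-based entrypoint sketch for signal handling (the binary name is hypothetical):

```bash
#!/bin/sh
# entrypoint.sh (sketch): forward SIGTERM to the child and wait for it
trap 'kill -TERM "$child" 2>/dev/null' TERM INT
/usr/local/bin/my-app &
child=$!
wait "$child"
```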
Whenever possible, design your containers to run a single process each for better isolation, scalability, and resource efficiency.
‘Always prioritize scalability’, says the engineer who hardcoded configs into production 🥱
— Govardhana Miriyala Kannaiah (@govardhana_mk)
6:35 AM • Dec 7, 2024