ASCENDING Technical Blog

Stop Kubernetes from Sabotaging Your 45-Minute Jobs with These Simple Tweaks

Written by Caleb Mabry | Jul 14, 2025 6:44:32 PM

Picture this. You're running a Celery worker processing a critical data pipeline job that's been churning away for 45 minutes. Your Kubernetes cluster decides it's time to scale down, and politely taps your pod on the shoulder with a SIGTERM. "Excuse me," it says, "could you please wrap up and leave?" Your Celery worker, deep in the middle of processing thousands of records, doesn't even look up. Thirty seconds later, Kubernetes loses its patience and sends in the bouncer with a SIGKILL. Game over.

If this scenario sounds familiar, you've experienced the drama between Kubernetes pod lifecycle management and applications that don't play nicely with termination signals. Now why would Kubernetes be so rude?

The Kubernetes Termination Handshake

When Kubernetes decides a pod needs to go, whether due to scaling, rolling updates, or node maintenance, it follows a well-defined termination flow. The termination process begins when the kubelet receives a deletion request from the API server. At this point, the pod enters the "Terminating" state, and Kubernetes starts a shutdown sequence:

  1. SIGTERM Signal: The kubelet sends a SIGTERM signal to the main process (PID 1) in each container. This is Kubernetes being polite! It's asking your application to shut down gracefully.
  2. Grace Period: By default, your application gets 30 seconds to handle the SIGTERM and shut down cleanly. This period is configurable via the terminationGracePeriodSeconds field in your pod spec.
  3. SIGKILL Signal: If your process is still running after the grace period expires, Kubernetes stops being polite. The kubelet sends a SIGKILL signal, which cannot be caught or ignored. Your process dies immediately.

This handshake exists for good reason. Graceful shutdowns allow applications to finish processing current requests, close database connections, and perform other cleanup tasks that prevent data corruption or inconsistent state.
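
To make the handshake concrete, here's a minimal sketch of a process that cooperates with it: it traps SIGTERM, finishes the unit of work it's on, and exits well inside the grace period. The work loop is just a stand-in.

import signal
import time

shutdown_requested = False

def handle_sigterm(signum, frame):
    # Kubernetes asked politely: remember the request, but finish the current unit of work.
    global shutdown_requested
    shutdown_requested = True

signal.signal(signal.SIGTERM, handle_sigterm)

while not shutdown_requested:
    time.sleep(1)  # stands in for one short unit of work

print("SIGTERM received; cleaning up and exiting before the grace period expires")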


The Celery Conundrum

Celery workers present an interesting challenge in this context. By design, Celery workers are meant to process long-running tasks, data transformations, file uploads, machine learning model training, or batch processing jobs that can run for minutes or even hours.
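
For a sense of the workloads involved, here's a hypothetical task of that shape (the myapp name, broker URL, and timings are assumptions made for this sketch):

# tasks.py - a hypothetical long-running task; names and numbers are illustrative
import time

from celery import Celery

app = Celery('myapp', broker='redis://localhost:6379/0')

@app.task
def rebuild_report(batch_id, record_count=50_000):
    # Stand-in for real per-record work; at ~10 ms per record this runs for minutes.
    for _ in range(record_count):
        time.sleep(0.01)
    return f"processed {record_count} records for batch {batch_id}"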

Consider a typical Celery worker setup in Kubernetes:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: celery-worker
spec:
  replicas: 3
  template:
    spec:
      terminationGracePeriodSeconds: 30  # Default value
      containers:
      - name: worker
        image: myapp:latest
        command: ["celery", "-A", "myapp", "worker", "--loglevel=info"]


When this worker is processing a job that takes 10 minutes to complete and Kubernetes wants to scale down and terminate the pod within 30 seconds, we have a fundamental mismatch. The worker can't just drop everything and exit; that would corrupt the job and potentially leave your system in an inconsistent state.


The Problem with Default Behavior

Out of the box, Celery workers don't handle SIGTERM in a way that's compatible with Kubernetes' expectations. When a standard Celery worker receives a SIGTERM, it attempts to shut down gracefully by:

  1. Stopping the acceptance of new tasks
  2. Waiting for currently executing tasks to complete
  3. Shutting down worker processes

The issue? Step 2 can take far longer than Kubernetes' default 30-second grace period, especially with long-running tasks. The result is jobs killed mid-execution, potentially corrupting data or leaving external systems in inconsistent states.


Configuring Graceful Shutdowns

The solution involves configuring both your Celery workers and your Kubernetes deployment to work together. Here's how you can approach it.

Extending the Grace Period

First, you need to give your pods enough time to complete their current work. If your longest-running tasks typically take 10 minutes, configure your termination grace period accordingly:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: celery-worker
spec:
  template:
    spec:
      terminationGracePeriodSeconds: 900  # 15 minutes
      containers:
      - name: worker
        image: myapp:latest
        command: ["celery", "-A", "myapp", "worker", "--loglevel=info"]

Configuring Celery for Graceful Shutdown

Celery provides several configuration options that help with graceful shutdowns:

# celeryconfig.py

# Reserve only one task per worker process at a time (no prefetching ahead)
worker_prefetch_multiplier = 1

# Keep logging predictable during shutdown
worker_hijack_root_logger = False
worker_log_format = '[%(asctime)s: %(levelname)s/%(processName)s] %(message)s'

# Acknowledge tasks only after they finish, and re-queue them if the worker dies
task_acks_late = True
task_reject_on_worker_lost = True


The worker_prefetch_multiplier = 1 setting is very important! It stops each worker process from reserving a backlog of additional tasks, which would complicate shutdown since there would be several half-claimed tasks to finish or hand back. Combined with task_acks_late and task_reject_on_worker_lost, a task that does get cut off is returned to the queue and redelivered rather than silently lost, so make sure such tasks are safe to run more than once.
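
If you prefer to scope these options to specific tasks rather than setting them globally, the task decorator accepts the same flags. A minimal sketch, with a hypothetical sync_inventory task:

from celery import Celery

app = Celery('myapp')  # hypothetical app name

# The decorator arguments override the global settings for this task only.
@app.task(acks_late=True, reject_on_worker_lost=True)
def sync_inventory(source_id):
    """Hypothetical task; shown only to illustrate per-task overrides."""
    ...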

Implementing Custom Signal Handlers

For more control over the shutdown process, you can create custom signal handlers in your Celery worker:

import signal

from celery import Celery

app = Celery('myapp')

class GracefulWorker:
    def __init__(self):
        self.worker = None
        # Handle both SIGTERM (Kubernetes) and SIGINT (Ctrl-C) the same way.
        signal.signal(signal.SIGTERM, self.handle_shutdown)
        signal.signal(signal.SIGINT, self.handle_shutdown)

    def handle_shutdown(self, signum, frame):
        print(f"Received signal {signum}, initiating graceful shutdown...")
        if self.worker is not None:
            # Stop consuming new tasks and exit once the current ones finish.
            self.worker.should_stop = True

    def run(self):
        self.worker = app.Worker(
            loglevel='info',
            without_mingle=True,
            without_heartbeat=True,
        )
        self.worker.start()

# Run this script as the container's main process (PID 1), for example
# command: ["python", "worker.py"], so it receives SIGTERM directly.
if __name__ == '__main__':
    GracefulWorker().run()


The Balancing Act

Managing long-running jobs in Kubernetes requires balancing several concerns:

Resource Efficiency vs. Job Completion: Longer grace periods mean pods stick around longer during scaling events, potentially wasting resources. However, shorter grace periods risk job interruption.

Scaling Responsiveness vs. Stability: Quick scaling responses improve resource utilization and cost efficiency, but they can disrupt long-running processes if not handled properly.

System Resilience vs. Complexity: Simple configurations are easier to manage, but robust handling of edge cases often requires more sophisticated approaches.


Advanced Strategies

For production systems handling critical long-running jobs, consider these patterns:

Pre-Stop Hooks

Kubernetes provides pre-stop hooks that execute before sending SIGTERM:

spec:
  containers:
  - name: worker
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "celery -A myapp inspect active --json | jq '.[] | length' | grep -q '^0$' || sleep 300"]

This hook checks whether the worker still reports active tasks and, if so, waits before allowing termination to proceed. Keep in mind that time spent in a preStop hook counts against terminationGracePeriodSeconds, so size the grace period to cover both the hook and the shutdown itself. If you'd rather avoid the shell pipeline, the same check can be made against just this pod's worker with Celery's inspection API, as sketched below.
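
A minimal sketch of such a helper, assuming the Celery app is importable as myapp.app and the worker uses Celery's default celery@<hostname> node name (both assumptions about your setup):

# prestop_wait.py - wait until this pod's worker reports no active tasks,
# or give up after a timeout. The myapp import path is hypothetical.
import socket
import time

from myapp import app

TIMEOUT_SECONDS = 300
POLL_INTERVAL_SECONDS = 5

def active_task_count(nodename):
    # inspect().active() returns a dict of {worker_name: [task, ...]} or None
    reply = app.control.inspect(destination=[nodename]).active() or {}
    return len(reply.get(nodename, []))

def main():
    nodename = f"celery@{socket.gethostname()}"
    deadline = time.monotonic() + TIMEOUT_SECONDS
    while time.monotonic() < deadline and active_task_count(nodename) > 0:
        time.sleep(POLL_INTERVAL_SECONDS)

if __name__ == "__main__":
    main()

You would then point the preStop exec command at this script (for example, ["python", "/app/prestop_wait.py"]) instead of the shell pipeline.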

Job Checkpointing

For extremely long-running jobs, implement checkpointing mechanisms that allow tasks to be resumed after interruption:

from celery.exceptions import SoftTimeLimitExceeded

# load_checkpoint, save_checkpoint, initialize_processing and process_chunk
# are application-specific helpers.
@app.task(bind=True)
def long_running_task(self, data, checkpoint_id=None):
    if checkpoint_id:
        # Resume from a previously saved checkpoint
        state = load_checkpoint(checkpoint_id)
    else:
        # Start fresh
        state = initialize_processing(data)

    try:
        while not state.is_complete():
            # Process one chunk of work
            state = process_chunk(state)

            # Periodically save a checkpoint
            if state.should_checkpoint():
                checkpoint_id = save_checkpoint(state)

    except SoftTimeLimitExceeded:
        # Time is up: save state so the retry can resume from the checkpoint
        checkpoint_id = save_checkpoint(state)
        raise self.retry(countdown=60, kwargs={'checkpoint_id': checkpoint_id})
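
One detail worth calling out: SoftTimeLimitExceeded is only raised if a soft time limit is actually configured. One way to do that globally, with illustrative values sized to fit inside a 900-second grace period:

# celeryconfig.py (additions) - the numbers are assumptions, not recommendations
task_soft_time_limit = 840  # raises SoftTimeLimitExceeded inside the running task
task_time_limit = 870       # hard limit: the worker forcibly terminates the task

Keeping both limits comfortably below terminationGracePeriodSeconds leaves the task room to checkpoint and schedule its retry before Kubernetes stops waiting.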


Monitoring and Observability

Implementing proper monitoring is crucial for understanding how your termination handling performs in production:

import time

import prometheus_client
from celery.signals import worker_shutting_down, worker_shutdown

shutdown_duration = prometheus_client.Histogram(
    'celery_worker_shutdown_duration_seconds',
    'Time taken for the worker to shut down gracefully'
)

shutdown_started_at = None

@worker_shutting_down.connect
def worker_shutting_down_handler(sig=None, how=None, exitcode=None, **kwargs):
    # Fired when the worker begins its shutdown sequence (e.g. on SIGTERM)
    global shutdown_started_at
    shutdown_started_at = time.monotonic()

@worker_shutdown.connect
def worker_shutdown_handler(sender=None, **kwargs):
    # Fired just before the worker process exits
    if shutdown_started_at is not None:
        shutdown_duration.observe(time.monotonic() - shutdown_started_at)
    print("Worker shutdown completed")
Track metrics like shutdown duration, interrupted tasks, and successful graceful shutdowns to optimize your configuration over time.
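
As a rough proxy for interrupted work, you can also count retries. Here's a sketch using Celery's task_retry signal; how well retries map to interruptions depends on your retry policy:

import prometheus_client
from celery.signals import task_retry

task_retries = prometheus_client.Counter(
    'celery_task_retries_total',
    'Tasks re-queued for another attempt'
)

@task_retry.connect
def task_retry_handler(sender=None, reason=None, **kwargs):
    # Fires whenever a task calls self.retry(), including the checkpoint-and-retry pattern above.
    task_retries.inc()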


Conclusion

The intersection of Kubernetes pod lifecycle management and long-running job processing requires careful consideration and planning. While Kubernetes provides the primitives for graceful shutdown through SIGTERM and configurable grace periods, it's up to your application to implement proper signal handling and cleanup logic.

The key is finding the right balance for your specific use case. Not every application needs hour-long grace periods, and not every job is critical enough to warrant complex checkpointing mechanisms. Start with the basics: proper signal handling and appropriate grace periods. Then layer on additional sophistication as your requirements demand.

Remember, the goal isn't to fight against Kubernetes lifecycle management, but to work with it. When done properly, your Celery workers will gracefully handle termination signals, your jobs will complete successfully, and your cluster will scale efficiently. That's the kind of harmony that makes both your applications and your infrastructure team happy.

The next time Kubernetes politely asks your pod to leave, make sure it knows how to say goodbye properly.