Kubernetes Pod OOM Issues: Troubleshooting Memory Problems


Kubernetes Pod Out-Of-Memory (OOM) issues can cripple your production environment. While your code might seem like the obvious culprit, external modules and misconfigurations often cause memory leaks. Here’s how to diagnose and fix them.

Common Causes of OOM in Kubernetes Pods

  1. Low or Mismatched Heap Configuration – JVM applications can throw `OutOfMemoryError` if `-Xmx` (max heap size) is set too low, or get OOM-killed if the heap is allowed to grow past the container's memory limit (see the heap-sizing sketch after this list).
  2. Incorrect Pod Resource Limits – Undersized `requests` and `limits` in Kubernetes manifests.
  3. Memory Leaks in External Modules – Third-party libraries or dependencies not releasing memory properly.
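On recent JVMs (roughly 8u191 and later), one way to keep the heap and the pod limit aligned, rather than hard-coding `-Xmx`, is to size the heap as a percentage of the container's memory limit. A minimal sketch; the 75% figure is an illustrative assumption, not a recommendation:

# Size the heap relative to the container memory limit (JVM 8u191+/10+)
java -XX:MaxRAMPercentage=75.0 -jar app.jar

# Verify the heap size the JVM actually picked inside the container
kubectl exec -it <pod-name> -- java -XX:+PrintFlagsFinal -version | grep -i maxheapsize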

Troubleshooting Steps

1. Monitoring Container Logs

Check the previous container instance's logs for OOM-related errors:

kubectl logs <pod-name> --previous | grep -i "OOM"
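If the logs were rotated or the application died before logging anything, the container status still records why the last restart happened. A quick check (prints `OOMKilled` for a container that was OOM-killed; the path below covers every container in the pod):

kubectl get pod <pod-name> -o jsonpath='{.status.containerStatuses[*].lastState.terminated.reason}'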

2. Profiling Memory Usage

Use `kubectl top` to monitor pod memory:

kubectl top pods -n <namespace>
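Assuming metrics-server is installed, `kubectl top` can also break usage down per container and rank pods by memory, which helps pinpoint the hungry container quickly:

kubectl top pods -n <namespace> --containers
kubectl top pods -n <namespace> --sort-by=memory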

For Java apps, enable heap dumps on OOM:

java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/heapdump.hprof -jar app.jar
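A related flag worth considering is `-XX:+ExitOnOutOfMemoryError`, which makes the JVM exit right after writing the dump so Kubernetes restarts the container instead of leaving it limping along; combining the flags might look like this:

java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/heapdump.hprof -XX:+ExitOnOutOfMemoryError -jar app.jar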

3. Adjusting Kubernetes Resource Limits

Modify `resources` in your deployment YAML:

resources:
  requests:
    memory: "512Mi"
  limits:
    memory: "1Gi"

4. Debugging with `kubectl describe`

Inspect pod events:

kubectl describe pod <pod-name> | grep -A 10 "Events"
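Events age out of `describe` fairly quickly, so querying the event stream directly can catch OOM kills and evictions that are no longer shown:

kubectl get events -n <namespace> --field-selector involvedObject.name=<pod-name> --sort-by=.lastTimestamp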

You Should Know: Advanced Debugging Techniques

Capturing Heap Dumps in Kubernetes

For Java apps, write heap dumps to a shared volume so they can be extracted later (for example by a sidecar container or with `kubectl cp`):

containers:
- name: java-app
  command: ["java", "-XX:+HeapDumpOnOutOfMemoryError", "-XX:HeapDumpPath=/dumps/heapdump.hprof", "-jar", "app.jar"]
  volumeMounts:
  - name: dumps-volume
    mountPath: /dumps
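Once a dump lands on the shared volume, it can be copied to a workstation for analysis in Eclipse MAT or VisualVM. `kubectl cp` works as long as the container image ships `tar`; the paths below match the mount above:

kubectl cp <namespace>/<pod-name>:/dumps/heapdump.hprof ./heapdump.hprof -c java-app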

Analyzing Memory with `jmap` (Java)

Attach to a running pod and generate a heap dump:

kubectl exec -it <pod-name> -- jmap -dump:live,format=b,file=/tmp/heapdump.hprof <pid>
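If the PID is unknown, or `jmap` is not on the image, `jcmd` (when present in the container) can list the running JVMs and trigger the dump itself:

kubectl exec -it <pod-name> -- jcmd
kubectl exec -it <pod-name> -- jcmd <pid> GC.heap_dump /tmp/heapdump.hprof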

Using `pprof` for Go Applications

Enable profiling in Go apps:

import _ "net/http/pprof"

Access the memory profile:

go tool pprof http://localhost:6060/debug/pprof/heap
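The profiling port is normally not exposed outside the pod, so forward it to the local machine first (pod name and port are placeholders):

kubectl port-forward pod/<pod-name> 6060:6060
go tool pprof http://localhost:6060/debug/pprof/heap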

Linux Memory Diagnostics

Check system memory usage inside a pod:

kubectl exec -it <pod-name> -- free -m

Inspect process memory:

kubectl exec -it <pod-name> -- top
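Note that `free` and `top` report node-wide numbers even inside a container, which can be misleading. The cgroup accounting files show the figure the OOM killer actually acts on; the path depends on whether the node uses cgroup v1 or v2:

kubectl exec -it <pod-name> -- cat /sys/fs/cgroup/memory/memory.usage_in_bytes   # cgroup v1
kubectl exec -it <pod-name> -- cat /sys/fs/cgroup/memory.current                 # cgroup v2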

What Undercode Say

Kubernetes OOM errors require systematic debugging:

  • Always set memory limits to prevent pods from consuming excessive resources.
  • Enable OOM heap dumps for JVM apps to analyze crashes.
  • Profile applications using pprof, jmap, or kubectl top.
  • Check third-party dependencies—many leaks come from external libraries.
  • Use `kubectl describe` to identify evicted pods and OOM kill events.

Expected Output

Sample kubectl top output:
NAME      CPU(cores)   MEMORY(bytes)
app-pod   100m         450Mi

Sample OOM event from kubectl describe:
Events:
  Type     Reason   Age   From     Message
  ----     ------   ---   ----     -------
  Warning  OOMKill  2m    kubelet  Memory cgroup out of memory: Kill process 1234 (java) score 1000 or sacrifice child


A structured debugging approach to Kubernetes OOM issues with practical commands and fixes.

