In our effort to monitor and debug memory usage in Python applications running inside Kubernetes, we explored several tools and strategies. Here's a breakdown of what worked, what didn't, and what to consider next.
We aimed to:
Attach to a running Python process inside a Kubernetes pod
Profile its memory usage
Export insights (ideally to Grafana)
Avoid code changes or developer involvement where possible
Using Memray for memory profiling turned out to be effective when executed inside the same container as the Python application.
Connect to the application container (a sidecar or debug container won't work; see the note below):
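A typical way to get a shell in the application container looks like this (pod, namespace, and container names are placeholders):

```bash
# Open a shell inside the application container itself (not an ephemeral/debug container)
kubectl exec -it my-app-pod -n my-namespace -c app -- /bin/bash
```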
Install Memray and Dependencies:
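In our setup this meant installing Memray straight into the running container. The exact commands depend on the base image, but roughly:

```bash
# Install Memray into the container's Python environment
pip install memray

# memray attach injects into the running interpreter via a debugger,
# so gdb (or lldb) usually needs to be available in the image as well
apt-get update && apt-get install -y gdb
```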
Find the Python Process:
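Any of the usual process tools will do, for example:

```bash
# Locate the PID of the Python application; in many app containers it is PID 1,
# but pgrep confirms it either way
pgrep -f python

# Or with more detail (the [p] trick keeps grep from matching itself)
ps aux | grep [p]ython
```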
Attach Memray:
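A sketch of the attach step, with the PID and output path as placeholders (flag names are best verified against `memray attach --help` for your version):

```bash
# Attach Memray to the running process and write allocation data to a capture file
PID=1  # placeholder: the PID found in the previous step
memray attach "$PID" -o /tmp/memray-capture.bin
```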
Note: Attaching from a sidecar or ephemeral container did not work due to namespace isolation. Running inside the same container was required.
View Summary:
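Once a capture file exists, Memray's reporters can be run in the same container, for example:

```bash
# Print a terminal summary of the capture
memray summary /tmp/memray-capture.bin

# Or generate an HTML flame graph and copy it out of the pod afterwards
memray flamegraph /tmp/memray-capture.bin
```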
Limitations:
`memray` is best for on-demand profiling, not continuous monitoring
Cannot run from a sidecar or ephemeral container unless `shareProcessNamespace: true` is set and the memory namespace is somehow shared (not typical)
Output is not in a form Grafana can consume directly
Since Memray isn’t suited for long-term monitoring or Grafana integration, we explored other paths:
Pyroscope is a continuous profiling tool that integrates with Grafana. It supports Python via:
`pyroscope-python` (requires a code change; see the sketch below)
`py-spy` in Pyroscope agent mode (code-free, works like `strace`)
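To show why the first option needs a code change, instrumentation looks roughly like this (based on the pyroscope-io client; keyword names may differ between versions, and the server address is a placeholder):

```python
import pyroscope

# Minimal sketch: the application configures the Pyroscope client itself,
# which is exactly the code change the SDK route requires
pyroscope.configure(
    application_name="my.python.app",        # placeholder application name
    server_address="http://pyroscope:4040",  # placeholder in-cluster address
)
```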
However, Pyroscope's strength lies in CPU profiling; its memory profiling support is more limited or indirect. Further investigation is needed to evaluate whether it can replace or supplement Memray for memory metrics.
Next steps:
Explore `py-spy` with Pyroscope for code-free CPU profiling
Consider writing a lightweight Prometheus exporter using `psutil` to track RSS memory (see the sketch after this list)
Possibly batch export Memray data into a Grafana-compatible format (e.g., a Prometheus scrape endpoint)
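As a rough illustration of the exporter idea, a sketch along these lines could expose the application's RSS memory on a /metrics endpoint for Prometheus to scrape (the metric name, port, and sampling interval are illustrative choices, not an existing implementation):

```python
import os
import time

import psutil
from prometheus_client import Gauge, start_http_server

# Hypothetical metric exposing the resident set size of the Python process
RSS_BYTES = Gauge("python_process_rss_bytes", "Resident set size of the Python app")


def main() -> None:
    # Assumes the exporter runs inside the same container as the app;
    # tracking our own process here stands in for the real application PID
    proc = psutil.Process(os.getpid())
    start_http_server(8000)  # illustrative port for the /metrics endpoint
    while True:
        RSS_BYTES.set(proc.memory_info().rss)
        time.sleep(15)  # arbitrary scrape-friendly sample interval


if __name__ == "__main__":
    main()
```

Prometheus could then scrape that endpoint, and Grafana could graph the metric over time.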
Stay tuned for a follow-up article on Pyroscope and profiling Python memory and CPU usage over time!