
Reviewing Recommendations

OptiPod stores recommendations as annotations on your workloads. This guide shows you how to review, interpret, and act on these recommendations.

# List all managed workloads
kubectl get deployments -A -l optipod.io/enabled=true
# Show workloads with their managing policy
kubectl get deployments -A \
-o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,POLICY:.metadata.annotations.optipod\.io/policy

Step 1: View Recommendations for a Specific Workload

# Get all OptiPod annotations
kubectl get deployment my-app -o json | \
jq '.metadata.annotations | with_entries(select(.key | startswith("optipod.io/")))'

OptiPod stores recommendations as separate annotations for each container and resource type:

annotations:
  # Management
  optipod.io/managed: "true"
  optipod.io/policy: "production-policy"
  optipod.io/last-recommendation: "2026-01-26T10:30:00Z"
  # Recommendations for 'nginx' container
  optipod.io/recommendation.nginx.cpu-request: "250m"
  optipod.io/recommendation.nginx.memory-request: "512Mi"
  optipod.io/recommendation.nginx.cpu-limit: "500m"
  optipod.io/recommendation.nginx.memory-limit: "1Gi"
  # Recommendations for 'sidecar' container
  optipod.io/recommendation.sidecar.cpu-request: "50m"
  optipod.io/recommendation.sidecar.memory-request: "128Mi"
  optipod.io/recommendation.sidecar.cpu-limit: "100m"
  optipod.io/recommendation.sidecar.memory-limit: "256Mi"
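Because the keys follow a fixed `optipod.io/recommendation.<container>.<resource>` pattern, they can be parsed generically. Here is a minimal jq sketch using a subset of the sample annotations above as inline data (no cluster access needed):

```shell
# Group recommendation annotations by container and print each CPU request.
# split(".") on "optipod.io/recommendation.nginx.cpu-request" yields
# ["optipod", "io/recommendation", "nginx", "cpu-request"], so the
# container name is index 2 and the resource type is index 3.
jq -r 'to_entries
  | map(select(.key | startswith("optipod.io/recommendation.")))
  | group_by(.key | split(".")[2])
  | map("\(.[0].key | split(".")[2]): cpu-request=\(map(select(.key | endswith(".cpu-request")))[0].value)")[]' <<'EOF'
{
  "optipod.io/managed": "true",
  "optipod.io/recommendation.nginx.cpu-request": "250m",
  "optipod.io/recommendation.nginx.memory-request": "512Mi",
  "optipod.io/recommendation.sidecar.cpu-request": "50m"
}
EOF
```

This prints one line per container (`nginx: cpu-request=250m`, `sidecar: cpu-request=50m`); swap the inline JSON for `kubectl get deployment my-app -o json | jq .metadata.annotations` to run it against a live workload.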

Verify recommendations are recent:

kubectl get deployment my-app \
-o jsonpath='{.metadata.annotations.optipod\.io/last-recommendation}'
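To act on staleness programmatically, compute the annotation's age. A small sketch, assuming GNU `date` (`date -d` is not available in BSD/macOS `date`); the function name is illustrative:

```shell
# Hours elapsed since an RFC 3339 timestamp such as "2026-01-26T10:30:00Z"
recommendation_age_hours() {
  local t_then t_now
  t_then=$(date -u -d "$1" +%s)   # parse the annotation timestamp (GNU date)
  t_now=$(date -u +%s)
  echo $(( (t_now - t_then) / 3600 ))
}

# Example: feed it the annotation read above
# age=$(recommendation_age_hours "$(kubectl get deployment my-app \
#   -o jsonpath='{.metadata.annotations.optipod\.io/last-recommendation}')")
# [ "$age" -gt 24 ] && echo "recommendation is stale"
```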

Step 2: List All Containers with Recommendations

kubectl get deployment my-app -o json | \
jq -r '.metadata.annotations | keys[] | select(startswith("optipod.io/recommendation.")) | select(contains(".cpu-request")) | split(".")[2]' | sort -u

Step 3: View Recommendations for Each Container

# For a specific container (e.g., 'nginx')
echo "CPU Request: $(kubectl get deployment my-app -o jsonpath='{.metadata.annotations.optipod\.io/recommendation\.nginx\.cpu-request}')"
echo "Memory Request: $(kubectl get deployment my-app -o jsonpath='{.metadata.annotations.optipod\.io/recommendation\.nginx\.memory-request}')"
echo "CPU Limit: $(kubectl get deployment my-app -o jsonpath='{.metadata.annotations.optipod\.io/recommendation\.nginx\.cpu-limit}')"
echo "Memory Limit: $(kubectl get deployment my-app -o jsonpath='{.metadata.annotations.optipod\.io/recommendation\.nginx\.memory-limit}')"
# Get current resources
kubectl get deployment my-app -o json | \
jq '.spec.template.spec.containers[] | {name, resources}'
# Get recommended resources
kubectl get deployment my-app -o json | \
jq -r '.metadata.annotations | to_entries |
map(select(.key | startswith("optipod.io/recommendation."))) |
group_by(.key | split(".")[2]) |
map({container: .[0].key | split(".")[2], recommendations: map({(.key | split(".")[3]): .value}) | add})'

OptiPod calculates recommendations based on:

  1. Metrics Collection: Gathers CPU and memory usage over the rolling window
  2. Percentile Calculation: Uses the configured percentile (P50, P90, or P99)
  3. Safety Factor: Applies a multiplier for headroom (default: 1.2 = 20% headroom)
  4. Bounds Enforcement: Ensures recommendations stay within min/max bounds

Formula:

Recommendation = Percentile(Usage) × SafetyFactor
Bounded by: [ResourceBounds.Min, ResourceBounds.Max]

Given:

  • P90 CPU usage: 200m
  • Safety factor: 1.2
  • CPU bounds: min=10m, max=4000m

Calculation:

Recommended CPU = 200m × 1.2 = 240m
Within bounds: 10m ≤ 240m ≤ 4000m ✓
Final recommendation: 240m
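The steps above can be scripted, for example to sanity-check a recommendation against your own metrics. A minimal bash/awk sketch (the values mirror the worked example; the variable names are illustrative):

```shell
usage_m=200   # P90 CPU usage, in millicores
safety=1.2    # safety factor
min_m=10      # ResourceBounds.Min
max_m=4000    # ResourceBounds.Max

# Recommendation = Percentile(Usage) x SafetyFactor (awk handles the fraction)
rec=$(awk -v u="$usage_m" -v s="$safety" 'BEGIN { printf "%d", u * s }')

# Bounds enforcement: clamp into [min, max]
if (( rec < min_m )); then rec=$min_m; fi
if (( rec > max_m )); then rec=$max_m; fi

echo "Final recommendation: ${rec}m"   # Final recommendation: 240m
```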

Limits are calculated from requests using multipliers:

CPU Limit = CPU Request × CPULimitMultiplier (default: 1.0)
Memory Limit = Memory Request × MemoryLimitMultiplier (default: 1.1)
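For instance, with the default multipliers and the 250m/512Mi requests from the sample annotations (that earlier example evidently used larger, policy-specific multipliers; how OptiPod rounds is not specified on this page, so this sketch simply truncates):

```shell
cpu_request_m=250   # millicores
mem_request_mi=512  # MiB
cpu_mult=1.0        # CPULimitMultiplier (default)
mem_mult=1.1        # MemoryLimitMultiplier (default)

cpu_limit=$(awk -v r="$cpu_request_m" -v m="$cpu_mult" 'BEGIN { printf "%d", r * m }')
mem_limit=$(awk -v r="$mem_request_mi" -v m="$mem_mult" 'BEGIN { printf "%d", r * m }')
echo "limits: cpu=${cpu_limit}m memory=${mem_limit}Mi"   # limits: cpu=250m memory=563Mi
```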

Use this framework to decide whether to apply a recommendation:

  Apply with confidence when:

  • Change: Less than 50% reduction or less than 100% increase
  • Workload: Non-critical (dev, staging, non-production)
  • Metrics: 7+ days of data
  • Savings: More than 30%

  Apply with caution when:

  • Change: 50-75% reduction or 100-200% increase
  • Workload: Important but not critical
  • Metrics: 3-7 days of data
  • Savings: 20-30%
  • Action: Monitor closely for 24-48 hours after applying

  Hold off when:

  • Change: More than 75% reduction or more than 200% increase
  • Workload: Critical production workload
  • Metrics: Less than 3 days of data
  • Action: Wait for more data or adjust policy settings
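The size-of-change thresholds can be folded into a quick triage helper. A sketch (the function names are illustrative, not part of OptiPod; it only scores the magnitude of the change, so workload criticality and metrics age still need a human check):

```shell
# Signed percent change from current to recommended (e.g. millicores)
change_pct() {
  awk -v c="$1" -v n="$2" 'BEGIN { printf "%d", (n - c) * 100 / c }'
}

# Map a percent change onto the decision tiers above
classify_change() {
  local pct=$1
  if (( pct <= -75 || pct >= 200 )); then
    echo "high risk: wait for more data"
  elif (( pct <= -50 || pct >= 100 )); then
    echo "medium risk: apply, then monitor 24-48 hours"
  else
    echo "low risk: safe to apply"
  fi
}

classify_change "$(change_pct 1000 240)"   # a 76% reduction -> high risk
```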

Update your deployment YAML with recommended values:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    spec:
      containers:
        - name: nginx
          resources:
            requests:
              cpu: 250m # From optipod.io/recommendation.nginx.cpu-request
              memory: 512Mi # From optipod.io/recommendation.nginx.memory-request
            limits:
              cpu: 500m # From optipod.io/recommendation.nginx.cpu-limit
              memory: 1Gi # From optipod.io/recommendation.nginx.memory-limit

Commit to Git and let your GitOps tool apply the changes.

kubectl set resources deployment my-app \
--requests=cpu=250m,memory=512Mi \
--limits=cpu=500m,memory=1Gi

Switch to Auto mode to let OptiPod apply recommendations automatically:

apiVersion: optipod.optipod.io/v1alpha1
kind: OptimizationPolicy
metadata:
  name: my-policy
spec:
  mode: Auto # Changed from Recommend
  # ... rest of spec
# List pods
kubectl get pods -l app=my-app
# Describe pod for details
kubectl describe pod <pod-name>
# Check for restarts
kubectl get pods -l app=my-app \
-o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.containerStatuses[*].restartCount}{"\n"}{end}'
# Current usage
kubectl top pod -l app=my-app
# Watch usage over time
watch -n 5 'kubectl top pod -l app=my-app'
# Check for OOM kills
kubectl get events --field-selector reason=OOMKilled --sort-by='.lastTimestamp'
# Check for CPU throttling (if metrics available)
kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces/default/pods | \
jq '.items[] | select(.metadata.labels.app=="my-app") | {name: .metadata.name, containers: .containers}'
# Check logs for errors
kubectl logs -l app=my-app --tail=100 --all-containers

Using the OptiPod Recommendation Report Script


OptiPod provides a comprehensive script to generate HTML or JSON reports of all recommendations across your cluster.

Download and run the script:

# Download the script
curl -fsSL https://raw.githubusercontent.com/Sagart-cactus/optipod/main/scripts/optipod-recommendation-report.sh -o optipod-recommendation-report.sh
chmod +x optipod-recommendation-report.sh
# Generate HTML report
./optipod-recommendation-report.sh -o html -f optipod-recommendations.html
# Open in browser
open optipod-recommendations.html

Generate JSON for automation:

./optipod-recommendation-report.sh -o json -f optipod-recommendations.json

Filter by namespace:

./optipod-recommendation-report.sh -o html -f report.html --namespace production

The report includes:

  • Current vs recommended resources for all containers
  • Replica-weighted impact calculations
  • Warnings for potential issues
  • Sortable and filterable table
  • Visual summary cards

(Screenshot: OptiPod Recommendation Report)

If you need a simple command-line summary, here’s an example script:

#!/bin/bash
# This is an example script - save as review-recommendations.sh
echo "Reviewing all OptiPod recommendations..."
echo "========================================"
echo
kubectl get deployments -A -o json | \
jq -r '.items[] |
select(.metadata.annotations."optipod.io/managed" == "true") |
{
namespace: .metadata.namespace,
name: .metadata.name,
policy: .metadata.annotations."optipod.io/policy",
lastRecommendation: .metadata.annotations."optipod.io/last-recommendation",
containers: [
.metadata.annotations | to_entries |
map(select(.key | startswith("optipod.io/recommendation.") and contains(".cpu-request"))) |
map(.key | split(".")[2])
] | flatten | unique
} |
"\(.namespace)/\(.name):\n Policy: \(.policy)\n Last Updated: \(.lastRecommendation)\n Containers: \(.containers | join(", "))\n"'

Export Recommendations to YAML (Example Script)


Here’s an example script to export recommendations in YAML format:

export-recommendations.sh
#!/bin/bash
# Save this script and make it executable: chmod +x export-recommendations.sh
DEPLOYMENT=$1
NAMESPACE=${2:-default}
if [ -z "$DEPLOYMENT" ]; then
echo "Usage: $0 <deployment-name> [namespace]"
exit 1
fi
echo "# Recommended resources for $NAMESPACE/$DEPLOYMENT"
echo "# Generated: $(date)"
echo
kubectl get deployment "$DEPLOYMENT" -n "$NAMESPACE" -o json | \
jq -r '
  .metadata.annotations |
  to_entries |
  map(select(.key | startswith("optipod.io/recommendation."))) |
  group_by(.key | split(".")[2]) |
  map({
    container: .[0].key | split(".")[2],
    resources: (
      map({
        key: (.key | split(".")[3] | gsub("-"; "_")),
        value: .value
      }) | from_entries
    )
  }) |
  map("  - name: \(.container)\n    resources:\n      requests:\n        cpu: \(.resources.cpu_request // "N/A")\n        memory: \(.resources.memory_request // "N/A")\n      limits:\n        cpu: \(.resources.cpu_limit // "N/A")\n        memory: \(.resources.memory_limit // "N/A")")[]
'

Usage:

# Save the script above as export-recommendations.sh
chmod +x export-recommendations.sh
# Export recommendations
./export-recommendations.sh my-app production > recommended-resources.yaml

Best Practices

  1. Review regularly: Check recommendations weekly
  2. Start small: Apply to non-critical workloads first
  3. Monitor closely: Watch for 24-48 hours after applying
  4. Document decisions: Note why you applied or rejected recommendations
  5. Track results: Measure actual savings and impact
  6. Iterate: Adjust policy settings based on results
  7. Automate gradually: Start with Recommend mode, move to Auto mode after building confidence

No recommendations appearing? Check:
# Verify workload is labeled correctly
kubectl get deployment my-app -o jsonpath='{.metadata.labels}'
# Check if workload is managed
kubectl get deployment my-app -o jsonpath='{.metadata.annotations.optipod\.io/managed}'
# Check policy status
kubectl describe optimizationpolicy <policy-name>
# Verify metrics are available
kubectl logs -n optipod-system -l app.kubernetes.io/name=optipod --tail=50

Recommendations look too high or too low? Investigate:
# Check policy configuration
kubectl get optimizationpolicy <policy-name> -o yaml
# Review percentile setting
kubectl get optimizationpolicy <policy-name> -o jsonpath='{.spec.metricsConfig.percentile}'
# Review safety factor
kubectl get optimizationpolicy <policy-name> -o jsonpath='{.spec.metricsConfig.safetyFactor}'
# Check resource bounds
kubectl get optimizationpolicy <policy-name> -o jsonpath='{.spec.resourceBounds}'
# Check actual resource usage
kubectl top pod -l app=my-app

Recommendations not updating? Verify:
# Check last recommendation timestamp
kubectl get deployment my-app \
-o jsonpath='{.metadata.annotations.optipod\.io/last-recommendation}'
# Check policy mode
kubectl get optimizationpolicy <policy-name> -o jsonpath='{.spec.mode}'
# Check reconciliation interval
kubectl get optimizationpolicy <policy-name> -o jsonpath='{.spec.reconciliationInterval}'
# Review operator logs
kubectl logs -n optipod-system -l app.kubernetes.io/name=optipod --tail=100