You’ve probably measured latency, throughput, or error rates in your services. But have you ever asked: how much does it cost to process a single request?
That’s where ARPC comes in—Average Request Processing Cost. It's a powerful metric for understanding performance, efficiency, and even cloud spend at the level of individual requests.
Let’s break it down.
📊 What is ARPC?
ARPC stands for Average Request Processing Cost. It measures the average amount of resources (like CPU, memory, I/O, or dollars) that your system consumes to handle a single request.
Think of it as a blend of observability + cost analysis:
“On average, how expensive is each request to /api/users?”
🧮 What does ARPC include?
Depending on how you measure it, ARPC can include:
- 🖥️ CPU time
- 💾 Memory usage
- 📡 Network I/O
- 📦 Storage reads/writes
- 💵 Monetary cost (e.g. AWS Lambda GB-seconds or Kubernetes pod usage)
Some teams even track carbon footprint per request as part of ARPC.
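To make that concrete, here's one possible shape for a per-request cost record. The field names are purely illustrative, not a standard schema; keep only the dimensions you actually collect.

```ts
// Hypothetical per-request cost record (illustrative field names, not a standard).
interface RequestCost {
  route: string;          // e.g. "/api/users"
  cpuTimeMs: number;      // CPU time spent handling the request
  memoryMb: number;       // peak or allocated memory
  networkBytes: number;   // bytes in + bytes out
  storageOps: number;     // datastore reads + writes
  estimatedUsd?: number;  // optional monetary estimate
}

const example: RequestCost = {
  route: "/api/users",
  cpuTimeMs: 12.5,
  memoryMb: 256,
  networkBytes: 4_096,
  storageOps: 3,
};
```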
🔍 Why ARPC matters
- 🧪 Detect inefficiencies: An endpoint can look fast on latency while using 10× more CPU than its peers.
- 💰 Control cloud costs: Know which routes or services drive spend.
- ⚖️ Benchmark optimizations: Track how code changes affect cost per request.
- 🚨 Spot regressions: Monitor for spikes in resource usage over time.
⚙️ How to measure ARPC
1. Instrumentation
Use tools like:
- OpenTelemetry to tag spans with resource usage (see the sketch after this list)
- Prometheus/Grafana to collect CPU/memory stats per pod
- AWS CloudWatch / GCP Monitoring for billing or function cost
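For the OpenTelemetry route, a rough Node.js/TypeScript sketch follows. It samples the process CPU counters around a handler and attaches the delta as a span attribute; the tracer name and the `request.cpu_time_us` attribute are assumptions for this example, not an official semantic convention.

```ts
import { trace } from "@opentelemetry/api";

const tracer = trace.getTracer("arpc-demo");

// Wrap a handler so each request's CPU time is attached to its span.
export async function handleGetUsers(): Promise<void> {
  await tracer.startActiveSpan("GET /api/users", async (span) => {
    const before = process.cpuUsage();        // user + system CPU, in microseconds
    try {
      // ... do the actual request work here ...
    } finally {
      const delta = process.cpuUsage(before); // CPU consumed since `before`
      span.setAttribute("request.cpu_time_us", delta.user + delta.system);
      span.end();
    }
  });
}
```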
2. Correlate to requests
Group resource usage by request ID, route, or trace ID:
```
// Pseudo-code: per-route average
ARPC("/api/users") = total_CPU_time("/api/users") / number_of_requests("/api/users")
```
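A minimal runnable version of that grouping in TypeScript might look like this, assuming your pipeline hands you per-request CPU samples keyed by route (the `Sample` shape is made up for the example):

```ts
// Per-request samples as they might come out of your tracing pipeline.
type Sample = { route: string; cpuTimeMs: number };

// ARPC per route = total CPU time for the route / number of requests to it.
function arpcByRoute(samples: Sample[]): Map<string, number> {
  const totals = new Map<string, { cpu: number; count: number }>();
  for (const s of samples) {
    const t = totals.get(s.route) ?? { cpu: 0, count: 0 };
    t.cpu += s.cpuTimeMs;
    t.count += 1;
    totals.set(s.route, t);
  }
  const result = new Map<string, number>();
  for (const [route, t] of totals) {
    result.set(route, t.cpu / t.count);
  }
  return result;
}

// Example: Map { "/api/users" => 12.5 }, i.e. 12.5 ms of CPU per request on average.
console.log(arpcByRoute([
  { route: "/api/users", cpuTimeMs: 10 },
  { route: "/api/users", cpuTimeMs: 15 },
]));
```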
3. Export and visualize
Send metrics to a dashboard:
- 📈 Cost per endpoint
- 🔥 Most expensive requests
- 🧊 Least efficient services
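As one hedged sketch of the export step, the snippet below uses prom-client counters (the metric names are invented for this example); a Grafana panel can then divide the two rates to chart ARPC per endpoint.

```ts
import { Counter } from "prom-client";

// Two counters, both labelled by route: resource consumed and requests served.
const cpuSeconds = new Counter({
  name: "app_request_cpu_seconds_total",
  help: "Total CPU seconds spent handling requests",
  labelNames: ["route"],
});
const requests = new Counter({
  name: "app_requests_total",
  help: "Total requests handled",
  labelNames: ["route"],
});

// Call this at the end of every request with the CPU time you measured.
export function recordRequest(route: string, cpuTimeSeconds: number): void {
  cpuSeconds.labels(route).inc(cpuTimeSeconds);
  requests.labels(route).inc();
}

// PromQL for an ARPC panel (average CPU seconds per request, per route):
//   rate(app_request_cpu_seconds_total[5m]) / rate(app_requests_total[5m])
```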
📌 Real-world example
Imagine a Node.js API on AWS Lambda:
```json
{
  "function": "getUserProfile",
  "avg_duration_ms": 140,
  "avg_memory_mb": 256,
  "invocations": 10000,
  "ARPC_usd": 0.00035
}
```
Now compare that to another endpoint that uses 3× more memory for the same output. That’s where optimization starts to pay off.
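If you want to derive a compute-only figure like that yourself, here's a rough sketch using the GB-second formula. The rates below are approximate on-demand x86 Lambda prices and vary by region and architecture; an all-in ARPC number may also apportion API Gateway, data transfer, logging, and other shared costs, which is why it can land well above a pure compute estimate.

```ts
// Rough per-invocation compute cost estimate for AWS Lambda.
// Approximate on-demand x86 rates; check current regional pricing before relying on them.
const USD_PER_GB_SECOND = 0.0000166667;
const USD_PER_REQUEST = 0.0000002;

function lambdaComputeCostUsd(avgDurationMs: number, memoryMb: number): number {
  const gbSeconds = (memoryMb / 1024) * (avgDurationMs / 1000);
  return gbSeconds * USD_PER_GB_SECOND + USD_PER_REQUEST;
}

// For the example above (140 ms at 256 MB): ≈ 0.00000078 USD, compute + request charge only.
console.log(lambdaComputeCostUsd(140, 256).toFixed(8));
```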
✅ Summary checklist
- ✅ ARPC = average resources or cost per request
- ✅ Helps you optimize for performance _and_ cost
- ✅ Requires request-level instrumentation + metrics
- ✅ Great for catching silent inefficiencies in microservices
🧠 Conclusion
In modern systems, performance isn’t just about speed—it’s about efficiency. ARPC helps teams understand the true cost of serving users, request by request.
If you’re already tracking traces, logs, and metrics, ARPC is a natural next step—especially if you're scaling or trying to cut cloud spend.
It's not just how fast you serve requests, but how smartly.