- Rust on arm64 completed CPU-intensive tasks up to 5x faster than x86
- Arm64 reduces cold boot latency across all runtimes by up to 24%
- Python 3.11 on arm64 outperforms newer versions in memory-intensive workloads
This year’s AWS Lambda benchmark shows that the arm64 architecture consistently outperforms x86 in most workloads.
Tests covered lightweight, CPU- and memory-intensive workloads across Node.js, Python, and Rust runtimes.
For CPU-bound tasks, Rust on arm64 completed SHA-256 hash loops 4 to 5 times faster than Rust on x86 once architecture-specific assembly optimizations came into play.
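A hash-loop benchmark of this kind can be sketched in a few lines. This is an illustrative microbenchmark, not the article's actual test harness; the iteration count and payload are arbitrary choices.

```python
# Hedged sketch: a chained SHA-256 hash loop of the kind used in
# CPU-bound benchmarks. Parameters are illustrative, not the
# article's actual benchmark configuration.
import hashlib
import time

def sha256_loop(iterations: int, payload: bytes) -> float:
    """Repeatedly hash the previous digest; return elapsed seconds."""
    start = time.perf_counter()
    digest = payload
    for _ in range(iterations):
        digest = hashlib.sha256(digest).digest()
    return time.perf_counter() - start

elapsed = sha256_loop(100_000, b"benchmark-payload")
print(f"100k chained SHA-256 rounds: {elapsed:.3f}s")
```

Running the same loop on arm64 and x86 builds of the same runtime is what surfaces the architecture-level differences the benchmark measured.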
Python 3.11 on arm64 also outperformed newer versions of Python, while Node.js 22 ran substantially faster than Node.js 20 on x86.
These results show that arm64 not only improves raw computing performance but also maintains consistency across different memory configurations.
Cold start and hot start efficiency
Cold boot latency plays a crucial role in serverless applications and arm64 offers clear improvements over x86.
Across all runtimes, arm64 delivered 13% to 24% faster cold boot initialization.
Rust, in particular, recorded almost imperceptible cold boot times at 16 ms, making it well suited for latency-sensitive applications.
Warm boot performance also favored arm64, and memory-intensive workloads benefited from the architecture’s ability to handle larger memory allocations more efficiently.
Python and Node.js showed slightly greater variability, although arm64’s gains were maintained.
These performance improvements are compounded in production environments where frequent cold starts occur.
Cost analysis shows that arm64 offers 30% lower compute costs on average compared to x86.
For memory-intensive workloads, cost savings reached up to 42%, particularly for Node.js and Rust.
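The savings figures follow from how Lambda bills: memory size times billed duration times a per-GB-second rate, so a lower arm64 rate compounds with faster execution. The rates and speedup below are illustrative placeholders, not current AWS pricing or the article's measured numbers.

```python
# Hedged sketch: estimating per-invocation Lambda cost as
# memory (GB) x duration (s) x per-GB-second rate.
# Rates and the assumed speedup are illustrative placeholders.
def invocation_cost(memory_gb: float, duration_s: float, rate_per_gb_s: float) -> float:
    return memory_gb * duration_s * rate_per_gb_s

X86_RATE = 0.0000166667  # placeholder $/GB-second
ARM_RATE = 0.0000133334  # placeholder $/GB-second (~20% lower)

# A faster arm64 run compounds with the lower per-unit rate:
x86_cost = invocation_cost(1.0, 1.00, X86_RATE)
arm_cost = invocation_cost(1.0, 0.88, ARM_RATE)  # assume a 12% faster run
savings = 1 - arm_cost / x86_cost
print(f"estimated savings: {savings:.0%}")
```

Under these assumptions the two effects multiply to roughly a 30% reduction, which is the shape of the average saving the benchmark reports.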
Lightweight workloads, which rely heavily on I/O latency rather than raw computation, showed minimal performance differences between the architectures.
Since architecture choice barely affects performance in these scenarios, cost becomes the deciding factor rather than runtime selection.
In CPU- and memory-intensive workloads, arm64 delivered stronger cost-performance ratios, confirming its value in production deployments.
These benchmarks indicate that arm64 should be the default CPU target for most Lambda workloads unless specific library compatibility issues arise.
Rust workloads on arm64 maximize performance and cost savings, while Python 3.11 and Node.js 22 provide solid alternatives for other use cases.
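Switching an existing function to arm64 is a one-line change in most deployment tooling. As an illustrative example, an AWS SAM template selects the architecture via the `Architectures` property (function name and handler here are hypothetical):

```yaml
# Hedged sketch: selecting arm64 for a Lambda function in an AWS SAM
# template. Resource name, runtime, and handler are placeholders.
Resources:
  BenchmarkFunction:
    Type: AWS::Serverless::Function
    Properties:
      Runtime: python3.11
      Handler: app.handler
      Architectures:
        - arm64
```

Native dependencies must ship arm64 builds for this to work, which is the library-compatibility caveat noted above.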
Organizations that rely on Lambda for enterprise-scale applications or run multiple functions in a single data center will likely see clear efficiency improvements.
From a workstation perspective, the results suggest that developers who compile locally for CPU-intensive workloads can also benefit from native arm64 builds.
Although these benchmarks are extensive, individual workloads and dependency configurations may lead to different results, so further testing is recommended prior to wide-scale adoption.
Via Chris Ebert