Disk I/O Beats Memory Caching? A Surprising Benchmark

2025-09-05

Conventional wisdom holds that memory access is far faster than disk I/O, making memory caching essential. This post challenges that assumption with a clever benchmark: counting the number of tens in a large dataset. Using an older server, optimized code (loop unrolling and vectorization), and a custom io_uring engine, the author demonstrates that direct disk reads can outperform memory caching under specific conditions. The key point isn't that the disk is faster than memory, but that the traditional cached-access path (mmap and its page faults) introduces significant latency. The custom io_uring engine exploits the disk's high bandwidth and deep request pipelining to mask that latency. The article emphasizes adapting algorithms and data-access patterns to hardware characteristics for maximum performance on modern architectures, and looks ahead to future hardware trends.

Hardware memory caching