Cache-Friendly Code is Way Faster Than You Think

2025-05-07

Programmers often focus on algorithmic complexity while overlooking the impact of the memory hierarchy in modern hardware. This article experimentally compares the performance of sequential, indirect, and random memory access. The results show that sequential access is fastest, while random access is an order of magnitude slower. Optimizing memory access patterns is therefore crucial: even simple operations see large gains from a cache-friendly memory layout. The article advises considering access patterns when designing data structures and algorithms, for example by placing frequently used data contiguously in memory to leverage CPU caches and avoid cache misses.

Read more
Development, memory access

Haskell Concurrency: Escape from Thread Hell

2025-04-17

This article recounts the author's journey from embedded systems development in C/C++/Rust to Haskell, highlighting Haskell's advantages in concurrent programming. Haskell uses green threads and event-driven IO, avoiding the complexities of traditional threading models. Through the `async` package and STM (Software Transactional Memory), Haskell offers a cleaner and safer approach to concurrent tasks. Functions like `concurrently`, `race`, and `mapConcurrently`, along with data structures such as `TVar` and `TBQueue`, simplify concurrent operations and prevent common concurrency issues like deadlocks and race conditions.

Read more
Development
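The functions the second summary names can be sketched as follows; the delay durations and the counter workload are illustrative assumptions, and the sketch requires the `async` and `stm` packages:

```haskell
import Control.Concurrent (threadDelay)
import Control.Concurrent.Async (concurrently, race, mapConcurrently)
import Control.Concurrent.STM
  (atomically, modifyTVar', newTVarIO, readTVarIO)

-- Run two actions at the same time and wait for both results.
bothResults :: IO (Int, Int)
bothResults =
  concurrently (threadDelay 50000 >> pure 1)
               (threadDelay 30000 >> pure 2)

-- Whichever action finishes first wins; the loser is cancelled.
firstResult :: IO (Either String String)
firstResult =
  race (threadDelay 10000 >> pure "fast")
       (threadDelay 1000000 >> pure "slow")

-- A shared counter incremented atomically from many green threads;
-- STM transactions rule out lost updates without explicit locks.
countTo :: Int -> IO Int
countTo n = do
  counter <- newTVarIO 0
  _ <- mapConcurrently (\_ -> atomically (modifyTVar' counter (+ 1))) [1 .. n]
  readTVarIO counter

main :: IO ()
main = do
  bothResults >>= print   -- (1,2)
  firstResult >>= print   -- Left "fast"
  countTo 100 >>= print   -- 100
```

Note that `race` cancels the losing action automatically, and an exception in either branch of `concurrently` cancels its sibling, which is a large part of how these combinators avoid the orphaned-thread problems of raw `forkIO`.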