this post was submitted on 01 Jan 2026
107 points (100.0% liked)
Programming
@protein
Finger Tree!
A persistent, purely functional workhorse. Amortized O(1) access at both ends, O(log n) concatenation/splitting.
It generalizes elegantly to build sequences, priority queues, and more. Powers Haskell's main Data.Sequence. A functional programmer's secret weapon.
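Rust's standard library has no finger tree, but as a rough point of comparison (a hedged sketch, not a finger-tree implementation), `VecDeque` covers the "cheap at both ends" part of the contract:

```rust
use std::collections::VecDeque;

fn main() {
    // VecDeque gives amortized O(1) push/pop at both ends, like a
    // finger tree's end access -- but it is mutable, and concatenation
    // is O(n), unlike a finger tree's O(log n) split/append.
    let mut seq: VecDeque<i32> = VecDeque::new();
    seq.push_back(2);
    seq.push_back(3);
    seq.push_front(1); // cheap at the front too
    assert_eq!(seq.front(), Some(&1));
    assert_eq!(seq.back(), Some(&3));
    println!("{:?}", seq); // [1, 2, 3]
}
```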
On paper they are efficient. In practice, all pointer-based data structures (linked lists, binary trees, etc.) are slow on modern hardware, because every link you follow is a potential cache miss. For most practical high-performance code, this effect matters more than asymptotic complexity.
You are far better off with linear access where possible (e.g. vectors, open-addressing hash maps), or if you must have a tree, make the fan-out factor as large as possible (e.g. B-trees rather than binary trees).
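A minimal Rust sketch of that advice (element counts are illustrative): contiguous `Vec` iteration versus pointer-chasing `LinkedList`, plus `BTreeMap` as the stock wide-fan-out ordered map:

```rust
use std::collections::{BTreeMap, LinkedList};

fn main() {
    // Contiguous storage: one allocation, sequential cache-friendly scan.
    let v: Vec<u64> = (0..1_000).collect();
    let sum_vec: u64 = v.iter().sum();

    // Pointer-based: one heap node per element; each step can miss cache.
    let l: LinkedList<u64> = (0..1_000).collect();
    let sum_list: u64 = l.iter().sum();
    assert_eq!(sum_vec, sum_list); // same answer, very different memory traffic

    // For ordered lookups, std's BTreeMap already uses a wide fan-out
    // (several keys per node) instead of a binary tree.
    let m: BTreeMap<u64, u64> = (0..1_000).map(|k| (k, k * 2)).collect();
    assert_eq!(m.get(&10), Some(&20));
}
```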
Now, I don't know whether Haskell etc. afford you that kind of control; I mainly code in Rust (and C++ in the past).
Also see this old thread from 2016 on hacker news about this very topic: https://news.ycombinator.com/item?id=13263275
@Vorpal
Totally fair point, thanks for calling that out.
When I mentioned finger trees I was thinking more about the *functional* side (persistence, elegant composition, Haskell/Data.Sequence style usage) than raw performance on real hardware.
In performance‑critical code your argument for cache‑friendly linear structures and wide trees absolutely makes sense, and I appreciate the reminder to think about actual access patterns and hardware effects, not just asymptotic complexity.
I think a lot of modern software is bloated. I remember when GUI programs used to fit on a floppy or two. Nowadays we have bloated Electron programs taking hundreds of MB of RAM just to show a simple text editor, because each one drags a whole browser along with it.
I love snappy software, and while I don't think we need to go back to programs fitting on a single floppy and using hundreds of KB of RAM, the pendulum does need to swing back a fair bit. Over the last few years I rewrote some CLI programs I found slow (one of my own, previously written in Python; the other written in C++ but not designed for speed). I used Rust, which certainly helped compared to Python, but the real key was thinking carefully about the data structures up front and designing for performance, with lots of profiling and benchmarking along the way.
The results? The Python program was sped up 50x, the C++ program 320x. In both cases that took them from "irritating delay" to "functionally instant for human perception".
The two programs:
I also rewrote a program I use to manage Arch Linux configs (originally written in bash) in Rust. I added features I wanted along the way, so it was never directly comparable (and I don't have numbers), but it made "apply configs to system" take seconds instead of minutes, with several additional features as well. (https://github.com/VorpalBlade/paketkoll/tree/main/crates/konfigkoll)
Oh and want a faster way to check file integrity vs the package manager on your Linux distro? Did that too.
Now what was the point I was making again? Maybe I'm just sensitive to slow software. I disable all animations in GUIs, after all; those milliseconds of waiting add up over the years. Computers are amazingly fast these days, and we shouldn't make them slower than they have to be. So I think far more software should count as performance critical: anything a human has to wait for should be.
Faster software is more efficient too: it uses less electricity and makes your phone or laptop battery last longer (the CPU can go back to sleep sooner). It also saves you money in the cloud. Imagine saving 30-50% on your cloud bill by renting fewer resources. Over the last few years I have seen multiple reports of exactly that happening when companies rewrite services in Rust (C++ would also do this, but why would you want to move to C++ these days?). And hyperscalers save millions in electricity by optimising their logging libraries by just a few percent.
Most modern software on modern CPUs is bottlenecked on memory bandwidth, so it makes sense to spend effort on data representation. Sure, start with some basic profiling to find the obvious stupid things (all non-trivial software that hasn't been optimised has stupid things), but once you've exhausted those, you need to look at memory layout.
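One common way to act on that, sketched here in Rust with made-up field names: switch from array-of-structs to struct-of-arrays, so that a pass touching one field streams only that field through the cache:

```rust
// Array-of-structs: each element carries all three fields together.
struct ParticleAos {
    pos: f32,
    vel: f32,
    mass: f32,
}

// Struct-of-arrays: each field gets its own contiguous buffer.
struct ParticlesSoa {
    pos: Vec<f32>,
    vel: Vec<f32>,
    mass: Vec<f32>,
}

fn main() {
    let aos: Vec<ParticleAos> = (0..4)
        .map(|i| ParticleAos { pos: i as f32, vel: 1.0, mass: 2.0 })
        .collect();

    let soa = ParticlesSoa {
        pos: aos.iter().map(|p| p.pos).collect(),
        vel: aos.iter().map(|p| p.vel).collect(),
        mass: aos.iter().map(|p| p.mass).collect(),
    };

    // A pass that only reads positions streams 4 bytes per element in
    // SoA form, instead of dragging vel and mass through the cache too.
    let total_aos: f32 = aos.iter().map(|p| p.pos).sum();
    let total_soa: f32 = soa.pos.iter().sum();
    assert_eq!(total_aos, total_soa);
    assert_eq!(total_soa, 6.0); // 0 + 1 + 2 + 3
}
```

Same data either way; only the layout (and therefore the memory traffic per pass) differs.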
(My dayjob involves hard realtime embedded software. No, I swear that is unrelated to this.)
You can have immutable arrays with the vector package, but to mutate them you need to work inside a monad (e.g. ST or IO). It even supports unboxed values.
https://hackage.haskell.org/package/vector
But I agree that boxed-by-default representations cause a lot of performance overhead in many high-level languages.
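The boxed-vs-unboxed distinction can be made visible in Rust too (an illustrative sketch, not the Haskell vector API):

```rust
fn main() {
    // Unboxed: the i32s live inline in one contiguous buffer.
    let unboxed: Vec<i32> = vec![1, 2, 3, 4];

    // "Boxed", as many high-level languages do by default: every element
    // is a separate heap allocation reached through a pointer.
    let boxed: Vec<Box<i32>> = unboxed.iter().map(|&x| Box::new(x)).collect();

    // Each slot of `boxed` holds only a pointer, so summing it chases a
    // pointer per element instead of streaming the values directly.
    assert_eq!(std::mem::size_of::<Box<i32>>(), std::mem::size_of::<usize>());
    let a: i32 = unboxed.iter().sum();
    let b: i32 = boxed.iter().map(|x| **x).sum();
    assert_eq!(a, b); // same sum, very different memory traffic
}
```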