o11c

joined 2 years ago
[–] o11c@programming.dev 2 points 2 years ago

If there's a .pc file shipped, pkg-config can simplify your life by figuring out the flags for you.
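
For example (a minimal sketch, assuming zlib is installed and ships its usual zlib.pc):

```c
/*
 * Build, letting pkg-config supply the compiler and linker flags:
 *   cc demo.c $(pkg-config --cflags --libs zlib) -o demo
 */
#include <stdio.h>
#include <zlib.h>

int main(void) {
    /* zlibVersion() lives in the library that pkg-config located for us */
    printf("linked against zlib %s\n", zlibVersion());
    return 0;
}
```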

[–] o11c@programming.dev 2 points 2 years ago

The problem is that the application developer usually thinks they know everything about what they want from their dependencies, but they actually don't.

[–] o11c@programming.dev 1 points 2 years ago (4 children)

The problem is that GLIBC is the only serious attempt at a libc on Linux. The only competitor that is even trying is MUSL, and until early $CURRENTYEAR it still had world-breaking, standard-violating bugs marked WONTFIX. While I can no longer name similar catastrophes, that history gives me little confidence.

There are some lovely technical things in MUSL, but a GLIBC alternative it really is not.

[–] o11c@programming.dev 1 points 2 years ago (2 children)

That's misleading though, since it only looks at one side of the trade-off and ignores e.g. the much faster development speed that dynamic linking can provide.

[–] o11c@programming.dev 3 points 2 years ago

Only if the library is completely shitty and breaks between minor versions.

If the library is that bad, it's a strong sign you should avoid it entirely since it can't be relied on to do its job.

[–] o11c@programming.dev 6 points 2 years ago (7 children)

Some languages don't even support linking at all. Interpreted languages often dispatch everything by name without any relocations, which is obviously horrible. And some compiled languages only support translating the whole program (or at least, the whole binary - looking at you, Rust!) at once. Do note that "static linking" has shades of meaning: it applies to "link multiple objects into a binary", but often that is excluded from the discussion in favor of just "use a .a instead of a .so".
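
A rough sketch of that distinction, with a hypothetical toy library (file names made up):

```c
/* greet.c - toy library illustrating ".a vs .so"
 *
 * Static archive:  cc -c greet.c && ar rcs libgreet.a greet.o
 * Shared object:   cc -fPIC -shared greet.c -o libgreet.so
 *
 * The application links the same way in both cases (cc main.c -L. -lgreet),
 * but with the .a the object code is copied into the final binary, while
 * with the .so only a reference is recorded and the loader resolves it at
 * run time.
 */
#include <stdio.h>

void greet(const char *name) {
    printf("hello, %s\n", name);
}
```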

Dynamic linking supports a much faster development cycle than static linking (which is faster than whole-binary-at-once), at the cost of slightly slower runtime (but the location of that slowness can be controlled, if you actually care, and can easily be kept out of hot paths). It is of particularly high value for security updates, but we all know most developers don't care about security, so I'm talking about annoyance instead. Some realistic numbers here: dynamic linking might be "rebuild in 0.3 seconds" vs static linking "rebuild in 3 seconds" vs no linking "rebuild in 30 seconds".

Dynamic linking is generally more reliable against long-term system changes. For example, it is no longer possible to run old statically-linked builds of bash 3.2 on a modern distro (something about an incompatible locale format?), whereas the dynamically-linked versions work just fine (assuming the libraries are installed, which is a reasonable assumption). Keep in mind that "just run everything in a container" isn't a solution because somebody has to maintain the distro inside the container.

Unfortunately, a lot of programmers lack basic competence and therefore have trouble setting up dynamic linking. If you really need to frob the library search path, there's nothing wrong with RPATH as long as you're not setuid or similar (and even if you are, absolute root-owned paths are safe - a reasonable restriction, since setuid will require more than just extracting a tarball anyway).
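
A sketch of the RPATH approach (paths and names are hypothetical):

```c
/* main.c - link against a private libgreet.so without LD_LIBRARY_PATH hacks.
 *
 * Layout:  myapp/bin/main  and  myapp/lib/libgreet.so
 *
 * Non-setuid program: an $ORIGIN-relative RPATH keeps the bundle relocatable:
 *   cc main.c -L../lib -lgreet -Wl,-rpath,'$ORIGIN/../lib' -o ../bin/main
 *
 * Setuid program: use an absolute, root-owned path instead:
 *   cc main.c -L/opt/myapp/lib -lgreet -Wl,-rpath,/opt/myapp/lib -o main
 */
void greet(const char *name);   /* provided by libgreet.so */

int main(void) {
    greet("rpath");
    return 0;
}
```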

Even if you do use static linking, you should NEVER statically link to libc, and probably not to libstdc++ either. There are just too many things that can go wrong when you give up on the notion of a "single source of truth". If you actually read the man pages for the tools you're using, this is very easy to do, but a lack of such basic abilities is common among proponents of static linking.
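
If you want the best of both, GNU ld lets you statically link an individual third-party library while leaving libc (and libstdc++) dynamic - a sketch, with libfoo standing in for some dependency you actually want to pin:

```c
/* main.c - statically link a hypothetical libfoo, dynamically link everything
 * else (including libc), using GNU ld's -Bstatic/-Bdynamic toggles:
 *
 *   cc main.c -Wl,-Bstatic -lfoo -Wl,-Bdynamic -o main
 *
 * Everything after the final -Bdynamic (including the implicit -lc) is
 * resolved against shared libraries, so the C runtime keeps a single
 * source of truth on the system.
 */
int foo_do_work(void);   /* hypothetical entry point in libfoo */

int main(void) {
    return foo_do_work();
}
```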

Again, keep in mind that "just run everything in a container" isn't a solution because somebody has to maintain the distro inside the container.

The big question these days should not be "static or dynamic linking?" but "dynamic linking with or without semantic interposition?" Apple's broken "two-level namespaces" feature is closely related, but it also prevents symbol migration, and is really aimed at people who forgot to use -fvisibility=hidden.
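
For reference, the non-interposable setup on the GCC/Clang side looks roughly like this (library and symbol names are made up):

```c
/* greet.c - build with interposition and symbol visibility under control:
 *   cc -fPIC -shared -fvisibility=hidden -fno-semantic-interposition \
 *      greet.c -o libgreet.so
 *
 * Only symbols explicitly marked below are exported; internal calls and data
 * can no longer be interposed from outside the library.
 */
#include <stdio.h>

#define EXPORT __attribute__((visibility("default")))

/* Internal helper: hidden, so calls to it bind within the library. */
static const char *decorate(const char *name) {
    return name ? name : "world";
}

/* The one public entry point. */
EXPORT void greet(const char *name) {
    printf("hello, %s\n", decorate(name));
}
```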

[–] o11c@programming.dev 10 points 2 years ago

As a practical matter it is likely to break somebody's unit tests.

If there's an alternative approach that you want people to use in their unit tests, go ahead and break it. If there isn't, but you're only doing such breakage rarely and it's reasonable for their unit tests to be updated in a way that works with both versions of your library, do it cautiously. Otherwise, only do it if you own the universe and you hate future debuggers.
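
One common way to stay in the "works with both versions" window is to ship the replacement alongside the old entry point and deprecate the old one, e.g. (frob/frob_ex are hypothetical names):

```c
#include <stddef.h>

/* New API: takes an explicit options struct. */
struct frob_options { int verbose; };

int frob_ex(const char *input, const struct frob_options *opts) {
    (void)opts;                     /* NULL means "default options" */
    return input != NULL ? 0 : -1;
}

/* Old API: kept as a thin wrapper for a release or two, so existing unit
 * tests keep passing while new builds get a warning pointing at the
 * successor. */
__attribute__((deprecated("use frob_ex instead")))
int frob(const char *input);

int frob(const char *input) {
    return frob_ex(input, NULL);
}
```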

[–] o11c@programming.dev 4 points 2 years ago

The thing is - I have probably seen hundreds of projects that use tabs for indentation ... and I've never seen a single one without tab errors. And that's ignoring e.g. the fact that tabs break diffs, or who knows how many other things.

Using spaces doesn't automatically mean an absence of errors, but it's clearly easy enough to get right that it's commonly achieved. The most common argument against spaces seems to boil down to "my editor inserts hard tabs and I don't know how to configure it".

[–] o11c@programming.dev 2 points 2 years ago

The problem is that what everybody really wants is parameterization, not concatenation. But most solutions for that are flaky, even when they exist.

[–] o11c@programming.dev 3 points 2 years ago

It's solving (and facing) some very interesting problems at a technical level ...

but I can't get over the dumb decision for how IO is done. It's $CURRENTYEAR; we have global constructors for the platforms that really need them (hint: yours probably doesn't).

[–] o11c@programming.dev 2 points 2 years ago

There's probably a way to do "specify the icon as part of the linker call", which should be easier.

[–] o11c@programming.dev 16 points 2 years ago

Stop reinventing the wheel.

Major translation systems like gettext (especially the GNU variant) have decades of tooling built up for "merging" and all sorts of other operations.

Even if you don't want to use their binary format at runtime, their tooling is still worth it.
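
Runtime usage is also tiny if you do go the standard route - a minimal sketch, assuming a "myapp" catalog has been produced with the usual xgettext/msgmerge/msgfmt pipeline and installed under /usr/share/locale:

```c
#include <libintl.h>
#include <locale.h>
#include <stdio.h>

#define _(s) gettext(s)   /* the conventional shorthand */

int main(void) {
    setlocale(LC_ALL, "");                          /* honor the user's locale  */
    bindtextdomain("myapp", "/usr/share/locale");   /* where the .mo files live */
    textdomain("myapp");                            /* select our catalog       */

    printf(_("Hello, world!\n"));
    return 0;
}
```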
