If #xz were a Go or Rust dependency, you wouldn’t have a single copy of the xz library on your system, but many, with the #xzbackdoor hidden in every executable that uses it. Distros would have to rebuild all packages using that lib (not just the lib itself), which could take days or weeks, and users would have to update them all, downloading tens or hundreds of megabytes.
If you install binaries directly from vendors/devs, it’s even worse – you wouldn’t even know which ones are affected, and you’d (1/3)
be at the mercy of the devs to provide the update. Not a group of active maintainers behind the distro, but many individual devs, some of whom lack the time, the motivation, or a sustainable setup. The same goes for Docker containers, Flatpak and similar!
This is called static linking or bundling. Instead of rebuilding and updating a single shared library, you have to rebuild and update every single thing that links/bundles it. In the case of static linking, you usually can’t even tell which (2/3)
libraries it’s linked with!
Now do you see the value of #Linux distros and dynamic linking? Please stop this insane “single binary” mantra and work with distros, not against them.
If #rustlang wants to replace C, devs need to acknowledge this and start providing dynamically linkable libraries with a stable ABI. (3/3)
A significant share of these “unable to address” cases is caused by the physical impossibility of resolving conflicts between the specific library versions that specific applications require. In the Python world, every single library<X.Y constraint in requirements.txt is another piece of motivation for projects such as pipx, which do for Python apps what Docker does for the whole system. And I guess, unless there’s some kind of Central Committee to decide on and enforce library versions for all applications, we just need to live with these workarounds.
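To make the conflict concrete, here is a minimal sketch (the package names are made up): two apps whose requirements.txt constraints can never be satisfied in one shared environment, and the pipx commands that sidestep it by giving each app its own virtualenv:

    # app-one/requirements.txt
    somelib>=2.0,<3.0

    # app-two/requirements.txt
    somelib<2.0

    # no single environment satisfies both, but pipx isolates them:
    pipx install app-one
    pipx install app-two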
Upstreams should not choose versions of dependencies randomly in their own bubble.
To make deduplication of effort work, every upstream needs to be aware that they have to align their choices with other upstreams.
The packaging and distributions ecosystem is where different upstreams meet and talk to each other about things like which versions to choose as a base for LTS branches, which versions to choose for shared libraries and so on.
The effort in packaging is not to put things in a package. That's just the entry point.
Package maintainer work is what happens after you’ve made the initial package.
Assume you have a hundred packages with various pinned versions of a library.
Now a CVE happens. Upstream of the library says: all good, we patched the CVE in our latest version, happy holidays! And goes home.
What are you going to do with the hundred different older versions now?
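Just to sketch the scale of the audit (a hypothetical directory layout, with somelib standing in for the vulnerable library), counting how many distinct old pins a maintainer now has to deal with:

    # count_pins.py - hypothetical audit of pinned versions across packages
    import re
    from pathlib import Path

    pins = set()
    for req in Path("packages").glob("*/requirements.txt"):
        for line in req.read_text().splitlines():
            match = re.match(r"somelib==(\S+)", line.strip())
            if match:
                pins.add(match.group(1))

    # every distinct pin is a separate patch-or-rebase decision
    print(f"{len(pins)} distinct pinned versions of somelib to deal with")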
If you create pressure, but don't suggest a solution, most likely the pressure will blow the lid off :)
An upstream which pinned itself to a specific commit of a library made a year ago simply does not have an easy way to update to the latest version. They will have to cherry-pick the CVE fix and carry their own fork of the library to handle it.
And no, upstream devs are not especially good at cherry-picking commits into their dependencies.
Here’s to my most beloved topic.
A library is not a single state of the code. It is a stream of updates.
When you pin a dependency’s version, you step out of the stream, but the library moves on.
And then the 237th commit since your pin happens to be a critical CVE fix.
You can update your app to all of those 237 changes at once, which is a pain.
Or you can fork the library from the pinned commit and apply only the CVE fix. Which is a different kind of pain.
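In requirements.txt terms (hypothetical project, commits and versions), the two kinds of pain look like this:

    # the year-old pin:
    somelib @ git+https://example.org/somelib@abc1234

    # option 1: jump over all 237 changes at once
    somelib==2.9

    # option 2: fork at the pinned commit, cherry-pick only the CVE fix
    somelib @ git+https://example.org/our-fork-of-somelib@cve-fix-on-abc1234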
Half of the job of a package maintainer is to decide, for every single CVE and library version separately, how to deal with such situations.
The decision depends on the nature of the updates that landed before the fix, and on the fix itself, and it requires knowledge of the library’s lifecycle and of the lifecycles of the apps depending on it.
And pressuring each upstream to do it on their own, for each of their own copies, usually leads to upstreams not doing it at all.
If it were _most_, we would not need this conversation.
Packaging software that doesn’t pin to a specific version, but instead relies on ABI-stable interfaces and uses the latest available implementation of them, doesn’t require any bundling and is a mostly solved problem.
And maybe I need to make a separate statement:
I don't believe that every upstream developer must become a packaging expert.
I believe that packaging is a job of its own. For some projects you can combine the roles of developer, tester, doc writer and packager; for some you just can’t. And then you ask for help.
But I believe that an upstream developer should be aware that there are needs in software development beyond writing the code and pleasing the user.
No. You are missing the point.
It cannot be a one-way relationship where I, The Developer, dump whatever I have into a git repo, and you, The Packager, now figure out a way to use it.
It should be the developer looking around and saying: hey, folks, I am writing something and would appreciate help on packaging.
And yes, if you say this change in the build/layout/tests/config/.. will make it more “packageable”, then I treat it as a valid feature request.
@bookwar @ITwrx @Conan_Kudo @kravietz @jakub and even for single-person projects, having a packager in each main distribution who isn’t the upstream developer is a big plus, as it provides a minimum of oversight and redundancy.
Not much, especially when said maintainer(s) are overworked and demoralized, but still better than nothing.
Yes, that is an important point too.
When we say co-maintainer, we often implicitly assume that it should be an equally or comparably skilled person doing the same tasks.
And then we get stuck on the thought of how hard it is to find such a duplicate.
While it doesn't have to be.
There is plenty of room for a developer to collaborate with a tester, or a packager or a build engineer, or a documentation writer.
It can often be healthier, too.