"Installation: we recommend that you use Docker."

what I'm supposed to see: "hey, it's a simple one-liner! Such clean install, much wow."

what I actually see: "we couldn't figure out how to install this thing on anything but our own machine, but hey, here is a well-compressed image of our entire disk, use this instead so that we can stop trying"

@ssafar Couldn't agree more. Docker is making developers lazy and leading to software that's impossible to install outside of the very specific hand-tweaked environment provided by the docker image.

@dfs @ssafar It's not true all the time, but it definitely can be.

@hhardy01 @dfs @ssafar honestly, your Makefile should allow folding this into just `make install`

@hhardy01 @ssafar Everything I write works that way, although there may be a ./configure step before the first make. (Yes, I do use autotools for some projects.)
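The contract being described is simple enough that a minimal Makefile covers it. This is a hypothetical sketch (the tool name `sometool` and paths are made up for illustration), not any particular project's build:

```make
# Hypothetical Makefile fragment: the classic `make && sudo make install`
# convention being referred to. All names here are illustrative.
PREFIX ?= /usr/local

all: sometool

sometool: sometool.c
	$(CC) $(CFLAGS) -o $@ $<

install: sometool
	install -D -m 0755 sometool $(DESTDIR)$(PREFIX)/bin/sometool

.PHONY: all install
```

`PREFIX` and `DESTDIR` keep the recipe relocatable, which is what lets distro packagers reuse it downstream.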

@dfs @ssafar

I thought about including ./configure, but I was making a reference to the most ancient, BSD-ish way I learned, back on SunOS 4.1.1 I think.

I'm just making an obscure joke really, though I do get aggravated when package manglers gunna mangle. :)

Between docker and static linking we are going back 30 years in security, maintainability and modularity. Goodbye library updates, goodbye auditing.

@dfs @ssafar sure, but i think that docker was so successful because environment reproducibility was already really bad; it wasn't bad just because of docker

It's not necessarily about being lazy, but e.g. about being able to install software in a repeatable manner regardless of OS/library versions.

@ssafar That’s not that awful. There’s software with weird enough dependencies that I really don’t want to figure out how to install the dependencies properly. Granted, extremely wrong things can and do also happen (like ad-hoc patching of files in /usr/lib :blobfoxterrified:).

If the Dockerfile is clear enough, it is a self-contained recipe to install/build all the dependencies and build the software, so it can even help proper packaging downstream. I'd much rather run a Dockerfile in a reproducible manner than follow a bunch of unmaintained instructions in a text file, even if I ultimately wanted to install or package (AUR) a tool for my own native system.
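As a sketch of what I mean (the base image, packages, and build steps here are all made up for illustration, not taken from any real project):

```dockerfile
# Hypothetical Dockerfile: every dependency and build step is written
# down explicitly, so it doubles as install documentation for a native
# build or a downstream package. All names here are illustrative.
FROM debian:bookworm-slim

RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential libpcre2-dev zlib1g-dev \
    && rm -rf /var/lib/apt/lists/*

COPY . /src
WORKDIR /src
RUN ./configure --prefix=/usr/local && make && make install

CMD ["sometool"]
```

Every `RUN` line here is exactly the kind of step an unmaintained INSTALL.txt tends to leave out.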

@kristof @ssafar Yes, if the Dockerfile is made available, that's a definite mitigation. But if all you get is a Docker image with no clue how it got created, that's not so good.

@kristof yeah, if it's a complex install, having a Dockerfile is a lot better than not having one; at least you now have _one_ way this thing can be seen working. It's also great for reproducible builds... but if the default way of installing something on your OS involves installing another OS, something might be wrong with our idea of what an OS is supposed to be :)

(I've seen projects where "non-docker" installs were in the "compiling / hacking / advanced" section...)

@ssafar I’d say a “non-docker” install could be reasonably considered “compiling / hacking / advanced” from the project’s standpoint — if I wanted to install something “natively”, but not compile it / hack on it, I’d turn to the distro I use, and not necessarily towards the upstream.

Of course, an OS where Docker is the only way to install something is probably bad (unless you consider Kubernetes an OS… ugh :blobcatglare:), but upstream projects are likely not part of any particular OS. Linking to OS packages would still be nice, though.

@kristof @ssafar kubernetes is an OS, and in that OS, Ubuntu is a library, and a docker image is a statically linked executable.

It's not a particularly good OS (e.g. it has no pipes) and I don't like how many layers it has below it, but it's easier to think about it as an OS.


This 100% made me laugh, but honestly, Docker images are no substitute for proper packages; still, complex software distributed as a Docker image is often easier to deal with than the alternative.

@ssafar Or maybe package managers and binary distribution in Linux land are fundamentally broken, and that's the only way you can guarantee the thing has half a chance of working...

but sure, that doesn't sound as catchy and l33t as "everyone else is incompetent"...

Maybe do half an attempt at understanding the problem space before hating on it next time.

@maltimore @alatiera @ssafar Is there anything specific that Nix or Guix can't solve for you that absolutely requires Docker?

@alatiera @ssafar
If Docker is the solution, then you're already screwed. And your 'solution' will have a shelf-life of less than a year.

As I like to say: 👇
"Your software is already broken, it just hasn't fallen apart. Yet."

The wide-spread use of Docker only shows how bad current tech is.
Docker itself is fundamentally broken. Last time I checked, even after 4 YEARS they haven't been able to move away from iptables ... because that would break their software.
See 👇 above.


They are not broken. Thousands of apps use shared, distro-packaged libraries and survive updates with no issues. There is a class of apps which are so poorly written or poorly maintained that they depend on one specific version of a library, usually outdated; or even worse, on a specific version of the JVM (welcome, bundled JRE) or of interpreters like Ruby or Python.


@kravietz @ssafar You are clearly not paying attention if you think apps survive updates of their underlying libraries without any problem.

And I am not even getting into distros shipping the same libraries with incompatible APIs and different features enabled, let alone different packaging policies...

@ssafar Indeed. At a previous job I discovered that our build process had an entire Docker container devoted to running a single Python script. Extracting that script to run on the host (which it was perfectly capable of doing) was a satisfying win.

@ssafar Oh, I forgot to mention that the script's only job was generating a new semantic-versioning release number for the software being built.

We were spinning up an entire docker container for the task of incrementing an integer.

@ssafar "well compressed" is a pretty big stretch.

Though I'll take "install via docker" over "it's super easy, just `curl | sudo sh`!". Either way, I'm almost certainly looking for an alternative, since I probably don't have time to deal with your broken, poorly defined build system just to get a potentially useful new tool installed.

@ssafar hmm.. sure.. but have you dived into the sysadmin world? :confused:

@Moon @ssafar I wish some of the ERP software I've had to deal with on Windows Server was contained to Docker images.
@birdulon @Moon @ssafar Docker is slightly better than running arbitrary executables that extract and install themselves in whatever way the author thinks is best.

@ssafar Yeah, and docker is an application running with root rights, what could go wrong, right?

@ssafar Exactly. Personally, I consider any software that an ordinary person cannot install other than via Docker or Flatpak to be bloatware. Unfortunately (or rather fortunately) I found out that I can't run any such bloatware, because Flatpak depends entirely on systemd, which I don't have (and don't want to have).

On the other hand, some isolation would be useful in Linux. But not in the style of Flatpak or Docker. I would rather like to see it already at the level of the packaging system: dynamic chroots would be created for each program, mixed according to its needs (Docker does something similar, but works with whole system images; this would be at a lower level). For example, if I wanted to install nginx, the "packages" pcre, zlib, openssl, geoip, mailcap and libxcrypt would be dynamically mixed into the chroot.

Each chroot would be mounted to limit the software as much as possible (noexec, nosuid, nodev on most directories; where exec is needed, the whole directory read-only).

Maybe there is a distribution which works like that? I think it could be possible to replace, for instance, pacman and make it install the already existing Arch Linux packages into chroots, just reusing its existing repositories. All you need is mount --bind, a layered filesystem, and/or maybe cgroups.

And then, of course, a tool to bind shared directories into chroots - but only those needed. For instance, I would like to isolate my Firefox so it can only see my Downloads folder.

@ssafar but you can actually get a Docker image with a one liner, no? 🤔

Whereas non-containerised solutions might depend on your operating system, its version, and whether you used only its stable & supported packages or third-party repositories, your own builds, etc.

@ssafar My thought exactly, every time I see an application providing only a Docker image... just before closing the browser tab :D

@ssafar yes! I keep saying this and people be like: "you're a luddite!". No, Linux app packaging is so fucked up that shipping disk images and containers is seen as the viable solution. 😕

@AbbieNormal @ssafar

"Installation: we recommend that you use Docker." is the new "It works on my machine"

😭 😭

Fosstodon is an English speaking Mastodon instance that is open to anyone who is interested in technology; particularly free & open source software.