
More flatpak grumbling... so even after I went through the process of installing some utterly massive base packages just to get one or two applications running, there are now updates to those base packages (as expected!). This means I'm looking at downloading up to 2.9 GB just to get system updates. There _has_ to be a better way to do this.

Again - I want things like Flatpak and Snap to succeed! But I have to wonder if developers realize that many of us consumers are sometimes (or semi-permanently) stuck on slow (< 10 Mbps) internet connections.

I can deal with running updates overnight every once in a while if there's a massive batch of them, but the fact that they're now duplicated (or triplicated) by flatpak / snap is just frustrating.

@funnylookinhat I was initially going to blame poorly maintained, bloated containers (a common issue in Docker). However, it looks like they aren't actually particularly bloated, so this really is just an inherent problem with flathub.

That said, there are well-known solutions to this general problem that use pre-existing code.

For instance, if you have container:1.0.0 and want to upgrade to container:1.0.1, then you could make a local copy of 1.0.0 and rsync 1.0.1 onto it.
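
Something like this, assuming the images are plain extracted directories and a hypothetical mirror that speaks rsync:

```sh
# Seed the new version with the old one, then let rsync's delta
# algorithm transfer only the blocks that actually changed.
cp -a container-1.0.0/ container-1.0.1/
rsync -a --inplace rsync://mirror.example.com/containers/1.0.1/ container-1.0.1/
```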

@urusan Yeah, that's basically how flatpak / flathub work.

I think you're onto something, but I expect that to be part of the tooling (not some hack I have to do on my own).

I'm just shooting from the hip here, but we HAVE things that do binary deltas and catch-up: zsync is a thing. Are we just getting lazy with all of this containerization nowadays? Why not put a bit of thought into how much effort it takes for clients to update containers / layers?
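
For reference, the usual zsync flow is tiny on both ends (filenames and URL made up):

```sh
# Publisher: generate a .zsync control file next to the new image.
zsyncmake app-1.0.1.img

# Client: fetch over plain HTTP, seeding from the old local copy so
# only the changed blocks get downloaded.
zsync -i app-1.0.0.img https://example.com/app-1.0.1.img.zsync
```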

@funnylookinhat Yeah, it is just lazy not to have this built in. It's probably just because the developers all have good internet connections, so this didn't come up for them.

Although container layers help, they're much less powerful, and other development-organization concerns usually dominate the decision about where to split the layers.

@funnylookinhat It might even be possible to do this without it being built in. If you set up a local flathub repo/mirror and use it instead of the default, then you could do a trick similar to the one I suggested earlier to make the downloads cheap.
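
Very rough sketch of what I mean (untested; the ref and paths are just examples, but Flathub is an OSTree repo under the hood):

```sh
# One-time setup of a local mirror repo.
ostree init --repo=/srv/flathub-mirror --mode=archive
ostree --repo=/srv/flathub-mirror remote add --no-gpg-verify flathub https://dl.flathub.org/repo/

# Pull (and later re-pull) the refs you care about; OSTree only fetches
# objects it doesn't already have.
ostree --repo=/srv/flathub-mirror pull --mirror flathub runtime/org.freedesktop.Platform/x86_64/23.08

# Point flatpak at the local mirror instead of the default remote.
flatpak remote-add --no-gpg-verify local-flathub file:///srv/flathub-mirror
```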

This might even already exist.

This is also why I am against Snap, which demands centralization that would prevent a workaround like this one.

@urusan I'd accept the centralization if it meant I didn't have to do workarounds like this. :)

There are trade-offs, sure... but at this point I've got a day job and a toddler, so running updates isn't something I want to spend my free time on. :)

@funnylookinhat I get it, and I use flatpak in some cases, but I try to avoid it for applications I know are in my package manager; that way you can manage some of those dependencies better.

@ndanes I've been avoiding flatpak / snap for this reason, but there are a few things that are solely distributed via that mechanism.

For now I'm mostly sticking to debs - and when that doesn't work I'll just grab a pre-built tarball and throw it in `/usr/local/bin` and move on. 😛
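
In practice that's just something like this (release URL made up, obviously):

```sh
curl -LO https://example.com/sometool/releases/sometool-1.2.3-linux-x86_64.tar.gz
tar -xzf sometool-1.2.3-linux-x86_64.tar.gz
sudo install -m 0755 sometool-1.2.3/sometool /usr/local/bin/sometool
```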

@funnylookinhat Yeah, unfortunately GNOME uses it as the preferred method to distribute some of their apps, but I've honestly had buggier experiences with some of the flatpaks than with the versions provided by my package manager.

@funnylookinhat I think for proprietary stuff though, it's great. I've been using the flatpak for Steam and have had no issues.

@ndanes Yeah, agreed! Ironically, Steam is the one thing that broke recently with some flatpak update... and that's why I _want_ to run those 2.9 GB updates; I just have to remember to schedule them overnight.

@funnylookinhat Set up a little cron job so you won't have to remember? Haha
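
Something like this in root's crontab would probably do it (both flags exist in recent flatpak releases):

```sh
# Run unattended flatpak updates at 3am every night.
0 3 * * * flatpak update --noninteractive --assumeyes
```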

@funnylookinhat
Look closely. It says less than. It's usually much less in my experience. Often it's kilobytes instead of hundreds of megabytes.
Installing apps that require different base packages is basically like having several DEs installed and updating all of them at once.

@lig I know that it's possibly / probably less - but I also think that if your spec can't say for sure, then your spec sucks. :)

@funnylookinhat As far as I understand, it just does some kind of rsync/binary-diff/partial-update thing, so the result is determined by the data itself. The alternative approach is the Delta RPM way, which uses pre-built drpm packages. That way it's possible to predict the outcome, but you need to build and store drpms for all possible deltas.
There is always a trade-off hiding somewhere. :)
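
A small illustration of the data-determined approach using rdiff (from librsync); the delta is only as big as the actual change, with no pre-built artifacts needed:

```sh
rdiff signature app-1.0.0.img app.sig               # client: describe what I have
rdiff delta app.sig app-1.0.1.img app.delta         # server: compute the difference
rdiff patch app-1.0.0.img app.delta app-1.0.1.img   # client: reconstruct the new version
```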

@lig Aha, ok - that is pretty neat! I'm going to have to dig into those docs a bit and see what they say.
