Application optimization reduces disk usage and reclaims space. 🙂

  • Boomkop3@reddthat.com · 2 days ago

    Perhaps it’s improved over the last year; I can give it a shot. But yes, for my own packaged applications without shared dependencies, Docker is handy, and that’s exclusively what I run.

    • PhilipTheBucket@ponder.cat · 2 days ago

      I mean, if it makes you happy, I won’t tell you to do anything different. I think a certain amount of it is just prejudice against Docker on my part. In my experience, though, NixOS is the best of both worlds: you can have a single coherent system if everything in that system can play nice together, and if not, things can still be containerized completely and that works too. On top of that it has a couple of other nice features, like rolling back configs easily, or source builds that get slotted in in place as if they were standard packages (which is generally where I abandon Docker installs of things, because making changes to the source seems like it’s going to be a big hassle).

      I’m not trying to evangelize though; you should, in all seriousness, just do what you find to be effective.

      • Boomkop3@reddthat.com · 2 days ago

        Hold up, nix added containerization? How did I miss that? I will have another look now!

        Also, you’re right: for small quick scripts Docker can be a hassle. Nowadays, though, I build a Docker image as part of my project’s build/compilation process. The main reason I do this is so that I can work on whatever machine I happen to be on, then just copy the app to whatever machine I want it on. No extra config, or even a look at the environment, required. Just install Docker and forget about the rest.
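
        A rough sketch of that workflow (the image name, file name, and host are just placeholders):

        docker build -t myapp:latest .                    # image built as part of the project build
        docker save myapp:latest | gzip > myapp.tar.gz    # export the image to a single file
        scp myapp.tar.gz otherhost:                       # copy it to whatever machine should run it
        ssh otherhost 'gunzip -c myapp.tar.gz | docker load && docker run -d myapp:latest'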

        Update: installing Docker on NixOS (in a VM) with a Nix package failed; not sure why. Perhaps some dependencies were no longer available?

        Update: Nix is available as a Docker image. I’m running it now; we shall see how it goes.

        • PhilipTheBucket@ponder.cat · 1 day ago

          > Hold up, nix added containerization? How did I miss that? I will have another look now!

          Nix is containerization. Here’s an example of firing up a temporary little container with a different Python version and then throwing it away once I’m done with it (you can also do this with more complicated setups; this just shows it with a single package):

          [hap@glimmer:/proc/69235/fd]$ python --version
          Python 3.12.8
          
          [hap@glimmer:/proc/69235/fd]$ nix-shell -p python39
          this path will be fetched (27.46 MiB download, 80.28 MiB unpacked):
            /nix/store/jrq27pp6plnpx0iyvr04f4apghwc57sz-python3-3.9.21
          copying path '/nix/store/jrq27pp6plnpx0iyvr04f4apghwc57sz-python3-3.9.21' from 'https://cache.nixos.org/'...
          
          [nix-shell:~]$ python --version
          Python 3.9.21
          
          [nix-shell:~]$ exit
          exit
          
          [hap@glimmer:/proc/69235/fd]$ python --version
          Python 3.12.8
          

          The whole “system” you get when moving from Nix to NixOS is basically just a composition of a whole bunch of individual packages like python39, in one big container that is “the system.” But you can also fire up temporary containers trivially for particular things. I have a couple of tools with source in ~/src which, whenever I change the source, nixos-rebuild will automatically rebuild in a little container (along with their build dependencies, which don’t have to be around cluttering up my main system). If the build works, it deploys the completed product into my main system image for me; if it doesn’t, nothing will have changed (and either way it throws away the container it used for the build attempt).
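
          A minimal sketch of what that can look like in a NixOS config (the package name, source path, and build inputs here are hypothetical):

          # configuration.nix fragment: the local source tree is built as its own
          # derivation, in a sandbox, with build deps that never touch the main system.
          { pkgs, ... }:
          let
            mytool = pkgs.stdenv.mkDerivation {
              pname = "mytool";                         # hypothetical tool
              version = "0.1";
              src = /home/hap/src/mytool;               # local source checkout
              nativeBuildInputs = [ pkgs.pkg-config ];  # build-only dependencies
              buildInputs = [ pkgs.openssl ];
            };
          in
          {
            environment.systemPackages = [ mytool ];    # lands in the system only if the build succeeds
          }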

          Each config change spawns a new main system OS image (“generation”), but you can roll back to one of the earlier generations (which are, from a functional perspective, still around) if you want to or if you broke something.
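
          Roughly, with the standard tools (a sketch; output omitted):

          sudo nixos-rebuild switch                                        # build and activate a new generation
          sudo nix-env --list-generations -p /nix/var/nix/profiles/system  # see which generations are still around
          sudo nixos-rebuild switch --rollback                             # switch back to the previous one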

          And so on. It’s very nice.

            • PhilipTheBucket@ponder.cat · 4 hours ago

              Yes because that is a wrong and clunky way to do it lol.

              If you really wanted to, you could use dockerTools.buildImage to create an “imaged” version of the container you made, or you could send around the flake.nix and flake.lock files exactly as someone would send around Dockerfiles. That stuff is usually just not necessary, though, because it’s replaced with a better approach (for the average end-user case, where you don’t need large numbers of Docker containers that you can deploy quickly at scale) that accomplishes the same thing.
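
              Something like this, if you did want an image (a sketch; the image name and the packaged program are placeholders):

              # default.nix: wrap a Nix-built program as a Docker image.
              # `nix-build` produces a tarball that `docker load` accepts.
              { pkgs ? import <nixpkgs> { } }:
              pkgs.dockerTools.buildImage {
                name = "hello-image";                 # placeholder image name
                tag = "latest";
                copyToRoot = pkgs.buildEnv {
                  name = "image-root";
                  paths = [ pkgs.hello ];             # placeholder: whatever you actually built
                  pathsToLink = [ "/bin" ];
                };
                config.Cmd = [ "/bin/hello" ];
              }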

              I feel like I’m not going to convince you of this though. Have fun with Docker, I guess.

              • Boomkop3@reddthat.com · 2 hours ago

                The issue is, nix builds are only guaranteed to be reproducible if the dependencies don’t change. Which they shouldn’t, but you can’t trust the internet to be consistent. Things won’t be there to be fetched forever.

                Images will still be there. And you can turn one into a container in seconds. I suppose it’s a matter of preference; I like a package to be independent.

                • PhilipTheBucket@ponder.cat · 3 minutes ago

                  > The issue is, nix builds are only guaranteed to be reproducible if the dependencies don’t change.

                  Dude, this is exactly why Nix is better. Docker builds are only guaranteed to be reproducible if the dependencies don’t change. Which they will. The vast majority of real-world Dockerfiles run pip install, wget, and all kinds of basically unlimited nonsense to pull down their dependencies from anywhere on the internet.

                  Nix builds, on the other hand, are forbidden from accessing the internet, specifically to force them to declare dependencies explicitly and keep everything within a managed system. You can trust that the Nix repositories aren’t going to change (or store them yourself, along with all the source that generated them and will actually produce the same binaries, if you’re paranoid). You can send the flake.nix and flake.lock files and it will actually work to reproduce a basically byte-identical container on the receiver’s end, which means you don’t have to send multi-gigabyte “images” in order to depend on the recipient actually being able to make use of it. This is what I was saying: the whole business of needing “images” is a non-issue if your workflow isn’t allowing arbitrary fuckery on an industrial scale whenever you spin up a new container.
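
                  A minimal flake along those lines (a sketch; the pinned branch and package are just examples, and the generated flake.lock is what actually pins the exact commits):

                  {
                    description = "environment reproducible from flake.nix + flake.lock";
                    inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";  # example branch
                    outputs = { self, nixpkgs }:
                      let pkgs = nixpkgs.legacyPackages.x86_64-linux;
                      in {
                        devShells.x86_64-linux.default = pkgs.mkShell {
                          packages = [ pkgs.python3 ];                        # example contents
                        };
                      };
                  }

                  The recipient runs nix develop against those two files and gets the same closure, pulled from the binary cache instead of shipped around as an image.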

                  I suspect that making a new container and populating it with something useful is so trivial on Nix that you’re missing the point of what is actually happening, whereas with Docker you can tell something big is happening because it’s such a fandango when it happens. And so you assume Docker is “real” and Nix is “fake” or something.

                  > I like a package to be independent

                  Yes, me too, which is why an affinity for Docker is weird to me.