tl;dr all the same caveats with self-hosted software apply; don’t do anything you wouldn’t do with a self-hosted database or monitoring stack.
Well the actual rules — who gets access to what
The rules themselves are the same public IAM rules documented by AWS, GCP, etc., while the collections of these public rules defined at the org level (e.g. the `storage_analytics_ro` example in the README) will likely be stored in two ways: 1) in a (presumably private) infra-as-code repo, most probably using the Terraform provider or a future Pulumi provider, 2) in the data store backing the service, which I talk about more below.
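To make option (1) a bit more concrete, here is a minimal sketch of what that workflow could look like from the command line. The repo URL and directory layout are placeholders, and the idea that `terraform apply` is what pushes the collections into the service’s backing data store is my assumption, not something spelled out in the README:

```sh
# Hypothetical workflow for managing rule collections as infra-as-code.
# The repo URL and paths below are placeholders.
git clone git@github.com:example-org/infra.git   # private infra-as-code repo
cd infra/satounki

terraform init    # fetches the Satounki provider alongside the usual AWS/GCP providers
terraform plan    # review changes to collections like storage_analytics_ro
terraform apply   # assumption: this is what syncs the collections into the backing data store
```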
“Who received access to what” is tracked in the runtime logs and audit logs. However, as this is a temporary elevated-access management solution, where anyone who is given access to the service can make a request that is then approved or denied, it is not the right place or tool for a general, long-lived least-privilege mapping of “this rule => this person/this whole team”.
where is that stored and how is it secured, to what standards?
This is largely up to the team responsible for the implementation and maintenance, just as it would be for a self-hosted monitoring stack like Prometheus + Grafana or a self-hosted PostgreSQL instance. You can have your data exposed through public IPs, FQDNs and buckets, or you can have everything completely locked down and only available through a private network; the same applies to Satounki.
Is there logging, audit, non-repudiation, tamper-proofing, time-stamping etc.?
Yes, yes, yes, yes and yes, though the degree of confidence in each of these depends on the competence of the people responsible for the implementation and maintenance of the service, as is the case with all things self-hosted.
If deployed in an organization which doesn’t adhere to at least a basic least-privilege permissions approach, there is nothing stopping a bad internal actor with Administrator permissions wherever this is deployed from opening up the database directly and making whatever malicious changes they want.
What sort of sensitive data are you imagining in your reading of the README? It would be useful to understand so that I can update the language appropriately 🙏
Thanks! Turns out I have a lot more time on my hands to be found around the internet since I got laid off last month 😅
This looks cool! It’s not packaged on nixpkgs yet so I might package it and then try to selfhost 👀
I wish I had more advice, but I’m in a similar boat, just got laid off earlier this month after being with the same company from Series A in 2018 all the way until today. I’m sending job applications and trying to get interviews, but it’s hard to get past the resume screening stage, even with 8+ years of experience.
I’ve mainly been working in DevOps/SRE/Platform Infrastructure, but I am also an accomplished developer with a pretty thick portfolio of widely used open source projects, though it doesn’t seem to matter.
There are so many applicants for every single job now that it feels hopeless, and of course every single opening wants you to waste your time on multiple asinine LeetCode gotcha questions.
If I lived somewhere with a public health system I’d love to take what money I have saved up and open a traditional Middle Eastern bakery, but I need to do something that will bring health coverage for myself and my family. Who knows, I might just end up working at Trader Joe’s. 🤷‍♀️
I think it’s a stack that really pays off in the long run for solo projects. After a long week of work the last thing I want to do is go tracking down runtime errors (`undefined is not a function`, my old friend) or messing around with Docker containers and Kubernetes clusters. It also doesn’t hurt that once you throw away the costly deployment abstractions, the operating expenses turn out to be a lot cheaper.
The whole point is that you can build a working container image and then ship it to a registry (including private registries) so that your other developers/users/etc. don’t have to build it themselves and can just run the existing image.
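As a rough sketch of that workflow, with placeholder image, tag and registry names:

```sh
# Build the image once (locally or on CI), then publish it to a registry;
# the image name, tag and registry host below are placeholders.
docker build -t myapp:1.2.3 .
docker tag myapp:1.2.3 registry.example.com/myapp:1.2.3
docker push registry.example.com/myapp:1.2.3

# Everyone else pulls and runs the prebuilt image instead of rebuilding it:
docker pull registry.example.com/myapp:1.2.3
docker run --rm registry.example.com/myapp:1.2.3
```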
Agreed, we still do this in the areas where we use Docker at my day job.
I think the mileage with this approach can vary depending on the languages in use and the velocity of feature iteration (i.e. if the company is still tweaking product-market fit, pivoting to a new vertical, etc.).
I’ve lost count of the number of times where a team decides they need to `npm install` something with a heavy `node-gyp` step to build native modules which require yet another obscure system dependency that is not in the base layer. 😅
We all use Linux on our workstations and laptops. That might make it easier.
You are living my dream!
I think this is the key piece; the experience of Docker on Linux (including WSL, if it’s not hooking into Docker Desktop on Windows) and on macOS is just so wildly different when it comes to performance, reliability and stability.
Thanks for sharing this! Added to my weekend inspiration/reading pile. 🙏
Highly recommended viewing if you’d like to learn more about the limits of reproducibility in the Docker ecosystem.
Tutorial != advocacy. As I said, no attempt to engage in good faith.
I understood your point, and while there are situations where it can be optional, at the scale of hundreds of developers who work almost exclusively on macOS and mostly don’t have any real `docker` knowledge, let alone enough to set up and maintain alternatives to Docker Desktop, the only practical option becomes paying the licensing fees to enable the path of least resistance.
Lots of (incorrect) assumptions here and generally a very poorly worded post that doesn’t make any attempt to engage in good faith. These are the reasons for what I believe is my very first downvote of a comment on Lemmy.
NixOS on WSL2 is actually my development environment of choice these days! (With my tiling window manager komorebi, of course! 😀)
I believe this is the Docker Desktop license pricing.
On an individual scale and even some smaller startup scales, things are a little bit different (you qualify for the free tier, everyone you work with is able to debug off-the-beaten-path Docker errors, knowledge about fixes is quick and easy to disseminate, etc.), but the context of this article and the thread on Mastodon that spawned it was a “unicorn” company with an engineering org comprised of hundreds of developers.
Hi!
First I’d like to clarify that I’m not “anti-container/Docker”. 😅
There is a lot of discussion on this article (with my comments!) going on over at Tildes. I don’t wanna copy-paste everything from there, but I’ll share the first main response I gave to someone who had very similar feedback to kick-start some discussion on those points here as well:
Some high level points on the “why”:
Reproducibility: Docker builds are not reproducible, and especially in a company with more than a handful of developers, it’s nice not to have to worry about a `docker build` command in the on-boarding docs failing inexplicably (from the POV of the regular Joe developer) from one day to the next (see the sketch below)
Cost: Docker licenses for most companies now cost $9/user/month (minimum of 5 seats required); at, say, 300 developers that works out to $2,700/month, or roughly $32,400/year. This is very steep for something that doesn’t guarantee reproducibility and has poor performance to boot (see below)
Performance: Docker performance on macOS (and Windows), especially storage mount performance, remains poor; this is even more acutely felt when working with languages like Node where the dependencies are file-count heavy. Sure, you could just issue everyone Linux laptops, but these days hiring is hard enough without shooting yourself in the foot by not providing a recent MBP to new devs by default
I think it’s also worth drawing a line between containers as a local development tool and containers as a deployment artifact, as the above points don’t really apply to the latter.
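To illustrate the reproducibility point above, here is a minimal, contrived sketch (the image tag and package names are just examples, not taken from any real on-boarding doc) of why the same `docker build` can behave differently from one day to the next:

```sh
# The same Dockerfile can resolve different base-image and package versions
# over time, so the build isn't reproducible by default.
cat > Dockerfile <<'EOF'
# "node:20" is a moving tag, not a pinned digest
FROM node:20
# pulls whatever package versions the mirrors happen to serve today
RUN apt-get update && apt-get install -y build-essential
EOF

docker build -t onboarding-example .   # may work today and fail next week
```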
I’m not an open source guy; redistribution restrictions (as well as restrictions on corporate and commercial use) are non-negotiable for me. You’re welcome to learn from the source code, and anyone is free to fork and make whatever changes they want for personal use.
The license history for this project goes MIT > PolyForm Strict > a forked PolyForm Strict that explicitly allows changes for personal use (named the “Komorebi” license, as changing the text of PolyForm licenses requires removing the PolyForm trademark).
If anyone is interested in the story behind the initial MIT > PolyForm Strict switch, the tl;dr is that I decided to explicitly restrict redistribution after someone did a rename of the project and started selling it on the Windows Store. A lot has happened since then that has changed my views on open source in general.
OSI licenses are not “standard” by any stretch of the imagination, and I personally don’t want to have anything to do with licenses which would permit the use of my software in the mass murder of children.