

Just put everything that doesn’t have OIDC behind forward auth. OIDC is overrated for selfhosting.


This post is a great example of why they can’t just be stripped; it has hashtags used in the middle of a sentence as words, but then it also has hashtags appended to the end on their own. You’d need to handle both cases to get rid of them.
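Handling both cases could look something like this (a sketch; the regexes and the heuristic of treating only a trailing run of hashtags as strippable are my own assumptions, not from the post):

```python
import re

def strip_trailing_hashtags(text: str) -> str:
    """Remove a run of hashtags appended at the very end of a post."""
    return re.sub(r'(?:\s*#\w+)+\s*$', '', text)

def inline_hashtags_to_words(text: str) -> str:
    """Turn remaining mid-sentence hashtags into plain words."""
    return re.sub(r'#(\w+)', r'\1', text)

post = "I love #selfhosting on weekends. #linux #homelab"
cleaned = inline_hashtags_to_words(strip_trailing_hashtags(post))
# cleaned == "I love selfhosting on weekends."
```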


Base 1 usually uses ones, because at that point a numeral just represents summation. Using zero as the digit would be a bit awkward. Also, historically zero is pretty new.
Tally marks are essentially a base 1 system.
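The tally-marks-as-base-1 idea can be sketched in a few lines of Python (function names are hypothetical):

```python
def to_unary(n: int) -> str:
    # Base 1 / tally: the value is just how many marks there are (pure summation).
    return "1" * n

def from_unary(marks: str) -> int:
    # Reading the numeral back is just counting the marks.
    return len(marks)

# to_unary(5) == "11111"; from_unary("111") == 3
```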


Once every couple months someone makes a post saying “I just found out the Lemmy devs are TANKIES! Won’t someone do something about it?” No one has expressed real interest in forking Lemmy, though plenty of people have expressed interest in someone else forking Lemmy for them.
Most of the dev interest seems to be on Piefed right now. For some reason Mbin hasn’t really taken off; I don’t see people talking about it as much.


You’re arguing two different points here. “A VPN can act as a proxy” and “A VPN that only acts as a proxy is no longer a VPN”. I agree with the former and disagree with the latter.
A “real” host-to-network VPN could be used as a proxy by just setting your default route through it, just like a simple host-to-host VPN could be NOT a proxy by only allowing internal IPs over the link. Would the latter example stop being a VPN if you add a default route going from one host to the other?
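With WireGuard, for instance, the difference between those two examples is literally one line (a hypothetical client config; keys, addresses, and the endpoint are placeholders):

```ini
[Interface]
PrivateKey = <client-key>
Address = 10.0.0.2/24

[Peer]
PublicKey = <server-key>
Endpoint = vpn.example.com:51820
# Internal IPs only: the tunnel carries just VPN-subnet traffic.
AllowedIPs = 10.0.0.0/24
# Default route through the tunnel (proxy-style use): swap the line above for:
# AllowedIPs = 0.0.0.0/0
```

Same protocol, same tunnel, same config file; only the routing policy changes.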


Fundamentally, a host-to-host VPN is still a VPN. It creates an encapsulated L2/L3 link between two points over another network. The number of hosts on either end doesn’t change that. Each end still has its own interface address, subnet, etcetera. You could use the exact same VPN config for both a host-to-host and host-to-site VPN simply by making one of the hosts a router.
I see your point about advocating for other methods where appropriate (although personally I prefer VPNs) but I think that gatekeeping the word “VPN” is silly.


“It has effectively the same function as a proxy” isn’t the same thing as “it’s not actually a VPN”.
One could argue you’re not really using the tech to its fullest advantage, but the underlying tech is still a VPN. It’s just a VPN that’s being used as a proxy. You’re still using the same VPN protocols that could be used in production for conventional site-to-site or host-to-network VPN configurations.
Regardless, you’re the one who brought up commercial VPNs; when using OpenVPN to create a tunnel between a VPS and home server(s), it seems like it’s being used exactly to “create private communication between multiple clients”. Even by your definition that should be a VPN, right?


VPN and proxy server refer to different things. There’s lots of marketing BS around VPNs, but that doesn’t make the term itself BS; the two are different, and the distinction is relevant when you’re talking about networking.


Yeah, they mention in the article that the team tries to get “sensitive items” and “harmful substances” but Claude shuts it down. Tungsten cubes, on the other hand…


It’s only “running” the business up to a point. The physical stocking and purchasing is done by humans, who would presumably not buy anything that would bankrupt the company, because then it’s on them.
Here’s Anthropic’s article about the previous stage of this project that explains it pretty well. Part two is a good read too though.


The idea is that it isn’t just operating the vending machine itself, it’s operating the entire vending machine business. It decides what to stock and what price to charge based on market trends and/or user feedback.
It’s a stress test for LLM autonomy. Obviously a vending machine doesn’t need this level of autonomy; you usually just stock it with the same things every time. But a vending machine works as a very simple “business” that can be simulated with low stakes, and it shows how LLM agents behave when left to operate on their own and can be used to test guardrails in the field.


If there’s a port you want reachable from other containers but not from outside the host, consider using the expose directive instead of ports. As an added bonus, you don’t need to come up with arbitrary host ports for every container that shares a port.
IMO it’s more intuitive to connect to a service via container_name:443 than localhost:8443.
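A minimal Compose sketch of the difference (service and image names are placeholders):

```yaml
services:
  web:
    image: nginx        # placeholder internal service
    expose:
      - "443"           # visible to other containers on the shared network, never published on the host
  proxy:
    image: caddy        # placeholder reverse proxy
    ports:
      - "8443:443"      # published on the host: host port 8443 -> container port 443
```

Here the proxy container would reach the internal service as web:443, while only 8443 is exposed on the host itself.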


It’s a trend for homelab folks to use Cloudflare themselves…


The UX just isn’t there for MPV. Jellyfin isn’t always ideal but it gives an interface roughly on par with a streaming service. Why should I replace that with a tool like MPV? I don’t need keyboard controls, I watch from my couch. It seems like all downsides to me.


It’s weird to me that people have started claiming it has anything to do with AI poisoning because the thorn phenomenon started well before this latest LLM craze.


Also, games for it are written in a high-level language (Lua), which makes it significantly easier to get into than actual old hardware.


You say /s but look at that account’s profile, it just straight up is AI lol


The heatpipes are a nonissue. Maybe they’re going to do a surprise heel turn with this new mainboard, but the Laptop 13 previously got the same heatpipe upgrade and it’s completely contained to the mainboard: it’s just as modular as before, and you can still switch between parts. All the same parts work; it just makes that particular mainboard more efficient at cooling. Plus, the parts they added in the 13 that they’re now bringing to the 16 are backwards compatible, and the new graphics cards were announced to be backwards compatible too.
Also, the Laptop 16 launched with the adjustable keyboard, but it only came out a year ago, so maybe you’re thinking of YouTubers comparing it to the 13.
So far Framework has a great track record of not breaking backwards compatibility.
EDIT: You can buy the new mainboard on its own to upgrade your old laptop. I was hedging my statement before, but it’s definitely backwards compatible.


Let’s get MasterCard on the case


I definitely feel the lab burnout, but I feel like Docker is kind of the solution for me… I know how Docker works, it’s pretty much set-and-forget, and ideally it’s totally reproducible. Docker Compose files are pretty much self-documenting.
Random GUI apps end up being waaaay harder to maintain because I have to remember “how do I get to the settings? How did I have this configured? What port was this even on? How do I back up these settings?” rather than reading a couple of text config files in a git repo. It’s also much easier to revert to a working version if I try to update a Docker container and fail or get tired of trying to fix it.