

“Boromir would have invaded Iran and Iraq”


Set up a job to write the file names of everything in your file system to a text file and make sure that text file gets backed up. I did that on my Unraid server for years in lieu of fully backing up the whole array.
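A minimal sketch of such a job (the paths are assumptions for an Unraid-style setup, not anything specific; on Unraid this could run from cron or the User Scripts plugin):

```shell
#!/bin/sh
# Hypothetical nightly job: record the name of every file on the array so you
# at least know what existed if the disks die. SRC and OUT are placeholders.
SRC="${SRC:-/mnt/user}"                  # array mount point (assumption)
OUT="${OUT:-/boot/config/filelist.txt}"  # somewhere that gets backed up (assumption)
find "$SRC" -type f > "$OUT"
```

The text file is tiny compared to the data it describes, so it rides along with any normal backup for free.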


I’m sure this is a big, professional, intelligence operation, but I still can’t help but think it’s just a LAN party.


He thought it was “oaf of office” and assumed he was fine.


For a guy that’s always crowing about making deals, he sure seems unhappy about people making deals.


They should rename it Orangeland.


I did a sewage treatment plant tour in my high school biology class. At the end of the second-stage filtration, the worker pointed to where it discharges into the ocean.
“So at this point, the water has been treated enough that it’s safe to drink”
We all scrunched our faces at that. Then he added
“But I wouldn’t”


Is that before or after Kupiansk?


This is a good compromise. When I was tight on backup space, I just had a “backup” script that ran nightly and wrote all the media file names to a text file and pushed that to my backup.
It would mean tons of redownloading if my storage array failed, but it was preferable to spending hundreds of dollars I didn’t have on new hardware.
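A rough sketch of what that nightly script might look like (the host name and paths are made up, and `-printf` assumes GNU find):

```shell
#!/bin/sh
# Hypothetical "backup": push a list of media file names offsite instead of
# the media itself. backup-host and both paths are placeholders.
find /mnt/media -type f -printf '%P\n' > /tmp/media-list.txt
rsync -az /tmp/media-list.txt backup-host:media-list.txt
```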


Holy shit, this has every cert I’ve ever generated or renewed since 2015.


Well that was fun! I’m confident this project isn’t malicious. It’s for sure coded using AI, and I think that’s what triggered the smear campaign. Judging by this removed Reddit post, there’s just a downvote brigade out to get the project because the author admitted to using AI.
The only network traffic it made while I monitored it was local. Certainly nothing went to Asia.
I think it tries to solve a neat problem. There are so many features packed in that it’s obviously vibe coded. That’s probably a huge turn-off for AI detractors. If you don’t care about that, I think you’re safe to give it a try.


Ok, so I ran the repo through an LLM to look for any suspicious requests, and it came back clean.
But it’s hella suspicious that the repo owner edited away the issue and closed it without a response.
It’s also hella suspicious that the user that reported that issue created their account yesterday.
I think I need to go with the nuclear option: pop a gummy, monitor the network traffic of the container, and see what it’s doing.
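One way to do that without touching the container itself (`suspect-app` is a placeholder name; `nicolaka/netshoot` is a common network-debugging image) is to attach tcpdump to the container’s network namespace and filter out destinations on the LAN:

```shell
# Share the suspect container's network namespace and watch for any packet
# headed somewhere other than private (RFC 1918) address space.
docker run --rm -it --net container:suspect-app nicolaka/netshoot \
  tcpdump -n -i eth0 \
  'not dst net 192.168.0.0/16 and not dst net 10.0.0.0/8 and not dst net 172.16.0.0/12'
```

If that capture stays silent while the app does its thing, it isn’t phoning home.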


Ohh that’s suspicious. I’m going to kill mine for now and take a look later tonight. I’ll report back if I find anything interesting!


I think the author literally released it like 2 days ago, which is why there are no issues or PRs yet.
I installed it yesterday and have only fiddled around a little bit. I like that it pointed out a bunch of health issues with my Lidarr library and have been stuck on a side quest dealing with those.
If you want to explore it and see if anything seems malicious to you, I’d focus on code making requests, and review the sub-dependencies to see if any look sus. It should live entirely in your network and shouldn’t be making any external requests outside your server apart from the connections you set up (like last.fm).
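For a first pass over the checked-out repo, something like this can surface every place the code could make a request (the pattern list is just a guess at common HTTP client names, not exhaustive):

```shell
# Search the repo for anything that looks like an outbound network call.
grep -rnE 'requests\.|urllib|http\.client|fetch\(|axios|XMLHttpRequest' .
```

Anything that matches and isn’t pointed at a service you configured yourself deserves a closer look.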


I mean they gave it to me a few years ago and I sure didn’t deserve it.


“You are now in line
We are currently experiencing extremely high volume of search requests at this time. We have placed you in a waiting queue and we will process your search request as soon as we can. Thank you for your patience.”
Oh I love this. This is like Taylor Swift Ticketmaster level interest. Can’t wait to see what people start finding over the next few days.


I’m just using Unraid for the server, after many iterations (PhotonOS, VMware, bare-metal Windows Server, …). After so many OSes, partial and complete hardware replacements, and general problems, I gave up trying to manage the base server too closely. Backups are generally good enough if hardware fails or I break something.
The other side of this is that I’ve moved to having very, very little config on the server itself. Virtually everything of value is in a Docker container, with a single (admittedly way too large) docker compose file that describes all the services.
I think this is ideal for how I use a home server. Your mileage may vary, but I’ve learned the hard way that it’s really hard to maintain a server over the very long term without also marrying yourself to a specific hardware and OS configuration.
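As an illustration (service names and images below are generic placeholders, not my actual stack), the whole setup reduces to compose entries like these, with all state kept under one backed-up directory:

```yaml
# docker-compose.yml — everything of value is declared here.
services:
  media-server:
    image: example/media-server:latest   # placeholder image
    volumes:
      - /mnt/user/appdata/media-server:/config
    restart: unless-stopped
  dashboard:
    image: example/dashboard:latest      # placeholder image
    ports:
      - "8080:80"
    restart: unless-stopped
```

Rebuilding after a hardware failure then amounts to restoring the appdata directory and running `docker compose up -d`.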


“extension developers should be able to justify and explain the code they submit, within reason”
I think this is the meat of how the policy will work. People can use AI or not. Nobody is going to know. But if someone slops in a giant submission and can’t explain why any of the code exists, it needs to go in the garbage.
Too many people think because something finally “works”, it’s good. Once your AI has written code that seems to work, that’s supposed to be when the human starts their work. You’re not done. You’re not almost done. You have a working prototype that you now need to turn into something of value.


Calculator apps should have achievements, like “Pressed clear 5 times”, and “You could have done this in your head”.


People said they monkeyed around.