

They’re raising it because of RAM needs of browsers and GNOME.
If you’re a shell nerd like me, you’ll still be fine running it on a potato.


Untrackable shrapnel moving at up to 18,000 miles per hour…


Macrohard Window


KDE turned my MacBook plasma!


Given the history of bugs in the RDNA architecture, wouldn’t it be reasonable to assume these newer FSR features simply don’t run on the older architecture?
They are ML models, after all. FSR throughput doubled from RDNA 3 to RDNA 4, so I wouldn’t be surprised if these new models need more CUs than are available on previous cards.
Seems silly to grab pitchforks right now based on a rumour with little substance.


You should try posting in !tf2@lemmy.world


I doubt they’re being ignored so much as it’s much easier to prosecute crimes where there is concrete evidence. Al Capone wasn’t convicted for his countless murders, but for tax evasion.
At least once they’re incarcerated, it’s not impossible to try them on other charges.


More importantly: good thing open source drivers exist.


Yup! Still the default on HP-UX too!

It already is pretty rampant, however most Linux admins have minimal if any detection strategy.
Additionally, while there are plenty of binaries about, like VoidLink, almost all campaigns against Linux hosts target SSH or RCE vulnerabilities, and deliver shell scripts that orchestrate the attack.
Why compile a binary when the shell has everything you need? The threat models are pretty different between Windows and the *nix world.
When you look at botnet composition, they’re usually made up of outdated Linux hosts with SSH open with password-based authentication.
Seriously people, switch to key-based auth and disable password auth entirely.
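For anyone wanting to do this, it’s a couple of lines in sshd_config (directive names are standard OpenSSH; your distro’s defaults and include files may vary):

```
# /etc/ssh/sshd_config — keys only, no passwords
PasswordAuthentication no
KbdInteractiveAuthentication no
PubkeyAuthentication yes
PermitRootLogin prohibit-password
```

Copy your key over first (`ssh-copy-id user@host`), reload sshd, and test a new session before closing your current one, or you can lock yourself out.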


It’s almost as if they factored the cost of licensing a third-party IP into the price.


If Epic spent half as much money developing their shop into a gaming community platform like Steam as they spend suing organisations, they’d probably have caught up by now.


I saw one game that was a 5 minute black screen with someone talking teenage-level philosophy. There was a handful of clicks to make in the whole playthrough.
Steam has a lot of low-quality games. The volume of stuff shipped with Synty Studios assets from Humble Bundles is crazy.
Indie games are doing great. Shovelware is doing meh as it always should be.


Most corporate-owned devices are managed with some kind of tool (for restricting what users can do, pushing out software and updates, etc.). These tools are called Mobile Device Management (MDM) tools.
The developer is detecting the presence of MDM tools and using that to present a splash page to the user about the licensing requirements, etc.
Some educational institutions use MDM to manage students’ machines, even going so far as to require it be installed on personally owned devices. The developer has been working with edu users to exempt them.
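A minimal sketch of how such a detection might work on macOS, just probing for well-known agent install paths (the vendor paths below are assumptions for illustration; a real implementation would query enrollment status through the OS instead):

```python
from pathlib import Path

# Common install locations for a few popular MDM agents
# (illustrative guesses, not an exhaustive or authoritative list).
MDM_AGENT_PATHS = [
    "/usr/local/bin/jamf",                         # Jamf Pro
    "/Library/Intune/Microsoft Intune Agent.app",  # Microsoft Intune
    "/Library/Kandji/Kandji Agent.app",            # Kandji
]

def is_managed(paths=MDM_AGENT_PATHS) -> bool:
    """Return True if any known MDM agent appears to be installed."""
    return any(Path(p).exists() for p in paths)

if is_managed():
    print("Managed device detected: show the licensing splash page")
```

The edu-exemption case would then just be an allowlist checked before this probe.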


I don’t think you understand what “the engine supports saving at any time” entails.
Having the ability to serialise objects is not the same as handling the input and output of serialisation.
You might as well ask why all developers don’t let us rewind time in games. Load from our last save? No thanks, developers are so disrespectful of our time. They just need to log every change that happens and play it backwards. Every engine supports that!


I recommend you try implementing a save feature in a game engine; you might then have a little more respect for the difficulty of the problem you’re irritated by.
Developers aren’t being unthoughtful or lazy; you’re just trivialising a rather complex software engineering problem that isn’t easy to solve, and where one solution over another has trade-offs and weaknesses.


I understand why people want the feature, and I agree it’s amazing, but the reason lots of games don’t offer it is that it requires the serialisation of basically everything in the game, and that can be a nightmare to maintain if you’re making lots of big changes throughout development. You have to go back and rework your save code every time you change anything, and ensure older parts of the code still conform if you need to change how saving works (meaning you touch the same code over and over, rather than implementing a feature once and moving on).
And while it can be done with most major engines via plugins, if you’re creating your own structures/object types etc., you need to extend that plugin to support them (and maintain that code every time you change those structures).
Emulators simply save the state of memory; since many older consoles didn’t have much RAM to begin with, dumping it entirely to disk isn’t that big of a deal (especially if only a subset of registers matters for game state). Not so easy with modern games, where there is a lot more going on in RAM.
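As a toy illustration (sizes and register names invented), an emulator-style save state is just a byte-for-byte copy of RAM plus the registers, with no per-object logic at all:

```python
# Emulator-style save state: snapshot all of RAM and the CPU registers.
ram = bytearray(64 * 1024)                    # e.g. a 64 KiB 8-bit console
registers = {"pc": 0x8000, "sp": 0xFF, "a": 0}

def save_state():
    # A full copy of memory and registers *is* the entire save file.
    return bytes(ram), dict(registers)

def load_state(state):
    saved_ram, saved_regs = state
    ram[:] = saved_ram                        # restore memory byte-for-byte
    registers.update(saved_regs)
```

No game code has to know this is happening, which is exactly why it doesn’t scale to a modern title with gigabytes of live state.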
Games that have a daily tick, like Stardew Valley, only need to store the set of initialisation values used to begin the day, since no other changes have been made to state yet (the player hasn’t done anything that day). Or checkpoints, where you serialise player state, quest state, etc., with enemy locations etc. ignored and respawned at defaults the next time you play.
Think of it this way: if an enemy spawns in a default location, that doesn’t need to be serialised when you load from a checkpoint. But if you can save anywhere? Well, then you need to know the enemies, their positions, their vectors, their AI state (alerted etc.), their velocity, their position in the animation timeline, and potentially so much more. If you save mid-explosion while boxes are flying all over the place, you need to serialise far more data to resume the physics simulation.
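A toy sketch of that difference (all class and field names invented for illustration): the checkpoint save can simply omit enemies, while the save-anywhere version has to capture each one exactly.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class Enemy:
    kind: str
    position: tuple
    velocity: tuple = (0.0, 0.0)
    ai_state: str = "idle"     # e.g. idle / alerted / attacking
    anim_time: float = 0.0     # position in the animation timeline

def checkpoint_save(player, quests):
    # Enemies respawn at defaults on load, so they're simply omitted.
    return json.dumps({"player": player, "quests": quests})

def save_anywhere(player, quests, enemies):
    # Every live enemy must be captured exactly, mid-animation and all.
    return json.dumps({
        "player": player,
        "quests": quests,
        "enemies": [asdict(e) for e in enemies],
    })
```

Every new field added to `Enemy` during development silently grows the save format, which is the maintenance burden described above.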
And what about multiplayer? That’s additional players, whose state and surroundings etc. need to be serialised, and then deserialised successfully at load.
Then there’s how you serialise. Do you go with a text format like JSON, which can get incredibly large if there are a lot of things to serialise? Or do you make a custom binary format to compress the size, but then you need to maintain that format and how you map to and from it in your engine?
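A quick sketch of that trade-off (the record and its field layout are invented): the same data packed as fixed-width binary is a fraction of the JSON size, but the byte layout becomes a format you now have to version and maintain yourself.

```python
import json
import struct

entity = {"id": 7, "x": 12.5, "y": -3.25, "hp": 180}

as_json = json.dumps(entity).encode()
# "<IffH": little-endian uint32 id, two float32 coords, uint16 hp
as_binary = struct.pack("<IffH",
                        entity["id"], entity["x"], entity["y"], entity["hp"])

print(len(as_json), len(as_binary))  # the binary record is several times smaller
```

Note the binary side has no field names at all: change the struct format string and every old save file on disk stops parsing unless you version it.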
It’s a lot easier if you don’t have to serialise the state of a huge number of things, and then maintain that save code every time you make changes. It’s not impossible, and if you build with the feature in mind, it can be made manageable to maintain.
But if that feature isn’t essential to your game, and you’ve an acceptable alternative, it frees you up to work on other features instead.
It’s a balancing act. And for a solo developer like that of Stardew, I can completely forgive them for not wanting to implement it.


Gabe Newell talked about this years ago.
“When you look at the fact that these people have $2000 PCs and they’re spending $50 a month or more on their Internet connections, clearly they’re willing to spend money.
So, from our point of view, what we saw more and more was that piracy is a result of bad service on the part of game companies…”


You had McDonalds? That was just a farm in my day. Eee-eye-eee-eye-oh!