he/him

Alts (mostly for modding)

@sga013@lemmy.world

(Earlier also had @sga@lemmy.world for a year before I switched to @sga@lemmings.world, now trying piefed)

  • 0 Posts
  • 12 Comments
Joined 7 months ago
Cake day: March 14th, 2025

  • sga@piefed.social to Open Source@lemmy.ml · Helium Browser

    Most of the comment section is hating on it being a Chromium-based browser and not really answering the question, so let me try.

    (Partially irrelevant bit, you can skip it if you want to.) I have been using it for about a week. Before this, I was using qutebrowser (QtWebEngine, which is essentially an older LTS Chromium) for nearly a year and discussing with someone how I definitely should not be using such an old browser. So I am trying out “mainstream browsers” again. I went with Helium because the “someone” also recommended it. I was using LibreWolf for more than a year before qute, and did not like the performance (especially in my case, having keyboard navigation with something like Vimium or Tridactyl). Another reason is that I wanted to try something Chromium (proper) after a long time.

    What it is: if you have heard of the ungoogled-chromium project, this project builds on that and adds some UI/UX features. For example, in ungoogled-chromium you cannot download extensions from the Chrome Web Store; you have to use a separate extension and essentially “sideload” them. They (Helium) have made a middleman service (open, you can host your own instance) which you can use to get a nearly Chrome-like experience. They also ship with uBlock Origin (the proper Manifest V2 version, which is now deprecated in other Chromium browsers). Other than that, it is almost stock Chromium.

    Trustworthiness? Cannot really comment on that. I know the devs behind this browser have also made the “cobalt.tools” website (imagine yt-dlp, but written from scratch and based on web tech (JS)), so they have some cred from that. Other than that, the team is likely very small, and trustworthiness essentially boils down to: do you trust their work? You can check their patches on GitHub. If you want to, you can try to build from source and patches (building Chromium is nightmarishly long). If you use their binary packages (which I am currently doing), then you are putting trust in them (remember the xz situation?). In case they are using something like GitHub Actions to generate their builds, you can check the build files and artifacts as well.


  • sga@piefed.social to Open Source@lemmy.ml · Helium Browser

    unbiased ad-blocking

    In this case, this just means they are using uBlock Origin with the default filter lists. My guess for their wording is that they are not doing something like Brave (where you partially see ads) or like Edge and other Chromium browsers which use some very light form of ad blocking, which of course does not work on their own websites.

    I’d prefer it be an extension

    It is. They ship the Manifest V2 (full) version of uBlock Origin out of the box.

    Isn’t BSD a sharealike license? So they can’t not

    No. BSD (I think Chromium is 3-clause, but not sure) is just as open a license as MIT or GPL (minus the copyleft of GPL). And the core(ish) bits of Chromium are LGPL (not sure; I am talking about Blink).


  • It is an evolution in several senses. One is a more modern language. That in and of itself is not useful, but in this case the Rust ecosystem provides a lot of good libraries to use, so instead of depending on glibc or some other library, or hand-rolling your own stuff, you can statically compile in good libs. This allows for potentially leaner code.

    In some places they intentionally deviate from the GNU variants; for example, uu-cp and uu-mv have a -g graphical flag, which the GNU variants did not accept because they consider themselves feature complete.

    I have pull requests against uutils so I’m by no means anti-Rust or against the project

    (I read this part later, and am just now realising you are a better dev than me, and thank you.)

    But I personally would not replace coreutils with it.

    Feel free to do as you like.


  • I do not like this attitude towards uutils. Phoronix makes a very clickbaity title, and the comments shit on uutils, Rust and Ubuntu.

    Last time it was “extremely slow” (17x), and by the time most people were reporting it, a pull request had already been made and merged which brought the SHA function to within 2x of the GNU version. Not ideal, but definitely not worth reporting.

    Then it was that sort cannot sort big files, which came from an artificial benchmark of a 4 gigabyte file with a single line consisting entirely of the character ‘a’ (not sure if it was ‘a’ or ‘0’ or something, but that is not relevant). The GNU version finished in ~1 sec, and the Rust version could not. You cannot sort a single line, it is already sorted, so there is some check which uutils is missing and which could easily be added. But no, we must shit on uutils and Rust because they are trying.
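
    To make that missing check concrete, the fast path being described amounts to something like this (a sketch of the idea in Python, not uutils’ actual Rust code):

```python
# Sketch of the "already sorted" fast path: if the input has at most one
# line, just echo it back instead of buffering and sorting gigabytes.
import sys

def sort_lines(stream):
    it = iter(stream)
    head = [line for _, line in zip(range(2), it)]  # peek at up to two lines
    if len(head) < 2:
        # 0 or 1 line: nothing to sort, write it out unchanged
        sys.stdout.writelines(head)
        return
    # otherwise fall back to a real sort of everything
    sys.stdout.writelines(sorted(head + list(it)))

if __name__ == "__main__":
    sort_lines(sys.stdin)
```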

    In this case, some md5 errors happen, but apparently the problematic part is not md5 but dd (actual bug report: https://bugs.launchpad.net/ubuntu/+source/makeself/+bug/2125535).

    I am not saying uutils is a perfect project, but GNU coreutils is nearly 4 decades old, whereas uutils is less than a decade old (yes, the project did not start last year). There are bugs which need to be ironed out, and testing it in a non-LTS distribution is the best way to do that.


  • It feels like an XY problem. If I am not wrong, single-bit corruption leaving a file unextractable is a bit wild, and my guess is that its headers were blown.

    As for general stuff, use a file system which does parity calculations and such, or use something like RAID to have redundant drives (you can set it up so that, say, 1 drive in 5 can fail, or 2 in 5, but the more you allow to fail, the less usable space you have). Or have really simple backups.

    As to physical media: do not go flash based (SSDs/SD cards/USB pen drives) if you want to leave them unpowered; they expect to be powered once every few months. They are effectively RAM disks, just much more stable. Hard disk drives are better, but handle physical shocks much worse: you can drop an SSD and expect it to work, whereas for a hard disk it is almost game over. Magnetic tape is better still; it is much less data dense, but it is cheap.

    I would assume it’s lossless

    Yes. These are lossless algorithms.

    Now coming to compression: practically no compression format deals with bit corruption. Practically all compression formats aim for small size and/or fast (de)compression, and from that point of view storing parity bits is wasteful.

    If you can install something, try, for example, https://github.com/Parchive/par2cmdline. You give it whatever file (compressed or not), and it will generate parity data so you can repair it. So use whatever compression you want, and prepare parity data for the worst case.
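
    To show what parity data buys you, here is a toy sketch of the mechanism (par2 itself uses Reed-Solomon codes and can repair multiple damaged blocks; this is only a single XOR parity block, which can rebuild one block you already know is bad):

```python
# Toy parity example: one XOR parity block over equal-sized data blocks
# lets you rebuild any single block that is known to be bad.
def make_parity(blocks):
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def rebuild(blocks, parity, bad_index):
    # XOR of the parity with all the good blocks gives back the bad one
    fixed = bytearray(parity)
    for j, block in enumerate(blocks):
        if j == bad_index:
            continue
        for i, b in enumerate(block):
            fixed[i] ^= b
    return bytes(fixed)

data = [b"hello wo", b"rld, thi", b"s is blk", b"number 4"]  # equal-sized blocks
parity = make_parity(data)
data[2] = b"????????"            # pretend block 2 rotted on disk
print(rebuild(data, parity, 2))  # recovers b"s is blk"
```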

    As to which compression algorithm: zip or gzip (deflate), bzip2 (or the newer bzip3), xz (LZMA in general), zstandard (or the older lz4) and brotli are practically not going anywhere. Most distros use them, they are used on the web, and in many other places. My favorite is zstandard, as it gives great compression and is extremely fast.

    Do any of them have the ability to recover from a bit flip or at the very least detect with certainty whether the data is corrupted or not when extracting?

    No and no.
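
    If you only need the “detect” half, the usual workaround lives outside the compression format: keep a checksum next to the archive and verify it before extracting (par2 verification effectively does this too). A minimal sketch, with a hypothetical file name:

```python
# Sidecar checksum: detects corruption with overwhelming probability,
# but cannot repair anything; repairing is what parity data is for.
import hashlib

def file_sha256(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# after creating the archive (hypothetical name):
# open("backup.tar.zst.sha256", "w").write(file_sha256("backup.tar.zst"))

# before extracting, verify it still matches:
# assert file_sha256("backup.tar.zst") == open("backup.tar.zst.sha256").read()
```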

    You should also consider the file archive format. For example, zip (the format, not the algorithm) or tar are effectively standards, stable, and practically here forever. There are mountable ones like squashfs (also fairly common; most Linux distros use it for live images) and dwarfs (not yet a standard; imagine squashfs but also deduplicating).

    Do compression formats exist (or can they exist) which correct for bit flips? Yes.

    If your goal is that a single bit flip should not ruin things, you should probably not look into deduplicating formats: they reduce the number of bits stored, but a single stored chunk then backs many files, so one bit flip can corrupt more files at once.

    Now coming to another part: do you want to compress data? If so, why?

    When you compress data, you are literally reducing the number of bits. Now imagine that all bits on your disk are equally likely to undergo bitrot; if so, compressing makes your files less likely to rot at all.
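
    Back-of-the-envelope version of that argument (the per-bit probability is made up, just to show the direction of the effect):

```python
# If every stored bit flips independently with the same tiny probability p,
# then storing fewer bits lowers the chance that any flip happens at all.
import math

p = 1e-15                               # made-up per-bit corruption probability
bits_raw = 8 * 10**9                    # a 1 GB file
bits_compressed = int(bits_raw * 0.4)   # compressed to 40% of original size

for n in (bits_raw, bits_compressed):
    p_any = -math.expm1(n * math.log1p(-p))   # P(at least one flip)
    print(f"{n:>12} bits stored -> P(any flip) ~ {p_any:.2e}")
```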

    But as you have also said, it is possible that if compressed files do corrupt, the corruption is more catastrophic (in a plain text file a flip may just mean a character mutated, or in an image some color changed; hardly problematic).

    So you should also check: is compression worth it? Come up with a number, let's say 90%. If the compression algorithm only reduces the file size to 0.93 of the original, do not go for compression; if it gets below your threshold, do compress. I am not saying pick 90%, just decide on a number you are content with.
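
    A rough sketch of that check (stdlib lzma is used only because it ships with Python; substitute whatever compressor you actually plan to use, and the sample file name is hypothetical):

```python
# "Is compression worth it" check: compress a representative sample and
# compare the resulting ratio against your chosen threshold.
import lzma

def worth_compressing(data, threshold=0.90):
    ratio = len(lzma.compress(data)) / len(data)
    print(f"compresses to {ratio:.0%} of original size")
    return ratio <= threshold

with open("sample_from_backup.bin", "rb") as f:   # hypothetical sample file
    print("compress it" if worth_compressing(f.read()) else "skip compression")
```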

    Here is a stupid idea: if compression reduces file size by 2x, then compress and make yet another copy. Now even if one copy is corrupted, you have a pristine copy, and you used no more space than the original.





  • Other than memory speed, there is one more blocker: your CPU (ish). Live USBs do not store the raw image uncompressed; it would be much larger in size. Instead, the file system used (usually squashfs) is compressed (usually zstd at the default level 3, but it could be lz4, xz, etc.). Whenever a file is loaded, it is first decompressed, and if you have enough memory, you can try the “load to RAM” (or “to memory”; wording may differ) option, where the important parts of the image are first decompressed and stored in memory, resulting in better performance. Now most CPUs are fast enough to decompress, so the limiting factor is still likely your storage read speed (the USB x.y standard, and whether it stably runs at that speed or is thermally throttled), but if you are on a faster underlying source, it can make a difference.
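
    If you want a feel for which side is the bottleneck on your machine, one crude check is to time decompression of an in-memory buffer and compare the throughput against your USB stick’s read speed (stdlib lzma stands in here; squashfs images are more commonly zstd, which decompresses much faster):

```python
# Crude CPU-side check: compress a buffer in memory, then time how fast it
# decompresses. Compare the MB/s against the read speed of the USB stick.
import lzma, os, time

raw = os.urandom(8 * 2**20) + bytes(24 * 2**20)   # 32 MiB, partly compressible
blob = lzma.compress(raw, preset=1)

start = time.perf_counter()
lzma.decompress(blob)
elapsed = time.perf_counter() - start
print(f"decompressed {len(raw) / 1e6:.0f} MB in {elapsed:.2f} s "
      f"({len(raw) / 1e6 / elapsed:.0f} MB/s)")
```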

    Anecdotally, I use squashfs to compress most things I keep, and it is fast enough for most purposes, but I have observed that for benchmarks, especially single-threaded ones, there is a significant difference: in Geekbench 6, my single-core score when running from the compressed image was close to 0.6 times the score from an uncompressed copy or from memory; the all-core score was nearly 0.85 times the uncompressed/memory score. Would you realistically feel the difference? No, in my opinion. I even have file-system-level compression (btrfs, zstd, level 3), and I do not feel a significant difference.




  • I remember their appointment from Brodie’s video on this, and IIRC they were someone who had been in the community for ~20 years, and they also had some kind of vision disability (not 100% sure), because of which there was hope that GNOME would invest more on the accessibility front. To GNOME’s credit, they do have the best accessibility among the Wayland folks, but this feels too soon. If it had been a year or two, it would have made sense (they were already an older person, so time was not favoring them), because it is reasonable for such posts to change in that timeframe. The previous director also left after 5-6 months, and now this person after 4; it really says something about GNOME’s board.