• Panda@lemmy.today · +19 · 12 hours ago

    I’ve seen this pop up on websites a lot lately. Usually it takes a few seconds to load the site, but on a few occasions it seemed to hang, stuck on that screen for minutes, and I ended up closing the tab because the page just wouldn’t load.

    Is this a (known) issue or is it intended to be like this?

    • lime!@feddit.nu · +15 −1 · 12 hours ago

      anubis is basically a bitcoin miner with the difficulty turned way down (and obviously not producing any coins), so the time it takes is inherently random. if it takes minutes, something does seem wrong though. maybe a network error?
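
      for a rough idea of what the browser is doing, here’s a minimal sketch of that hashcash-style check in Python (function name, challenge string and difficulty are made up for illustration, not anubis’s actual code):

      ```python
      import hashlib
      from itertools import count

      def solve_challenge(challenge: str, difficulty: int) -> int:
          """Find a nonce so that sha256(challenge + nonce) starts with
          `difficulty` hex zeroes -- the same idea as bitcoin's proof of
          work, just with a tiny difficulty."""
          target = "0" * difficulty
          for nonce in count():
              digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
              if digest.startswith(target):
                  return nonce  # sent back to the server as proof of work

      # difficulty 4 needs ~16**4 = 65536 hashes on average, a moment's work
      print(solve_challenge("example-challenge-string", 4))
      ```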

      • isolatedscotch@discuss.tchncs.de · +12 · 10 hours ago

        adding to this, some sites set the difficulty way higher than others. nerdvpn’s invidious and redlib instances take about 5 seconds and some ~20k hashes, while privacyredirect’s instances are almost instant with less than 50 hashes each time
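
        for a sense of scale (assuming a leading-zero-nibble check like the sketch above, which is only a rough model): each extra hex digit of difficulty multiplies the expected hash count by 16, which is why ~20k hashes lands between difficulty 3 and 4

        ```python
        # expected hashes for a leading-zero-nibble proof of work (rough model)
        for difficulty in range(1, 6):
            print(difficulty, 16 ** difficulty)  # 16, 256, 4096, 65536, 1048576
        ```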

        • RepleteLocum@lemmy.blahaj.zone · +3 · 5 hours ago

          So they make the internet worse for poor people? I could get through 20k in a second, but someone with just an old laptop would take a few minutes, no?

          • isolatedscotch@discuss.tchncs.de · +3 · 2 hours ago

            > So they make the internet worse for poor people? I could get through 20k in a second, but someone with just an old laptop would take a few minutes, no?

            i mean, kinda? you are absolutely right that someone with an old pc might need to wait a few extra seconds, but the speed is ultimately throttled by the browser

  • refalo@programming.dev · +15 · 16 hours ago

    I don’t understand how/why this got so popular out of nowhere… the same solution has already existed for years in the form of haproxy-protection and a couple others… but nobody seems to care about those.

    • Flipper@feddit.org · +38 · 16 hours ago

      Probably because the creator had a blog post that got shared around at a point in time where this exact problem was resonating with users.

      It’s not always about being first but about marketing.

        • JackbyDev@programming.dev · +5 · 3 hours ago

          Compare and contrast.

          > High-performance traffic management and next-gen security with multi-cloud management and observability. Built for the enterprise — open source at heart.

          Sounds like some overpriced, vacuous, do-everything solution. Looks and sounds like every other tech website. Looks like it is meant to appeal to the people who still say “cyber”. Looks and sounds like fauxpen source.

          > Weigh the soul of incoming HTTP requests to protect your website!

          Cute. Adorable. Baby girl. Protect my website. Looks fun. Has one clear goal.

  • unexposedhazard@discuss.tchncs.de · +95 · 23 hours ago

    Non-paywalled link: https://archive.is/VcoE1

    It basically boils down to making the browser do some cpu heavy calculations before allowing access. This is no problem for a single user, but for a bot farm this would increase the amount of compute power they need 100x or more.
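
    As a back-of-envelope illustration of that asymmetry (all numbers here are hypothetical, just to show the scaling):

    ```python
    # one human visitor solves one challenge; a scraper fleet pays per request
    hashes_per_challenge = 20_000      # reported above for some higher-difficulty sites
    human_requests = 1
    scraper_requests = 10_000_000      # a hypothetical large crawl

    print("human:  ", human_requests * hashes_per_challenge, "hashes")
    print("scraper:", scraper_requests * hashes_per_challenge, "hashes")
    ```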

  • Jankatarch@lemmy.world · +32 −2 · 20 hours ago

    Every time I see Anubis I get happy, because I know the website has some quality information.

  • grysbok@lemmy.sdf.org · +39 · 23 hours ago

    My archive’s server uses Anubis and after initial configuration it’s been pain-free. Also, I’m no longer getting multiple automated emails a day about how the server’s timing out. It’s great.

    We went from about 3000 unique “pinky swear I’m not a bot” visitors per (iirc) half a day to 20 such visitors. Twenty is much more in line with expectations.

  • fuzzy_tinker@lemmy.world · +82 · 1 day ago

    This is fantastic and I appreciate that it scales well on the server side.

    AI scraping is a scourge. I would love to know the collective amount of power wasted on countermeasures like this, added to the total already wasted by AI.

        • interdimensionalmeme@lemmy.ml · +1 · 37 minutes ago

          By photo ID, I don’t mean just any photo. I mean a “photo ID” cryptographically signed by the state: certificates checked, database pinged, identity validated, the whole enchilada.

      • adr1an@programming.dev · +11 · 9 hours ago

        That’s awful. It means I would get my photo ID stolen hundreds of times per day, and there’s also thisfacedoesntexists… It won’t work, for many reasons. Not all websites require an account, and even those that do (like dating apps) have a hard time implementing “personal verification”. Most “serious” cases use human review of a photo plus a video where your face moves in and out of an oval shape…

        • interdimensionalmeme@lemmy.ml · +4 · 8 hours ago

          If you allow my SearXNG search scraper, then an AI scraper is indistinguishable from it.

          If you mean “Google and DuckDuckGo are whitelisted”, then Lemmy will only be searchable through those specific whitelisted hosts. And Google’s search index is also an AI scraper bot.

    • deadcade@lemmy.deadca.de · +9 · 19 hours ago

      “Yes”, for any bits the user sees. The frontend UI can be behind Anubis without issues. The API, including both the user and federation APIs, cannot: we expect “bots” to use an API, so you can’t put human verification in front of it. These “bots” also include applications that aren’t aware of Anubis, or unable to pass it, like all third-party Lemmy apps.

      That does stop almost all generic AI scraping, though it does not prevent targeted abuse.

    • seang96@spgrn.com · +4 · 23 hours ago

      As long as it’s not configured improperly. When the Forgejo devs added it, it briefly broke pulling images with Kubernetes. Basically, you need to make sure the user agent header used for federation is allowed, as in the sketch below.
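
      For illustration, such an allowance might look roughly like this in Anubis’s bot policy file (field names follow my reading of the upstream sample config, and the Lemmy federation user agent pattern is just a guess):

      ```yaml
      bots:
        # let ActivityPub federation traffic through without a challenge
        - name: lemmy-federation
          user_agent_regex: "^Lemmy/"
          action: ALLOW
        # anything that looks like a browser still gets the proof-of-work page
        - name: generic-browser
          user_agent_regex: "Mozilla"
          action: CHALLENGE
      ```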

  • medem@lemmy.wtf · +22 −2 · 1 day ago

    <Stupidquestion>

    What advantage does this software provide over simply banning bots via robots.txt?

    </Stupidquestion>

    • kcweller@feddit.nl · +77 · 1 day ago

      Robots.txt expects the client to respect the rules voluntarily, for instance by identifying itself as a scraper.

      AI scrapers don’t respect this trust, and thus robots.txt is meaningless.

    • irotsoma@lemmy.blahaj.zone · +24 · 21 hours ago

      TL;DR: You should have both due to the explicit breaking of the robots.txt contract by AI companies.

      AI generally doesn’t obey robots.txt. That file just notifies scrapers what they shouldn’t scrape and relies on their good faith. Many AI companies have explicitly chosen not to comply with robots.txt, breaking that contract, so this is a system that causes scrapers unwilling to comply to get stuck in a black hole of junk and waste their time. It’s a countermeasure, not a solution, and it’s far less complex than options that simply block these connections and then get pounded with retries. This way the scraper bot gets stuck for a while and doesn’t waste as many of your resources by being blocked over and over again.
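
      For reference, this is everything robots.txt can do: politely ask a named crawler (OpenAI’s GPTBot here, as an example) to stay away, with nothing to enforce it:

      ```text
      User-agent: GPTBot
      Disallow: /
      ```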

    • medem@lemmy.wtf · +44 · 1 day ago

      Well, now that y’all put it that way, I think it was pretty naive of me to think that these companies, whose business model is basically theft, would honour a lousy robots.txt file…

    • thingsiplay@beehaw.org · +14 · 22 hours ago

      The difference is:

      • robots.txt is a promise without a door
      • Anubis is a physical closed door that opens up after some time
    • Mwa@thelemmy.club · +8 · 1 day ago

      The problem is AI doesn’t follow robots.txt, so Cloudflare and Anubis developed solutions.

  • Kazumara@discuss.tchncs.de · +11 −1 · 23 hours ago

    Just recently there was a guy on the NANOG list ranting that Anubis is the wrong approach: people should just cache properly, then their servers would handle thousands of users and the bots wouldn’t matter; anyone who puts git online has no one to blame but themselves; e-commerce should just be made cacheable; etc. It seemed a bit idealistic, a bit detached from the current reality.

    Ah found it, here

    • deadcade@lemmy.deadca.de · +9 · 19 hours ago

      Someone making an argument like that clearly does not understand the situation. Just 4 years ago, a robots.txt was enough to keep most bots away, and hosting personal git on the web required very few resources. With AI companies actively profiting off stealing everything, a robots.txt doesn’t mean anything. Now even a relatively small git web host takes an insane amount of resources. I’d know - I host a Forgejo instance. Caching doesn’t matter, because diffs between two random commits are likely unique. Rate limiting doesn’t matter, because they use different IP ranges and user agents, and it would also heavily impact actual users “because the site is busy”.
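
      For a rough sense of why caching falls over (numbers purely illustrative): the number of distinct commit-pair diffs a scraper can request grows quadratically with history, so almost every request is a cache miss that has to be rendered fresh.

      ```python
      # distinct commit pairs a scraper could ask a forge to diff (rough model)
      commits = 10_000                       # a modest repository
      pairs = commits * (commits - 1) // 2
      print(f"{pairs:,} possible compare pages")   # ~50 million
      ```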

      A proof-of-work solution like Anubis is the best we have currently. The least possible impact to end users, while keeping most (if not all) AI scrapers off the site.

      • interdimensionalmeme@lemmy.ml · +1 · 11 hours ago

        This would not be a problem if one bot scraped once and the result were then mirrored to everyone on Big Tech’s dime (Cloudflare, Tailscale). But since they are all competing now, they each think their edge is going to be their own better scraper setup, and they won’t share.

        Maybe there should just be a web-to-torrent bridge, so the data is pushed out once by the server and the swarm does the heavy lifting as a cache.

        • deadcade@lemmy.deadca.de · +2 · 10 hours ago

          No, it’d still be a problem; every diff between commits is expensive to render to web, even if “only one company” is scraping it, “only one time”. Many of these applications are designed for humans, not scrapers.

          • interdimensionalmeme@lemmy.ml · +1 −1 · 8 hours ago

            If rendering data for scrapers were really the problem, then the solution is simple: just offer downloadable dumps of the publicly available information. That would be extremely efficient, cost fractions of a penny in monthly bandwidth, and the data would be far more usable for whatever they’re using it for.

            The problem is trying to have freely available data, but for the host to maintain the ability to leverage this data later.

            I don’t think we can have both of these.

  • not_amm@lemmy.ml · +7 · 1 day ago

    I had seen that prompt, but never searched about it. I found it a little annoying, mostly because I didn’t know what it was for, but now I won’t mind. I hope more solutions are developed :D