AFAIK every NAS just uses unauthenticated connections to pull containers, and I’m not sure how many even let you log in to Docker Hub (which only raises the limit to a whopping 40 pulls per hour).

So hopefully systems like /r/unRAID handle the throttling gracefully when you click “update all”.

Anyone have ideas on how to set up a local Docker Hub proxy that keeps the most common containers cached on-site instead of hitting Docker Hub every time?
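From what I can tell, the official registry:2 image can be run as a pull-through cache against Docker Hub, though I haven’t tried it myself. Roughly something like this (port, cache path, and credentials are placeholders):

```sh
# Untested sketch: run the Distribution registry as a pull-through
# cache for Docker Hub. Port and cache path are just examples.
docker run -d --name dockerhub-cache --restart unless-stopped \
  -p 5000:5000 \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  -v /srv/registry-cache:/var/lib/registry \
  registry:2
# Optionally authenticate the cache itself against your Hub account:
#   -e REGISTRY_PROXY_USERNAME=... -e REGISTRY_PROXY_PASSWORD=...
```

In theory each image then only gets pulled from Hub once and served locally after that.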

  • Uli@sopuli.xyz · 1 day ago

    Same here. I’ve been building a bootstrap script, and each time I test it, it tears down the whole cluster and rebuilds it from scratch, pulling all of the images again. Every time I hit the Docker pull limit, usually after 10–12 hours of work, I treat it as my “that’s enough work for today” signal. I’m going to need to set up a caching system ASAP, or my work sessions on this project are about to get a lot shorter.
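    My rough plan is a pull-through cache like the one mentioned above, with each node’s daemon pointed at it via registry-mirrors. I haven’t wired it up yet, so the hostname and port below are assumptions:

    ```sh
    # Untested sketch: tell the Docker daemon to try the local cache
    # (assumed to be at nas.local:5000) before Docker Hub. Merge this
    # into any existing /etc/docker/daemon.json instead of overwriting it.
    # insecure-registries is only needed if the cache isn't behind TLS.
    cat <<'EOF' | sudo tee /etc/docker/daemon.json
    {
      "registry-mirrors": ["http://nas.local:5000"],
      "insecure-registries": ["nas.local:5000"]
    }
    EOF
    sudo systemctl restart docker
    ```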