Docker Hub Is Down

(dockerstatus.com)

91 points | by cipherself 2 hours ago

42 comments

  • __turbobrew__ 6 minutes ago

    Anyone have recommendations for an image cache? Native Kubernetes support a plus.

    What would be really nice is a system with mutating admission webhooks for pods that kicks off a job to mirror the image to a local registry and then rewrites the image reference to point at the mirrored location.

  • esafak 12 minutes ago

    What's the easiest way to cache registries like docker, pypi, and npm these days?

    • lambda a minute ago

      The images I use the most, we pull and push to our own internal registry, so we have full control.

      There are still some we pull from Docker Hub, especially in the build process of our own images.

      To work around that, on AWS you can prefix the image with public.ecr.aws/docker/library/, for example public.ecr.aws/docker/library/python:3.12, and it will pull from AWS's mirror of Docker Hub.
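
      For example, the pull just looks like this (same image as above):

        # python:3.12 via AWS's public mirror of Docker Hub instead of docker.io
        docker pull public.ecr.aws/docker/library/python:3.12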

    • viraptor 8 minutes ago

      You pull the images you want to use, preferably with some automated process, then push them to your own repo. And in any case, use your own repo when pulling for dev/production; it saves you from images disappearing as well.

      • paulddraper 4 minutes ago

        What do you like using for your own repo? Artifactory? Something else?

        • __turbobrew__ a few seconds ago

          Note: Artifactory SaaS had downtime today as well.

  • thomasfromcdnjs 9 minutes ago

    Was already struggling to do any work today and now my builds aren't working.

    https://xkcd.com/303/

    • thehamkercat 3 minutes ago

      I had some images in cache, but not all of them, and pulls are failing.

      For example, I have redis:7.2-alpine in cache, but not golang:1.24.5-alpine.

      I needed the golang image to start my dev backend.

      So I replaced FROM golang:1.24.5-alpine with FROM redis:7.2-alpine, and manually installed golang with apk in the redis container :)
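
      Roughly this, as a sketch (the package name is from memory, not my actual Dockerfile):

        # reuse a base image that's already in the local cache...
        FROM redis:7.2-alpine
        # ...and install the Go toolchain from the Alpine repos instead of pulling
        # golang:1.24.5-alpine (the apk-provided Go may not be exactly 1.24.5)
        RUN apk add --no-cache go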

  • wolttam 33 minutes ago

    All I really need is for Debian to have their own OCI image registry I can pull from. :)

  • gnabgib an hour ago
    • cipherself an hour ago

      I’ll admit I haven’t checked before posting, perhaps an admin can merge both submissions and change the URL on the one you linked to the one in this submission.

  • taberiand an hour ago

    So that's why. This gave me the kick I needed to finally switch over the remaining builds to the pull-through cache.

    • esafak 11 minutes ago

      Which one are you using?

  • XCSme an hour ago

    Yup, my Coolify deployments were failing and I didn't know why: https://softuts.com/docker-hub-is-down/

    Also, isn't it weird that it takes so long to fix given the magnitude of the issue? Already down for 3 hours.

  • miller_joe an hour ago

    I was hoping Google Cloud Artifact Registry pull-thru caching would help. Alas, it does not.

    I can see an image tag available in the cache in my project on cloud.google.com, but after attempting to pull from the cache (and failing) the image is deleted from GAR :(

    • qianli_cs 43 minutes ago

      I think it was likely caused by the cache trying to compare the tag with Docker Hub: https://docs.docker.com/docker-hub/image-library/mirror/#wha...

      > "When a pull is attempted with a tag, the Registry checks the remote to ensure if it has the latest version of the requested content. Otherwise, it fetches and caches the latest content."

      So if the authentication service is down, it might also affect the caching service.

    • rshep 24 minutes ago

      I’m able to pull by the digest, even images that are now missing a tag.
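
      e.g. something like this, assuming the usual Artifact Registry remote-repo path (project, repo, image, and digest are placeholders); the point is that addressing by digest skips the tag check against upstream:

        # pull by digest instead of by tag
        docker pull us-docker.pkg.dev/<project>/<remote-repo>/redis@sha256:<digest>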

    • breatheoften 36 minutes ago

      In our CI, setting up the docker buildx driver to use the Artifact Registry pull-through cache involves (apparently) an auth transaction to Docker Hub, which fails.

  • frabonacci an hour ago
  • momeabed 35 minutes ago

    GCP K8s is also having a partial outage! Was this a vibe-coded release... insane...

  • hexagonsun 13 minutes ago

    Explains why my Watchtower container was exploding.

  • philip1209 an hour ago

    Development environment won't boot. Guess I'll go home early.

  • manasdas 29 minutes ago

    This is why you keep a local registry mirror: images come from the local cache every time.

  • Poomba an hour ago

    Is there a good alternative to Docker Hub these days? Besides Azure CR.

    • akerl_ an hour ago

      Basically all my Docker images were being built from GitHub repos anyway, so I just switched to GitHub's container registry.
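
      For public images the switch is mostly a retag-and-push (user and image names below are placeholders):

        # authenticate with a personal access token, then retag and push
        echo "$GITHUB_TOKEN" | docker login ghcr.io -u <github-user> --password-stdin
        docker tag myimage:latest ghcr.io/<github-user>/myimage:latest
        docker push ghcr.io/<github-user>/myimage:latest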

      • cyberax 33 minutes ago

        GHCR authentication is just broken. They still require the deprecated personal access tokens.

        • akerl_ 24 minutes ago

          I was publishing public containers on Docker Hub, and I'm publishing public containers on GHCR.

    • cyberax 33 minutes ago

      Quay.io is nice (but you have to memorize the spelling of its name)

      • viraptor 6 minutes ago

        Or start a pronunciation revolution and say "kway". It's all made up anyway ;-)

  • switz an hour ago

    I didn't even really realize it was a SPOF in my deploy chain. I figured at least most of it would be cached locally. Nope, can't deploy.

    I don't work on mission-critical software (nor do I have anyone to answer to) so it's not the end of the world, but has me wondering what my alternate deployment routes are. Is there a mirror registry with all the same basic images? (node/alpine)

    I suppose the fact that I didn't notice before says wonderful things about its reliability.

    • tom1337 an hour ago

      I guess the best way would be to have a self-hosted pull-through registry with a cache. This way you'd have all required images ready even when Docker Hub is offline.

      Unfortunately that does not help in an outage because you cannot fill the cache now.
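
      For next time, the minimal setup is just the stock registry image in proxy mode plus a daemon mirror entry (port and names here are only an example):

        # run the official registry image as a pull-through cache for Docker Hub
        docker run -d --restart=always --name hub-mirror -p 5000:5000 \
          -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
          registry:2

        # then point the Docker daemon at it in /etc/docker/daemon.json:
        #   { "registry-mirrors": ["http://localhost:5000"] }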

      • cipherself an hour ago

        In the case where you still have an image locally, trying to build will fail with an error complaining about not being able to load metadata for the image because a HEAD request failed. So, the real question is, why isn't there a way to disable the HEAD request for loading metadata for images? Perhaps there's a way and I don't know it.

        • switz an hour ago

          Yeah, this is the actual error that I'm running into. Metadata pages are returning 401 and bailing out of the build.

      • tln an hour ago

        You might still have it on your dev box or build box:

          # check that the image is still in the local cache, retag it for your
          # own registry, and push it so other machines can pull from there
          docker image ls
          docker tag name/name:version your.registry/here/name/name:version
          docker push your.registry/here/name/name:version

        • tln 44 minutes ago

          Per sibling comment, public.ecr.aws/docker/library/.... works even better

        • akshayKMR 25 minutes ago

          This saved me. I was able to push the image from one of my nodes. Thank you.

      • pebble an hour ago

        This is the way, though it can lead to fun moments: I was just setting up a new cluster and couldn't figure out why I was having problems pulling images when the other clusters were pulling just fine.

        Took me a while to think of checking the docker hub status page.

    • kam an hour ago

      > Is there a mirror registry with all the same basic images?

      https://gallery.ecr.aws/

    • XCSme 30 minutes ago

      It's a bit stupid that I can't restart my container (on Coolify) because pulling the image fails, even though it's already running, so I do have the image; I just need to restart the Node.js process...

      • XCSme 24 minutes ago

        Never mind, I used the terminal: docker ps to find the container and docker restart <container_id>, without going through Coolify.
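
        For anyone else stuck the same way, the whole workaround is:

          docker ps                      # find the running container's ID
          docker restart <container_id>  # restart it without pulling the image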

  • juan16 an hour ago

    Having the same problem; visiting https://hub.docker.com/_/node returns an error.