My minor suggestion would be not to use `COPY . .`, as it could slow down the build process if it has to copy everything in the context that's not needed. It's also a security risk in case it copies private data, though probably not a risk if it's part of a multi-stage build.
If you don't want to have multiple `COPY`s, you can add a `.dockerignore` file (https://docs.docker.com/build/concepts/context/#dockerignore...) alongside the `COPY . .` and effectively have an allowlist of paths.
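Something like this (a sketch; the actual paths depend on the repo layout):

    # .dockerignore: ignore everything by default...
    *
    # ...then re-include only what the build needs
    !pyproject.toml
    !uv.lock
    !Caddyfile
    !content
    !templates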
This just seems like several unnecessary layers of complexity to me.
For making a static site that you're personally deploying, exactly why is Docker required? And if the Docker process will have to bring in an entire Linux image anyway, why is obtaining Python separately better than using a Python provided by the image? And given that we've created an entire isolated container with an explicitly installed Python, why is running a Python script via `uv` better than running it via `python`? Why are we also setting up a virtual environment if we have this container already?
Since we're already making a `pyproject.toml` so that uv knows what the dependencies are, we could just as well make a wheel, use e.g. pipx to install it locally (no container required) and run the program's own entry point. (Or use someone else's SSG, permanently installed the same way. Which is what I'm doing.)
Author—though not OP—here. I’ll try to broadly address the questions, which are all fair.
Broadly speaking, I explicitly wanted to stay in the Coolify world. Coolify is a self-hostable PaaS platform—though I use the Cloud service, as I mentioned—and I really like the abstraction it provides. I haven’t had to SSH into my server for anything since I set it up—I just add repos through the web UI and things deploy and show up in my browser.
Yes, static sites certainly could—and arguably even should—be done way simpler than this. But I have other things I want to deploy on the same infrastructure, things that aren’t static sites, and for which containers make a whole lot more sense. Simplicity can be “each thing is simple in isolation”, but it can also be “all things are consistent with each other”, and in this case I chose the latter.
If this standardization on this kind of abstraction weren’t a priority, this would indeed be a pretty inefficient way of doing this. In fact, I arrived at my current setup by doing what you suggested—setting up a server without containers, building sites directly on it, and serving them from a single reverse proxy instance—and the amount of automation I found myself writing was a bit tedious. The final nail in the coffin for that approach was realizing I’d have to solve web apps with multiple processes in some other way regardless.
(Hi, Nik!)
So what you're saying is that "Static sites with Python, uv, Caddy, and Docker" wasn't the overall goal. You want to stay in Coolify world, where most things are a container image.
It just so happens that a container can be just a statically-served site, and this is a pattern to do it.
By treating everything as a container, you get a lot of simplicity and flexibility.
Docker etc is overkill for the static case, but useful for the general case.
> I explicitly wanted to stay in the Coolify world.
I too was skeptical of the motivation until reading this. Given that Coolify requirement, your solution (build static files in one container, deploy with Caddy in another) seems quite sensible.
> why is obtaining Python separately better than using a Python provided by the image?
I mostly work in a different domain than webdev, but feel strongly about trying to decouple base technologies of your OS and your application as much as possible.
It's one thing if you are using a Linux image and choose to grab their Python package, and another if their boot system is built around the specific version of Python that ships with the OS. The goal being that if you later need to update Python or the OS, they're not tethered together.
Looking forward to the follow up:
Static sites with HTML, CSS, Apache and Linux.
You can of course use something like Pelican to generate those plain static files. There are quite a few great themes available as well.
My guess is that when you are self-taught and don't know what the hierarchy of technologies looks like, you can learn several advanced technologies without knowing the basic technologies that they are built upon and the challenges the basic tech can't solve.
So you just solve all problems with advanced tools, no matter how simple. You get into tech by learning how to use a chainsaw because it's so powerful and you wanted to cut a tree, now you need to cut some butter for a toast? Chainsaw!
Or maybe you stick with a stack that is too complex for most problems but also works for most of them so that when you solve/find a solution to a certain problem you can reuse that solution in all of your projects.
I thought it was the opposite; I have seen self-taught folks use simpler tools that are not the standard, like just using ftp or rsync instead of complex tools.
This has been my experience as well. But I suspect it'll change with how AI-first most students are nowadays.
> it's so powerful and you wanted to cut a tree, now you need to cut some butter for a toast? Chainsaw!
Using a Ferrari to deliver the milk is how I've heard it said.
The simple answer is to package it up with all its system dependencies and not worry about anything.
Almost every other comment in this thread is people complaining this is too complex and over-engineered.
I had the opposite reaction when I read this post: I thought it was a very neat, clean and effective way to solve this particular problem - one that took advantage of an excellent stack of software - Caddy, Docker, uv, Plausible, Coolify - and used them all to their advantage.
Ignoring caching (which it sounds like the author is going to fix anyway, see their other comments) this is an excellent Dockerfile!

    FROM ghcr.io/astral-sh/uv:debian AS build
    WORKDIR /src
    COPY . .
    RUN uv python install 3.13
    RUN uv run --no-dev sus

    FROM caddy:alpine
    COPY Caddyfile /etc/caddy/Caddyfile
    COPY --from=build /src/output /srv/

8 lines is all it takes. Nice. And the author then did us the favor of writing up a detailed explanation of every one of them. I learned a few useful new tricks from this, particularly around using Caddy with Plausible.

This one didn't strike me as over-engineering: I saw it as someone who has thought extremely carefully about their stack, figured out a lightweight pattern that uses each of the tools in that stack as effectively as possible and then documented their setup in the perfect amount of detail.
People are complaining because a two-target Makefile (sketched below), where the default target is simply `uv run --no-dev sus` and the deploy target is simply `rsync -avz --delete ./dist/ host:/path/to/site/`, is a hell of a lot more neat, clean, effective, and lightweight? (And if you care about atomic deployment it's just another command in the deploy target.)

I have ~60 static websites deployed on a single small machine at zero marginal cost. I use nginx but I could use caddy just the same. With this "lightweight pattern" I'd be running 60 and counting docker containers for no reason.
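The whole Makefile would be roughly this (a sketch; the target names are arbitrary, and recipes must be indented with tabs):

    .PHONY: build deploy

    build:
    	uv run --no-dev sus

    deploy: build
    	rsync -avz --delete ./dist/ host:/path/to/site/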
Meanwhile, my dockerfile for "purely static—hand-crafted artisanal HTML and CSS" is this:
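(Presumably something along these lines; the base image and paths are assumptions:)

    FROM caddy:alpine
    COPY ./public /usr/share/caddy/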
Sure, but that doesn't handle Plausible proxying or run the static site generator script.
My current personal site (https://knlb.dev) is built with a single 500 line python file that starts with
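(presumably a uv-style inline script metadata block, something like this; the exact dependency list is a guess:)

    # /// script
    # requires-python = ">=3.12"
    # dependencies = ["flask", "markdown"]
    # ///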
The HTML templates & CSS are baked into the file, which is why it's so long. flask is in there so that I can have a live view locally while writing new notes. uv's easy dependency definition really made it much easier to manage these. My previous site was org exported to html and took much more effort.
(With the conceit that the website is a "notebook" I call this file "bind").
Do you have the whole thing available? I love SSGs, especially personal ones.
I think the author would do well to front load "the why". Seems very over the top, sometimes you want to do things just because. Totally valid, but helps contextual the blog.
They are essentially choosing the philosophy of optimizing for speed in every dimension.
The tools selected are faster than their more mainstream counterparts — but since it's a static site anyway, the pre-build side of the toolchain is more about "nice dev ux" and the post-build is more about "really fast to load and read".
Much faster still would be to not set up a container for what appears to be a one-off run of some SSG-generation library on some Markdown files.
I love Caddy and it's fast enough. But it's not the fastest. nginx is generally faster for example.
So I can't agree.
Author (not OP) here. In hindsight, I wish I’d explained “the why” mostly to save so many folks in this thread from making lots of assumptions. The third paragraph in this comment touches on that: https://news.ycombinator.com/item?id=44993875
We both speak the same language. I can get to grips with "front load" but what does:
"helps contextual the blog" mean?
It appears, for the verb, you meant: "frobnicate".
I see the Docker stuff here but Kubernetes is missing. Lacking required complexity for a simple _static_ personal blog.
Also while using Kubernetes, please use event-driven PubSub mechanisms to serve files for added state-of-the-art points.
/pun
At this point, why not use a wordpress container? With a minimalist theme, it would be way easier and faster to deploy, and still be blazingly responsive.
This level of complexity would've been acceptable if this was about deploying one's own netlify type of service for personal use. Otherwise, it's just way too complicated.
I'm currently working on a Django app, complete with a database, a caching layer, a reverse-proxy, a separate API service, etc., and it's still much simpler to deploy than this.
it's true, wordpress is self contained - it does bundle in a lot of overhead (database, php) - but it has a good story for quickly getting a presentable site. Static sites require learning about template engines, git (possibly), FTP...
wordpress is php is a non-starter for many of us
More broadly, a lot of people are disinclined from anything that needs to run on the server, especially anything involving dynamic languages, because that brings vulnerabilities and an elevated need to keep it all patched up. Static HTML/CSS/JS can be hosted from a wide variety of zero-maintenance solutions and can’t be exploited.
It might have gotten better since, but back when I was running a Wordpress install it was a constant battle to keep bots out.
It's not even the fact that you're running a dynamic language or something. PHP is, to some extent, and Wordpress's ecosystem is, to a large extent, extremely horrendous: ridden with vulnerabilities and performance issues that should not exist by now. As the other comment says, Laravel can be fine in some situations, but that's a tiny fraction of all PHP usage.
I occasionally do freelance work that involves PHP. I won't touch Wordpress with a 10 foot pole. PHP can be fine with a framework like Laravel. Wordpress reminds me of PHP from 2 decades ago.
Adding fuel to the fire of "this is over engineering" but this is overkill right?! I'm not in the web development field but my own site is just deployed with Emacs (specifically HTML generated from org-mode).
>this is overkill right?!
My Ops brain says "Taken in a vacuum, yes." However, if you make other things that are not static, put them into a container, and run said container on a server, keeping the CI/CD process consistent makes absolute sense.
We run static sites at my company in containers for the same reason. We have a Kubernetes cluster with all the DNS updating, cert grabbing, and Prometheus monitoring, so we run static sites from an nginx container.
I'd be a lot more impressed if your static site was served from Emacs :)
You joke but this is absolutely a thing: https://github.com/skeeto/emacs-web-server
I'm interested in doing this. Have you posted about your process?
Why don't you just upload the HTML/CSS/JS files to a folder and point Apache or Nginx to that folder?
A post on Coolify from 4 months ago.
https://news.ycombinator.com/item?id=43555996
And I deploy that using Ansible! Well, in my case a truly static HTML file and a bunch of CSS files. But yes, Caddy is great for serving static pages. If you have set it up once, you can apply the whole thing as one setup (playbook).
I don't really have an opinion on using caddy in a container to serve a static site. That's fine, really. However, the way the container is built is done in the worst possible way:

    # copy all files
    COPY . .
    # install Python with uv
    RUN uv python install 3.13
    # run build process
    RUN uv run --no-dev sus

This adds the entire repository to the first layer, then installs python, then runs the build, which I assume will only then install the dependencies. This means that changing any file in the repository invalidates the first layer, triggering uv reinstalling python and all the dependencies again. The correct Dockerfile would be something like

    FROM ghcr.io/astral-sh/uv:debian AS build
    WORKDIR /src
    # install Python with uv
    RUN uv python install 3.13
    # copy info for dependencies
    COPY pyproject.toml uv.lock ./
    # install dependencies
    RUN uv sync --no-dev
    # copy over everything else
    COPY . .
    # run build process
    RUN uv run --no-dev sus

Author (not OP) here. It hadn’t really occurred to me to optimize the Dockerfile in this way because of how rarely the build is run in the first place, but I’m absolutely going to do this, since the ratio of code changes to content changes will definitely skew heavily toward the latter, and it just seems like a good habit anyway. Thanks for reminding me, and even explaining how to do it!
Getting the dockerfile order right is critical due to how docker caching works.
Even if you aren't an expert it is trivial these days to copy/paste it into chatGPT and ask it to optimize or suggest improvements to the dockerfile, it will then explain it to you.
Exactly. I also wanted to point this out in relation to the author's desire to put all build commands in a `just` configuration file. It sounds to me like a desire to use yet another "slick and shiny tool" (which `just` is, compared to `make`), but what's the point exactly? The build process will still be container-dependent and may or may not work outside of the container, and you don't get the benefit of Docker caching anymore.
Being able to run "just build" in a container-free local development environment and have the same build process run as the one in your production setup is a productivity boost worth having.
Same as how it's good to be able to easily run the exact same test suite in both dev and CI.
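For a site like this one, the justfile in question could be as small as this (a sketch; the `serve` recipe is an assumption for local preview):

    # build the site exactly as the Dockerfile does
    build:
        uv run --no-dev sus

    # preview the output locally
    serve: build
        caddy file-server --root ./output --listen :8080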
But wouldn't you run whatever you are writing in a Docker container in dev anyway? I always did just that, as long as I used Docker as a deployment tool for a project at all. And, in fact, sometimes even when I didn't, since it's much easier and cleaner to keep, say, multiple php or redis versions in containers than on your PC. And it's only maybe a year since it became actually effortless to maintain all this version-zoo locally for Python, thanks to uv. In fact, even when it's something like Go, where the deployable is a compiled binary, I'd usually wrap the build process in a Docker container for dev anyway.
Depends on how complex your stuff is. Most of my own projects run outside of Docker on my machine and run inside Docker when deployed. I have over 200 projects on my laptop and I don't want the overhead of 200 separate containers for them.
My heuristic is to go from least likely to change to most likely, bearing in mind dependencies, of course.
It's... a static site. Generate the output (use docker if you want, doesn't really matter), and just dump the result to a directory on an ordinary server.
This is a snapshot of what's gone wrong, acutely, in web development culture, and, broadly, in software development culture over the past few decades: complexity, provincialization, and discarding improvements in computing hardware.
> This is a snapshot of what's gone wrong, acutely, in web development culture, and, broadly, in software development culture over the past few decades: complexity, provincialization, and discarding improvements in computing hardware.
Taken from the Coolify website (which OP uses for hosting):
> Brag About It. You can impress anyone by saying that you self-host in the Cloud. They will definitely be amazed.
This is the result of a hyper-consumerist, post-Protestant culture in America and the rest of the English-speaking countries.
Is this satire?
html file -> ftp -> WWW

html file -> mv /var/www/public -> WWW
Possibly SSG -> html -> etc.
The best way to make static sites is to install nginx/caddy or whatever basic static webserver from your repos. Then put the .html and other files in directories under the web root on your filesystem. Done. No overhead, no attack surface, no problems with software changing (deps, etc, etc), lasts forever. Super easy interface (it's your filesystem!).
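On a Debian-ish box that's the whole deployment (a sketch, using the distro defaults):

    sudo apt install nginx               # from the distro repos
    sudo cp -r ./site/* /var/www/html/   # Debian's default web root
    # done: nginx serves it on port 80 out of the box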
This project seems more like something you'd do to demonstrate your skills with all these tools that do have use in a business/for-profit context working with groups but they have absolutely no use or place hosting a personal static website. Unless you're doing it for kicks and enjoy useless complexity. That's fair. No accounting for taste in recreation.
The author wrote a comment in another thread^1 just a few min before yours.
Also, starting any comment with an unqualified "The best way..." is probably not the best way to engage in meaningful dialog.
1. https://news.ycombinator.com/item?id=44993875
The flow where you build the static site into a container in CI, push it to a registry, and then your server watches for changes (watchtower) and runs it behind nginx-proxy is the true lazy solution. Push to your git repo and forget about it. Same config for a huge variety of applications—static and not.
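As a sketch, that whole server side fits in one compose file (image name and domain are placeholders):

    # docker-compose.yml
    services:
      proxy:
        image: nginxproxy/nginx-proxy
        ports:
          - "80:80"
        volumes:
          - /var/run/docker.sock:/tmp/docker.sock:ro
      site:
        image: registry.example.com/me/site:latest   # pushed from CI
        environment:
          VIRTUAL_HOST: example.com                  # routing hint for nginx-proxy
      watchtower:
        image: containrrr/watchtower                 # polls the registry, restarts on new images
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock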
this is satire right?