Caddy web server: Why use it? How to use it?
May 7, 2022 12:00 · 1673 words · 8 minute read
I am a big fan of the Caddy web server and use it for my private projects, as well as when teaching web development practices. Caddy has two main advantages compared to more established web servers like nginx and Apache:
- It offers automatic HTTPS out of the box. In the past, I have often used nginx for my websites. There, I had to install certbot to get Let’s Encrypt certificates & to update my nginx config accordingly. While this does work, this setup has failed me in the past: At some point, certificate renewal failed & Let’s Encrypt notified me that my certificates were about to expire. With Caddy, this important feature is built-in.
- Its configuration file, the `Caddyfile`, has an easy config language. For common standard use cases, like setting up a reverse proxy, you often just need one line of configuration, and you are done with it. Of course, you can configure many more details, but they stay hidden from you until you need them. Contrast this to nginx, where you need to work through specific config options much earlier.
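To give an impression of how short this can be, here is a minimal Caddyfile for a reverse proxy (the domain and upstream port are placeholders for illustration):

```
# serve example.com with automatic HTTPS and proxy requests to a local app
example.com {
    reverse_proxy localhost:8080
}
```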
That said, if you are looking to optimize for raw performance, you probably should still use nginx, as it is generally regarded as the fastest web server out there. But if you are looking for a fast-enough web server that gets out of the way, Caddy is a great choice.
In this post, I’ll explain how I use Caddy and walk through the config options that make Caddy even better for me.
Installation
There are two main ways to install Caddy:
- Download the Caddy binary and run it manually.
- Install Caddy as a service, so Caddy runs in the background & stays alive across reboots.
For server usage, the service installation makes much more sense. I follow the “Debian, Ubuntu, Raspbian” method, which means that I add a new apt source repository and install Caddy from there. Installing Caddy from the default distribution sources (= directly running `sudo apt install caddy` without adding the new source repository first) will likely result in an outdated version.
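For reference, at the time of writing the official docs listed roughly the following commands for this method; please check the docs for the current version before copying them:

```bash
# add Caddy's apt repository (as documented by the Caddy project) and install it
sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
sudo apt update
sudo apt install caddy
```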
Afterwards, Caddy is automatically running as a systemd service, meaning that it will be automatically started in the background on boot. The guide that applies to this way of running Caddy is Keep Caddy Running: Linux Service.
Configuration
Regarding configuration, things become a bit confusing in Caddy’s docs. There are multiple ways of configuring Caddy:
- Using a Caddyfile.
- Using Caddy’s JSON Config Structure.
- Or doing requests against Caddy’s REST API, running on `localhost:2019` by default. You can either submit Caddyfile or JSON configuration parts to the API.
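To illustrate option 3: the API also lets you read back the currently running configuration, e.g.:

```bash
# dump the currently active configuration as JSON
curl localhost:2019/config/
```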
I configure Caddy exclusively via a Caddyfile (⇒ option 1). As I have installed it with the “Debian, Ubuntu, Raspbian” method, the Caddyfile is located at `/etc/caddy/Caddyfile`. Whenever I make changes to this Caddyfile, I need to do a `systemctl reload caddy` (or `restart`, if the changes are still not applied).
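My usual edit cycle therefore looks like this (the `caddy validate` step is optional, but it catches syntax errors before reloading):

```bash
# check the Caddyfile for errors, then apply it
caddy validate --config /etc/caddy/Caddyfile
sudo systemctl reload caddy
```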
To be honest, I think that Caddy should focus on the installation as a service and on the configuration via the Caddyfile - and de-prioritize the other variants in their docs. I can see use cases for the other variants, e.g. for web hosters or for local development, but having so many options makes the documentation quite confusing for new readers.
Setting email global option
It is highly recommended to set an email address in Caddy’s global options. Global options are set by adding a block with no key at the top of your Caddyfile:
```
{
    email you@example.com
}
```
That way, if something goes wrong with e.g. certificate renewal, the certificate issuer can contact you.
Site-specific config files
As already stated, I do all my configuration through `/etc/caddy/Caddyfile`. This means that, if I am hosting multiple sites on the same server, the file can become quite long. Also, having everything in one file makes it more difficult to manage the Caddyfile with Ansible. It would be better if there was a folder `sites-enabled` (like there is in nginx), with every file in it automatically included into the “main” Caddyfile. That way, we can have a separate file for every site.
Luckily, we can configure Caddy like this:
- Create a folder `/etc/caddy/sites-enabled`.
- Add this line to your Caddyfile: `import /etc/caddy/sites-enabled/*.caddy`
- Create a dummy file inside the folder, e.g. `dummy.caddy` with the contents `(dummy) { respond "Hello World" }`. This file is important, as the import statement will fail if it matches no files.
- Reload Caddy with `systemctl reload caddy`.
If something is not working (take a look at the output of `systemctl status caddy`), you might need to work on the folder & file permissions. The `caddy` system user needs to have read rights on the folder, e.g. via `chmod -R 0755 /etc/caddy/sites-enabled` (= every user on the machine can read & execute files in this folder, but only the owner can write).
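As an illustration, a site file in that folder could then look like this (the domain, filename, and upstream port are made-up examples, not from my actual setup):

```
# /etc/caddy/sites-enabled/myapp.caddy
myapp.example.com {
    # proxy all requests for this site to a local application server
    reverse_proxy localhost:3000
}
```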
Enable access logging (HTTP request logging)
In contrast to e.g. nginx, Caddy does not enable access logging (HTTP request logging) by default. To my knowledge, you cannot simply enable it for every site (at least not with the Caddyfile config syntax), but have to enable it per site.
Fortunately, enabling logging for a site is very easy. This is what you need to add to your site block:
```
log {
    output file /var/log/caddy/access.log
}
```
By default, Caddy outputs a log in JSON format, making it easy to analyze with other programs, e.g. JSON processing tools like jq. Also, it does log rotation on its own (rotation after 100 MiB, keeping the last 10 files), so you don’t have to worry about your hard drive becoming full after some time.
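Once requests are coming in, you can already do simple analyses on that log. A small sketch, assuming the log path from above and Caddy’s default JSON field names (double-check them against your own log):

```bash
# count requests per HTTP status code in the JSON access log
sudo jq -r '.status' /var/log/caddy/access.log | sort | uniq -c | sort -rn
```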
In a multi-user environment: Disabling Caddy’s HTTP API
One thing that I don’t really like about Caddy: It automatically starts an admin API on `localhost:2019` and is fully configurable through this API (incl. changing the configuration, stopping the server, etc.). The API is not reachable from the outside (so no worries 😅), but every user on the system can interact with it without any extra access controls or permission checks!
According to the authors, this is expected:
In general – and this goes for most threat models – if the machine is running untrusted code, all bets are off; i.e. protecting against a system that has already been compromised is outside the scope of the threat model.
via https://github.com/caddyserver/caddy/issues/2850#issuecomment-574992381
Probably he is right that a server is “game over” anyway if untrusted code can make calls against localhost. But it still feels needlessly careless to me.
My web services usually run under separate system users (one user per service). If one of these services gets compromised, the attacker gains access to that user. Of course, an attacker can still do a lot (e.g. make the server slower by starting a Bitcoin miner, or make the server part of a botnet). But at least they cannot completely shut down all my other websites on the server. I don’t want all my websites to be shut down or redirected to some other content, “just” because a single user got compromised. Generally, I only want `root` to be able to make such changes.
This issue becomes even more pressing when I’m running a server that multiple people have access to (like I do for a scholarship program), and I don’t want to make it possible for them to edit the web server configuration.
Luckily, we can configure Caddy so that the API does not listen to a TCP port, but to a Unix domain socket. Unix sockets are just files, and thus are subject to the operating system’s file permissions. Only users that have read/write permissions can connect to the socket.
The following steps are necessary:
- Edit the systemd service unit file `/lib/systemd/system/caddy.service`: Add the line `RuntimeDirectory=caddy` to the section `[Service]`. This makes systemd create the directory `/run/caddy`. `/run` contains run-time variable data & is usually a temporary file system; if we created the `/run/caddy` directory ourselves, it would be gone after the next reboot. The directory ownership is set to the caddy user, so Caddy can put its Unix socket in there, and only this user and `root` can connect to it.
- Add the line `admin unix//var/run/caddy/caddy-admin.sock` to Caddy’s global options block.
- Make systemd read in the changes to the service unit file: `sudo systemctl daemon-reload`
- Reload or restart Caddy: `sudo systemctl restart caddy`
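After these steps, the global options block at the top of my Caddyfile looks roughly like this (the email address is a placeholder):

```
{
    email you@example.com
    admin unix//var/run/caddy/caddy-admin.sock
}
```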
Now, `ls -lah /run/caddy` should show a file `caddy-admin.sock`, and `curl localhost:2019` should lead to a “Connection refused” error.
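If you (as `root`) still want to talk to the admin API, you can point curl at the socket instead of the TCP port. A sketch, relying on curl’s `--unix-socket` option and the `/config/` endpoint mentioned above:

```bash
# query the running configuration via the Unix socket (works for root and the caddy user)
sudo curl --unix-socket /var/run/caddy/caddy-admin.sock http://localhost/config/
```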
Bonus: Using Caddy for local development
Besides using Caddy on servers, you can also use it during local development! When you are developing a web application locally, you would often run it on `http://localhost:<PORT>`. But here, you are just using HTTP, not HTTPS ⇒ some browsers will disallow some features or behave differently, e.g. regarding microphone or webcam permissions. So, for these cases, to match the production environment better, it makes sense to enable HTTPS for local development as well. Caddy also supports this!
⚠️ Instructions only tested on macOS (but they should also work on Linux).
For local development, I would advise against installing Caddy as a service - instead, download the binary from https://github.com/caddyserver/caddy/releases , extract it with `tar -xf`, and move it to `/usr/local/bin`. Then you can call `caddy` in your terminal without having to specify the full path the whole time.
Now, here is an example with a local domain https://www.helloworld.local that should be served via HTTPS and proxied to http://localhost:8080:
- Add `127.0.0.1 www.helloworld.local` to your `/etc/hosts` file. This way, the domain `www.helloworld.local` will resolve to your local machine.
- Run `caddy reverse-proxy --from www.helloworld.local --to localhost:8080`. On the first run, Caddy will automatically install a local certificate authority (CA) on your machine. Then, using the CA, it issues a certificate for the `www.helloworld.local` domain.
- You can now visit https://www.helloworld.local in your browser!
- To stop Caddy and its reverse proxy, just press `Ctrl+C`.
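If you prefer a config file over CLI flags, the same setup can also live in a small local Caddyfile that you start with `caddy run`. A sketch, reusing the domain and port from the example above:

```
# Caddyfile in your project directory; start it with `caddy run`
www.helloworld.local {
    reverse_proxy localhost:8080
}
```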
That was all for now! I hope I made you curious about Caddy and the possibilities it provides - maybe it is also the right choice for your project? 🚎