Caddy + Varnish: Using a single Caddy instance

Daniel West
4 min read · Jan 12, 2021

Introduction

If you’re interested in this subject, you may have found a number of articles where people demonstrate a method of “sandwiching” Varnish between two Caddy instances.

I wanted to use Caddy v2 with a couple of very nice modules, Vulcain and Mercure, written primarily by Kévin Dunglas. His company also maintains and sponsors API Platform, a major open source project that ships Caddy with these modules. However, I thought further performance improvements could be made by adding a Varnish layer to cache full responses, where Vulcain will cache just some parts.

This is not going to be a long article on why all of the projects above are amazing. You should discover that for yourself!

API Platform: https://api-platform.com/
Mercure: https://mercure.rocks/docs/getting-started
Vulcain: https://github.com/dunglas/vulcain

My solution for getting Varnish to work involves looping the request back from Varnish to the SAME Caddy instance, and detecting a marker header on that second pass so that Caddy forwards the request on to the application layer when it came from Varnish.
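Conceptually, the flow for an uncached request looks like this (the X-Caddy-Forwarded header name comes from the config below):

client → Caddy (no X-Caddy-Forwarded → reverse_proxy to Varnish)
       → Varnish (cache miss → fetches from its backend, which is Caddy again)
       → Caddy (X-Caddy-Forwarded present → php_fastcgi / file_server)
       → response travels back through Varnish (and is cached) to the client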

Caddy

My own Caddyfile looks like this:

# Info: Reload in live Docker using `caddy reload --config /etc/caddy/Caddyfile --adapter caddyfile`

# Global options
{
    # Debug
    {$DEBUG}
    # HTTP/3 support
    servers {
        protocol {
            experimental_http3
        }
    }
}

# define single site using environment variable
caddy-probe.local:80, {$SERVER_NAME}

# named matchers
@do_varnish_pass {
    header !X-Caddy-Forwarded
}
@health_check {
    host caddy-probe.local
    path /health-check
}

# customisable log level with environment
log {
    level {$LOG_LEVEL:"info"}
}

# preserve directive order
route {
    respond @health_check 200 {
        close
    }

    root * /srv/api/public

    mercure {
        # Transport to use (default to Bolt)
        transport_url {$MERCURE_TRANSPORT_URL:bolt:///data/mercure.db}
        # Publisher JWT key
        publisher_jwt {env.MERCURE_PUBLISHER_JWT_KEY} {env.MERCURE_PUBLISHER_JWT_ALG}
        # Subscriber JWT key
        subscriber_jwt {env.MERCURE_SUBSCRIBER_JWT_KEY} {env.MERCURE_SUBSCRIBER_JWT_ALG}
        # Allow anonymous subscribers (double-check that it's what you want)
        anonymous
        # Enable the subscription API (double-check that it's what you want)
        subscriptions
        # Extra directives
        {$MERCURE_EXTRA_DIRECTIVES}
    }
    vulcain
    push

    # do the cache pass
    reverse_proxy @do_varnish_pass {
        to {$VARNISH_UPSTREAM}
        health_path /healthz
        health_interval 5s
        health_timeout 20s
        fail_duration 5s
        header_up X-Caddy-Forwarded 1
    }

    php_fastcgi unix//var/run/php/php-fpm.sock
    encode zstd gzip
    file_server
}

One thing that really tripped me up was getting an empty response from the Varnish layer: Caddy returned an empty 200 response, which Varnish then cached. This happened because my configuration set SERVER_NAME to localhost, localhost:8443, caddy:80. The Host header was still localhost:8443 even though I was calling the hostname caddy:80, since the original Host header is preserved when the request is proxied. On top of that, Varnish sends a plain HTTP request, not HTTPS, so no site address matched. I fixed this by listening on any hostname on port 80 in my SERVER_NAME: localhost, localhost:8443, :80
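In other words, assuming SERVER_NAME holds the site addresses substituted into the Caddyfile above, the change to the address list looks like this:

# before: the loop-back request from Varnish arrives over plain HTTP
# with a different Host header, matches none of these addresses,
# and gets an empty 200
localhost, localhost:8443, caddy:80

# after: :80 matches any hostname on port 80, so the pass from Varnish is served
localhost, localhost:8443, :80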

It is a slightly altered version of API Platform’s. For one, I do not want Caddy to sit in front of the front-end application, just the API; I personally deploy the app separately using Vercel Now.

Main changes

# define single site using environment variable
caddy-probe.local:80, {$SERVER_NAME}

Add a new host that we can use internally for the Varnish probe.

# named matchers
@do_varnish_pass {
    header !X-Caddy-Forwarded
}
@health_check {
    host caddy-probe.local
    path /health-check
}

Named matchers for Caddy directives.

respond @health_check 200 {
    close
}

Respond to the Varnish health probe.

...
# do the cache pass
reverse_proxy @do_varnish_pass {
    to {$VARNISH_UPSTREAM}
    health_path /healthz
    health_interval 5s
    health_timeout 20s
    fail_duration 5s
    header_up X-Caddy-Forwarded 1
}
...

To use this reverse proxy we match the @do_varnish_pass matcher, which checks for the X-Caddy-Forwarded header. If the header is not present, the reverse_proxy directive sends the request to Varnish and adds the header along the way. As long as you do not configure Varnish to strip it, on a cache miss Varnish will ask Caddy again for the resource with the header set, and this time Caddy will forward the request on to your application or file server.
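One caveat worth guarding against: if your VCL normalises or strips request headers, the marker must survive the trip through Varnish, or the request will loop between Caddy and Varnish forever. A minimal sketch of what to avoid (header name as in the Caddyfile above):

sub vcl_recv {
    # The loop marker set by Caddy must reach the backend request untouched.
    # Unsetting it would make Caddy proxy the looped-back request to Varnish
    # again, creating an infinite loop:
    # unset req.http.X-Caddy-Forwarded; # do NOT do this
}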

Varnish

Configure Varnish as applicable to your use case. You can set up the probe to ensure the connection to Caddy is healthy, and set Caddy as the upstream. Below is my configuration, which has environment variables injected by another script. Notice that the probe request goes to /health-check with the Host set to caddy-probe.local.

backend default {
    .host = "${UPSTREAM}";
    .port = "${UPSTREAM_PORT}";
    .max_connections = 300;
    .first_byte_timeout = 300s; # How long to wait before we receive a first byte from our backend?
    .connect_timeout = 5s; # How long to wait for a backend connection?
    .between_bytes_timeout = 2s; # How long to wait between bytes received from our backend?

    # Health check
    .probe = {
        .request =
            "HEAD /health-check HTTP/1.1"
            "Host: caddy-probe.local"
            "Connection: close"
            "User-Agent: Varnish Health Probe";
        .timeout = 5s;
        .interval = 5s;
        .window = 4;
        .threshold = 2;
    }
}
...
sub vcl_recv {
    ...
    # For health checks
    if (req.method == "GET" && req.url == "/healthz") {
        return (synth(200, "OK"));
    }
    ...
}

Conclusion

I’ve whipped this article up incredibly quickly. I hope it makes sense and puts you on the right path, so that you do not need to create two Caddy instances around your Varnish layer.

If you’d like to see my complete integration, take a look at my open source project, a template encompassing two other open source projects of mine: https://github.com/components-web-app/components-web-app

For completeness: a cache module for Caddy v2 is in the works, which you may prefer to use once it is available: https://github.com/caddyserver/cache-handler

Happy coding!
