Varnish 6.x, Drupal 8.4+, WSL, Docker Desktop

Varnish + Drupal using Alpine Linux Docker Containers

Time to revisit an old trusty friend

Callback Insanity

--

Spoiler Alert: those Docker images ain’t nowhere on Docker Hub!!

It’s been a few years since I’ve touched a Varnish VCL. VCL is the Varnish Configuration Language. 2010 was the first time that I used Varnish while supporting a high traffic Drupal 6.x site for some media company headquartered in New York City.

Today I was tweaking and optimizing my NGINX Docker configurations when I ran into a relatively recent story about Drupal 8 performance. The breadcrumbs went something like this:

That last “accidental” article from February 2020 mentioned Varnish, and oh well, I had a flashback to the last time I used Varnish, probably about four years ago. Fast-forward eight hours, and I now have a bespoke Varnish container humming inside my personal Docker Compose stack. How did that happen?

:P

Multiple Varnish containers running in my compose stack.

As usual when deep-diving into a specific DevOps or development subject, I end up with about a thousand browser tabs open, give or take a few hundred. I thought that before getting rid of all of them I’d save and share with you some of the resources I came across, for posterity:

Bibliography

Technical documentation:

And some initial Docker references:

Armed with all the information I listed above, I gathered that:

  • Modern Drupal releases (8–9.x) support modern Varnish releases (6.x).
  • The Drupal 8.x Cache API has quite a few improvements and features over Drupal 7.x: cache tags and contexts, in addition to the pre-existing max-age support. These are something to potentially exploit with Varnish for squeezing every ounce of performance out of Drupal/Varnish.
  • Alpine Linux, my primary and pretty much sole Docker container base image, conveniently already provides a recent version of Varnish 6.x as an apk package.
  • Also, I’m definitely not the first one to put Varnish in a container, not by many years. Which is a good thing, because it means there is plenty of documentation.

With these tidbits of information I ended up with the following Dockerfile:
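The original gist isn’t embedded here, but a minimal sketch of that Dockerfile, assuming the stock apk package (the resulting image is published as alexanderallen/varnish-6:alpine-3.12), looks roughly like this:

# Dockerfile (a minimal sketch, not my exact file)
FROM alpine:3.12

# Varnish 6.x ships straight from the Alpine package repositories.
RUN apk add --no-cache varnish

EXPOSE 80

# -F keeps varnishd in the foreground so Docker can supervise it;
# the backend (-b) gets overridden per project later in this story.
CMD ["varnishd", "-F", "-s", "malloc,32M", "-a", ":80", "-b", "nginx:8080"]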

And this is the Docker Compose service definition for linking Varnish to Nginx:
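Simplified to a sketch (service names match the ones used later in this story; the nginx build context is illustrative):

# docker-compose.yml (minimal sketch)
version: "3.8"
services:
  nginx:
    build: ./nginx        # custom config listening on 8080 (illustrative)
  varnish:
    image: alexanderallen/varnish-6:alpine-3.12
    depends_on:
      - nginx
    ports:
      - "80"              # Docker assigns an ephemeral host port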

A little dive into Docker Compose architecture

In the rest of the “short” story below I’ll be writing about some of the little things that I found interesting while rolling out this Varnish container on my local Docker development environment. Don’t be intimidated, it’s mostly screenshots!

Docker Compose dependencies

The thing I find interesting about Docker Compose and Kubernetes is that they augment Docker by providing some pieces of functionality that are either not present or not as easily implemented with plain vanilla Docker. One of these pieces would be service dependencies, for example. With both Docker Compose and Kubernetes you can specify in their respective service manifests to only start Service A after Service B has started, helping prevent race conditions or having to write bash scripts with unreliable sleep commands, healthchecks, pings, etc.

In my use case today I leveraged the Docker Compose service dependency capability to indicate that my newly minted Varnish service should never start before the NGINX service it relies on is available. Having a backend buddy to chat with (such as Nginx) is pretty much Varnish’s raison d’être.

Here, the Varnish service has the depends_on parameter, ensuring that the Varnish service is started only after the Nginx service starts:

# inside an example docker-compose.yml
varnish:
  image: alexanderallen/varnish-6:alpine-3.12
  depends_on:
    - nginx

When Varnish starts it expects to find a backend pardner to talk to. And when said pardner is not available, Varnish gets really mad — like, it fails to start. Using the depends_on parameter on the Varnish service guarantees that Varnish will never find itself lonely in the vast Sea of Containers.

Coupled Varnish/Nginx instances using decoupled service names

When starting Varnish, the -b parameter specifies which backend Varnish pulls its data from.

Originally I had the backend pointing to my nginx service, like so:

-F -s malloc,32M -a :80 -b nginx:8080

Here, nginx is the name of the NGINX Docker Compose service, and 8080 is where NGINX is serving HTTP requests (web pages).

This works very well indeed, since we’re using Docker’s internal Domain Name Service (DNS) resolution to resolve the IP address of the container Varnish will communicate with, as opposed to hardcoding an IP address like this:

-F -s malloc,32M -a :80 -b 127.0.0.1:8080

However, while this is all cherry for running a single NGINX container instance that talks to the Varnish container, it won’t scale once I have multiple NGINX containers running on my machine.

I’ll come back to how Docker Compose helped me scale to multiple NGINX containers running simultaneously, one for each of my PHP applications. But first, I’ll explain the relationship between my NGINX and PHP-FPM containers.

Using PHP-FPM and NGINX Container Pairs

I run a separate NGINX container for each PHP application on my machine, because it has simplified things on the DevOps side for me, locally speaking.

Each NGINX container has a corresponding PHP-FPM container to process the PHP code.

Because PHP-FPM and NGINX communicate over UNIX sockets, which are file system constructs (special files, basically) that in turn reside in mounted Docker volumes, I can have as many PHP-FPM + NGINX container pairs running on my machine as I want, without any conflict.

You see, Docker volumes are automagically namespaced in Docker Compose. And because the mechanism PHP-FPM and NGINX use to communicate between containers is a file mounted in a namespaced volume, each application is by definition using a namespaced socket. Magic!
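A minimal sketch of the pattern (image tags and socket paths are illustrative, and the FPM pool config must point its listen directive at the socket file):

# docker-compose.yml (sketch of one application)
services:
  php-fpm:
    image: php:7.3-fpm-alpine
    volumes:
      - php-socket:/var/run/php   # PHP-FPM writes its .sock file here
  nginx:
    image: nginx:alpine
    volumes:
      - php-socket:/var/run/php   # NGINX's fastcgi_pass reads the same .sock

volumes:
  # Compose prefixes the volume with the project name (e.g.
  # drupal-881_php-socket), so two applications never share a socket.
  php-socket: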

Namespacing resources in Docker (such as volumes)

But right now Varnish is not using a socket, and that presents the little problem I’m going to walk through below.

Scaling Containers Using Service Aliases

Here is a heroic screenshot of my Varnish container attempting (and failing) to communicate with the NGINX container on a service called nginx:8080. The reason? Too many hostnames (containers) in the local Docker network share the name nginx:

It’s like trying to call for Rick Sanchez while inside the Citadel. It’s chock-full of Ricks!

In the Rick & Morty multiverse, each character pair has a unique identifier that ties them to the universe they come from; my services likewise need a unique identifier that ties them to the application they’re serving.

Here you can see that both the NGINX container at 192.168.0.9 and the NGINX container at 192.168.0.6 replied when Varnish queried for a service called nginx. Varnish doesn’t know which NGINX backend to talk to, because they’re both called nginx!

Like compose services, all Ricks have the same name. How would you identify them? Credit: Adult Swim.

Why do two NGINX instances reply to the nginx service name? Because I am currently running two PHP applications, each served by its own NGINX service.

The screenshot below shows two applications: 1) a dummy hello-php application (yellow); 2) a drupal-881 application (red). Each application runs its own PHP-FPM, Varnish, and NGINX containers.

As you can see in the screenshot above, the NAME field, which denotes the container name, clearly delineates 3 hello-php containers (varnish, nginx, php-fpm) and 3 drupal-881 containers (the same). So if the container names are well delineated for each application, why can’t Varnish find its corresponding Nginx instance?

Because Varnish is not using container names to communicate with Nginx; it uses service names. Services in Docker Compose address each other by service name, as long as they’re on the same network.

Right now even though each of my containers have a unique name, all my services are still called nginx, varnish, and so on.

In the following configuration, the NGINX service’s name is defined on line 30 to be nginx, and line 25 tells Varnish to reach NGINX using that same service name.

This works fine and dandy until there are multiple NGINXes running around in a shared bridge Docker network.

Giving Docker Compose services a network alias

To solve the problem of services with duplicate names on a shared network, you can provide each service with a unique alias. In my case, I opted to do so dynamically, using the $PROJECT_NAME variable. This variable contains the name of each application, and is unique as long as each application name is different.
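A sketch of what that alias looks like on the NGINX service (simplified):

# inside each application's docker-compose.yml
nginx:
  networks:
    vsd:
      aliases:
        # expands to e.g. hello-php-nginx or drupal-881-nginx
        - "${PROJECT_NAME}-nginx"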

Using unique service aliases with variables

VSD is the name of the Docker network that all the PHP-FPM + NGINX applications share. The VSD network hosts a shared MariaDB database that any application can leverage to store data. The VSD network is created manually by a bash script, and each compose file declares it with the external property set to true.
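For illustration, the creation and declaration steps look roughly like this (script details omitted):

# created once, outside of any compose file
docker network create vsd

# then, inside each docker-compose.yml
networks:
  vsd:
    external: true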

Sharing external networks between services and applications

Now, back to the Varnish service: instead of using the generic, duplicated nginx:8080 name, it can access a per-project name:
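A simplified sketch of the updated service definition:

varnish:
  image: alexanderallen/varnish-6:alpine-3.12
  command: varnishd -F -s malloc,32M -a :80 -b "${PROJECT_NAME}-nginx:8080"
  depends_on:
    - nginx
  networks:
    - vsd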

Fully updated example of using service aliases

Using the modified backend (-b) parameter -b "${PROJECT_NAME}-nginx:8080" on line 28, Varnish now has access to a per-project, unique NGINX service name, as defined on the updated line 40 above.

Or, if you’d prefer to continue the Rick & Morty analogy, here’s the same compose definition as above with Varnish renamed to Rick, and Nginx to Morty:
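A playful sketch (lowercased to keep the names DNS-friendly):

services:
  rick:   # Varnish
    image: alexanderallen/varnish-6:alpine-3.12
    command: varnishd -F -s malloc,32M -a :80 -b "morty-${PROJECT_NAME}:8080"
    depends_on:
      - morty
  morty:  # Nginx
    networks:
      vsd:
        aliases:
          # with PROJECT_NAME=c137 this expands to morty-c137
          - "morty-${PROJECT_NAME}"

networks:
  vsd:
    external: true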

If services were Rick and Morty…

The naming convention for the Adult Swim protagonists is characterName-[universe id]. Therefore Rick Sanchez of universe C137 is Rick-C137, and his corresponding grandson is Morty-C137.

If I use universe C137 as the application name, you can see the results below after modifying the service names in the code snippet above:

Rick (Varnish) and Morty (Nginx) can finally talk to each other.

Giving Nginx (or Morty) a service alias for the shared network resolves the problem of conflicting Docker Compose service names, and now I don’t get any warnings or errors from Varnish when starting multiple NGINX+Varnish container instances on the same shared Docker network.

Docker Compose applications and their respective service containers

Smoke Test

In the screenshot below you see TO THE LEFT:

  • http://localhost:49213
  • Dummy PHP application, served by nginx/1.19.6 (the NGINX container).
  • Serviced by PHP/7.3.22 in the background (the PHP-FPM container).

In the screenshot TO THE RIGHT:

  • http://localhost:49215
  • The Varnish HTTP headers:
  • X-VARNISH: 32782 and,
  • Via: 1.1 Varnish (Varnish 6.4).

The screenshot on the left is the “raw” PHP + NGINX application. On the right is the same response being proxied by the Varnish container, hence the Varnish headers.

And this is a screenshot of a Drupal application running simultaneously:

Comparing regular NGINX versus Varnish response for Drupal install page.

The red square and font on the left browser represent the plain PHP-FPM + NGINX response. You can see it’s being served on port 49216 and lacks any Varnish headers in the browser’s network inspection tab.

The turquoise squares on the right browser represent the Varnish response for the same Drupal application, but on port 49217. You can see the turquoise square on the bottom right highlighting the Varnish response headers.

A few things to note amongst that sea of information that is that screenshot:

  • The Drupal Varnish response is not being cached. This is because Drupal administrative pages, such as the /install.php page, specify the Cache-Control: must-revalidate, no-cache, private header. That header tells Varnish not to cache the response.
  • The dummy and Drupal applications have different, distinct ports for their NGINX and Varnish containers. These ports are not hardcoded; they are automatically assigned by Docker Compose.

In my Docker Compose launch script vsd-start.sh I use the docker-compose port [service] [port] command to capture the host ports of the NGINX and Varnish containers in each application into a variable:
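Something along these lines (a sketch; the variable names are mine):

# inside vsd-start.sh
# `docker-compose port` prints something like "0.0.0.0:49216",
# so cut extracts just the host port.
NGINX_PORT=$(docker-compose port nginx 8080 | cut -d: -f2)
VARNISH_PORT=$(docker-compose port varnish 80 | cut -d: -f2)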

Retrieving ephemeral container ports

Then I print the container ports in the terminal for handy reference, followed by a command to automatically open two browser tabs, for the NGINX and Varnish containers respectively:
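Roughly like this (a sketch; the exact script differs):

echo "NGINX:   http://localhost:${NGINX_PORT}"
echo "Varnish: http://localhost:${VARNISH_PORT}"

# From WSL 2, explorer.exe hands a URL off to the default Windows browser.
explorer.exe "http://localhost:${NGINX_PORT}"
explorer.exe "http://localhost:${VARNISH_PORT}"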

White glove service and luxurious DX: opening browser tabs for you!!

I think I probably failed to mention the context in which this Docker Compose application, or rather collection of applications, is being run.

For reference, my stack is Docker Desktop, running alongside Windows Subsystem for Linux (WSL) version 2. I detailed some of these hijinks in my previous story Successfully Connect Alpine WSL 2 to Docker Desktop 2.2, if you want to know some more about my Windows, WSL, and Docker setup.

Conclusion: Some deep reflections on today’s cartoon analogy

If Nginx is the source of content, and Varnish the dependent, why would I consider Nginx to be the Rick of my analogy? Shouldn’t Morty be Varnish, dependent on Rick’s Nginx as a fountain of content and wisdom? Isn’t Rick, after all, the adult in the room?

At the risk of oversimplifying a show that has recursive time-manipulation elements and at times appears contradictory, here is my take.

Yes, Rick is the elder. But his constant bullying and flaunting of his genius throw his actual social maturity into question. While Rick C-137’s intellect is undeniable, it appears to be used as a crutch to hide the fact that Rick is emotionally dependent on his family (except maybe for Jerry). Knowledge seems ephemeral; it can be replaced or refreshed through learning and experience, for example by reading my stories.

Family, on the other hand, is irreplaceable, and as the tragic pandemic of 2020 all too often reminds us, we cannot replace the ones we’ve lost. So which is the bigger dependency: Rick’s knowledge, or Morty’s pure love? I’ll bet on love.

Thank you for reading!
