How to Install Varnish HTTP Accelerator on Ubuntu: A High-Performance Caching Guide

In the world of high-traffic web management, speed is the ultimate currency. Varnish Cache (also known as an HTTP accelerator) is a powerful tool designed to sit in front of your web server—whether it is Apache or Nginx—to store copies of web pages in memory. By doing so, Varnish serves subsequent requests for the same content almost instantaneously, bypassing the heavy processing of your backend server. This not only slashes page load times but also significantly reduces server CPU and RAM usage, allowing a single server to handle thousands of concurrent visitors without breaking a sweat.

This technical guide provides a comprehensive walkthrough for installing and configuring Varnish on Ubuntu. We will cover the installation process using official repositories, the critical reconfiguration of your existing web server ports, and the final integration steps to ensure your traffic flows efficiently through the cache layer. By the end of this article, you will have a robust, enterprise-grade caching solution protecting your digital infrastructure.

Understanding the Varnish Architecture

Varnish operates as a “reverse proxy.” When a user requests a page from your site, the request hits Varnish first. If Varnish has a copy of that page in its memory (a “cache hit”), it serves it directly. If it does not (a “cache miss”), it fetches the content from your actual web server (the “backend”), serves it to the user, and stores a copy for future requests. To make this work, we must perform a “port swap”: your web server, which usually listens on port 80, must be moved to a secondary port like 8080, while Varnish takes over port 80 to greet incoming traffic.

One important limitation to note is that Varnish does not natively support SSL/TLS (HTTPS). In a modern production environment, you will typically use Nginx as an SSL terminator. In this setup, Nginx handles the HTTPS handshake on port 443, passes the decrypted traffic to Varnish on an internal port, and Varnish then communicates with your backend. For the purpose of this installation guide, we will focus on the core HTTP acceleration setup which forms the foundation of any Varnish deployment.
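As an illustration of that SSL-termination pattern, a minimal Nginx server block might look like the following. This is a sketch, not part of the core setup in this guide: the domain and certificate paths are placeholders, and it assumes Varnish is listening on 127.0.0.1:80 as configured later in this article.

```nginx
server {
    listen 443 ssl;
    server_name example.com;  # placeholder domain

    # Placeholder certificate paths -- substitute your own
    ssl_certificate     /etc/ssl/certs/example.com.crt;
    ssl_certificate_key /etc/ssl/private/example.com.key;

    location / {
        # Hand the decrypted request to Varnish
        proxy_pass http://127.0.0.1:80;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

The X-Forwarded-Proto header lets your backend application know the original request arrived over HTTPS, even though the hop through Varnish is plain HTTP.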

Step 1: Installing Varnish from Official Repositories

While Ubuntu’s default repositories often contain Varnish, they are frequently several versions behind. For the best performance and security, it is highly recommended to use the official Varnish Cache repositories. Begin by updating your local package index and installing the necessary prerequisites for managing external repositories over HTTPS.

sudo apt update
sudo apt install -y debian-archive-keyring curl gnupg apt-transport-https

Next, you will need to add the GPG key and the repository for the latest stable version of Varnish. As of current standards, Varnish 6.0 LTS and the 7.x series are the most common choices; the script below targets the 7.0 series (substitute varnish60lts in the URL if you prefer the LTS branch). Use the following commands to import the signing key and add the source to your list. This ensures that when you run an update, your system pulls the verified, high-performance binaries directly from the developers.

curl -s https://packagecloud.io/install/repositories/varnishcache/varnish70/script.deb.sh | sudo bash
sudo apt update
sudo apt install -y varnish

Step 2: Reconfiguring Your Backend Web Server

As mentioned earlier, Varnish and your web server cannot both occupy port 80. You must move your web server (Nginx or Apache) to port 8080. This allows Varnish to act as the primary entry point. Below are the steps for both popular web servers.

For Nginx Users

Open your default site configuration file and look for the listen directive; change 80 to 8080. If you have multiple virtual host files, repeat this change in each one.

sudo nano /etc/nginx/sites-available/default

Change to:
listen 8080 default_server;
listen [::]:8080 default_server;

For Apache Users

In Apache, you need to change the port in two places: the ports configuration file and the virtual host file. First, edit ports.conf and change Listen 80 to Listen 8080. Then, update your virtual host file to match.

sudo nano /etc/apache2/ports.conf

Change to:
Listen 8080

sudo nano /etc/apache2/sites-available/000-default.conf

Change to:
<VirtualHost *:8080>

After making these changes, verify the configuration (sudo nginx -t or sudo apachectl configtest), then restart your web server (sudo systemctl restart nginx or sudo systemctl restart apache2) to free up port 80 for Varnish.

Step 3: Configuring Varnish to Listen on Port 80

By default, Varnish listens on port 6081. To make it your primary HTTP accelerator, you must change this to port 80. On modern Ubuntu systems using systemd, this configuration lives in the Varnish service unit. Instead of editing the file in /lib/systemd/system directly (which can be overwritten during package updates), create an editable copy under /etc/systemd/system, which takes precedence.

Run the following command to open the unit for editing:

sudo systemctl edit --full varnish

Locate the ExecStart line. Change the -a :6081 parameter to -a :80. You can also adjust the amount of memory allocated for the cache here. For example, -s malloc,1G allocates 1 Gigabyte of RAM for storage. Once changed, save the file and exit. You must then reload the systemd daemon and restart Varnish to apply the new port settings.
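After the edit, the relevant portion of the unit might look like the following. This is a simplified sketch: the packaged unit typically includes additional flags (such as a management interface and jail options) that you should leave in place, and the 1G malloc size is an example you should tune to your server's RAM.

```ini
[Service]
ExecStart=/usr/sbin/varnishd \
    -a :80 \
    -f /etc/varnish/default.vcl \
    -s malloc,1G
```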

sudo systemctl daemon-reload
sudo systemctl restart varnish

Step 4: Defining the Backend in VCL

Now that Varnish is listening on port 80, it needs to know where to find your web server (which is now on port 8080). This logic is controlled by the Varnish Configuration Language (VCL). The primary configuration file is located at /etc/varnish/default.vcl.

Open the file and ensure the backend default section correctly points to your local server on the new port:

vcl 4.1;

backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

This tells Varnish: “If you don’t have the requested content in your cache, go to the local machine on port 8080 to get it.” This simple handshake is the core of your accelerator’s logic. You can later add more complex rules to this file, such as excluding specific URLs (like admin dashboards) from being cached to ensure real-time data accuracy.
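As a taste of what such rules look like, the following VCL snippet bypasses the cache for a hypothetical /admin path (the URL pattern is illustrative; adjust it to your application):

```vcl
sub vcl_recv {
    # Bypass the cache for the (hypothetical) admin dashboard
    if (req.url ~ "^/admin") {
        return (pass);
    }
}
```

The return (pass) action tells Varnish to forward the request to the backend without storing or serving a cached copy.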

Step 5: Verifying the Installation

Once all services are restarted, you should verify that Varnish is correctly intercepting traffic. The easiest way to do this is by checking the HTTP response headers using curl. Run the following command against your server’s IP or domain:

curl -I http://localhost

In the output, look for a header named Via or X-Varnish. If Varnish is working, you will see something like Via: 1.1 varnish (Varnish/7.0). If you run the command twice, you might also see an Age header, which indicates how many seconds the content has been sitting in the cache. An age greater than 0 confirms a successful “cache hit.”
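On a cache hit, the header output might resemble the following (the values shown are illustrative):

```
HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8
Via: 1.1 varnish (Varnish/7.0)
X-Varnish: 32773 32770
Age: 42
Connection: keep-alive
```

Two numbers in the X-Varnish header are themselves a hit indicator: the first is the ID of the current request, and the second is the ID of the request that originally stored the object in the cache.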

Additionally, Varnish provides powerful built-in monitoring tools. You can run varnishstat in your terminal to see a real-time dashboard of hits, misses, and resource usage. This is invaluable for fine-tuning your cache duration and observing how Varnish handles traffic spikes.

Optimizing Varnish for WordPress and Dynamic Sites

If you are running a CMS like WordPress, you must be careful about what you cache. Caching the /wp-admin/ area or cookies related to logged-in users can lead to security issues or “broken” functionality. Professional Varnish configurations often include logic to bypass the cache when specific cookies are detected. By editing your default.vcl, you can add rules that check for WordPress login cookies and tell Varnish to return (pass), ensuring that administrators always see the latest version of the site while anonymous visitors receive the accelerated, cached version.
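A minimal sketch of such a bypass in default.vcl might look like this, assuming a standard WordPress install (WordPress sets cookies prefixed with wordpress_logged_in_ for authenticated users):

```vcl
sub vcl_recv {
    # Never cache the WordPress admin area or the login screen
    if (req.url ~ "^/wp-(admin|login)") {
        return (pass);
    }
    # Bypass the cache entirely for logged-in users
    if (req.http.Cookie ~ "wordpress_logged_in_") {
        return (pass);
    }
}
```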

Another optimization is setting the TTL (Time to Live). By default, Varnish respects the cache headers sent by your backend, but you can override this in VCL to keep content in memory longer for static pages. This balance of “freshness” versus “speed” is what separates a basic installation from a highly optimized delivery network.
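Such an override lives in vcl_backend_response, where the fetched object's TTL can be set before it enters the cache. In this sketch, the one-hour TTL and the file-extension pattern are examples, not recommendations:

```vcl
sub vcl_backend_response {
    # Example: keep static assets cached for an hour, regardless of
    # the Cache-Control header the backend sent
    if (bereq.url ~ "\.(css|js|png|jpg|svg|woff2)$") {
        set beresp.ttl = 1h;
    }
}
```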

Common Troubleshooting Steps

Varnish fails to start: This is usually caused by port conflicts. Ensure that no other service (such as Nginx or Apache) is still trying to bind port 80. Use sudo ss -tulpn | grep :80 (or the older sudo netstat -tulpn | grep :80) to identify the blocking process.

Changes to the site aren’t appearing: If you update a post and don’t see the changes, Varnish is likely serving the old version from memory. You can “purge” the entire cache by restarting the Varnish service, or, more professionally, use the varnishadm tool to ban matching content from the cache — for example, varnishadm "ban req.url ~ ^/my-post" (the URL pattern here is illustrative).

Conclusion

Implementing Varnish HTTP Accelerator on Ubuntu is one of the most effective ways to scale a web application. By shifting the heavy lifting of content delivery from the application layer to a dedicated, memory-resident cache, you ensure a lightning-fast experience for your users and a significant reduction in server overhead. While the initial setup requires careful management of network ports and VCL logic, the resulting performance gains—often reducing time-to-first-byte (TTFB) to just a few milliseconds—are well worth the effort. As your traffic grows, Varnish will remain the silent guardian of your server’s stability, allowing you to focus on content and development while it handles the millions of requests that come your way.

Written by Al Mahbub Khan, Full-Stack Developer & Adobe Certified Magento Developer