Nginx is best known for being a really good web server. (The page you're reading was served by it! Unless it's been a really long time and I've revamped things. Check the Server header to confirm, I guess.) It's also known for being a really good TLS-terminating reverse proxy server. It's less well known for being a non-terminating reverse proxy for TLS connections, but it's actually capable of that, too.

So I have this use case where I'd like Nginx to serve some static web sites (again, like this one); serve as a TLS-terminating reverse proxy for some other services; and serve as a non-TLS-terminating reverse proxy for yet others (specifically, I'm proxying those connections over a VPN to my LAN, but that's not important here). And this… turns out not to be straightforward.

Non-terminating proxying of TLS connections is done by the stream module. You can combine it with the ssl_preread module to dispatch connections based on the unencrypted TLS ClientHello message. Browsers typically send a server_name (SNI) in this message (though non-browser clients often don't, so… be aware of that potential failure mode, I guess), so we should be able to use that to dispatch appropriately: connections for the non-terminated proxy get sent to that endpoint; others go to their appropriate site, as defined elsewhere in the config.
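To make the "readable before any decryption" point concrete, here's a minimal Python sketch (my own illustration, not part of Nginx; the function name and hostname are made up) that generates a real ClientHello with the standard ssl module and pulls the SNI hostname out of the plaintext bytes — roughly what ssl_preread is doing:

```python
import ssl

def extract_sni(data):
    """Pull the SNI hostname out of a raw TLS ClientHello, or return None."""
    # TLS record header: content type 0x16 = handshake
    if len(data) < 5 or data[0] != 0x16:
        return None
    pos = 5
    # Handshake header: type 0x01 = ClientHello, then a 3-byte length
    if data[pos] != 0x01:
        return None
    pos += 4
    pos += 2 + 32                                        # legacy_version + random
    pos += 1 + data[pos]                                 # session_id
    pos += 2 + int.from_bytes(data[pos:pos + 2], "big")  # cipher_suites
    pos += 1 + data[pos]                                 # compression_methods
    ext_end = pos + 2 + int.from_bytes(data[pos:pos + 2], "big")
    pos += 2
    while pos + 4 <= ext_end:
        ext_type = int.from_bytes(data[pos:pos + 2], "big")
        ext_len = int.from_bytes(data[pos + 2:pos + 4], "big")
        pos += 4
        if ext_type == 0:  # server_name extension (RFC 6066)
            # Skip the 2-byte list length and the 1-byte name type (0 = hostname)
            name_len = int.from_bytes(data[pos + 3:pos + 5], "big")
            return data[pos + 5:pos + 5 + name_len].decode("ascii")
        pos += ext_len
    return None

# Generate a genuine ClientHello without any network I/O, using memory BIOs
ctx = ssl.create_default_context()
incoming, outgoing = ssl.MemoryBIO(), ssl.MemoryBIO()
conn = ctx.wrap_bio(incoming, outgoing, server_hostname="proxied.example.org")
try:
    conn.do_handshake()  # can't complete (no peer); it just emits the ClientHello
except ssl.SSLWantReadError:
    pass
client_hello = outgoing.read()

print(extract_sni(client_hello))  # the hostname, recovered with no decryption
```

The point being: the server name sits in cleartext at a fixed, parseable spot, which is why Nginx can route on it without holding any keys.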

But… it turns out you can't combine stream and the http module (which provides the normal static web serving and TLS-terminating reverse proxying) like this. The modules are oblivious to each other; if you try to configure them both to listen on the same address and port, the server fails to come up with an “address already in use” error as Nginx fights with itself over the binding to the listen port.
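For illustration, this is the shape of config that triggers the conflict (hostnames and upstream are hypothetical, and the bodies are elided); each module independently tries to bind 443, and the second bind fails:

```nginx
# Does NOT work: http and stream both try to bind *:443,
# and whichever gets there second gets "address already in use".
http {
    server {
        listen 443 ssl;
        server_name www.example.com;  # hypothetical
    }
}

stream {
    server {
        listen 443;
        proxy_pass backend;           # hypothetical upstream
    }
}
```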

So if you want to handle anything on a given port with stream, you need to handle everything with stream. This sort of suggests standing up another Nginx instance to handle the http module serving, but fortunately you don't have to go that far; while stream and http can't listen to the same port, they can otherwise coexist, and they're sufficiently mutually oblivious that you can have a single Nginx instance proxy to itself to handle locally-terminated connections.

There's one more optimization we can make here, which is that with Nginx talking to itself, the connections are by definition local, and we don't need to invoke the whole TCP machinery for them. Both stream and http support using Unix sockets for transport. So we have stream proxy these connections to a Unix socket and http listen on that socket. And this pretty much Just Works™. It was weirdly straightforward once I figured out what to do.

Let's look at what these configs look like more specifically. First, let's set up stream and ssl_preread (my server is running Debian Bullseye, but at most minor adaptations should be required):

```nginx
# /etc/nginx/modules-available/ssl-preread.conf (linked into modules-enabled)
# Preread server name from SNI and dispatch appropriately
load_module /usr/lib/nginx/modules/;

stream {
    # Populate the $target_backend variable depending on SNI server name
    map $ssl_preread_server_name $target_backend {
        # The host to be proxied
        destination;
        # All other connections
        default localhost;
    }

    # Upstream server for the non-terminated proxied connection
    upstream destination {
        server;
    }

    # Other connections go through a Unix domain socket
    upstream localhost {
        server unix:/tmp/nginx.sock;
    }

    server {
        listen 443;
        listen [::]:443;

        # We *always* proxy (it's the only thing we *can* do), but the proxy
        # destination is determined by the `map` and `upstream` sections above
        proxy_pass $target_backend;
        ssl_preread on;
    }
}
```

Configuring http will vary depending on what your setup looks like; the important thing is to not listen on 443 (or whatever port you configure stream to bind to). My nginx.conf is unmodified, with site configs in sites-available/enabled:

```nginx
# /etc/nginx/sites-available/default.conf (linked into sites-enabled)
server {
    # Listen on the domain socket for proxied connections from `stream`
    listen unix:/tmp/nginx.sock ssl;

    ssl_certificate /etc/certs/site.pem;
    ssl_certificate_key /etc/certs/privkey.pem;

    server_name _;

    location / {
        # idk whatever
        root /var/www/html;
    }
}
```

You can have multiple http server blocks listening on the same socket, just as you would with a TCP port. Connections will be dispatched correctly according to server_name, just as they would with a TCP port.

There's one ugly hiccup I haven't come up with a good fix for yet: if the socket file (e.g. /tmp/nginx.sock) already exists, Nginx will fail to come up (with a somewhat-confusing "address already in use" error). This is trivial to work around by removing the socket file, but if you'd prefer not to constantly muck with your webserver, there's bad news: Nginx unfortunately doesn't clean up bound sockets on a graceful exit, so a service nginx restart will blow up in your face. Under most circumstances you can reload for similar effect, and a full system reboot will clear /tmp, but it's still unfortunately brittle behavior. (This is almost certainly a bug, as an urgent shutdown, e.g. by sending SIGTERM to the server process, does clean up the socket file.)
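One workaround I'd consider, if you're on systemd (as Debian is): a drop-in that removes any stale socket before Nginx starts. This is a sketch, not something from the original setup — the drop-in filename is my own invention — but ExecStartPre is standard systemd:

```ini
# /etc/systemd/system/nginx.service.d/cleanup-socket.conf (hypothetical drop-in)
[Service]
# Remove a stale socket left behind by a previous graceful exit;
# the leading "-" tells systemd to ignore failure of this step.
ExecStartPre=-/bin/rm -f /tmp/nginx.sock
```

You'd need a `systemctl daemon-reload` after creating it. This makes restart behave, at the cost of hiding the underlying cleanup bug rather than fixing it.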

So there you have it. A workaround to mix locally-terminated and stream-proxied connections through the same port in a single Nginx instance.