Bypassing CGNAT with Wireguard and Caddy

Screw CGNAT

As anyone who has tried to self-host will tell you: fuck CGNAT. It is nothing but an annoying band-aid solution to the IPv4 shortage. You get excited, set up your CS:GO-calculator-looking laptop, install your favorite distro on it (arch btw), and then at the last moment you forward the ports on your router and... absolutely nothing. So then you have the brilliant idea of port scanning your router to double check that you haven’t messed up your port forwarding setup, and (maybe?) accidentally commit a crime by scanning your ISP’s router instead.

No one?

Really?

No way it is just me who is that dumb…

Anyways, putting that aside, here is how to bypass it.

Note: I am currently only using services that depend on HTTP, so I won’t explain how to set up masquerading with iptables; I have no clue how to do that myself, but maybe I’ll update this article in the future when I do.

The Plan

Before we start we will need the following:

  • Your server locally at home with the services you want to expose.
  • A VPS with a public IP from any provider of your choice.
  • A domain name (not strictly necessary, since you can just refer to your services by IP address, but who does that).

And I will have some assumptions:

  • You are ready to read documentation when needed and won’t just be copy-pasting everything.
  • That your local server has no firewall. This is of course a terrible idea; I am not saying that you shouldn’t run a firewall, you very much should, but it is just easier for me to write the article with that assumption. Instead of disabling yours you should, of course, open ports on your local machine’s firewall as needed.

We will then connect these two via a Wireguard VPN and set up a reverse proxy on the VPS which points to your local machine over the VPN connection, with subdomains aliased to each service.

I will be using Oracle Cloud free tier for the VPS because I am broke (apologies for saying the “o” word without censoring it). I will also assume that you have already provisioned the VPS and are using Ubuntu. If you don’t know how to do that, you can follow this tutorial by Or*cle themselves which explains it; just change the distro to Ubuntu and the shape to anything free from either AMD or Intel, as I have no idea what kind of packages exist for Ampere. Return once you have a shell on the server.

Back? Nice. Let’s continue.

Setting Up the VPS

First we are going to disable the firewall. I swear I am not stupid, just stay with me here. Ubuntu uses UFW as the default firewall. It isn’t really a firewall as much as a front end for iptables, so instead of using it we are just going to use iptables directly.

Why? Because it is what this goofy goober knows how to use (and because it gives us more granular control). So let’s tear down this firewall!

Now I need to mention that we aren’t completely exposing the machine to the internet, as there is another firewall you can configure on Or*cle’s website under the security list for your VM’s subnet. If you don’t know what I am talking about, read the “Open Firewall and Security List Ports to Allow Public Access” section in the aforementioned tutorial.

Tearing Down the Firewall

First of all we will yeet UFW off the system:

sudo apt remove ufw

While this does get rid of UFW, it doesn’t remove the UFW-specific firewall rules. They do get removed on the next boot-up, but we also want to get rid of the rules set up by Or*cle. Write the following script to disk and then execute it as root. This shouldn’t interfere with your current SSH session, since it allows all traffic, in and out.

#!/bin/sh
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
iptables -F
iptables -X
iptables -t nat -F
iptables -t nat -X
iptables -t mangle -F
iptables -t mangle -X
iptables -t raw -F
iptables -t raw -X
iptables -t security -F
iptables -t security -X

Rebuilding the Firewall

Now we will rebuild our firewall from scratch. First we will allow incoming SSH, HTTP, and HTTPS traffic, then allow established and related traffic as well as a few important services such as NTP and DNS, and finally block all other traffic by default.

Enabling Important Services

# Output firewall rules
sudo iptables -A OUTPUT -o lo -j ACCEPT
sudo iptables -A OUTPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
sudo iptables -A OUTPUT -p tcp -m tcp --dport 22 -m comment --comment "Output SSH" -j ACCEPT
sudo iptables -A OUTPUT -p udp -m udp --dport 53 -m comment --comment "Output DNS/udp" -j ACCEPT
sudo iptables -A OUTPUT -p tcp -m tcp --dport 53 -m comment --comment "Output DNS/tcp" -j ACCEPT
sudo iptables -A OUTPUT -p udp -m udp --dport 123 -m comment --comment "Output NTP" -j ACCEPT
sudo iptables -A OUTPUT -p icmp -m icmp --icmp-type 8 -m comment --comment "Output ICMP Ping" -j ACCEPT
sudo iptables -A OUTPUT -p tcp -m tcp --dport 80 -m comment --comment "Output HTTP" -j ACCEPT
sudo iptables -A OUTPUT -p tcp -m tcp --dport 443 -m comment --comment "Output HTTPS" -j ACCEPT



# Input firewall rules
sudo iptables -A INPUT -i lo -j ACCEPT
sudo iptables -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
sudo iptables -A INPUT -p tcp -m tcp --dport 22 -m comment --comment "Input SSH" -j ACCEPT
sudo iptables -A INPUT -p tcp -m tcp --dport 80 -m comment --comment "Input HTTP" -j ACCEPT
sudo iptables -A INPUT -p tcp -m tcp --dport 443 -m comment --comment "Input HTTPS" -j ACCEPT

Let us understand what the hell we just witnessed. Instead of explaining it command by command, I’ll explain the general form of the commands and leave it to you to understand the rest and/or do further research.

I do recommend this playlist for the basics of iptables.

The first example will be the input SSH rule:

sudo iptables           -> Invoke iptables with sudo.
-A INPUT                -> Add to the INPUT chain.
-p tcp                  -> Specify TCP as the protocol.
--dport 22              -> The destination port is 22.
-m comment              -> Use the comment module. It allows us
                           to have human-readable explanations of rules.
--comment "Input SSH"   -> Specify the comment contents.
-j ACCEPT               -> Accept all traffic that matches this rule.

You can think of chains as the source of the traffic / where it came from. That isn’t exactly true, but it will do as an explanation for now. The INPUT chain just means incoming traffic; the OUTPUT chain is traffic going out of the machine.
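
If you want to see the chains and the rules we just added laid out in one place, a quick listing helps (a minimal sanity check, nothing more):

```shell
# List the rules currently in the INPUT chain.
# -v adds packet/byte counters, -n skips DNS lookups,
# --line-numbers shows each rule's position in the chain.
sudo iptables -L INPUT -v -n --line-numbers
```

The packet counters are handy later for checking whether a rule is actually matching anything.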

The second example will be the connection tracking rule:

sudo iptables                   -> Invoke iptables with sudo.
-A INPUT                        -> Add to the input chain.
-m conntrack                    -> Use the connection tracking module.
                                   It allows us to have stateful rules in
                                   iptables.
--ctstate RELATED,ESTABLISHED   -> Allow all established and related traffic.
-j ACCEPT                       -> Accept all traffic that matches this rule.

You might be asking now: what is established and related traffic? The best explanation I found was in this RedHat article, so I will steal it :P

  • “ESTABLISHED — A packet that is part of an existing connection.”
  • “RELATED — A packet that is requesting a new connection but is part of an existing connection. For example, FTP uses port 21 to establish a connection, but data is transferred on a different port (typically port 20).”

The last example will be the loopback rule:

sudo iptables   -> Invoke iptables with sudo.
-A INPUT        -> Add to the input chain.
-i lo           -> Incoming traffic on the network interface "lo".
-j ACCEPT       -> Accept all traffic that matches this rule.

The “lo” interface is the loopback interface, a virtual network interface used for traffic that never leaves the machine. Think of when you use “localhost” or “127.0.0.1” to access services running on your own device.

Since such traffic never leaves the device there is no reason to block it, and since we didn’t specify any protocol or port, this rule applies to all traffic on the interface.

Basically, the rule means “allow all traffic on this network interface.”

Closing Up the Firewall

Now that we have our essential services allowed, we should set up iptables to deny any traffic that hasn’t been explicitly allowed by us. This is very simple, as we just have to change the policy for each chain. You can think of the policy as the default case when no rule matches.

The script we ran earlier set the default policy to ACCEPT, meaning that all traffic would be, well… accepted by default. We can change this to DROP, which denies traffic by default.

sudo iptables -P INPUT DROP
sudo iptables -P FORWARD DROP
sudo iptables -P OUTPUT DROP
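
One caveat worth knowing (my own addition, assuming you stay on Ubuntu): iptables rules live in kernel memory and vanish on reboot. A common way to persist them is the iptables-persistent package; a sketch:

```shell
# Install the package; during setup it offers to save the current rules.
sudo apt install iptables-persistent
# Save the current ruleset to /etc/iptables/rules.v4 so it is restored
# at boot. Re-run this after any later rule changes (e.g. the Wireguard
# and service ports we open further down).
sudo netfilter-persistent save
```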

Setting Up Wireguard

Before starting I recommend reading the Wireguard website, as understanding the underlying principles will help you understand how and why Wireguard operates the way it does.

Remember kids, understand shit THEN memorize it if needed (this is for me not you :) ).

Instead of using ip link and wg directly, we are going to rely on wg-quick and systemd to have a persistent configuration (sorry if you are using a shell script as init :p).

First things first, we will generate a public/private key pair for our local server:

wg genkey | (umask 0077 && tee local_server.key) | wg pubkey > local_server.pub

And our VPS:

wg genkey | (umask 0077 && tee vps.key) | wg pubkey > vps.pub

The names of the files don’t matter; they are just for convenience’s sake when we refer to them later.
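
As a quick sanity check (my own habit, not a required step), you can verify the umask did its job and that the keys look sane; Wireguard keys are 32 bytes, which base64-encodes to exactly 44 characters:

```shell
# The private keys should be readable only by you: expect -rw------- (600).
ls -l local_server.key vps.key
# Each key file should contain one 44-character base64 line.
wc -c local_server.pub   # 44 chars + trailing newline = 45
```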

After generating our keys we need to build a config file for each peer (client) and store it in each machine’s /etc/wireguard/wg0.conf, which then allows us to use wg-quick to bring up the interfaces, or even better, use:

sudo systemctl enable wg-quick@wg0.service

To start them automatically on boot-up.

Do note that the convention is to name your interfaces “wg(number)” (without the parentheses, obviously), where number is an integer starting from 0.

So the first interface is wg0, then wg1, wg2, etc. But you can name it anything; just substitute wg0 with your interface name.
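
For quick experiments you don’t even need systemd; wg-quick can bring the tunnel up and down by hand, which is handy while you are still editing the config:

```shell
# Bring the wg0 interface up from /etc/wireguard/wg0.conf...
sudo wg-quick up wg0
# ...and tear it down again when you're done testing.
sudo wg-quick down wg0
```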

Before we write our server configuration we need to specify a port for Wireguard to operate on. In our case we will use port 51820 for both the VPS and the local server.

Wireguard uses UDP, not TCP; be careful not to mix them up when opening the ports on your VPS/local server. Speaking of which, let’s punch another hole through the VPS’ firewall:

sudo iptables -A INPUT -p udp -m udp --dport 51820 -m comment --comment "Input Wireguard" -j ACCEPT

As you can see, we don’t have an output rule; that’s because the established/related rule we added to the OUTPUT chain already covers replies to incoming Wireguard traffic :D

Understanding Wireguard Config Files

I will not be covering every option for Wireguard config files; that has already been compiled and explained in great detail in the unofficial Wireguard documentation by Github user Pirate, to be more exact this part, which covers everything you would want to know about Wireguard configuration files.

Wireguard uses INI files for configuration, but instead of semicolons, comments use #. Here is an example config with an explanation of each part right after:

[Interface]
PrivateKey = foo
Address = 192.168.50.2
ListenPort = 51820

[Peer]
PublicKey = bar
AllowedIPs = 192.168.50.1/32
Endpoint = 193.123.63.87:51820
PersistentKeepalive = 25

The first part defines options for our virtual interface, in our case “wg0”. Here is an explanation of each line:

# Every Wireguard config file starts with this line.
# It specifies that the following parameters are for the Wireguard
# interface configuration.
[Interface]                

# Should be equal to the private key we generated before.
# Used to decrypt traffic encrypted with the corresponding public key
# generated by other peers.
PrivateKey = foo

# An arbitrary IP address we select for the device.
Address = 192.168.50.2

# Which port to listen on for Wireguard traffic.
ListenPort = 51820

....

Following the interface configuration you will have configuration for peers (basically clients). Each peer entry should contain that peer’s public key (the public key of the other machine, which we generated in the previous steps), as we use that for encryption and authentication, and “AllowedIPs”, which specifies which IP address(es) are associated with which keys. More info can be found in the Cryptokey Routing section on the Wireguard website.

Let’s take a look at the rest of our config file. In this example we will use 193.123.63.87 as the public IP of the peer and 51820 as the Wireguard port:

...

# Specifies a new peer
[Peer]

# Specifies the public key for the peer. 
# Used for encryption and authentication.
PublicKey = bar

# Specifies either an IP or an IP range that is associated with the key.
# These are IPs inside the VPN, not the machine's public or router
# assigned IP.

# Traffic sent to this IP range is encrypted with the
# aforementioned public key.
AllowedIPs = 192.168.50.1/32

# Specifies the actual IP address of the machine (e.g. public IP,
# router assigned IP), and the Wireguard port.
Endpoint = 193.123.63.87:51820


# Send keep alive packets every 25 seconds, 
# these are important when behind a NAT or firewall so they don't 
# kill our connection.
PersistentKeepalive = 25       

PersistentKeepalive is very important for our use case; it keeps the connection between our VPS and local server alive even if there is no traffic for an extended period of time. You can read more about it here. But 25 is a nice number :D.

If we have multiple peers we can have something like this:

[Interface]
blah blah blah
....

[Peer]
PublicKey = foo
AllowedIPs = foobar
blah blah blah
....

[Peer]
PublicKey = baz
AllowedIPs = bazbar
blah blah blah
....

[Peer]
....

etc...

And just have as many as we want :D.

Setting Up the VPS Wireguard Config

Before configuring anything we need to do some housekeeping. First of all, find your IP and subnet through ifconfig; this is so we don’t accidentally overlap your home network’s subnet with the one we are going to set up, as that could lead to problems.

Usually for home networks it is something along the lines of 192.168.x.0/24.

We will use 192.168.50.0/24 as our subnet; if that conflicts with your home network then please select another number.
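
If ifconfig isn’t installed (it is deprecated on modern distros), the ip tool shows the same information; a quick way to check for an overlap:

```shell
# Print every interface with its assigned addresses and prefixes.
ip -br addr show
# If any line already sits inside 192.168.50.0/24, pick a different
# subnet for the tunnel.
```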

The next part is specifying which port Wireguard will operate on. We will use port 51820/udp as previously mentioned; if you are using that port for anything else then select another one.

[Interface]
PrivateKey = <the private key in vps.key>
ListenPort = 51820
Address = 192.168.50.1

[Peer]
PublicKey = <the public key in local_server.pub>
AllowedIPs = 192.168.50.2/32

Fill in the keys as instructed, store the file in /etc/wireguard/wg0.conf, and then run:

sudo systemctl enable wg-quick@wg0.service
sudo systemctl start wg-quick@wg0.service

If there are any syntactic issues the commands will fail and give you an error. Otherwise, you can simply do a

sudo wg

and you should see info about the interface.
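
A populated “latest handshake” field in that output is the real sign of life; if it never appears, the peers haven’t actually talked. Once both sides are up you can also query it directly:

```shell
# Show the time of the last completed handshake for each peer on wg0.
sudo wg show wg0 latest-handshakes
```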

Setting Up the Local Server Wireguard Config

Again, edit this to your needs and then store it in /etc/wireguard/wg0.conf:

[Interface]
PrivateKey = <Contents of local_server.key>
Address = 192.168.50.2
ListenPort = 51820

[Peer]
PublicKey = <Contents of vps.pub>
AllowedIPs = 192.168.50.1/32
Endpoint = <Public IP of the VPS>:51820
PersistentKeepalive = 25

and you should know the drill by now:

sudo systemctl enable wg-quick@wg0.service
sudo systemctl start wg-quick@wg0.service

and

sudo wg

to check the interface status.

Now for the moment of truth: if everything went fine we should be able to ping our local server from the VPS. You can’t do the reverse, since our VPS doesn’t allow incoming ICMP pings through the firewall; if you want to enable that then go ahead.

Here goes nothing, run this command from your VPS:

ping 192.168.50.2

And hopefully everything should be fine. If it isn’t, then either I fucked up explaining it or something is broken on your end. Double check that the port you opened on the VPS side is in fact UDP and not TCP.
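
If it still refuses to work, my go-to move (assuming tcpdump is installed, or apt install it) is to watch whether Wireguard packets reach the VPS at all:

```shell
# Run this on the VPS, then (re)start the tunnel on the local server.
# -n skips DNS lookups, -i any listens on every interface.
sudo tcpdump -n -i any udp port 51820
# No packets at all means the local server can't reach the VPS:
# wrong Endpoint, wrong port, or the Or*cle security list is in the way.
```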

Either way, may god have mercy on your soul.

Anyways, time to set up Caddy!

Caddy as a Reverse Proxy

Caddy is a goofy web server written in funny gopher language. It has simple syntax which is good very, it has auto SSL cert management which is very ongabunga, and simple syntax which is very UNGABONGA AAAOUUUUUUGA.

God do I hate people who write shit like that in the middle of the text, just get into the damn topic.

Install the damn thing via the installation instructions, and read Caddyfile concepts and the Caddyfile tutorial. They are relatively short and will spare you a lot of grief. Trust me, my ass stayed up until 4 AM trying to debug it because I couldn’t be bothered to spend the 5 minutes it needed TO JUST READ THE FUCKING MANUAL. Anyways, enough about my ADHD, let the configuration begin.

Caddy Basics

As mentioned before, we will deploy the reverse proxy on the VPS and point subdomains to our services hosted on the local server over the Wireguard tunnel. Do note that we are going to use the system package and not Docker, because it is just plain easier. Sounds good? Then let’s start.

First of all, we can run Caddy in two ways: either using the command line utility or as a systemd service. The command line utility is nice for testing, but actual deployment is done with systemctl, like so:

sudo systemctl enable caddy
sudo systemctl start caddy

When started this way Caddy will use the config in /etc/caddy/Caddyfile. For any modifications to take effect, reload Caddy using:

sudo systemctl reload caddy
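
Caddy can also check a config for mistakes without touching the running service, which is less exciting than reloading and watching it blow up:

```shell
# Parse and validate the config; the adapter is inferred from the
# file name "Caddyfile". Exits non-zero if something is wrong.
caddy validate --config /etc/caddy/Caddyfile
```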

Caddyfile

Same as with the Wireguard configuration files, I will not explain everything, just enough for what we want to do. If you want to do something outside of what I am going to explain then please read the docs.

Just as with the Wireguard configs, here is an example:

{
    email person@example.com
}

service.example.com {
    reverse_proxy 10.0.0.1:8080
}

Now let’s take each part and explain it in detail.

# Each Caddyfile starts with a global options block.
# Most important of these options is the email option, 
# which specifies an email address which is used to send warnings 
# in case there are any problems with your certs.
# You can just supply a dummy address or none at all but you won't get
# said warnings.
{
    email person@example.com
}

...

For more about global options you can read this.

...

# This is a site block. It defines info about our site.
# For our use case the only important keyword (also called a directive)
# is reverse_proxy, which makes Caddy act as a reverse proxy.
# When receiving traffic on the "service" subdomain it will be forwarded
# to 10.0.0.1:8080.

service.example.com {
    reverse_proxy 10.0.0.1:8080
}

Practical Example: Pingvin Share

The next part is a practical example of setting up a service on your local server with Docker, plus an example Caddyfile: Pingvin Share by stonith404, a temporary file storage platform that you can self-host.

I am assuming the following with this example:

  • That you have a domain name registered
  • That you know how to create DNS records

First things first, install Docker and Docker Compose on your local server as per the documentation.

Now we are going to use the example docker-compose.yml from the Github repo, so either download it with something like wget or just copy-paste it onto your machine. But before you run it, let’s take a look at its contents.

PLEASE DON’T COPY PASTE THIS COMPOSE FILE, IT IS INCOMPLETE. USE THE ONE FROM THE LINK ABOVE.

I won’t put the compose file on here since I don’t want people opening up issues on the repo when the problem is them using an old compose file, I’ll only be highlighting the parts that are important to us.

services:
  pingvin-share:
    image: stonith404/pingvin-share
    ...
    ports:
      - 3000:3000
    volumes:
    ...

Looking at the compose file we can see that the service runs on port 3000. If you have anything else running on that port then just change it to an unused port, e.g. port 4000:

services:
  pingvin-share:
    image: stonith404/pingvin-share
    ...
    ports:
      - 4000:3000
    volumes:
    ...

Yes, if you haven’t had the misfortune of messing around with Docker: the port exposed on the host is on the left, while the one the container uses internally is on the right, which is so dumb.

Now let’s download the container image and bring it up using:

docker compose pull
docker compose up -d
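
Before blaming the tunnel or Caddy later, it is worth confirming (on the local server) that the container is actually listening on the host port; a minimal check, assuming the remapped port 4000 from above:

```shell
# Ask for just the response headers; any HTTP response means it's alive.
curl -I http://localhost:4000
# Alternatively, see what's bound to the port.
ss -tlnp | grep 4000
```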

Anyways, now we need to add a new rule to the OUTPUT chain of the VPS’ iptables so that Caddy can communicate with our service. Alternatively, you can just enable all traffic on the wg0 interface (in and out, or just out) on the VPS side, like we did with the loopback interface:

sudo iptables -A INPUT -i wg0 -j ACCEPT
sudo iptables -A OUTPUT -o wg0 -j ACCEPT

Fully up to you.

But I prefer whitelisting each service manually, e.g. with port 4000:

sudo iptables -A OUTPUT -p tcp -m tcp --dport 4000 -m comment --comment "Pingvin Share" -j ACCEPT
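
With the rule in place you can check from the VPS that the service is reachable across the tunnel before involving Caddy at all:

```shell
# 192.168.50.2 is the local server's tunnel address from our configs.
curl -I http://192.168.50.2:4000
```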

Before writing a Caddyfile, first choose a subdomain to associate Pingvin Share with. For our example we will use example.com as the domain and pingvin.example.com as the subdomain.

Obviously replace example.com with your own domain name.

Now create an A record on your domain provider’s dashboard and point it to your VPS’ IP address.
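
DNS changes can take a while to propagate, so before pointing Caddy at the name it is worth checking that the record actually resolves to your VPS (dig usually ships in your distro’s dnsutils/bind-utils package):

```shell
# Should print your VPS' public IP; if it prints nothing, wait and retry.
dig +short pingvin.example.com
```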

After you’ve done that we can edit /etc/caddy/Caddyfile. First specify your email so you get notifications if anything goes wrong with your SSL certs; then it is a simple case of using the reverse_proxy directive to make Caddy act as a reverse proxy when requests for the pingvin subdomain reach it.

Here is an example config:

{
        email <An email address you control>
}

pingvin.example.com {
        reverse_proxy 192.168.50.2:4000
}
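
Reload Caddy so it picks up the new site block (it will go fetch a certificate for the subdomain on its own), then test the whole chain end to end:

```shell
sudo systemctl reload caddy
# A 2xx/3xx response over HTTPS means DNS, certs, the tunnel, and the
# container are all doing their jobs.
curl -I https://pingvin.example.com
```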

Conclusion

That’s it! Enjoy self-hosting your services!

If you found any issues with this article please email me~