I set up Headscale and Tailscale using Docker on a VPS, which I want to use as my public IPv4 address and reverse proxy to route incoming traffic to my local network and e.g. my home server. I also set up Tailscale using Docker on my home server and connected both to my Headscale server.
I am able to ping one Tailscale container from the other and vice versa, and I set up --advertise-routes=192.168.178.0/24 on my home server as well as --accept-routes on my VPS, but I can't ping local IP addresses from my VPS. What am I missing?
Both containers are connected to the host network, and I have opened UDP ports 41641 and 3478 on my VPS.
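(For context, the setup described above boils down to something like the following; the login-server URL and the container name are placeholders, not taken from the thread.)

```shell
# Placeholder container name "tailscale" and Headscale URL; substitute your own.

# On the home server: advertise the local subnet to the tailnet.
docker exec tailscale tailscale up \
  --login-server=https://headscale.example.com \
  --advertise-routes=192.168.178.0/24

# On the VPS: accept routes advertised by other nodes.
docker exec tailscale tailscale up \
  --login-server=https://headscale.example.com \
  --accept-routes
```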
Try v1.60.1
Of what?
image: tailscale/tailscale:v1.60.1
To pull that version of tailscale. Latest broke subnets.
Doesn’t seem to work.
You might have other issues then, but I'd use that version of Tailscale, since it was the last version to work with subnets. Also, only the owner's account works, cuz sharing subnets broke even longer ago, and I'm positive neither has been fixed. Good luck!
Subnets seem to work for me with 1.62.0 docker image. In what way were they broken?
I reported it the day the update was released, cuz all of my containers are on their own IPs. Got that update and nothing was reachable till I rolled back.
Did you enable the route in the admin web ui?
I’m using Headscale, but yes.
That should be all that's required. Are you using ACLs? If so, you need to provide access to the subnet router as well as a rule for the IP behind it.
No, I’m not using ACLs.
Can your nodes ping each other on the Tailscale IPs? Check
tailscale status
and make sure the nodes see each other listed there. Try
tailscale ping 1.2.3.4
with the internal IP addresses and see what message it gives you.
tailscale debug netmap
is useful to make sure your clients are seeing the routes that Headscale pushes.
Yes, both clients can tailscale ping each other, and after doing so the status shows active; relay "ams".
Using tailscale ping 192.168.178.178 also works for some reason.
Not sure what to do with the output of netmap.
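For the record, with the clients running in containers the checks suggested above can be run via docker exec; the container name and the peer's tailnet IP below are placeholders.

```shell
# Placeholder container name "tailscale" and peer IP 100.64.0.2; substitute your own.
docker exec tailscale tailscale status                # both nodes should be listed
docker exec tailscale tailscale ping 100.64.0.2       # ping the peer's tailnet IP
docker exec tailscale tailscale ping 192.168.178.178  # ping an IP behind the subnet router
docker exec tailscale tailscale debug netmap          # in the dump, look for
                                                      # 192.168.178.0/24 under the
                                                      # home server's routes
```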
Relay "ams" means you're using Tailscale's DERP node in Amsterdam; this is expected if you don't have direct connectivity through your firewall. Since you opened the ports, that's unusual and worth looking into, but I'd worry about it after you get basic connectivity.
So to confirm your behavior, you can tailscale ping each other fine and tailscale ping to the internal network. You cannot however ping from the OS to the remote internal network?
Have you checked your routing tables to make sure the tailscale client added the route properly?
Also have you checked your firewall rules? If you’re using ipfw or something, try just turning off iptables briefly and see if that lets you ping through.
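A quick way to check both, sketched for a Linux host: the Tailscale client installs its routes in a dedicated policy-routing table (table 52 by default), not the main table, so a plain `ip route show` can look empty even when routing works.

```shell
# Routes installed by the Tailscale client live in table 52 by default.
ip route show table 52        # should list 192.168.178.0/24 via the tailscale interface
ip rule show                  # policy rules steering traffic into table 52

# Inspect the firewall before turning anything off.
sudo iptables -L FORWARD -v -n
```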
So to confirm your behavior, you can tailscale ping each other fine and tailscale ping to the internal network. You cannot however ping from the OS to the remote internal network?
Exactly.
Have you checked your routing tables to make sure the tailscale client added the route properly?
How do I do this? I use Headscale and
headscale routes list
shows the following:

ID | Machine | Prefix           | Advertised | Enabled | Primary
1  | server  | 0.0.0.0/0        | false      | false   | -
2  | server  | ::/0             | false      | false   | -
3  | server  | 192.168.178.0/24 | true       | true    | true
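For anyone else landing here: once a node advertises a route, Headscale also has to enable it server-side. A sketch, assuming a Headscale CLI of roughly this era (subcommand and flag spellings have shifted between Headscale versions):

```shell
# List advertised routes and their IDs (matches the table above).
headscale routes list

# Enable a route by its ID, e.g. ID 3 for 192.168.178.0/24.
# Flag spelling varies by Headscale version ("-r" on older releases).
headscale routes enable -r 3
```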
Also have you checked your firewall rules? If you’re using ipfw or something, try just turning off iptables briefly and see if that lets you ping through.
I'm not using a firewall, but the VPS is hosted on Hetzner, which has its own firewall. But I already allowed UDP ports 41641 and 3478. The wg0 rule is from the WireGuard setup I want to replace with Tailscale.
# iptables --list-rules
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-N DOCKER
-N DOCKER-ISOLATION-STAGE-1
-N DOCKER-ISOLATION-STAGE-2
-N DOCKER-USER
-A INPUT -s 100.64.0.0/10 -j ACCEPT
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -i wg0 -j ACCEPT
-A DOCKER -d 172.17.0.3/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 443 -j ACCEPT
-A DOCKER -d 172.17.0.3/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 81 -j ACCEPT
-A DOCKER -d 172.17.0.3/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 80 -j ACCEPT
-A DOCKER -d 172.17.0.5/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 9090 -j ACCEPT
-A DOCKER -d 172.17.0.5/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 8080 -j ACCEPT
-A DOCKER -d 172.17.0.6/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 443 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 9001 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
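One thing the dump above can't show is whether IP forwarding is enabled on the home server (the subnet router); Tailscale's subnet-router setup requires it. A quick check, assuming a Linux host:

```shell
# On the subnet-router host: forwarding must be on, or traffic arriving for
# the advertised 192.168.178.0/24 route is dropped by the kernel.
sysctl net.ipv4.ip_forward               # want: net.ipv4.ip_forward = 1
sysctl net.ipv6.conf.all.forwarding

# Enable persistently if it isn't:
echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
sudo sysctl -p /etc/sysctl.d/99-tailscale.conf
```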
I ran into a similar problem with Tailscale. It looked like I needed to disable source NAT, but that didn't appear to be implemented in the FreeBSD package, so it didn't work for me. If you're on Linux it might be worth a shot.
--snat-subnet-routes=false
“Disables source NAT. In normal operations, a subnet device will see the traffic originating from the subnet router. This simplifies routing, but does not allow traversing multiple networks. By disabling source NAT, the end machine sees the LAN IP address of the originating machine as the source.”
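If you want to try it on Linux, the flag goes on the subnet router's `tailscale up` invocation. A sketch; the advertised route is the one from this thread, everything else is whatever flags you already use:

```shell
# On the subnet router: re-run "up" with source NAT disabled, keeping the
# existing route advertisement. LAN hosts will then see the real tailnet
# source IPs, so they need a return route back through the subnet router.
tailscale up \
  --advertise-routes=192.168.178.0/24 \
  --snat-subnet-routes=false
```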
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

IP   Internet Protocol
NAT  Network Address Translation
UDP  User Datagram Protocol, for real-time communications
VPN  Virtual Private Network
VPS  Virtual Private Server (opposed to shared hosting)
[Thread #703 for this sub, first seen 22nd Apr 2024, 16:55]
Sometimes these issues happen because of the IP range you're using. If your local network and your remote network both use the 192.168.x.x range, there can be conflicts and issues like this. This happens with VPNs generally; I'm not sure how Tailscale specifically handles it.
Even if that’s not what’s going on here, you might try setting up your remote node as an exit node, and configuring your local node to route all traffic through it. Theoretically that shouldn’t be necessary, and it will also slow down your traffic if you’re routing EVERYTHING through Tailscale. But it could work in a pinch.
Actually, I’m looking at Tailscale documentation now and I see that they recommend setting up subnet routers instead of exit nodes in most cases. Maybe go that route instead, that makes more sense to me. That way you’re only routing necessary traffic through the remote node, rather than everything.
Thanks, that’s what I’m trying to do. :)
And my VPS doesn’t have any IPs in the same range as my home server.
‘ip route show’ on all machines. Make sure they know how to get to each other.
How do I make sure of this? What am I supposed to see using the command?
You expect to see the subnet of the VPN network mentioned, with the wg0 interface as its gateway. You also might want to make sure your wg0 interface even exists and is up, with 'ip addr show'.
Are you sure Tailscale in Docker creates a wg0 interface? Because I have a working connection between my smartphone and my home server, and the home server is not showing any interface related to Tailscale.
default via 192.168.178.1 dev ens18
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.178.0/24 dev ens18 proto kernel scope link src 192.168.178.178
Are you running it in a container? Then you’ll be seeing the docker0 interface as you see there, and the container will route through that.
Yes I’m running it on Docker and therefore have the docker0 interface.
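One containerized-Tailscale detail worth checking here: the official tailscale/tailscale image defaults to userspace networking (TS_USERSPACE=true), in which mode no tailscale0 interface or kernel routes are created at all, so the host OS can't use subnet routes even with network_mode: host. A sketch of how to check (the container name is a placeholder):

```shell
# Check whether the container is running in userspace-networking mode.
docker inspect tailscale --format '{{.Config.Env}}' | tr ' ' '\n' | grep TS_USERSPACE

# Kernel networking needs the TUN device and NET_ADMIN, roughly:
#   docker run ... --cap-add NET_ADMIN --device /dev/net/tun \
#     -e TS_USERSPACE=false tailscale/tailscale
# With host networking and kernel mode, the host should then show:
ip addr show tailscale0       # the interface is tailscale0, not wg0
ip route show table 52        # Tailscale keeps its routes in table 52
```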