I am not sure whether I should be asking this on Unix & Linux or on Network Engineering.
Here is the physical scenario:
[Host 1]----[Carrier-grade NAT]---->AWS<----[Carrier-grade NAT]----[Host 2]
Host 1 and Host 2 are reverse-SSH'ed (via autossh) into an AWS box, so they do have shell connectivity if required, and can expose any other port if required.
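For context, the reverse tunnels look roughly like this (the host name aws-box, the tunnel user, and port 2201 are placeholders, not the real values):

    # on each host: keep a reverse tunnel from the AWS box back to the local sshd
    autossh -M 0 -f -N \
        -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
        -o "ExitOnForwardFailure yes" \
        -R 2201:localhost:22 tunnel@aws-box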
Host 2 pushes backup dumps to Host 1 via SCP on a regular basis. There are actually ten Host 2 boxes pushing the data dumps. The nearest AWS region is quite far from where the boxes sit, so latency is high.
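Since the dumps currently hop through the AWS box over the reverse tunnels, each copy looks roughly like this (same placeholder names as above; ProxyJump needs OpenSSH 7.3+, so the CentOS 6 boxes would have to use ProxyCommand instead):

    # push a dump from a Host 2 box to Host 1 through the AWS box;
    # localhost:2201 on aws-box is Host 1's reverse-forwarded sshd
    scp -o "ProxyJump tunnel@aws-box" -P 2201 backup.dump user1@localhost:/backups/

Every dump has to detour through the distant AWS region, which is exactly what I want to avoid.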
Is it possible to use the AWS box as a rendezvous point to broker an SSH tunnel directly between the boxes? I am aware of IPv6 tunnel brokers, but the ISPs in the region are yet to adopt IPv6 (20 years late... duh!). I am exploring a solution based on:
- TCP / UDP hole punching (with a practical implementation; see the rendezvous sketch after this list)
- UPnP / NAT-PMP service on AWS
- Using tools such as Chrome Remote Desktop, hacked to expose the SSH port rather than VNC
- Any other router service.
- Any other practical approach.
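To make the hole-punching idea concrete, here is a minimal rendezvous sketch: each host connects out to the AWS box from a fixed source port, and the AWS box echoes back the public address:port it sees. Whether the CGN will reuse that mapping for the next outbound connection from the same source port depends on its NAT behaviour (it needs endpoint-independent mapping). All names, addresses, and ports are placeholders:

    # on the AWS box: answer each connection with the peer's public address
    socat TCP4-LISTEN:9000,fork,reuseaddr SYSTEM:'echo $SOCAT_PEERADDR:$SOCAT_PEERPORT'

    # on Host 1 (and likewise Host 2): learn the CGN mapping for local port 45678
    socat TCP4-CONNECT:aws-box:9000,bind=hostlocalip:45678,reuseaddr STDOUT

The two hosts can then exchange the learned mappings over their existing shells on the AWS box and feed them into the hole-punching commands.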
The boxes are mostly running CentOS 6/7.
Use socat (not netcat) to punch the hole, then connect ssh to it (via localhost). E.g. on one side (the "server" side, so Host 1):

    socat TCP4-CONNECT:cgnatedhost2:cgnatedport2,bind=outgoingip1:outgoingport1 TCP4-CONNECT:localhost:22

On the "client" side (Host 2):

    socat TCP4-CONNECT:cgnatedhost1:cgnatedport1,bind=outgoingip2:outgoingport2 TCP4-LISTEN:2222

And on the "client" you'd connect with:

    ssh -p 2222 user1@localhost

(If port 2222 is never used for anything else, modern ssh will remember the correct remote host key, so there is no need for NoHostAuthenticationForLocalhost.)
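In practice the simultaneous open rarely succeeds on the first attempt: both socat commands have to start at nearly the same moment so the two SYNs cross inside the NATs. A minimal retry wrapper, using the same placeholder names (reuseaddr is my addition; it lets socat rebind the same source port straight after a failed attempt):

    # on Host 2 (the "client" side); Host 1 wraps its socat command the same way
    until socat TCP4-CONNECT:cgnatedhost1:cgnatedport1,bind=outgoingip2:outgoingport2,reuseaddr TCP4-LISTEN:2222,reuseaddr; do
        sleep 1
    done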