
I'm facing a Docker networking issue with a cluster whose nodes are deployed on different hosts. To work around it, I ran:

    docker run -idt --net=host mongodb /bin/bash

After running this command, the container's application exposes its port directly on the host's IP. This solved my problem and all the nodes can now communicate with each other, but I want to know: is running a container like this the right way? Should we use this approach in production?

1 Answer


No, you should not be using --net=host in production. That said, it really depends on your specific environment; maybe you have other security measures in place that make it OK to run your container fully open to the host. By using --net=host you are essentially making a one-to-one mapping of all of the container's ports to the host's ports, so your MongoDB port is exposed to anything that can reach your host.
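To illustrate, here is a minimal sketch of what that means in practice (the official mongo image and its default port 27017 are assumed here; your image may differ):

    # the container shares the host's network namespace outright
    docker run -d --net=host mongo
    # mongod now listens directly on the host's interfaces,
    # e.g. 192.168.x.x:27017, with no port mapping or isolation in between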

You should instead use a Docker overlay network to connect containers running on different hosts.
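As a hedged sketch of that setup (with current Docker versions the overlay driver is managed through swarm mode; all names and addresses below are placeholders):

    # on one host, initialize swarm mode
    docker swarm init
    # on each other host, join using the token printed by the init command
    docker swarm join --token <token> <manager-ip>:2377
    # on the manager, create an attachable overlay network
    docker network create -d overlay --attachable mongo-net
    # on any node in the swarm, run containers attached to that network
    docker run -d --net=mongo-net --name mongo1 mongo

Containers on mongo-net can then reach each other by container name across hosts, without publishing any ports on the host IPs.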

https://docs.docker.com/engine/userguide/networking/dockernetworks/


4 Comments

Thanks, but how do I solve the networking issue when creating a cluster across different host servers? For example, on serverA container1's IP is 172.17.0.2/17 and the host's IP is 192.168.x.x/24. From inside the container I can ping anything on the 192.168.x.x network, but from outside, nothing except the host machine can ping container1. So how can I make the container's IP reachable from everywhere?
If you want to expose a specific port from your container, you can use "-p <hostPortNum>:<containerPortNum>", which makes the container's port accessible through the HOST IP. In your example, let's say your container uses port 28015: when you run the container, you could add a "-p 28015:28015" flag. You would then be able to access that container on that port via the host IP, e.g. 192.168.x.x:28015.
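For instance, a minimal sketch of that run command (image and container names are illustrative):

    docker run -d -p 28015:28015 --name mydb some-image
    # port 28015 inside the container is now reachable at the host's IP,
    # e.g. 192.168.x.x:28015, while the rest of the container stays isolated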
With a Docker network, however, if you have container_A and container_B you can run them on the same Docker network and they will be able to reach each other without exposing anything to the outside world. I'm not sure about your exact use case, though: whether everything you are running on the multiple host servers is in Docker containers or not. If you have a mix of Docker containers and non-containerized processes, then you can't really go that route; you have to let your containers expose their ports to the host.
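As a single-host sketch of that (network and container names are placeholders):

    docker network create app-net
    docker run -d --net=app-net --name container_A mongo
    docker run -d --net=app-net --name container_B mongo
    # user-defined networks provide DNS, so each container resolves
    # the other by name (assuming ping exists in the image):
    docker exec container_A ping -c 1 container_B

Nothing is published to the outside world unless you add -p flags.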
Thanks, that's why I'm using --net=host; it solved all the problems, and in this configuration I use the -v flag to move the data path out of the container. I think this can be used in production.
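For reference, a hedged version of that command with the data directory mounted out (the host path is illustrative, and /data/db assumes the standard MongoDB data directory):

    docker run -idt --net=host -v /srv/mongo/data:/data/db mongodb /bin/bash
    # the database files live on the host at /srv/mongo/data,
    # so they survive container removal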
