
I'm getting this strange error when I try to run a Docker container with a given name:

docker: Error response from daemon: service endpoint with name qc.T8 already exists. 

However, there is no container with this name.

> docker ps -a
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES

> sudo docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 3
Server Version: 1.12.3
Storage Driver: aufs
 Root Dir: /ahdee/docker/aufs
 Backing Filesystem: extfs
 Dirs: 28
 Dirperm1 Supported: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: null bridge host overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: apparmor
Kernel Version: 3.13.0-101-generic
Operating System: Ubuntu 14.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 64
Total Memory: 480.3 GiB

Is there any way I can flush this out?



Just in case someone else needs this: as @Jmons pointed out, it was a weird networking issue, so I solved it by forcing a removal:

docker network disconnect --force bridge qc.T8 
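If you are not sure which network still holds the stale endpoint, the check can be scripted. The helper below is only a sketch (it is not from the original answer): it reads the JSON produced by docker network inspect on stdin, and qc.T8 is the endpoint name from the question.

```shell
# Sketch: given 'docker network inspect <net>' JSON on stdin, report
# whether a stale endpoint/container name still appears in it, so you
# know which network to pass to 'docker network disconnect --force'.
has_stale_endpoint() {
    # $1 is the endpoint name, e.g. qc.T8 from the question
    grep -q "\"Name\": \"$1\"" -
}

# Usage (assumes docker is available):
#   docker network inspect bridge | has_stale_endpoint qc.T8 \
#       && docker network disconnect --force bridge qc.T8
```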


2 Comments

This helped a lot!
As described in @ShalabhNegi's answer, this command has the following pattern: docker network disconnect <network name> <container name>

TL;DR: restart your Docker daemon, or restart your docker-machine (if you're using that, e.g. on a Mac).

Edit: As there are more recent posts below, they answer the question better than mine: the network adapter is stuck on the daemon. I'm updating mine as it's possibly at the top of the list and people might not scroll down.

  1. Restarting your Docker daemon / Docker service / docker-machine is the easiest answer.

  2. The better answer (via Shalabh Negi):

docker network inspect <network name> 
docker network disconnect <network name> <container id/container name> 

This is also faster in real time, if you can find the network, as restarting the Docker machine/daemon/service is slow in my experience. If you use that, please scroll down and upvote their answer.


So the problem is probably your network adapter (a virtual, Docker-created one, not a real one): have a quick peek at this: https://github.com/moby/moby/issues/23302.

Preventing it from happening again is a bit tricky. There seems to be an issue in Docker where a container that exits with a bad status code (e.g. non-zero) holds the network endpoint open; you then can't start a new container with that endpoint name.

2 Comments

Thanks for pointing me to that issue. I entered docker network disconnect --force bridge qc.T8 and it seems to have worked.
--force did the trick for me, without having to restart the Docker daemon.
docker network inspect <network name> 
docker network disconnect <network name> <container id/container name> 

You can also try doing:

docker network prune 
docker volume prune 
docker system prune 

These commands help clear zombie containers, volumes, and networks. If none of them works, then run

sudo service docker restart 

and your problem should be solved.

1 Comment

sudo service docker restart worked for me, thanks +1
docker network rm <network name> 

Worked for me



Restarting docker solved it for me.

1 Comment

Thank you. I had a problem where I could not disconnect from a network, and restarting works fine: sudo service docker restart

I created a script a while back; I think this should help people working with Swarm. Using docker-machine, this can help a bit.

https://gist.github.com/lcamilo15/7aaaebe71852444ea8f1da5c4c9c84b7

declare -a NODE_NAMES=("node_01" "node_02")
declare -a CONTAINER_NAMES=("container_a" "container_b")
declare -a NETWORK_NAMES=("network_1" "network_2")

for x in "${NODE_NAMES[@]}"; do
    # point the docker client at this swarm node
    eval "$(docker-machine env "$x")"
    for CONTAINER_NAME in "${CONTAINER_NAMES[@]}"; do
        for NETWORK_NAME in "${NETWORK_NAMES[@]}"; do
            echo "Disconnecting $CONTAINER_NAME from $NETWORK_NAME"
            docker network disconnect -f "$NETWORK_NAME" "$CONTAINER_NAME"
        done
    done
done



You could try seeing if there's any network with that container name by running:

docker network ls

If there is, copy the network ID, then remove the network by running:

docker network rm <network-id>
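The lookup step can also be scripted. The helper below is a sketch, not part of the answer: it reads the docker network ls listing on stdin, and qc_net is a made-up network name used for illustration.

```shell
# Sketch: map a network name to its ID from 'docker network ls' output
# read on stdin (columns: NETWORK ID, NAME, DRIVER, SCOPE), so the ID
# can be fed into 'docker network rm'.
network_id_by_name() {
    awk -v name="$1" '$2 == name { print $1; exit }'
}

# Usage (assumes docker is available; qc_net is hypothetical):
#   docker network rm "$(docker network ls | network_id_by_name qc_net)"
```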



This could be because an abrupt removal of a container can leave the network endpoint open for that container name.

Try stopping the container before removing it: docker stop <container-name>, then docker rm <container-name>.

Then docker run <same-container-name> again.



I think restarting the Docker daemon will solve the problem.



Even a reboot did not help in my case. It turned out that port 80, which the nginx container was to be assigned automatically, was in use, even after a reboot. How come?

root@IONOS_2: /root/2_proxy # netstat -tlpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address    Foreign Address  State    PID/Program name
tcp        0      0 0.0.0.0:873      0.0.0.0:*        LISTEN   1378/rsync
tcp        0      0 0.0.0.0:5355     0.0.0.0:*        LISTEN   1565/systemd-resolv
tcp        0      0 0.0.0.0:80       0.0.0.0:*        LISTEN   1463/nginx: master
tcp        0      0 0.0.0.0:22       0.0.0.0:*        LISTEN   1742/sshd
tcp6       0      0 :::2377          :::*             LISTEN   24139/dockerd
tcp6       0      0 :::873           :::*             LISTEN   1378/rsync
tcp6       0      0 :::7946          :::*             LISTEN   24139/dockerd
tcp6       0      0 :::5355          :::*             LISTEN   1565/systemd-resolv
tcp6       0      0 :::21            :::*             LISTEN   1447/vsftpd
tcp6       0      0 :::22            :::*             LISTEN   1742/sshd
tcp6       0      0 :::5000          :::*             LISTEN   24139/dockerd

No idea what nginx: master means or where it came from. And indeed, 1463 is the PID:

root@IONOS_2: /root/2_proxy # ps aux | grep "nginx"
root      1463  0.0  0.0  43296   908 ?      Ss   00:53  0:00 nginx: master process /usr/sbin/nginx
root      1464  0.0  0.0  74280  4568 ?      S    00:53  0:00 nginx: worker process
root     30422  0.0  0.0  12108  1060 pts/0  S+   01:23  0:00 grep --color=auto nginx

So I tried this:

root@IONOS_2: /root/2_proxy # kill 1463
root@IONOS_2: /root/2_proxy # ps aux | grep "nginx"
root     30783  0.0  0.0  12108   980 pts/0  S+   01:24  0:00 grep --color=auto nginx

And the problem was gone.
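The manual hunt above (netstat, then grep, then kill) can be condensed into a small helper. This is just a sketch, not something from the answer: it parses netstat -tlpn output read on stdin.

```shell
# Sketch: extract the PID of the process listening on a given TCP port
# from 'netstat -tlpn' output read on stdin (field 4 is the local
# address, field 7 is PID/Program name, e.g. "1463/nginx:").
pid_on_port() {
    awk -v port=":$1" '$4 ~ port "$" { split($7, a, "/"); print a[1]; exit }'
}

# Usage (run as root so netstat can show PIDs):
#   netstat -tlpn | pid_on_port 80    # then inspect or kill that PID
```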

4 Comments

Okay, so this is something else running on your computer. Nginx is a web server that lots of things use, and it probably starts by default on your machine (e.g. look in init.d or whatever your system uses). I would suggest your problem has nothing to do with the original post, but is just a separate service already holding the port open. The original OP was unable to start the container because the container name was held open by a previously running container, irrespective of network ports.
@Jmons Ok, that's essentially true, but I was led to this page searching for a solution to my problem, to which I found the answer myself after not getting any clue elsewhere. I added this for those in my situation, to give them an additional hint which I did not find here.
Then you should create a new question that addresses that and add yours as an answer, though I suspect it would be a duplicate. I'm sorry, but you would not have got the error message in the original question, and your solution of stopping/starting other applications would not help someone with the error "service endpoint with name XXX already exists"; therefore your answer is in the wrong place and is misleading.
Ok, you are right and I am wrong. Sorry. What now? Can I delete it to make you happy?
