
I would like to mount a directory from a Docker container to the local filesystem. The directory is a website root, and I need to be able to edit it on my local machine using any editor.

I know I can run docker run -v local_path:container_path, but doing that only creates an empty directory inside the container.

How can one mount a directory from a Docker container on a Linux host?

  • Have you read stackoverflow.com/a/31726568/1981061? Commented Oct 14, 2016 at 13:10
  • Thank you. How would you suggest using volumes to provide a developer with a ready-to-use LAMP stack environment, so that they can use their own computer with their favourite editors and tools? Commented Oct 14, 2016 at 13:38
  • Consider doing it the other way around: mount your source directory inside your container, then use your editor, build, and execute from the mounted directory inside your container. The other way would be to mount a directory on the host and have the container copy the source code into it, so it is accessible from the host for your editor. Commented Oct 14, 2016 at 13:47
  • I don't understand people who dedicate themselves to spreading negative points on questions and answers. This question is not only perfectly valid, it points to a missing feature in Docker. You should be able to mount a named volume on your host. It is already in the filesystem, but if you write there the changes won't be reflected in the container. Commented Oct 14, 2016 at 18:12
  • Possible duplicate of Mount directory in Container and share with Host Commented Jun 10, 2018 at 4:46
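Following the "other way around" suggestion in the comments, a minimal sketch of the usual approach (the ~/mysite path and the php:apache image are assumptions for illustration, not from the question): keep the website root on the host and bind-mount it into the container, so any local editor sees the same files the container serves.

```shell
# Keep the website root on the host; the container sees live edits.
mkdir -p ~/mysite
echo '<h1>Hello</h1>' > ~/mysite/index.html

# Bind-mount the populated host directory over the container's web root.
# A non-empty host directory shadows whatever the image had at that path.
docker run --rm -v ~/mysite:/var/www/html -p 8080:80 php:apache
```

Edits made on the host under ~/mysite are visible in the container immediately, since a bind mount is the same directory seen from both sides.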

2 Answers


It is a bit weird, but you can use named volumes for that. Unlike host-mounted volumes, named ones won't be emptied, and you can access the directory. See the example:

docker volume create --name data
docker run --rm=true -v data:/etc ubuntu:trusty
docker volume inspect data
[
    {
        "Name": "data",
        "Driver": "local",
        "Mountpoint": "/var/lib/docker/volumes/data/_data",
        "Labels": {},
        "Scope": "local"
    }
]

See the mount point?

mkdir ~/data
sudo -s
cp -r /var/lib/docker/volumes/data/_data/* ~/data
echo "Hello World" > ~/data/hello.txt
docker run --rm=true -v ~/data:/etc ubuntu:trusty cat /etc/fstab      # The content is preserved
docker run --rm=true -v ~/data:/etc ubuntu:trusty cat /etc/hello.txt  # And your changes too

It is not exactly what you were asking for, but depending on your needs it may work.

Regards




If your goal is to provide a ready-to-go LAMP stack, you should use the VOLUME declaration inside the Dockerfile: VOLUME volume_path_in_container. The problem is that Docker will not mount the files, because they were already present in the path you are creating the volume on. You can go the way @Grif-fin suggested in his comment, or modify the entrypoint of the container so that it copies the files you want to expose into the volume at run time.

You have to add your data using the COPY or ADD command in the Dockerfile so the base files are present in the image.

Then create an entrypoint that copies the files from the COPY path to the volume path.
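A minimal sketch of such an entrypoint, under stated assumptions: the seed and target paths below are hypothetical placeholders, not paths from the answer or the linked repository. It seeds the mounted volume from files baked into the image, and only when the target is empty, so later host edits are never overwritten.

```shell
#!/bin/sh
# Entrypoint sketch: seed a mounted volume from files baked into the
# image at build time, then hand off to the container's main command.

seed_volume() {
    seed_dir=$1     # where the Dockerfile COPY put the base files (assumed path)
    target_dir=$2   # the path declared as a VOLUME / mounted with -v (assumed path)

    # Copy only when the target is empty, so host edits are never overwritten.
    if [ -d "$seed_dir" ] && [ -z "$(ls -A "$target_dir" 2>/dev/null)" ]; then
        mkdir -p "$target_dir"
        cp -r "$seed_dir"/. "$target_dir"/
    fi
}

# In a real image, the script would end with something like:
#   seed_volume /seed /var/www/html
#   exec "$@"
```

Set it as the image's ENTRYPOINT; on first run it populates the freshly mounted (empty) host directory, and on subsequent runs it leaves your edited files alone.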

Then run the container using the -v flag, like -v local_path:volume_path_in_container. This way, you should have the files inside the container mounted locally. (At least, that is what I had.)

Find an example here: https://github.com/titouanfreville/Docker/tree/master/ready_to_go_lamp.

It avoids having to rebuild every time, and you can provide it from a definitive image.

To be nicer, it would be good to add user support so that you own the mounted files (if you are not root).

Hope it was useful to you.
