
I am trying to build a setup where my Dockerfile has instructions to clone a couple of git repos (among other things). After the first run, the cloned git repos should be made available to the host machine for editing. All further edits to the local cloned git repos should be made available for future Docker builds.

So, how do I expose the git repos that were cloned in the Dockerfile for editing on the host machine?

  • Docker is not an ideal solution if used as a storage container. Commented Mar 30, 2018 at 22:50
  • Interesting, I was trying to combine two use cases here. First: I wanted new developers to just run docker-compose and have their whole environment set up automatically (that's why the git clone). Second: they should also have the ability to start working locally and run their changes off the Docker environment. Isn't this a plausible scenario? Commented Mar 30, 2018 at 23:23
  • Or maybe the git cloning of remote repos only makes sense in a Dockerfile if it is part of a CI/CD pipeline. Otherwise, just do a git clone separately and then push that context into the Docker daemon? Is that the "normal" approach of using Docker? Commented Mar 30, 2018 at 23:26
  • When you build a Docker image, whatever you put into it at that time is set. You can change the contents during an image run, but if the image is stopped and restarted, you are back to the original contents - think CD-ROM. Maybe read through this: docs.docker.com/storage Or maybe I'm not understanding your use case. Commented Mar 30, 2018 at 23:41
  • A Dockerfile should not interfere with the host system; this is by design. If you just want to set up the development environment, why not use a shell script? Commented Mar 31, 2018 at 3:08

2 Answers


You can do it in three ways.

Here is the Dockerfile.

FROM node:alpine
RUN apk add --no-cache git
RUN apk add --no-cache openssh
WORKDIR /data
RUN git clone https://github.com/jahio/hello-world-node-express.git /data/app
WORKDIR /data/app
EXPOSE 3000

Build:

docker build -t node-test . 

Update:

Here is another solution, which is the easiest and works well.

Create an empty directory on the host and in the container, and mount one onto the other:

/home/adiii/Desktop/container_Data:/to_host 

Copy the cloned repo to /to_host in the entrypoint with the -u flag, so only new files are copied and the host data stays persistent.

and entrypoint.sh

#!/bin/ash
cp -r -u /data/app /to_host && /bin/ash
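The cp -u behaviour that keeps host edits persistent can be checked outside Docker with plain directories (a minimal sketch using throwaway paths; "app" stands in for the repo cloned in the image, "to_host" for the bind-mounted host directory):

```shell
#!/bin/bash
set -e
tmp=$(mktemp -d); cd "$tmp"

mkdir -p app to_host
echo "cloned" > app/file.txt

cp -r -u app to_host                     # first boot: the file is copied to the host side
echo "host edit" > to_host/app/file.txt  # developer edits the host copy

cp -r -u app to_host                     # next boot: -u leaves the newer host file alone
cat to_host/app/file.txt                 # prints "host edit"
```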

Dockerfile update section:

ADD entrypoint.sh /usr/bin/entrypoint.sh
RUN chmod +x /usr/bin/entrypoint.sh
WORKDIR /
RUN mkdir -p to_host
# the code is copied into /to_host after the container boots up
ENTRYPOINT [ "/usr/bin/entrypoint.sh" ]
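Running the updated image then looks like this (a sketch assuming the host directory and the node-test image name from above):

```shell
# bind-mount the host directory over /to_host; the entrypoint
# copies the freshly cloned repo into it without clobbering host edits
docker run -p 3000:3000 \
  -v /home/adiii/Desktop/container_Data:/to_host \
  --rm -it node-test
```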

1: Using a Docker volume

Create a volume named code:

docker volume create code 

Now run the container, mounting this volume:

docker run -p 3000:3000 -v code:/data/app --rm -it node-test ash

Now terminate or stop the container; the data is still preserved in the volume.

If the host OS is Linux, you can find it at:

/var/lib/docker/volumes/code/_data 

You will see three entries:

app.js node_modules package.json 
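Instead of hard-coding that path, you can ask Docker for the volume's mountpoint (assuming the volume is named code as above):

```shell
# print the host directory backing the named volume
docker volume inspect code --format '{{ .Mountpoint }}'
```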

2: Using a bash script (see the comments in the script)

#!/bin/bash
image_name=node-test
container_name=git_code

# for the first run, pass the argument first_time
if [ "$1" == "first_time" ]; then
    # remove the container if it exists
    docker rm -f $container_name
    # run the container for the first time to copy the code
    docker run --name $container_name -dit $image_name ash
fi

# check if the container is running
if docker inspect -f '{{.State.Running}}' $container_name; then
    # copy the code from the container to /home/adil/Desktop
    docker cp $container_name:/data/app /home/adil/Desktop/app
fi

# for a normal run, pass the argument run
if [ "$1" == "run" ]; then
    # remove the old container if it is running
    docker rm -f $container_name
    docker run --name $container_name -v /home/adil/Desktop/app:/data/app -dit $image_name
fi

Now run a shell inside the container:

docker exec -it git_code ash 


3: Mounting an empty host directory onto the container's code directory at runtime. The next time you run with the mounted directory, it will contain your updated code, including any changes made from the host OS. But check the permissions of that directory; after the container runs and terminates the data will still be there, though the behaviour of this method is not consistent.

docker run -p 3000:3000 -v /home/adiii/code/app:/data/app --rm -it node-test ash 

Here /home/adiii/code/app is an empty directory on the host; after the container terminates it still contains the cloned code, but as noted, this behaviour varies.


3 Comments

Thanks! Need some clarifications: in method #1, creating and mounting a host volume will ensure that the cloned repo is made available to the host for editing. But after editing, the changes won't be reflected the next time we build the container, because it will again do a fresh git clone of the repo. I believe volume mounts provide a one-way flow from the container to the host, not vice versa.
Method #2 looks more promising. The first time we run the Dockerfile, using the "first_time" bash argument, we do the git clone. For all subsequent runs we do a normal run and mount our edited/updated host volume onto the mapped container folder. Is my understanding right? Thanks again for your detailed response. :)
Yes, method 2 does what you expect, and method 1 will override whatever is in the container if you map the host folder. Please accept and upvote if it works.

I can suggest the following solution; it is slightly different from the one proposed above, and maybe it will help someone save time. First of all, I must say this method doesn't work well on Windows, due to the difference in file systems.

  1. We need to mount the volume; I do it with a docker-compose.yml

docker-compose.yml

...
app:
  build:
    context: ./app
    dockerfile: Dockerfile
  image: app
  volumes:
    - './app/src:/var/www/myuser/data/www/site.com'
...
  2. The solution of cloning the repository on the host machine seemed bad to me; in this case Docker is pointless, because my host machine may not have Git, or may not have a suitable version of Node.js.
    So we need to clone the git repo in the container, but we can't do it in the Dockerfile, because the volume is mounted during container startup, after the Dockerfile has been executed, which would delete our repo.
    Therefore, we will clone the repo in a bash script that is executed after the container starts.

Dockerfile

FROM ubuntu:latest

# Set working directory
WORKDIR /var/www/myuser/data/www/site.com
...
# Some work like
RUN apt-get update && apt-get install -y nginx sudo
...
# Add new user
RUN groupadd -g 1000 myuser
RUN useradd -u 1000 -ms /bin/bash -g myuser myuser
RUN chown -R myuser:myuser /var/www/myuser/data/www/site.com
...
# Before starting the container, we had to run `ssh-keygen` and copy the public key (id_rsa.pub) to our Bitbucket (GitHub/GitLab)
COPY --chown=myuser:myuser .ssh /home/myuser/.ssh
RUN chmod 744 /home/myuser/.ssh/id_rsa.pub
RUN chmod 700 /home/myuser/.ssh/id_rsa
RUN ssh-keyscan -t rsa bitbucket.org > /home/myuser/.ssh/known_hosts
# Now, copy our bash script which I will describe below
COPY --chown=myuser:myuser startup.sh /home/myuser/
RUN chmod +x /home/myuser/startup.sh
# We'll run our script as `myuser`
CMD service nginx start && sudo -H -u myuser bash /home/myuser/startup.sh

startup.sh

#!/bin/bash
cd /var/www/myuser/data/www/site.com/

# Git
if ! git ls-files >& /dev/null; then
    git clone [email protected]/site.com.git . --progress 2>&1
fi

npm install
npm run watch
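The git ls-files guard above only triggers a clone when the directory is not yet a working tree; the check itself can be exercised without the container (a sketch in a throwaway directory, requiring only git on PATH):

```shell
#!/bin/bash
set -e
tmp=$(mktemp -d); cd "$tmp"

# outside a repository the guard fails, so startup.sh would clone
if ! git ls-files >& /dev/null; then echo "would clone"; fi

git init -q .

# inside a repository the guard passes, so the clone is skipped
if ! git ls-files >& /dev/null; then echo "would clone"; else echo "skip clone"; fi
```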

In this case we build the app in a proper environment and can work with the app directory both ways.
