
I want to correctly share directories between the host and a Docker container, where the same UID and GID are used on both systems. For this, I wanted to pass them to Docker as variables, e.g. MY_UID=$(id -u).

We are considering the case where .env has a row with a reference to some variable, like MY_UID=$(id -u), not a literal value like MY_UID=1000.

The variables are correctly declared and exported as environment variables, and this works at the docker-compose.yml level, but docker-compose does not pass them further to the Dockerfile(s).

I've tried so far in docker-compose.yml:

  • env_file field
  • environment field

together with, in bash:

  • exporting variables, like:
    • cat .env | envsubst | docker-compose -f - ...
    • VAR=123 docker-compose ...

or the -e option for docker-compose.

The entire route for the given variables is export > docker-compose > dockerfile.
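The attempted flow can be sketched as follows (names are from the question; the comments describe the observed behavior, not a working solution):

```shell
# 1. export on the host
export MY_UID=$(id -u) MY_GID=$(id -g)
# 2. docker-compose interpolates ${MY_UID} inside its own YAML...
docker-compose up -d --build
# 3. ...but does not forward it into the Dockerfile's build context
```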


For testing purposes the Dockerfile is as simple as this (at this level the variables don't work!):

FROM heroku/heroku:18 AS production
RUN useradd -ms /usr/bin/fish -p $(openssl passwd -1 django) --uid "$MY_UID" --gid "$MY_GID" -r

and $MY_UID is empty when using docker-compose.


docker-compose.yml (at this level the variables work!):

version: '3.7'
networks: {}
services:
  django:
    build:
      context: ${MY_DIR}
      dockerfile: ${COMPOSE_DIR}/django/Dockerfile
    env_file:
      - .env
    environment:
      - MY_UID=${MY_UID}
    volumes:
      - ${MY_DIR}:/app:rw

docker-compose config returns MY_GID: $$(id -u) when .env contains MY_GID=$(id -u).


I wanted to avoid the workaround like this:

source .env && cat template_Dockerfile | envsubst > Dockerfile

or

source .env && cat .env | envsubst > .env_for_dockercompose

  • This isn't something you can reliably set in the Dockerfile. On my system id -u tells me my uid is 501; does this mean I won't be able to use your image if you built it on a system where your host uid is 1000? Commented Feb 26, 2019 at 12:47
  • @David Maze: that's exactly why I want to pass it as a variable (an instruction), not a raw value. FYI: envsubst run on the target machine would do the job, but it complicates the process of building containers. Commented Feb 26, 2019 at 12:52

2 Answers


Regarding the proper way to pass values from docker-compose (or just from the CLI) to a Dockerfile, I guess you need to add some ARG directives, for example:

FROM heroku/heroku:18 AS production
ARG MY_UID="...default UID..."
ARG MY_GID="...default GID..."
RUN useradd -ms /usr/bin/fish -p $(openssl passwd -1 django) --uid "$MY_UID" --gid "$MY_GID" -r

Then to test it:

$ docker build --build-arg=MY_UID="1000" --build-arg=MY_GID="1000" -t test . 

I use a similar approach in the Dockerfile of coqorg/base (which is based on Debian).
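For completeness, when building through docker-compose, the same build args can be forwarded from the shell environment via the args key under build (a sketch reusing the service name from the question):

```yaml
services:
  django:
    build:
      context: ${MY_DIR}
      args:
        MY_UID: ${MY_UID}   # interpolated by docker-compose from the shell environment,
        MY_GID: ${MY_GID}   # then available to the matching ARG in the Dockerfile
```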

However, if you are especially interested in passing these variables to ensure that the UID/GID match, note that another approach is possible, with the additional benefit of making your image compatible with several hosts using different UIDs/GIDs. It is described in this SO answer by @BMitch, which proposes fixing the permissions of a container's directory at startup time.
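A minimal sketch of that startup-time technique (the entrypoint name, the django user, and the use of gosu are assumptions for illustration, not BMitch's exact script):

```shell
#!/bin/sh
# entrypoint.sh (hypothetical): align the in-container user with the
# owner of the bind-mounted directory, then drop privileges.
set -e
APP_DIR=/app
# -o allows a non-unique UID/GID, so this works even if the IDs collide
usermod  -o -u "$(stat -c '%u' "$APP_DIR")" django
groupmod -o -g "$(stat -c '%g' "$APP_DIR")" django
chown -R django:django /home/django
exec gosu django "$@"
```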


1 Comment

If docker-compose is used, adding args: - MY_GID="1000" in docker-compose.yml allows you to pass the value.

A solution for UID/GID (accounting for the fact that different hosts, and thus developers, have different UIDs)

You can approach UID/GID by using two images (the first = main and public, the second = tiny and private). This way the majority of the environment is cached and shared with others via the preconfigured first image, called base_image.

Then the instructions for the final image are as simple as:

FROM base_image
ARG UFG_GID
RUN groupadd -r django -g ${UFG_GID} ...

base_image is given to everyone, but the final image, as a continuation holding the part with per-host UID/GID, is intended to be built by each developer themselves. This is no problem, since such an operation is very cheap.

This way no extra script is required, but the final image cannot be shared with others, to avoid issues with differing UIDs/GIDs.

If you want one image ready to use by everyone, an additional script fixing the user's permissions is required - see the ErikiMD answer.
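The resulting build flow for the two-image approach could look like this (file and tag names are made up for illustration):

```shell
# Built once, pushed and shared with everyone:
docker build -f Dockerfile.base -t base_image .
# Cheap per-host rebuild of the tiny final image:
docker build -f Dockerfile.final \
  --build-arg UFG_UID=$(id -u) --build-arg UFG_GID=$(id -g) \
  -t django_app .
```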


A solution for full control of how variables must be filled (from a host or a container).

Instructions:

  • preparing templates for docker, docker-compose
    • Dockerfile_template
    • docker-compose.yml_template
  • rendering templates into the right docker files

and this way, with a template like the one below, you can fully control which variable will be substituted at each level (host, container):

...
ENV PATH="/pyenv/bin:${DOLLAR}PATH"
...
RUN ... ${MY_VAR}...
...

Note:

use DOLLAR='$' and escape $PATH as ${DOLLAR}PATH to avoid replacing $PATH with the value from the "wrong" host.

Meaning of that:

  • $MY_VAR will be filled while rendering the actual docker file, on the main host
  • $PATH will be filled later (it is escaped while rendering the template into the docker file)

The docker files are rendered this way:

source .env && \
cat $COMPOSE_DIR/django/Dockerfile_template | DOLLAR='$' envsubst > $COMPOSE_DIR/django/Dockerfile && \
docker-compose up -d --build

which generates a preconfigured Dockerfile, replacing some variables there with their values.

  • .env keeps all the variables
  • envsubst substitutes environment variables in shell format strings

The same goes for docker-compose.yml_template (if needed).

2 Comments

Have you tested the two approaches suggested in my answer? Your solution seems more complex to implement…
OK :) but leaving aside the fix-perms script, assuming you have these two Dockerfiles, wouldn't it be simpler to tell the end users to just run docker build --build-arg=MY_UID=$(id -u) ... with a ready-to-use Dockerfile (thereby involving the ARG command I mentioned), rather than telling them to write the second Dockerfile themselves?
