1703

I have built a Docker image from a Dockerfile using the command below.

$ docker build -t u12_core -f u12_core . 

When I try to rebuild it with the same command, it uses the build cache:

Step 1 : FROM ubuntu:12.04 ---> eb965dfb09d2
Step 2 : MAINTAINER Pavan Gupta <[email protected]> ---> Using cache ---> 4354ccf9dcd8
Step 3 : RUN apt-get update ---> Using cache ---> bcbca2fcf204
Step 4 : RUN apt-get install -y openjdk-7-jdk ---> Using cache ---> 103f1a261d44
Step 5 : RUN apt-get install -y openssh-server ---> Using cache ---> dde41f8d0904
Step 6 : RUN apt-get install -y git-core ---> Using cache ---> 9be002f08b6a
Step 7 : RUN apt-get install -y build-essential ---> Using cache ---> a752fd73a698
Step 8 : RUN apt-get install -y logrotate ---> Using cache ---> 93bca09b509d
Step 9 : RUN apt-get install -y lsb-release ---> Using cache ---> fd4d10cf18bc
Step 10 : RUN mkdir /var/run/sshd ---> Using cache ---> 63b4ecc39ff0
Step 11 : RUN echo 'root:root' | chpasswd ---> Using cache ---> 9532e31518a6
Step 12 : RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config ---> Using cache ---> 47d1660bd544
Step 13 : RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd ---> Using cache ---> d1f97f1c52f7
Step 14 : RUN wget -O aerospike.tgz 'http://aerospike.com/download/server/latest/artifact/ubuntu12' ---> Using cache ---> bd7dde7a98b9
Step 15 : RUN tar -xvf aerospike.tgz ---> Using cache ---> 54adaa09921f
Step 16 : RUN dpkg -i aerospike-server-community-*/*.deb ---> Using cache ---> 11aba013eea5
Step 17 : EXPOSE 22 3000 3001 3002 3003 ---> Using cache ---> e33aaa78a931
Step 18 : CMD /usr/sbin/sshd -D ---> Using cache ---> 25f5fe70fa84
Successfully built 25f5fe70fa84

The cache shows that Aerospike is installed, but I don't find it inside containers spawned from this image, so I want to rebuild the image without using the cache. How can I force Docker to rebuild a clean image without the cache?

5 Comments
  • 32
    As an aside, you should generally try to minimize the number of RUN directives. Commented Sep 27, 2017 at 11:29
  • 29
    @Ya. It used to be that Docker always created a separate layer for each RUN directive, so a Dockerfile with many RUN directives would consume ginormous amounts of disk space; but this has apparently been improved somewhat in recent versions. Commented Feb 20, 2019 at 17:02
  • 1
    When I try docker-compose up -d, where can I use --no-cache? Commented Jan 29, 2020 at 1:56
  • 13
    @O.o that's not possible. You first have to do docker-compose build --no-cache and then docker-compose up -d Commented May 5, 2020 at 8:40
  • At the end of the day, I was being dumb with the --volume option. I was using the wrong path the entire time, thinking the old one was being cached Commented Apr 15, 2021 at 3:54

10 Answers

2759

There's a --no-cache option:

docker build --no-cache -t u12_core -f u12_core . 

In older versions of Docker you needed to pass --no-cache=true, but this is no longer the case.
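For reference, on those older versions the same build would have looked like this (the explicit boolean form is still accepted):

docker build --no-cache=true -t u12_core -f u12_core .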


8 Comments

Also note that --no-cache works with docker-compose build.
You might also want to use --pull. This will tell docker to get the latest version of the base image. This is necessary in addition to --no-cache if you already have the base image (ex: ubuntu/latest) and the base image has been updated since you last pulled it. See the docs here.
@CollinKrawll: The --pull option did the trick for me. Just --no-cache, build still broke. Put in --pull as well, build worked! Thank you!
If someone is calling docker build, isn't it assumed that they want to rebuild without the cache? In what use case would someone want to build an image and use a previously built image? <rant> I just lost a day because an earlier build failed silently yet completed "successfully", and I was using the broken image without understanding why updates to the build script weren't working </rant>
@Jeff When you're developing a docker image, docker build will only redo layers/steps that have been modified. If I have five steps, and I add a new step at index 3, the layers associated with step 1 and 2 can be re-used. This greatly speeds up the development process
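To illustrate the layer reuse described in the comment above, here is a minimal, hypothetical Dockerfile sketch (the packages are arbitrary); Docker replays cached layers until it hits the first changed instruction and re-executes everything from there:

# unchanged instructions are served from the build cache
FROM ubuntu:12.04
RUN apt-get update
# a newly inserted instruction causes a cache miss here...
RUN apt-get install -y wget
# ...so this and every later instruction is re-executed, even though it is unchanged
RUN mkdir /var/run/sshd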
278

In some extreme cases, your only way around recurring build failures is by running:

docker system prune 

The command will ask you for your confirmation:

WARNING! This will remove:
 - all stopped containers
 - all volumes not used by at least one container
 - all networks not used by at least one container
 - all images without at least one container associated to them
Are you sure you want to continue? [y/N]

This is of course not a direct answer to the question, but might save some lives... It did save mine.

7 Comments

adding -a -f makes it better
@IulianOnofrei Works for me, Docker version 17.09.0-ce, build afdb6d4
This is way overkill for this scenario and not a usable answer if you do not want to delete everything.
This will even delete the images of stopped containers, probably something you do not want. Recent versions of docker have the command docker builder prune to clear the cached build layers. Just fell into the trap after blindly copying commands from stack overflow.
This doesn't even work as a solution to the problem.
222

To ensure that your build is completely rebuilt, including checking the base image for updates, use the following options when building:

--no-cache - This will force a rebuild of layers that are already available in the cache.

--pull - This will trigger a pull of the base image referenced in FROM, ensuring you get the latest version.

The full command will therefore look like this:

docker build --pull --no-cache --tag myimage:version . 

Same options are available for docker-compose:

docker-compose build --no-cache --pull 

Note that if your docker-compose file references an image, the --pull option will not actually pull the image if there is one already.

To force docker-compose to re-pull this, you can run:

docker-compose pull 

2 Comments

Curiously this didn't work for me!
83

The command docker build --no-cache . solved our similar problem.

Our Dockerfile was:

RUN apt-get update
RUN apt-get -y install php5-fpm

But should have been:

RUN apt-get update && apt-get -y install php5-fpm 

This prevents the update and the install from being cached as separate layers.

See: Best practices for writing Dockerfiles
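For completeness, a minimal sketch of how the combined instruction might look together with the apt cleanup mentioned in the comments below (assuming an Ubuntu base image purely for illustration):

FROM ubuntu:14.04
# update, install and clean up in one layer so the package index
# can never be cached independently of the install step
RUN apt-get update \
 && apt-get -y install php5-fpm \
 && rm -rf /var/lib/apt/lists/*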

3 Comments

The "should have been" is misleading. If Docker sees that it has a cached copy of RUN apt-get update && apt-get -y install php5-fpm you would still see it get reused with the old contents.
Actually it still makes sense to join them, because otherwise if you change the installation line, it will still use the old package cache, which will often have problems if the cache is out of date (usually, files will 404.)
But should have been: RUN apt-get update && apt-get -y install php5-fpm && rm -rf /var/lib/apt/lists/* In fact, it is best practice to clean up the apt/lists files before the end of the RUN
71

Most of the information here is correct.
Here is a compilation of it, and the way I use it.

The idea is to stick to the recommended approach (build-specific, with no impact on other stored Docker objects) and to try the more radical approach (not build-specific, with impact on other stored Docker objects) only when that is not enough.

Recommended approach:

1) Force the execution of each step/instruction in the Dockerfile:

docker build --no-cache 

or with docker-compose build:

docker-compose build --no-cache 

We can also combine that with the up sub-command, which recreates all containers:

docker-compose build --no-cache && docker-compose up -d --force-recreate 

These commands do not use the cache, except for the builder cache and the base image referenced by the FROM instruction.

2) Wipe the Docker builder cache (if we use BuildKit, we very probably need this):

docker builder prune -af 

3) If we don't want to use the cache of the parent images, we can delete them, for example:

docker image rm -f fooParentImage 

In most cases, these three steps are enough to allow a clean build of our image.
So we should try to stick to that.

More radical approach:

In corner cases where some objects in the Docker cache still seem to be used during the build, and this looks repeatable, we should try to understand the cause so we can wipe the missing part very specifically. If we really can't find a way to rebuild from scratch, there are other options, but it is important to remember that they generally delete much more than is required. So we should use them with caution, especially when we are not in a local/dev environment.

1) Remove all images without at least one container associated to them:

docker image prune -a 

2) Remove many more things:

docker system prune -a 

That says:

WARNING! This will remove:
 - all stopped containers
 - all networks not used by at least one container
 - all images without at least one container associated to them
 - all build cache

Using that super-delete command may not be enough, because it strongly depends on the state of the containers (running or not). When that command is not enough, I try to think carefully about which Docker containers could cause side effects on the build, and I let those containers exit so that the command can remove them as well.
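As a sketch of that last step (the container names here are hypothetical placeholders):

# let the suspect containers exit, or stop them explicitly
docker stop suspect_container_1 suspect_container_2
docker rm suspect_container_1 suspect_container_2
# now the prune can also reclaim the objects they were keeping alive
docker system prune -a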

4 Comments

docker image prune (without -a) is friendlier and won't nuke all your images you might want.
Any ideas why I am in an edge case? :( Can give more details as required - but under what circumstances could they occur?
docker builder prune -af deprecated.
40

With docker-compose try docker-compose up -d --build --force-recreate

2 Comments

(docker-compose pull or --pull is necessary to get the base image updated beforehand)
What is the equivalent of this code with a single container Dockerfile?
18

I would not recommend using --no-cache in your case.

You are running a couple of installations from step 3 to 9 (I would, by the way, prefer a one-liner) and, if you don't want the overhead of re-running these steps each time you build your image, you can modify your Dockerfile with a temporary step prior to your wget instruction.

I usually do something like RUN ls . and change it to RUN ls ./, then RUN ls ./., and so on for each modification made to the tarball retrieved by wget.

You can of course do something like RUN echo 'test1' > test && rm test, increasing the number in 'test1' for each iteration.

It looks dirty, but as far as I know it's the most efficient way to continue benefiting from the cache system of Docker, which saves time when you have many layers...
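A minimal sketch of that trick, reusing the Aerospike steps from the question (only the throwaway echo line changes between builds):

# layers above this point stay cached
RUN apt-get update && apt-get install -y build-essential
# bump 'test1' to 'test2', 'test3', ... whenever the steps below should be re-executed
RUN echo 'test1' > test && rm test
RUN wget -O aerospike.tgz 'http://aerospike.com/download/server/latest/artifact/ubuntu12'
RUN tar -xvf aerospike.tgz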

2 Comments

The ability to be able to not use the cache after a certain point is a feature requested by many (see github.com/moby/moby/issues/1996 for alternatives for cache busting)
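One of the alternatives discussed in that issue is a build-argument cache bust; a hedged sketch (the CACHEBUST name is arbitrary, and the wget line is taken from the question):

# in the Dockerfile: referencing the ARG makes its value part of the cache key
ARG CACHEBUST=1
RUN echo "cache bust: $CACHEBUST" && wget -O aerospike.tgz 'http://aerospike.com/download/server/latest/artifact/ubuntu12'

# on the command line: pass a changing value to invalidate the cache from that point on
docker build --build-arg CACHEBUST=$(date +%s) -t u12_core -f u12_core .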
14

Sometimes docker build --no-cache, and even removing all containers and images on the system, does not clear all Docker state. In such cases you should use docker system prune to remove all unused containers, networks, images, and volumes. This removes all cached data, including any dangling images or containers. To force a fresh build, run these commands:

# 1. remove all containers
docker rm -f $(docker ps -aq)
# 2. remove all images
docker image rm $(docker images -q)
# 3. remove all unused containers, networks, images, and volumes
docker system prune

Now anything related to Docker is gone and the Docker cache is completely deleted, as if you had a fresh Docker installation.

Comments

7

You can manage the builder cache with docker builder

To clean all the cache with no prompt: docker builder prune -af

1 Comment

OP is not using buildkit
-1

GUI-driven approach: Open the Docker Desktop tool (which usually comes with Docker):

  1. under "Containers / Apps" stop all running instances of that image
  2. under "Images" remove the build image (hover over the box name to get a context menu), eventually also the underlying base image

Comments
