
I'm trying to run a Docker image that works on other systems (and you can even pull it from Docker Hub, if you'd like: it's dougbtv/asterisk). However, on my general workstation, it complains about free space when it is (apparently) untarring the image layers.

When I try to run it, I get an error stating that it's out of space. Here's an example of the run, with it complaining about space:

[root@localhost docker]# docker run -i -t dougbtv/asterisk /bin/bash
Timestamp: 2015-05-13 07:50:58.128736228 -0400 EDT
Code: System error
Message: [/usr/bin/tar -xf /var/lib/docker/tmp/70c178005ccd9cc5373faa8ff0ff9c7c7a4cf0284bd9f65bbbcc2c0d96e8565d410879741/_tmp.tar -C /var/lib/docker/devicemapper/mnt/70c178005ccd9cc5373faa8ff0ff9c7c7a4cf0284bd9f65bbbcc2c0d96e8565d/rootfs/tmp .] failed:
/usr/bin/tar: ./asterisk/utils/astdb2sqlite3: Wrote only 512 of 10240 bytes
/usr/bin/tar: ./asterisk/utils/conf2ael.c: Cannot write: No space left on device
/usr/bin/tar: ./asterisk/utils/astcanary: Cannot write: No space left on device
/usr/bin/tar: ./asterisk/utils/.astcanary.o.d: Cannot write: No space left on device
/usr/bin/tar: ./asterisk/utils/check_expr.c: Cannot write: No space left on device
[... another few hundred similar lines]

Of course, I check how much space is available, and through googling I find that this sometimes happens because you're out of inodes. So I take a look at both, and I can see that there are plenty of inodes as well.

[root@localhost docker]# df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 3.9G     0  3.9G   0% /dev
tmpfs                    3.9G   20M  3.9G   1% /dev/shm
tmpfs                    3.9G  1.2M  3.9G   1% /run
tmpfs                    3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/mapper/fedora-root   36G  9.4G   25G  28% /
tmpfs                    3.9G  5.2M  3.9G   1% /tmp
/dev/sda3                477M  164M  285M  37% /boot
/dev/mapper/fedora-home   18G  7.7G  8.9G  47% /home
tmpfs                    793M   40K  793M   1% /run/user/1000
/dev/sdb1                489G  225G  265G  46% /mnt/extradoze

[root@localhost docker]# df -i
Filesystem                 Inodes  IUsed     IFree IUse% Mounted on
devtmpfs                  1012063    585   1011478    1% /dev
tmpfs                     1015038     97   1014941    1% /dev/shm
tmpfs                     1015038    771   1014267    1% /run
tmpfs                     1015038     15   1015023    1% /sys/fs/cgroup
/dev/mapper/fedora-root   2392064 165351   2226713    7% /
tmpfs                     1015038    141   1014897    1% /tmp
/dev/sda3                  128016    429    127587    1% /boot
/dev/mapper/fedora-home   1166880 145777   1021103   13% /home
tmpfs                     1015038     39   1014999    1% /run/user/1000
/dev/sdb1               277252836 168000 277084836    1% /mnt/extradoze

And so you can see a bit of what's going on, here's my /etc/fstab:

[root@localhost docker]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Tue Mar 17 20:11:16 2015
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/fedora-root /                         ext4  defaults  1 1
UUID=2e2535da-907a-44ec-93d8-1baa73fb6696 /boot   ext4  defaults  1 2
/dev/mapper/fedora-home /home                     ext4  defaults  1 2
/dev/mapper/fedora-swap swap                      swap  defaults  0 0

Also, someone with a similar Stack Exchange question asked for the results of the lvs command, which shows:

[root@localhost docker]# lvs
  LV   VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  home fedora -wi-ao---- 17.79g
  root fedora -wi-ao---- 36.45g
  swap fedora -wi-ao----  7.77g

It's a Fedora 21 system:

[root@localhost docker]# cat /etc/redhat-release
Fedora release 21 (Twenty One)
[root@localhost docker]# uname -a
Linux localhost.localdomain 3.19.5-200.fc21.x86_64 #1 SMP Mon Apr 20 19:51:56 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

Storage driver:

[doug@localhost cs]$ sudo docker info | grep Driver:
Storage Driver: devicemapper
Execution Driver: native-0.2

Docker version:

[doug@localhost cs]$ sudo docker -v
Docker version 1.6.0, build 3eac457/1.6.0

Per this recommended article, I tried changing the Docker options in /etc/sysconfig/docker to:

OPTIONS='--selinux-enabled --storage-opt dm.loopdatasize=500GB --storage-opt dm.loopmetadatasize=10GB' 

And restarted Docker, to no avail. I have since changed it back to just --selinux-enabled (note: I have SELinux disabled).
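For the record, the restart and the check I did afterwards looked roughly like this; a sketch assuming systemd and the devicemapper driver (as far as I know, docker info only prints the "Data Space" lines for devicemapper):

sudo systemctl restart docker
# With devicemapper, docker info reports pool usage; these are the lines
# I'd expect to change if the --storage-opt values had taken effect
sudo docker info | grep 'Data Space'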

Additionally, I noticed that the article mentioned looking at the sparse data file, which looks like:

[root@localhost doug]# ls -alhs /var/lib/docker/devicemapper/devicemapper
total 3.4G
4.0K drwx------ 2 root root 4.0K Mar 20 13:37 .
4.0K drwx------ 5 root root 4.0K Mar 20 13:39 ..
3.4G -rw------- 1 root root 100G May 13 14:33 data
9.7M -rw------- 1 root root 2.0G May 13 14:33 metadata

Is it a problem that the sparse file is larger than the size of the disk?
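A quick way to compare the file's apparent size against the blocks actually allocated on disk, assuming GNU coreutils' du (paths taken from the listing above):

# Apparent size: the 100G that ls -l reports
du -h --apparent-size /var/lib/docker/devicemapper/devicemapper/data
# Actual allocated blocks: should match the 3.4G in the listing above
du -h /var/lib/docker/devicemapper/devicemapper/data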

My lsblk looks like:

[root@localhost doug]# lsblk
NAME                          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                             8:0    0 111.8G  0 disk
├─sda1                          8:1    0   100M  0 part
├─sda2                          8:2    0  49.2G  0 part
├─sda3                          8:3    0   500M  0 part /boot
├─sda4                          8:4    0     1K  0 part
└─sda5                          8:5    0    62G  0 part
  ├─fedora-swap               253:0    0   7.8G  0 lvm  [SWAP]
  ├─fedora-root               253:1    0  36.5G  0 lvm  /
  └─fedora-home               253:2    0  17.8G  0 lvm  /home
sdb                             8:16   0   1.8T  0 disk
└─sdb1                          8:17   0   489G  0 part /mnt/extradoze
loop0                           7:0    0   100G  0 loop
└─docker-253:1-1051064-pool   253:3    0   100G  0 dm
loop1                           7:1    0     2G  0 loop
└─docker-253:1-1051064-pool   253:3    0   100G  0 dm
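Those two loop devices are what back the thin pool; to confirm they map to the sparse data and metadata files shown earlier, losetup can list the backing files (assuming a reasonably recent util-linux):

# Show which backing file each loop device is attached to
losetup --list
# On older util-linux versions: losetup -a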
  • What storage driver are you using? sudo docker info|grep Driver: Commented May 13, 2015 at 13:30
  • Good question @mattdm, it's devicemapper with execution driver native-0.2. I updated my question with that info and the docker version. Commented May 13, 2015 at 13:53
  • Can you attach the output of lsblk? This blog post may help, although it's a little out of date. Commented May 13, 2015 at 17:50
  • I went and updated the question and walked through the blog post to check out a couple of things, which I also noted. Appreciate the pointer there, @mattdm. Commented May 13, 2015 at 18:38
  • Docker is extracting a tar file from /var/lib/docker/tmp/70.... to a directory under /var/lib/docker/devicemapper/.... The fact that the substring 'devicemapper' is in that path makes me think that Docker is mapping some block storage into a Docker-specific block device for use. This probably means that if you run lsblk/df and friends before or after the docker command, you'll miss the mapped device (Docker cleans up after itself?). I'd probably ctrl-z the unbundling process after the errors start popping up and start poking around with df/dm tools. Can you do that? Commented May 23, 2015 at 20:36

4 Answers


If you are using any Red Hat-based operating system, you should know that the devicemapper storage driver limits each image/container to 10 GB by default, and if you are trying to run an image that needs more than 10 GB you may get that error. That may be your issue. Try this; it worked for me:

https://docs.docker.com/engine/reference/commandline/daemon/#storage-driver-options

sudo systemctl stop docker.service

or

sudo service docker stop

Then remove the Docker data directory (take a backup of any important data first; all containers and images will be deleted):

rm -rvf /var/lib/docker

Then run this command:

docker daemon --storage-opt dm.basesize=20G 

Where "20G" refers to the new size you want the devicemapper to take, and then, restart docker

sudo systemctl start docker.service 

or

sudo service docker start 

Check that it is set by running:

docker info 
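To make the new base size survive daemon restarts rather than launching the daemon by hand, the same option can be set in the daemon configuration; a sketch assuming the Fedora/RHEL /etc/sysconfig/docker layout the question already uses:

# /etc/sysconfig/docker  (path as used in the question; adjust for your distro)
OPTIONS='--selinux-enabled --storage-opt dm.basesize=20G'

After editing, restart the daemon with systemctl as above.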

Hope this works!

  • Thank you! I will give this a shot later. I still have this machine around, however, it's now fairly out of date, so it might behoove me to update to latest Fedora & Docker. Commented Mar 10, 2016 at 19:09
  • I gave this a go, but docker refused to start up afterwards. I think "just don't use redhat" would be a simpler solution :) Commented Jul 28, 2016 at 13:31
  • sudo dockerd-current --storage-opt dm.basesize=20G... that is, docker daemon => dockerd in Red Hat. Commented Apr 18, 2018 at 17:38
  • The command docker daemon --storage-opt dm.basesize=20G does not seem to exist anymore. Commented Apr 4, 2019 at 8:02
  • docker daemon is simply dockerd now. Commented Dec 30, 2020 at 22:01

Are you by any chance trying to run a very large image? RHEL does not have native aufs support, so it uses devicemapper, and with devicemapper you only have access to 10 GB by default for your container filesystem. Check this article; it may be of help.
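If you want to see the thin pool that devicemapper carves those per-container devices out of, dmsetup can inspect it directly; a sketch using the pool name from the lsblk output in the question (dmsetup ships with the device-mapper package):

# Pool geometry: data/metadata devices and block size
sudo dmsetup table docker-253:1-1051064-pool
# Used vs. total data blocks in the pool
sudo dmsetup status docker-253:1-1051064-pool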

  • While this link may answer the question, it is better to include the essential parts of the answer here and provide the link for reference. Link-only answers can become invalid if the linked page changes. Commented Jul 17, 2015 at 4:43

Run docker system df to see where the disk usage is coming from. In my case, the build cache had maxed out my allocated disk space of 128 GB. I tried various ... prune commands and flags, and they all missed clearing the cache.

To clear the build cache, run docker builder prune. That dropped my build-cache disk usage to zero. The next build took a lot longer because it had to download 10 GB+, but after that it uses the cache again.
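For reference, the non-interactive variants; flags as I understand them in recent Docker CLI versions:

docker system df            # usage broken down by images, containers, volumes, build cache
docker builder prune -f     # clear dangling build cache without the confirmation prompt
docker builder prune -a -f  # clear ALL build cache, not just dangling entries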


Running docker system prune worked for a while, but then I had to keep increasing "Disk image size" under Preferences... > Disk, and each increase only helped until the next one.

I refuse to keep increasing the disk image size, and docker system prune was no longer reclaiming any space, but running docker volume prune has helped this time.
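For reference, the two prunes cover different things, which is (as far as I can tell) why the volume prune reclaimed space when system prune no longer did:

docker system prune            # stopped containers, unused networks, dangling images and build cache
docker volume prune            # local volumes not used by at least one container
docker system prune --volumes  # include unused volumes in the sweep (newer CLI versions)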
