
Cleaning out Docker image


dalben


My Docker image slowly fills up for no reason I can think of.  I'm assuming the docker updates come down but the old versions never get cleaned up.  I am now close to my size limit again.  What's the easiest way to clear out unused file/container bits without wiping the entire img file and starting over again? (That's what I have had to do a couple of times before.)

 

Label: none  uuid: 5143f85b-b38f-4e19-89bf-a7c6a96b1f29
Total devices 1 FS bytes used 7.08GiB
devid    1 size 10.00GiB used 9.04GiB path /dev/loop0

btrfs-progs v4.1.2
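
Is something along these lines the right idea? A rough sketch of what I was thinking of trying, assuming the docker CLI on 6.1 supports the dangling/exited filters (I haven't run it yet):

# remove containers that have exited and are no longer needed
docker rm $(docker ps -a -q -f status=exited)

# remove dangling (untagged) image layers left behind by updates
docker rmi $(docker images -q -f dangling=true)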

Link to comment


The cAdvisor container is useful for showing you container sizes, so you could at least work out what's taking up all the space.

Link to comment

This is often caused by a particular docker being configured in such a way that temporary files are created inside the container when ideally they should be written externally.  If that is what's happening, you would need to narrow down which container is responsible and fix its configuration.

 

Yep, agreed, cAdvisor is very helpful for that. It gives stats on how much space each individual container is using.
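
If cAdvisor isn't handy, the command line can show roughly the same thing, assuming your Docker build supports the -s flag on ps:

# show the size of each container's writable layer, i.e. data written inside the container
docker ps -a -s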

Link to comment

I have the same situation. I suspect this is related to the update; after updating I get a notification about it (Docker image disk utilization of 80%). I'm not sure about the exact usage before the update, but I'm sure it was much less. And it is obvious that nearly 4.5 GB of data is unaccounted for (looking at the outputs below, from cAdvisor and Settings).

 

 

    Label: none  uuid: 55aa0bad-eb36-4903-8039-a43499414a0f
    	Total devices 1 FS bytes used 5.68GiB
    	devid    1 size 10.00GiB used 9.04GiB path /dev/loop0

 

tobbenb/mkvtoolnix-gui	latest	5036ba1338dcce3e3dcf6e52	698.55 MiB	26.07.2015 22:54:48
sparklyballs/serviio	latest	18049895a6aada79820586b2	1.02 GiB	22.05.2015 14:07:02
siomiz/softethervpn	latest	3b17021377b7edb0ac7b30ab	351.19 MiB	05.08.2015 21:23:26
needo/sickrage		latest	b580bde9d271c63427fc6c19	339.51 MiB	01.08.2015 17:37:35
needo/mariadb		latest	566c91aa7b1e209ddd41e5b0	563.20 MiB	11.07.2014 14:53:52
needo/couchpotato	latest	196d8d33f934fa545f1310ef	322.36 MiB	06.05.2015 06:27:38
mobtitude/vpn-pptp	latest	1164a70c07b12e5287b06cad	202.88 MiB	01.02.2015 21:35:57
mace/openvpn-as		latest	e3724358ea4c045ae9727a8c	281.96 MiB	15.08.2015 12:19:30
hurricane/docker-btsync	free	7054a03fe9c4d8cbfba3d147	273.97 MiB	17.04.2015 20:25:42
google/cadvisor		latest	175221acbf890310cc61dc3d	19.00 MiB	02.07.2015 03:06:45
gfjardim/transmission	latest	5a0c5c6db90d5a636c807bb5	454.02 MiB	06.09.2014 07:03:03
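
I wonder if part of the gap is just btrfs chunk allocation: "used 9.04GiB" is space the filesystem has allocated to chunks, while "FS bytes used 5.68GiB" is the data actually stored. I'm considering a rebalance to hand back mostly-empty chunks -- a sketch, assuming docker.img is loop-mounted at /var/lib/docker:

# rewrite data chunks that are at most 50% full so their space can be reclaimed
btrfs balance start -dusage=50 /var/lib/docker

That wouldn't explain real growth inside the image, but it might bring the "used" figure closer to the data actually stored.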

Link to comment

Disk space after deleting docker.img and starting from scratch. I didn't add the mkvtoolnix container this time.

 

Label: none  uuid: 359cdd0a-5603-41eb-87ca-ea1ee1f616d7
Total devices 1 FS bytes used 3.34GiB
devid    1 size 10.00GiB used 6.04GiB path /dev/loop0

 

root@Tower:/var/lib# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/loop0       10G  3.7G  4.9G  43% /var/lib/docker

 

sparklyballs/serviio	latest	18049895a6aada79820586b2	1.02 GiB	22.05.2015 14:07:02
siomiz/softethervpn	latest	9fb22ed1022c9c8b1e6ef1ad	357.91 MiB	02.09.2015 22:15:26
needo/sickrage		latest	b580bde9d271c63427fc6c19	339.51 MiB	01.08.2015 17:37:35
needo/mariadb		latest	566c91aa7b1e209ddd41e5b0	563.20 MiB	11.07.2014 14:53:52
needo/couchpotato	latest	196d8d33f934fa545f1310ef	322.36 MiB	06.05.2015 06:27:38
mobtitude/vpn-pptp	latest	1164a70c07b12e5287b06cad	202.88 MiB	01.02.2015 21:35:57
mace/openvpn-as		latest	e3724358ea4c045ae9727a8c	281.96 MiB	15.08.2015 12:19:30
hurricane/docker-btsync	free	7054a03fe9c4d8cbfba3d147	273.97 MiB	17.04.2015 20:25:42
google/cadvisor		latest	175221acbf890310cc61dc3d	19.00 MiB	02.07.2015 03:06:45
gfjardim/transmission	latest	5a0c5c6db90d5a636c807bb5	454.02 MiB	06.09.2014 07:03:03

Link to comment
  • 2 weeks later...

Running unRAID 6.1.0.  I got the warning message "Docker high image disk utilization" this morning. The first warning said utilization was 72%; about an hour later a second warning said it was 73%.

 

From Settings -> Docker I see this:

 

Label: none  uuid: 1c150737-1e3b-4d8a-a83c-81e40bf7507f
Total devices 1 FS bytes used 16.49GiB
devid    1 size 25.00GiB used 19.04GiB path /dev/loop0

btrfs-progs v4.1.2

 

From cAdvisor I see this:

 

google/cadvisor			latest	175221acbf890310cc61dc3d	19.00 MiB	7/1/2015, 5:06:45 PM
gfjardim/btsync			latest	69d6ec3676409cd60299b773	283.96 MiB	3/27/2015, 5:36:30 AM
binhex/arch-moviegrabber	latest	f333212ec60ad6a58ab45984	494.22 MiB	8/7/2015, 9:14:54 AM
needo/mariadb			latest	566c91aa7b1e209ddd41e5b0	563.20 MiB	7/11/2014, 4:53:52 AM
needo/plex			latest	8906416ebf13bada755e356a	575.99 MiB	5/1/2015, 6:24:20 AM
gfjardim/logitechmediaserver	latest	465b1e79d3c88e69ab4c7cda	591.59 MiB	5/18/2015, 8:57:01 AM
binhex/arch-couchpotato		latest	7de43ac6e0dc047fbbccc125	599.44 MiB	9/9/2015, 7:24:48 AM
binhex/arch-sonarr		latest	14b30ef806943549665a558f	863.42 MiB	8/10/2015, 4:12:11 AM

 

The only thing I can think of to do is wait a few hours and then check cAdvisor again to see which container has increased in size.

 

Can anyone offer any better suggestions for diagnosing the problem?

 

Update:  I just got another warning saying that utilization is now 74%, so in 90 minutes utilization of the docker.img file increased by 1%.  The data from cAdvisor shows that all of the containers are exactly the same size as before.

 

google/cadvisor			latest	175221acbf890310cc61dc3d	19.00 MiB	7/1/2015, 5:06:45 PM
gfjardim/btsync			latest	69d6ec3676409cd60299b773	283.96 MiB	3/27/2015, 5:36:30 AM
binhex/arch-moviegrabber	latest	f333212ec60ad6a58ab45984	494.22 MiB	8/7/2015, 9:14:54 AM
needo/mariadb			latest	566c91aa7b1e209ddd41e5b0	563.20 MiB	7/11/2014, 4:53:52 AM
needo/plex			latest	8906416ebf13bada755e356a	575.99 MiB	5/1/2015, 6:24:20 AM
gfjardim/logitechmediaserver	latest	465b1e79d3c88e69ab4c7cda	591.59 MiB	5/18/2015, 8:57:01 AM
binhex/arch-couchpotato		latest	7de43ac6e0dc047fbbccc125	599.44 MiB	9/9/2015, 7:24:48 AM
binhex/arch-sonarr		latest	14b30ef806943549665a558f	863.42 MiB	8/10/2015, 4:12:11 AM

 

Settings -> Docker now shows:

 

Label: none  uuid: 1c150737-1e3b-4d8a-a83c-81e40bf7507f
Total devices 1 FS bytes used 17.00GiB
devid    1 size 25.00GiB used 19.04GiB path /dev/loop0

btrfs-progs v4.1.2

 

Can I conclude that the containers are not growing?  Is there anything else in docker.img that could be growing?

 

Update 2:  It's been about 3 hours and now utilization is at 76%.  According to cAdvisor there has been no change in size for any of the containers.  However, Settings -> Docker now shows this:

 

Label: none  uuid: 1c150737-1e3b-4d8a-a83c-81e40bf7507f
Total devices 1 FS bytes used 17.42GiB
devid    1 size 25.00GiB used 20.04GiB path /dev/loop0

btrfs-progs v4.1.2

 

Update 3:  Updated to unRAID 6.1.2.  docker.img utilization continued to increase.  When utilization hit 80%, I deleted docker.img and rebuilt it following the directions in the sticky post in this forum. After rebuilding, Settings -> Docker shows this:

 


Label: none  uuid: 1c009fb0-4cb4-4574-8ee3-3a08847d4754
Total devices 1 FS bytes used 3.28GiB
devid    1 size 25.00GiB used 6.04GiB path /dev/loop0

btrfs-progs v4.1.2

 

Although docker.img is now much less full, it is still growing, so I don't think this solved the problem.

 

Update 4:  Left things alone overnight.  In the morning, Settings -> Docker showed:

        Total devices 1 FS bytes used 5.48GiB 
        devid    1 size 25.00GiB used 8.04GiB path /dev/loop0 

 

cAdvisor says each container is the same size as it was yesterday.  Don't know what else could be growing.

 

I have now stopped each of the containers to see if utilization continues to rise even with all containers stopped.

 

Update 5:  With all containers stopped, there has been no change in utilization of docker.img for the past 2 hours. I'm going to start one container at a time, beginning with needo/plex.
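
For reference, this is roughly how I'm logging utilization while I test, assuming docker.img stays loop-mounted at /var/lib/docker:

# append a timestamped usage line every 10 minutes
while true; do echo "$(date): $(df -h /var/lib/docker | tail -1)" >> /tmp/docker-usage.log; sleep 600; done &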

Link to comment

I have the same problem.  Before updating to 6.1.2 (from 6.0.1), I ran out of space in docker.img, so I increased the size from 10 to 15 GB.  After a couple of weeks, without adding any new dockers or really doing anything, I saw that I was well over 10 GB according to the unRAID settings page.  I updated to 6.1.2, and I immediately got a notification saying docker.img was 80% full.  The next day I got 5 messages, showing 81%, 82%, 83%, 84% and 85%, all within a few hours.  There have been no changes for the last few days.

 

On top of this, different places report different utilization:

 

Docker settings shows 9.98/14 or 71% (I think):

Label: none  uuid: c9b4118d-8c62-40ee-8291-827e79ededcb
Total devices 1 FS bytes used 9.98GiB
devid    1 size 15.00GiB used 14.00GiB path /dev/loop0

btrfs-progs v4.1.2

 

The notification I got said 85%:

Event: Docker high image disk utilization
Subject: Warning [BARAD-DUR] - Docker image disk utilization of 85%
Description: Docker utilization of image file /mnt/appdisk/docker.img
Importance: warning

 

From the command line, I see that the sum of all the docker images I have is around 4.7GB, which would be 31%.  I've monitored the sizes returned by "docker images" over a few days and the sizes aren't increasing at all.

root@barad-dur:~# docker images
REPOSITORY                    TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
binhex/arch-couchpotato       latest              bda16517d8d1        3 weeks ago         614.2 MB
aptalca/docker-plexrequests   latest              7eca88a70c04        5 weeks ago         643.8 MB
google/cadvisor               latest              175221acbf89        11 weeks ago        19.92 MB
binhex/arch-sonarr            latest              b2a57e24dddf        3 months ago        1.013 GB
xamindar/syncthing            latest              70b7d6227388        3 months ago        371.2 MB
binhex/arch-delugevpn         latest              5be1c02a894c        3 months ago        930.9 MB
needo/plexwatch               latest              4052a2f57e29        4 months ago        374 MB
needo/plex                    latest              8906416ebf13        4 months ago        603.9 MB
hurricane/ubooquity           latest              a598b1e14e5d        10 months ago       528.1 MB
yujiod/minecraft-mineos       latest              ff8c61f22de6        11 months ago       604.5 MB

 

Finally, cAdvisor shows different sizes, but the same percentage as calculated above from the Docker settings UI:

11.50 GB / 16.11 GB (71%)
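
I'm going to watch the btrfs view of this as well, assuming the loop mount at /var/lib/docker -- on btrfs, allocated chunks ("used 14.00GiB") and stored data ("FS bytes used 9.98GiB") are different things, which may be part of why the numbers disagree:

# allocated vs. actually-used space, broken down by data/metadata/system
btrfs filesystem df /var/lib/docker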

Link to comment

@jimbobulator

 

Here are the docker images on my machine:

 

REPOSITORY                     TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
binhex/arch-couchpotato        latest              7de43ac6e0dc        10 days ago         628.6 MB
binhex/arch-sonarr             latest              14b30ef80694        5 weeks ago         905.4 MB
binhex/arch-moviegrabber       latest              f333212ec60a        6 weeks ago         518.2 MB
google/cadvisor                latest              175221acbf89        11 weeks ago        19.92 MB
gfjardim/logitechmediaserver   latest              465b1e79d3c8        4 months ago        615.9 MB
needo/plex                     latest              8906416ebf13        4 months ago        603.9 MB
gfjardim/btsync                latest              69d6ec367640        5 months ago        297.7 MB
needo/mariadb                  latest              566c91aa7b1e        14 months ago       590.6 MB

 

I note that we have several in common: binhex/arch-couchpotato, binhex/arch-sonarr, and needo/plex. That's the clue that made me focus on these three containers.

 

I've noticed that Sonarr seems to have a problem downloading torrents lately and the timing coincides with when I got the first message regarding utilization of the docker.img file.

 

It could be that sonarr is writing error messages related to failed torrents into a logfile that is continually growing inside docker.img. That would explain why the utilization sometimes jumps quickly and sometimes grows relatively slowly -- on days when sonarr tries to download a lot of torrents it would generate many more error log entries than on days when there is nothing on sonarr's calendar to download. It would also explain why the problem only surfaced last week.

 

Since we never commit the container with its error logs back to an image, the size reported by the "docker images" command never changes.

 

One way to fix this is to find the directory the sonarr container uses to write its log files and map that to a directory on the cache drive. I don't know enough about docker and sonarr to find the right directory, but maybe binhex could help us here.
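
One rough way to test this theory from the host while the container is running -- the container name here is just a guess, adjust it to whatever "docker ps" shows:

# list the largest directories inside the container; mapped volumes on other mounts are skipped by -x
docker exec binhex-sonarr du -xh -d 2 / 2>/dev/null | sort -h | tail -20

Anything large in that list that isn't under a mapped path like /config is living inside docker.img.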

 

It may be that a container's directories for log files should always be mapped to a directory on the cache drive to prevent them from filling up docker.img.

Link to comment

I am having the issue as well.  I just added another 10 GB to the docker image because... well, I have the space and was not in the mood to find out what is causing the docker image to keep growing.

 

Repository			Tags	ID				Virtual Size	Creation Time
sparklyballs/handbrake		latest	7912e0056bbccecfd2d1fa32	976.92 MiB	8/19/2015, 5:04:35 AM
binhex/arch-sonarr		latest	2def71ac7e52890962437e6f	950.63 MiB	9/25/2015, 4:36:42 AM
aptalca/docker-dolphin		latest	582777d0cf320f7c27969240	925.32 MiB	5/7/2015, 1:47:33 PM
binhex/arch-delugevpn		latest	f87106f2cb7e04e44b13a7cf	826.55 MiB	9/14/2015, 4:33:24 AM
gfjardim/crashplan-desktop	latest	2ecdc387fe4c464eacf2ea50	768.55 MiB	7/14/2015, 10:03:39 AM
zuhkov/guacamole		latest	931050a833a6e9ead7af9dc3	695.21 MiB	3/1/2015, 6:10:10 PM
aptalca/docker-plexrequests	latest	7eca88a70c0419a31f807787	614.01 MiB	8/9/2015, 4:45:53 AM
gfjardim/crashplan		latest	4abc20213697e28d534019b5	607.81 MiB	7/14/2015, 9:47:38 AM
binhex/arch-couchpotato		latest	7de43ac6e0dc047fbbccc125	599.44 MiB	9/9/2015, 10:24:48 AM
needo/plex			latest	8906416ebf13bada755e356a	575.94 MiB	5/1/2015, 9:24:20 AM
needo/plexwatch			latest	4052a2f57e29f199cd73b9f7	356.69 MiB	5/5/2015, 4:49:17 PM
captinsano/foldingathome	latest	124f89c1c08c11d1d083eb8f	277.06 MiB	2/23/2015, 4:45:34 PM
google/cadvisor			latest	9d2add265f7f96e8973b7678	19.11 MiB	9/23/2015, 6:12:14 PM

Link to comment

@B_Sinn3d

 

By my count, there are at least 5 of us that have this problem.  LT and the docker authors don't seem to have this issue so there must be a solution.

 

For now, I've given my docker.img 50GB and I still have to recreate it roughly once a week.  I'm looking at moving the dockers off my unRAID server until a solution is found.

Link to comment

The solution is to configure your dockers properly, more specifically your volume mappings, and to not run dockers that do constant updates.

 

My 5 docker containers use less than 1.7 GB total after an entire month. A recent addition was PyTivo, which added 1 GB to the size because it uses a different base image.
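
As an example of what I mean by a proper mapping (the paths here are illustrative, not anyone's actual template): anything the app writes to constantly -- config, database, logs, downloads -- should sit on a host path, not inside the container:

# container path -> host path
/config     ->  /mnt/cache/appdata/sonarr
/downloads  ->  /mnt/cache/downloads

If mappings like these are missing or wrong, every log line and every download lands inside docker.img instead.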

 

Link to comment


I'm confident in my docker mapping configuration.  None of my dockers (only 5 running) have any constant updating beyond the way needo-plex and needo-plexwatch update on restart, and I basically never restart them.  So I don't think this explains it.

 

The last two times I noticed a rapidly ballooning docker.img, I happened to be streaming locally from Plex.  Both times were direct play with audio transcode, relatively high-bitrate files.  I'm not sure it's linked to Plex, just a casual observation.

 

These are the dockers I'm actually running:

 

binhex-sonarr

binhex-delugevpn

needo-plex

needo-plexwatch

hurricane-ubooquity

 

As per my understanding of Docker, images are static, and when we start a container it's basically a running instance of that image.  With btrfs and qcow2, the incremental size on disk of this instance shouldn't be significant unless it diverges significantly from the image, and that shouldn't be happening.  Is there a way to check the size on disk of a running container instance vs. the image sizes found by running "docker images"?
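
The best I've come up with so far for checking that, assuming the btrfs storage driver and the default /var/lib/docker location (note that du double-counts data shared between layers):

# every image layer and every container's writable layer is a btrfs subvolume
du -sh /var/lib/docker/btrfs/subvolumes/* 2>/dev/null | sort -h | tail -15

"docker ps -a -s" also prints a per-container size column, which should roughly correspond to the writable-layer subvolumes that keep growing.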

 

Link to comment

I think the problem may be with the Plex dockers. JonP wrote a post about moving the transcode directory to RAM:  http://lime-technology.com/forum/index.php?topic=37553.0

 

JonP's post got me thinking about where my Plex docker stores its transcode files if I don't point /transcode to RAM as JonP suggests. If they are stored in docker.img, that would explain why utilization of docker.img can sometimes jump by a gigabyte a day and at other times grow much more slowly: utilization jumps whenever someone is watching something that Plex needs to transcode.

 

I'm going to follow JonP's advice and have /transcode point to /tmp and see if that stops the increasing utilization of docker.img.
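
For anyone following along, the change is roughly this (my paths; see JonP's post for the exact steps):

# new volume mapping added to the Plex container (unRAID's /tmp lives in RAM)
/transcode  ->  /tmp

# then, in Plex's server settings, set the transcoder temporary directory to:
/transcode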

 

To the others dealing with the same issue:

 

1) Are you running a plex docker?

2) If yes, do you let plex transcode your media, or does plex always just play the file directly?

 

 

Link to comment

Actually, I think I may agree with you about Plex. The times I was getting the notifications about image space utilization were during Plex streaming, but it was an external stream.  When the streaming stopped, the message said it had returned to normal.

 

I am pretty sure I already transcode to RAM but will need to confirm later tonight.

Link to comment

Could be Plex - that seems plausible.  My Plex streaming is local and video usually isn't transcoded, except for some x265 stuff recently.  Audio sometimes is (only DTS, I think).

 

Assuming this is the problem, I have no interest in dedicating RAM to transcoding;  I think it's a waste when I have a perfectly good SSD with lots of space.  If/when I get some time I'll dig in and see if I can track the disk usage inside the docker...

 

 

Link to comment

@jimbobulator

 

If this turns out to be the problem, I think you could redirect /transcode to your cache drive instead. My theory is that if you don't redirect /transcode to someplace outside the container, the container starts to use up space in docker.img.  From JonP's post, there are two changes needed to make the redirect: the first is on the Docker settings page, and the second is made through Plex's web interface, changing its server settings to use /transcode.
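
A quick way to double-check what a running container actually has mapped, assuming the container is named "plex" (the JSON key is "Volumes" or "Mounts" depending on the Docker version):

# show the container's volume mappings as Docker sees them
docker inspect plex | grep -A 10 -E '"(Volumes|Mounts)"'

If /transcode isn't in that list, everything Plex transcodes is being written inside docker.img.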

Link to comment

@ikosa

 

I used cAdvisor as well. Like you, the size of my containers never changes. I conclude that either cAdvisor is reporting the size of the images -- which AFAIK should not change -- or there is something in docker.img besides the containers that is growing.  AFAIK Lime has not said whether there is supposed to be anything else in docker.img; perhaps they could.
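
If Lime doesn't chime in, one way to answer that empirically, assuming docker.img is loop-mounted at /var/lib/docker:

# break docker.img down by Docker's own top-level directories
du -shx /var/lib/docker/* 2>/dev/null

Besides the image and container layers, there are directories for per-container metadata and JSON logs (containers/), volumes, and temporary files; whichever one keeps growing between checks is the thing that cAdvisor doesn't count as a container size.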

Link to comment
