unRAID Server Release 6.0-beta8-x86_64 Available


limetech


I think we need to take a fresh look at the upgrade instructions; whilst they are complete, the order is not as clear as it could be.

 

Also, as predicted, people are getting confused between docker containers, XML templates and appdata config files. We should recommend a solution that covers them all; if people want to do something different they can, but at least by default it is clear what the recommended implementation is.

Link to comment

All seems to be working great, just re-downloading images now. Thanks!

 

I wasn't 100% sure of the update order, but I did the following:

 

Deleted the docker plg from the Plugins page.

Updated unRAID to beta8 and rebooted.

Ran btrfs subvolume delete.

Started Docker and added the new image path.

Ran cp to move the existing templates.

Deleted /boot/config/plugins/dockerMan.plg, /boot/config/plugins/Docker-startup.plg and /boot/config/plugins/dockerMan-2014.08.28.tar.gz.
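
For anyone who wants to see that sequence as commands, here is a rough shell sketch of the same steps. Treat every path as an example only (old docker data is assumed to live at /mnt/cache/docker, and the template locations are made up), so adjust to your setup before running anything:

# 1. Delete the docker plg from the webGui Plugins page, then
# 2. upgrade unRAID to beta8 and reboot.

# 3. Remove the old btrfs subvolumes left over from the previous docker setup
#    (example path - point it at wherever your old docker directory was)
btrfs subvolume delete /mnt/cache/docker/btrfs/subvolumes/*

# 4. In the webGui, set the new Docker image path and size, then click Start.

# 5. Copy your existing my-* templates to the new template location
#    (both paths are placeholders - use the locations given in the updated OP)
cp /mnt/cache/old-templates/my-* /boot/config/plugins/dockerMan/templates-user/

# 6. Clean up the leftover plugin files on the flash drive
rm /boot/config/plugins/dockerMan.plg \
   /boot/config/plugins/Docker-startup.plg \
   /boot/config/plugins/dockerMan-2014.08.28.tar.gz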

 

I have to agree with this post. If you read the upgrade OP, it isn't actually clear what the correct order is. I suggest the above is validated and the OP updated ASAP to avoid any problems.

 

Nice work.

 

Hmm, I didn't have this problem.  I didn't have a Docker-startup.plg or tar.gz file to delete in my setup.

 

Ideally we could script this, but whenever a "delete *" command is involved, I'd rather see the user type it in by hand before it's executed ;-).

 

So, to speed this along: what exactly do you want the instructions rewritten to say? I'm unfamiliar with these two files that you had to remove in addition to our instructions...

Link to comment

I think we need to take a fresh look at the upgrade instructions; whilst they are complete, the order is not as clear as it could be.

 

Also, as predicted, people are getting confused between docker containers, XML templates and appdata config files. We should recommend a solution that covers them all; if people want to do something different they can, but at least by default it is clear what the recommended implementation is.

 

I updated the first post in the thread with clearer instructions.

Link to comment

I have a test server with 2 disks both formatted with ReiserFS.  There is no parity or cache drive.  I set up docker to /mnt/disk1/docker/docker.img of 10 GB and had a lot of trouble getting it to start up.  After several re-boots while I was working on something else, docker finally started working.  It also doesn't appear to be shutting down when the array is stopped.  I had to set up a script to stop docker with "/etc/rc.d/rc.docker stop" so the array would stop cleanly with powerdown 2.07.
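
For reference, the stop script I mean can be as small as the sketch below. Where you hook it in (powerdown's script mechanism, the 'go' file, cron, etc.) depends on your setup, so the install location is up to you - the script itself just defers to the stock rc script:

#!/bin/bash
# Stop docker before the array is taken offline, so the array can stop cleanly.
# Hook this into powerdown (or whatever shutdown mechanism you use) yourself;
# it only calls the rc script that ships with beta8.
if [ -x /etc/rc.d/rc.docker ]; then
    /etc/rc.d/rc.docker stop
fi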

Link to comment

I think we need to take a fresh look at the upgrade instructions; whilst they are complete, the order is not as clear as it could be.

 

Also, as predicted, people are getting confused between docker containers, XML templates and appdata config files. We should recommend a solution that covers them all; if people want to do something different they can, but at least by default it is clear what the recommended implementation is.

 

I agree. I'm still confused about what the benefit of using a loopback image is. The documentation gives backups as an example, but as an end-user who just downloads docker images, I couldn't care less whether those are backed up. I can download them again. I care about my config files, databases, etc. If it is intended that the container data's host path should be within this image as well, then that makes sense. If not, I don't see the point of these changes (might just be my limited knowledge of docker).

 

In the end though, it still works fine. If someone can figure out a docker auto-start order system, we'd be all set.

Link to comment

I'm still confused about what the benefit of using a loopback image is. The documentation gives backups as an example, but as an end-user who just downloads containers, I couldn't care less whether those are backed up. I can download them again. I care about my config files, databases, etc. If it is intended that the container data's host path should be within this image as well, then that makes sense. If not, I don't see the point of these changes (might just be my limited knowledge of docker).

 

In the end though, it still works fine. If someone can figure out a docker auto-start order system, we'd be all set.

 

Having loopback mounted volumes for docker solves some problems and provides some benefits:

 

- The internal directory structure of docker images, which are implemented as btrfs snapshots, is hidden from the "shares" system.  This means, for example, that the 'New Permissions' script does not descend into all those directories, because it doesn't see them.  They are also hidden from access via network shares - I think this is a benefit.

 

- btrfs is still a real pain to manage in some circumstances.  For example, try moving a directory that contains subvolumes to another partition while preserving the subvolumes! Very difficult.  The 'standard' unix tools: cp, mv, etc. simply don't work (well, they work, but your 4GB docker directory balloons to 30-40-50GB).  By isolating Docker in its own volume image file, it is easier to move around onto other devices.

 

- Docker is very sensitive to free space, meaning it tends to fail rather badly when it runs out.  By using an image file you can simply enlarge it.  If the current device it's on has no more free space, you can simply move your image file to a different device.

 

- You can duplicate your image file for testing very easily - just make a copy.

 

- Sure you can re-download all your images, but maybe you don't have an internet connection when you need to do it.  Or maybe you just don't want to spend the time doing it.

 

- You do not have to have a btrfs-formatted device in your server now to support Docker.  In my experience, the multi-device features of btrfs are the ones really still "experimental" - single device btrfs is very stable.

 

The "Docker way" seems to be to keep appdata in separate volumes, and there are a lot of good reasons to do that.  But having the image file gives you flexibility to keep appdata, or parts of it, inside the image file.  The ability to easily back it up/move it makes this more feasible.

Link to comment

After adding my 7th docker app and adding a new share, I lost emhttp.

 

I was adding a docker app through the GUI (a lazylibrarian repo); it started fine, I created a new share, then emhttp died shortly after. All my docker containers were still working normally.

 

Shut down, restarted, no effect. Shut down again, pulled the USB stick and checked it for errors: no problems found, but still no emhttp.

 

 

First reboot:

Sep  1 19:58:52 Media kernel: br0: port 1(eth0) entered forwarding state
Sep  1 19:59:04 Media emhttp: read_line: read_line: input line too long
Sep  1 19:59:16 Media last message repeated 27 times
Sep  1 19:59:20 Media in.telnetd[1483]: connect from 192.168.1.5 (192.168.1.5)
Sep  1 19:59:22 Media login[1484]: ROOT LOGIN  on '/dev/pts/0' from 'my-pc'
Sep  1 20:00:03 Media emhttp: read_line: read_line: input line too long
Sep  1 20:00:45 Media last message repeated 3 times
Sep  1 20:00:47 Media last message repeated 11 times
Sep  1 20:03:06 PookieMedia kernel: ip_tables: (C) 2000-2006 Netfilter Core Team

 

Second reboot:

Sep  1 20:15:07 Media avahi-daemon[1490]: Service "Media" (/etc/avahi/services/ssh.service) successfully established.
Sep  1 20:15:07 Media avahi-daemon[1490]: Service "Media" (/etc/avahi/services/smb.service) successfully established.
Sep  1 20:15:07 Media avahi-daemon[1490]: Service "Media" (/etc/avahi/services/sftp-ssh.service) successfully established.
Sep  1 20:15:21 Media kernel: br0: port 1(eth0) entered learning state
Sep  1 20:15:36 Media kernel: br0: topology change detected, propagating
Sep  1 20:15:36 Media kernel: br0: port 1(eth0) entered forwarding state
Sep  1 20:16:06 Media emhttp: read_line: read_line: input line too long
Sep  1 20:16:10 Media last message repeated 9 times

Link to comment

And after adding my 7th docker app, I lost emhttp. I've rebooted, etc, no dice.

 

Sep  1 19:58:52 Media kernel: br0: port 1(eth0) entered forwarding state
Sep  1 19:59:04 Media emhttp: read_line: read_line: input line too long
Sep  1 19:59:16 Media last message repeated 27 times
Sep  1 19:59:20 Media in.telnetd[1483]: connect from 192.168.1.5 (192.168.1.5)
Sep  1 19:59:22 Media login[1484]: ROOT LOGIN  on '/dev/pts/0' from 'my-pc'
Sep  1 20:00:03 Media emhttp: read_line: read_line: input line too long
Sep  1 20:00:45 Media last message repeated 3 times
Sep  1 20:00:47 Media last message repeated 11 times
Sep  1 20:03:06 PookieMedia kernel: ip_tables: (C) 2000-2006 Netfilter Core Team

 

That error means either:

a) an HTTP GET was longer than 4096 bytes, or

b) an HTTP header was longer than 1024 bytes.

 

Does that happen upon accessing the webGui after boot?

Link to comment

Apparently dockerman was just updated to 2014.09.01-1 and I can't get anything to work when I click on the + in the Apps directory.  The form does not show when clicked.

That's an issue with gfjardim's plugin.

 

Installed the 2014.09.01-1 Dockerman from gfjardim and it's working fine for me from the Apps page.

Link to comment

 

That error means either:

a) an HTTP GET was longer than 4096 bytes, or

b) an HTTP header was longer than 1024 bytes.

 

Does that happen upon accessing the webGui after boot?

 

I segfaulted emhttp somehow, probably out of memory (I was creating a share, copying data through SMB, and installing a docker app at the same time; no syslog for this, I'll try to recreate it). This threw me off. After rebooting, I kept getting that error. Anyway, I rebooted again, saw emhttp running, and tried accessing it as I normally do in Chrome. Same error. Tried accessing it in Opera, and it worked.

 

For some reason, Chrome on my PC is causing this error even though it worked not 20 minutes ago. Sorry for the false alarm.

 

Edit: I guess the problem was a huge cookie LazyLibrarian created. Odd.

Link to comment

And after adding my 7th docker app, I lost emhttp. I've rebooted, etc, no dice.

 

Sep  1 19:58:52 Media kernel: br0: port 1(eth0) entered forwarding state
Sep  1 19:59:04 Media emhttp: read_line: read_line: input line too long
Sep  1 19:59:16 Media last message repeated 27 times
Sep  1 19:59:20 Media in.telnetd[1483]: connect from 192.168.1.5 (192.168.1.5)
Sep  1 19:59:22 Media login[1484]: ROOT LOGIN  on '/dev/pts/0' from 'my-pc'
Sep  1 20:00:03 Media emhttp: read_line: read_line: input line too long
Sep  1 20:00:45 Media last message repeated 3 times
Sep  1 20:00:47 Media last message repeated 11 times
Sep  1 20:03:06 PookieMedia kernel: ip_tables: (C) 2000-2006 Netfilter Core Team

 

That error means either:

a) an HTTP GET was longer than 4096 bytes, or

b) an HTTP header was longer than 1024 bytes.

 

Does that happen upon accessing the webGui after boot?

 

Could that be caused by a cookie being larger than the size emhttp supports?

Link to comment

Any new updates/abilities in this beta in regards to GPU passthru?  I've got my re-balled drive squared away, and am ready to update to beta8.  I do need to get GPU passthru working soon, or I need to rebuild the computer I took apart to update the unRAID server.

 

I haven't seen any new threads or posts regarding passthru, but jonp had said there was some big, new stuff coming in the new betas, so I'm just checking on the status.

 

Last I heard, nVidia cards weren't working, and my ATI card died, so I either need to buy a new video card, or rebuild the HTPC.  I'd prefer to have everything in one box, but don't want to buy a video card if I can't be sure of GPU working first.

 

thanks.

Link to comment

Any new updates/abilities in this beta in regards to GPU passthru?  I've got my re-balled drive squared away, and am ready to update to beta8.  I do need to get GPU passthru working soon, or I need to rebuild the computer I took apart to update the unRAID server.

 

I haven't seen any new threads or posts regarding passthru, but jonp had said there was some big, new stuff coming in the new betas, so I'm just checking on the status.

 

Last I heard, nVidia cards weren't working, and my ATI card died, so I either need to buy a new video card, or rebuild the HTPC.  I'd prefer to have everything in one box, but don't want to buy a video card if I can't be sure of GPU working first.

 

thanks.

Justin,

 

Nothing new in beta 8 over beta 7, but beta 7 included the nVIDIA KVM patch that hides the KVM flag from the nVIDIA drivers.

 

Beta 7/8 also both include an updated and patched Linux kernel and libvirt / QEMU toolset.

 

With either Xen or KVM, I'd hope you'll have GPU passthrough success with beta 8.

 

Link to comment

Could that be caused by a cookie being larger than the size emhttp supports?

 

Confirmed. Yeah, lazylibrarian is based on Headphones, and there are previous unRAID forum posts about Headphones causing this same issue. It just so happens I never noticed it when all my stuff was in a VM, because of the different IPs. Because all the containers share the host's IP, all the cookies get lumped together.
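
A quick way to sanity-check this (an ad-hoc test of my own, assuming the server answers on port 80 and logs to /var/log/syslog; replace 'tower' with your server name): send emhttp a request with an oversized Cookie header and see whether the same "input line too long" message appears.

# ~2KB cookie, well past the 1024-byte header limit mentioned above
curl -s -o /dev/null "http://tower/" \
  -H "Cookie: test=$(head -c 2000 /dev/zero | tr '\0' 'x')"

# then check the tail of the syslog on the server
tail -n 5 /var/log/syslog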

Link to comment

.....(Tom's preamble to updating)......................

You don't have to give it an .img extension.  You could give it any extension, or no extension at all.

 

Now click 'Start' to start docker.  Since this is the first time starting, it will create the image file and mount it at /var/lib/docker.

 

Once you have your new docker volume image set up, you can copy all your existing my-* templates by typing this command:

 

..........(Now down "Changes In This Release").................

When you visit the Extensions/Docker page you will notice a box to enter a Docker image file.  When you click 'Start' the file name you specify will be created and a btrfs file system will be created inside that file.  Next the file is loop mounted onto /var/lib/docker.  You are also able to resize the file (expand only, shrink not supported yet), and scrub (check) the file system.  The default size of "10" (meaning 10GB) seems reasonable to start with.

 

I'm a top-down-follow-the-menu kind of guy and I had problems getting Docker to start. The main problem is there's no mention of setting the initial image file size until down in the release notes. I totally missed it. I did stumble into setting it to 10GB and then it started. At the moment I'm still having issues and I probably need to reboot. But, please rewrite the instructions!

 

ADDENDUM: rebooted server and I was able to get my docker containers reloaded.

 

Link to comment

So, if I understand correctly, the advice now is to not use btrfs for our docker device?

 

I had already added an extra drive to my machine, outside of the array, and formatted it to btrfs, although I never made use of docker.

 

I guess that it would be best to now format that drive to xfs, and use it for docker?

 

There is mention of 'loopback-mounting' - what, if anything, do we have to do to achieve this 'loopback mount'?

Link to comment

So, if I understand correctly, the advice now is to not use btrfs for our docker device?

 

I had already added an extra drive to my machine, outside of the array, and formatted it to btrfs, although I never made use of docker.

 

I guess that it would be best to now format that drive to xfs, and use it for docker?

 

There is mention of 'loopback-mounting' - what, if anything, do we have to do to achieve this 'loopback mount'?

 

Maybe this will clear things up.

 

1. In the beginning, there were just array disks formatted with reiserfs.

2. Then we added a "cache disk", also formatted with reiserfs.

 

3. Next we added the ability to format array disks and the cache disk with another file system besides reiserfs.  We chose xfs.

 

4. Next we added redundancy to the 'cache'.  But instead of using a traditional RAID-1 we decided "let's use btrfs" because of its multi-device capabilities, that is, its built-in ability to mirror data, à la ZFS.  This feature is called "cache pool".

 

5. Now, btrfs works great as a single-device file system, so we added the ability to format array disks with btrfs as well as xfs and reiserfs.

 

6. Next we wanted to add Docker; but Docker works best when it exists on a COW (copy on write) file system.  It can exist on a non-COW file system but each image layer it creates starts out as a complete copy of its parent.  This uses lots of space, but worse: it's much much slower to build images.  Traditionally Docker used aufs or device-mapper COW functionality, but recently they added btrfs support.  Hey, where do we now use btrfs?  For the cache pool!  So we let Docker live on a btrfs-formatted cache disk, or btrfs cache pool.

 

7. Uh oh, there are some problems.  It turns out it's not so easy to move btrfs directory trees to other devices.  We also break the model where the array and cache are just data storage - the ./docker/btrfs/subvolume/* images are fully exposed.  Finally, a "rude awakening" would result if the 'mover' ever got ahold of the docker share  :o

 

8. Loopback-mounted "disk image" volumes to the rescue!  We changed our Docker implementation so that Docker's working directory is now on a loopback-mounted disk image which is itself formatted with btrfs (because: COW).  The unRaid docker manager handles all the details of initially creating the image file, as well as providing tools to expand the file and "scrub" the contained file system.
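
(For the curious, the "expand" and "scrub" operations correspond roughly to the shell steps below. This is a sketch under the assumption that the image file lives at /mnt/cache/docker.img - the docker manager does all of this for you.)

# Expand: stop docker, unmount, grow the backing file, remount, grow the fs
/etc/rc.d/rc.docker stop
umount /var/lib/docker
truncate -s +10G /mnt/cache/docker.img            # grow the file by 10GB
mount -o loop /mnt/cache/docker.img /var/lib/docker
btrfs filesystem resize max /var/lib/docker       # grow btrfs to fill the file

# Scrub: verify checksums of everything in the contained file system
btrfs scrub start -B /var/lib/docker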

 

The above order of implementation does not match exactly how those changes were released but it's fairly close.

 

So the state of affairs starting with -beta8 is this:

- You can format array disks or cache disk with either reiserfs, xfs, or btrfs

- If you want a multi-device cache pool instead of a single cache disk, it must use btrfs

- A disk image file is loop-mounted at /var/lib/docker to serve as the Docker working directory.  This image file is formatted using btrfs, but the file itself can exist on any storage device: one of the array disks, cache disk/pool, or outside control of unRAID (so called SNAP devices).

- Data used/manipulated by Docker containers is typically mapped outside the containers.

- Containers typically manipulate two kinds of data: media/database data located on array/cache shares, and metadata.  unRaid users have kinda "standardized" on using a share called "appdata" to store the metadata, though not all containers have to use a single appdata share.
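
As an illustration of that mapping convention - the container and share names here are made up, not taken from any particular template - a container started by hand would look something like this, with nothing application-visible stored inside the docker image file itself:

# metadata goes to the 'appdata' share, media stays on an ordinary share
docker run -d --name someapp \
  -v /mnt/user/appdata/someapp:/config \
  -v /mnt/user/Media:/media \
  -p 8080:8080 \
  somerepo/someapp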

 

Here is how we preconfigure AVS-10/4 servers:

- array devices are formatted with reiserfs (or xfs or btrfs per customer request)

- cache is formatted with btrfs regardless of how many devices the customer orders initially.  This makes it easy to add devices to the cache.

- the docker disk image file is created on array disk1, an 'appdata' share is created on cache

- any VM image files are created on cache

 

Other notes:

- for array devices, since btrfs is still marked "experimental", we choose to prefer reiserfs/xfs

- personally I have no reservations using btrfs on array devices, but we have far more experience with reiserfs, especially with recovering from corruption in the file system.  reiserfsck has done some amazing recovery in the past.

- btrfs-formatted array disks have nothing to do with cache disk/pool; likewise nothing to do with Docker disk image

 

Finally, why the sudden switch, after 9 years, to get off reiserfs?  Only the realization that reiserfs is no longer really maintained - it is maintained for bug fixes, but not for performance.  For practical purposes it is deprecated.  Also, there is a 16TB volume size limit, and we now have HDDs hitting 8TB.  I wanted to get well out in front of the day when an HDD hits the market that's too big for reiserfs.

 

Hope this helps.

 

 

Link to comment

Thanks for the writeup - it clarifies some points.

 

Coming back to my question - I have a drive in my system, outside of the protected array, dedicated to docker.  I had already formatted it, and was mounting it, as btrfs while running V6beta5a.  So far I have no live docker installed - just a quick test of needo's deluge, which I installed yesterday.

 

Before I start going live with docker, would it be advisable to reformat that docker drive to xfs, bearing in mind the caveats about copying directory trees?

Link to comment

I think you should be able to use '/dev/disk/by-id/ata-Samsung_SSD_840_EVO_250GB_S1DBNSCF31xxxxx-part1' as the 'Docker image:' - did you try it?

 

Not yet. Still on beta7. Will probably update next weekend. On a business trip this week.

 

My SSD has the following contents. Should I delete any of it once I get beta8 loaded?

 

root@Tower:~# ls /mnt/btrfs

docker/

root@Tower:~# ls /mnt/btrfs/docker

btrfs/      execdriver/  init/        repositories-btrfs  volumes/

containers/  graph/      linkgraph.db  vfs/

 

Link to comment
