FAQ for unRAID v6


Can I replace my cache device with a smaller one?


The preferred procedure for replacing a btrfs cache device (single device or pool member) requires that the new device be the same size or larger than the old one. However, as long as the data fits on the smaller device, it can be used as a replacement after shrinking the file system. To start, you need to find the devid of the device you want to shrink (even for a single device it may not be devid 1, if the pool was changed in the past). Type in the console/SSH:

btrfs fi show /mnt/cache

Output will be similar to this:


Label: none  uuid: 85941ebc-286d-4dac-9bbc-448e85d626cc
       Total devices 2 FS bytes used 2.92GiB
        devid    1 size 232.89GiB used 4.03GiB path /dev/sde1
        devid    2 size 132.89GiB used 4.00GiB path /dev/sdf1


If you want to shrink /dev/sde, that would be devid 1; to do it, type the following:

btrfs filesystem resize X:Yg /mnt/cache

Replace X with the devid and Y with the new size (in GiB). For example, to reduce /dev/sde to 100GiB:

btrfs filesystem resize 1:100g /mnt/cache

(You can also use m for MiB, t for TiB, etc. Note that the command uses gibibytes, so if your new device is, say, 120GB, you need to shrink the old one to ~111GiB or it will still be larger.)
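As a sanity check before resizing, a one-liner like this (a sketch, assuming awk is available, as it is on unRAID) converts a drive's decimal GB size to GiB to help pick a safe resize target:

```shell
# Convert a marketing-GB (decimal) size to GiB (binary)
new_gb=120
awk -v gb="$new_gb" 'BEGIN { printf "%dGiB\n", gb * 1000^3 / 1024^3 }'   # 111GiB
```

So a "120GB" drive holds about 111GiB, matching the figure above.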


In a pool, if the new size is smaller than the space currently used on the device, data will automatically be moved to the other pool device(s), as long as it fits.


After the device is the same size as (or smaller than) the replacement, you can use the replace cache device procedure.


I'm getting low read speeds from my unRAID server, is there a fix?


There's an issue with the Samba version included in unRAID v6.2 and above that, with some hardware configurations, may give slower than normal read speeds to Windows 8/10 clients (and the related server releases). My tests indicate that the HDD brand/model used is one of the main factors. Write speed is not affected, and Windows 7 clients are also not affected.


To fix the issue, add the following to "Samba extra configuration" under Settings -> SMB:

max protocol = SMB2_02

Stop and restart the array for the change to take effect. Windows clients may need to reboot to reconnect.


Unrelated to this, 10GbE users should make two more changes for better overall performance (reads and writes):


1 - Change the NIC MTU to 9000 (on the unRAID server and any other computer with a 10GbE NIC)
2 - Go to Settings -> Global Share Settings -> Tunable (enable direct IO) and set it to Yes


This last change may also improve performance for gigabit users on some hardware configurations when reading from user shares.



Can you explain the Cache drive option types, and what is the difference between them?


Here is a table illustrating the differences:


Table of Cache drive usage options and their behaviors

    C=Cache drive

    D=Data drive(s)


                     Cache:No   Cache:Yes   Cache:Only   Cache:Prefer

Data should be on:       D         C+D          C            C+D
New files first to:      D          C           C             C
Files overflow to:       -          D           -             D
Mover moves:            No        C to D       No           D to C
Orphaned files:          C          -           D             -



- Orphaned files are files located where they don't belong (e.g., files on D with Cache:Only); they won't be moved by the Mover

- Files on both C and D are still visible in shares, for all options

- Shares are all root folders on all array data and Cache drives

- New files overflow to the secondary destination when there is not enough space on the preferred destination

- Cache:Prefer is the newest option.  In general, it is now preferred over Cache:Only because it behaves the same but adds overflow protection.  If you fill up the Cache drive, copying to that share will continue on to a data drive, rather than erroring out as it would with Cache:Only.  And if the Cache drive drops out, you can still continue, using a data drive for the same share.  Once the Cache drive is restored, the Mover will move the share's files back to it.

Some typical usage scenarios

  • Cache:Yes - data is written to the Cache drive, then Mover moves it to the data drives
    - This is the typical Cache drive usage for large shares, to speed up writes to the array.  The data will mainly be stored on the parity protected array, but writes will be at full speed to the Cache drive, then later moved at idle times to the array.
  • Cache:No - keeps all data on the data drives
    - This is similar to Cache:Yes, but doesn't use the Cache drive, which is fine if you don't need the speed boost when writing files to the shares.
    - An alternative usage is to keep most of the data on the array drives, but manually place selected data on a fast Cache drive, in the same share folders, for faster access to that data.  It is still visible in the share but won't be moved to the data drives.  For example, commonly accessed metadata might be placed there.  This may help keep the data drives from spinning up.  (This is similar to the alternative usage of Cache:Only)
  • Cache:Only - keeps all data on the Cache drive or pool
    - This is typically used for smaller shares or shares you want faster access to.
    - An alternative usage is to write and keep new data on the Cache drive, but manually move rarely accessed older files to the same share folders on the array data drives.  Both sets of files are visible in the share.  This may help minimize data drive spin up.  See this post.  (This is similar to the alternative usage of Cache:No)
  • Cache:Prefer - keeps data mainly on the Cache drive or pool, but allows overflow to the array
    - This is similar to Cache:Only, typically used for smaller shares or shares you want faster access to.  But it has additional advantages over Cache:Only - data that won't fit on the Cache drive can overflow to the array drives.  Also, if the Cache drive fails, the same share folders on the data drives will still continue working.  It's also useful if you don't yet have a Cache drive, but are planning to get one.  Once it is installed, the Mover will automatically (on its schedule) move all it can to the Cache drive.  And if you need to do maintenance on the Cache drive or pool, you can move all the files to the array, and they will be moved back once you are done 'maintaining'.

I have an unmountable BTRFS filesystem disk or pool, what can I do to recover my data?


Unlike with most other file systems, btrfs fsck (check --repair) should only be used as a last resort.  While it's much better in the latest kernels/btrfs-tools, it can still make things worse.  So before resorting to it, try the following steps, in this order:


1) Mount filesystem read only (non-destructive)


Create a temporary mount point, e.g.:

mkdir /x

Now attempt to mount the filesystem read-only:

mount -o recovery,ro /dev/sdX1 /x

For a single device: replace X with the actual device; don't forget the 1 at the end, e.g., /dev/sdf1

For a pool: replace X with any of the pool's devices to mount the whole pool (as long as no devices are missing); don't forget the 1 at the end, e.g., /dev/sdf1.  If the normal read-only recovery mount doesn't work, e.g., because a device is damaged or missing, use this instead:


mount -o degraded,recovery,ro /dev/sdX1 /x

Replace X with any of the remaining pool devices to mount the whole pool; don't forget the 1 at the end, e.g., /dev/sdf1.  If all devices are present and the pool doesn't mount with the first device you tried, use the other(s); the filesystem on one of them may be more damaged than on the other(s).


Note that if more devices are missing than the profile's redundancy permits, the pool may still mount, but some data will be missing; e.g., mounting a 4-device raid1 pool with 2 devices missing will result in missing data.


If it mounts, copy all the data from /x to another destination, such as an array disk; you can use Midnight Commander or your favorite tool.  After all the data is copied, format the device or pool and restore the data.
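The copy step itself is ordinary; here it is demonstrated on throwaway directories (in practice the source would be /x and the destination an array disk such as /mnt/disk1, and the file name here is just an example):

```shell
src=$(mktemp -d); dst=$(mktemp -d)
touch "$src/important.file"
cp -a "$src/." "$dst/"     # -a preserves permissions, ownership and timestamps
ls "$dst"                  # important.file
rm -rf "$src" "$dst"
```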


2) BTRFS restore (non-destructive)


If mounting read-only fails, try btrfs restore; it will attempt to copy all the data to another disk.  You need to create the destination folder first, e.g., create a folder named restore on disk2 and then run:

btrfs restore -v /dev/sdX1 /mnt/disk2/restore

For a single device: replace X with the actual device; don't forget the 1 at the end, e.g., /dev/sdf1

For a pool: replace X with any of the pool's devices to recover the whole pool; don't forget the 1 at the end, e.g., /dev/sdf1.  If it doesn't work with the first device you tried, use the other(s).


If the restore aborts due to an error, you can add -i to the command to skip errors, e.g.:

btrfs restore -vi /dev/sdX1 /mnt/disk2/restore

If it works, check that the restored data is OK, then format the original btrfs device or pool and restore the data.


3) BTRFS check --repair (destructive)


If all else fails, and only as a last resort, use check --repair:


If it's an array disk, first start the array in maintenance mode and use mdX, where X is the disk number; e.g., for disk5:

btrfs check --repair /dev/md5

For a cache device (or pool) stop the array and use sdX:

btrfs check --repair /dev/sdX1

Replace X with the actual device (use cache1 for a pool); don't forget the 1 at the end, e.g., /dev/sdf1


Why do I see csrf errors in my syslog?


Starting with 6.3.0-rc9 unRaid includes code to prevent CSRF vulnerabilities.  (See here)  Some plugins may have needed to be updated in order to properly work with this security measure.


There are 3 different errors that you may see logged in your syslog:


missing csrf_token - This error happens if you have plugins that either have not been updated to conform to the security system or are not running their latest version.  Should you see this error, check for and install updates for your plugins via the Plugins tab.  To my knowledge, all available plugins within Community Applications have either been updated to handle csrf_tokens or were not affected in the first place.  If updating your plugins does not solve the issue, post in the relevant support thread for the plugin; there will be hints on the log line as to which plugin generated the error.


wrong csrf_token - CSRF tokens are randomly generated at every boot of unRaid.  You will see this error if you have one browser tab pointed at a page in unRaid and on another tab you initiate a restart of unRaid.  Note that the browser in question can also be on any device on your network.  This includes other computers, tablets, phones, etc.  This error can also be caused by mobile apps such as ControlR checking the status of unRaid but the server has been rebooted after the app was started.  Restart the application to fix.


unitialized csrf_token - Thus far the community has not seen a single report of this being logged.  Presumably it is an error generated by unRaid itself during Limetech's debugging (i.e., not plugin related); should you see it, post your diagnostics in the release thread for the version of unRaid you are running.  EDIT:  If your rootfs is completely full due to misconfiguration of an application, you may also see this particular token error.



Why can't I delete a file (without permissions from root/nobody/Unix user/999/etc)?

My VM/Docker created some files but I can't access them from Windows?


First a primer:

Unix filesystem permissions/ACLs (access control lists) in a nutshell

There are always 3 permission groups (owner, group, other)

  • owner - if you own the file, these permissions apply
  • group - if you are a member of the group, these permissions apply
  • other - if you are not the owner or member of the group, these permissions apply

Permissions are cumulative, there is no "deny" permission, so if one group grants permission, permission is granted.


You can easily check the permissions of a file from the shell with:

root@Tower:~# ls -l /mnt/user0/slackware/
total 92
-rw-r--r-- 1 nobody users  5336 Oct 29 15:20 mirror-slackware-current.conf
-rw-r--r-- 1 nobody users  5397 Oct 29 15:20 mirror-slackware.conf
drwxrws--- 1 root   root     56 Jan 16  2014 multilib/
drwxrws--- 1 root   root   4096 Jun 11  2015 sbopkgs/
lrwxrwxrwx 1 root   root     16 Jan 28  2016 slackware64 -> slackware64-14.1/
drwxrws--- 1 nobody users  4096 May 28  2016 slackware64-14.1/
drwxr-xr-x 1 root   root   4096 Dec  5 02:00 slackware64-14.2/
drwxrws--- 1 root   root   4096 Aug 11  2016 slackware64-14.2-iso/
drwxr-xr-x 1 nobody users  4096 Dec  5 02:01 slackware64-current/
drwxrws--- 1 nobody users  4096 May  1  2015 slackwarearm-14.1/

The permissions are displayed in the 10-character string at the start of each line.


  • the first character tells us the type of entry (file, directory, link, etc.) we are working with
  • the first triad is the owner permissions; these apply to the owner of the file/directory/etc.
  • the 2nd triad is the group permissions; these apply to the members of the group of the file/directory/etc.
  • the last triad is the other/else permissions; these apply to users who are neither the owner nor members of the group

For files:

To read a file: read permission is needed. r--

To write a file: write permission is needed. -w-

To execute a file (as a script, or binary): execute is needed. --x


For directories:

To list the contents of a directory: read and execute are needed. r-x (weird things happen otherwise)

To create or delete files in a directory: write (plus execute) is needed on the directory itself. -w- Note that deleting a file requires write permission on its directory, not on the file.
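The triads are easy to experiment with on a throwaway file; chmod's octal digits map directly onto them (7 = rwx, 5 = r-x, 4 = r--):

```shell
tmp=$(mktemp)
chmod 754 "$tmp"            # owner rwx (7), group r-x (5), other r-- (4)
ls -l "$tmp" | cut -c1-10   # -rwxr-xr--
rm -f "$tmp"
```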



So for a file /mnt/user/share/a/b

drwxrwxr-x 1 nobody users 2 Mar 15 11:57 a/
-rw-rw-rw- 1 nobody users 2 Mar 15 11:57 a/b

For anyone other than root, nobody, or members of users, the file b would be impossible to delete, since write permission on the directory is missing.

The file itself, however, can be overwritten by anybody.
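You can reproduce this situation with a temporary directory; note that it is the directory's permission string that decides whether b can be deleted:

```shell
d=$(mktemp -d); touch "$d/b"
chmod 666 "$d/b"            # the file b: read/write for everyone
chmod 555 "$d"              # the directory: no write bit anywhere
ls -ld "$d" | cut -c1-10    # dr-xr-xr-x -> a regular user cannot delete b
chmod 755 "$d"; rm -rf "$d" # clean up
```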


Now, Windows access to the files is over SMB

SMB has two modes of access to the files; Samba is the application providing the access.

  • Public/Guest access - (unRAID default) in this mode, all access is allowed. There are no passwords needed. Files and directories are created with the nobody user
  • Private/Secure access - in this mode, users need to be defined and passwords assigned. Files and directories are owned by the user who created them. But when a share is created, unRAID assigns it to nobody with full read, write, execute for all (owner, group, and others).

The problem begins when a VM or Docker container is creating files. Let's say the VM is using the user backup.

Let's say user alice is trying to delete the old backups from her Windows PC.

Even if the shares are public, she will hit an error about requiring permissions from backup to delete the files. Why?

Because samba will be using the user nobody to delete the files made by the backup user, and typically the file permissions won't allow it.

If the shares are private/secure, it can still fail, because the alice user is not the same as the backup user, and thus the permission problem exists again. (There are cases where this is not true, but that's a bit outside the scope of this FAQ.)


On 4/18/2016 at 10:17 PM, RobJ said:

This thread is reserved for Frequently Asked Questions, concerning unRAID as a NAS, its setup, operation, management, and troubleshooting.  Please do not ask for support here, such requests and anything off-topic will be deleted or moved, probably to the FAQ feedback topic.  If you wish to comment on the current FAQ posts, or have suggestions or requests for the FAQ, please put them in the FAQ feedback topic.  Thank you!


Index to common questions

  Some are from the wiki FAQ, some from this thread, and some from the LimeTech web site.  There are many more questions with answers on the wiki FAQ.

Getting Started

General Questions

Cache Drive/Pool


Maintenance and Troubleshooting


unRAID FAQs and Guides -

* Guides and Videos - comprehensive collection of all unRAID guides (please let us know if you find one that's missing)

* FAQ for unRAID v6 on the forums, general NAS questions, not for Dockers or VM's

* FAQ for unRAID v6 on the unRAID wiki - it has a tremendous amount of information, questions and answers about unRAID.  It's being updated for v6, but much is still only for v4 and v5.

* Docker FAQ - concerning all things Docker, their setup, operation, management, and troubleshooting

* FAQ for binhex Docker containers - some of the questions and answers are of general interest, not just for binhex containers

* VM FAQ - a FAQ for VM's and all virtualization issues


Know of a question that ought to be here?  Please suggest it in the FAQ feedback topic.



Suggested format for FAQ entries - clearly shape the issue as a question or as a statement of the problem to solve, then fully answer it below, including any appropriate links to related info or videos.  Optionally, set the subject heading to be appropriate, perhaps the question itself.


While a moderator could cut and paste a FAQ entry here, only another moderator could edit it.  It's best therefore if only knowledgeable and experienced users create the FAQ posts, so they can be the ones to edit it later, as needed.  Later, the author may want to add new info to the post, or add links to new and helpful info.  And the post may need to be modified if a new unRAID release changes the behavior being discussed.


Moderators:  please feel free to edit this post.



How can I stop mover from running?  (Possibly unRaid 6.3.3+ only)


Since you can't stop the array while mover is running, to stop mover enter the following, either from an SSH terminal or at the local keyboard/monitor:

mover stop





Why is my GUI Slow and/or unresponsive?


This problem has been traced, in several cases, to an anti-virus program suite and its settings. The link below will take you to two posts which provide a rather complete description of the problem and its solution.

While you might not be running Avast, I have no doubt that other antivirus products will have similar issues in the future.  You should definitely investigate this area if you are having any type of problem with a slow, misbehaving, or unresponsive GUI.


EDIT:  Keep reading in the thread as there is continuing investigation into the issues with Avast. 


I'm having trouble with lockups / crashes / etc.  How can I see the syslog following a reboot?


All 3 of the methods below will continually write the syslog (as it changes) to the flash drive, up to the moment the lockup / crash / reboot of the server happens.


unRaid runs completely from RAM, so there is normally no way to view the syslog from one boot to the next.  However, there are a few different ways to preserve it across reboots.


Method 1:  Via the User Scripts Plugin:

Method 2: Via Fix Common Problems Plugin

  • Within Fix Common Problems settings, put it into Troubleshooting mode

Method 3: Via a screen session or at the local keyboard & monitor

tail -f /var/log/syslog > /boot/syslog.txt
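The mechanism behind Method 3 is simply tail -f redirected to persistent storage; it can be demonstrated on a throwaway file instead of the real syslog (the log line here is invented for illustration):

```shell
log=$(mktemp); out=$(mktemp)
tail -f "$log" > "$out" &     # follow the "syslog" and persist it elsewhere
pid=$!
echo "kernel: example message" >> "$log"
sleep 2                       # give tail a moment to pick up the new line
kill "$pid"
cat "$out"                    # kernel: example message
rm -f "$log" "$out"
```

On the server, the source is /var/log/syslog and the destination is on the flash drive (/boot), which survives a reboot.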



Pros / Cons


Method 1 will create a new syslog file on the flash drive at every boot, so you can compare runs and not lose any historical data for reference.


Method 2 logs a ton of extra information that may (or may not) help with diagnosing issues.  However, all this extra logging may itself contribute to a crash, due to the log filling up, if troubleshooting mode is left enabled for more than a week or so.  It also requires you to re-enable it on every boot (by design).


Method 3's information is identical to Method 1's, but it requires you to re-enter the command every time you want the information.  Additionally, if the command is not entered at the local console or within a screen session, closing the SSH (PuTTY) window will stop the logging.




In the case of lockups, etc., it is highly advisable to have a monitor connected to the server and to take a picture of whatever is on it before rebooting the server.  No script can capture errors that were only output to the local monitor.




Copyright © 2005-2017 Lime Technology, Inc. unRAID® is a registered trademark of Lime Technology, Inc.