RFS or XFS or BTRFS with unRAID 6b7+


SSD


Thought I'd create a poll to gauge what users of BETA7 and beyond plan to do with their file systems. Hope this will become a resource to help those not sure (me included) make a good decision.

 

Considerations:

- XFS has been around longer.

- BTRFS is newer but brings new features

 

The particular things that matter to me are ...

 

1 - Stability - I think both are stable for single disks, but the edge has to go to XFS, which has more maturity

2 - Withstanding a hard boot - Which is better? Both are journaled like RFS, so both probably do well. Btrfs maintains checksums and can help detect corrupted files - a nice feature if a hard boot occurs and parity errors are generated

3 - A very effective "fsck" tool - reiserfsck is very good. Do XFS and BTRFS have similar tools (probably yes), and which is better? The edge probably goes to XFS based on maturity, but I'm not sure.

 

HERE is something I found indicating that XFS is better for single-tasking, and btrfs better for multitasking.

 

If you select one of the undecided categories, please change your vote based on your final decision.

 

I encourage everyone voting with a firm decision to list their reasons, and those undecided to post questions to help make the decision.


Keep voting guys ...

 

I did some research and found:

 

- The BTRFS file-checking tool can take 24 hours on a drive of a few TB - see HERE - and generates a TON of output. The article goes back to 2008, but there were updates as recently as this month, so I'm assuming it is still accurate.

 

- BTRFS does not do de-duplication - so my original thought that a backup drive might benefit from BTRFS seems to be false.

 

- BTRFS is "experimental", although single drive usage is considered stable

 

- RFS seems to be falling out of vogue with the Linux community and is likely on its way out. There is no team continuing development, and with a smaller user base, bug fixes and support for newer Linux versions will get slower. Can you say "death march"? That, combined with the author being in jail for murdering his wife, kind of makes me motivated to move on.

 

- XFS - Older FS comparison (so old that BTRFS was not included) HERE:

 

Filesystems :: XFS

XFS is Silicon Graphics' "Next Generation Journalled 64-Bit Filesystem With Guaranteed Rate I/O", designed for IRIX-based systems.

 

XFS uses standard inodes, bitmaps and blocks, and is compatible with EFS and NFS filesystems.

 

According to the XFS white paper, it has:

 

    Scalable features and performance from small to truly huge data (petabytes)

    Huge numbers of files (millions)

    Exceptional performance: 500+ MBytes/second

    Designed with log/database (journal) technology as a fundamental part not just an extension to an existing filesystem

    Mission-critical reliability

 

- XFS - "I have experienced several "dirty shutdowns" (due to thunderstorms and power outages) since switching to XFS. Upon restarting the box, I ran the disk check program (explained below) and didn't experience any detectable corruption. By contrast, I did lose data a few years ago on a power failure when I was using ext3. " Writeup on XFS from MythTV

 

- XFS - Tech info on XFS repair

 

- XFS - Chk Performance Test, XFS vs ext3

 

My research is pushing me more towards the XFS route. I like the mission-critical reliability as well as the real-world feedback on its ability to withstand power failures on a MythTV server, which does near-continuous I/O.


Can you add an option for total indecision, as in...

 

Since we are going to be allowed to mix and match in the same array, I plan on migrating certain share types to the file system that seems to best match.

 

I may even set up large "cache pools" and rely on BTRFS RAID1 for failure protection for large portions of what is now on the array proper. That content could even be set to periodically back up to an XFS drive in the array.

 

The options seem so wide open right now, I can't honestly pick from the list you currently have.


I'm with jonathanm  :)

 

... although based on the detailed post JonP made on the pros/cons of the newly supported file systems, I'm leaning towards the following:

 

(a)  Add a couple SSDs as a BTRFS cache pool for my main system.

 

(b)  On the NEXT system I build, use a BTRFS cache pool and XFS array drives.

 

(c)  Leave all my current servers on Reiser.

 


As BTRFS matures, I may change my mind about the array drives.  I definitely like that it can be configured for higher levels of redundancy -- 2 failed drives, 3 failed drives, etc. (presuming you have enough drives in the pool).  But apparently these features aren't available yet.

 

But given a bit more time, this definitely seems like the file system of choice.

 


... so we are waiting to see what others do and what the consensus is in a few months time.

 

Sounds like the "Stick with RFS" option ... at least for now.

 

Not really; "stick with RFS" sounds like a conscious decision to stay on that file system.  My option is more about not moving until a path is cleared.


As BTRFS matures, I may change my mind about the array drives.  I definitely like that it can be configured for higher levels of redundancy -- 2 failed drives, 3 failed drives, etc. (presuming you have enough drives in the pool).  But apparently these features aren't available yet.

 

But given a bit more time, this definitely seems like the file system of choice.

 

To keep a main tenet of unRAID - each file completely contained on a single, independently mountable drive - use of BTRFS should be limited to single-drive usage within the unRAID array.

 

Once you start using BTRFS based RAID, I'm not sure what you are expecting to get from unRAID. RAID on RAID?

 

But even using BTRFS on a single drive within unRAID, many of its features, like snapshots, are not exposed.

 

XFS would be the risk-averse choice, and has been for a long time.


FWIW, I thought I would chime in with my experience with btrfs and reiserfs and the fsck tools. I recently was running unRAID as a domU KVM. For whatever reason this was causing my data disks to become corrupt and show up as unformatted on unRAID. My cache drive (btrfs) was the first drive to show symptoms. I ran btrfs version of fsck but it refused to repair the errors. I then said fine and formatted the cache drive back to reiserfs and continued to use unRAID in KVM (I didn't realize that my kvm setup was causing the issues). The next day my cache drive was shown as unformatted and I ran the fsck tool and it fixed the issues on the first attempt. This happened a few more times before I stopped running unRAID as DomU and reiserfs fsck fixed the errors each time. So in my very limited experience, reiserfs has a much more successful recovery rate using the fsck tool than btrfs.

 

That said, I may consider XFS for my drives, but I will see how others fare first ;)


The VERY robust recovery features of reiserfsck are indeed a very attractive feature of the current file system.    It's not at all clear that the other file systems can achieve equal success when you have problems.    Other than the "minor detail" that the author is serving a life sentence and not exactly available to maintain it, Reiser seems to be a lot like the "little engine that could"  :)

 

... I'll undoubtedly use something else for my next unRAID system, but have no plans to change my current servers (with the possible exception of adding a btrfs cache pool).

 

 


FWIW, I thought I would chime in with my experience with btrfs and reiserfs and the fsck tools. I recently was running unRAID as a domU KVM. For whatever reason this was causing my data disks to become corrupt and show up as unformatted on unRAID. My cache drive (btrfs) was the first drive to show symptoms. I ran btrfs version of fsck but it refused to repair the errors. I then said fine and formatted the cache drive back to reiserfs and continued to use unRAID in KVM (I didn't realize that my kvm setup was causing the issues). The next day my cache drive was shown as unformatted and I ran the fsck tool and it fixed the issues on the first attempt. This happened a few more times before I stopped running unRAID as DomU and reiserfs fsck fixed the errors each time. So in my very limited experience, reiserfs has a much more successful recovery rate using the fsck tool than btrfs.

 

That said, I may consider XFS for my drives, but I will see how others fare first ;)

 

Thanks for sharing your experience! This is great info and will certainly influence people's FS decisions. It would have been great if you had switched to XFS so we'd have gotten an apples-to-apples-to-apples comparison. I DO expect that XFS will do at least a good job at disk recovery, but I'm unsure it will match RFS which, based on forum experiences, has been exemplary.

 

I was thinking of a way to corrupt a disk on purpose to run a controlled test. Maybe write a couple megs of binary zeroes to the start of a disk. Any thoughts?
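The zero-the-start idea can be rehearsed safely on a scratch file before pointing it at real hardware. A minimal sketch (all paths here are hypothetical; aiming the second dd at a real /dev/sdX is destructive and should only ever be done on a disk you can sacrifice):

```shell
# Create a scratch "disk image" to stand in for the test disk.
IMG=$(mktemp /tmp/fs-corrupt-demo.XXXXXX)

# Fill it with 16 MiB of pseudo-random data.
dd if=/dev/urandom of="$IMG" bs=1M count=16 status=none

# Checksum the intact image so we can prove the damage took effect.
BEFORE=$(md5sum "$IMG" | awk '{print $1}')

# Overwrite the first 2 MiB with zeroes, leaving the rest in place --
# the same damage pattern proposed for the controlled on-disk test.
# conv=notrunc keeps dd from truncating the file after 2 MiB.
dd if=/dev/zero of="$IMG" bs=1M count=2 conv=notrunc status=none

AFTER=$(md5sum "$IMG" | awk '{print $1}')
echo "before=$BEFORE after=$AFTER"
```

On a real disk, this would clobber the superblock and early metadata, which is exactly the scenario the fsck tools would then be judged on.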


To keep a main tenet of unRAID, each file is completely contained on a single, independently mountable drive, ...

 

In my case this was a major benefit for the use of unRAID.

I had a RAID5 array when I decided to go with unRAID.

If I lost 2 drives, it would still be better than losing the whole array.

 

Time has passed and I'm on my way to start a backup server with unRAID.

I'm now less worried about multiple drive failures than about bitrot & silent corruption,

since that would also destroy my backup (mirror).

 

Which FS is best in this case?

 

P.S. I would still want to conserve the "independently mountable drive" option.


Which FS is best in this case?

 

I think when BTRFS is fleshed out and tested well enough, its internal checksums will help 'reveal' bitrot.

From what I've read, every time a file is read, its checksum is verified.

That, or a routine md5 checksum of the files in the backup area.

 

BTRFS also has scrub, which will find bitrot before a file is ever read. This is like having scheduled SMART long self-tests (everyone does that, right?).
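The "routine md5 checksum" approach works today on any file system, RFS included. A minimal sketch, using a temporary directory as a stand-in for the real backup share (paths and file names are hypothetical; on unRAID you would point BACKUP_DIR at something like /mnt/disk1/backup):

```shell
# Stand-in for the backup share; replace with the real mount point.
BACKUP_DIR=$(mktemp -d)
printf 'hello\n' > "$BACKUP_DIR/a.txt"
printf 'world\n' > "$BACKUP_DIR/b.txt"

# Build the manifest once, right after the backup is written. It is
# stored outside the tree so it doesn't checksum itself.
( cd "$BACKUP_DIR" && find . -type f -print0 | xargs -0 md5sum > /tmp/manifest.md5 )

# Later (e.g. from cron), re-verify every file against the manifest.
# md5sum -c exits non-zero and names each file whose checksum mismatches.
( cd "$BACKUP_DIR" && md5sum -c --quiet /tmp/manifest.md5 ) && echo "backup OK"
```

A btrfs scrub automates the same idea at the block level, but this manifest method has the advantage of working per-file on any of the three file systems.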


Which FS is best in this case?

 

I think when BTRFS is fleshed out and tested well enough, its internal checksums will help 'reveal' bitrot.

From what I've read, every time a file is read, its checksum is verified.

That, or a routine md5 checksum of the files in the backup area.

 

BTRFS also has scrub, which will find bitrot before a file is ever read. This is like having scheduled SMART long self-tests (everyone does that, right?).

 

From what I saw, the script was just calling cat (reading the file). Checksum issues are then reported via dmesg.

In any case it's better than what we have today.

 

However, this is a level above the SMART long test.

The long test checks that each sector can be read even if no file is present.

 

The SMART test will tell you which LBA has an issue - not of much use for us.

The checksum will tell you which file has an issue, which I find highly useful. You can go to your backups and restore!
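The cat-based scrub script described above amounts to very little code. A sketch, run here against a scratch directory (paths are hypothetical; on unRAID you would point DIR at /mnt/diskN, and the dmesg check is only meaningful on a real btrfs disk):

```shell
# Scratch directory standing in for an array disk's mount point.
DIR=$(mktemp -d)
printf 'frame data\n' > "$DIR/movie.mkv"

FILES=$(find "$DIR" -type f | wc -l)

# Reading is enough: on btrfs, every block's checksum is verified as it
# is read, so cat-to-/dev/null forces a full integrity pass.
find "$DIR" -type f -exec cat {} + > /dev/null
echo "read $FILES file(s)"

# On a real btrfs disk, a mismatch is logged by the kernel; dmesg may
# require root, so a failure here is ignored in this sketch.
dmesg 2>/dev/null | grep -i 'csum failed' || echo "no checksum errors logged"
```

The kernel log entry identifies the affected inode, which is what makes this file-level (rather than LBA-level) reporting possible.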


Here is something for you to think about.

 

If the main use of your unRAID system is to store movies, and you got 2% bitrot in your movie file, would you want

 

(a) the filesystem to tell you and lock you out of the file, or,

(b) to be blissfully unaware, since with 2% corruption most movies are still playable and you may not even notice the corruption.

 

I think it's important that future versions of unRAID have a separate file system per disk (like ReiserFS), this is a big attraction of unRAID to me.


I was hoping testing would sway me one way or another.  So far I have the following breakdown across 2 unraid servers:

 

9 Converted to XFS

1 Converted to BTRFS

15 Still ReiserFS

1 Cache drive BTRFS

 

I have not seen any issues with any of them.  I have been running dd benchmarks on the various filesystems and they all perform about the same.  Occasionally I see slightly faster times on XFS and BTRFS, but nothing significant.

 

I haven't had any crash or need for filesystem repair yet, so I can't judge anything on that.

 

It is not a lot of work to switch between them as long as you have some space to move things around.
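A dd benchmark of the sort mentioned above can be as simple as the following; the target path here is a hypothetical scratch file, and on unRAID you would instead write into /mnt/diskN to compare disks formatted with each file system:

```shell
# Scratch target; point this at a file on the disk under test.
TARGET=/tmp/dd-bench.bin

# Write 64 MiB and force it to stable storage before dd reports a rate,
# so the page cache doesn't inflate the number. dd prints its summary
# line (bytes, seconds, MB/s) on stderr; keep just that line.
dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fdatasync 2>&1 | tail -n1

SIZE=$(stat -c%s "$TARGET")
echo "wrote $SIZE bytes"
rm -f "$TARGET"
```

For a fair comparison, use the same bs/count on every disk and repeat a few times, since a single run can be skewed by whatever else the array is doing.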


Being mostly ignorant on BTRFS, other than the little I've read here, I've got a question.  BTRFS is 'experimental' but considered 'mostly stable' for one disk, from what I've read.  My question is: what's the 'worst' that can happen if I convert my drive to BTRFS and it has an issue?  Would it really be any worse than having a ReiserFS disk have an issue?  i.e. couldn't I just 'fix' it with the parity disk info?

 

It seems like the upsides to BTRFS (file checksumming, awareness of bitrot, etc.) outweigh the current system, and if the effects of a failure aren't really any worse than with the current system, why not give it a go?

 

It sounds like the 'experimental' stuff is related to using BTRFS in a pool/raid setup, which I'm not very interested in, other than perhaps on my cache drive.  If using it on one disk is considered 'reliable' now, I'm leaning towards converting my array disks, one at a time, to this format.

 

Thoughts?


Being mostly ignorant on BTRFS, other than the little I've read here, I've got a question.  BTRFS is 'experimental' but considered 'mostly stable' for one disk, from what I've read.  My question is: what's the 'worst' that can happen if I convert my drive to BTRFS and it has an issue?  Would it really be any worse than having a ReiserFS disk have an issue?  i.e. couldn't I just 'fix' it with the parity disk info?

If the physical disk fails, then it makes little difference which format you used.  The parity disk can restore a replacement disk to the state at the point of failure, as it works at the physical sector level on the disks.

 

The problem comes when any sort of file system corruption occurs (where the parity disk is of no use for recovery purposes).  This seems to occur with some frequency.  This is the scenario where reiserfsck is used, and it has proved very reliable at fixing the corrupted file system and recovering data.  I do not believe that a tool of this quality exists for btrfs (at least so far), so if you get this class of problem, data loss is quite likely.

 

Not sure how good XFS and its tools are, or whether the XFS format allows recovery from such problems.  As XFS is quite mature, the tools are quite likely good, but we need more information on this.


I've been trying to find some good XFS fsck recovery stories to see if its tools can match what reiserfsck has shown it can do, but with no success.  The separate xfs_check and xfs_repair utilities can help, but they apparently aren't nearly as robust as reiserfsck.

 

btrfs has more promise ... but at this point it seems it's just that -- a "promise" of a lot of really cool features that aren't necessarily ready for prime time.

 

As long as all your data is backed up, it clearly doesn't hurt to experiment a bit ... but it's not at all clear that there are any data integrity benefits to switching (except possibly with btrfs once it evolves a bit).

 

