
Migrating data from old unRAID server to new



Greetings, fellow unRAID enthusiasts.  This is my first forum post, although I've been using unRAID for many years.

 

My first unRAID server has grown too large (64TB) for the outdated hardware I built it on (PCI slots), and I'm in the process of building a second server.

 

I'll probably keep them both, but somewhat downsize the old server and relegate it to longer-term storage of less-frequently accessed data.

 

My question regards copying data between the servers.  I don't want to use the network because I only have 100Mb/s on the old server.  At that rate each terabyte will take about a day to copy (100Mb/s is roughly 12.5MB/s, so a terabyte takes around 80,000 seconds, or about 22 hours).

 

(I searched for things like "migrate" but most results seemed to be about upgrading server versions or moving all drives from one host to another.)

 

My idea is to mount the drive(s) containing the data to be copied onto the new server (read-only!) and 'cp -r' on the command line from the mounted drive(s) to the new array.  I've already confirmed that I can do the required mounting on the new server, via eSATA, of a test drive that was once in an unRAID array.
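Concretely, something like this is what I have in mind (an untested sketch: /dev/sdX1 is just a placeholder for the actual device, and I'm assuming the old disks are ReiserFS since they come from an unRAID 5 array):

cd /mnt
mkdir guest
mount -t reiserfs -o ro /dev/sdX1 guest   # read-only, so the old array's parity stays valid
cp -r guest/Movies /mnt/disk1/            # repeat for whatever needs copying
umount guest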

 

(The old server *was* version 5.0.beta12 until this evening when I upgraded it to 5.0.5, the same version as the new server.)

 

I just want to make sure I'm not missing something that would make the proposed approach unworkable.

 

Thanks for your help, and a big thanks to LimeTech for building and maintaining such a quality system!

 


Link to comment

There are a variety of approaches you can take ...

 

(1)  As you've noted, you could mount the old drives one-at-a-time and copy the data to the new server;

 

(2)  As dgaschk suggested, you could simply use the old drives in your new server;  then upgrade them one-at-a-time to larger drives.    Note this won't work if your old server has any IDE drives, although with 64TB that seems unlikely, as your average capacity is well above the size of the largest IDE drives.

 

(3)  Simply buy a PCI Gb network card for the old server [ http://www.newegg.com/Product/Product.aspx?Item=9SIA24G1XA5134 ] and transfer the data across the network.

 

I'd do #3, and keep the old server to back up your data.    The Gb network connection would make a daily or weekly sync very simple to do -- and the initial copy to the new server would be at the maximum rate the server can write, assuming you've assigned a parity drive (and close to it even if you haven't).
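For example, once the initial copy is done, a periodic sync could be as simple as something like this (the hostname and share are placeholders, and it assumes rsync and SSH are available on both servers):

rsync -a /mnt/user/Movies/ oldserver:/mnt/user/Movies/   # mirror a share from the new server to the old one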

 

 

 

 

Link to comment

Just FYI, I also recently started migrating to a new server, so I tried two solutions that both worked:

1. mounting my HDD on the new server using an external docking station and SNAP

2. mounting my HDD on my Windows machine using the external docking station and YAReG (which allows Windows machines to read ReiserFS)

 

The 1st solution was much easier and faster, obviously.

 

 

Link to comment

Thanks everyone for the suggestions.

 

Keep in mind that I don't want to take the old server apart; at most I'd be downsizing it.  So I need to *copy* the data as opposed to just moving it.  That precludes installing the old drives in the new server, at least until some get freed up due to downsizing.

 

I had considered mounting the old drive as cache and letting the mover move it, but again, I want a copy.  I need to mount the drive(s) as read-only: besides not wanting the original files erased after the copy, I don't want *any* writing done to the old drive(s), as that would break the parity correctness when I put the drive back in the old server.

 

There are a couple of things I didn't mention about the old server, that rule out certain approaches to this migration.

 

I have no spare PCI slots due to using them all for eSATA controllers; it's a 2U rackmount chassis and it only has 3 PCI slots.  Once I downsize to only needing two eSATA controllers, I can and probably will try a GbE NIC.  Still, with PCI bus speeds, I can only achieve 1/4 or so of the full Gb/s speed, which of course is still much better than 100Mb/s.

 

This last one is a biggie: the "drive size" on the old server is much *larger* than what it will be on the new server.  Yeah, sounds backwards.  I'll be using 3TB, 4TB, and perhaps 6TB drives on the new server.  On the old server they are either 8TB or 12TB.  That's because I'm using RAID5 enclosures to create one logical "drive" out of five physical drives . . . the 64TB server only has 8 "drives", one 12TB parity drive and a mix of 8TB and 12TB data drives.

 

Again I thank everyone for their suggestions and apologize for not providing more of the configuration details in my first post.

 

Once I've tried the approach I have in mind, I'll have a better idea how well it will work going forward.  I mainly wanted to ensure the safety of the data I'd be subjecting to this process: the new server has yet to prove itself reliable, and until then I want the data duplicated.


Link to comment

Ah, I did not know that, although I had seen talk around here of the 6.0 release.

 

I'd still be able to mount the old drives as ReiserFS, I'm assuming.

 

Is there a resource that lays out the pros/cons of the two filesystems?  I'm sure I could do some internet searching but maybe you know of a more convenient, concise resource.

 

Thanks bjp999!

Link to comment

Yes, RFS is supported.

 

LimeTech has announced that they will ship new servers with XFS formatted disks. And that XFS will be the default FS unless the user changes it.

 

HERE is a link to a thread where the different filesystems are discussed. Some of the posts contain useful links.

 

Not sure if you saw, but 2 of the 6.0 betas had a serious RFS defect in the Linux kernel that unRAID shipped with. RFS's lack of popularity means that it will get less attention than XFS and BTRFS.

Link to comment

Yep, I could do that.  I'm just trying to avoid a week-long copy just to offload one of the eight RAID5 enclosures.

 

Regarding RFS versus XFS, there are a couple of things I've noticed about RFS that could be considered objectionable.  I realize that any filesystem design makes compromises, and I don't fault anyone for what I'm seeing; mainly I'm interested to hear whether XFS handles the same situations better.  That would help me decide to go with XFS for the new server.

 

Problem 1 is slow deletion of largish files (~10GB).  I can deal with that much better than the next problem.

 

Problem 2 is very slow initial creation of a largish file when the containing drive is nearly full.  Maybe this is due to the large "disk" size I'm using (8TB-12TB); I'd be interested to know if others are also seeing this problem.  It basically forces me to utilize the cache drive rather than write straight to the array.  The copies from the cache drive are still slow, but there isn't the timeout problem that I experience when copying from Windows.

 

More detail on the above problem: 'du --si' shows '70k' for files being created slowly due to this problem, often for several minutes, before the file size starts to grow normally.  I imagine the OS is searching for free blocks into which to write the file, but I don't know enough about how RFS works to be sure that's what's causing the slowdown.

Link to comment
  • 2 months later...

Is there a way to increase the timeout on the Windows side so it waits longer for the unRAID server to begin the file transfer before giving up on it?

 

This is a great question.  I am frustrated by this even when copying files under 2GB.

 

Here's a question.  Can I copy data from one server to another without using rsync?  I plan to copy to a different directory structure.  I am also copying from 5.x to 6.x.  Can I essentially use Midnight Commander to copy from one server to another?  Does that even make sense?  I imagine doing something involving a Windows workstation as an intermediary would just slow things down.  Please realize I know just enough to be dangerous but am not super Linux-literate.

 

Thank You

Link to comment

Hi storagehound.

 

I ran it by the systems administrator where I work, and it turns out to be more complicated than just increasing the timeout.  He seemed to have some ideas about how to fix the problem, but without getting hands-on with my server it would be hard to apply them.

 

The good news is that I'm not seeing the same problem with any of my new servers since I'm now using XFS for them.  It might be impractical to switch drives in an existing server from ReiserFS to XFS, but by all means use XFS for any new drives.

 

Regarding rsync and alternatives, I've used scp successfully:

 

scp -r <from-directory> <to-server>:<to-directory>

 

For example:

 

scp -r Movies server2:/mnt/user

 

I didn't confirm that those commands are *exactly* right but that should get you close.

 

Rsync also seems to work okay, to the extent that I've used it so far.

Link to comment

I'm a little late, but it is a simple task to mount the shares from one server on another server, and then you can easily copy directly on that server.

 

I couldn't get SMB shares to mount read/write but I got NFS shares to mount read/write very easily.

 

I actually have been mounting my Movies and TV-Shows shares from my unRAID5 box on an unRAID6 box for testing purposes. I'm still trying to figure out the best way to install a Newznab-type indexer on unRAID6, so I don't want to change over my unRAID5 server yet.

 

Link to comment

It's not too late.  I personally have a few terabytes to go yet.  Can you provide directions on how to do this?  Did you use something like MC (Midnight Commander) to do the copy?

Thank You

Link to comment

Here's the basic mount command.

 

mount 192.168.1.100:/mnt/user/Movies /mnt/disk1/Movies

 

I created an empty Movies directory on my test server disk1. I exported the Movies share on my main server using NFS and public security. So, "192.168.1.100:/mnt/user/Movies" points to the share on my main server and "/mnt/disk1/Movies" is the location where the share is mounted on the unRAID6 test server.

 

Then, once mounted, you can use MC or a command-line method to copy the files from the mount point to the new array.

 

You use this command to unmount the share.

 

umount /mnt/disk1/Movies

 

If you were doing a direct copy, I would consider mounting the share to the proper share name on the cache drive and then running the mover and letting it do the work.
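Roughly like this, assuming a share named Movies and the cache drive at /mnt/cache (an untested sketch, with my main server's IP as the example):

mkdir /mnt/cache/Movies
mount 192.168.1.100:/mnt/user/Movies /mnt/cache/Movies   # NFS-mount the old share where the mover will pick it up
# then start the mover from the webGui, or wait for its schedule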

 

Link to comment

Keep in mind that the mover will want to remove the source data as it is copied.  This may or may not be what you want.  In my case it wasn't: I wanted the data in both locations upon completion of the copy.

 

It worked for me to mount the drive(s) from the old server on the new one:

 

cd /mnt

mkdir guest

mount -t reiserfs /dev/sdz1 guest -o ro

 

Note the -o ro; without it, changes could be made to the source data, invalidating the parity.
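From there the copy itself is just, for example (the destination path depends on your new layout):

cp -r guest/Movies /mnt/disk1/
umount guest   # when finished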

Link to comment

Lion and bobkart, thank you.

 

I combined both of your methodologies while using shares and eliminating the 'reiserfs' part, and it worked.  I got much better copy speeds than I expected.  I used Midnight Commander (type: mc) and it was pretty straightforward.  I will modify my response to include the commands I typed, for clarity.  I am also thinking about seeing whether I can (or whether it makes sense to) bring Syncthing into this instead of Midnight Commander.  I'm so happy, guys!

Link to comment

I got a little too excited here. 

 

I was able to mount the source server on my destination server:  mount 192.100.1.200:/mnt/user/video guest -o ro

I was then able to execute Midnight Commander and it looked like everything was working.  But for some strange reason it would stop and say it could not read the 'stat' on some folders.  This seems random.  I would then go and examine the folders in Explorer, and even manually copying them that way worked fine.  It's utterly bizarre.  I really thought I had it.  I did get a lot of files copied, but now I see there are gaps.  I am going to break down and try rsync.  I so hate command-line actions for things like this.

 

 

Link to comment

Sorry to hear about your migration problems.  Not sure where they might be coming from; there might be something flaky regarding the share mounting.

 

In Linux-speak 'stat' is a verb, and 'cannot stat' just means the file's attributes could not be read:

 

http://linux.die.net/man/2/stat

 

Fortunately rsync can be used pretty easily for the situation where the target file doesn't yet exist.

 

I use a command like this:

 

rsync -aruv <source-dir> <target-server>:<target-dir>

 

where <target-dir> is the directory on the target server in which you'd like there to be a copy of <source-dir>.

 

Example: you have /mnt/user/share1/dir1 on the source server that you'd like to be copied (only what's missing), to the same share on the target server.

 

rsync -aruv /mnt/user/share1/dir1 192.168.1.xxx:/mnt/user/share1

 

By adding an 'n' to the options (-naruv) you can do a dry run; it will just tell you what it would have copied.  This is very useful for making sure you've got all the details right before committing to a potentially lengthy copy (although it can be interrupted with ^C).
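For the example above, the dry run would be:

rsync -naruv /mnt/user/share1/dir1 192.168.1.xxx:/mnt/user/share1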

Link to comment
