
How long to transfer 1.5TB of Media?



I'm using rsync on my new Unraid build to connect via SSH to my old Ubuntu server and transfer the data to my new machine.
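
The pull is along these lines; the exact flags, paths, and hostname below are illustrative rather than my exact command:

    rsync -avh --progress root@old-server:/mnt/media/ /mnt/user/Media/

(-a preserves permissions and timestamps, -h gives human-readable sizes, and --progress shows per-file transfer speed so you can watch the throughput.)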

 

I have a 2TB parity drive and two 1TB drives for the data.  The parity is still syncing as well.

 

It's been about 2 days now, and I've only transferred about 100GB of the 1.3TB of info.

 

Is it normal to take this long to move the data?

 

 

 

Link to comment

I just copied 1GB to my server using TeraCopy and it took 58.8 seconds.

The second file, 1.04GB, took 1.04 minutes.

The third file, 2.19GB, took 2.37 minutes.

 

I'm running Green drives so they are not the fastest, and I'm not running a cache drive to speed things up.

 

Of course each and every machine is different, but I'd say that 100GB in 2 days doesn't sound right.

 

Your method is different from mine, so please take what I said with a grain of salt.

Link to comment

I'm using rsync on my new Unraid build to connect via SSH to my old Ubuntu server and transfer the data to my new machine. ... It's been about 2 days now, and I've only transferred about 100GB of the 1.3TB of info. Is it normal to take this long to move the data?

 

You really need to post up your hardware, your network speed, and the specs of the computer that you are using as the transfer point to your unRAID server.
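
If you want to rule out the network first, a quick iperf run between the two boxes gives a hard number. A minimal sketch (the hostname is a placeholder, and this assumes iperf is installed on both ends):

    # on the unRAID box
    iperf -s
    # on the old Ubuntu server
    iperf -c tower -t 30

A healthy gigabit link should report somewhere around 900 Mbit/s; anything dramatically lower points at the network rather than the disks.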

Link to comment

"... The parity is still syncing as well. "  ==>  Therein lies the problem.

 

While it's certainly possible to use the UnRAID system during a parity sync or parity check, it will slow things down DRAMATICALLY.    You're "thrashing" the disks extensively when you do this ... which makes both the file operations (e.g. reading and writing) and the parity check/sync (which is, after all, just a long stream of disk reads and writes) MUCH slower.

 

Cancel your data copies;  let the parity sync complete;  and THEN copy your data.

 

Assuming you aren't using a controller that's limiting your transfer speeds (e.g. using a PCI slot or a PCIe x1 slot), you should, with native SATA speeds and "green" drives typically see write speeds in the range of 1GB/minute with parity enabled (slightly faster on the outer cylinders; slightly slower on inner cylinders).    So I'd expect 1300GB of data to take about 22 hours to load.
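
(For the arithmetic: 1300GB ÷ 1GB per minute ≈ 1300 minutes, which is roughly 21-22 hours.)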

 

But doing this while a parity check/sync is running could easily take ten times that long, due to the extensive thrashing.

 

 

Link to comment

Thanks for the help everyone.  Here's what I've done thus far since seeing everyone's help.

 

I stopped the data transfer

Rebooted the machine and will let the Parity Sync complete before restarting the copy.

 

I've also attached the syslog.

 

Also, I am using a PCIe card in the machine, since I have a Norco 4020 case being used with an M1015 card and an extender.  My Norco setup is a custom build with 32GB of RAM in it.

 

I have Unraid set up as a VM with ESXi and for now gave it 8GB of memory.  My older machine was a pre-built Via NAS7800 case that I know only has 512MB of memory in it as well.

 

And everything is connected to a 20-port switch in my basement.

 

 

syslog-2012-11-24.txt

Link to comment

Try this:

 

Edit syslinux.cfg on the flash.

Change the line:

      append initrd=bzroot

to

    append initrd=bzroot pcie_aspm=off

 

Then reboot and run parity check again.
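
After the reboot you can confirm the flag took effect from the console; these are generic Linux commands, nothing unRAID-specific:

    cat /proc/cmdline        # the boot line should now include pcie_aspm=off
    dmesg | grep -i aspm     # ASPM is PCIe link power management; with the flag set it should be reported as disabled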

 

Just did that here at 5:30pm.  Will report back in a couple of hours on its progress.

 

Do I need to revert that after the parity is complete?  What exactly does enabling pcie_aspm=off do?

 

Link to comment

Well, 12 hours later, my parity Rebuild is at 7%. 

 

Is this normal for a 2TB parity drive?  At this rate, it will take a week for the parity sync to complete.

 

Shut down the unMenu interface on your Windows computer and don't access your unRAID server using a web-based interface for at least six hours.  The reason for this is that running the interface 'steals' CPU cycles from the parity sync operation and slows it down.  I have observed the 'a watched pot never boils' effect in all parity rebuild (or checking) operations.  Watching the progress will significantly slow the process down...
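
If you still want an occasional progress number without loading the web interface, you can pull it from a telnet/SSH session instead. On unRAID the md driver exposes its status through /proc/mdcmd; treat the field names here as a rough sketch from memory rather than gospel:

    cat /proc/mdcmd
    # look for the mdResync* lines; the current position versus the total size gives the percent complete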

Link to comment

I missed seeing that last syslog. I didn't look at every line but I didn't see anything obvious. There is something wrong though with those slow speeds. A 2T parity build should take around 6-8 hours. You might want to try combinations of 2 disks at a time until you see a faster parity sync speed. You might see a slow speed right at the start but within a few minutes the speed should be much faster than what you're seeing.

Link to comment

How about this reference to VMware? 

 

 

Nov 24 22:08:01 UnRaidServer kernel: DMI present.
Nov 24 22:08:01 UnRaidServer kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 01/07/2011
Nov 24 22:08:01 UnRaidServer kernel: Hypervisor detected: VMware

 

I don't find it in my SYSLOG...

Link to comment

Do I need to revert that after the parity is complete?  What exactly does pcie_aspm=off do?

 

If this does not help, then undo it.

Link to comment

Well ... a parity sync completely isolates the speed issues from any network speed issues, as it doesn't use the network at all (except when you refresh the web interface -- but that's a truly trivial impact on the system).

 

Your PCIe controller card is fine -- anything that's PCIe x4, x8, or x16 is fine.    Only PCIe x1 and PCI cards cause bus-limiting issues.

 

I see two things in your configuration that probably account for the slow speeds:

 

(1)  You're using port multipliers, which slow down a SATA port somewhat (I presume that's what you mean when you say you're using "extenders").    But I don't think that's the cause of your issue.

 

(2)  You're running your UnRAID server as a VM.    THIS is almost certainly the cause of the slowdown ... especially if the drives are also mapped as virtual drives.    But even with direct passthru drives, it will still run much slower than a native UnRAID.    I'm surprised at just how much it's slowing down ... but I really don't see anything else in your configuration that might account for it.

 

To confirm whether or not you have any actual hardware issues, I would set up UnRAID natively ... just boot the USB key directly on the system instead of through a virtual machine, and assign it the same set of drives you're already dedicating to it.    Get the array initialized;  then run a parity sync.    I'd expect the parity sync to take in the neighborhood of 10 hours ... although it could be longer due to the port multipliers (but NOT the "days" you're seeing now).

 

If the system runs at "normal" speeds natively, then your issue has something to do with VM setup ... so at least you'll know what to focus on.

Link to comment

But even with direct passthru drives, it will still run much slower than a native UnRAID.   

 

That's not really true.  With modern hypervisors on good hardware, you will only see a 1-5% slowdown, if any at all (assuming you don't overtax the host system).  My virtual unRAID is running at almost identical speeds virtualized vs. bare metal.

 

@ptmuldoon: You really have not given us a lot of information about your physical hardware.  A list of what is in your server and how it is configured would really help.  Without a full list of what you have going on in there, we can only guess.

 

garycase has an excellent suggestion of pulling your ESXi flash, booting the server from the unRAID flash, and seeing if the problem still exists.  That will help determine whether it's a hardware or a configuration issue.

 

If your unRAID drives are on a passthrough M1015, you should be completing a parity build in about 7-12 hours depending on the type of 2TB drives.

Anything longer and you have a bottleneck somewhere.

 

 

EDIT:

I took a quick glance at the syslog.  Nothing jumps out at me.  It looks like you have 2 WD 1TB drives and a 2TB drive on a RES2SV240.  How is that RES2SV240 wired up?  Have you run a SMART test on those drives?  I see no drive-related errors, but you never know.

 

Also, did you set the drives up as 512B or 4K?
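
For the SMART test, something along these lines from the console should do it; smartctl is the standard tool, /dev/sdb is just a placeholder for the suspect drive, and whether the bundled version reports sector size I can't say for certain:

    smartctl -a /dev/sdb         # full SMART report (watch the reallocated / pending sector counts)
    smartctl -t short /dev/sdb   # start a short self-test, then re-check with -a after a few minutes
    smartctl -i /dev/sdb         # identify info; newer versions also print the sector size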

Link to comment

Guys,

 

Thanks again for all the help and support.  As this is still a test machine, I have removed ESXi from the equation and have booted straight from the unRAID USB.

 

After not seeing much speed improvement from letting it sync for an hour last night, I then stopped the sync, removed the drives from the array, and started preclearing the 3 disks.

 

I started that preclear last night about 9pm, and it's still going today as of about 5pm.  It doesn't even look to be close to being done either.  I am away from home for the next few days, but I can VPN into my home and then SSH into the machine to check the preclear status running in screen.
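
Checking on it remotely is just a matter of reattaching to the screen session once I'm in over SSH, roughly:

    screen -ls    # list running screen sessions
    screen -r     # reattach (add the session name if more than one shows up)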

 

 

 

Also, did you set the drives up as 512B or 4K?

 

John, I'm actually not sure what you are referring to here.  Where would I set this up?

 

I am away from home for the next few days.  But if the preclear is still running when I return, I think the next step may be to bypass the RAID card, expander, and port multipliers and go straight to the motherboard SATA ports to test further.

Link to comment

I am still away from home until Thursday, so I cannot do anything physical to the machine.  But in connecting remotely to check on things, I'm pretty sure something is still definitely wrong.

 

To summarize:

I've eliminated ESXi from the equation, booting straight to unRAID.

The machine has an M1015 card along with a SAS expander and port multipliers.

 

I am running preclear on only the 2TB drive.  And the machine has 32GB of RAM in it.

 

And Preclear is showing speeds of 648KB/sec.    (attached pic)

 

Does that seem right to anyone?

preclear.gif

Link to comment

That seems very wrong.  On my ESXi server, with an M1015, these are the speeds for preclearing a 3TB WD Red drive (from the rpt file that's generated):

 

== Using :Read block size = 8225280 Bytes

== Last Cycle's Pre Read Time  : 8:00:18 (104 MB/s)

== Last Cycle's Zeroing time  : 7:06:27 (117 MB/s)

== Last Cycle's Post Read Time : 20:36:45 (40 MB/s)

== Last Cycle's Total Time    : 27:44:11

 

I'd try removing the SAS expander and port multipliers.  I haven't had time to read the entire thread,  but are you passing your M1015 through to unRAID?  I am, so that could account for some difference. 

 

 

Link to comment

Sounds like a bad disk to me.  I've had drives show nothing on the SMART report but still have extremely slow access speeds.  Can you try to clear the drive on a different box?  Or a different drive on your current box with the M1015?  Even in passthrough mode on my M1015 and SAS expander I get 40-80MB/s on a parity check.

As a matter of fact, I am clearing 3 Seagate 7200 drives at the same time; two of them are at 90% done on the first cycle of the zeroing step, the third is only at 50%, and they all show an elapsed time within 2 minutes of each other.  This ISN'T on my M1015, as they are not headed for my unRAID server.  I preclear all disks, even those not destined for unRAID, so I have a separate box that I use as a preclear station.
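
If you want a quick number before committing to another multi-day preclear, a raw read test on the suspect drive tells you a lot.  hdparm is normally available from the unRAID console; the device name below is a placeholder:

    hdparm -tT /dev/sdb
    # -T reports cached (memory) reads, -t reports buffered disk reads
    # a healthy 2TB green drive should manage well over 60 MB/s; hundreds of KB/s points at the drive or its link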

Link to comment

Archived

This topic is now archived and is closed to further replies.
