3TB drives with 4.7



I know using 3 TB drives at their full capacity with unRAID 4.7 is impossible, but my unRAID 4.7 servers are 98% full and I need more space.  I bought a few 3 TB drives and precleared them at full capacity (I believe; it has been a while).  So the question is: can I drop those guys into 4.7 and use them as is for now?  I only have a 2 TB parity drive that I don't think I want to replace right now...  Anyway, any ideas or suggestions would be greatly appreciated.  I don't want to invest in 2 TB drives, considering they are just too small given the limited number of drive slots...

 

Thanks,

 

Neil

 

Link to comment

I found this: "But to answer your question, the number of sectors in a 2T drive is 3907029168."

 

But that wasn't from you, Joe, so I am not sure if I have the right place.
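For what it's worth, that sector count does check out (my own arithmetic, not from the thread): drives quote decimal terabytes and use 512-byte logical sectors.

```shell
# Sanity check: 3907029168 sectors at 512 bytes per sector
# works out to a decimal 2 TB drive.
sectors=3907029168
bytes=$(( sectors * 512 ))
echo "$bytes bytes"   # 2000398934016 bytes, i.e. ~2.0 TB (decimal)
```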

 

Can I run hdparm after preclearing, or would I need to re-preclear after running hdparm?

 

Thanks,

 

Neil

 

Link to comment

I think this is for 2.2TB:

http://lime-technology.com/forum/index.php?topic=18666.msg166892#msg166892

I have a Western Digital 3TB drive that I am attempting to use as a Parity drive with 2.2TB of usable space using Unraid v4.7. I have looked at other threads such as:

 

http://lime-technology.com/forum/index.php?topic=11183.0

 

I am having difficulty using the "--yes-i-know-what-i-am-doing" flag after entering:

 

hdparm

I think you enter it at the same time...

 

hdparm --yes-i-know-what-i-am-doing -N p4294963168 /dev/sdX
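As a side note (my arithmetic, not from the linked thread), the p4294963168 value is not arbitrary: it sits just under the 32-bit LBA ceiling of 2^32 sectors, which is where the ~2.2 TB limit comes from.

```shell
# 2^32 sectors is the 32-bit LBA addressing ceiling (~2.2 TB at 512 B/sector).
# The p4294963168 argument clips the drive to just under that ceiling.
ceiling=$(( 1 << 32 ))          # 4294967296 sectors
sectors=4294963168
echo $(( ceiling - sectors ))   # 4128 sectors of headroom below the ceiling
echo $(( sectors * 512 ))       # 2199021142016 bytes, i.e. ~2.2 TB
```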

Link to comment

would I need to re-preclear after the HDParam?

Definitely; it needs to be re-done afterward to be recognized as a valid pre-clear signature...

 

However, the pre-clear signature is not used by unRAID when replacing an existing drive. It is only used when adding an ADDITIONAL data drive to an already parity protected array. 

Link to comment

Wow that sucks!

 

Was I right about the hdparm command?

Why?  If you already pre-cleared the entire 3 TB drive, and if it looks good to you, AND

if you are simply replacing existing drives, then the pre-clear signature is not involved at all.  No need to clear again at all (unless you are adding an ADDITIONAL data disk to a slot in the array not currently populated)

 

If all you are doing is up-sizing existing drives, the pre-clear has no effect on the time it will take to re-construct onto the new drive.  Drives that are replacing existing data drives are written to in their entirety by unRAID in the reconstruction process.

 

If you are running the pre-clear process to exercise the disk and check for bad sectors you need to do it BEFORE the HPA is added to artificially make the disk look smaller.  Otherwise, the top third of the disk will not be precleared/tested.
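To make that ordering concrete, here is a sketch of the sequence being described. The preclear script name and the device letter are placeholders; adjust for your setup, and note the commands below need a real disk, so this is illustrative only.

```shell
# 1. Exercise/test the FULL 3 TB disk first (no HPA yet).
preclear_disk.sh /dev/sdX

# 2. Then add the HPA to clip it to 2 TB worth of sectors.
hdparm --yes-i-know-what-i-am-doing -N p3907029168 /dev/sdX

# 3. Power-cycle the drive, then preclear once more so the signature
#    matches the new, smaller size (only needed when ADDING the disk
#    to a parity-protected array, per the posts above).
preclear_disk.sh /dev/sdX
```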

Link to comment

Out of curiosity why go through the hassle of trying to use the 3TB drives as 2TB drives?  You are making a lot more work for yourself down the road when you switch to unraid 5.0.  You are going to have to completely redo those drives again.  Why not just switch to 5.0 now?  Yes you will have to redo your parity drive with one of the 3TB drives, but that is all you will have to redo.  Then you can add the other new drives to the array in their full capacity, and not have more work to do later on.

Link to comment

Yes I know I will have a lot of work and be shuffling things around.  I don't want to mess with parity so I am going to make them 2 TB to be used later.  What I will do is use these drives in my backup machine that I am replacing second.  In other words when unraid 5.0 is out and proven I will buy a host of new drives for my original server and start the upgrade with new drives.

 

Joe: I already pre-cleared the drives as 3 TB.  Now I want to make them work as 2 TB, so once I change everything with hdparm I have to preclear again, correct?

 

Thanks,

 

Neil

 

Link to comment

I ran the following and got the following error:

 

root@Storage2:~# hdparm --yes-i-know-what-i-am-doing -N p3907029168 /dev/sdl

 

/dev/sdl:

setting max visible sectors to 3907029168 (permanent)

SET_MAX_ADDRESS failed: Input/output error

max sectors  = 5860533168/5284784(5860533168?), HPA setting seems invalid (buggy kernel device driver?)

Link to comment

I ran the following and got the following error:

 

root@Storage2:~# hdparm --yes-i-know-what-i-am-doing -N p3907029168 /dev/sdl

 

/dev/sdl:

setting max visible sectors to 3907029168 (permanent)

SET_MAX_ADDRESS failed: Input/output error

max sectors  = 5860533168/5284784(5860533168?), HPA setting seems invalid (buggy kernel device driver?)

As it said, the disk controller/driver cannot handle that request.  You'll need to use a different disk controller that does report the size correctly, or, a different utility.

 

Oh yes, to keep buggy software from changing the HPA at will, most disks only allow ONE change in the HPA per power cycle.  They actually have to be powered down and back up before the HPA change command can be used again.

Link to comment

Is it unusual for the drive to deny the request?  I don't quite understand.  I can reboot this server, but now I am wondering if I should just go 5.x now :(  Decisions, decisions.

 

I'm just wary, and it is a big leap that I don't know if I am prepared for...

 

I thought I could use the 3 TB as 2 TB in the short term :(

 

Neil

 

Link to comment

Is it unusual for the drive to deny the request?  I don't quite understand.  I can reboot this server, but now I am wondering if I should just go 5.x now :(  Decisions, decisions.

 

I'm just wary, and it is a big leap that I don't know if I am prepared for...

 

I thought I could use the 3 TB as 2 TB in the short term :(

 

Neil

 

All software is beta, even if it is not labelled as such. I've never heard of anyone losing/corrupting data on any of the 5.0 betas/RCs, so there's no reason not to be on 5.0 at this point. The worst case is that you will have an issue that can be resolved by using a different beta/RC than the latest; I think this only applies to people with SAS-MV8 cards though.

 

It's no different than a normal unRAID upgrade: you just delete a couple more unRAID files, run New Permissions after upgrading, and stop using 'root' to access data. You are only going to have problems upgrading if you skip steps and don't follow the very easy-to-read instructions linked in the post above this... even then it's virtually impossible to lose data.
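A rough sketch of the file-deletion step, assuming the flash/config location mentioned later in the thread. To keep it safe to run, this version points at a scratch directory instead of the real flash share:

```shell
#!/bin/sh
# Sketch of the user-database reset for a 4.7 -> 5.0 upgrade: delete the
# old passwd/shadow/smbpasswd files from the flash config directory.
# CONFIG defaults to a throwaway temp dir here; on a real server it
# would be the flash share's config directory.
CONFIG="${CONFIG:-$(mktemp -d)}"
touch "$CONFIG/passwd" "$CONFIG/smbpasswd"   # simulate existing files
for f in passwd shadow smbpasswd; do
    rm -f "$CONFIG/$f"   # -f: no error if a file (e.g. shadow) is absent
done
```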

Link to comment

Did you delete these files (in flash/config/):

passwd

shadow

smbpasswd

* I deleted all of these except shadow since I couldn't see it!?

 

Did you run the New Permissions script?

*Yes, I ran this, but I put my cache drive in afterwards. Does that matter?

 

Did you re-add your users?

*I only had root before, and I share everything publicly, so it shouldn't be a problem.

 

Are you trying to use root for access?

* I don't know; I just map over to that machine in Windows, and I can see the shares and click on them.  I am never prompted for a user name and password, but they are publicly shared, as said above.

 

Have you restarted the clients?

*No, not yet. But I tried connecting from a different computer, which I don't think had connected before, and it was still an issue.

 

I can drill down in the web interface, so I know my stuff is all there (thank goodness).

 

Thanks!

 

Neil

 

Link to comment
