My first unraid- Rosewill RSV-L4411 with hot swap bays



This will be my first unraid build. It will be replacing a Drobo FS 5-bay and a Drobo 2nd generation 4-bay, and will be used as a basic media server for XBMC. I don't have any plans for virtualization, torrents, or transcoding at the moment. I'll probably only have a few addons related to server management: email notifications, a time server, that sort of thing. Would greatly appreciate any thoughts or advice.

 

Enclosure- Rosewill RSV-L4411

 

MoBo- SuperMicro MBD-X10SLL-F

 

CPU- Intel Xeon E3-1241v3 Haswell

 

ECC RAM- Kingston 8GB 240-pin DDR3 1333 ECC unbuffered server memory (x2)

 

Controller- SuperMicro AOC-SAS2LP-MV8 (original choice; see Edit 4)

                Dell PERC H310 flashed to IT mode (current)

 

PSU- Corsair HX-750

 

Hard drives will just be an assortment of whatever I have lying around. Most will just get moved out of one of the Drobos as the media is transferred. I will be using a 4TB parity drive and probably a 2 or 3TB cache drive. Thanks to everyone who has suggested various components over the past several weeks. Would really appreciate any feedback on this system and any changes I should consider before I get started building it.

 

Edit: Corrected links for motherboard and RAM.

Edit 2: Modified to reflect current design plans.

Edit 3: The AOC-SAS2LP-MV8 controller DOES NOT come with SAS-to-SATA cables. I had to order 2 SFF-8087 forward breakout cables. I missed this until I started building, so now I'm waiting on cables.

Edit 4: Replaced controller.


Well, if you're really just building a simple NAS, this should be just fine.

But if I were you, I would try to think a bit further into the future and consider whether you will be looking into virtualization in the next 2-3 years.

If you are spending money now anyway, you might invest in at least VT-d/IOMMU-supporting hardware. I am not sure your MB supports it, though...
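
A quick way to sanity-check this from a Linux environment is sketched below. This is not from the original post; it assumes /proc/cpuinfo and /sys/kernel/iommu_groups are available, and keep in mind BIOS settings or kernel boot options can still disable either feature:

    #!/usr/bin/env python3
    """Rough check for CPU virtualization (VT-x/AMD-V) and an active IOMMU (VT-d/AMD-Vi).

    A sketch only: assumes a Linux host; firmware or kernel settings can still
    disable either feature even if the hardware supports it.
    """
    from pathlib import Path

    def cpu_has_virt() -> bool:
        # 'vmx' = Intel VT-x, 'svm' = AMD-V in the CPU flags line.
        for line in Path("/proc/cpuinfo").read_text().splitlines():
            if line.startswith("flags"):
                return "vmx" in line or "svm" in line
        return False

    def iommu_active() -> bool:
        # The kernel populates /sys/kernel/iommu_groups only when the IOMMU
        # (VT-d/AMD-Vi) is enabled in firmware and on the kernel command line.
        groups = Path("/sys/kernel/iommu_groups")
        return groups.is_dir() and any(groups.iterdir())

    if __name__ == "__main__":
        print("CPU virtualization flag present:", cpu_has_virt())
        print("IOMMU groups populated (VT-d/AMD-Vi active):", iommu_active())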

 

Also, your RAM is wrong. The link is to a 204-pin SO-DIMM (laptop RAM), while the MB uses 240-pin DDR3 DIMMs.

 

 

 


I also noticed the laptop memory.

 

I'd consider 16GB of memory if you want to add VMs. That doesn't seem to be in the plan, but my guess is, if you're like me, plans change. You might do 2x4GB now and be able to add 2x4GB later if you wanted, rather than having to throw away 2x2GB sticks.

 

The mobo is a great price, but the SATA ports are in short supply. Those plus your add-on controller get you to 10, so you'd have 2 unconnected bays of your 12. Not the end of the world, but just sayin'. If you found a MB with 8 ports, all would be right with the world.

 

Regarding the PSU: I've heard bad remarks about the CX series. Not sure if it's single rail. Do your homework and make sure the PSU is up to the task.



 

To be honest, I wanted to keep this build as simple as possible. Building servers is way outside the box for me, so this'll be a learning experience. As far as virtualization goes, I barely know the word. Most of what I have read about it is complete gibberish that sounds like it came out of some sci-fi book. I haven't been able to find anything that describes the reasons why I would need it in a media server - all the threads I have found have been "how-to's", not "why's". I'm sure there are advantages, and at some point I may want to take advantage of them. That may be a good excuse in the future to build another unraid. :)


I'd consider 16GB of memory if you want to add VMs. That doesn't seem to be in the plan, but my guess is, if you're like me, plans change. You might do 2x4GB now and be able to add 2x4GB later if you wanted, rather than having to throw away 2x2GB sticks.

Thanks. I probably will go ahead and go with 2x4GB now.

The mobo is a great price, but the SATA ports are in short supply. Those plus your add-on controller get you to 10, so you'd have 2 unconnected bays of your 12. Not the end of the world, but just sayin'. If you found a MB with 8 ports, all would be right with the world.

I had the wrong link posted for the motherboard. I think the new one should work for what you are describing.

Regarding the PSU: I've heard bad remarks about the CX series. Not sure if it's single rail. Do your homework and make sure the PSU is up to the task.

Thanks. I saw this one posted this morning. Maybe a little more than I need but looks like a good price- http://www.newegg.com/Product/Product.aspx?Item=N82E16817139010



 

MB looks great except for one thing: no PCIe x1 slot. Why are motherboard manufacturers continuing to put PCI slots on these motherboards?

 

So you have 2 PCIe slots - an x16 and an x4. Works for you; wouldn't for me.

 

I need 2 x8 or better and at least one, preferably two, x1 or better. If anyone knows of a MB that supports VT-d, has at least 8 SATA ports, and has two wide and at least one narrow PCIe slot - and preferably NO PCI slots - all at a good price, let me know. I don't care about ECC.

 

Happy computing.



 

While you may not understand virtualization today, once you delve into UnRAID - especially with version 6 - you are likely going to get into it faster than you think. If you want an explanation of UnRAID virtualization and why you'd want it, look at the LimeTech blog; there is a post that explains it.

 

If you are financially constrained on your build, I would say go with what you can. However, for an extra $100 (give or take), if you can position yourself for virtualization, you will likely be patting yourself on the back in 6 months. If it were just a matter of adding an extra component down the road to get virtualization, that would be one thing, but since we are talking about your motherboard/CPU, it's something you want to make sure you get right the first time. You can always add memory later if/when you see the benefits of virtualization, but having to swap out the motherboard and/or CPU won't make sense in such a short period.

 

You mentioned you are new to UnRAID, which is why you are looking for advice (which is smart). I can tell you that just about any of the senior guys here, or long-term UnRAID users, will advise you to plan for virtualization now. You may not take advantage of it right away, but UnRAID is usually a long-term investment, and it makes sense to give yourself the best foundation you can.

 


 


OK, well, if you have ever built a computer from parts, you are good to go. PC or server, it's mostly the same process, just slightly different components.

With unraid it's mostly the same thing, as it can run on almost any hardware; server grade or desktop grade makes little difference most of the time.

Now, for virtualization and why you might need it... it's not such a simple answer.

 

The reasons WHY people want or need it differ. Most people here, and in fact anywhere, who get into virtualization are doing so for the simple reasons of consolidating hardware to save money and space, cutting down on heat, gaining the ability to try things safely and cheaply, and increasing overall system stability.

 

WHY might you need/want a VM on a media server? Well, that has something to do with unraid itself.

You see, unraid was built as a simple NAS with its own way of doing things (RAID and drive pooling) differently from the other technology out there, and it does what it was created to do very well. But some people want to do more with less, so a plug-in system was developed by knowledgeable users to expand unraid's capability, so you can run many things on a single physical machine. Each plug-in is created and maintained by its original developer, and mostly none of them are related to Lime-Tech, so Lime-Tech does not support 95% of the additions that are out there. Every time a new version is released, you need to pray that the plug-ins you have will work with it.

Also, if you find you want to do something not already available, you either need to hope that someone will create/port the app to unraid, or do it yourself.

 

A good example is Plex. You cannot run the native Plex app on unraid; it simply doesn't work. Someone here with the knowledge and time sat down and managed to create a plug-in that enables Plex on unraid, but as we moved from v5 to v6, that plug-in stopped working. Now, if the original person who created the plug-in has the time and is willing to update it, we are good. If not, we are up the creek without a paddle until someone else can and will do it.

 

Now, with virtualization (which unraid supports starting with v6), you can spin up a VM on unraid with a full-blown distro of your choice (for Plex, say, Ubuntu or Debian is the obvious choice), install Plex in the VM, point it at your unraid share, and be done. #1: you are running the stock app, and if you want or need to update it, you can do so any time you want. #2: it runs totally insulated from the main system (unraid); if it misbehaves, it will not bring the whole system down, just the VM.

If you get tired of it, just dump the VM and you are done. If you want to use something else instead (XBMC, whatever), spin up a VM, install and configure it, try it out, and if all is good, dump the VM with Plex and use the new favorite.
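
That spin-up/dump lifecycle can also be driven from a script. Here is a minimal sketch using the libvirt Python bindings, assuming a KVM/libvirt host and a hypothetical guest already defined as "plex-vm"; neither assumption comes from this thread, and early unraid 6 setups also used Xen:

    import libvirt  # assumes the libvirt-python bindings are installed

    # Connect to the local hypervisor (KVM/QEMU assumed here).
    conn = libvirt.open("qemu:///system")

    # "plex-vm" is a hypothetical, already-defined guest used for illustration.
    dom = conn.lookupByName("plex-vm")

    # Spin it up if it isn't already running.
    if not dom.isActive():
        dom.create()
    print("Running:", bool(dom.isActive()))

    # Later, when you're done experimenting: power it off and drop its definition.
    # destroy() is a hard power-off; undefine() removes the VM definition,
    # which is the "dump the VM" step (disk images still need deleting separately).
    # dom.destroy()
    # dom.undefine()

    conn.close()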

 

Sound interesting?

 

 

 

 

 


OK. I can see that it makes a lot of sense to go ahead and select a CPU/motherboard that would accommodate virtualization. I'm totally in the dark about what I should be looking for, though. Any advice, recommendations, etc. would be appreciated - that's the whole reason I started here, and it would be foolish to ignore what everyone is telling me. I wouldn't mind spending an extra $100 or maybe $200US if anyone has suggestions on changes. This is going to be a rather stretched-out project anyway, a part here and another piece later. I wouldn't say that price doesn't matter (it does), but I don't mind if it takes a little longer to finish.

 

Would anyone care to comment on the revised PSU I posted in my first post? It's on sale till Monday at what I believe is a pretty good price. I'd like to take advantage of that if it's going to work. Everything else is changeable except the enclosure.

 

 

I will also add that I really appreciate everyone's input and the time you have spent, or will spend, helping me with this.

 


 

I found this- http://en.wikipedia.org/wiki/List_of_IOMMU-supporting_hardware

Not sure how up-to-date it is; many of the references are from 2012. The only Xen-compatible motherboard listed here is this one- http://www.newegg.com/Product/Product.aspx?Item=N82E16813128514 and maybe this CPU- http://www.newegg.com/Product/Product.aspx?Item=N82E16819103888. The CPU looks a little underpowered to me, but it was either this one or a 6-core monster; I couldn't find anything in between. I'm really in the dark with this - I don't have any experience with AMD.


wgstarks: the easiest way to explain virtualization - the CPU in most computers is idle 99% of the time. Even when you are actually doing something, unless it is something like high-end gaming or serious HD editing, you're generally left with a device that is severely underutilized. Certain use cases in the past each required a whole new build. When virtualization came about, you could essentially consolidate multiple builds that had required separate machines onto one, maximizing the available resources (spend and power consumption, to name two).

 

When reading through this forum, Xen is heavily discussed. I prefer ESXi. Assuming you purchase compatible components, it is IMO much easier to configure, as you use a GUI in Windows to set up your virtual machines (or builds, using the above nomenclature) and rarely need the command line to configure anything.



Thanks for the reply. If I'm using Mac OS X instead of Windows, would you still recommend ESXi? I've never used Windows and probably won't try to start learning it now. I think just trying to learn the ins and outs of unraid will be enough of a challenge for the immediate future. The updated components in my first post reflect a Xen build, but I believe they could be easily modified for ESXi.


Finally get to start building. A few of the parts came in today; hopefully the rest will be here by the weekend. Since Rosewill doesn't bother to include much in the way of documentation, I'm going to try to document the process with pictures. I'm not much of a photographer, though.

 

First photo is the case with front panel unlocked to show hot swap bays.


 

Next is a photo with the top cover removed. I have already installed the PSU.


 

Here are the cooling fans and SATA connections for the hard drive cages. I'll try for some better close-up photos when I connect the wiring.



I built two servers using that Rosewill chassis and am completely happy with it, but two things to consider:

 

#1 - Some hard drives will not turn on the green light on the trays. There is nothing wrong with the tray/backplane; it *seems* to be a firmware issue on the drive, as another drive plugged into the same tray/slot will operate the light perfectly. Note that the drive works fine... just the light isn't on when idle.

 

#2 - For an inexpensive 12-bay hot swap case, the first thing I expected Rosewill to skimp on was the 120mm fans on the drive cages. I never even bothered using them. Instead I picked up three high static pressure 120mm fans (can't remember what brand) and replaced them right away. During parity checks, my drive temps never get above 29C. For pulling air through tight openings, high static pressure is far more important than CFM.
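
For keeping an eye on drive temps during a parity check, something along these lines works as a rough sketch; it is not from the original posts, and it assumes smartmontools is installed, root access, and that the example device names match your drives:

    #!/usr/bin/env python3
    """Print drive temperatures by parsing `smartctl -A` output.

    A sketch only: assumes smartmontools is installed, root privileges, and
    that the device names listed below match your system.
    """
    import subprocess

    DRIVES = ["/dev/sdb", "/dev/sdc", "/dev/sdd"]  # adjust to your drives

    for dev in DRIVES:
        out = subprocess.run(["smartctl", "-A", dev],
                             capture_output=True, text=True).stdout
        for line in out.splitlines():
            # SMART attribute 194 is Temperature_Celsius; the raw value is the
            # tenth whitespace-separated field on that line.
            if "Temperature_Celsius" in line:
                print(f"{dev}: {line.split()[9]}C")
                break
        else:
            print(f"{dev}: temperature not reported")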

 

 


I built two servers using that Rosewill chassis and am completely happy with it, but two things to consider:

 

#1 - Some hard drives will not turn on the green light on the trays. There is nothing wrong with the tray/backplane; it *seems* to be a firmware issue on the drive, as another drive plugged into the same tray/slot will operate the light perfectly. Note that the drive works fine... just the light isn't on when idle.

Thanks. Haven't run into this yet, but I've only got 6 drives up and running. Still waiting for my SAS to 4 SATA cables to arrive so I can hook up the rest.

 

#2 - For an inexpensive 12-bay hot swap case, the first thing I expected Rosewill to skimp on was the 120mm fans on the drive cages. I never even bothered using them. Instead I picked up three high static pressure 120mm fans (can't remember what brand) and replaced them right away. During parity checks, my drive temps never get above 29C. For pulling air through tight openings, high static pressure is far more important than CFM.

I've been using the Rosewill fans for almost a week now. Transferring data most of that time. Haven't had any temp issues. Highest I've seen was 40C, but most are running about 36C. Not quite as low as yours but I don't think they're high enough to be an issue. I was surprised at how quiet they are also.

 

If you come across the info for the fans you used, please let me know. I'll hang on to it in case temps do become an issue after all is complete.

 

So far the only issue I'm having is that the power LED on the front of the case is not working. I've probably got it hooked to the wrong pins on the MoBo.

