SSD last won the day on February 2

SSD had the most liked content!

Community Reputation

283 Very Good


About SSD

  • Rank
    unRAID Revolutionary


  • Personal Text
    ASRock E3C224-4L/A+, SM C2SEE/B, Asus P5B VM DO/C
  1. Are you able to share any information based on your current inventory?
  2. I can show you from my system. Here are two RAID0 disks in my array. The report always says the same thing, and the temperatures are not right:

     root@tower:/var/local/emhttp/smart# cat parity
     smartctl 6.5 2016-05-07 r4318 [x86_64-linux-4.14.16-unRAID] (local build)
     Copyright (C) 2002-16, Bruce Allen, Christian Franke,

     === START OF READ SMART DATA SECTION ===
     Current Drive Temperature: 30 C
     Drive Trip Temperature: 25 C
     Manufactured in week 30 of year 2002
     Specified cycle count over device lifetime: 4278190080
     Accumulated start-stop cycles: 256
     Elements in grown defect list: 0

     root@tower:/var/local/emhttp/smart# cat disk7
     smartctl 6.5 2016-05-07 r4318 [x86_64-linux-4.14.16-unRAID] (local build)
     Copyright (C) 2002-16, Bruce Allen, Christian Franke,

     === START OF READ SMART DATA SECTION ===
     Current Drive Temperature: 30 C
     Drive Trip Temperature: 25 C
     Manufactured in week 30 of year 2002
     Specified cycle count over device lifetime: 4278190080
     Accumulated start-stop cycles: 256
     Elements in grown defect list: 0
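A report like the ones above can be checked programmatically. A minimal sketch in Python (the `current_temp_c` helper and the embedded sample report are illustrative, not part of unRAID or smartmontools):

```python
import re

# Sample text in the shape of the smartctl output pasted above (illustrative).
SMART_REPORT = """\
=== START OF READ SMART DATA SECTION ===
Current Drive Temperature: 30 C
Drive Trip Temperature: 25 C
Elements in grown defect list: 0
"""

def current_temp_c(report: str):
    """Return the reported drive temperature in Celsius, or None if absent."""
    m = re.search(r"Current Drive Temperature:\s+(\d+)\s*C", report)
    return int(m.group(1)) if m else None

print(current_temp_c(SMART_REPORT))  # -> 30
```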
  3. I do agree with the video above in the sense that it is good to have long-term, high-volume data on drives used in an environment most like the one you will run them in. Unfortunately, we can't arrange such a study. But even if BackBlaze is not exactly your use case, it does provide a laboratory that exposes all drives to a pretty consistent "average" usage pattern over time. Would you expect enterprise drives to do better? Yes, you might. Desktop drives worse? Yes, you might. But what if you find some desktop drives that perform as well as or better than enterprise drives? Would the BackBlaze study help you find those gems in the market? YES. So while you might say Seagate should not be penalized for having desktop drives that are pretty crappy for enterprise use, other manufacturers might be complimented for selling a product that is over-engineered and works well for both.

     And in thinking about the use case of BackBlaze - are we really that different? Our media drives tend to get filled up rather quickly. Once full, deletes are rare, but they do occur, and drives are occasionally repurposed and refilled. BackBlaze is filling drives rather quickly with lower-volume updates. They have client turnover, so deleting one customer's data and replacing it with new customer data happens at some level. Our disks are often spun down when not being accessed. BackBlaze data is mostly backups that may sit unaccessed for long periods or forever. We run parity checks that BackBlaze probably doesn't. But overall I really don't think we are so different. Maybe for the video above, someone who plans to install Windows on a 3T spinner in a gaming case would have a very different use case. But for us unRAIDers, I think it is pretty similar.

     I have had the best luck with Hitachi and HGST drives (and maybe I'll throw in the Toshibas that were acquired from Hitachi). The Seagates during the 2T-4T years were the worst of the worst for me. I lost several and swore off of them. Recent 8T WD RED and Seagate SMR purchases are not old enough to comment on, but so far so good. I still think BackBlaze data is valuable if used properly. And they would have you buying HGST and steering clear of Seagates - very consistent with my personal experience. If an idiot savant comes up with the right answer, you have to give him credit, even if you don't agree with or understand his method!
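For context on reading BackBlaze's numbers: they publish annualized failure rates, which can be approximated as failures divided by drive-days of operation, scaled to a year. A quick sketch (the function name and the sample figures are invented for illustration):

```python
def annualized_failure_rate(failures: int, drive_days: int) -> float:
    """Annualized failure rate as a percentage: failures per drive-year."""
    return failures / drive_days * 365 * 100

# Illustrative (made-up) numbers: 60 failures across 1,000 drives
# running for a full year, i.e. 365,000 drive-days.
print(round(annualized_failure_rate(60, 365_000), 2))  # -> 6.0
```

A model with a low rate over many drive-days is a safer bet than one with the same rate over a handful of drives, which is why the size of the BackBlaze fleet matters.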
  4. External parity drive?

    Sounds like you may not have enough slots for your array. I think the biggest mistake people make is planning for too few disks. I'd suggest setting up a new server, this one with enough slots to grow, and adding at least a couple of disks. Get your data moved over server-to-server. Once the new server is working well, you can physically move the larger disks into the new array and keep the old server for backups and emergency use. The new server can have a low-powered CPU and minimal memory that you'd plan to exchange with the existing server once all is set up. A motherboard transplant is not so hard.

    If you do look at eSATA, be careful. I've seen some of the eSATA units come with longer cables and not work reliably. Shorter cables are not as convenient, but they work better. I can't recommend USB for an array disk.

    In a jam I am not against plugging in a bare drive with SATA and power and sitting it outside the case on the floor, turned over like a turtle with the electronics side up. Not a long-term solution, but if your server is out of the way and will not be disturbed while you complete whatever you are doing, I don't think that is an awful option for a short-term need. I've precleared disks that way before, but now I always keep at least one free slot for preclear or emergency use.
  5. Sanity check - 24 drive bays

    I understand the exhaust part. It's your air intake I don't understand. There are no fans in the upper back of the case that I can see. Cool air will NOT just waft into the case from above without a fan driving it down, and you are working against the natural tendency of warm air to rise. Your layout is much more likely to result in hot air accumulating at the top of the case with nothing much to force it out. That hot air will recirculate and get hotter, reducing your cooling.

    How is the exhaust from the CPU on the right getting out of the case? It looks like it is blowing into the back of your drives?? It will tend to rise to the top of the case and, as mentioned, will get recirculated back into the CPU coolers.

    Take a look at this - it has a lot of info on case cooling, positive vs. negative pressure, etc. Here is what you are trying to achieve: cool air coming in from the low front (it could also come in from the bottom, depending on the case), and hot air going out the back and top. Perfection is impossible, but if you turned the fan on the CPU on the right to point up, added an exhaust fan up there, and had a couple of intake fans on the front to get cool air into the case, I think your cooling would be much more effective.

    #ssdindex - case cooling
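The positive-vs-negative-pressure idea mentioned above boils down to comparing total rated intake airflow against total exhaust. A toy calculation (the fan CFM figures are invented for illustration):

```python
# Rated airflow per fan in CFM (illustrative numbers, not from the post).
intake_cfm = [60, 60]    # two front intake fans
exhaust_cfm = [70, 40]   # rear exhaust + top exhaust

# Positive net flow means filtered intake air leaks out through gaps;
# negative means unfiltered air (and dust) gets pulled in through them.
net = sum(intake_cfm) - sum(exhaust_cfm)
pressure = "positive" if net > 0 else "negative" if net < 0 else "neutral"
print(net, pressure)  # -> 10 positive
```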
  6. Sanity check - 24 drive bays

    If you want a person to get a notification when you mention their name, make sure to put an "@" in front. The name should then get highlighted like @SSD. If it looks like plain text, it will not contact anyone. Just FYI.

    All my servers are / have been towers; I've never done a rack mount. You really want to create airflow that brings air in low and front and pushes it up and back. Not always possible, but that is ideal. I like to push CPU air directly to an exhaust fan if I can. But I have never had dual CPUs. See my comments below.

    I always replace the thermal pad with Arctic Silver 5. There are other brands you can investigate, but I have had good luck with this one and tend to stick with it. The trouble with the pads is that they are made of wax, and if you use one, the wax melts. This can make it difficult to fully remove the compound. But I'd still try to replace it with a thermal compound. I use the Arctic 2-step cleaner and some of THESE to clean the old compound. They are the best I've found. They are a little pricey, so I usually get the CPU and HSF as clean as I can with a Q-tip, and then use one of these with the Arctic cleaner. It is amazing how much compound it will pick up after a thorough cleaning with a Q-tip.

    It's a little hard to see where the air intake is. The CPU above left looks good (assuming the fan to the left of it is exhausting air). It is very similar to how mine is set up. The CPU to the right that is blowing air to the front of the case is not as good. I'd likely turn it facing up (towards the top of the case) if that is an option, and install an exhaust fan on the case top. Air should come in from the front and maybe the bottom. I think it will cool well. (I'm assuming your fans are mounted the right way - I am looking at the little white labels on top showing airflow direction.) Good luck!
  7. @bonienl may be able to advise. 86F = 30C, and I think Areca reports 30C as the temperature when running a non-Areca optimized smart report.
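The conversion in the post can be checked directly; 86°F is exactly 30°C:

```python
def f_to_c(f: float) -> float:
    """Convert a Fahrenheit temperature to Celsius."""
    return (f - 32) * 5 / 9

print(f_to_c(86))  # -> 30.0
```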
  8. On disk settings, set the controller type options:
  9. Old Hardware for basic NAS

    Crystal ball says no. You might be able to serve if no transcoding were required. You could "pre-transcode" specific shows in anticipation of them being needed. This type of transcoding can be done non-realtime, so even a server that can't keep up in realtime can use spare cycles over the course of hours to days to transcode some shows; when the time comes to play a show, it is ready and does not tax the server to serve the transcoded file. You can delete the transcoded shows later.

    UPDATE: Rereading this ... if other users are transcoding, this could work. But your internet would need to be fast enough to send untranscoded video to that many sources. Maybe possible for lower resolutions or heavily compressed video, or if your internet is exceedingly fast. Normally you can't send untranscoded video over the Internet due to bandwidth. But with gigabit, it would be possible. I have never tried it.

    You would be surprised - with a powerful server you can run a VM just a hair's breadth slower than running bare-metal Windows. Look online for examples of people running 2 gaming VMs on the same server with high frame rates. Throwing all those resources into a single VM would not disappoint. You have tools to dedicate certain cores to certain VMs, so sharing need not be a concern. I expect that running one gaming VM on a powerful server (passing through video and keyboard/mouse using VT-d), you'd get much better performance than you are getting today on your dedicated Windows box.

    Not sure what home automation software you have in mind. Can you clarify?

    unRAID is good for redundancy. But it is not a backup. Good luck!
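The bandwidth reasoning above is easy to put numbers on. A rough sketch (the helper, the 40 Mbps Blu-ray-class bitrate, and the 80% usable-headroom figure are all illustrative assumptions):

```python
def max_streams(uplink_mbps: float, stream_mbps: float,
                headroom: float = 0.8) -> int:
    """How many simultaneous streams fit through an uplink,
    reserving some fraction of capacity for protocol overhead."""
    return int(uplink_mbps * headroom // stream_mbps)

print(max_streams(1000, 40))  # gigabit uplink, untranscoded remux -> 20
print(max_streams(20, 40))    # typical home uplink: not even one -> 0
```

This is why the post says gigabit makes untranscoded remote streaming plausible while an ordinary uplink does not.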
  10. Old Hardware for basic NAS

    Your Gigabyte should work fine for testing the waters of unRAID. It should be powerful enough for the file server function, and likely fine for a couple of Dockers as well, including Plex. (Your ability to transcode would likely be very limited, but if you have a capable media player, you'd be able to serve media with no problem.) VM capability would be hampered by power and RAM. You could play, perhaps, but it's not recommended for daily use.

    Once you build your understanding of unRAID and what you want to use it for, you might decide to convert your desktop to be your unRAID server, which would provide a lot more power and options. I never thought I would, but my desktop Windows box is now a VM running off my unRAID server. With passthrough video and USB, no one would ever know. And with the server in the basement and a couple of longer cable runs, I have no noise at all in my office / study and enjoy very responsive 4K video and keyboard/mouse.

    If you want to push unRAID to be a heavy VM / transcoding machine, you might look for a newer platform like Threadripper, X299, Ryzen, or the newest Kaby/Coffee Lake options. All of a sudden 4-core CPUs are old news, and these options give you 6-18 high-speed cores for serious computing power. Some provide hardware transcode for H.265 streams (read about Quick Sync) if that is important to you.
  11. Only one type-A port. For the other ones you could probably get an adapter. But it is one controller, and all the ports of a controller pass through to one VM at a time.
  12. I run a long cable from the server in my basement up to the main floor and into my study. I have not used this cable with a hub, and have never tried using a hub with the port, but I would say it likely would work fine with a hub.
  13. @hatemjaber Pros: The SM C9X299-PGF has some nice features. I like the 8 memory slots. IPMI is awesome if you are running headless. 5G LAN. It also has a VGA port, which is a feature I really like, as I am not much for running headless.

      Cons: I don't like the number of PCIe slots. It has 4 full-size and one x1, vs. the ASRock with 5 full-size, one x4, and one x1.

      Same: A very nice feature is having the M.2 slots hang off the CPU and not off the PCH. So many things hang off the PCH that they will bottleneck a fast M.2 SSD. I wound up buying a cheap PCIe card that allowed me to mount my M.2 in an x4 CPU slot.

      SuperMicro gets good marks - I would not advise you against one. It just comes down to which features mean the most to you.

      I will report something I can't call a pro or a con. There are several USB 2.0 controllers, but they and the USB 3.0 controllers wind up grouped together, making passthrough impossible. The good news is that there is a USB 3.1 port that is separate and can be passed through. I use that, and it works just great for my Logitech Unifying receiver for keyboard and mouse. But it is one port and not 2 or 3, which is what I had hoped for. I need to check if there is a BIOS update that might help, or reach out to ASRock and see if they can fix the problem. I see the SM has a similar complement of controllers. Would it also group them? No idea. At least with the ASRock you know you have one that works. With the SM you may get several or none. And giving up a precious PCIe slot for a USB card is not fun.
  14. Which processor are you considering? Mine is the 12-core 7920X. I bought it because it was a good price on eBay and had already been delidded. You might look on Silicon Lottery's website. They sell delidding services, but also sell CPUs they have delidded and certified at various overclocking levels. Mine is at about the lowest rung of overclockability, but it runs cool enough that I can run all cores at the rated turbo speed. I am very happy with the performance! (And I am air cooling with a Noctua NH-D15S.) I was absolutely floored at how much faster this is than my 4-core Xeon.

      Motherboard? I went with the ASRock X299 OC Formula for 2 reasons. 1 - Silicon Lottery listed 4 recommended motherboards, and this was one of them: very, very good voltage regulation and high-quality parts, and a few dollars cheaper than the others. 2 - It has extra PCIe x8 and x4 slots, which I wanted for possible expansion to multiple graphics cards and other controllers. The only negative, besides BIOS screens looking dated, is that it supports only 4 memory slots (64GB max) vs. 8 (128GB max). I figured it was a fair trade. 64GB is overkill for me right now, and I can't imagine it becoming a bottleneck any time soon.

      I liked buying at the lower end of the 7900X line. That leaves room to replace the CPU with up to the 18-core 7980XE, which I expect will eventually drop in price on eBay; in a few years that would give a nice upgrade without replacing everything. Everything else I looked at was at the top of the line with nowhere to go but a complete replacement. Good luck, and ping me with any questions.
  15. [Support] - airsonic

    I was trying to get Airsonic working with my Sonos Connect, but no luck. I found this on Reddit and quoted some of it below. I followed the GitHub link, but could not find the referenced file inside the docker to make this patch. My intended use is strictly behind my firewall, so security is not a big concern for me. Any help appreciated! - SSD

    From Reddit: "The underlying problem is that the sonos api has a major security vulnerability ... ... With that said, one could easily make the sonos work again in theory with the existing airsonic code with a one line patch. Simply add '/ws/**' to the list here To be clear, this change has not been made in the actual source because it allows anyone to access the sonos api. However if you don't have airsonic exposed over the internet, you may not care. I'm really sorry for breaking it, but I can't in clear conscience go back to what it was before knowing that lots of people have airsonic exposed."

Copyright © 2005-2018 Lime Technology, Inc.
unRAID® is a registered trademark of Lime Technology, Inc.