aaronwt

Members
  • Posts

    224

Converted

  • Gender
    Male
  • URL
    www.aaronwt.net
  • Location
    Northern VA
  • Personal Text
    63TB unRAID1--49TB unRAID2--76TB unRAID3--56TB unRAID4--56TB unRAID5--50TB unRAID6--50TB unRAID7--48TB unRAID8--48TB unRAID9

Recent Profile Visitors

3473 profile views

aaronwt's Achievements

Explorer (4/14)

Reputation: 4

Community Answers (1)

  1. I have been using SATA port multiplier cards. The four stock internal drive connections the HP Gen 7 Microservers use are SAS, but the port multiplier cards I have installed and the external enclosures all use SATA. The port multiplier cards I use in those first three unRAID setups are the Rosewill RC-218 PCI Express SATA II cards. They are very, very old now; I got my first one in 2011, when I switched from my Windows Home Server to my first unRAID setup. But that was an unRAID system I put together from scratch, with a combination of new and old components. I did not get my first Gen 7 Microserver until 2013. EDIT: On a more relevant note, I just finished upgrading my nine unRAID setups to v6.12.10. They all upgraded without any issues.
  2. I've been using them for around thirteen years with unRAID, in my first three unRAID setups, with around 16 drives attached to the port multiplier cards in each system. They have been rock solid in my use. But when I set up my unRAIDs four through nine, I got away from using port multiplier enclosures and only use six drives, internally, in the HP Gen 7 Microservers, since those external enclosures use a lot of power. Plus I'm using 2TB, 3TB, and/or 4TB drives in my first three unRAID setups with the port multiplier cards, while my newer six unRAID setups, from 2023, use 12TB and/or 14TB drives.
  3. I updated my nine unRAID setups, which use HP Gen7 Microservers, on Wednesday last week. They all upgraded to v6.12.9 without any issues.
  4. Here are my newer unRAID setups, four through nine, which still use HP Gen7 Microservers. Each one has five Exos hard drives (12TB and/or 14TB), with one as parity, and a 1TB 870 EVO SSD for the cache drive. There is a UPS for each unRAID setup, and one UPS for the 2.5GbE switch.
  5. Here are my three old unRAID setups, 1, 2, and 3, which use external enclosures with the HP Gen7 Microservers. Each unRAID setup uses 20+ hard drives (2TB to 4TB WD and Seagate) for the array and a 1TB WD for the cache drive.
  6. Thanks. I did see it in the release notes, and I remember trying the r8125 plugin when I set up six of my unRAIDs with the r8125 cards. But I uninstalled the plugin after testing; I don't remember why though, since it's been around six months. So I reinstalled that r8125 plugin driver and was able to upgrade those six unRAIDs to 6.12.6 without any issues, and I have been able to continue using jumbo frames.
  7. I am having the same issue with 6.12.6 as with 6.12.5: the RTL8125 2.5GbE card goes down when jumbo frames are enabled. Will this be a permanent problem, or will it be fixed at some point? I have a noticeable transfer throughput slowdown when jumbo frames are not enabled, so I would prefer not to disable them. Right now, 6.12.4 is working fine. I had upgraded to 6.12.6 and had to roll back to 6.12.4 by taking out the flash drive and copying the files from the previous version over. Then I figured I would try 6.12.5, since I did not see anything in the notes about the RTL8125 2.5GbE card, but that had the same issue. So I need to roll back to 6.12.4 again by copying the previous version's files over on the flash drive. (There's a rough sketch of that file-copy step after this list.)
  8. I'm running nine of those old HP Microservers with unRAID: one N36L, five N40L, and three N54L units. They are all running unRAID 6.12.3 without any issues. But I also typically upgrade my unRAID OS as soon as releases are out of beta and available for general use, so I did not go straight from 6.11.x to 6.12.3.
  9. Is there any way you could add an option to benchmark SSDs when the CPU core count is below 4? All my unRAID systems run on old dual-core CPUs. It would be nice to be able to benchmark the SSD I have in each one, which is used as a cache drive.
  10. Fifteen years ago I strictly used WD drives for my LAN storage, but over the years I gradually switched to Seagate: first Seagate Barracuda drives, then Seagate IronWolf, then Seagate Terascale, and most recently the Seagate Exos drives. I recently purchased seventeen of the 14TB Exos X14 drives, renewed, and six of the 12TB Exos X18 drives, renewed. I still use WD drives for the external USB drives attached to my Plex PCs, and also in my old unRAIDs. For my new unRAIDs, I have gone all in with Seagate, except for the cache drives, which are 1TB Samsung 870 EVO SSDs. Once I transfer all my content from my oldest unRAID setups, which still use the ReiserFS file system, I will purge the forty or so old 2TB and 3TB WD drives and will be left with only Seagate Terascale and Exos drives in all my unRAID setups.
  11. With no parity drive in use, the N36L should have no problem maxing out the throughput on the drives. How old are the drives? I know I have 24+ 4TB Seagate Terascale drives in unRAID setups with an N40L and an N54L, and they don't come anywhere close to reaching the throughput my Exos X14 and X18 drives have, but they will still hit up to 170MB/s. The Exos drives will hit up to 250MB/s in the N36L.
  12. I have a couple of those QNAP switches. I've never had any problem getting over 8Gbps throughput between two PCs on the 10GbE ports, using CommScope Cat6 patch cables. Cat5e is rated for up to 100 meters at 2.5GbE. My unRAIDs are connected to the 2.5GbE ports, some using generic Cat5e, and at 2.5GbE they work just the same as the ones using CommScope Cat6 cables.
  13. I just set up an N36L last night. I bought a "For Parts" N54L off eBay and needed to replace the motherboard, so I was able to pick up an N36L motherboard (and tray) for around $40 shipped, off eBay. In my testing last night, with no parity drive, I had no problem hitting up to 1.6Gbps transfer rates using a 2.5GbE network card (in the PCIe x1 slot) and bypassing the cache drive. The 12TB Exos X18 drives I had in the system easily hit up to 250MB/s (2Gb/s) transfer rates, with consistent rates over 200MB/s (1.6Gb/s), both from the motherboard SAS connector and from the SATA card I have installed. My SSD cache drive, an 870 EVO, was even hitting around 450MB/s (3.6Gb/s) peaks. But that 2.5" drive, and a fifth 3.5" drive, are connected to a PCIe x4 SATA card, and it hit those peaks when pulling cached data from the 16GB of ECC memory. (There's a quick check of those MB/s-to-Gb/s conversions after this list.)
  14. I have 4GB, 8GB, and 16GB in my HP N36L, N40L, and N54L Microservers, but I will be upgrading them all to 16GB. I currently only have four with 16GB, but as I set up new arrays, I am installing two 8GB ECC sticks of memory. Unfortunately, 16GB is the max memory they will take.
  15. I'm currently running six unRAID servers, and I plan to set up two to three more after I finish transferring data from my first three unRAID servers (only 2TB, 3TB, and 4TB drives) to my second three (five 14TB drives in each). The first three still use the ReiserFS file system, so I want to move everything off of them. Then I will take the 20+ 4TB drives I have in the first three unRAID servers and use those to set up four unRAID servers.
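On the manual rollback mentioned in item 7: below is a minimal sketch, in Python, of the "copy the previous version's files back onto the flash drive" step done from another PC. The mount point, the previous/ folder name, and the bz* file naming are assumptions about a typical unRAID flash drive layout, not details taken from the post, so check your own flash drive before relying on it.

```python
# Minimal sketch of the manual rollback copy step from item 7.
# The flash drive mount point, the "previous" folder, and the bz* naming
# are assumptions about a typical unRAID flash layout; adjust to match
# your own drive before using.
import shutil
from pathlib import Path

FLASH_ROOT = Path("/mnt/usb")        # assumed mount point of the unRAID flash drive on another PC
PREVIOUS = FLASH_ROOT / "previous"   # assumed folder holding the prior release's files

def roll_back() -> None:
    if not PREVIOUS.is_dir():
        raise SystemExit(f"No {PREVIOUS} folder found; nothing to roll back to")
    for src in sorted(PREVIOUS.glob("bz*")):   # bzimage, bzroot, etc. from the prior release
        dst = FLASH_ROOT / src.name
        print(f"Restoring {src.name}")
        shutil.copy2(src, dst)                 # overwrite the current release's copy

if __name__ == "__main__":
    roll_back()
```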
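And a quick sanity check of the MB/s-to-Gb/s figures quoted in item 13: with decimal units (1 MB = 10^6 bytes, 1 Gb = 10^9 bits) the conversion is just a factor of 8/1000, which is how 250MB/s comes out to roughly 2Gb/s. A small sketch:

```python
# Sanity-check the MB/s -> Gb/s conversions quoted in item 13.
# Decimal units assumed: 1 MB = 10**6 bytes, 1 Gb = 10**9 bits.
def mb_per_s_to_gb_per_s(mb_per_s: float) -> float:
    return mb_per_s * 8 / 1000   # bytes -> bits, mega -> giga

for rate in (200, 250, 450):
    print(f"{rate} MB/s ~= {mb_per_s_to_gb_per_s(rate):.1f} Gb/s")
# Prints: 200 MB/s ~= 1.6 Gb/s, 250 MB/s ~= 2.0 Gb/s, 450 MB/s ~= 3.6 Gb/s
```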