daniel.boone

Members
  • Posts: 347
  • Gender: Undisclosed

daniel.boone's Achievements

  • Rank: Contributor (5/14)
  • Reputation: 1

Community Answers

  1. Appreciate the help. I figured out the webGUI could be restarted with '/etc/rc.d/rc.nginx reload'. It came back with 'Nginx is not running', so I ran '/etc/rc.d/rc.nginx start'. The daemon started without issue. With that I got the webGUI back and was able to correct the configuration; the rebuild is running once again. The full sequence I ran is below. Thanks again, post marked solved.
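     For anyone landing here later, this is the sequence I ran over SSH (the rc.nginx script path is standard on Unraid 6.x; exact output wording may differ by release):

        # a reload only signals a running daemon; mine had died outright
        /etc/rc.d/rc.nginx reload    # returned "Nginx is not running"

        # so start it fresh instead
        /etc/rc.d/rc.nginx start     # webGUI came back at the server's IP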
  2. I was pretty sure I had the tuning disabled for rebuilds. Diags posted. Thanks
  3. Appreciate any guidance here; the situation went sideways a bit and I don't want to do anything rash. I'm a long-time user and have done a number of disk rebuilds in the past, but I've never faced this behavior. I noticed I had a failed disk, so I snapped a quick pic of the drives, stopped Docker, shut down, replaced the disk, booted the system, and assigned the replacement disk to the old failed disk's slot. It did show the disk was to be rebuilt, so I started the array/rebuild process. This is where it got weird: I was expecting a warning about the format, but upon the screen refresh I got "refused to connect, this site can't be reached." I am using its IP. I can SSH to the server, and I can see my apps are running, so the server appears to be functioning to some degree. From the log I can see:

        Oct 29 10:42:41 Tower Parity Check Tuning: Parity Sync/Data Rebuild detected ....
        Oct 29 10:51:21 Tower emhttpd: spinning down /dev/sde
        Oct 29 11:30:07 Tower kernel: mdcmd (38): nocheck PAUSE
        Oct 29 11:30:07 Tower kernel: md: recovery thread: exit status: -4
        Oct 29 11:30:12 Tower Parity Check Tuning: Paused: Parity Sync/Data Rebuild (4.5% completed)

     My questions: How long before I see the rebuild percentage increase? It seems to be sitting at 4.5%. Should I have seen a popup confirming the drive was to be reformatted? I selected start array and the screen went blank. I'm not seeing any rebuild entries after the "4.5% completed". Is there a way I can confirm the rebuild is actually happening without the GUI? I have 20+ drives, so a rebuild would take 24 hours or so. I'm running 6.11.4.
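     For reference, one rough way to check from the shell while the GUI is down. This assumes Unraid 6.x's custom /proc/mdstat layout (field names like mdResyncPos may differ by release):

        # Unraid's md driver exposes rebuild state here (not the stock Linux mdstat)
        grep -i resync /proc/mdstat

        # if mdResyncPos climbs between checks, the rebuild is still moving
        watch -n 60 'grep mdResyncPos /proc/mdstat'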
  4. Seems like a big hurdle at first, but it's not too bad. The motherboard was on the table, so flashing there was an easy solution for me. Now I'm not in the middle of an upgrade; I just received my 3rd HBA yesterday, another PERC, so I have all the time in the world. I'll put the time into flashing using EFI boot. Eventually that's all we'll have.
  5. Flashing the HBA card with my new MB, an ASRock Z690 Pro RS, didn't go well. I found it impossible to boot into FreeDOS. I contacted ASRock and they told me to enable CSM, but I'm using the iGPU, so the option was grayed out. It took forever to get them to understand; multiple contacts over two weeks. They always got stuck on 'you don't need DOS to flash the MB', so I moved on. I used my old MB to flash my H310, predominantly following Fohdeesha's guide. You won't have trouble finding that. I did lots of reading, including the sanderh.dev UEFI flash method. My advice is to boot into the necessary sessions and get comfortable switching boot methods. Learn to list your card(s), get your SAS address, and back up the old firmware before you even consider flashing; a rough outline is below. I used FreeDOS since the UEFI commands didn't respond as expected. You do not want to get into the middle of flashing and find commands not working. Flashing is a bit scary but super easy. Your card should be even easier if it's an LSI-branded card; I'd follow the vendor's flashing recommendations.
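     A rough outline of those pre-flight checks, assuming the LSI sas2flash tool used in Fohdeesha's guide (the binary name differs by build: sas2flsh under DOS, sas2flash.efi under UEFI; the megarec SBR backup is a PERC-specific step from that guide):

        sas2flsh -listall            # list every controller the tool can see
        sas2flsh -c 0 -list          # controller 0 details, incl. the SAS address; write it down
        megarec -readsbr 0 sbr.bin   # back up the SBR before touching anything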
  6. I'm running that combo (i7) with IT-mode-flashed IBM and Dell HBAs. Zero issues so far, but it's only been a month. That month included dual parity upgrades along with a disk rebuild and the installation of the H310. It was fine with the IBM and Supermicro HBAs as well, both from my old system. I swapped out the Supermicro because of the Marvell chip.
  7. Just for a test I added HandBrake and re-encoded a couple of UHD vids to see the impact. With a direct stream of a high-quality 1080p MKV on Plex, the re-encode got memory up to 40% of the 32GB I have. 16GB works, but during ripping RAM consumption would be toward the high end. I tried a couple of different settings; no real change from that perspective. CPU, on the other hand, got up to 90ish%, with no impact to the video playing, but I'm on a 12th gen i7. Please post what your experience is like; it would help others considering a combo similar to yours. Originally I was considering a NUC for Plex and leaving my old system in place.
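     If you want to watch the impact on your own box, the standard Docker CLI gives a live per-container view (the container name below is a placeholder; use whatever yours is called):

        # live CPU and memory for every running container
        docker stats

        # one-shot snapshot of just the encoder
        docker stats --no-stream handbrake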
  8. OK, you're not transcoding, you're ripping media. That processor is fine. It's the first generation with HEVC support, so I'd bet that support is limited, but x264 is well developed; that's a great place to start if you're not hot on 4K. Would more cores be faster? Sure, but there's nothing wrong with the processor there. I'd be happy with that processor. RAM may need a bit of a bump, but that should not stop you from using what you have. This goes without saying, but don't put your job at risk; I hope my suggestion on the RAM wasn't taken any other way. I don't run MakeMKV or HandBrake on my server, so I can't speak to RAM requirements there, but they should be easy to find online. Both of those apps have Docker versions in Community Applications.

     I started on a 1st gen HP Media Vault and a laptop; I probably ran a pair of 300GB drives at that time. Many upgrades later, my system looks nothing like any of its predecessors. There is always a better system around the corner. Run with what you've got; if you're happy, then you're done. Save funds for disks, as that's where most of the money goes. If you're dead set on spending on RAM, I'd look for a deal on eBay for a 7th gen or better processor with the RAM you want. You want to move up, not backwards; you already have a 6th gen processor, so there's no point in getting the same if you're not happy.
  9. From the Plex site: "1080p (10Mbps, H.264) file: 2000 PassMark score", so it might be rough for the number of transcodes you're shooting for. I'd say that 6700 is a great place to start; its average CPU Mark is listed as 8083, and the CPU is plenty for unRaid. If I were you, assuming the office is feeling charitable, I'd find some RAM in the pile to add to your system, just to get you to 16GB. Otherwise, try the system as is for Plex. I would not invest money into that system if it doesn't make you happy at that point. You can always pull the video card for testing later. I'm using the iGPU for transcoding without issue, but I'm running a new processor. I also have 32GB of RAM, but consumption is at 30% regularly; I can't recall it ever going over 35%. I run about 10 containers but 0 VMs, just to give you an idea. For me the biggest change was the use of NVMe for Plex; the Plex display was sluggish before that. Good luck with the build.
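     If you do try iGPU transcoding on Unraid, the usual setup is passing the Intel render node into the Plex container (hardware transcoding also needs Plex Pass; /dev/dri is the standard i915 device path):

        # confirm the render node exists on the host
        ls -l /dev/dri

        # then add this to the Plex container's Extra Parameters
        --device=/dev/dri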
  10. Wondering how the community is approaching this. I've been looking at reverse proxies, but it seems those are intended to make microservices resolvable publicly. While purchasing a domain might make it easier, I'm not looking to share these publicly. My goal is to make a series of Docker services more accessible to family members. I found this recently (no association to me): https://github.com/cristianoliveira/ergo. Wanted to see how others have approached this problem; the reverse-proxy shape I keep running into is sketched below.
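     For concreteness: one nginx in front of the containers, with friendly names resolved by the router's local DNS or each client's hosts file. The hostname and port below are made-up placeholders, not a recommendation:

        # plex.home is a hypothetical local-only name
        server {
            listen 80;
            server_name plex.home;
            location / {
                proxy_pass http://127.0.0.1:32400;   # the container's published port
                proxy_set_header Host $host;
            }
        }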
  11. 2 cache pools, cache_sata and cache_nvme. I use the SATA drive for all but Plex; simple as that. I tried an SSD with my old motherboard. It did improve things slightly, but there was still a lag. The NVMe/motherboard combo provided the experience and transcoding I'd hoped for.
  12. Thanks Lolight, you're absolutely correct: reaching over 100 degrees F is within normal operating temps for NVMe. Those drives being new, I was treating them more like standard HDDs. I did manage to keep the drive slightly under triple digits with a larger cooler. Goal achieved, but as pointed out, unnecessary.
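     For anyone else watching NVMe temps, the drive reports them directly; either tool below works from the shell (the device path is an assumption, check yours with ls /dev/nvme*):

        # composite temperature is in the SMART output
        smartctl -a /dev/nvme0

        # nvme-cli equivalent, if installed
        nvme smart-log /dev/nvme0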
  13. I think of this as a "fast" replace option, though fast is actually the wrong word here. Anyone with a large array or large disks knows replacing drives takes considerable time, and nowadays that's probably most of us. I understand this new function like this: it would allow one to copy an existing drive with the sole purpose of replacing it with the same size or a larger drive, without performing a read of every sector on every disk in the array.

     Right now I'm planning to replace my 2 parity drives. The simplest method is a new configuration: replace the drives and recreate the parity. The other option is removing a drive at a time and running multiple rebuilds, and that's before I even introduce the old drives to participate in the array. Both options present risks and additional wear on the entire array. An improved option would be a copy-disk-and-replace operation. The same should be possible with any drive participating in unRAID, say if I wanted to upgrade an old 1TB disk in the array, or even a well-used cache drive. I should be able to clear a replacement disk, copy, and swap the disk while only reading the disk being replaced; something like the sketch below. This takes the concept of preclearing a drive to the next level.

     The process for rebuilding a failed drive works great; it has saved my data on more than one occasion, and I feel no change is needed there. The process for good-disk replacement could benefit from a more streamlined approach.
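     Purely to illustrate the copy step I mean (this is not a supported Unraid replace procedure today; device names are placeholders, and the array would need to be stopped with parity handled separately):

        # block-copy one healthy source disk to its replacement,
        # reading only that single disk instead of the whole array
        dd if=/dev/sdX of=/dev/sdY bs=64M status=progress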
  14. Upgrade is complete. The motherboard works great for the most part. I pretty much bought items as described earlier; I did pick up a 1000 watt Seasonic to make sure I covered the power needs for the new proc and all the drives. I'm running 2 cache pools: HDD for downloads and NVMe for Plex. I tested basic transcoding and had no issues. I've pulled one Adaptec already and will pull the 2nd card today. The 'new' HBA card is installed, although there are no attached drives; I tend to make changes one at a time, as it helps in identifying hardware problems when they arise.

     Problems I did run into: when I tried to flash a PERC H310 on the new MB, it would not boot to FreeDOS no matter what I tried. Booting to Unraid, Windows PE, and EFI were no problem. The vendor suggested a setting under BIOS Setup > Boot > CSM; I haven't tested it yet, I just used my old MB to flash the card. I'm also still fighting NVMe drive temps. I'm on the 3rd cooler; the MB and QIVYNSRY coolers are just hunks of aluminum and didn't help, with temps getting to 100F. I just got a Thermalright HR-9 2280 and will try that with a small fan, hoping the heat pipe does a better job cooling the drive.

     Last item is this error, which is spamming the log file. Originally I thought it was heat/NVMe related, but I suspect it's XHCI driver related. I'll post in the proper area once I get the diagnostic output to share with other members.

        Mar 13 10:28:13 Tower kernel: usb 1-4: new low-speed USB device number 121 using xhci_hcd
        Mar 13 10:28:13 Tower kernel: usb 1-4: device descriptor read/64, error -71
        Mar 13 10:28:14 Tower kernel: usb 1-4: device descriptor read/64, error -71
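     Before posting diagnostics I'll try to pin down which device is on bus 1, port 4; these are stock Linux tools, nothing Unraid-specific:

        # map the failing "usb 1-4" to an actual device in the tree
        lsusb -t

        # follow the errors live while reseating or unplugging suspects
        dmesg -w | grep 'usb 1-4'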
  15. Greetings Unraid Community. My old MB/CPU has served me well; it runs like a champ, and thanks to all who helped when I made that purchase years ago. But it's that time. In the old days we had a tested motherboards list; I can't seem to find an updated version, so I'm reaching out here. Motivation for the upgrade is speed from the SSD for Plex, hardware transcoding, and a processor upgrade. I run about 20 containers but no VMs. Everything on the list is supported by the MB; I'm wondering if anyone has insight into problems with this combo. I was considering going with a PCIe 3.0 SSD; all I really need here is 500GB, which is plenty for the containers and Plex metadata. I do have Plex Pass and I'm playing with Jellyfin, but I'm not ready to make that switch just yet. Here is the gear:

        ASRock Z690 Pro RS LGA 1700 Intel Z690 DDR4
        Intel Core i7-12700
        Kingston FURY Beast 32GB (2 x 16GB) 288-Pin PC RAM DDR4 3600
        Samsung 980 PRO 1TB PCIe 4.0

     My old system is a Supermicro X9SCL, Xeon E3-1240 with 24GB RAM, 2 Adaptec SATA controllers, an HBA in IT mode supporting dual parity/24 drives, and an Intel dual 1GB NIC. I have another HBA that I plan to flash so I can retire the old SATA controllers. The ASRock MB has 8 SATA connections; as long as I use the M2_3 slot, I hope to have them all available. I would plug in my Intel NIC card if needed, which gets me to 3 PCIe cards. What am I missing here? Any compatibility issues? I've been digging through the Z690 threads looking for possible problems. I haven't run into anyone with the Z690 Pro, though I see plenty of problems with the MSI board using that chipset on the old RCs. Thanks -D