jakea333

Members · 87 posts · 7 reputation

  1. Yes, I updated and saw the same symptoms.
  2. I updated to 6.12.9 - no change in my symptoms. I'm still blacklisting the ast driver to get things working normally.
  3. You'll lose the IPMI display while Unraid is booting. For me, there's not much to "see" or interact with there once Unraid is booted; my IPMI usage is focused on accessing the BIOS and verifying that the boot process starts correctly. I'm not using the main CLI via IPMI, so not much is lost. I can confirm that it seems to fix the issue for me on 6.12.6, and I've since upgraded to 6.12.8 with the same bugs and the same fix still in place. This problem seems limited to the mATX variant (I can see at least 3 confirmed cases in this thread); the ATX version doesn't appear to be reporting the same problem. I'll be interested to see how @Daniel15 fares with the upgrade, as you seem to have a better understanding of this process than I do.
  4. Glad it worked out. I'm curious about the root cause as well, since other boards with the Aspeed BMC don't seem to suffer in the same way. It's beyond my ability to troubleshoot, but I know something changed between 6.12.4 and 6.12.6 that introduced this bug for me. Maybe someone else can identify the specific fix that's needed. I'm planning to leave it blacklisted and check again after each Unraid release; hopefully it gets fixed in time by kernel updates.
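     For reference, checking after a new release just means removing the blacklist file and rebooting; if the bug is still there, the file can be recreated. A minimal sketch, assuming the file was created as shown in the next post below:

     # Re-enable the ast driver to test a new Unraid release;
     # recreate this file if the iGPU problem returns.
     rm /boot/config/modprobe.d/ast.conf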
  5. I've had issues with the W680M board and iGPU passthrough that appear to be related to the BMC, and which I wasn't able to resolve with the BIOS changes mentioned in this thread. These weren't present on Unraid 6.12.4, but began when I attempted to update to 6.12.6. Thanks to JorgeB in the 6.12.6 announcement thread, blacklisting the ast driver allows the iGPU to work again:

     echo "blacklist ast" > /boot/config/modprobe.d/ast.conf

     You'll lose the BMC display during Unraid startup, but I don't generally use it at that point anyway.
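     A quick sanity check after rebooting, assuming stock Unraid paths (no output from the grep means the module didn't load):

     # Confirm the blacklist file persisted on the flash drive
     cat /boot/config/modprobe.d/ast.conf

     # Confirm the ast module is no longer loaded
     lsmod | grep ast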
  6. Apologies for the long delay, but I wanted to let you know that this does resolve the issue for me. Thanks for your guidance! It will drop the BMC connection during Unraid start, but that's not a big issue for me. I primarily use it for BIOS access and non-boot troubleshooting.
  7. Similar to mrhanderson, I made another attempt at 6.12.6 from a working 6.12.4 with the 13th gen Intel & W680 board, and this time I captured the diagnostics I'm attaching. The system doesn't seem to fully start: watching via IPMI, it never reaches the command-line login prompt. However, the GUI comes up successfully and I'm able to interact with the system. The array starts (although I had Docker disabled this time), but I can see the same iGPU access issues. It also hangs on every shutdown and forces a hard power down.

     I also booted the system from a new, cleanly installed USB. That booted fully to the login. With no plugins installed, I didn't check the iGPU issues; however, the system still hung during shutdown on the clean USB and I had to manually power off again. I captured the diagnostics for this boot as well, though I'm not sure if it's helpful. For now, I've rolled back to 6.12.4 with everything fully functional again.

     tower-diagnostics-20231231-0829.zip
     tower-diagnostics-20231231-0931_clean install.zip
  8. Going to third this type of failure with a W680 board (ASUS Pro WS W680M-ACE SE) and 13th Gen Intel (i5-13500t). Unfortunately, I didn't capture a diagnostics package; hopefully it can be resolved from your case here. I see the same inability to cleanly reboot/shutdown (hang at the end), though the hard restarts didn't kick off parity checks. Only the onboard Intel iGPU (no extra GPU). I'm not new to Unraid, but my system was recently migrated to this new hardware from a very stable 4th Gen Intel system. Rolled back to version 6.12.4 and all is happy again. First ever issue with an update.
  9. I've never had any luck with this plugin keeping itself up to date. I have the setting "Automatically protect new and modified files:" set to "Enabled", but it never seems to work correctly. So far I've just manually rebuilt every few days, which works fine, but I'd really prefer the near real-time protection. I think it might be related to the inotifywait component. When I look at the config I see:

     root@Tower:~# cat /etc/inotifywait.conf
     cmd="a"
     method="-md5"
     exclude="(Domain_Backups/|Podcasts/)"
     disks=""

     Is that "disks" parameter supposed to contain all of my array disks? I've also intermittently seen what I believe is a related error in my syslog:

     Apr 14 20:06:08 Tower inotifywait[22757]: Failed to watch /mnt/disk5; upper limit on inotify watches reached!
     Apr 14 20:06:08 Tower inotifywait[22757]: Please increase the amount of inotify watches allowed per user via `/proc/sys/fs/inotify

     I went to that location and increased the watches from the default 524288 to 1524288 as a rough test, then reapplied the settings in File Integrity, but saw no change. Disk5 contains a backup of my Plex library, which has a large number of small files; my guess is that it needs more than the default number of watches, but my quick change didn't seem to help. The share it's stored in is excluded, but I don't believe that matters for the disk watches. Any suggestions for what to troubleshoot next? I haven't found much in the way of logs to help parse what's going on, so I'm a little out of my depth.

     ***UPDATE: It seems I spoke too soon. Since raising the maximum number of watches, my files have been staying up to date. I added a line to my go file to set the maximum to ~2 million each time Unraid boots; I figure that's enough that I won't have to worry about it. inotifywait uses ~225 MB of RAM on my system, but that seems a small price to pay for the functionality of this plugin. I still see disks="", but the plugin appears to be functioning correctly now.
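     For anyone who wants to replicate the update above, a minimal sketch of the go-file line, assuming the stock Unraid go file at /boot/config/go (fs.inotify.max_user_watches is the standard kernel knob for this limit; 2097152 is just the "~2 million" I settled on):

     # Added to /boot/config/go so it runs on every boot:
     # raise the inotify watch limit so inotifywait can cover
     # disks holding very large numbers of small files.
     echo 2097152 > /proc/sys/fs/inotify/max_user_watches

     # Check the current value at any time:
     cat /proc/sys/fs/inotify/max_user_watches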
  10. I generated the same error while trying to set up a new external backup drive and testing the mount/unmount script I had assigned. Each time, I could remount the drive only once; after that, a server reboot was needed to mount it again. As these were simply backup disks I had just formatted, I went with a different FS. Given the prevalence of XFS in UnRAID, perhaps Rob's fix should be implemented in this plugin?
  11. In the GUI, you can go to the Shares tab and click "Compute" under the Size column; I believe this only works for the top-level shares, however. Any folder inside a share can easily be checked from Windows with right-click > Properties. You can also use "du" from the command line at any level, as in the example below.
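      A minimal example (du is standard; the path is just an illustration, substitute your own share or folder):

      # Human-readable total size of one folder inside a share
      du -sh /mnt/user/Media/Movies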
  12. Thanks, picked up the single drive to pair with my current parity drive. If you've got an Amex card, there's a $25-back-on-$200 offer at Newegg that brings it down to $214 (not to mention the bonus warranty year and ShopRunner 2-day shipping benefits those cards usually provide).
  13. So, I've corrected my issue. I'm still not convinced this is due to gfjardim's plugins, but uninstalling them temporarily did resolve my problem. Basically:
      • Uninstalled both the Preclear and Unassigned Devices plugins
      • Rebooted the server (cache did not auto-populate)
      • Assigned the drive, started the array, then stopped the array
      • Unassigned the drive, started and stopped the array again
      • Finally assigned the drive again
      After that, my cache has persisted through power cycles, even once I reinstalled these plugins. That may not be the most efficient way to do it, but I was essentially trying to emulate the process needed for unRAID to "forget" a disk you want to rebuild onto. I'd done something similar with the plugins installed, and the cache drive never persisted through reboots.
  14. Did you find a fix for this? I've seen the same symptoms. I noticed my cache drive was no longer persistent before adding this plugin (I recently added the Preclear plugin and then noticed the issue when I rebooted to add a new drive). I didn't change the physical cache drive, but I did unassign it to do a secure erase approximately a month ago, and I probably hadn't rebooted the server between reassigning the drive after the secure erase and the recent power down to install the new drive. I'm assuming this isn't related to gfjardim's excellent plugins, but I am curious whether you've corrected the problem. I've removed the flash drive and checked it in a Windows box (no issues found), and I've unassigned/reassigned the cache drive multiple times with no change.
  15. Nothing fancy on my end, just plug and play. I picked it up from BPlus via Amazon. I only had a half-size mini PCIe slot to play with, so I made sure to find one of the shorter cards. That left my single PCIe slot available for a graphics card I could pass through in a VM.