BrianAz

Members
  • Posts: 123
  • Gender: Undisclosed


BrianAz's Achievements

Apprentice (3/14)

Reputation: 1

  1. Same here. Any ideas? I'm not sure why it happens, but it doesn't seem to impact my Plex server (as far as I've noticed).
  2. I am interested in running two dual-parity protected arrays in a single chassis w/ Unraid on bare metal. I have a smaller array for testing and for data that would be a minor headache to replace (6 data drives + 2 parity). I also have my "Production" Unraid array that houses my primary data, which would be a huge issue if I lost it (18 data drives and growing + 2 parity & full/daily offsite backup). My chassis is 36 bays and I currently run Unraid on top of ESXi, passing through the USB keys along with HBAs and NVMe cache drives to get as close to bare metal as I can. Edit: After ZFS lands, surely multiple Unraid parity arrays will be next…?
  3. I'm having this issue as well... I have two Ubuntu 16.04 servers and they both throw an error after a while on my most-used folders. I have the mover enabled (FYI), but I haven't tied any of this to it yet. This had not been a problem until recently. I guess I'll try version 1.0 and see if it fixes it. Very frustrating.
  4. Thanks for the app. I am trying to get my NUT stats to display. Am I missing them? iOS 13, ControlR v4.10.0, NUT plug-in 02-03-19, unRAID 6.7.2. Do I need the ControlR plugin for NUT stats to work? Thanks
  5. Wanted to chime in and say thanks! I had this happen to me tonight on version 6.3.5. It would be really nice to get an explanation from LT on what might cause this.
  6. Just a heads up in case anyone else encounters this... I upgraded to 6.3.5 today and all of a sudden my unRAID CPU (Celeron G1610) usage shot up to 100%, and the system load skyrocketed as well. Upon investigating, I saw that my smbd process was consuming all the CPU. Thankfully, I had experienced a very similar issue recently on my FreeNAS box. It seems that Ubuntu 16.04 VMs (mine are on ESXi) mount SMB shares as v1.0 by default, which has problems connecting to FreeNAS/unRAID smbd and causes high CPU/load on the NAS. As with the FreeNAS issue, I specified version 2.1 in my /etc/fstab mounts, and as soon as I re-mounted, my CPU/load on unRAID dropped to normal levels (a sketch of that fstab change is included after this list). Hope this helps someone. I have not seen any negative results from specifying version 2.1, but I welcome any discussion as to why this is happening. Thanks.
  7. Hi - when you do the update, could you also look into this error? Since upgrading (I think it was to 6.3), we are seeing these errors in our logs every minute (macOS Margarita) or sporadically (MargaritaToGo). I'm not sure if it's something you can fix or if it's on the unRAID side. Related thread: "Error in log every minute". Thanks
  8. Looks like the issue I experienced due to Margarita... Are you running that on your Mac or phone? See this other thread:
  9. Love the new forum software! Looking forward to the dark theme. Thx
  10. Thanks everyone! Been trying to find the cause of these messages for a bit now but could not figure out what the heck was running every so often that would cause these errors. Anyway, while I did not have Margarita installed on my Mac, I DID have MargaritaToGo on my iPhone. As soon as I opened it, errors immediately appeared in the log (bottom of screenshot). Appreciate the help tracking this down, will track this thread to see what the resolution is.
  11. "unRAID won't clear a new parity drive. There is no need for it to be clear since it will be completely overwritten by the parity sync. So if you want to test your new parity drive you may want to preclear it anyway." I meant I would be preclearing the OLD parity drive to be used as a new data drive in the array (like you discuss below). Thanks for this, I'll be sure to keep a link to it for the future.
  12. "FYI, since v6.2 clear is done with the array online, but disk is not tested like when using preclear." Thanks for that info too. I'll take advantage of that when I upgrade my parity drives again (and look to use them as data drives in the array). As a follow-up to what you noted above, I proceeded with starting the array, and unRAID picked up the preclear signature and prompted for only a quick format. Everything was as you indicated. Thx
  13. "That's normal, unRAID only checks for the preclear signature after starting the array." Many thanks Johnnie! Been quite a while since I expanded my array. Wanted to make sure I wasn't giving the go-ahead for 9+ hours of downtime.
  14. Hello - I'm attempting to preclear a 4TB drive to expand my array, but after each successful preclear, the array appears to want to Clear it again before adding it. I've attached diagnostics, preclear reports, and a screenshot of the "Start will bring the array on-line and start Clearing new data disks" message I'm concerned about (a rough sketch of the preclear step itself is included after this list). Please let me know what else might be helpful to troubleshoot. I'm hoping to avoid multiple hours of downtime by preclearing, and the language I'm seeing seems to tell me that's what unRAID intends to do once I click Start. Am I misunderstanding the meaning, or is something not right? Thanks for your help, Brian. Attachments: tower-diagnostics-20170215-1213.zip, TOWER-preclear.disk-20170215-1211.zip
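
As referenced in item 6 above, here is a minimal sketch of what forcing SMB 2.1 in /etc/fstab on the Ubuntu 16.04 clients can look like. The share name, mount point, and credentials file below are hypothetical placeholders, not the actual values from that setup:

```
# /etc/fstab on the Ubuntu 16.04 VM -- force SMB 2.1 instead of the v1.0 default
# //tower/media, /mnt/media and the credentials file are illustrative placeholders
//tower/media  /mnt/media  cifs  vers=2.1,credentials=/root/.smbcredentials,iocharset=utf8  0  0
```

The new option only takes effect once the share is remounted, e.g. `sudo umount /mnt/media && sudo mount /mnt/media`.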
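
As referenced in item 14, here is a rough sketch of the preclear step itself, assuming the preclear_disk.sh script from the unRAID preclear plugin; the device name is a placeholder, and the exact script name and options depend on the plugin version installed:

```
# Preclear the new 4TB drive before assigning it to the array
# (/dev/sdX is a placeholder -- verify the device name first, a preclear is destructive)
preclear_disk.sh /dev/sdX
```

Once the preclear completes and the drive is assigned, unRAID recognizes the preclear signature when the array is started and prompts only for a quick format, so the multi-hour clearing step is skipped (as items 12 and 13 above confirm).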