lewcass

Members
  • Content count

    184

Community Reputation

0 Neutral

About lewcass

  • Rank
    Advanced Member


  1. I typically use this plugin to back up key data to hard disks for offsite storage. It works very well for the most part, but the challenge comes in recognizing whether adequate space remains on the backup disks as the data on the server grows. So I've been trying to figure out how to use the plugin's AVAIL variable in my scripts to both log and notify the webGUI as to the amount of space available when the disk is mounted, and how much remains after the backup is completed; what I've been attempting looks roughly like the sketch below. My scripting skills/knowledge are pretty minimal and so far I haven't been successful. Advice would be very much appreciated!
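    A sketch of what I mean (untested; it assumes the plugin sets ACTION, MOUNTPOINT and AVAIL in the environment of the device script, and that the webGUI notifier lives at /usr/local/emhttp/webGui/scripts/notify -- adjust paths to suit):

        #!/bin/bash
        # Sketch of a device script that logs/notifies free space.
        # ACTION, MOUNTPOINT and AVAIL are assumed to be provided by
        # the plugin; the notify helper path may differ by version.
        NOTIFY=/usr/local/emhttp/webGui/scripts/notify
        LOG=/boot/logs/backup_space.log

        if [ "$ACTION" = "ADD" ]; then
            # Log and notify the free space reported at mount time
            echo "$(date): mounted $MOUNTPOINT, available: $AVAIL" >> "$LOG"
            "$NOTIFY" -e "Backup disk" -s "Backup disk mounted" \
                      -d "Space available: $AVAIL" -i "normal"

            # ... run the backup here (rsync etc.) ...

            # AVAIL reflects mount time, so re-check free space afterwards
            REMAIN=$(df -h --output=avail "$MOUNTPOINT" | tail -1 | tr -d ' ')
            echo "$(date): backup done, remaining: $REMAIN" >> "$LOG"
            "$NOTIFY" -e "Backup disk" -s "Backup complete" \
                      -d "Space remaining: $REMAIN" -i "normal"
        fi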
  2. lewcass

    Dynamix - V6 Plugins

    Since no one else has been reporting the issue, it could well be, and likely is, completely unrelated to the update. (See the logical fallacy: post hoc ergo propter hoc.) When I have had similar issues with S3 sleep not recovering properly, they have been related to overclocking, specifically the bus clock. If you aren't overclocking, then it might also be hardware related; e.g., a power supply no longer delivering adequate standby power, or failing RAM or motherboard. You might want to do some hardware testing.

    Edit: To clarify, the S3 sleep overclocking-related issues I was referring to were with PCs other than my unRAID servers. I wouldn't overclock a server holding data I wanted to preserve. Also, more to the point, what I would do first is take unRAID out of the equation entirely by booting another Linux distro from a separate flash drive and testing sleep from there. If the problem still exists, that points squarely at hardware or BIOS settings.
  3. lewcass

    Dynamix - V6 Plugins

    DUH! I'd completely forgotten about that with the passage of time since I first set up the plugin after unRAID 6.0 came out. Unfortunately the setting's location isn't particularly obvious or intuitive. The good news is that it looks like this is solving my issue, so thanks! My theory about the non-array drives was wrong: even with them removed, the plugin still treated the cache disk as active after a wake-up; it only behaved correctly before the first S3 sleep was initiated. Setting the plugin to ignore the cache disk seems to be solving the issue. Hope you are successful getting yours resolved.
  4. lewcass

    Dynamix - V6 Plugins

    The previous version is not compatible with unRAID 6.4.x. If you enable the debug log, you may be able to find out why the current version is not working for you. I have mine set to log to Flash; you can view the log in the logs directory of your flash drive: \\TOWER\flash\logs\s3_sleep.log

    I don't see any setting to ignore the cache drive. If it is possible, I would like to know how.
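    From a console session you can also follow the same log live; the flash drive is mounted at /boot on the server, so the file should be reachable there:

        # Watch new s3_sleep debug entries as they are written
        tail -f /boot/logs/s3_sleep.log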
  5. lewcass

    Dynamix - V6 Plugins

    Set the Tunable (poll_attributes) to 180 (~3 min). Shut down. Reinstalled the hot-spare disks in the cage. Rebooted. Let the server do its thing (idling). Drives spun down. The sleep plugin script did its thing. Server went to sleep. Then I woke the server with WOL, played media for an hour using only one disk, and left the server idle. The disk that was in use spun down. Then the same issue as before:

        Tue Feb 13 17:43:10 EST 2018: Disk activity on going: sdd
        Tue Feb 13 17:43:10 EST 2018: Disk activity detected. Reset timers.
        Tue Feb 13 17:44:10 EST 2018: Disk activity on going: sdd
        Tue Feb 13 17:44:10 EST 2018: Disk activity detected. Reset timers.
        Tue Feb 13 17:45:10 EST 2018: Disk activity on going: sdd
        Tue Feb 13 17:45:10 EST 2018: Disk activity detected. Reset timers.
        Tue Feb 13 17:46:10 EST 2018: Disk activity on going: sdd
        Tue Feb 13 17:46:10 EST 2018: Disk activity detected. Reset timers.
        Tue Feb 13 17:47:10 EST 2018: Disk activity on going: sdg
        Tue Feb 13 17:47:10 EST 2018: Disk activity detected. Reset timers.
        Tue Feb 13 17:48:10 EST 2018: Disk activity on going: sdg
        Tue Feb 13 17:48:10 EST 2018: Disk activity detected. Reset timers.
        Tue Feb 13 17:49:10 EST 2018: Disk activity on going: sdg
        Tue Feb 13 17:49:10 EST 2018: Disk activity detected. Reset timers.
        Tue Feb 13 17:50:10 EST 2018: Disk activity on going: sdg
        Tue Feb 13 17:50:10 EST 2018: Disk activity detected. Reset timers.
        Tue Feb 13 17:51:10 EST 2018: Disk activity on going: sdg
        Tue Feb 13 17:51:10 EST 2018: Disk activity detected. Reset timers.
        ...
        Tue Feb 13 19:49:16 EST 2018: Disk activity on going: sdg
        Tue Feb 13 19:49:16 EST 2018: Disk activity detected. Reset timers.
        Tue Feb 13 19:50:16 EST 2018: Disk activity on going: sdg
        Tue Feb 13 19:50:16 EST 2018: Disk activity detected. Reset timers.
        Tue Feb 13 19:51:16 EST 2018: Disk activity on going: sdg
        Tue Feb 13 19:51:16 EST 2018: Disk activity detected. Reset timers.

    As soon as the active array disk spun down, the sleep plugin once again began logging the cache disk (sdg) as active, and it is continuing to do so over two hours later as I write this. All this time the cache disk is shown as inactive in the Dynamix GUI. So changing the disk poll attribute has not addressed the issue. Tomorrow I will remove the hot spares again and test longer to see if the sleep plugin will work consistently without non-array drives present.
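    For what it's worth, you can cross-check what the plugin thinks against what the drives themselves report with a quick loop at the console (assuming hdparm is available, and adjusting the device range to suit):

        # hdparm -C queries each drive's power state without spinning
        # it up; it reports "active/idle" or "standby" per device
        for d in /dev/sd[b-g]; do
            printf '%s: ' "$d"
            hdparm -C "$d" | awk '/drive state/ {print $4}'
        done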
  6. lewcass

    Dynamix - V6 Plugins

    OK, I'll give it a shot. I'm dubious, though, because the plugin was consistently logging that the cache was active for hours at a time (like all night and then some) when nothing should have been happening. So it's difficult to understand why polling every thirty minutes would not have been sufficient to update the status correctly.
  7. lewcass

    Dynamix - V6 Plugins

    Would the setting of that parameter explain why the Dynamix GUI has been showing the cache as inactive at the same time the sleep plugin was logging it as "Disk activity on going"?
  8. lewcass

    Dynamix - V6 Plugins

    All disks/drives are connected to onboard SATA.
  9. lewcass

    Dynamix - V6 Plugins

    Yes, but it was consistently showing it spun up for hours while the Dynamix GUI was continually showing it as inactive. Please consider my previous post and whether having two additional disks connected but not in the array may be causing the plugin to lose track of the status of the final (cache) disk. Thanks.
  10. lewcass

    Dynamix - V6 Plugins

    OK. Did a little testing, and it appears that the sleep plugin's confusion about the status of the cache drive may be related to my having two hot spares online (but not in the array). Just rebooted with the hot spares removed from the drive cage and tested sleep. This time the plugin did not report the cache drive to be active when Dynamix showed it was not, and the sleep script proceeded as it should.

    Here are the earlier sleep plugin drive monitoring parameters with the hot spares installed:

        Feb 9 13:14:12 Tower s3_sleep: ----------------------------------------------
        Feb 9 13:14:12 Tower s3_sleep: included disks=sdb sdc sdd sdg
        Feb 9 13:14:12 Tower s3_sleep: excluded disks=sda sde sdf

    i.e., the failure mode, in which sleep.log incorrectly shows the cache drive (sdg) as active. I should note here that Dynamix always shows the hot spares, sde and sdf, as inactive.

    And the drive monitoring parameters while testing without the hot spares connected:

        Feb 13 12:41:41 Tower s3_sleep: ----------------------------------------------
        Feb 13 12:41:41 Tower s3_sleep: included disks=sdb sdc sdd sde
        Feb 13 12:41:41 Tower s3_sleep: excluded disks=sda

    The cache drive in this case is sde, which sleep.log appropriately shows as inactive, and sleep proceeds correctly.

    I'll have to see if this now works consistently, but at first pass the hot spares seem likely to be related.
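    For anyone repeating this test, those monitoring lists can be pulled straight out of the syslog after each s3_sleep restart:

        # Show the most recent include/exclude lists logged by s3_sleep
        grep -E 's3_sleep: (included|excluded) disks' /var/log/syslog | tail -2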
  11. lewcass

    Dynamix - V6 Plugins

    Docker is disabled; I am currently not using Docker or a VM. The cache drive is not active when this issue with the sleep plugin occurs, at least not according to the Dynamix Main or Dashboard pages.
  12. lewcass

    Dynamix - V6 Plugins

    OK, but I don't know what a CSRF token is or how to include one. Does it even relate to solving the issue we're reporting?
  13. lewcass

    Dynamix - V6 Plugins

    I'm encountering the issue too. I thought it was working OK after the recent sleep plugin update, but now (I'm on 6.4.1) I'm repeatedly finding that the plugin thinks my SSD cache drive is active when in fact it is not, and so it fails to initiate the sleep sequence when it should. However, if I "spin-up" and then "spin-down" the cache drive from the Dynamix Main tab, the sleep plugin will then recognize that the cache drive is inactive and proceed with putting the server to sleep.

        Tue Feb 13 09:54:04 EST 2018: Wake-up now
        Tue Feb 13 09:54:04 EST 2018: System woken-up. Reset timers
        Tue Feb 13 09:55:04 EST 2018: Disk activity on going: sdb
        Tue Feb 13 09:55:04 EST 2018: Disk activity detected. Reset timers.
        Tue Feb 13 09:56:04 EST 2018: Disk activity on going: sdg
        Tue Feb 13 09:56:04 EST 2018: Disk activity detected. Reset timers.
        Tue Feb 13 09:57:04 EST 2018: Disk activity on going: sdg
        Tue Feb 13 09:57:04 EST 2018: Disk activity detected. Reset timers.
        Tue Feb 13 09:58:04 EST 2018: Disk activity on going: sdg
        Tue Feb 13 09:58:04 EST 2018: Disk activity detected. Reset timers.
        Tue Feb 13 09:59:04 EST 2018: Disk activity on going: sdg
        Tue Feb 13 09:59:04 EST 2018: Disk activity detected. Reset timers.
        Tue Feb 13 10:00:04 EST 2018: Disk activity on going: sdg
        Tue Feb 13 10:00:04 EST 2018: Disk activity detected. Reset timers.
        Tue Feb 13 10:01:04 EST 2018: Disk activity on going: sdg
        Tue Feb 13 10:01:04 EST 2018: Disk activity detected. Reset timers.
        Tue Feb 13 10:02:04 EST 2018: All monitored HDDs are spun down
        Tue Feb 13 10:02:04 EST 2018: Extra delay period running: 30 minute(s)
        Tue Feb 13 10:03:04 EST 2018: All monitored HDDs are spun down
        Tue Feb 13 10:03:04 EST 2018: Extra delay period running: 29 minute(s)

    Note: at ~10:00-10:01+, sdg (cache) was shown inactive per Dynamix Main. After the "spin-up" / "spin-down" commands were executed, sleep.log then correctly reports "All monitored HDDs are spun down".

    We had an issue several years back where the sleep plugin was losing track of whether drives were awake. An "after wake-up" command string was added to address it. I still had this string active when I noticed this recent problem; I currently have it removed to see if there would be any change, but it does not appear to make any difference:

        /usr/bin/wget -q -O - localhost/update.htm?cmdSpinupAll=true >/dev/null

    Seems like something similar is going on again, but possibly only related to SSDs or cache?
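    If that command string turns out to matter, my understanding is that unRAID 6.3+ rejects update.htm calls that lack a CSRF token (possibly what the earlier CSRF question was getting at). Something like this should work, assuming the token is the one stored in /var/local/emhttp/var.ini:

        # Read the CSRF token unRAID generates at boot and pass it along
        CSRF=$(sed -n 's/^csrf_token="\(.*\)"/\1/p' /var/local/emhttp/var.ini)
        /usr/bin/wget -q -O - "localhost/update.htm?cmdSpinupAll=true&csrf_token=${CSRF}" >/dev/null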
  14. I've been seeing this error in my logs since rebuilding my server with a new motherboard plus the PCI NIC I had also used with the old MB for a bonded network interface. The error didn't noticeably affect my wired network for other use, but it was still really annoying to see the huge number of log entries. I found I was able to get it to go away by disabling, then re-enabling, network bonding in the unRAID network settings. BTW, the truncated error in the log looks like this:

        received packet on eth0 with own address as sour

    Sour indeed.
  15. lewcass

    Dynamix - V6 Plugins

    Appreciate all your efforts. In the meantime, even with the plugin losing track of the drive spin-up status, I'm still able to keep my server awake for playing media by relying on the plugin's network activity monitor. I have the network idle threshold set at "Low traffic", and the server hasn't gone to sleep on me yet while I'm watching video or listening to music. I haven't tested thoroughly, but I'm pretty certain the server can also be kept awake via network traffic for workstation or other access, e.g., by keeping Dynamix Stats open in a browser window.
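    A crude alternative from a workstation, rather than leaving a browser window open, would be to poll the server on a schedule. Whether this clears the "Low traffic" threshold will depend on the interval and page size, so treat it as a sketch (server name assumed to be tower):

        # Request a webGUI page every 30 seconds to keep network
        # traffic above the sleep plugin's idle threshold (Ctrl-C to stop)
        while true; do
            curl -s -o /dev/null http://tower/Dashboard
            sleep 30
        done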
