fcol

Members · Posts: 29


  1. trurl - thanks for the warning. I tried to research this a bit (see topics below). If I understand correctly, my specific case of exposing the root of the cache drive (unpooled and not otherwise used as part of the array) should be OK. I'm not trying to manually copy/move files from the cache to the array or to individual disks. My main use cases are: more easily accessing log files in cache/appdata, and backing up files residing on cache (e.g., copying from cache to a USB drive connected to a Mac/PC). Unless I'm missing something, this should be reasonably safe. Thanks.
     [And if you were referencing my screenshot showing checkboxes for Disk 1 & Disk 2, I will uncheck those as long as I don't lose access to the root of cache. It's just that I thought I had "Enable disk shares" and "Enable user shares" set to Yes earlier when I couldn't map cache; that's why I ended up checking Disk 1 and 2. I can't stop the array for a while since it's busy redoing Plex, Crashplan, etc...]
     User Share Copy Bug: https://lime-technology.com/forum/index.php?topic=34480.0
     Disk vs user share: https://lime-technology.com/forum/index.php?topic=43845.0
     Cache Only Share Bug: https://lime-technology.com/forum/index.php?topic=34481.0
  2. Thanks but I can't seem to find settings related to showing the cache drive. Only my user disks are showing up (for "Enable disk shares" = Auto or Yes). Edit: Never mind. I started the array and now I can map cache. Thanks!
  3. I just upgraded my unRAID server from v5 to v6.2 and am using v6 for the first time. I used to be able to mount the root of my cache drive from OS X (for example, smb://192.168.254.XX/cache). Now it looks like I can only mount/map folders on the cache drive. I can access the root level of the cache drive via terminal. But I was wondering if there was a way to map the entire root of the cache drive via SMB? Or is this discouraged due to the use of dockers, etc.? Thanks
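     [For anyone finding this later: once a "cache" disk share is exported, it can also be mounted from a Mac's Terminal instead of the Finder. A minimal sketch, assuming the share name "cache", a guest-accessible share, and an arbitrary mount point; mount_smbfs is macOS-only, and the IP placeholder is kept from the post above:]

     ```shell
     # Sketch: mount the root of the "cache" disk share from macOS.
     # Share name, credentials, and mount point are assumptions.
     MNT="$HOME/unraid-cache"
     if command -v mount_smbfs >/dev/null 2>&1; then
         mkdir -p "$MNT"
         mount_smbfs "//guest@192.168.254.XX/cache" "$MNT"
     else
         # Fallback note for non-macOS systems, where mount_smbfs does not exist.
         echo "mount_smbfs not found (macOS-only); on Linux use mount -t cifs instead"
     fi
     ```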
  4. Since v4.7, I also see this behavior on occasion. I found that I can *sometimes* get it to connect if I telnet into the server and manually stop/start CrashPlan (wait a couple of minutes before restarting):
     /usr/local/crashplan/bin/CrashPlanEngine stop
     /usr/local/crashplan/bin/CrashPlanEngine start
     Edit: It might be a good idea to verify that it's not performing maintenance before manually stopping it.
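     [The stop/wait/start steps above can be wrapped in a small script. A sketch only: the engine path is taken from the post, the two-minute pause is the "couple of minutes" mentioned, and the existence check is just so the script degrades gracefully on machines without CrashPlan installed:]

     ```shell
     #!/bin/sh
     # Restart the CrashPlan engine with a pause in between, per the post above.
     ENGINE=/usr/local/crashplan/bin/CrashPlanEngine

     if [ -x "$ENGINE" ]; then
         "$ENGINE" stop
         sleep 120   # wait a couple of minutes before restarting
         "$ENGINE" start
     else
         echo "CrashPlanEngine not found at $ENGINE"
     fi
     ```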
  5. I'm also seeing similar behavior with 4.7.0 on unRAID 5.x. So, in my case, it has nothing to do with the container, since I'm not using it. 4.7.0 seems to be performing a lot more maintenance (and more often) and can bog down my entire server. I've rechecked my settings (Frequency and Versions) and nothing has changed since pre-4.7.
  6. My syslog has been resetting (cleared) at seemingly random intervals (anywhere from a few days up to 48 days of uptime). I know the server hasn't actually rebooted, since the uptime is reported accurately and I have some services that need to be started manually after a reboot. I don't see anything in the syslog except:
     Jan 5 04:40:01 Tower syslogd 1.4.1: restart
     I'm running 5.0-rc5 with 8GB of ECC RAM (with tons of free memory). I guess I'm worried there could be RAM corruption - but wouldn't I have other stability issues? The server is rock solid otherwise (it basically only runs Crashplan, Air Server, and a general media server). Is there a way to automatically write the syslog out to disk periodically (maybe to the cache drive)? Or can I have the syslog emailed periodically? Thanks in advance.
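     [To answer the last question for anyone finding this later: one approach is a cron entry that periodically copies the in-RAM syslog to the cache drive. A sketch only - the destination path and hourly schedule are assumptions, not a stock unRAID feature:]

     ```
     # Crontab fragment: hourly, write a timestamped copy of the syslog to cache.
     # Percent signs must be escaped in crontab entries.
     0 * * * * mkdir -p /mnt/cache/syslogs && cp /var/log/syslog /mnt/cache/syslogs/syslog-$(date +\%Y\%m\%d-\%H).txt
     ```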