About landS

  • Rank
    Advanced Member


  • Location
    NWI - USA

  1. 6.4, now I have Call Traces

    So far, everything appears stable!
  2. Good day Crew, The server has been updated to 6.4 stable (from 6.3.5 stable) and has been running for 3 days now. Fix Common Problems just alerted me to an issue that I have never seen before: "Call Traces found on your server". Diagnostics are attached. Any advice would be greatly appreciated!
  4. v6 on Atom D525

    These are actually still in my Fractal Define, with two 120 mm intake fans blowing directly over the disks (which checked out fine) and one top/rear exhaust fan, which has failed. For now the machine is powered down, and I ordered two 140 mm top-mount exhaust fans to replace the single failed rear 120 mm. Another oddity that I'll need to track down, remembering that I have no exported shares, is that the disks refused to spin down on 6.4. I could manually trigger spin-down, and within 30 seconds they'd all be back up again. I'd guess it's a plugin, but this machine does not have many of those enabled. *Oh, how I love the GUI performance upgrade!
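One generic way to chase down what keeps waking the disks is to list which processes hold files open under the array mount points. This is a plain-Linux sketch of my own, not an Unraid feature, and the default path is an assumption; adjust it to your mounts:

```shell
# List processes that hold files open under a directory -- a common
# reason array disks refuse to stay spun down. Uses only procfs, no
# extra tools. The default of /mnt is an assumption; pass your own
# path, e.g. scan_open_files /mnt/disk1
scan_open_files() {
    local dir="${1:-/mnt}" fd link pid
    for fd in /proc/[0-9]*/fd/*; do
        link=$(readlink "$fd" 2>/dev/null) || continue
        case "$link" in
            "$dir"/*)
                pid=${fd#/proc/}; pid=${pid%%/*}
                echo "PID $pid ($(cat "/proc/$pid/comm" 2>/dev/null)): $link"
                ;;
        esac
    done
}
```

Running it right after a manual spin-down should name whichever plugin or process touches the disks and spins them back up.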
  5. Ahh... Screen is OK, but I love the GUI version! I use it on all mechanical hard drives for myself, friends, family, and work when the HDD is new... and again when disks go out of commission... I also use it for fresh Unraid disks.
  6. v6 on Atom D525

    Even the transfer speeds to an SSD dropped greatly in comparison to v5 writes to a HDD. I posted some steps that could be taken on a Windows machine which allowed near-v5 write speeds (similar steps exist on Linux machines), but that's a pita and wasn't required on v5. Once I moved this to a pure backup NAS for my MAIN Unraid storage, I killed off exporting the shares, and now use UD to mount the MAIN shares and User Scripts to schedule rsync *backups* of the MAIN data. Done this way the speed doesn't matter, but pulling the files is significantly faster than pushing them... near-v5 write times! I was fortunate to run the old tunables script, which helped to shave some parity-check time off. All my disks are HGST 4 TB units. Typically my HGST disks hit 48°C during a dual-drive parity check; they are all hovering at 50-52°C for this 18-hour check now that I'm on 6.4... So once it's done I'll need to pull the rig and check the fans to see whether it's hardware related or whether 6.4 pushes the disks harder. My house is certainly cooler this time of year, and I've never seen a disk hit 50°C before. Everything should be dust free.
  7. Does the GUI terminal live in Unraid memory (aka a NerdPack screen replacement), or is it tied to the web session, cancelling the running command if closed (aka an SSH replacement)? Thanks!
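Whatever the built-in terminal does, a session-proof way to run a long command is to detach it explicitly. A generic sketch (the log path is arbitrary, and this is a plain shell pattern, not an Unraid-specific feature):

```shell
# Start a command immune to hangup when the terminal or session
# closes, logging stdout+stderr to /tmp/task.log (arbitrary path);
# prints the background PID so the task can be checked on later.
run_detached() {
    nohup "$@" > /tmp/task.log 2>&1 &
    echo $!
}

# Usage (hypothetical long-running command):
# run_detached some_long_command --with-args
```

Tools like `screen` or `tmux` (from NerdPack) do the same job with the added ability to reattach to the running session.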
  8. Smoking! The web GUI has always been a dog on my X7SPA-HF D525. Now it is more responsive than even my fastest Unraid rig on any prior version. Awesome work, folks. Running a backup script to this server now, then I will manually trigger the mover, followed by a dual-drive parity check. I'll update if any snags are hit. @garycase I think the folks just injected some major life support for the Atom... Thanks @limetech and @bonienl
  9. v6 on Atom D525

    Disk tunables is a must. I'm looking forward to the updated script going public! This shaves a couple of hours off of a parity check. The biggest tweaks for SMB writes from a Windows machine (and Linux, depending on which version of Samba is being used) actually require some changes on the Windows machine itself! I was bitten by the Marvell bug after one of the 6.x updates and swapped my SASLP card for an LSI. If memory serves, my dual parity drives and my cache drive are on the motherboard headers. I'm running Cache Dirs, Disk Integrity, Recycle Bin, UD, and User Scripts. That's near the limits. I no longer have any shares exported via SMB. Instead I have my primary tower's shares mounted via Unassigned Devices, and an rsync job in User Scripts that does not run on parity-check day! Mover is set to once/day. As a backup NAS, this is still a great device.
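The post doesn't say which settings were involved, on either side. Purely as an illustration, server-side Samba write-performance knobs of that era looked like the fragment below; the values shown are examples of mine, not the poster's tweaks:

```
# /etc/samba/smb.conf -- illustrative performance parameters only;
# these are NOT the specific tweaks referenced in the post
[global]
    use sendfile = yes
    aio read size = 4096
    aio write size = 4096
    socket options = TCP_NODELAY
```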
  10. Pass through a host DVD Drive

    Tonight I tested blacklisting a PCIe card, which works just fine. Note that you must select a card that works with ODD/DVD/BD/etc., or flash a card to use IDE mode. The following works out of the box because of the chipset it uses (ASMedia ASM1062): a 2-port PCI Express SATA 6 Gbps eSATA controller card. Under Flash I added vfio-pci.ids=(my [id] info from system devices), and I also had to add iommu=pt.
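For context, both parameters land on the append line of syslinux.cfg on the flash drive. A hedged sketch of the result: 1b21:0612 is the commonly reported PCI ID for the ASM1062, but verify your own card's ID under Tools > System Devices before copying this:

```
label Unraid OS
  menu default
  kernel /bzimage
  append vfio-pci.ids=1b21:0612 iommu=pt initrd=/bzroot
```

With the ID bound to vfio-pci at boot, the controller never gets a host driver and can be handed whole to a VM.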
  11. This makes sense to me... but what does not make sense is that under the Docker settings for storage I have /mnt/user, and under the WebGUI's Details/Manage files I can back up to Root: /config /defaults /flash /home /lib64 /libexec /media /mnt ... but everything downstream of /mnt is 'broken/blank'. Edit: nm. Now I see a grey scroll bar in the GUI to get down to /storage; it's barely visible in my browser. Thank you greatly! (This GUI is a massive step backwards.)
  12. Thanks a bunch for the assistance @Djoss! Originally everything appeared OK, having run the following before starting up for the first time, due to migrating from 'that other docker': cp -a /mnt/user/appdata/CrashPlan /mnt/user/appdata/CrashPlanPRO. "Just re-select your files, they are under /storage." Under the WebGUI's files I see all the same folders selected: Root > mnt > user: CJK1, CompInstalls, etc. BUT the size shows a dash, as does the date modified, and a note out to the right indicates 'file is missing'. When I click into the folders, the subfolders/files are all missing. Even if I must re-upload everything, that is OK, as it works out to only be 1 TB. Note that on the web portal, if I select any date prior to 2 days ago, all the files still exist for recovery. The log is attached: Log for_ CrashPlanPRO.html. Edit: under the original CrashPlan container I see the storage (/mnt/user) has read/write, while under the CrashPlan PRO container it is marked as read-only. This article highlights the symptoms I am experiencing:
  13. Oh boy... woke up this morning, all directories still selected, but now the web UI says the files are missing and zero size... and CrashPlan's website also indicates zero files available for backup. Bummer. My docker's storage location is /mnt/user, and I have a few shares selected in CrashPlan for backup; it just... isn't recognizing any subfolders/files in these shares. Given that all the files disappeared from backup on the online portal, I deleted the appdata and restarted fresh. Still no joy. FYI, the online portal's restore/'show deleted' view indicates the paths in light grey for yesterday and today... but they are not backing up. Grr.
  14. @Djoss another legacy CrashPlan docker user here, now on the PRO docker due to it auto-updating into a black screen. No problems updating nor adding in the memory. Thanks!
  15. Pass through a host DVD Drive

    I would also be highly interested in a more elegant ODD SATA passthrough solution. @jonp @limetech Guys, anything better than the below for this? What I know works:
    • Pass through a controller that has been flashed specifically for ODD (issue: a manual flash is a pita, and it eats up an entire expansion slot on the mobo). PCI: Syba SD-SATA-4P, flashed to the b5500 BIOS (IDE), 4-port SATA for ODD, chipset Sil3114. PCIe: StarTech 2-port SATA for ODD, chipset ASM1062.
    • Pass it through as a USB device (issues: it doesn't always work, the VM doesn't want to start when passing through 2 optical drives this way, and reads are slow). Generic USB 2.0 to 7+6 13-pin slimline SATA laptop CD/DVD-ROM optical drive adapter cable. FYI, while the pictures of these tend to show 1 USB port, most take 2 (the extra dongle is for power).
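Besides the controller and USB routes above, a third option sometimes works: handing the guest the optical drive as a raw block device in the VM's libvirt XML. A hedged sketch only: /dev/sr0 is an assumption (check your host's device node), and burning generally does not work through this path:

```xml
<!-- Pass the host optical drive to the guest as a SATA CD-ROM.
     /dev/sr0 is an assumed device node; verify on the host. -->
<disk type='block' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/sr0'/>
  <target dev='hdc' bus='sata'/>
  <readonly/>
</disk>
```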

Copyright © 2005-2017 Lime Technology, Inc. unRAID® is a registered trademark of Lime Technology, Inc.