mrcrlee

Members · 29 posts

  1. After all that, I discovered that under Settings -> Share Settings, I had at some point left Disk 9 out of the list of drives available for user shares. Unfortunately, the share page still lets you create a share on drives that are not included in sharing, but it then picks disk1 as the default so the share ends up usable anyway.
  2. Hello,

     I have a share that the share page reports as having 410GB free, with the following configuration:

        Name: crashplan-backup
        Comments:
        Allocation method:
        Minimum free space: 400GB
        Split level: Not set - split function not used
        Included disk(s): Disk 9
        Excluded disk(s): None
        Use cache disk: No
        Share empty? Yes

     Disk 9: Hitachi_HDS5C3030ALA630_MJ1321YNG0MR1A (sdn), 25 °C, 3 TB, Used: 439 GB, Free: 2.56 TB

     When I go to the share via the shell and try to create a folder, this happens:

        root@Tower:/mnt/user/crashplan-backup# mkdir test
        mkdir: cannot create directory `test': No space left on device

     It sounds like this should be something obvious. I have stopped, rebooted, and started the array to no avail.

     I was not able to create the share simply by using the settings above. For the share to be created successfully, I had to create it with all disks included and then remove the disks I did not want to include. Here is the output from when I originally tried to create it directly on disk 9:

        May 20 11:43:39 Tower avahi-daemon[8727]: Service "Tower-AFP" (/services/afp.service) successfully established.
        May 20 11:45:25 Tower emhttp: shcmd (15613): mkdir '/mnt/user/crashplan'
        May 20 11:45:25 Tower shfs/user: shfs_mkdir: assign_disk: crashplan (28) No space left on device
        May 20 11:45:25 Tower emhttp: _shcmd: shcmd (15613): exit status: 1
        May 20 11:45:25 Tower emhttp: shcmd (15614): rm '/boot/config/shares/crashplan.cfg'

     It clearly doesn't like disk 9. I currently have the disk shared directly and can back up to the drive through AFP, but if I want to create a user share and use it that way, no luck.

     Here are the current contents of disk 9:

        root@Tower:/mnt/disk9# ls -la
        total 12
        drwxrwxrwx 11 nobody users  400 2016-05-20 12:19 ./
        drwxr-xr-x 14 root   root     0 2016-05-20 11:36 ../
        drwxr-xr-x  2 root   root   304 2016-05-20 12:03 .AppleDB/
        drwxrwx---  2 root   users   72 2016-05-20 12:03 .AppleDesktop/
        drwxrwx---  2 chris  users  168 2016-05-20 12:03 .AppleDouble/
        drwxrwx---  3 root   root   112 2014-10-08 17:57 .TemporaryItems/
        -rw-rw----  1 root   root  4096 2014-10-08 17:57 ._.TemporaryItems
        -rw-rw----  1 root   root  4096 2014-10-08 17:57 ._.apdisk
        -rw-rw----  1 root   root   291 2014-10-08 17:57 .apdisk
        drwxrwx---  3 root   users   80 2016-05-20 12:03 Network\ Trash\ Folder/
        drwxrwx---  3 root   users   80 2016-05-20 12:03 Temporary\ Items/
        drwxrwxr-x  3 root   root    88 2014-09-20 12:28 crashplan-backup/

        du -h : 410G

     Any assistance is greatly appreciated!

     -Chris
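
     A minimal sketch of how one might narrow down whether that "No space left on device" error comes from the shfs user-share layer or from the disk itself (paths taken from the post above; the share cfg filename is assumed to follow the share name):

        # free space as seen by the underlying disk vs. the fuse user share
        df -h /mnt/disk9 /mnt/user/crashplan-backup

        # writing directly to the disk bypasses shfs; if this succeeds, the disk itself is fine
        mkdir /mnt/disk9/crashplan-backup/test-direct && rmdir /mnt/disk9/crashplan-backup/test-direct

        # inspect the share's stored settings on the flash drive
        # (path pattern taken from the emhttp log line above; key names vary by release)
        cat /boot/config/shares/crashplan-backup.cfg

     If the direct mkdir works while the one under /mnt/user fails, the problem sits in the share's include/exclude settings rather than on the disk, which matches the resolution in the first post.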
  3. Alright. I checked the smartctl attributes and, for all disks, the pending sector count is zero:

        197 Current_Pending_Sector 0x0022 100 100 000 Old_age Always - 0

     I will move forward with the replacement and rebuild. On another note, is there a threshold for the number of power-on hours after which you typically replace a disk?
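
     For reference, a sketch of a loop that pulls the pending-sector count and power-on hours for every drive in one pass (the /dev/sd[a-n] glob is just an example; adjust it for your controllers):

        for d in /dev/sd[a-n]; do
            echo "== $d =="
            # -A prints the vendor attribute table; 197 = Current_Pending_Sector, 9 = Power_On_Hours
            smartctl -A "$d" | grep -E 'Current_Pending_Sector|Power_On_Hours'
        done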
  4. Thank you for the assessment and advice. The unused drive, /dev/sdl, is precleared and ready to use. Is this the proper protocol to follow?

     1. Stop the array.
     2. Unassign the old drive from disk 7 (/dev/sdh).
     3. Assign the new drive to the old drive's slot (it is already installed and precleared).
     4. Go to the Main -> Array Operation section.
     5. Put a check in the "Yes, I'm sure" checkbox (next to the information indicating the drive will be rebuilt), and click the Start button.

     The rebuild will then begin, with heavy disk activity on all drives: lots of writes on the new drive and lots of reads on all the other drives. All of the contents of the old drive will be copied onto the new drive, making it an exact replacement, except possibly with more capacity than the old drive.
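
     While the rebuild runs, a simple way to keep an eye on it from the shell is to watch the syslog for md or read/write errors (a generic sketch; the exact log strings depend on the unRAID release):

        tail -f /var/log/syslog | grep -iE 'md:|error'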
  5. Hello,

     I have attached my syslog and smartctl reports for the drive in question, along with a screen capture of my array configuration. I think the relevant syslog entry starts at Apr 24 20:45:42. This drive also had a similar issue in January, referenced here: http://lime-technology.com/forum/index.php?topic=45375.msg433138

     I did reboot the system and ran the smartctl report (long report attached), which records the second instance of the error but states that the drive has PASSED. I have not restarted the array.

     I believe there is enough space on disks 8/9 to move the data there from disk 7, if that is a smart thing to do. I also have a precleared drive in slot /dev/sdl if replacing sdh (the bad one) in the array is the best course of action. Any opinions on what the professionals would do are greatly appreciated.

     -Chris

     smart.txt
     syslog.txt
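
     For anyone digging into the attachments, the relevant window can be pulled out of the saved syslog with something like this (timestamp taken from the post above):

        sed -n '/Apr 24 20:45:42/,$p' syslog.txt | head -n 200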
  6. Alright. I'll run it for good measure and enable a scheduled parity check from here on out. Thank you and have a great weekend.
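
     A sketch of how a monthly parity check can be scheduled from the command line on unRAID 5.x, assuming the stock mdcmd helper is present at /root/mdcmd (the exact command and any correct/no-correct argument vary by release, so treat this as a starting point rather than the definitive method):

        # Hypothetical cron entry: run a parity check at 02:00 on the 1st of each month
        0 2 1 * * /root/mdcmd check

     On many setups the entry is added to root's crontab from the go script so it survives a reboot.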
  7. I have enabled the drive and reconstructed it. The result is valid parity. Would you recommend I run a new parity check before using the array?
  8. Thank you for the confirmation and notes to run the long test. It's running now; fingers crossed.
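
     For anyone following along, the extended test and its result are driven with standard smartmontools commands (device name from the earlier posts):

        smartctl -t long /dev/sdh      # start the extended self-test; it runs on the drive in the background
        smartctl -l selftest /dev/sdh  # check progress and the result once it finishes
        smartctl -a /dev/sdh           # full report, including the attribute table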
  9. I was hoping someone could confirm, or rule out, that I'm on the right track and that what I'm planning sounds correct, before I do something that causes more problems. Any help is appreciated, please!
  10. Hello,

     I am running 5.0.6 with 9 data drives, 1 parity drive, and 1 cache drive. Drive in question: Disk 7, /dev/sdh, referred to below as the 'bad' drive.

     TL;DR: I had a drive removed from the array after a failed parity check, ordered a new one, ran the SMART report, and think the bad drive is okay. What should I do next?

     About a week ago, I had a power outage and my UPS didn't last as long as the outage. After the power came back, I rebooted my server and initiated a parity check. During the parity check there were some read errors (I did not capture the log), and the bad drive was pulled out of the array. I had started the parity check before I went to bed, and when I awoke it had failed. Not having used my unRAID server interactively in quite a while, I immediately ordered a new drive to have it on hand when I delved deeper into the problem. The drive arrived yesterday and I hit the wikis and FAQs to determine the next steps:

     1. I captured the syslog files before rebooting.
     2. I rebooted.
     3. I ran the short smartctl test and it passed.

     I have attached the syslogs from before and after the reboot and the SMART report for the short test. The smartctl report is provided in full; I had to trim the syslogs to the most relevant parts due to file size.

     I am looking for the best guidance moving forward. Based on the wiki, the SMART report, and the post-reboot syslog, I'm inclined to:

     1. Re-enable the drive.
     2. Reconstruct it, based on the instructions here: http://lime-technology.com/wiki/index.php/Troubleshooting#Re-enable_the_drive
     3. Shut down the system.
     4. Install the new drive, pre-clear it, and hold it in reserve until a failure makes it needed.

     Thank you for reading my post and for any assistance you can provide!

     -Chris

     syslog-before-reboot.snippet.txt
     syslog-after-reboot.snippet.txt
     smart.report.20160109-105210.dev.sdh.txt
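
     A sketch of the commands behind steps 1 and 3 above (the destination path on the flash drive is just an example):

        # 1. save the current syslog to the flash drive before rebooting
        cp /var/log/syslog /boot/syslog-$(date +%Y%m%d-%H%M%S).txt

        # 3. run the short self-test on the suspect drive and review the result
        smartctl -t short /dev/sdh
        smartctl -l selftest /dev/sdh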
  11. I use the command 'lsof', grepping for /mnt:

        lsof | grep /mnt

     It lists the open files that show which shares and drives are in use. Good luck and welcome aboard!
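
     To narrow the same check to a single disk, standard lsof options also work, for example:

        lsof +D /mnt/disk9        # recurse into one disk's tree (can be slow on large trees)
        lsof | grep '/mnt/user'   # everything held open through the user-share layer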
  12. I'm also running with two AOC-SASLP-MV8 cards on b14 and have been fine for 2.5 weeks. Prior to b14 I was running b12 and had the BLK_EH errors. I think I have noticed a slight issue with spindown, if by that you mean the case where the log reports that a disk has spun down but the disk has not actually spun down. I'm willing to put up with that in exchange for no BLK_EH errors. If I go in, figure out which devices are held open, and close those open handles, the disk will spin down as expected after the spindown timer expires for that resource.
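
     A quick way to confirm whether a disk really spun down, independent of what the log claims, is to ask the drive for its power state with hdparm -C (the device/disk pairing below is just an example taken from the earlier posts):

        hdparm -C /dev/sdn        # reports "standby" when the drive is actually spun down
        lsof | grep /mnt/disk9    # whatever is listed here is what keeps waking it up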
  13. Well, I upgraded to beta 14 and it started up okay. If I'm going to see the BLK_EH_NOT_HANDLED errors, they will show up in the next few days. I will say that the instructions for updating from beta6+ to beta14 say to reboot, check the parity of each drive, and then start the array; however, my server automatically started the array, so I was not able to check parity as instructed.
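
     One way people avoid the array auto-starting on the next boot is to turn off auto-start before rebooting. On typical installs that flag lives in /boot/config/disk.cfg; the key name below is an assumption, so verify it against your own file (it can also be changed from the web GUI's disk settings):

        # Hypothetical: inspect the auto-start flag before rebooting
        grep -i start /boot/config/disk.cfg
        # if it reads startArray="yes", change it to "no", reboot,
        # check the drives, then start the array manually from the GUI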