Talos

Members · 102 posts
  1. Thanks for clarifying my options, itimpl. I finished the array expansion overnight and all is working as expected. Cheers!
  2. Hi, I currently have a 9-drive array. It consists of:

     • Parity - 6TB (Toshiba X300)
     • Disk 1 - 6TB (Toshiba X300)
     • Disk 2 - 6TB (Toshiba X300)
     • Disk 3 - 6TB (Toshiba X300)
     • Disk 4 - 3TB (Hitachi 7200rpm)
     • Disk 5 - 3TB (Hitachi 7200rpm)
     • Disk 6 - 3TB (Hitachi 7200rpm)
     • Disk 7 - 3TB (Hitachi 7200rpm)
     • Disk 8 - 3TB (Hitachi 7200rpm)

     I've just moved my system into a new case (a Meshify 2), which gives me capacity for 11 drives. I've bought two new 8TB IronWolfs and plan to expand my array by two drives. I've already pre-cleared both the new IronWolf drives and both came back clean. The new array configuration I want is:

     • Parity - 8TB (new IronWolf)
     • Disk 1 - 6TB (same position)
     • Disk 2 - 6TB (same position)
     • Disk 3 - 6TB (same position)
     • Disk 4 - 3TB (same position)
     • Disk 5 - 3TB (same position)
     • Disk 6 - 3TB (same position)
     • Disk 7 - 3TB (same position)
     • Disk 8 - 3TB (same position)
     • Disk 9 - 8TB (new IronWolf)
     • Disk 10 - 6TB (old parity)

     I've had a search on the forum and seen the posts about the parity swap method, and other people saying to just replace the parity drive, but I'm confused and don't want to risk blanking my array and losing all my data. Previously when I've upgraded my drives I've built a whole new server and copied the data from the old server to the new one, but I don't have the luxury of doing that this time. So what is the best method for me to 1) upgrade my parity to support the larger drives in the array, and 2) repurpose the old parity into the array and add in the second IronWolf? Any help would be greatly appreciated. Cheers!

     Edit: Forgot to mention I'm running Unraid Pro 6.8.3 currently.
  3. For my personal situation, I run my server off a UPS, so I haven't had an ungraceful power-down for many years now. I also only use it as a media/file server; I don't run any complex Dockers or VMs or anything that risks crashing the server. This is mainly because I only run this off a Celeron G550 with 8GB RAM, so it doesn't have heaps of excess grunt for those, so perhaps the negatives of BTRFS wouldn't be a big obstacle for me compared to the positive of bitrot detection. Might do a bit more reading while I wait for the pre-clears to finish on those drives to see which path I decide to take.

     Edit: OK, I've done a bit more reading and I think I will just stick with XFS and run the risk with bitrot.
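The bitrot detection being weighed up here can be approximated on XFS with a simple checksum manifest: hash every file once, then re-verify later and see what changed. A minimal sketch using standard tools (the directory and manifest paths are illustrative, not anything Unraid-specific):

```shell
#!/bin/sh
# Build and verify a per-file checksum manifest -- a poor man's bitrot check
# for filesystems like XFS that don't checksum file data themselves.
# DIR and MANIFEST are illustrative paths, not Unraid conventions.
DIR=./media
MANIFEST=./checksums.md5

mkdir -p "$DIR"
echo "hello" > "$DIR/sample.txt"            # demo file so the sketch is runnable

# Record: one MD5 line per file under DIR.
find "$DIR" -type f -exec md5sum {} + > "$MANIFEST"

# Verify later: md5sum -c flags any file whose content has changed.
md5sum -c "$MANIFEST"
```

Re-running the verify step on a schedule (e.g. before a parity check) is enough to tell silent corruption apart from a healthy disk.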
  4. OK, cool, thanks mate. So maybe my best approach might be to build a new array with the new drives, copy the content over to those, and then add in the other drives one by one, copying over the data to each after I convert it to the new filesystem. So now, which filesystem do I go with? Is BTRFS better with its bitrot detection, or does the stability of XFS make it the preferred filesystem?
  5. Hi guys, I've been running Unraid for a long time now (~15 years) and have reached the capacity of my current server. I currently have 9x 3TB drives in there and have just bought 4x 6TB drives to start upgrading the capacity; they are currently undergoing preclear to ensure there are no issues with them.

     My existing array is using the ReiserFS filesystem, and I've noticed some posts indicating that XFS or BTRFS are the preferred options nowadays. I don't have slots in my server to include more drives, so I will be replacing 4 of the existing 9 drives (including the parity). I've been trying to find a definitive guide on how to upgrade my server capacity and also change the filesystem, but I'm still confused. I obviously have to replace my parity drive first with a 6TB drive to make sure that it is the largest drive in the system, but when I go to replace the other drives one by one and rebuild my data onto them, will it rebuild back to ReiserFS, or can I tell it to change to XFS/BTRFS? And if so, which is the best option to choose? Cheers! Talos
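For anyone in the same spot: a rebuild is sector-for-sector, so it reproduces the old filesystem. The usual workaround is to convert one disk at a time: format a spare or new disk as XFS, copy a ReiserFS disk's contents onto it, then re-use the emptied disk for the next round. A dry-run sketch of one round; the mount points follow Unraid's /mnt/diskN convention but the disk numbers are made up, and on a real server you would run the commands instead of echoing them:

```shell
#!/bin/sh
# One round of a disk-by-disk filesystem conversion, printed rather than run.
# SRC = a ReiserFS data disk, DST = an empty disk already formatted XFS.
# Disk numbers are illustrative; drop the 'echo' to execute on the server.
SRC=/mnt/disk1
DST=/mnt/disk9

echo "rsync -avPX ${SRC}/ ${DST}/"   # copy all data, preserving perms/xattrs
echo "diff -r ${SRC} ${DST}"         # optional sanity check before reformatting SRC
```

The trailing slash on the source matters to rsync: it copies the disk's contents rather than nesting a disk1 directory inside the destination.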
  6. Well, I broke down the server yesterday. Pulled the HSF off the CPU and gave it a vacuum, as it was a tad clogged. Put it all back together and ran memtest for 12 hours: multiple passes with zero failures. Started the array back up in maintenance mode and ran reiserfsck on all 8 of the data disks, and all reported zero issues; disk 5 had 1 safe link, but apart from that nothing stood out. It also completed a parity check overnight without parity errors. Hopefully it was just the CPU overheating from the dust and all will be OK now. I'll let it run as normal and see if the issue presents itself again.
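The per-disk check described there is easy to script. A dry-run sketch, assuming Unraid's /dev/mdN naming for array disks and the array started in maintenance mode; it prints the read-only check commands rather than running them:

```shell
#!/bin/sh
# Print a read-only reiserfsck invocation for each of the 8 data disks.
# /dev/mdN is an assumption based on how Unraid exposes array devices;
# drop the 'echo' to actually run the checks (array in maintenance mode,
# so parity stays consistent with anything the check touches).
for n in 1 2 3 4 5 6 7 8; do
    echo "reiserfsck --check /dev/md${n}"
done
```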
  7. Lol, thought I put the 6.1.9 in the original post, but it says 2.1.9 instead. The high temp is probably because we've had a heat wave here in the past few weeks, hitting 47°C on several days, so the air con has been struggling. I'll break the machine down today and make sure it's all clean inside, and then do the filesystem check. Thanks John.

     I've been running Unraid for about 8 or so years now, so I started long before all the Docker and VM stuff. I love Unraid just because it works perfectly as a media server for all my machines, although I am now starting to consider building a VM-capable server so that I can run both my storage and my everyday desktop on there and do away with a separate machine.

     Sent from my XT1635-02 using Tapatalk
  8. Hi guys, I have a server which has been running 24/7 for a few years now without issue until this week. I am experiencing an issue whereby all my /mnt/Disk1, /mnt/Disk2, etc. Samba shares stop responding. I can access the GUI via the browser and run commands, but if I try to access any of my drives from Windows Explorer or from any of the media players on any of my PCs on the network, it just times out. It first happened on Monday this week and has now just happened again.

     If I try to stop the array via the button on the Main GUI tab, it gives me the following across the bottom of the screen, and then the GUI freezes up after a few minutes also: "Stopping Docker... Stopping libvirt... Stop AVAHI... Stop SMB... Spinning up all drives... Sync filesystems..." After this I was forced to hit the reset button and then go through the parity check process due to the unclean shutdown. It passed the parity check on powerup after a few hours with zero errors detected.

     Today I came home from work and I had the same issue. All shared drives were unresponsive, but I could access the GUI and pull down a diagnostics zip file, which I've attached. Don't have the faintest idea where to start with this one. Could it be an issue with Samba dying, or am I looking at a dying drive, or something else altogether? Server details are as follows:

     • Unraid 6.1.9
     • ASRock B75 Pro3-M
     • Intel Celeron G550 @ 2.60GHz
     • 8GB RAM
     • M1015 HBA
     • 9x 3TB Toshiba ACA drives (8 on the M1015 and parity on the mobo)

     Any help would be greatly appreciated, thanks guys. Cheers.

     theburrow-diagnostics-20170217-1625.zip
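When the shares hang like this but the GUI still works, a few commands from an SSH session can narrow down whether smbd itself has wedged. A dry-run sketch; /etc/rc.d/rc.samba is the Slackware-style init script Unraid uses, but treat the exact paths as assumptions for this release:

```shell
#!/bin/sh
# SMB triage steps, printed rather than executed; run each line by hand
# on the actual server.
# smbstatus  - lists sessions/locks, and itself hangs if smbd is wedged
# ps         - confirms the smbd processes are still alive
# tail       - checks the syslog for disk or out-of-memory errors at hang time
# rc.samba   - last resort: restart Samba without rebooting the whole box
for cmd in \
    "smbstatus" \
    "ps aux | grep '[s]mbd'" \
    "tail -n 50 /var/log/syslog" \
    "/etc/rc.d/rc.samba restart"
do
    echo "$cmd"
done
```

If smbstatus hangs while ps shows smbd alive, the problem is often a disk or filesystem stall underneath Samba rather than Samba itself, which is worth checking before blaming the shares.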
  9. So in your dealings with ASRock tech, have they indicated they can replicate these issues people appear to be having? Sent from my Galaxy Nexus using Tapatalk
  10. I'd like to see some power usage stats for these Avotons from a Kill A Watt if you guys have one. Sent from my Galaxy Nexus using Tapatalk
  11. This would have to be damn near a perfect case for most setups. I've only just recently dropped from 3x 5-in-3 hotswaps to 3x 3-in-3 hotswaps. If this had been available at the time, for sure I would have grabbed it instead. Will definitely consider this for any future builds I do. Sent from my Galaxy Nexus using Tapatalk
  12. I'm the same as lagamm above. Purchased my initial key in 2008, so I'm coming up on 6 years running without fail, with mostly 24/7 system uptime that whole time. The USB stick is an old Lexar Firefly 4GB, which was the recommended stick at the time. I've had 3 hard drive failures in those 6 years and a motherboard failure also, so I don't really see the USB drive as the weak point given Unraid's usage pattern. Sent from my Galaxy Nexus using Tapatalk
  13. I picked up 9 of the Toshiba DT01ACA300 7200rpm 3TB drives recently. These have a 3-year warranty here in Australia. My first 3 completed pre-clear last night. Two of the drives took 24.5 hours and the other took 26 hours. Not sure why the third drive took longer, but they all passed with 0 errors. Drive temps maxed at 37°C; it's quite warm in Australia at the moment, though. Started pre-clear on the next three straight after. With the 3-year warranty and with them being faster and $50 per drive cheaper than the WD Reds, it's sort of a no-brainer. Sent from my Galaxy Nexus using Tapatalk 2
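As a rough sanity check on those times: assuming the standard three-pass preclear cycle (pre-read, zero, post-read) touches about 9 TB in total on a 3 TB drive, a 24.5-hour run works out to roughly 100 MB/s average, which is believable for these drives:

```shell
# Average throughput for a 3 TB drive preclear that took 24.5 hours,
# assuming three full passes (pre-read, zero, post-read) over the disk.
awk 'BEGIN { printf "%.0f MB/s\n", (3 * 3e12 / 1e6) / (24.5 * 3600) }'
# prints: 102 MB/s
```

By the same arithmetic, the 26-hour drive averaged about 96 MB/s, so the gap is small enough to be ordinary drive-to-drive variation.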
  14. Picked up 9 of these drives today. The local distributor here in Australia has decided to offer a 3-year warranty on them, so that made my mind up. I'm on 2 weeks' holiday from work, so I'll start queueing up the pre-clears, I guess. Hopefully I'll get them all done before I have to return to work. Sent from my Galaxy Nexus using Tapatalk 2
  15. I'll be curious to see reports of your temps and speeds. Still not much info out there. In regards to warranty, it's only 2 years here in Australia, so it definitely varies from country to country. Sent from my Galaxy Nexus using Tapatalk 2