
wisem2540

Members · 426 posts


  1. Hey guys, I have not used unRAID in about 5 years. I still have my old server, which was actually infected with ransomware (haha); since then I moved to hosting with Hetzner and Gdrive. Right now I have 120TB of content. I have found some used 10, 12, 15, and 18TB drives, so in theory I could probably get away with fewer than 20 drives.

     So here's what I'm thinking: I need a case, and I like Norcos with SAS backplanes a lot. I've seen people use NetApp shelves, and I like this idea, but I feel like they would be power hungry and loud.

     I know unRAID has probably come a long way in the last 5 years. My last build was 10 drives with 1 parity drive. Moving forward I'll be looking for maximum fault tolerance. What's available now for parity and disaster recovery?

     So my requirements would be:
     1. Front-load (maybe top-load) drive caddies
     2. Enough space for cache with fault tolerance
     3. Multiple parity drives
     4. 150TB initial build
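A quick back-of-the-envelope on the drive count: this is a hedged sketch, not from the post itself, assuming the build standardizes on 18TB drives with unRAID's dual parity and reserves nothing extra for cache or overhead.

```python
import math

# Assumptions (illustrative, not stated in the post):
# 18 TB data drives, dual parity, 150 TB usable target.
USABLE_TB = 150
DRIVE_TB = 18
PARITY_DRIVES = 2  # unRAID supports up to two parity drives

data_drives = math.ceil(USABLE_TB / DRIVE_TB)   # 9 data drives -> 162 TB usable
total_drives = data_drives + PARITY_DRIVES      # 11 drives total

print(data_drives, total_drives)  # prints: 9 11
```

So a 150TB target with 18TB drives fits in 11 bays, comfortably under the 20-drive estimate even before mixing in the smaller 10/12/15TB disks.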
  2. Hey guys. A little back story: I used unRAID for years, and then, through a series of unfortunate events while relocating for work, I got ransomware. I didn't really have the upload bandwidth to support Plex anymore, so I took the opportunity to lease a Hetzner server with synchronous gig. I've been using that for about 4 years.

     Now I have fiber in my area and am considering moving everything back in house. Looks like not much has changed with unRAID from a hardware recommendation perspective. I found some refurbished 18TB drives that should work fine, and I'll use a Norco similar to my last one. I need to store roughly 130TB. The only changes I'll make are probably dual parity and some kind of cache fault tolerance.

     The question is: how easy is it to migrate that much data from a Google team drive with service accounts? What kind of download limits am I looking at? And should I keep Google at $12 per month as a disaster plan? In that case I'd be looking for bidirectional sync, which I believe rclone supports.
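For a rough sense of the migration window, here is a hedged estimate assuming the fiber link sustains a full 1 Gbit/s with no provider-side throttling; in practice Google-side download quotas (commonly reported as a per-user daily cap) may dominate instead, so treat this as a lower bound.

```python
# Illustrative estimate only: 130 TB over a sustained 1 Gbit/s link.
TB = 1e12               # terabyte in bytes (decimal, as drive vendors count)
data_bytes = 130 * TB
link_bps = 1e9          # assumed sustained 1 Gbit/s

seconds = data_bytes * 8 / link_bps
days = seconds / 86400
print(f"{days:.1f} days")  # prints: 12.0 days
```

Roughly twelve days at full line rate, so any daily download cap on the Google side would stretch that considerably.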
  3. Just a question for the group. I am a Norco user myself and love the simplicity of having a backplane. The way I figure it, that's 100 bucks for a case and 180 dollars for the enclosures. It seems like for 300 bucks I can buy a Norco case with a backplane and not have to fuss with 15 SATA cables and power Y-splitters. Yes, I have a rack for my Norco, but lately it does just fine sitting on the floor. What is the big draw for a solution like this? It does not appear to be price. Side note - this IS a solid deal for this product; I am more questioning the overall solution. Thanks
  4. Still seeing this. Right now I have Ombi offline; it is basically unusable.
  5. Hello, I just installed Ombi over the weekend and have noticed that sometimes logging on or clicking "Request" on a movie will cause the web interface to hang. Has anyone else seen this? I'd be happy to provide any logs.
  6. Hey all, it seems like for a while now CP has been backing up 0 files. I don't know if I missed an upgrade or something. Any help would be appreciated. Attached backup history: history.txt
  7. My server rebooted from a power glitch. Now here is what Plex Requests tells me:

     object with id F5Zb9Drm3hfnupCeG is not valid. trying to migrate..
     released expectedConstructor
     createdAt expectedNumber
     approval_status required
     seasons required
     /home/.meteor/packages/meteor-tool/.1.3.3_1.1qvqfi6++os.linux.x86_64+web.browser+web.cordova/mt-os.linux.x86_64/dev_bundle/server-lib/node_modules/fibers/future.js:280
       throw(ex);
       ^
     Error: Date of request must be a number
       at getErrorObject (packages/aldeed_collection2-core/lib/collection2.js:437:1)
       at [object Object].doValidate (packages/aldeed_collection2-core/lib/collection2.js:420:1)
       at [object Object].Mongo.Collection.(anonymous function) [as update] (packages/aldeed_collection2-core/lib/collection2.js:173:1)
       at packages/davidyaha_collection2-migrations/packages/davidyaha_collection2-migrations.js:156:1
       at SynchronousCursor.forEach (packages/mongo/mongo_driver.js:1022:16)
       at Cursor.(anonymous function) [as forEach] (packages/mongo/mongo_driver.js:869:44)
       at validateCollection (packages/davidyaha_collection2-migrations/packages/davidyaha_collection2-migrations.js:114:1)
       at [object Object].registerMigration [as attachSchema] (packages/davidyaha_collection2-migrations/packages/davidyaha_collection2-migrations.js:45:1)
       at app/lib/collections/collections.js:545:4
       at app/lib/collections/collections.js:549:4
     Exited with code: 8
     Your application is crashing. Waiting for file change.

     I'm assuming this means my DB is corrupt somehow. I have no problem starting over if there is a way I can at least manually re-add my requests. Can I pull out plain text somehow?
  8. I found the "my.service.xml" file in the conf folder of the CrashPlan container, but I can't find the "ui.info" file. Where is the "ui.info" file located?
  9. Minutes makes much more sense...especially if you factor in the time between my initial post and when I posted the snippet. Thanks for that.
  10. Here is a snippet of the SMART data from unRAID. Let me know what you think.

      SMART Attributes Data Structure revision number: 10
      Vendor Specific SMART Attributes with Thresholds:
      ID# ATTRIBUTE_NAME        FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
        1 Raw_Read_Error_Rate   0x000f 114   099   006    Pre-fail Always  -           67489640
        3 Spin_Up_Time          0x0003 093   087   000    Pre-fail Always  -           0
        4 Start_Stop_Count      0x0032 098   098   020    Old_age  Always  -           2747
        5 Reallocated_Sector_Ct 0x0033 100   100   036    Pre-fail Always  -           0
        7 Seek_Error_Rate       0x000f 072   060   030    Pre-fail Always  -           19735809
        9 Power_On_Hours        0x0032 012   012   000    Old_age  Always  -           77170
       10 Spin_Retry_Count      0x0013 100   100   097    Pre-fail Always  -           0
       12 Power_Cycle_Count     0x0032 100   100   020    Old_age  Always  -           55

      If you are referring to the VALUE, rather than the RAW_VALUE, then it's 12, and that does not seem right either.
  11. One of my older Seagate 2TB drives is showing a RAW value of 77086; if my math is correct, this thing is almost 9 years old. Am I looking at the right value here? I am curious to see some people who have been at this a while and how old some of their drives might be. 9 years seems pretty long for an HDD these days. If I am right, I don't even wanna know what I paid for this thing 9 years ago.
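The age math above checks out. As a sketch, here is the Power_On_Hours conversion, pulling the raw value from a smartctl-style attribute line like the one in the snippet (the line text below just mirrors the quoted output):

```python
# Hedged sketch: Power_On_Hours raw value -> drive age in years.
# The attribute line mirrors the smartctl output quoted above.
SMART_LINE = "9 Power_On_Hours 0x0032 012 012 000 Old_age Always - 77086"

raw_hours = int(SMART_LINE.split()[-1])  # RAW_VALUE is the last column
years = raw_hours / (24 * 365.25)        # 8766 powered-on hours per year
print(f"{years:.1f} years")              # prints: 8.8 years
```

So 77086 powered-on hours is about 8.8 years of continuous spin, which matches the "almost 9 years old" estimate (assuming the drive was powered on nearly 24/7).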
  12. I've done this, and am having no luck getting the console to connect. I continue to get an 'unable to connect to backup engine'. My console shows version 4.4.1; I don't know how to find out whether the engine is the correct version. When I looked at the ui_info file initially it was set to the loopback 127.0.0.1, not 0.0.0.0, so I've tried that, the 172.17.42.1, as well as my unRAID IP 192.168.x.x.

      I should note that I've had the CrashPlan docker running for some time (a year or so?), and I've only just now tried the -Desktop container to try and manage it. I can see online that the backup was recently run, so I know that part is working. I haven't had a way to manage this in some time. I used to have another Linux server that would connect to it, but that quit working a while ago and I didn't have time to figure it out then, as long as it kept running. But now, with the ease of docker, I want to get it back up and functional.

      I'd be happy to remove the container and start over with a new crashplan /config directory, but I'm not confident how to retain my CrashPlan ID. I have a lot of data backed up on CrashPlan Central, and several friends backing up to me.

      I've also done the "turn it off & turn it back on" approach:
      Stop both the CrashPlan and CrashPlan-Desktop dockers
      Update the CrashPlan docker, give it a few minutes
      Update the CrashPlan-Desktop docker

      Any thoughts/direction?

      ***EDIT*** Thanks to LEIFGG's signature link!! I found my issue was in the my.service.xml file: it was listening on 127.0.0.1. I changed it to 0.0.0.0 as suggested, restarted everything, and it's all working well. Thanks!!

      Thank you! You solved this issue for me! http://lime-technology.com/forum/index.php?topic=44190.0 It was driving me NUTS
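The fix described in the edit above can be scripted. This is a hedged sketch: it assumes the listening address lives in a `<serviceHost>` element inside my.service.xml, as in older CrashPlan releases; check the element name in your own file before relying on it.

```python
import re

def open_service_host(xml_text: str) -> str:
    """Replace a loopback-only serviceHost with 0.0.0.0 (all interfaces).

    Assumes the CrashPlan my.service.xml uses a <serviceHost> element;
    verify against your actual config file.
    """
    return re.sub(
        r"(<serviceHost>)127\.0\.0\.1(</serviceHost>)",
        r"\g<1>0.0.0.0\g<2>",
        xml_text,
    )

# Hypothetical minimal fragment for illustration:
sample = "<serviceUIConfig><serviceHost>127.0.0.1</serviceHost></serviceUIConfig>"
print(open_service_host(sample))
# prints: <serviceUIConfig><serviceHost>0.0.0.0</serviceHost></serviceUIConfig>
```

After rewriting the file, the engine has to be restarted for the new bind address to take effect, which matches the "restarted everything" step in the post.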
  13. gfjardim, On your GUI container for Crashplan, the Crashplan app does not launch for me. What am I missing?
  14. Take the duckdns part out. It should just be MYNAME... at least that's how it works for me.
  15. Good. If you want to see how it's working, run htop from a telnet session. Screenshot attached.