smakovits

  1. Any thoughts on these logs, and should I be concerned? I don't want to be sitting on a time bomb that results in some sort of data loss.
  2. Couldn't access the server, so I consoled into the system, only to find a kernel panic error. Not sure what is happening with my server, so I wanted to get a review of things.
     1. Last week my cache drive crashed, and now I get this message: "Share appdata set to cache-only, but files / folders exist on the array."
        1a. Could this be the result of the recovery folder on the main array? It is there from when I was saving off my data (see the sketch below for checking where those files actually live).
     2. "/boot/config/unraid_notify.cfg corrupted" - not sure what this means. I am able to access the file and make edits both in the UI and in the file itself; nothing appears corrupt.
     3. Not really sure about the kernel panic. Prior to the cache drive failing, the system was up over 100 days, and before that probably longer without issue.
     4. At the time of the cache failure, log memory was pegged at 100%, so not sure if that is because of the failing cache or something else. It currently sits at 2%.
     Just trying to level set on these recent troubles the system is experiencing.
     tower-diagnostics-20220414-0750.zip
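     A minimal way to see which appdata files ended up on the array rather than the cache (a sketch; the /mnt/diskN paths are the standard Unraid mounts, and the share name is assumed to be appdata):

       # list any appdata folders sitting on array disks
       ls -d /mnt/disk*/appdata 2>/dev/null
       # show how much data each one holds
       du -sh /mnt/disk*/appdata 2>/dev/null

     If the recovery folder is what trips the warning, it should show up in that listing.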
  3. Got it. In looking for an answer, I did see a post mentioning the switch to a directory-based setup vs the docker img file as a result of excessive writes. Never actually looked or paid attention, but given the 5 year life of the SSD, I would not be surprised if excessive writing did it in early. I do not believe I can get the old SMART data, unless it is in a log on the server somewhere (a sketch for watching write totals on the replacement is below).
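     For the new drive, the lifetime write counter can be watched to test the excessive-writes theory. A hedged sketch, assuming a SATA SSD that exposes attribute 241; the attribute name and units vary by vendor, and /dev/sdX is a placeholder:

       # read the raw write counter from SMART
       smartctl -A /dev/sdX | grep -i written
       # for Total_LBAs_Written, terabytes written = raw_value * 512 / 1e12

     Old SMART reports, if any, would only exist in previously saved diagnostics zips; the counters themselves live on the drive and were lost with it.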
  4. Thanks. I will ignore it. I wish devices reported uniformly, to avoid such silly potential panic that my server is suddenly overheating while the other 15 drives appear normal. However, in the wake of just losing the cache drive, and this being the replacement, it was genuine cause for concern. I will proceed to ignore the message for now. This also helps explain the jump in value when I made changes to try and increase cooling: before the change it was 70, and after it was 80. Given the above explanation, I did well, lowering the actual temp from 30 to 20 (quick conversion sketch below). Appreciate the info.
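     For reference, on drives that report Airflow_Temperature_Cel this way, the normalized VALUE is 100 minus the temperature in degrees C, so 70 means 30 C and 80 means 20 C. A one-liner to do the conversion (assumes /dev/sdX and the usual smartctl column layout; the 100-minus convention holds for this drive but is not universal):

       # derive the actual airflow temperature from the normalized value
       smartctl -A /dev/sdX | awk '$1 == 190 {print 100 - $4 " C"}'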
  5. But is having a high airflow temp a bad thing? The normal temp reading appears more normal at 19. So it is simply the high value, and the red on airflow temp, that has me worried. However, if the normal temp is much lower, maybe the reading is a false positive and can simply be ignored?
  6. Recently replaced the cache and restored the docker img. Switched the drive from btrfs to xfs. During this I realized the docker img is still btrfs on the xfs cache; is this bad? Can it be easily resolved? (A quick check is sketched below.)
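     The image carries its own filesystem independent of the drive it sits on, so a btrfs docker.img on an xfs cache is not itself a problem. To confirm what is inside the image (a sketch; the path assumes the default system share location):

       # identify the filesystem embedded in the docker image
       file /mnt/cache/system/docker/docker.img

     If you do want it to be xfs, the usual route is deleting and re-creating the image from Settings > Docker (newer Unraid releases offer an xfs image type), then restoring containers from their templates.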
  7. Not sure what is happening. Had what appears to be a failed SSD. Added a new SSD as cache, formatted xfs, and everything was looking OK except this line:

       190 Airflow_Temperature_Cel 0x0023   081   067   026    Pre-fail  Always   Never   81
  8. Quick update. Went through the FAQ page about attempting recovery. I was able to mount the disk and copy the contents off. While waiting, I saw the posts about btrfs becoming corrupted, so I figured let's switch to xfs. Stopped the array, changed the file system, started it, and told it to format the disk. The format seems complete (it took a while), but the disk still shows as unmountable. Is it possible the disk died? It is an SSD, and I was able to read off the old data, so I can only assume it is not dead, but I am not certain what to make of this.
     One more update: perhaps the drive was barely hanging on and the format killed it. I ran fdisk -l and the disk was not listed, so I rebooted, and on reboot the disk no longer shows in the UI (checks for telling a vanished device from a mount problem are sketched below).
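     A few hedged checks to distinguish a device that has dropped off the bus from one that merely will not mount (/dev/sdX is a placeholder):

       # does the kernel still enumerate the disk at all?
       lsblk
       # any detach or reset errors from the controller?
       dmesg | grep -iE 'sd[a-z]|ata[0-9]'
       # if the device node exists, ask the drive itself
       smartctl -H /dev/sdX

     If the disk is absent from lsblk and dmesg shows link resets, the hardware, not the filesystem, is the problem.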
  9. Server was running for over 130 days; before that it was probably up even longer. It has been a while, so no idea what happened. Had an issue getting to my docker containers; they were suddenly unavailable. Logged into the UI and saw Log memory at 100%. Went to Docker, show advanced, thinking I would be able to see what was sucking up all the memory, but did not see anything. Kids wanted to watch a movie, so I told the server to reboot from the web UI. It rebooted, and now the cache shows unmountable. That is obviously why docker will not start, so now I am dead in the water trying to determine what happened. Attaching logs post-reboot; not sure if they tell us what we need (a sketch for inspecting the log space next time is below). In the problems page, I see 2 errors that seem bad:
     1. cache (SanDisk_SD8SB8U1T001122_164103801883) has file system errors ()
     2. /boot/config/unraid_notify.cfg corrupted
     tower-diagnostics-20220409-1618.zip
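     For what it's worth, the dashboard "Log" figure is the /var/log tmpfs, not system RAM; when it hits 100%, something is flooding the syslog. A hedged look before the next reboot:

       # how full is the log filesystem?
       df -h /var/log
       # which files are eating it?
       du -sh /var/log/* | sort -h
       # peek at the most recent messages
       tail -n 50 /var/log/syslog

     A cache device throwing errors in a loop is a common way to fill it.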
  10. Yeah, so the question is: should it be possible as an unassigned device, or do I want to put it in another system? As an unassigned device it only shows 802GB as its size vs 3TB, and it is not mounted and is not shown as mountable. The only immediate option in the UI is format, which I will not do. I assume there might be a way to access the data via the CLI, or do we just consider it dead? (A read-only mount attempt is sketched below.)
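      A hedged CLI attempt, mounting read-only so nothing further is written to the failing disk (device and partition numbers are placeholders; the old array disk would still be reiserfs):

        # confirm what the kernel thinks the disk and its partitions look like
        fdisk -l /dev/sdX
        # try a read-only mount of the data partition
        mkdir -p /mnt/recovery
        mount -t reiserfs -o ro /dev/sdX1 /mnt/recovery

      If the reported capacity is wrong at the kernel level too, the drive electronics may be failing, and no mount will fix that.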
  11. Mounted as USB, but I believe that limits the SMART capabilities. I can try other things if needed.

        smartctl -H /dev/sdr
        smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.10.28-Unraid] (local build)
        Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org

        === START OF READ SMART DATA SECTION ===
        SMART overall-health self-assessment test result: FAILED!
        Drive failure expected in less than 24 hours. SAVE ALL DATA.
        Failed Attributes:
        ID# ATTRIBUTE_NAME        FLAG     VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
          5 Reallocated_Sector_Ct 0x0033   001   001   005    Pre-fail Always  FAILING_NOW 2005
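      When a drive hangs off a USB bridge, smartctl often needs the SAT pass-through requested explicitly; full attributes may come through where the default probe does not (a sketch, assuming the bridge supports SAT):

        # force SATA pass-through over USB and dump everything
        smartctl -a -d sat /dev/sdr

      Either way, 2005 reallocated sectors and a normalized value already below its failure threshold say the drive is done; read from it, never write to it.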
  12. This perhaps is a misunderstanding. The drive that is now xfs is the new 8tb drive I put in to replace the failing 3tb drive. The failed/failing drive is still reiserfs and has my data; it is simply sitting on the counter. The 8tb drive is xfs, new and empty, and never had anything on it. All of which makes me think maybe I try USB mounting and start copying folders. The question is: is this where I use reiserfsck, knowing unbalance was not cutting it prior to swapping the disks? (The read-only check is sketched below.)
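      If reiserfsck comes into it, the read-only check is the safe first step; the repair modes write to the disk and are better run against an image of a failing drive than the drive itself (device placeholder again, with the filesystem unmounted first):

        # read-only consistency check; reports what a repair would have to do
        reiserfsck --check /dev/sdX1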
  13. As a side topic: I thought we want xfs these days. I know I started going xfs for that reason with new disks, but this now makes me question my doings.
  14. I'll try this to get my most critical data off first and see where it gets me.
  15. Are you suggesting there is corruption restored to the new disk, or that it is better off since I formatted it for the new disk? As for getting the original data back, can we do a bit-level recovery if I mount it as an external USB? They are CRC errors and not a clicking disk, so not sure what is possible. I know when I was trying to get it all off with unbalance, it started at about 30 hours and 192 errors, and when I stopped it, it was 50 hours in with 75 hours remaining and over 1200 errors, so not sure what is possible with it. (A ddrescue sketch is below.)
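      For bit-level recovery from a disk that still reads but throws errors, the usual tool is GNU ddrescue: it images the readable areas first, retries the bad spots later, and its mapfile lets it resume. A sketch (ddrescue is not part of stock Unraid, and the device and destination paths are placeholders; the destination needs 3TB free on a different disk):

        # first pass: copy everything that reads cleanly, skipping bad areas
        ddrescue -d -n /dev/sdX /mnt/disk1/failing3tb.img /mnt/disk1/failing3tb.map
        # second pass: go back and retry the bad areas a few times
        ddrescue -d -r3 /dev/sdX /mnt/disk1/failing3tb.img /mnt/disk1/failing3tb.map

      The image can then be mounted read-only over loop, or handed to reiserfsck, without stressing the dying hardware any further.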