grandprix

Members
  • Posts: 345
  • Joined
  • Last visited

Converted

  • Gender: Undisclosed


grandprix's Achievements

Contributor (5/14)

1 Reputation

  1. Roger that. I suppose, given the circumstances, BOINC being overwhelmed is a great thing. PS: sending you a PM about something totally unrelated.
  2. Understood now. These are my settings. In fact, I'm not yet limiting via pinning (though it seems that won't matter, from what I recall reading above), only via a usage percentage from within the client configuration. I don't suppose I have any control over how many tasks it grabs? From what I recall reading years ago, the tasks/units have to be made by a person, and I suspect that, given the COVID epidemic, they've "dried up". I appreciate the response.
  3. I somewhat wish I had the issue others are experiencing with the LSIO BOINC docker using "too many" resources, as mine is the complete opposite: it's using only two CPUs/HTs. I admit to being ignorant of how BOINC (or any shared-computing type of program) works, so perhaps the "task" or "work" only needs two CPUs/HTs? Still, it seems like a waste of the 16-core/32-thread machine it is running on.
  4. Crud. OK. Appreciate the reply. Off to run a correcting check I go, then. Or, since data was inadvertently written to the array already, will that affect the correction? Here's hoping the UPS settings don't flip again; strange, I've never had that happen before.
  5. It's been a long time, years, since I've had sync errors (and when I did, it was a bad cable, on version 4.7, MAYBE 5.x), so forgive me. I cannot locate a clear, concise, to-the-point answer on this: is the parity check performed when Unraid starts up after an improper shutdown a correcting or a non-correcting one? I believe I recall having to do a second parity check after the initial one in cases like the one I experienced (*I'll get to that), but for some reason I'm second-guessing myself into believing it does a correcting check from the get-go on start-up after an improper shutdown. So I'm effectively "lost". I'm also a little concerned because a darn external (to Unraid) program wrote some data to the array before I got back home. I always thought it would do a parity check but not bring up the array in such a situation; I was wrong, thus the purpose of this post. I appreciate the response. * I'm not sure how it got switched, but my "Turn off UPS after shutdown" setting got flipped from Yes to No. The power went out for an hour (according to logs), came back on for a whopping 4 minutes, then went out again (before the UPS could recharge). Thus, the issue above.
  6. Hate to necro this post after 9 months, but it seems that if you are using a Marvell 88SE9215-based controller (in my case specifically the SYBA/IOCrest PEX40064) that is going bad, it will just do random reads; I'm not quite sure of the process, but it's evidently some communication between the card and whatever drive is on CH1. It crapped out entirely on me two days ago. Got the replacement today and am rebuilding the drive now (after I confirmed via SMART on another computer, using the same SATA cable, that the drive was fine and it wasn't the cable either). It's on 24/7 and had been, and I guess the card was around 2 years old by the time it began its multiple reads on CH1. It was a small PITA, but I got the stock heatsink off the new one, attached a copper heatsink with long "fins", and attached a fan to it. Fortunately, it is the last PCIe card in the lineup, so the heatsink and fan neither obstruct anything nor are obstructed. I'm hoping to get at least two years out of this one.
  7. In my quest to find out why a few (roughly 50-100) artworks were missing from movies, and/or why it wouldn't let me manually upload new ones (a great deal would show only a clip from the video), I'm stumped. It seems like a permissions issue, but I wouldn't know where to begin, as I'm using the linuxserver.io docker. I admit I'm far more ignorant of docker operation/containers than I probably should be. I cannot seem to find a rhyme or reason as to why the movies missing artwork are actually missing it. I suspect that's an issue on Plex's scraping end, but I really don't know. I've changed nothing besides finally going from 6.5 to 6.5.2 two weeks ago, though in talking with my family (friends/users/etc.) it seems the problem of missing artwork has been around for longer than I would have liked, so I cannot say for certain when this all began, sorry. Log attached; if any others are needed, please don't hesitate. The SQLite errors concern me, suggesting perhaps DB corruption? But I'm not sure that would tie in with permission issues, if there even is corruption; hence the entire PMS log. Thanks in advance to any and all. I decided to take a peek at the directory in line 2142 (of the attached log) and noticed that the files in /config/Library/Application Support/Plex Media Server/Metadata/Movies/f/64abc8a978d651df67dfefd48882464097bc49c.bundle/Contents/_combined/posters/ are user root, group root, whereas in another (or most others) they are owned by user nobody (and the names begin with @com.plexapp.agents, rather than just com.plexapp.agents), FWIW (see the ownership-check sketch after this list). Plex Media Server.log
  8. My "time on" field is blank and all is well. I'm using the same model, with the exception of the VA rating (using 1500 for each PS). Curious: for the first screenshot, would you mind taking another, but only after you have initiated and confirmed (green ball) that all drives are spun up? It's worked for me, albeit it has yet to get "tested" after upgrading to 6.4, FWIW.
  9. The issue still persists; I'm at a total loss. 162 million reads on this drive at the moment, versus 42 million reads on the drive that is currently storing the newest media (so the most popular drive at the moment, so to speak). I'm afraid at this rate the platters will get a hole worn in them <grin>
  10. No such luck. Stopped all dockers for 24 hours (actually a little longer, as I simply forgot about them until family went to watch Plex). Disk 20 still sees an unusually high number of reads. The File Activity plugin, for what it's worth, seems to catch all files opened (touched?) fairly well. I'm no closer to figuring this out, unfortunately. Oddly, with the dockers stopped, it didn't just have a high number of reads, it racked up 42 million reads (mind you, I cleared statistics after stopping all dockers and disabling cache_dirs). (shrug)
  11. Will do. Cleared the statistics and stopped cache_dirs. Activity from Sonarr, Radarr, and SAB appeared to show up in the File Activity logs, as the processes (open, etc.) coincided with activity in those dockers' logs. I do wonder, though, if it is cache_dirs, why it would hammer just one disk? I admit I'm ignorant of precisely what cache_dirs does (aside from keeping the directory structure in memory?).
  12. Did this card go well or did it nuke Tinker off the Earth?
  13. Another 16 million reads, now up to 29 million. According to the File Activity plugin/app, still only the two files were "open"ed, same as before the reads climbed another 16 million. Driving me a bit bonkers; that seems like an awful lot of reads for no obvious reason. Including the diagnostics; as mentioned, my eyes didn't catch anything, but that might be part of the issue as well. Hoping someone may be able to lead me to some idea why disk 20 is getting hammered with reads. TIA! tower-diagnostics-20180111-1135.zip
  14. After attempting to find the cause of this on my own, I believe another set (or sets) of eyes, or ideas for that matter, may be what the doctor ordered. I have one drive in my array with a great deal of reads. I cleared the statistics just 24 hours ago, and on a low-usage unRAID box this drive (disk 20) has managed to climb to 13 million reads, despite all the other drives being under 400,000 reads (besides disk 2, which contains this week's programming, so that makes sense). Per the File Activity app/plugin, the result over that same 24-hour span since clearing statistics was just two files opened. That hardly explains the 13 million reads, I would think? The logs show nothing of significance, just business as usual; every entry concerning sdh (disk 20) appeared normal and consistent with the other drives in the array. I'm at a loss. It's not part of a spin-up group (the act of spinning up doesn't increase reads though, does it?). As a result the drive stays powered up, now having a power-on hours count of 3 years, whereas disks 18 and 19 were put into the system at the same time but have power-on hours of only 1 year 8 months. I read that to mean unnecessary wear on disk 20, but from what? I dunno (see the read-counter sketch after this list). Attachment showing the drive stats for a 24-hour period (give or take two hours) after clearing stats.
  15. For what it's worth, these settings are working out nicely for me. It's a finicky piece of software when it comes to naming conventions, both for existing media (even Plex isn't as finicky), which is fixable, and for grabbed media, which stinks a little, as most of the time those names "make sense" to me. I cannot for the life of me figure out yet how to manually import (I select the artist, but it doesn't lock in, so to speak), but I'm hanging around the Lidarr Discord server, where there is a responsive, friendly crew.
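
Not a fix from the thread, just a minimal sketch of the ownership check hinted at in post 7: walk the Plex Metadata tree and report any bundle files that are not owned by the expected user (nobody in the post). The path and user name come from the post above; everything else (report-only behaviour, skipping unreadable files) is an assumption.

    import os
    import pwd

    # Path and expected owner taken from the post above; adjust for your own container.
    METADATA_ROOT = "/config/Library/Application Support/Plex Media Server/Metadata/Movies"
    EXPECTED_USER = "nobody"

    expected_uid = pwd.getpwnam(EXPECTED_USER).pw_uid

    for dirpath, _dirnames, filenames in os.walk(METADATA_ROOT):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.lstat(path)  # lstat so symlinked posters are reported, not followed
            except OSError:
                continue  # unreadable or vanished file; skip it
            if st.st_uid != expected_uid:
                # Report only; any chown is left as a deliberate manual step.
                print(f"unexpected owner uid={st.st_uid}: {path}")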
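
Likewise not from the thread: a minimal sketch of one way to watch the kernel's own read counter for the suspect device from posts 13-14, to confirm when the reads actually climb and whether the jumps line up with anything in the File Activity log. The device name sdh comes from the post; the sampling interval is arbitrary, and it assumes a Linux /proc/diskstats.

    import time

    DEVICE = "sdh"         # disk 20 in the post above; adjust for your array
    INTERVAL_SECONDS = 60  # arbitrary sampling interval

    def reads_completed(device):
        # Field 4 of each /proc/diskstats line is the "reads completed" counter.
        with open("/proc/diskstats") as stats:
            for line in stats:
                fields = line.split()
                if fields[2] == device:
                    return int(fields[3])
        raise ValueError(f"device {device!r} not found in /proc/diskstats")

    previous = reads_completed(DEVICE)
    while True:
        time.sleep(INTERVAL_SECONDS)
        current = reads_completed(DEVICE)
        print(f"{DEVICE}: +{current - previous} reads in the last {INTERVAL_SECONDS}s (total {current})")
        previous = current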