RobJ

Members

  • Posts: 7,135
  • Joined
  • Last visited
  • Days Won: 4

RobJ last won the day on March 27, 2017

RobJ had the most liked content!

6 Followers

Profile Information

  • Member Title: The Cat in the Lime Hat
  • Gender: Male
  • Location: Tampa, Florida
  • Personal Text: Epox MF570 (nForce570)


RobJ's Achievements

Mentor (12/14)

173 Reputation

Community Answers

  1. I think you missed my point... The fact that you have two drives of different sizes and different formats raises the question in my mind: how can parity be valid when I swap out a 2TB ReiserFS drive for a 4TB XFS drive? I know it works out now, but at the time it was a stretch to comprehend this. From a parity standpoint, size doesn't matter, format doesn't matter, data doesn't matter; nothing matters but the bits on every drive, whether you are using them or not. From a parity standpoint, drives are all the same size, exactly as big as the parity drive; they just have zeroes past the end of the physical drive. Here are links explaining parity (the second has more links): Parity-Protected Array, from the Manual, and How does parity work?
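     A minimal sketch of that idea, with made-up bit values and shell arithmetic (not unRAID code, just an illustration):

       # Parity is the XOR of the bits at the same offset on every data drive.
       # A smaller drive simply contributes zeros past its physical end, so a
       # 2TB -> 4TB swap (whose new space is also zeros) leaves parity unchanged.
       disk1=1; disk2=0; disk3=0   # disk3's bit is "past the end" of a smaller drive
       parity=$(( disk1 ^ disk2 ^ disk3 ))
       echo "parity bit: $parity"  # prints 1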
  2. Done. That's why my very first step is to recommend a parity check, so that you know there are no drive problems to take care of, and that parity is good; there's no reason it should not stay good throughout. Keep it coming! I have also added a summary of the method at the beginning, and a new Method section covering the various factors involved, with comparisons between the different methods. The methods are only summarized. Will it be helpful? Probably not; so many more words added...
  3. That should be: rsync -avPX /mnt/disk3/ /mnt/disk7 Note the slash after the 3. Without that slash, you will end up with a disk3 folder on Disk 7 (/mnt/disk7/disk3). With the slash added, you will end up with the entire contents of Disk 3 on Disk 7, and no disk3 folder.
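     To preview the difference before committing to a copy, rsync's standard --dry-run flag can help (paths as in the post above):

       # Without the trailing slash: creates /mnt/disk7/disk3/...
       rsync -avPX --dry-run /mnt/disk3 /mnt/disk7

       # With the trailing slash: copies the contents of disk3
       # directly into /mnt/disk7/...
       rsync -avPX --dry-run /mnt/disk3/ /mnt/disk7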
  4. tunetyme, would you mind checking the change I've made?

     Here's the old Step 16:

       You should see all array disks with a blue icon, a warning that the parity disk will be erased, and a check box for Parity is already valid; IMPORTANT! click the check box, make sure it's checked to indicate that Parity is already valid or your Parity disk will be rebuilt! then click the Start button to start the array; it should start up without issue and look almost identical to what it looked like before the swap, with no parity check needed; however the XFS disk is now online and its files are now being shared as they normally would; check it all if you are in doubt

     And here's the new version:

       You should see all array disks with blue icons, and a warning (All data on the parity drive will be erased when array is started), and a check box for Parity is already valid. VERY IMPORTANT! Click the check box! Make sure that it's checked to indicate that Parity is already valid or your Parity disk will be rebuilt! Then click the Start button to start the array. It should start up without issue (and without erasing and rebuilding parity), and look almost identical to what it looked like before the swap, with no parity check needed. However the XFS disk is now online and its files are now being shared as they normally would. Check it all if you are in doubt.

       Before you click the Start button, you may still see the warning about the parity drive being erased, but if you have put a check mark in the checkbox for Parity is already valid, then the parity drive will NOT be erased or touched. It is considered to be already fully valid.
  5. Paul, I take back much of what I said - I never saw that comment by JonP, or any similar comments that there was *ANY* Ryzen support available yet, at all! I was also completely unaware that any Ryzen support had been added to kernel 4.9.10; ALL of the comments I had seen suggested they were waiting for 4.10 or 4.11. I do apologize for that. But aren't there comments that LimeTech wasn't completely successful yet? I take that to mean they aren't done making the appropriate changes. Also, have you seen any info on whether KVM/QEMU and related packages have been updated for Ryzen yet? That's fairly important, I think. There is certainly a lot of interest in this thread, probably a lot more than anyone here realizes. And Paul, while we can't possibly pay you for the investigative work you have done, it is invaluable, and it has been and will continue to be very helpful!
  6. I'm sorry guys, but this all seems way too premature! You're trying to get older software to work with the newest hardware, without any compatibility updates specific to that new hardware. I would not expect JonP or anyone else to participate here until they had first added what they could: a Ryzen-friendly kernel with Ryzen hardware support, a Ryzen-tweaked KVM and its related modules, and various system tweaks to optimize the Ryzen experience. After that, they can join you and participate. It's like having an old version with bugs, and an update with fixes - why would a developer want to discuss problems with the old one? They are always going to want you to update first and test; then you can talk. There's so much great work in this thread, especially from Paul, but it's based on the old stuff, not on what you will be using, so it seems to me that much of the effort is wasted. Patience!!!
  7. Not sure, but you may have to run those commands within the docker container environment. There's a way to exec a shell within it, possibly covered in the Docker FAQ. But it would probably be better built into the container startup somewhere; try asking the elasticsearch container author to add it.
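     For reference, a minimal sketch of getting a shell inside a running container (the container name here is just a guess; use whatever docker ps reports):

       # Find the container's name or ID
       docker ps

       # Open an interactive shell inside it (use /bin/sh if bash isn't present)
       docker exec -it elasticsearch /bin/bash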
  8. tunetyme, you started off by saying that you followed the wiki step by step, but take a look again at Step 16. Somehow you missed that one. I do apologize for the instructions seeming convoluted to you, but the step was there.

     I looked at each of the ways you were suggesting to do it, and I have to be honest: they will not only take two or three times as long, they also seem more convoluted to me, once you add in all the little details needed. I still believe (and it's just my opinion!) that if you want the easiest and fastest way to do it, AND want to preserve parity and your User Share configuration, then the wiki method is the best one. Obviously I need someone else to write it though! Except for preparing the initial disk, there is absolutely no clearing done, no Preclearing done, no parity builds done, and no file is ever copied more than ONE time.

     I'll add more words to Step 16 to display the message you saw ("All data on the parity drive will be erased when array is started"), then tell you to ignore it and click the checkbox to indicate "Parity is already valid". Perhaps that will make it clearer?

     After the copying of a drive is done, it only takes a few minutes before you can start copying the next drive: stop the array, New Config with Retain: All, swap the drive assignments, correct the file system formats, optionally start and stop the array to check it, change the file format of the cleared drive to XFS, start the array and allow it to be formatted, and you're ready to start copying again.

     Here's a summary of the wiki method:
     - Steps 1 - 7 are just prep: figuring out a strategy and preparing the initial drive. Plus, I recommend a parity check, so you don't run into drive problems during the process, and because if parity is not good, there's no point in preserving it.
     - Steps 8 - 9 are copying, with optional additional verification (a sketch of the copy-and-verify commands follows this post).
     - Steps 10 - 18 are just the few minutes of swapping the drives and formatting for the next copy.
     - Step 19 just tells you to loop back to Step 8 to start copying again.

     At the end, I do tell you there are a few redundant steps in there, but I prefer having them because it seems safer that way. Overall, there are just 3 steps - prep, copy, swap - then repeat. But I really do welcome improvements and suggestions for simplification, or even full rewrites.

     I'd like to add a summary of the various possible methods at the top. I think if a user read a summary first, like the one above, they would be less likely to feel the method is convoluted. Plus, if it suddenly did start to feel convoluted or wrong, they would know they had gone off track somewhere.

     There is a faster method, if you have a lot of data: unassign the parity drive, turn off User Shares, and skip the swapping. That will make the copying faster (no parity drive), but you will still have to allow a day or two afterward to rebuild parity. And while you won't have to worry about the complications of messed-up inclusions and exclusions and file duplication during the process, you will still have to locate where everything is afterward and correct all of the inclusions.

     The advantage of the wiki method is that the array always stays the same, except for brief intervals (a few minutes each) before you start, during the process, and after you're done. The only difference is that each logical drive is now a different physical drive. Parity was always preserved, and so was your User Share configuration, and except for those brief intervals normal operation was fine.
     If you had a second parity drive, it would need to be rebuilt, but that is true for almost all methods. This would be a good feature request, and I agree with that. It's not really a flaw, as New Config basically resets the array back to nothing. It's as if the array has never seen the disks you may then assign to it, which is also why it MUST present the message that parity will be rebuilt when you use New Config: it assumes this is a new array, with new disks it has never seen, and a new parity drive. The Retain feature is essentially brand new for us; it modifies New Config to reset the array configuration but retain some or all of the previous assignments. What we need now is for the Retain feature to also retain the existing file system info for each drive. This would save steps for us, and avoid some risk and confusion.

     I'm sorry if I sound defensive about what I've written; I do welcome improvement. jonathanm has been pointing out one of the constraining elements of my method, and I want to comment on that and other things, like the problems of unBALANCE, but in another post.
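     As promised above, a sketch of the copy-and-verify step (Steps 8 - 9), with placeholder disk numbers - adjust to your own source and destination:

       # Copy the entire contents of the source disk onto the freshly
       # formatted XFS disk (note the trailing slash on the source)
       rsync -avPX /mnt/disk3/ /mnt/disk7

       # Optional verification pass: -c compares checksums, -n makes it a
       # dry run so nothing changes; any file it lists differs
       rsync -nrcv /mnt/disk3/ /mnt/disk7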
  9. I don't know for your specific card, but some cards have their own BIOS setup screen or jumpers - a way to make significant changes in the card's PCI configuration. I have no idea what your card has. Perhaps Tom will check on that; I can't speak for him. But remember, it's not clear what the issue is; it may not be the driver.
  10. Uh oh! Now I'm feeling pressured! As you've probably noticed, I'm easily and constantly side-tracked! And I have a bunch of little projects I'm either working on or wanting to work on, plus other projects I don't want to work on, but my relatives do want me working on! I'll try to put a priority on it though. The first draft won't be a step-by-step, but rather a summary of what has to be done. The more I've thought about it, the more reservations I have about the use of unBALANCE. It's doable, but there are special issues that can come up, and I don't know what happens then. I first need to create a post in the unBALANCE thread with some questions as to what happens in certain cases. The only simple case (I think!) is where the user doesn't use includes or excludes, and all shares exist on all drives. Any other case is going to have extra issues and steps.
  11. There's a FAQ entry for that. Let me know if it needs improvement.
  12. The card or driver is broken and not working. Here are the relevant syslog sections: It looks like memory region conflicts (possibly with itself?!?). Both Microsoft and Linux kernel devs have gotten good at detecting and working around hardware 'quirks', and perhaps Microsoft has done a better job here. You can try reconfiguring the card, to see if it will improve its PCI declarations. Or perhaps Tom ( @limetech ) will have an idea once he sees this. It could also be useful to see @ezhik's syslog, to compare the same sections.
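     If it helps while digging, these standard Linux commands show how the kernel assigned the card's memory regions (not specific to this card):

       # Show each PCI device's BARs (memory regions) and driver in use
       lspci -vv

       # Search the kernel log for resource-assignment complaints
       dmesg | grep -iE 'BAR|resource|conflict'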
  13. And if you add that, it might be nice to have a timed logout based on an activity timer (e.g. if there's no user activity for 30 minutes, log the session out). I doubt many users would use it, but it would be useful in special circumstances, such as shared machines.
  14. This looks like it might be an old issue returned, involving extended attributes and AFP. I'm guessing you have accessed this drive over AFP at some point? Read the discussions and fixes found in these threads:
     - A "mover" issue? (Mover uses rsync too)
     - Running mover does not successfully move items
     - Extended Attributes Fix
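     For reference, a minimal sketch of inspecting and removing extended attributes from the console (the file path and attribute name are purely hypothetical; getfattr/setfattr come from the Linux attr package):

       # List all user extended attributes on a file
       getfattr -d /mnt/disk1/somefile

       # Remove one specific attribute by name
       setfattr -x user.someattribute /mnt/disk1/somefile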