boof

Members
  • Posts: 800
  • Joined
  • Last visited
  • Gender: Undisclosed

boof's Achievements
  • Rank: Collaborator (7/14)
  • Reputation: 0

  1. This isn't just a legacy issue; it affected my Ubuntu 18.04.3 LTS clients as well. Mounts went stale quite quickly after the initial mount. There are other threads linking it to cached data and the mover, but it looks very much like a general NFS issue. (A quick stale-handle check is sketched after this list.)
  2. The VMware courses / training all use vSphere running virtually within vSphere (VMware inception) to provide lab sessions anyway - or at least the ones I've been on have - so there's no expectation on the VCP side to run it on real tin. The only question would be how well you could run ESX in KVM. vCenter will be fine as it just needs a Windows Server guest. I don't know the answer for ESX in KVM, I'm afraid; ESX can be picky at the best of times on real tin, so it could go either way. Though being honest, I would actually tend to agree with buying a new box - just go and get a little HP MicroServer, stick ESXi on it and then build your vSphere environment virtually in there. Possibly easier in the long run and something quite common to do.
  3. There is also eCryptfs, which has some level of kernel adoption. It's what Ubuntu (perhaps others?) uses to provide encrypted home directories and the like, I believe. It can behave in the same way as EncFS (normal user, single directory) - see the sketch after this list. https://en.wikipedia.org/wiki/ECryptfs
  4. Great read - thanks. mergerfs is a new one on me; like you, I've found the alternatives you listed in the article have always had an issue that stopped them being attractive. I'm excited to go and take a look at mergerfs and see if it resolves them.
  5. Had the same thoughts as you. I bought the same case and rammed it full of 1-3TB drives years back. As those drives show signs of failure, or I need to upgrade capacity, I've been replacing them with 8TB drives and consolidating. I'm now down to only using 2 of the 3 SAS HBAs I was using before for physical drive connectivity - whilst having way more storage thanks to density! So yes - for me, I don't need a case that size anymore by a long shot. If I ever do another full refresh I'll be looking at smaller cases - there are some neat mini / micro ATX cases around, with some interesting builds already in the forums here. It's a hard thing to decide to do, though, as the larger case isn't causing any issues and migrating would only cost money - so in that regard the large case was a good purchase in terms of longevity. Failure rates / data density aren't a factor in my thinking at all really. Drives will fail either way, and how you handle that should be the same regardless.
  6. The general rule of thumb is don't over-provision memory for guests. As above, ballooning is a last resort, but it isn't magic and can't always fix the issue. You could investigate whether KVM can use disk as swap for guest memory, or migrate to VMware (which does do swapping) and accept the performance penalties when ballooning fails and the hypervisor swaps your guest memory out. My hunch is that KVM *does* try to swap out guest memory (rather than, as you're seeing, killing the guest - or more accurately having the KVM process OOM-killed?) but uses the system swap space to do so. And I don't think Unraid runs with swap? I can't check my Unraid machine right now to confirm. If that's the case, you could configure Unraid with some swap space (there's a sketch of that after this list) and see if that will allow KVM to swap guest memory. Performance penalties, but your VMs would keep running.
  7. Ballooning isn't a magic bullet. It can only reap memory the guest OS has allocated but isn't actually using (cache etc.) and/or apply other clever tricks (dedupe of memory across guests etc.) to try and free some real memory; the libvirt sketch after this list shows how to see what's actually reclaimable. If all your VMs are genuinely, actively paging all of their memory, the hypervisor won't be able to do much about it. You're rolling the dice if you force the hypervisor into a position where it needs to start thinking about this. I'm not sure of KVM's default behaviour if guests exhaust memory - VMware will start swapfiling, killing performance but at least keeping the guests up. That's my understanding anyway.
  8. Very old firmware - without double-checking, P7. Getting these cards flashed was a nightmare for me (I only have one motherboard that will allow it), so I've been in no rush to keep updating them when they've been working (apparently) fine. 10TB drives - no. The only ones I'm aware of are the HGST models, which need host drivers for SMR and so wouldn't work regardless. No idea if they're even out yet.
  9. I'm only backing up the things that are irreplaceable (photos, documents etc.). Anything that can be generated again (DVD rips, FLAC etc.) I don't bother with. That said, that's only because of the amount of time it would take to push it all offsite. Don't feel bad about pushing 5TB to CrashPlan. It's what they advertise and what you pay for. I've read anecdotal tales from others with far more than that in there. They'll have designed their business model to cope with a small proportion of people eating lots of space whilst most people only use a small amount.
  10. I have three M1015s and have some 8TB disks hanging off them. As above, I'd presume 5TB would be OK.
  11. This is exactly what the current Docker image is. It's based off the Phusion base image, which is just a slightly 'Docker-friendly' tweaked Ubuntu install.
  12. I'd inject a note of caution that the CrashPlan updater has a history of not being that great at doing updates - this isn't the first time it's gone a bit wrong. It doesn't update that often, though, which helps smooth this out. That doesn't change the argument for keeping the Docker build up to date (it's easily forkable if necessary; in theory just change the path in the current one to the new CrashPlan install bundle), but it does mean that regardless of the state of any install, CrashPlan itself will always attempt to auto-update itself, and it may not go smoothly. This applies equally to CrashPlan running inside Docker images, inside your own VMs, on bare-metal installs etc. It was ever thus with CrashPlan, sadly. Having an updated Docker image won't help with this, because by the time an updated image is necessary your running instance will already have tried to update itself. It will either have succeeded, in which case you don't care about a Docker image update, or failed, in which case your backups are broken until you notice and/or until you notice there is a Docker image update and act accordingly to pull it down. Certainly for this specific upgrade none of this would have helped with the changed client token exchange requirements which, as far as I can tell, aren't documented by CrashPlan - so we'd always have been at the mercy of someone bright in the community figuring it out. All three of my pre-existing CrashPlan installs (three separate machines, only one running inside an Unraid Docker container, the other two installed directly on the hosts) needed a bit of a prod during this round of updates to come back to life.
  13. Depending on the licensing costs, and how granular they are (i.e. will you have to absorb up-front costs or can you 'pay as you go' per implementation), you may have a path to another tier of Unraid licensing. Pay more for an 'Unraid 6 Double Protection' license to unlock the feature, and that uplift covers your backend licensing fees plus a little on top for your trouble. It may be low volume in terms of sales, but that might not matter. Or, if it means not needing all disks to spin up, it may be a very popular license option for customers. Or the backend licensing fees could be so low that they can just be rolled into the Unraid base without any fuss and the general Unraid license cost increased by a small amount across the board. Charging a fee for a new Unraid license or upgrade come version 7 for existing users (presuming this feature would be included) might not cause any problems. If you'd charged again for version 6 I would have happily paid, given the improvement in feature set. Something like this would bring enough additional value to the product that I would see it as reasonable to pay for v7 if necessary.
  14. Hopefully this will be an option. One of the appeals of Unraid for me is maximising drive spin-down. I'm happy to take a parity write penalty as a result - and mitigate it as best I can with the cache drive. I appreciate others will have different needs, but hopefully this won't be an enforced change.
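
On the stale NFS mounts mentioned in post 1: from a client, a stale handle shows up as filesystem calls on the mount point failing with ESTALE. A minimal sketch of checking for that, assuming a hypothetical mount point /mnt/unraid (the path is illustrative, not from the original post):

```python
#!/usr/bin/env python3
"""Probe an NFS mount point and report whether the handle has gone stale."""
import errno
import os
import sys

MOUNT_POINT = "/mnt/unraid"  # hypothetical client-side NFS mount point

def probe(path: str) -> int:
    try:
        os.listdir(path)          # any syscall that touches the mount will do
    except OSError as exc:
        if exc.errno == errno.ESTALE:
            print(f"{path}: stale NFS file handle - remount needed")
            return 1
        raise                     # some other error (permissions, missing dir, ...)
    print(f"{path}: mount looks healthy")
    return 0

if __name__ == "__main__":
    sys.exit(probe(MOUNT_POINT))
```

Run from cron, something like this at least turns "stale quite quickly after the initial mount" into a timestamped log entry rather than a surprise.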
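
Post 3 mentions eCryptfs behaving like EncFS for a single per-user directory. A minimal sketch of driving the stock ecryptfs-utils helpers from Python; it assumes the ecryptfs-setup-private / ecryptfs-mount-private tools are installed and that the passphrase prompts are answered interactively, and it is not taken from the original post:

```python
#!/usr/bin/env python3
"""Set up and mount a per-user encrypted ~/Private directory with eCryptfs."""
import os
import subprocess

def setup_private() -> None:
    # One-time setup: creates ~/.Private (ciphertext) and ~/Private (mount point).
    # The helper prompts interactively for the login and mount passphrases.
    subprocess.run(["ecryptfs-setup-private"], check=True)

def mount_private() -> None:
    # Mount ~/Private as the current (non-root) user, EncFS-style.
    subprocess.run(["ecryptfs-mount-private"], check=True)

def unmount_private() -> None:
    subprocess.run(["ecryptfs-umount-private"], check=True)

if __name__ == "__main__":
    if not os.path.isdir(os.path.expanduser("~/.Private")):
        setup_private()
    mount_private()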
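
Post 6 suggests giving Unraid some swap so the host can page guest memory out instead of OOM-killing a KVM process. A minimal sketch using standard Linux tools, run as root; the file location and size are assumptions rather than anything from the post, and this isn't an officially supported Unraid configuration:

```python
#!/usr/bin/env python3
"""Create and enable a swap file so the host can page out under memory pressure."""
import subprocess

SWAP_FILE = "/mnt/cache/swapfile"   # hypothetical location on the cache drive
SWAP_SIZE = "8G"                    # hypothetical size

def enable_swap() -> None:
    # Allocate the file, lock down permissions, format it as swap, enable it.
    # Note: on some filesystems (e.g. btrfs) a swap file may need dd and extra
    # handling instead of fallocate.
    subprocess.run(["fallocate", "-l", SWAP_SIZE, SWAP_FILE], check=True)
    subprocess.run(["chmod", "600", SWAP_FILE], check=True)
    subprocess.run(["mkswap", SWAP_FILE], check=True)
    subprocess.run(["swapon", SWAP_FILE], check=True)

def show_swap() -> None:
    # List active swap devices/files to confirm it took effect.
    subprocess.run(["swapon", "--show"], check=True)

if __name__ == "__main__":
    enable_swap()
    show_swap()
```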
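
On the ballooning caveats in post 7: with KVM/libvirt you can see directly that the balloon can only take back what the guest isn't using, by comparing the balloon target with the guest's reported "unused" figure. A minimal sketch using the libvirt Python bindings; the domain name is a placeholder and the 20% headroom figure is an arbitrary illustration, not from the post:

```python
#!/usr/bin/env python3
"""Inspect a guest's balloon stats and shrink its balloon target if it has slack."""
import libvirt  # pip install libvirt-python

DOMAIN = "my-guest"  # placeholder domain name

def main() -> None:
    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName(DOMAIN)

    stats = dom.memoryStats()            # values in KiB; needs the balloon driver in the guest
    actual = stats.get("actual", 0)      # current balloon target
    unused = stats.get("unused", 0)      # memory the guest has but isn't using

    print(f"balloon target: {actual} KiB, guest unused: {unused} KiB")

    # Only memory the guest isn't actively using can be reclaimed; leave ~20% of
    # the unused figure as headroom (arbitrary illustration).
    reclaimable = int(unused * 0.8)
    if reclaimable > 0:
        new_target = actual - reclaimable
        dom.setMemoryFlags(new_target, libvirt.VIR_DOMAIN_AFFECT_LIVE)
        print(f"shrunk balloon target to {new_target} KiB")

    conn.close()

if __name__ == "__main__":
    main()
```

If the guest is genuinely using all of its memory, "unused" will be near zero and there's nothing for the balloon to reclaim, which is the point the original post makes.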