
boof

Members
  • Posts: 800
  • Joined
  • Last visited

Converted
  • Gender: Undisclosed

boof's Achievements

Collaborator (7/14)

Reputation: 0

  1. This isn't just a legacy issue - it affected my Ubuntu 18.04.3 LTS clients as well. Mounts go stale quite quickly after the initial mount. There are other threads linking it to cached data and the mover, but it's very much a general NFS issue. (A quick way to reproduce and check for the stale handle is sketched after this list.)
  2. The VMware courses / training all use vSphere running virtually within vSphere (VMware inception) to provide lab sessions anyway - or at least the ones I've been on have. So there's no expectation on the VCP side to run it on real tin. The only question would be how well you could run ESX in KVM. vCenter will be OK, as it just needs a Windows Server guest; I don't know the answer for ESX in KVM, I'm afraid. ESX can be picky at the best of times on real tin, so it could go either way. Though, being honest, I would actually tend to agree with buying a new box - just go and get a little HP MicroServer, stick ESXi on it and then build your vSphere environment virtually in there. Possibly easier in the long run, and something quite common to do.
  3. There is also eCryptfs, which has some level of kernel adoption - it's what Ubuntu (perhaps others?) uses to provide encrypted home directories and the like, I believe. It can behave in the same way as encfs (normal user, single directory); a short setup sketch follows after this list. https://en.wikipedia.org/wiki/ECryptfs
  4. This is exactly what the current Docker is. It's based off the phusion base image, which is just a slightly 'docker friendly' tweaked Ubuntu install. (A quick way to see this for yourself is sketched after this list.)
  5. I'd inject a note of caution that the CrashPlan updater has a history of not really being that great at doing updates. This isn't the first time it's gone a bit wrong; it doesn't update that often, though, which helps smooth this out. This doesn't change the argument for keeping the Docker build up to date (it's easily forkable as necessary - in theory just change the path in the current one to the new CrashPlan install bundle), but it does mean that, regardless of the state of any CrashPlan install, CrashPlan itself will always attempt to auto-update itself, and it may not go smoothly. This applies equally to CrashPlan running inside Docker images, inside your own VMs, on bare-metal installs, etc. It was ever thus with CrashPlan, sadly. Having an updated Docker image won't help with this, because by the time an updated image is necessary your running instance will already have tried to update itself. It will either have succeeded, in which case you don't care about a Docker image update, or failed, in which case your backups are broken until you notice, and/or until you notice there is a Docker image update and act accordingly to have it pulled down. Certainly for this specific upgrade none of this would have helped with the changing client token exchange requirements which, as far as I can tell, aren't documented by CrashPlan - so we would always have been at the mercy of someone bright in the community figuring it out. All three of my pre-existing CrashPlan installs (three separate machines, only one running inside an unRAID Docker container, the other two installed directly on the hosts) needed a bit of a prod during this round of updates to come back to life.
  6. It's been a while since I set it up and I don't have it to hand to check - but I don't think you need to touch the switch. Just make sure the vmxnet driver is working OK in the hosts alongside VMware Tools. You should be able to use ethtool or similar to check the 'physical' link speed as the OS sees it. You obviously need to do this on all the machines you want to talk to each other at this rate. For example, on Linux 3.0.31-unRAID:

         root@unraid:~# ethtool eth0
         Settings for eth0:
             Supported ports: [ TP ]
             Supported link modes:   1000baseT/Full
                                     10000baseT/Full
             Supports auto-negotiation: No
             Advertised link modes:  Not reported
             Advertised pause frame use: No
             Advertised auto-negotiation: No
             Speed: 10000Mb/s
             Duplex: Full
             Port: Twisted Pair
             PHYAD: 0
             Transceiver: internal
             Auto-negotiation: off
             MDI-X: Unknown
             Supports Wake-on: uag
             Wake-on: d
             Link detected: yes

     Edited to add: I think the vmxnet driver might be in the kernel in newer RC releases, lessening the need for VMware Tools. But you should really have the tools installed anyway for the other features they bring.
  7. Just mount unRAID in that VM using NFS or (more likely, given the NFS issues at present) Samba; example mount commands are sketched after this list. Speeds will depend, but if you're going to a cache drive and have the nice 10G internal VMware networking set up correctly, it should be more than adequate.
  8. Got there in the end. I have a UEFI BIOS and, it turns out, you have an additional hoop to jump through. When you press Ctrl+C / Ctrl+H to enter the BIOS of the M1015, nothing much happens - but the server appears to soft reboot and you get the system BIOS screen again. At this point you can go into the system BIOS as normal, and you now have a new boot option called something like 'option ROM'; you can then tell the system BIOS to boot from that, which gets you into the M1015 option ROM / BIOS. What a pain, but it worked. From there I just added the second card into the ordering within the M1015 BIOS and that's it! Thanks for the help - your tip on setting the order made me quite comfortable that I was doing the right thing. This is on an ASRock Z68 Extreme4 board.
  9. Unfortunately I've now hit a brick wall in getting two of these running. I've never been able to enter the card's BIOS on any system I've tried (including the one that flashed them). Now, when I have two cards inserted in my unRAID system, only one HBA is detected, along with an 'Adapter configuration may have changed, reconfiguration is suggested!' warning during the drive scan in the card's BIOS. I presume this is because I now have a second card and they need to be tickled into playing nicely with each other. However, as I can't enter the BIOS, I can't really do much. Any ideas?
  10. If you're having problems at the -cleanflash stage, try rebooting before attempting it. I.e. back up the SBR and wipe the BIOS, but before doing megarec -cleanflash 0, reboot. You won't see the card BIOS, but you might find you can then cleanflash successfully. I do this on my ASUS P6T to get it past that stage. Once cleanflashed, continue with the instructions as normal (i.e. reboot, then use 5it.bat or similar); the order is sketched after this list. I've done two cards this way and was bashing my head off the wall at the -cleanflash stage until I figured this out. Apologies if this has already been mentioned in this thread - it's a big 'un! I posted this because I did a second card this afternoon and had to remember it all over again, so I'm putting it on the interweb for posterity.
  11. Plenty of UK forumites, so I'm sure something could be arranged if the end result was still cost effective. If the US handles imports and taxes anything like we do, then I can imagine it probably won't be, though.
  12. I'm guessing any shipping and tax considerations make it not worthwhile to buy from the UK? I know all about the costs from the US to the UK, but little about the other way round.
  13. Whilst that's a shame - you guys get so many excellent deals through MIRs that we in the UK will never see - it's nice for us to have something useful like that for once. Although it's a shame you can't get the benefit as well.
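
For item 1 above, a minimal sketch of one way to reproduce and confirm the stale-handle behaviour from a Linux client. The server name "tower" and the export path are hypothetical - substitute your own unRAID export.

    # mount the unRAID export over NFS (hypothetical server/share names)
    sudo mkdir -p /mnt/media
    sudo mount -t nfs tower:/mnt/user/media /mnt/media
    # once the problem hits, access attempts fail with "Stale file handle"
    stat /mnt/media
    ls /mnt/media
    # a forced unmount and a fresh mount gets things going again, until the next time
    sudo umount -f /mnt/media
    sudo mount -t nfs tower:/mnt/user/media /mnt/media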
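
For item 3 above, a minimal sketch of the per-user, single-directory use of eCryptfs via the ecryptfs-utils helpers. Package and helper names are the stock Ubuntu ones, nothing specific to this thread.

    # install the userspace tools (Ubuntu/Debian package name)
    sudo apt-get install ecryptfs-utils
    # create an encrypted ~/Private directory for the current user
    ecryptfs-setup-private
    # mount and unmount it as a normal user, much like encfs
    ecryptfs-mount-private
    ecryptfs-umount-private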
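
For item 4 above, a quick sketch showing that the phusion base image really is just a docker-friendly Ubuntu with its own init system. The tag is illustrative - pin whichever release is current.

    # pull the base image and confirm it's an Ubuntu install underneath
    docker pull phusion/baseimage:0.9.18
    docker run --rm phusion/baseimage:0.9.18 cat /etc/os-release
    # run a command under phusion's init system (/sbin/my_init as PID 1)
    docker run --rm phusion/baseimage:0.9.18 /sbin/my_init -- echo "hello from the base image"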
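
For item 7 above, example mount commands from inside the VM. The server name "tower", the share name "media" and the credentials are hypothetical - substitute your own.

    sudo mkdir -p /mnt/media
    # NFS
    sudo mount -t nfs tower:/mnt/user/media /mnt/media
    # or Samba/CIFS (needs cifs-utils installed in the guest)
    sudo mount -t cifs //tower/media /mnt/media -o username=me,password=secret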
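
For item 10 above, a rough sketch of the order described, run from the DOS/FreeDOS boot stick. Only "megarec -cleanflash 0" comes from the post itself; the SBR backup/wipe commands are the ones from the usual M1015 cross-flash guides and may differ in your copy of the instructions.

    REM back up the SBR, then wipe it (per the usual cross-flash guide)
    megarec -readsbr 0 backup.sbr
    megarec -writesbr 0 sbrempty.bin
    REM *** reboot here before attempting the clean flash ***
    megarec -cleanflash 0
    REM reboot again, then carry on as normal (5it.bat or similar)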