
  1. I can't see anything, but I wanted to check to make sure since this is such a long thread. I am in the process of migrating to XFS, and I have found that one RFS disk that will mount happily with unRAID won't mount at all with UD. dmesg shows:

     [Tue Jun 19 15:40:36 2018] REISERFS warning (device sdc1): sh-2021 reiserfs_fill_super: can not find reiserfs on sdc1

     So before I get into this, are there any known relevant issues?

     Update: well, it turns out it's pretty obvious why UD can't mount this Reiser disk... that's because it is XFS.

     # file -sL /dev/sdc*
     /dev/sdc: DOS/MBR boot sector; partition 1 : ID=0xee, start-CHS (0x0,0,2), end-CHS (0x3ff,255,63), startsector 1, 4294967295 sectors, extended partition table (last)
     /dev/sdc1: SGI XFS filesystem data (blksz 4096, inosz 512, v2 dirs)

     Any ideas why UD is trying to mount an XFS disk as ReiserFS?
  2. NAS

    Docker stats in GUI

    "After tweeting this article out @benhall pointed out that actually the ro setting on the volume mount doesn't have a lot of effect in terms of security. An attacker with ro access to the socket can still create another container and do something like mount /etc/ into it from the host, essentially giving them root access to the host. So bottom line is don't mount docker.sock into a container unless you trust its provenance and security…"

    In general security terms, you know things are fundamentally not right with a generic approach when you're trying to herd this many cats just to make it less insecure... not secure... less evil.
  3. NAS

    [REQUEST] Traefik reverse proxy

    There is always a risk of breakout with any container, but this is the holy grail hack of such a system. To be clear about what this sock feature does: essentially it gives the container root access, as a member of the docker group, on the HOST machine... not the container... the host. This is a specific feature required by the traefik container and not required by almost any other container. It is very, very rare, and for good reason.
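The breakout path behind this warning can be sketched in two steps. This is a sketch of the attack pattern only (image names are illustrative, and the dangerous command is built as a string rather than executed); do not run the real thing on a machine you care about.

```shell
# Step 1 (the root of the problem): a container is started with the host's
# Docker socket mounted inside it, e.g. (illustrative image name):
#
#   docker run -d -v /var/run/docker.sock:/var/run/docker.sock traefik
#
# Step 2: anyone with code execution inside that container can now drive the
# HOST's Docker daemon and start a fresh container that bind-mounts the
# host's root filesystem. Shown here as a string, not executed:
BREAKOUT="docker run --rm -v /:/host alpine chroot /host /bin/sh"
echo "$BREAKOUT"
# The shell this would open runs as root on the HOST, not in the container.
```

Note that read-only (`:ro`) on the socket mount does not help: the socket is an API endpoint, and any client that can write API requests to it can create new containers.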
  4. NAS

    [REQUEST] Traefik reverse proxy

    No, I am specifically referring to the exceptional requirement of this container to activate the docker socket feature. This is very unusual.
  5. NAS

    [REQUEST] Traefik reverse proxy

    Long story short: if someone roots a container with the docker socket enabled, it's pretty much game over. This is why, much as I think traefik is a beautiful piece of engineering, it is built on a hill of sand.
  6. NAS

    [REQUEST] Traefik reverse proxy

    Worth a read (or one of the countless other links explaining why this is very bad) before you commit to this as a solution.
  7. NAS

    Docker stats in GUI

    If we are advocating the use of this container to fix a core OS limitation, we need to explain the serious security implications of docker.sock access. At the very least this will require a zone-like setup with networking to mitigate some of the security risks. In my day job I would always advise strongly against any solution that could have public-facing containers on the same instance as a backend sock instance.
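One way to sketch the zone-like separation suggested above, using a `run` stub that prints each command instead of executing it so the sketch is safe to run anywhere. Network and image names here are hypothetical placeholders, not a tested recipe.

```shell
# Print-only stub: shows each command instead of executing it.
run() { echo "+ $*"; }

# Two user-defined bridge networks: one for anything publicly reachable,
# one reserved for containers that hold a docker.sock mount.
run docker network create public_zone
run docker network create socket_zone

# The public-facing container never shares a network with the socket holder,
# so compromising it gives no direct network path to the Docker daemon.
run docker run -d --network public_zone my-webapp
run docker run -d --network socket_zone \
  -v /var/run/docker.sock:/var/run/docker.sock:ro my-stats-collector
```

This only mitigates network-level reach; it does nothing about the socket holder itself being compromised, which is the thread's core concern.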
  8. Great. It's non-ideal, but at least now I know. I honestly have been wracking my brains over what I was doing wrong. Time for me to move to XFS completely, then. Let's close as resolved. Appreciated.
  9. Sorry, I should have said: 6.5.1.
  10. Any thoughts on this? Best case scenario is that it is PEBCAK, but it doesn't seem obvious if it is. The only thing I can think of that I do differently than "some" is that I often use "/mnt/cache" rather than "/mnt/user/", although I don't see why this would cause spin-ups. As an illustration, here is a before and after of performing a single docker container stop. Notice the parity disk comes up as well. Comparing write counts before and after (sorry, I edited them out by mistake) I can see most of the RFS disks' write counts increase by 7 or 8. I am wondering if this is some sort of spin-up caused by the docker container issuing a disk flush such as SYNC, and RFS handling that differently than XFS. This is really annoying, so any insight would be very much appreciated.
  11. To be fair, so did I for many years, but I realised that 90% of my "mc" sessions were between 2 or 3 base locations, so once I figured out the solution above I just set up a few aliases, and now it's second nature to type "unmcm" for "unRAID Midnight Commander Movies".
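The alias approach can be sketched as below. Only "unmcm" and the downloads/movies paths come from the posts; the second alias and its path are assumptions for illustration.

```shell
# Hypothetical aliases in the spirit of the post: each one launches mc as the
# array file owner "nobody", with both panes pre-set to a frequently used
# pair of locations. These lines would typically live in ~/.bash_profile.
alias unmcm='sudo -u nobody mc /mnt/user/downloads/ /mnt/user/movies/'
alias unmct='sudo -u nobody mc /mnt/user/downloads/ /mnt/user/tv/'

# Print the definition to confirm it is registered.
alias unmcm
```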
  12. Yes, if you are working with array files you should run mc as the typical array file owner, nobody, and not root. If you try to do this, mc will complain without the above fix. As an example of my usage, for content only:

     sudo -u nobody mc /mnt/user/downloads/ /mnt/user/movies/
  13. I am in the process of cleaning up years of unRAID tweaks. The way unRAID ships, when you install mc and run it as "nobody" you have problems. This fixes it for me, and some variation could probably fix it for everyone. I don't remember if mc is shipped native; if not, it should be.

     #====================================================================================================
     # Fix mc running as nobody
     #====================================================================================================
     /usr/bin/mkdir -p /.cache/mc
     /usr/bin/mkdir -p /.local/share/mc
     /usr/bin/mkdir -p /.config/mc
     #====================================================================================================
  14. I now have a mixed array of Reiser and XFS disks, and I can confirm that when I stop a docker container, if the array disks are spun down, ONLY the Reiser disks spin up. Is this a known issue or a config-specific problem?
  15. emHTTP bombed hard as soon as I hit reboot to apply the update here. Had to force a reboot from the command line, with the obvious consequences. I was having addon issues beforehand, though, where updates would fail to apply due to the addon claiming not to be installed. Probably a me-specific issue rather than a problem with the release. Update: my addon issue was specifically related to the Advanced Buttons deprecation, which I was sure I had removed but somehow had not. Fix here

Copyright © 2005-2018 Lime Technology, Inc.
unRAID® is a registered trademark of Lime Technology, Inc.