
limetech

Administrators

  • Posts: 10,192
  • Joined
  • Last visited
  • Days Won: 196

limetech last won the day on February 20

limetech had the most liked content!

  • Gender: Undisclosed


limetech's Achievements: Veteran (13/14)

Reputation: 3.3k

Community Answers (7)

  1. The symlink idea is not something that's going to get into -beta7, and probably not into 6.0-final. There are just too many other critical features to address. What we're going to do is put a warning notice on the Share Settings page that explains this issue: it's not recommended to copy files directly from a disk share to a user share UNLESS the source disk is not configured to be part of user shares.

     That is, on the Share Settings page there are "global" Include/Exclude masks. Say, for example, you want to empty all the files from "disk2" to the user share file system. What you would do is set "Excluded disk(s)" to "disk2" and click Apply. Now it should work fine to copy everything off disk2 to a user share, letting shfs decide where to write things; see the sketch below.

     There's really no other way to prevent the user from shooting himself in the foot without giving up some other functionality. I don't want to change the meaning of the share-level Include/Exclude masks; again, proper documentation is required here as well.
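     A minimal sketch of that workflow, assuming disk2 has already been added to the global "Excluded disk(s)" mask (the rsync invocation here is illustrative, not an official procedure):

         # disk2 is globally excluded, so shfs no longer includes it in any user share
         # copy everything off disk2 into the user share tree; shfs picks the target disk
         rsync -av /mnt/disk2/ /mnt/user/

     Only after verifying the copy would you remove the originals from /mnt/disk2.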
  2. Ok, this has all come back to me... I am aware of this issue, and it's on a real old 'todo' list. The problem is that 'cp' and 'mv' and friends execute a 'stat' on each source and target to check, among other things, if it's the same file. They do this, for example, to check whether two links (hard or soft) point to the same file. To do this they check both the st_dev field and the st_ino field in the stat structure. The system fills in st_dev and, in our case, FUSE fills in st_ino. It is possible to tell FUSE to pass through the st_ino field from the underlying branch (this is the 'use_ino' FUSE option). This does not solve the problem, however, because st_dev is still filled in by the 'stat' code outside of FUSE. There's no way to 'spoof' the st_dev field to pass through the branch st_dev field; doing so is not advisable anyway because it can cause other issues.

     Fundamentally then, it's not possible to solve this problem given that we want to retain the option of referencing the same file via two different file systems (see the illustration below). Other union-like file systems solve this by not making the individual branches accessible. I guess this is not an option for unRAID because it can break how lots of people are using disk/user shares.

     There is another solution which I've wanted to explore because it helps with NFS stale file handles. What I would do is make a change in shfs that made every single file look like a symlink! So if you did an 'ls' on /mnt/user/someshare, what you'd see is all directories normal, but all files as symlinks to the "real" file on a disk share. This would be completely transparent to SMB and NFS, and would eliminate the need to 'cache' FUSE nodes corresponding to files for the sake of stopping stale file handle errors. I dunno, might throw this in as an experimental option...
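     For illustration, the same-file check can be reproduced from the shell with coreutils 'stat' (the paths here are hypothetical):

         # %d prints st_dev, %i prints st_ino, %n the file name
         stat -c 'dev=%d ino=%i %n' /mnt/disk1/someshare/file /mnt/user/someshare/file
         # Both paths reach the same on-disk file, yet st_dev always differs
         # (and st_ino too, unless shfs passes it through via use_ino), so
         # cp/mv conclude they are different files.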
  3. I will work on this some more. There is a solution, but I have to check if it will work with SMB. What I'm concerned with is this: if a user copies a very large file, say 25GB, I think SMB might do this: open the source file, read a chunk, close the source file, open the target file, write the chunk, close the target file; repeat until all chunks are written. I need to check if indeed I see those open/closes in there... (one way to check is sketched below).
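     A sketch of one way to verify that, assuming Samba's smbd is the process handling the transfer; trace the file syscalls while copying a large file:

         # attach to the running smbd process(es) and watch for repeated
         # open/close cycles during a single large copy (needs root)
         strace -f -e trace=openat,close -p "$(pidof smbd)"

     If SMB really does reopen the files per chunk, each chunk shows up in the trace as its own openat/close pair.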
  4. Right, there's that. Two ways to address that:

     1) If you want to write to a disk share, temporarily export it, write, then un-export it. I can see issues with this.

     2) Define a 'read-only' mode for the user shares. If in read-only mode, then permit all disk shares to be exported.

     Starting to get complex.
  5. Yes, this is why I say "casual user". For someone transferring files via the command line or MC it will be possible to do the wrong thing. One idea to mitigate that is to put a dot in front of the mount point name if the share is not exported, e.g.:

         /mnt/disk1
         /mnt/.disk2
         /mnt/disk3
         /mnt/user

     In this case if you type 'ls /mnt' you would not "see" the .disk2 mount point. MC can be configured to not show hidden files. But if someone is using the command line or MC, they are not a casual user and they should be expected to "know" what they are doing.
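     As a small illustration of the dot-prefix idea (mount names taken from the hypothetical example above):

         ls /mnt      # disk1  disk3  user    <- .disk2 is hidden
         ls -A /mnt   # .disk2  disk1  disk3  user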
  6. Here is what I want to do about this: "If a disk share is being exported via any protocol, then that disk does not participate at all in the shfs union."

     For example, suppose we have this:

         disk1/movies/A
         disk2/movies/B
         disk3/movies/C

     Under Network in Windows, currently you would see:

         disk1
           A
         disk2
           B
         disk3
           C
         movies
           A
           B
           C

     i.e., everything visible. Assuming no config changes, after I make this change we have this:

         disk1
           A
         disk2
           B
         disk3
           C
         movies

     i.e., all the shares are there, but all the files in the movies share disappear because all the disks are still exported. To fix this, the user would set 'export' to No on each disk, resulting in this:

         movies
           A
           B
           C

     i.e., the disk shares are not there, of course, but their contents now show up under the movies user share. Now suppose the user wanted to copy files off disk2. First they would "export" disk2, resulting in this:

         disk2
           B
         movies
           A
           C

     Now disk2 is visible again, but its contents are no longer visible in the user share. At this point the user can use Windows Explorer and drag B to 'movies'. shfs will put B either on disk1 or disk3. This change prevents the casual user from clobbering data. The user could still use the console to clobber data.

     The other way to "fix" this is to change the meaning of the include/exclude masks to do the same thing. At present these masks only affect which disks are eligible for new object creation. I could change them so that they completely define (if set) which disks are used at all in shfs. The problem with this, though, is that it still doesn't prevent the casual user from clobbering data; they would have to know what these masks do, and carefully set them to accomplish what they want to do (move data off a disk and onto a user share, letting shfs decide where to put the files). Thoughts?
  7. Thank you for reporting all the glitches. The devs working on the forum migration are on the East Coast and were up most of the night. Hopefully we'll get most of this sorted by tomorrow!
  8. Life is too short to not have some kind of sense of humor. I approved this little "prank" so come on, let me have it!
  9. No spin - I honestly don't know what you are asking.
  10. It's any publicly published version of Unraid OS, including "major" version updates, e.g., from v6 to v7. We're also not going to play any games like coming up with a "NewUnraid OS" where all of a sudden your key won't work or we start charging an extra fee to keep using it.
  11. Not all quacks are ducks... but the recent docker vulnerability kinda forced our hand to get 6.12.8 released ASAP.
  12. Your Plus key will function the same way it always has.
  13. The current 28-device limitation applies to the unRAID array. You can have any number of btrfs/zfs pools with other devices. An upcoming version will let you have multiple unRAID arrays though we don't plan on increasing the width of a single unRAID array. With your Pro key you'll get this update for free 🙂
  14. A small point: it's not a subscription fee, in this sense: with a subscription, if you don't renew, then the service ends. By contrast, if you do not extend your Starter or Unleashed key, your server still runs as before and you still have complete access to your data, etc.