eek

Members
  • Posts: 69
  • Gender: Undisclosed

  1. Kaldek is correct that the real attack vector isn't going to be attacks from the internet but rather malicious code hidden within existing plugins. Thankfully most of these plugins are small Python scripts that can be easily checked, but some checks are going to be needed somewhere to stop malicious changes being made (a rough sketch of that kind of integrity check is included after this list). And probably an education effort (even if it's just a warning banner) saying Docker containers and VMs should 1) only come from trusted sources and 2) only be granted access to the folders they need and absolutely NOTHING else.
  2. Not the biggest issue, but potentially a problem if you aren't paying attention: I almost pressed the reboot button even though the server was in the closing stages of rebuilding the parity disk (14TB drives take far longer than 4TB drives). Ideally whatever logic displays the banner should check whether a parity operation is in progress and stop the banner appearing, or at least confirm that no parity operation is running before the server reboots (I don't know if that is already the case; I only noticed that the parity operation hadn't finished before I pressed the reboot option). A sketch of that check is included after this list.
  3. A minor issue (one I can live with if it's impossible to fix) is that the duration is reset to zero when you restart / unpause a check. This means you end up with speeds like the one attached (the worked numbers after this list show how far off the figure can get); I was in a meeting, otherwise I would have paused and restarted with a minute to go to really demonstrate the issue.
  4. It's included in 6.7; see the release notes for rc1 above (Unraid is currently at rc4).
  5. Yes - my mistake, as I'm not at home to look at the portal. Without ZFS, and with btrfs really not ready for array use, it's not really a goer...
  6. https://github.com/zfsonlinux/zfs-auto-snapshot/wiki/Samba has a nice outline of what is required to do this, with instructions for Debian. It doesn't look that difficult if the drives are configured for ZFS (as I suspect most people's arrays will be nowadays) and the files are written directly to the array; cache drives would add a lot of additional complexity...
  7. For those who haven't seen it yet, the new look and feel coming in 6.7 is a great improvement. One (very minor) suggestion would be to start paging at 24 apps rather than 25, as you currently end up with a lone app at the bottom of the screen.
  8. That's not a bug: whilst I disagree with the approach Unraid uses*, the latest release in the Next version branch is the 6.6.0-rc4 release. * Personally, I think the Next release branch should contain all (appropriate) production releases as well as rc releases.
  9. Just checked my log files and I can confirm that cron isn't running, as the scripts it should be triggering aren't running. tower-diagnostics-20181108-1318.zip
  10. Sorry, but public testing should be about ensuring more hardware combinations are tested than would otherwise be the case. Hence you do need multiple people to test the software, as ideally you want your test team to reflect your whole customer base. And regardless of that, I do visit the site: prior to installing any rc release (to ensure there isn't an obvious showstopper that would make installing it pointless), and then afterwards to see if there are any issues that could impact what I use the system for. The issue here is that 3 days after installing it (unless I see problems that I need to report) I have no reason to revisit the forums...
  11. And that argument is fine. However, it would mean that unless I had visited this forum, I would have continued to run 6.6.0-rc4 until the Next branch revealed 6.6.1-rc1 and my system prompted me to update (after all, previous rc releases have been the latest "test" release for months). And you can test a system without visiting this forum; the only reason for doing so would be if something went wrong and you needed to report the bug. As I said, it's a slight annoyance - I just wanted to highlight the risk that unless people are told about the final (general / production) release, when we get to 6.6.1-rc1 a lot of testers could be moving to it from 6.6.0-rc4 rather than from 6.6.0...
  12. Not exactly a big issue, but the Next branch of the update system doesn't have a record of 6.6.0 being released, so the first I knew about it was when I visited the forum. Now I know that the differences between 6.6.0-rc4 and 6.6.0 are negligible, but it would be nice if the Next branch featured the final releases so that testers move from 6.6.0-rc4 to 6.6.0 (final) and then to 6.6.1-rc1.
  13. I've not currently got any VMs on the server worth testing, but I did do a parity check:
      6.5.3-rc1 - 2018-05-20, 19:51:00 - 10 hr, 35 min, 16 sec - 105.0 MB/s - OK - 0 errors
      6.5.2-rc1 - 2018-05-04, 10:13:55 - 12 hr, 13 min, 54 sec - 90.9 MB/s - OK - 0 errors
      100 MB/s to 105 MB/s, so this is in the normal range.
      M/B: ASRock - Z77 Pro4-M
      CPU: Intel® Core™ i7-3770 CPU @ 3.40GHz
      HVM: Enabled
      IOMMU: Enabled
      Cache: 1024 kB, 128 kB, 8192 kB
      Memory: 32 GB (max. installable capacity 32 GB)
      Network: eth0: 1000 Mb/s, full duplex, mtu 1500
  14. It seems that a recent change to Mylar has broken a lot of container-based installations, including my instance. https://github.com/evilhero/mylar/issues/1929 has the bug report and a suggested fix.
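
On item 1 above: a minimal sketch of the kind of integrity check mentioned there, assuming plugin files live under a single directory and that a manifest of known-good SHA-256 hashes has been recorded somewhere only the admin can write to. The paths and the manifest format are illustrative assumptions, not anything Unraid actually ships.

    # Minimal sketch: flag plugin files whose SHA-256 hash no longer matches a
    # known-good manifest. Paths and manifest layout are illustrative assumptions.
    import hashlib
    import json
    from pathlib import Path

    PLUGIN_DIR = Path("/usr/local/emhttp/plugins")      # assumed plugin location
    MANIFEST = Path("/boot/config/plugin-hashes.json")  # hypothetical manifest

    def sha256(path: Path) -> str:
        """Return the SHA-256 hex digest of a file, read in chunks."""
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def check_plugins() -> list:
        """Return the files that are missing or differ from the recorded hashes."""
        known = json.loads(MANIFEST.read_text())  # {"relative/path.py": "hash", ...}
        changed = []
        for rel_path, expected in known.items():
            target = PLUGIN_DIR / rel_path
            if not target.exists() or sha256(target) != expected:
                changed.append(rel_path)
        return changed

    if __name__ == "__main__":
        for name in check_plugins():
            print("WARNING: plugin file changed or missing:", name)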
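
On item 2: the banner guard could be as simple as the outline below. It assumes array status is exposed as key=value pairs in an ini-style file with mdResync* entries; the path and key names are my assumptions about how emhttp reports array state, not verified webGUI code.

    # Sketch of the suggested guard: suppress the reboot prompt while a parity
    # sync/check is still running. File path and key names are assumptions.
    from pathlib import Path

    VAR_INI = Path("/var/local/emhttp/var.ini")  # assumed emhttp status file

    def parity_operation_in_progress() -> bool:
        """Return True if the status file reports a parity sync/check position."""
        if not VAR_INI.exists():
            return False
        status = {}
        for line in VAR_INI.read_text().splitlines():
            if "=" in line:
                key, _, value = line.partition("=")
                status[key.strip()] = value.strip().strip('"')
        # A non-zero resync position means a check or rebuild is still running.
        return status.get("mdResyncPos", "0") not in ("", "0")

    def should_show_reboot_banner(update_pending: bool) -> bool:
        """Only prompt for a reboot when no parity operation is running."""
        return update_pending and not parity_operation_in_progress()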
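
On item 3: some made-up numbers showing why resetting the duration (but not the position) on unpause makes the displayed average speed meaningless if it is computed as position divided by the displayed duration.

    # Toy numbers: if the timer restarts on unpause but the position does not,
    # the average speed figure becomes wildly inflated. Values are made up.
    position_bytes = 12 * 10**12          # 12 TB already checked before the pause
    elapsed_since_unpause = 60            # timer restarted 60 seconds ago
    true_elapsed = 33 * 3600              # roughly 33 hours of actual checking

    honest_speed = position_bytes / true_elapsed              # ~101 MB/s
    displayed_speed = position_bytes / elapsed_since_unpause  # ~200,000 MB/s

    print("honest average:    %.1f MB/s" % (honest_speed / 1e6))
    print("displayed average: %.1f MB/s" % (displayed_speed / 1e6))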