guitarlp (Members, 301 posts)

  1. I moved my smokeping docker container from my bridge network to my custom br0 network. However, once I did that, all of my targets connecting via IPv6 started getting packet loss. Here's an example (anything that isn't green is when I get packet loss and the container is on the br0 network):

     When I run a ping from within that container, I always get packet loss when I force IPv6. I don't get any packet loss on IPv4:

     ```
     80c35650254d:/config# ping -4 -c 5 google.com
     PING google.com (142.251.40.46): 56 data bytes
     64 bytes from 142.251.40.46: seq=0 ttl=118 time=1.429 ms
     64 bytes from 142.251.40.46: seq=1 ttl=118 time=1.337 ms
     64 bytes from 142.251.40.46: seq=2 ttl=118 time=1.262 ms
     64 bytes from 142.251.40.46: seq=3 ttl=118 time=1.401 ms
     64 bytes from 142.251.40.46: seq=4 ttl=118 time=1.400 ms

     --- google.com ping statistics ---
     5 packets transmitted, 5 packets received, 0% packet loss
     round-trip min/avg/max = 1.262/1.365/1.429 ms

     80c35650254d:/config# ping -6 -c 5 google.com
     PING google.com (2607:f8b0:4007:819::200e): 56 data bytes
     64 bytes from 2607:f8b0:4007:819::200e: seq=0 ttl=117 time=1.918 ms
     64 bytes from 2607:f8b0:4007:819::200e: seq=2 ttl=117 time=1.812 ms
     64 bytes from 2607:f8b0:4007:819::200e: seq=3 ttl=117 time=1.952 ms
     64 bytes from 2607:f8b0:4007:819::200e: seq=4 ttl=117 time=1.884 ms

     --- google.com ping statistics ---
     5 packets transmitted, 4 packets received, 20% packet loss
     round-trip min/avg/max = 1.812/1.891/1.952 ms
     ```

     I monitor sites like google.com using smokeping, but I also monitor IP addresses like Quad9's DNS servers. With IP addresses, I'm not seeing any packet loss when testing the IPv4 address 9.9.9.9, but I am seeing packet loss when I test Quad9's IPv6 DNS servers.

     So my issue is that any outbound connection to an IPv6 address always gets some packet loss when the container is on my custom br0 network. Any ideas on what I should look into? I'm on the latest unRAID 6.12.9, I'm using pfSense as my gateway, and IPv6 is handled via SLAAC. Here are my interface settings, routing table, and docker settings:
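     In case it helps with suggestions, here's the kind of thing I can run to dig further (the interface name and container shell are just examples from my setup):

     ```
     # Inside the container: confirm which IPv6 address and default route it is using
     ip -6 addr show
     ip -6 route show

     # On the unRAID host: capture all ICMPv6 (echo plus neighbor discovery)
     # on br0 while the loss is happening, to see where replies stop
     tcpdump -ni br0 icmp6

     # The host's view of the container's IPv6 neighbor entry
     ip -6 neigh show dev br0
     ```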
  2. Why do some containers using a custom br0 network show port mappings while others do not? I understand that when you use a custom network, the port mappings no longer matter. But I'm confused about why some containers in the unRAID GUI show port mappings and others do not, for the same br0 network. For the containers that don't list them, is there a way to get them showing up so that things are consistent in the GUI? See the attached screenshot. The `swag` and `Plex-Media-Server` docker containers use my br0 network and have port mappings defined, but `homeasistant` has no port mappings showing.
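     For comparison, my understanding is you can check what each container actually declares (container names taken from my setup) with something like:

     ```
     # Ports declared by the image/template; nothing is actually published
     # when the container sits on a custom (macvlan) network like br0
     docker inspect --format '{{json .Config.ExposedPorts}}' swag
     docker inspect --format '{{json .Config.ExposedPorts}}' homeasistant
     ```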
  3. Thank you for the response. I understand that part, though. I default to encrypting everything, but I'm wondering whether an encrypted XFS drive is going to fix this, or if I ultimately need an unencrypted XFS disk. Normally encryption doesn't impact perceived performance... but normally RAID 1 doesn't cause huge iowaits with freezing docker containers :).
  4. This is an old thread (apologies), but this still doesn't appear to be fixed on the latest `6.11.5`? When I switch from btrfs RAID 1 to non-RAID XFS, can I do that encrypted, or does it have to be an unencrypted XFS disk? I read something in this thread about encryption possibly contributing to this issue.
  5. Is the Dynamix SSD Trim plugin still required on pools that use BTRFS with encryption? The plugin mentions: But I have read that the `discard=async` BTRFS option included as of unRAID 6.9 doesn't work on encrypted drives.
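     For what it's worth, this is how I'd expect to check whether discards actually make it through the encryption layer (the mapping name is a placeholder):

     ```
     # A layer showing zero DISC-GRAN/DISC-MAX will not pass discards down
     lsblk --discard

     # dm-crypt only forwards discards when the mapping was opened with
     # --allow-discards; the active flags show up in the status output
     cryptsetup status <mapping-name>
     ```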
  6. Would it be possible to exclude docker updates from creating warnings? I run the CA Auto Update Applications app to update Docker apps every day. However, if Fix Common Problems runs before an update is installed, warnings get created, which give me a small heart attack when I receive them via Pushover notifications. I can ignore each notification, but that's per docker app (and only after a warning fires). I have 27 dockers right now; today I was able to ignore 4 of them, but that means I have 23 apps that may cause warnings in the future, and every time I add a new docker, that adds another chance for a warning to appear.
  7. Is this container still up to date and okay to use? It failed to install a couple of times, but now that it has installed, unRAID is showing the version as "not available".
  8. I'm running unRAID 6.4.0, but this also happened on 6.3.5. I'm on macOS 10.13.2. My Samba shares are exported as private, where only certain users can read and write to them. I've run the New Permissions script on all of my shares.

     When I connect to an unRAID share over Samba (smb://TOWER/share_name), I can connect fine. I can see all my shares and connect to each of them. However, once I'm connected to a share, all of the folders within that share have the red permission icon on the folder. If I try to open a folder, I'm presented with the following message:

     If, within unRAID, I enable `Enhanced OS X Interoperability` for that share, the issue is fixed, but I don't want to enable this on all my shares. The reason being, when this feature is enabled, macOS writes non-standard permissions to the shares. Instead of a new folder getting 777 permissions like it does when writing from one of my FreeBSD machines, the permissions may be 755; files end up as 644. That's a problem for my workflow, because other users on my network then can't rename or delete these files since they don't have permission to do so.

     I'd be okay enabling the feature mentioned above if I could force 777 and 666 permissions on my shares, but even though I've played around with the `Samba extra configuration` settings, I haven't been able to get the permissions forced to certain values.

     So the reason for this post: why is it that, when I don't have `Enhanced OS X Interoperability` enabled for a share, my Mac can see the share but can not open any folders within it? Here's an example of what I see on my Mac for a share that does not have the Enhanced OS X Interoperability feature enabled, followed by what I see when I do have it enabled. Both folders, on unRAID, have 777 permissions with nobody:users as the owner.
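     For reference, these are the sort of per-share settings I mean by forcing permissions (the share name is just an example), which so far haven't taken effect for me:

     ```
     [share_name]
        # force everything newly created to the modes the rest of the network expects
        create mask = 0666
        force create mode = 0666
        directory mask = 0777
        force directory mode = 0777
     ```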
  9. No problem. I had the same issue and banged my head around it for about an hour. Then I re-read the instructions (carefully this time) and once I removed that folder things worked for me as well.
  10. Remove the new syslinux folder from 5.0.4. You should only be copying over the bzimage and bzroot files (along with the readme if you want).
  11. > What did the pre-clear report show at the end of this pre-clear? I've never noticed this (don't usually watch the pre-clear until it's done)... but now that you noted it, I undoubtedly will on my next pre-clear. And of course I'll wonder the same thing if the "+" number is anything but zero!! I'm surprised Joe L didn't provide some feedback on this. Joe L??

      I did the pre-clear on two drives, and both of them had increasing "+" values as the pre-clear continued. The reports at the end showed no issues on either drive. I checked the SMART reports before and after and didn't see anything concerning. So for those that see this, I don't believe it's anything to worry about. I'd still like to know what it means, though, because it did concern me to see a number increasing during a pre-clear like that. I assumed it was bad sectors getting replaced or something of the sort. Googling for what it means didn't return anything relevant.
  12. What does it mean if the records in and records out values contain a positive "+" value? For example, I'm currently preclearing a 3TB WD Red drive that shows the following:

      ```
      624648+4 records in
      624648+4 records out
      ```

      What's the +4 value? Is that a bad sign?

      Edit: Now they're at +6:

      ```
      660271+6 records in
      660271+6 records out
      ```
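      Assuming these counters come straight from dd, my understanding is that the number after the "+" counts partial blocks (reads or writes shorter than the block size), not errors. A quick way to see the format:

      ```
      # 10 bytes copied with a 4-byte block size: two full blocks plus one partial
      echo -n 0123456789 | dd of=/dev/null bs=4
      # dd reports:
      #   2+1 records in
      #   2+1 records out
      ```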
  13. Doing the calculations from this thread with my setup:

      14 green/red drives (14 * 2) = 28A
      10 7200rpm drives (10 * 3) = 30A
      2 SSDs = 2A (this might be on the 5V rail, though)
      8 120mm fans and 3 140mm fans = 1.5A
      System = 5A
      3 PCI-E cards = 3A
      2 USB devices = 2A

      Adding that all up, I'm at 71.5A. A Seasonic 1000 watt PSU has 83A on the 12V rail; the 860 watt PSU has 71A. The 1000 watt would be the safe bet, but wouldn't be as efficient. The 860 watt would be cutting it close. Although, I believe the 2A green and 3A 7200rpm estimates are already a bit inflated; it seems the greens are usually 1.6-1.8A. I've also heard that Seasonics can provide short bursts of power over their stated specs. If both of those are true, the 860 watt PSU would be my best bet.
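      A quick way to re-run that arithmetic (same per-device estimates as above):

      ```
      # sum the estimated 12V draws: drives, SSDs, fans, system, PCI-E, USB
      awk 'BEGIN { printf "estimated 12V load: %.1f A\n", 14*2 + 10*3 + 2 + 1.5 + 5 + 3 + 2 }'
      # prints: estimated 12V load: 71.5 A
      ```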
  14. You'll want to put some splitters on your molex cables to increase the total number of connections. Monoprice is a great place to get those: http://www.monoprice.com/products/subdepartment.asp?c_id=102&cp_id=10245&cs_id=1024501 If you need more SATA connectors: http://www.monoprice.com/products/subdepartment.asp?c_id=102&cp_id=10226&cs_id=1022604 For example, if you wanted to change 1 molex into 2, you could get this for less than $1: http://www.monoprice.com/products/product.asp?c_id=102&cp_id=10245&cs_id=1024501&p_id=1313&seq=1&format=2 Just be careful about connecting a bunch of these to one cable from your PSU. You should try to use all the molex/SATA cables evenly so that the power is distributed across your cables. I don't feel comfortable with more than 7 hard drives on a single cable from my PSU. Too many devices on a single cable could cause issues, as the wire itself may not be able to supply enough current for all the devices on that chain.