jameson_uk

Members
  • Posts

    78

  1. I am currently running an old Gen 7 HP Microserver that has 4 x 2 TB drives and a 500GB SSD cache drive. This is now starting to show its age and I was just wondering about upgrade options. One main consideration is the aging GT710 GPU I have in there for Plex decoding. I would love to be able to shift to an iGPU (I am not interested in 4K and the like; it is mainly for H.264 1080p content). As I guess with others, the ubiquity of streaming services has meant my use of it as a true NAS device has somewhat diminished, and the main use of my server now is for running docker containers (Home Assistant, Pihole, Zigbee2MQTT....) If I didn't still need the NAS at all I would just replace everything with an Alder Lake N100 mini PC; a TDP of 6W should save me some money on running costs and give me a big boost in performance. All the low power mini PCs seem to come with at most two SATA connectors, which makes sense given the small power supply, but that rules them out as a direct replacement. The HP Microserver averages about 50W (so about 1.2 kWh per day), so I am wondering whether there is any modern upgrade that is going to beat this in terms of power consumption whilst being quiet (ideally fanless), give me a performance boost and give me better graphics?
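The running-cost arithmetic above is easy to sketch. The 50W and 6W figures come from the post; the per-kWh price below is purely an illustrative placeholder, so substitute your own tariff:

```shell
# Rough running-cost comparison: 50 W Microserver vs 6 W N100 mini PC.
# The 0.30 GBP/kWh unit price is illustrative only.
price=0.30
for w in 50 6; do
  kwh_day=$(awk -v w="$w" 'BEGIN { printf "%.2f", w * 24 / 1000 }')
  cost_year=$(awk -v k="$kwh_day" -v p="$price" 'BEGIN { printf "%.2f", k * 365 * p }')
  echo "${w}W -> ${kwh_day} kWh/day, ~${cost_year} GBP/year"
done
```

At 50W that is 1.2 kWh/day, matching the figure in the post, so even a modest idle-power improvement compounds noticeably over a year.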
  2. I have my network divided up into a few VLANs, primarily trusted stuff vs IoT devices etc. This all works great: IoT stuff is nicely segregated on VLAN tagged traffic and trusted stuff is untagged. On the Unraid side I have enabled VLANs in network settings and configured a new address on the IoT VLAN (and excluded this from the management interfaces). I have some containers I want on the trusted side, and this all works well via a bridge network and port mappings of <trusted IP>:<port>, so they are only accessible on the trusted network. I then have a bunch of containers I want on the untrusted side. I have created a docker bridge network and have all these containers running in there. Those I want to be accessible have port mappings of <untrusted IP>:<port> and the rest have no mappings at all (these are only accessible to other containers in this docker network). All good so far, in that some of the containers are not accessible externally (e.g. the MQTT server, which only sits between Zigbee2MQTT and Home Assistant so has no need to be visible elsewhere) and the containers I do want to be accessible appear on the untrusted VLAN. The only issue is that outbound traffic from containers in this bridge network is able to connect to the trusted LAN. So whilst inbound traffic is tied to the IoT VLAN, outbound is not. I guess this makes sense as the bridge just bridges the docker network to the host (Unraid), and that has a route to the trusted LAN so it can get through. Is there any way to force outbound traffic back out through the VLAN tagged (virtual) interface to prevent this?
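One approach (not from the post, just a sketch) that forces both directions of container traffic onto the tagged interface is an ipvlan (or macvlan) docker network parented on the VLAN interface, so containers get their own addresses on the IoT subnet and never route via the host bridge. The subnet, gateway, parent interface and names below are placeholders:

```shell
# Sketch: attach IoT containers directly to the tagged interface with the
# ipvlan driver, so inbound AND outbound traffic stays on the IoT VLAN.
# 192.168.10.0/24, br0.10 and the container name are example values.
docker network create -d ipvlan \
  --subnet 192.168.10.0/24 \
  --gateway 192.168.10.1 \
  -o parent=br0.10 \
  iot-vlan

# Containers on this network get their own IP on the IoT subnet; the
# router firewall then decides what they can reach, including the trusted LAN.
docker run -d --network iot-vlan --name zigbee2mqtt koenkk/zigbee2mqtt
```

The trade-off is that containers then need per-container IPs on the VLAN rather than sharing the host's address via port mappings.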
  3. I am hoping it fixes /run running out of space?
  4. I use the card for H.264 decoding in Plex (it is just about the latest passive card I could find). I don't need the latest driver; I am just wondering, if I did need an updated driver, whether it would appear in the plugin or not.
  5. How do the available versions get populated? I am running an old GT-710 so I am on 470.182.03, which shows up under available versions, but I see that Nvidia released 470.199.02 a few days ago. I am not in any particular hurry to upgrade, but just wondering whether updates will show up in the plugin or, because it is the old legacy driver, whether I need to update it manually.
  6. My array has 4x2TB drives running reiserfs. As this is no longer supported I have been looking at switching to XFS, but am just looking at the best way to achieve this. AIUI the only viable way to do this is to copy the drive contents to another drive, format the original and copy it back. I am just looking for the fastest way to achieve this without spending a huge amount. I have a pretty old HP Microserver which is fairly limited in resources (the array + cache take up all the SATA slots and USB is USB 2.0 only). There is however an e-SATA port. I have an old USB WD Elements drive which I have hooked up and am just running rsync now; it has taken 2 hours to copy 182GB, so I am looking at 20-something hours to copy the 1.4 TB that is used on the drive, then 20+ hours to copy it all back, and then a repeat for the other two data drives. It isn't the end of the world, but I am just wondering if there is a better way to do this? I did look at connecting up a drive via e-SATA, but it seems there are limited options and most of the enclosures you can get are £100+, and I don't want to be spending anywhere near that amount.... Any thoughts on any cheap ways to speed this up? I am thinking I could crack open the WD external HDD, take the array offline, replace one of the disks with the WD one and then hopefully get it done a bit quicker over SATA (it is only SATA 2.0), or perhaps use a USB 3 PCIe card to connect the WD Elements drive?? Any other ideas?
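For reference, the disk-to-disk round trip described above can be sketched with rsync like this. The mount points are examples only; double-check device assignments before formatting anything:

```shell
# Sketch of one disk's reiserfs -> XFS round trip.
# SRC and TMP are example mount points -- substitute your own.
SRC=/mnt/disk1            # reiserfs array disk
TMP=/mnt/disks/elements   # the WD Elements drive (e.g. an Unassigned Devices mount)

# First pass can run while the data is still in use; -a preserves
# permissions/timestamps, --info=progress2 shows an overall transfer figure.
rsync -a --info=progress2 "$SRC/" "$TMP/"

# A quick second pass just before taking the disk out of service picks up
# anything that changed during the long first copy.
rsync -a --delete --info=progress2 "$SRC/" "$TMP/"

# After reformatting disk1 to XFS, copy the data back:
rsync -a --info=progress2 "$TMP/" "$SRC/"
```

Because rsync skips already-transferred files, the second pass is cheap, which helps when each full copy takes most of a day.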
  7. The upgrade mostly went well, and the ability to limit admin access by interface seems better; I have been able to remove some of my hacks. Docker however seems to have an issue filling up the tmpfs on /run. I have been running the same containers for a couple of years without issue, but since moving to 6.12 I have had issues with containers crashing and refusing to start. Adding --no-healthcheck has helped, but this is a workaround rather than a solution. Has something changed around docker using /run in 6.12?
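To confirm that /run really is the tmpfs that is filling, a couple of quick checks with standard Linux tools (nothing Unraid-specific):

```shell
# Show size and usage of the /run tmpfs:
df -h /run

# Show what is taking the space, largest first (errors from unreadable
# entries are suppressed):
du -sh /run/* 2>/dev/null | sort -rh
```

Watching these while containers run health checks should show whether the usage grows over time or jumps in bursts.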
  8. Indeed. I actually reduced the number of containers I am running when I upgraded to 6.12. I am assuming the issues are since upgrading to 6.12?
  9. So coming back to this now (and it appears this might be cleaner in 6.12). The first part is adding Unraid onto the VLAN and preventing access to the admin stuff (SSH, web GUI etc.). This was a bit of a pain previously and involved some hacks, but it seems you can do it relatively easily in 6.12. Adding Unraid onto the VLAN was just a case of defining the VLAN in the network settings and assigning an IP. Now there is an Interface Extra section in the config where I set the listening interface to br0 and excluded br0.10. This means I can only access admin services on the main (br0) IP (and annoyances like the FTP server refusing to stay stopped, SAMBA needing hacks to config files etc. appear to all be in the past). I then created my own docker network (docker network create --subnet x.x.x.x/24 iot-bridge), then went through each of the containers I wanted to link and set their network to iot-bridge. On most I removed the exposed ports so they are only accessible by other containers running on the same network. The final step was to set the exposed ports on the containers I did want accessible to include the IP. One thing I have done is put at least one service behind an Nginx reverse proxy (so I could enable TLS). So on the nginx container I changed the port from 443 to x.x.x.x:443 (x.x.x.x being the IP of the Unraid box on eth0.10). Now the nginx proxy is only accessible on VLAN 10 and the container behind it is not directly accessible at all. I have a few other things like an MQTT server that are purely accessible on the docker network, which makes me a lot happier than where I was previously.
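The container-side steps above might look roughly like this on the command line. The subnet, host IP and container names are example placeholders, not the actual values from the setup:

```shell
# Internal-only bridge network for the IoT containers
# (172.30.0.0/24 is an example subnet):
docker network create --subnet 172.30.0.0/24 iot-bridge

# Internal-only service: no published ports, so it is reachable only
# from other containers on iot-bridge:
docker run -d --network iot-bridge --name mqtt eclipse-mosquitto

# Externally visible service: bind the published port to the host's
# IoT-VLAN address only (192.168.10.2 stands in for the eth0.10 IP):
docker run -d --network iot-bridge \
  -p 192.168.10.2:443:443 \
  --name nginx-proxy nginx
```

Binding the published port to a specific host address, rather than publishing on all interfaces, is what keeps the service off the trusted LAN.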
  10. Not ideal, but turning off healthchecks for the containers stops /run filling up.
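For reference, the workaround uses docker's --no-healthcheck run flag, which disables the healthcheck baked into the image; on Unraid this would go in a container template's "Extra Parameters" field. The image here is just an example:

```shell
# Disable the image's built-in HEALTHCHECK for this container, so the
# periodic health-check state never gets written under /run:
docker run -d --no-healthcheck --name homeassistant \
  ghcr.io/home-assistant/home-assistant:stable
```

The container still runs normally; it simply reports no health status, so anything that keys off "healthy"/"unhealthy" will show it as plain "running".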
  11. Having the same issue here. It looks like the fundamental root cause is that /run has run out of space.
  12. It is volumes rather than images. I only update my containers through the Unraid UI (that I can think of anyway). Looking again now, I am not sure whether this is something odd in Portainer. It was showing lots of unused volumes so I deleted them. It is now showing two volumes, but I have many more than that (Portainer itself shows volumes against containers correctly, just not in the volume list 😕). I will see if I can establish when they appear and what they relate to.
  13. Not sure if this is just a docker feature or whether this is down to how Unraid updates containers, but I regularly have to go in and end up deleting about 10 unused volumes. I am guessing it must be something like: when a container is updated the old volume gets disconnected, a new one is created, and the old one then sits there in limbo? Is there any way to stop this build-up of volumes happening?
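Cleaning these up by hand can be sketched with docker's own volume commands ("dangling" means not referenced by any container, so it is worth checking what a volume belonged to before pruning):

```shell
# List volumes no longer referenced by any container:
docker volume ls -f dangling=true

# Check which (possibly stopped) container a particular volume belongs to
# before deleting anything -- <volume-name> is a placeholder:
docker ps -a --filter volume=<volume-name>

# Remove dangling volumes in one go (recent docker releases only prune
# anonymous volumes here; add --all to include unused named volumes):
docker volume prune -f
```

If the leftovers are anonymous volumes created on each update, periodic pruning is about the best that can be done short of converting them to explicit named volumes or bind mounts in the template.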
  14. OK, I am not sure what format it is in (it has worked fine for ages): https://hub.docker.com/r/koenkk/zigbee2mqtt/ I did however see https://github.com/Koenkk/zigbee2mqtt/pull/16297, which was merged yesterday and seems to link back to https://github.com/docker/buildx/issues/1509, so it looks like that is probably the issue. I will try later when I get a chance.
  15. I have one particular container which seems to have lost its ability to show the update status. It is a standard container pulled from Docker Hub, but it has suddenly started showing up as "Not Available". All the other containers are fine, and if I do a force update the status goes back to "Up to date", but it then just goes back to "Not Available" (I haven't checked when, but I assume it is when the container restarts). I have deleted the image and recreated it but the issue is still there. Any ideas why this one container is behaving like this?