talmania

Members
  • Posts: 205
  • Gender: Undisclosed

talmania's Achievements
  • Explorer (4/14)
  • Reputation: 4

  1. Thanks all for the advice. I'm closing this as solved, but for anyone coming across this thread: the solution was a combination of both PCIe lanes and writing directly to the cache rather than to a cache-backed user share.
  2. Well, that was definitely it. I got the new build into a state where I could do some preliminary testing and found that writing to cache-backed user shares with another ConnectX-4 of the same model sustained 450-525 MB/s, while writing directly to the cache drive sustained 935 MB/s to 1.05 GB/s.
  3. Well, I guess this just gives me extra incentive to finish the Z690 build I've been lazy about. Thankfully it does have a PCIe 4.0 x16 slot that operates at the full x16. Time to test…
  4. It's a user share with cache enabled. I just attempted to write a file directly to the cache folder and found it would sustain 625-700 MB/s writes. (A rough write-benchmark sketch is included after this list.)
  5. Thank you for responding!! I think you may be onto something here... The ConnectX-4 I purchased is actually a 25GbE card (MCX4121A-ACAT), as I'm awaiting the arrival of a 100GbE switch shortly. The motherboard is an ASUS TUF Gaming X570-Plus (Wi-Fi); the manual does show 2x PCIe 4.0 x16 slots, but when dual VGA/PCIe cards are used the second slot runs at PCIe 4.0 x4 instead of x8. OK, bear with me here and please correct me if I'm totally wrong (I probably am!). I could not locate a board diagram for the card, but I'm assuming it's 50 Gb/s (2x 25 Gb/s ports) across the 8 lanes. Each PCIe 4.0 lane is about 2 GB/s of bandwidth; PCIe 3.0 is about 1 GB/s. So assuming the card splits 4 lanes to each port, that's 4 GB/s (4000 MB/s) per port, or 1000 MB/s per lane. If that's running at half speed with only x4 lanes (2 per port??), that's 500 MB/s per lane, or 1000 MB/s per port, but I'm guessing that's split between send and receive?? And holy hell, there's the problem! Do I have that right? The 7.5 Gb/s result from iperf3 would be both lanes at 500 MB/s adding up to 1000 MB/s, which lines up with the ~937.5 MB/s iperf3 shows once overhead is accounted for. (This arithmetic is worked through in a sketch after this list.)
  6. I've run 10Gb on my Unraid server for a really long time now and have finally spent some time trying to optimize my transfer speeds. Some history first:
     • Initial workstation config: AMD Ryzen 3500, Intel X540-T1 over CAT5e to a Brocade 10Gb switch; transfer speeds fluctuated wildly between 450 MB/s and 70 MB/s.
     • Workstation revision (3/2024): ConnectX-4 over single-mode fiber; consistent ~400-480 MB/s transfers.
     • Unraid server: Supermicro X10DRH-IT, dual E5-2670 v3, 128 GB RAM, Intel X540 dual-port 10GBASE-T. Cache: dual Crucial P3 Plus 4TB PCIe 4.0 x4 drives running at PCIe 3.0 x4, mirrored (theoretically 4100 MB/s write).
     After seeing lots of variance in speed since the initial 6.x upgrade when writing to cache, and never being super satisfied with transfer speeds (for reference, I would get ~400 MB/s peaks pre-6.x), I decided I wanted to do something about it: I ran single-mode fiber under my house and replaced my NIC with a Mellanox ConnectX-4, and now I'm seeing consistent ~400-480 MB/s transfer speeds to the cache drive. When I run iperf3 from my workstation to the server itself I average about 7.5 Gb/s (~935 MB/s). The network is flat, and there's a single NIC enabled on Unraid with no teaming, jumbo frames, or any other tweaks. I'm curious what step I should take next to troubleshoot and see if I can't get a better transfer rate? Thanks for any and all advice. (The iperf3 figure is converted to MB/s in a sketch after this list.)
  7. Thank you for the link and this topic! I updated and noticed that Immich was broken, and after trying (and failing) to figure out how to add vector support to the postgresql15 DB, I found this thread. @captainfeeny's link worked perfectly using the postgresql15 repo.
  8. There's conflicting information on Immich's site about which Postgres version is required (some spots say 14, others 15). Having just installed it, I went with postgres15, so I guess I lucked out.
  9. Thank you for sharing your config. I was having a helluva time getting SWAG to work with Immich in spite of the fact that I host tons of other apps and domains, and sure enough, when I used port 8080 (instead of the 8081 I was redirecting to) it started working. I really appreciate you sharing your notes here, as this one has been a struggle. Thank you!
  10. Thank you! I refined my search and actually found your response to a similar question earlier in this thread. It was EXACTLY what I needed and helped immensely. Thank you very much for helping!
  11. I'm hoping someone can check my thought process and tell me whether I'm off base, or whether what I'm thinking is possible with my current SWAG configuration. It's working perfectly for all my various containers on Unraid; however, I have a new use case where I want to reverse proxy to another host on the local network (completely outside the Docker network I'm using). Routing is going to be a problem, no? My goal is for external traffic looking for server1.myhome.com to hit SWAG and then be proxied to server1's internal IP address and port. My guess is that the routing from the Docker network to the local area network (which Unraid sits on as well) is going to be the problem? Would I use the subdomain template to reverse proxy this request? Thanks for any advice and insight!
  12. Just curious whether anyone has moved to Nextcloud 22? I noticed the current release is 22.2 (I'm running 21.0.5 here) and I'm wondering whether moving to 22 is a technical challenge (requiring a manual install) or simply a release-channel thing? Thanks!
  13. To get some GPU integration I upgraded to 6.9rc2, and the upgrade itself went perfectly fine. Everything I've tested has worked as expected, save for 10GbE transfer speeds, which have become slower and unpredictable. I've run iperf3 and found it maxing out at the speed of my SSDs, around 500 MB/s, as it did pre-upgrade. Since the 6.9rc2 upgrade my transfers have been erratic, occasionally hitting 350 MB/s but mostly sitting in the 150-225 MB/s range. I've confirmed that Direct I/O is still enabled and nothing else has changed. I'm going to turn on jumbo frames for testing, but I never had them enabled before and was hitting a constant 500 MB/s without fail (with the occasional drop when the server was loaded or the mover was running, of course). Thanks! Edit: my 10GbE adapter is an X540-T2.
  14. Hi all. I replaced my cache drives the other day and found, when I turned Docker back on, that no containers were listed at all. So I added my templates back in and that seemed to work just fine, save for my SWAG docker. Long story short, I ended up renaming the entire /config folder (which had been in use for a LONG time, from the very early letsencrypt days) and seeing whether a complete reinstall worked. I got caught by the Let's Encrypt rate limit. Is there a way I can move over the certs that were generated in the old /config structure? Thanks! RESOLVED: In case anyone comes across this, I found a thread about CA Backup/Restore and had completely forgotten the app was running on my system. I did a restore of everything and it's working perfectly now.
  15. Thank you! That worked just fine--appreciate the help!
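
For anyone who wants to reproduce the comparison from post 4 (user share vs. writing straight to the cache), here is a minimal sequential-write benchmark sketch. It assumes the usual Unraid mount-point layout (/mnt/user/<share> for the user share, /mnt/cache/<share> for the pool); the share name, file size, and block size below are placeholders to adjust.

```python
# Rough sequential-write benchmark (sketch): times a large write to a
# user-share path and to the equivalent cache-pool path. The paths below
# are placeholders -- substitute your own share and run each target a few times.
import os
import time

def write_test(path, size_gb=4, block_mb=16):
    """Write size_gb of random data to `path` and return the observed MB/s."""
    block = os.urandom(block_mb * 1024 * 1024)
    blocks = (size_gb * 1024) // block_mb
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())   # make sure the data actually reached the drive
    return (size_gb * 1024) / (time.time() - start)

for target in ("/mnt/user/test/bench.bin",    # placeholder path through the user-share layer
               "/mnt/cache/test/bench.bin"):  # placeholder path straight to the cache pool
    print(f"{target}: {write_test(target):.0f} MB/s")
```

Flushing and calling fsync before stopping the clock keeps RAM caching from inflating the result; averaging a few runs per target is still a good idea.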
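
The back-of-the-envelope PCIe arithmetic from post 5, written out with the same round per-lane figures used there (~1 GB/s per PCIe 3.0 lane, ~2 GB/s per PCIe 4.0 lane, one direction, encoding and protocol overhead ignored). Whether the card actually negotiates PCIe 3.0 or 4.0 in that slot is an assumption to check against its spec sheet.

```python
# Approximate link bandwidth, using the round per-lane numbers from post 5
# (PCIe 3.0 ~1 GB/s per lane, PCIe 4.0 ~2 GB/s per lane). Rough figures only.

GB_PER_LANE = {"pcie3": 1.0, "pcie4": 2.0}

def link_mb_s(gen: str, lanes: int) -> float:
    """Approximate one-direction link bandwidth in MB/s."""
    return GB_PER_LANE[gen] * lanes * 1000

print(link_mb_s("pcie4", 8))   # full-width x8 slot:               ~16000 MB/s
print(link_mb_s("pcie4", 4))   # slot dropped to x4:                ~8000 MB/s
print(link_mb_s("pcie3", 4))   # if the card links at PCIe 3.0 x4:  ~4000 MB/s

# For scale: a single 25 GbE port at line rate needs about 25/8 = 3.125 GB/s
# per direction, and the 7.5 Gb/s iperf3 result is 7.5/8 = 0.9375 GB/s.
print(25 / 8 * 1000, 7.5 / 8 * 1000)   # 3125.0 MB/s, 937.5 MB/s
```

None of this proves where the bottleneck is; it only shows how much headroom each link width leaves on paper.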
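
Finally, the unit conversion behind the iperf3 reading in post 6, just to make the wire-speed vs. copy-speed gap explicit (decimal megabytes, no allowance for protocol overhead):

```python
# Convert the iperf3 reading from post 6 into MB/s and put it next to the
# observed SMB copy speeds. Pure arithmetic, no measurement involved.

def gbps_to_mb_s(gbps: float) -> float:
    """Network Gb/s to MB/s (decimal units)."""
    return gbps * 1000 / 8

print(f"iperf3: 7.5 Gb/s is about {gbps_to_mb_s(7.5):.1f} MB/s")  # ~937.5 MB/s
print("observed SMB copies: ~400-480 MB/s")
# Per posts 1 and 2 above, the gap turned out to be the write path
# (user share vs. cache) plus the PCIe slot, not the network itself.
```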