-Daedalus

Members

  • Posts: 426
  • Joined
  • Last visited

Converted

  • Gender: Undisclosed


-Daedalus's Achievements

Enthusiast (6/14)

73 Reputation

  1. Not sure what the root cause of this is. I didn't have time to dig into it, but a reboot has resolved it for the moment. They're i210s, so they should be very well supported, but it is a new board. Hopefully not borked hardware. I've since plugged a second interface into the bond, so hopefully if this happens again it won't take down both (a quick bond-status check is sketched below the post list). Figured I'd flag it anyway. Diags attached; let me know if I can provide anything else. server-diagnostics-20240315-0811.zip
  2. Thanks for the responses, guys. Is this as simple as just replacing the bz* files, or am I missing something? (A sketch of a manual bz* swap is below the post list.)
  3. I recently bought, among other things, an 8500G (AM5 upgrade) for a server with a dead motherboard. Found out the hard way that the CPU's IGP causes a kernel panic on boot. I've got a small window of time before I have to return it. Any chance we'll get a 6.13 RC on a newer kernel this week?
  4. +1. It's been asked before under different names, but some kind of "mirror" or "both" setting for mover would be great.
  5. Didn't find this, nice one. I don't use Connect, but I do use UD. I restarted php-fpm again, and the second restart seems to have done it; things seem back to normal now. I'll keep an eye on that thread. Cheers!
  6. Hi all, I'm not sure if this is the same as the other bugs posted, because the server isn't hung - the issue only appears to affect the management plane. I'm seeing the following in syslog:

     Oct 7 08:50:57 server php-fpm[14227]: [WARNING] [pool www] child 25071 exited on signal 9 (SIGKILL) after 42.661800 seconds from start
     Oct 7 08:50:59 server php-fpm[14227]: [WARNING] [pool www] child 28547 exited on signal 9 (SIGKILL) after 30.913435 seconds from start
     Oct 7 08:51:01 server php-fpm[14227]: [WARNING] [pool www] child 29871 exited on signal 9 (SIGKILL) after 28.662132 seconds from start
     Oct 7 08:51:03 server php-fpm[14227]: [WARNING] [pool www] child 30910 exited on signal 9 (SIGKILL) after 29.121180 seconds from start
     Oct 7 08:51:05 server php-fpm[14227]: [WARNING] [pool www] child 930 exited on signal 9 (SIGKILL) after 19.329105 seconds from start
     Oct 7 08:51:08 server php-fpm[14227]: [WARNING] [pool www] child 2269 exited on signal 9 (SIGKILL) after 15.648647 seconds from start
     Oct 7 08:51:11 server php-fpm[14227]: [WARNING] [pool www] child 2556 exited on signal 9 (SIGKILL) after 16.420770 seconds from start
     Oct 7 08:51:13 server php-fpm[14227]: [WARNING] [pool www] child 3185 exited on signal 9 (SIGKILL) after 15.064497 seconds from start

     I tried restarting php-fpm, which helped a little - I was able to access the syslog page in ~20 seconds rather than the 15 minutes the Dashboard > Tools > Syslog navigation took before that - but it seems to be getting worse again since. I tried to look for a dedicated log for this, but couldn't find one; I thought it might be spamming one somewhere. Diags attached. I'm probably going to end up rebooting it soon, as I had planned some maintenance and this is a low-usage period. (A couple of quick memory-pressure checks are sketched below the post list.) server-diagnostics-20231007-0903.zip
  7. Hurray, thread necromancy! For what it's worth, I'm 100% with @mgutt on this one. I'm really not sure why USB was even brought up; I can think of very little that would write less. If you really wanted, you could write it to RAM and copy it to flash once an hour or something (sketched below the post list), but the writes would still be minuscule. Either way: +1, I like the idea. Probably wouldn't use it too much myself, but it seems like a nice little QoL add.
  8. I can't add to this, except to put a question 4 in there: Why was there not a big red flashing alert in the UI for this?
  9. Not sure of the technical limitations for this, but figured I'd flag it anyway given similar work is probably happening with the 6.13 train. It would work kind of like split levels in shares now. When a share is created on a ZFS pool, another setting would appear with a dropdown, letting the user create child datasets x levels deep in the share. e.g. create share 'appdata', then a setting "Create folders as child datasets" with the options:
     • Only create datasets at the top level
     • Only create datasets at the top two levels
     • Only create datasets at the top three levels
     • Automatically create datasets at any level
     • Manual - Do not automatically create datasets
     The obvious use case for this is the ability for people to snapshot individual VMs or containers, as they're more likely to be using ZFS for cache pools rather than the array. (A manual equivalent is sketched below the post list.) And I know, Spaceinvader has a video series for this, but I'd like it native.
  10. I actually did this because I found another one of your answers to a similar issue. It was, I think, this:

      NAME       SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE        DIO LOG-SEC
      /dev/loop1         0      0         1  1 /boot/bzfirmware   0     512
      /dev/loop0         0      0         1  1 /boot/bzmodules    0     512

      Or, at least, these were the two files listed. The values may have differed. fwiw
  11. For anyone looking at this in future: I ended up forcing it down. The only thing I could think of still running is the syslog server. It was writing to a dataset on the 'ssd' pool. When the server came back up, that share was on the array.
  12. This has happened both times I've tried to restart the array since upgrading to 6.12.4 and changing my two pools from BTRFS to ZFS. Syslog:

      Sep 25 08:44:09 server emhttpd: shcmd (2559484): /usr/sbin/zpool export ssd
      Sep 25 08:44:09 server root: cannot unmount '/mnt/ssd': pool or dataset is busy
      Sep 25 08:44:09 server emhttpd: shcmd (2559484): exit status: 1
      Sep 25 08:44:09 server emhttpd: Retry unmounting disk share(s)...
      Sep 25 08:44:14 server emhttpd: Unmounting disks...
      Sep 25 08:44:14 server emhttpd: shcmd (2559485): /usr/sbin/zpool export ssd
      Sep 25 08:44:14 server root: cannot unmount '/mnt/ssd': pool or dataset is busy
      Sep 25 08:44:14 server emhttpd: shcmd (2559485): exit status: 1
      Sep 25 08:44:14 server emhttpd: Retry unmounting disk share(s)...
      Sep 25 08:44:19 server emhttpd: Unmounting disks...

      I tried to unmount /mnt/ssd/ manually, including forcibly, but no luck. I'd post diags, but they seem to be stuck collecting (10+ mins now). Any output I can get on this before I force it down? (A quick check for what's still holding the mount is sketched below the post list.)
  13. That would go some way towards explaining it. The cache is still useful if it's NVMe/SSD vs spinners, but certainly less so, alright.
  14. I wasn't the one above quoting disk speeds; that was @PassTheSalt. It is weird that he's getting such high speeds to the array, though. I only get around 90 sustained.
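
Sketch for post 1: a quick way to check bond membership and per-NIC link state on Linux. The bond name bond0 and interface name eth0 are assumptions; substitute whatever the actual config uses.

    # Show bond mode, MII status, and which slaves are up
    cat /proc/net/bonding/bond0

    # Per-NIC link state and driver info (i210s use the igb driver)
    ethtool eth0
    ethtool -i eth0

If an i210 drops off again, the igb driver messages in dmesg/syslog are the first place to look.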
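
Sketch for post 2: roughly what a manual bz* swap could look like, assuming the replacement files have been extracted from a release zip to /tmp/unraid-new (a hypothetical path). This is a sketch, not the official upgrade procedure - back up first.

    # Back up the current boot images
    mkdir -p /boot/previous
    cp /boot/bz* /boot/previous/

    # Copy in the replacements and verify them, assuming the release
    # ships checksum files alongside the images
    cp /tmp/unraid-new/bz* /boot/
    cd /boot && sha256sum -c bz*.sha256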
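
Sketch for post 6: children exiting on SIGKILL is often the kernel's OOM killer rather than php-fpm itself, so a couple of quick memory-pressure checks are worth running. The syslog path is an assumption.

    # Did the kernel OOM killer fire?
    dmesg | grep -iE 'out of memory|killed process'
    grep -i oom /var/log/syslog

    # How heavy are the current php-fpm children?
    ps -o pid,rss,etime,args -C php-fpm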
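
Sketch for post 7: the RAM-buffer idea mentioned there, using a tmpfs staging area that a cron job flushes to flash once an hour. All paths here are hypothetical.

    # Stage writes in RAM
    mkdir -p /mnt/ramstage
    mount -t tmpfs -o size=64m tmpfs /mnt/ramstage

    # Hourly cron job: flush the staged files to the flash drive
    rsync -a --delete /mnt/ramstage/ /boot/ramstage/

Anything still in tmpfs at power loss is gone, which is the trade-off for near-zero flash writes.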
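
Sketch for post 9: the proposed setting can be approximated by hand today with nested datasets, assuming a pool named 'ssd' and hypothetical container names.

    # Top-level share as a dataset
    zfs create ssd/appdata

    # One child dataset per container, so each can be snapshotted on its own
    zfs create ssd/appdata/plex
    zfs create ssd/appdata/nextcloud
    zfs snapshot ssd/appdata/plex@pre-update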
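
Sketch for post 12: the usual first step for "pool or dataset is busy" is finding which processes still hold the mountpoint (e.g. the syslog server from post 11).

    # List processes with open files or a working directory under the mount
    fuser -vm /mnt/ssd

    # Alternative view with lsof (+D recurses and can be slow on large trees)
    lsof +D /mnt/ssd

Once the holders are stopped, zpool export should succeed without forcing.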