kubed_zero

Community Developer
  • Posts: 167
  • Joined

  • Last visited

Converted

  • Gender
    Undisclosed


kubed_zero's Achievements

Apprentice (3/14)

Reputation: 31

  1. To clarify on this: Unraid can install arbitrary .txz packages automatically at boot; anything in the /boot/extra/ directory will be installed. The terminal command `installpkg path/to/package.txz` can also be used to install packages manually. I still use this plugin's GitHub page as a repository to download packages https://github.com/UnRAIDES/unRAID-NerdTools/tree/main/packages/6.11 but I also use https://slackware.uk/slackware/slackware64-current/ as a source. Sometimes packages need to be built manually as well, such as Python, which I have a thread for elsewhere on this forum. So in essence, many people asking for plugins just need to find Slackware-compatible versions online, download them to their boxes, and run the install command. There may be other approaches as well, such as a package manager similar to apt/yum/dnf/Homebrew or a comparable GUI, but at the end of the day, downloading to your box and running an install command is all those options are doing. Also, it's safe to say this is a lie and an excuse to abandon the project. There's been minimal support here, which I called out over six months ago when trying to get my Python change merged. It still isn't merged 🙃 https://github.com/UnRAIDES/unRAID-NerdTools/pull/84
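The boot-time behavior described above can be sketched as a small shell function. The `/boot/extra` path and `installpkg` command are Unraid's real conventions, but this loop is my own illustration, not Unraid's actual boot script:

```shell
# Illustrative sketch of Unraid's boot-time package handling (NOT the real
# boot code): every .txz dropped into /boot/extra gets passed to installpkg.
install_extra_packages() {
    local dir="${1:-/boot/extra}"   # Unraid's conventional directory
    local pkg
    for pkg in "$dir"/*.txz; do
        [ -e "$pkg" ] || continue   # glob matched nothing; skip
        installpkg "$pkg"           # same command works manually in a terminal
    done
}
```

Running `install_extra_packages` by hand is equivalent to installing each package yourself with `installpkg path/to/package.txz`.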
  2. Thanks for checking! No problem, feel free to use it.
  3. https://slackware.uk/search?p=%2F&q=glibc-2.37
     https://slackware.uk/cumulative/slackware64-current/slackware64/l/glibc-2.37-x86_64-1.txz
     https://slackware.uk/cumulative/slackware64-current/slackware64/l/glibc-2.37-x86_64-2.txz
     https://slackware.uk/cumulative/slackware64-current/slackware64/l/glibc-2.37-x86_64-3.txz
     Behold! With some digging through that website, old versions can be found.
  4. I think I was able to find everything I needed on https://slackware.uk/slackware/ and some general googling turned up the older versions. I've also uploaded some of the older packages as part of this post: glibc-2.37-x86_64-2.txz openssl-3.1.1-x86_64-1.txz Let me know if you need anything else!
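The links in the post above follow slackware.uk's cumulative-archive path layout. Assuming that layout holds, a small helper can build the URL for a given package revision (the function name is mine, and the layout is an observation from those links, not a documented API):

```shell
# Build a slackware.uk cumulative-archive URL from a package series letter
# (e.g. "l" for libraries, "a", "n", ...) and a full package name.
# Assumes the path layout seen in the links above continues to hold.
slackware_pkg_url() {
    local series="$1" pkg="$2"
    echo "https://slackware.uk/cumulative/slackware64-current/slackware64/${series}/${pkg}.txz"
}
```

For example, `slackware_pkg_url l glibc-2.37-x86_64-2` reproduces the second link above; the result can be fetched with wget and installed with `installpkg`.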
  5. @gombihu see my post from above. Its instructions are working for me right now on 6.12.6. You missed a step with the fruit file. Good luck!
  6. I just finished building Python 3.12.1 using similar instructions to the above. I now run `find usr/lib*/python* -name '*.so' | xargs strip --strip-unneeded` instead of `find . -print0 | xargs -0 file | grep -e "executable" -e "shared object" | grep ELF | cut -f 1 -d : | xargs strip --strip-unneeded 2> /dev/null || true`. I used a longer list of packages installed at compile time; some of these are not needed for Python but are needed for BorgBackup. I found that using newer versions of OpenSSL and glibc would allow for successful compilation, but caused failures when executing Python, with errors such as `python3: /lib64/libm.so.6: version 'GLIBC_2.38' not found (required by python3)`; and sure enough, Unraid 6.12.6 has only `/lib64/libm-2.37.so`. The source tarball and the packages installed at compile time:
     Python-3.12.1.tar.xz
     acl-2.3.1-x86_64-1.txz
     binutils-2.41-x86_64-1.txz
     bzip2-1.0.8-x86_64-3.txz
     expat-2.5.0-x86_64-1.txz
     fuse3-3.15.0-x86_64-1.txz
     gc-8.2.4-x86_64-1.txz
     gcc-13.2.0-x86_64-1.txz
     gcc-g++-13.2.0-x86_64-1.txz
     gdbm-1.23-x86_64-1.txz
     git-2.43.0-x86_64-1.txz
     glibc-2.37-x86_64-2.txz
     guile-3.0.9-x86_64-1.txz
     kernel-headers-6.1.64-x86-1.txz
     libffi-3.4.4-x86_64-1.txz
     libmpc-1.3.1-x86_64-1.txz
     libzip-1.10.1-x86_64-1.txz
     lzlib-1.13-x86_64-1.txz
     make-4.4.1-x86_64-1.txz
     openssl-3.1.1-x86_64-1.txz
     openssl-solibs-3.1.1-x86_64-1.txz
     openssl11-1.1.1w-x86_64-1.txz
     openssl11-solibs-1.1.1w-x86_64-1.txz
     pkg-config-0.29.2-x86_64-4.txz
     readline-8.2.001-x86_64-1.txz
     xz-5.4.3-x86_64-1.txz
     zlib-1.2.13-x86_64-1.txz
     I've attached the compiled python3-3.12.1 TXZ file, as well as the compiled pyfuse3-3.3.0 and borgbackup-1.2.7 WHL files, which can be installed with Pip and are not otherwise available for Unraid. As far as I've tested in my use cases, they work as expected.
Below are my notes on the Python wheel compilation for borgbackup and pyfuse3:

## 2023-12-10 Pyfuse3 Update

- Installing pyfuse3 before BorgBackup, as it's a dependency of Borg
- Installing some Slackware packages first:
```
binutils-2.41-x86_64-1.txz
fuse3-3.16.2-x86_64-1.txz
gcc-13.2.0-x86_64-1.txz
glibc-2.37-x86_64-2.txz
kernel-headers-6.1.64-x86-1.txz
pkg-config-0.29.2-x86_64-4.txz
```
- Then run `pip3 install pyfuse3`, which builds 3.3.0
- It installs a few pip3 packages as dependencies:
```
trio-0.23.1-py3-none-any.whl
attrs-23.1.0-py3-none-any.whl
sortedcontainers-2.4.0-py2.py3-none-any.whl
idna-3.6-py3-none-any.whl
outcome-1.3.0.post0-py2.py3-none-any.whl
sniffio-1.3.0-py3-none-any.whl
Successfully installed attrs-23.1.0 idna-3.6 outcome-1.3.0.post0 pyfuse3-3.3.0 sniffio-1.3.0 sortedcontainers-2.4.0 trio-0.23.1
```
- Permanently fetch all these packages (other than pyfuse3) with `pip3 download attrs==23.1.0 idna==3.6 outcome==1.3.0.post0 sniffio==1.3.0 sortedcontainers==2.4.0 trio==0.23.1` and put them in `/boot/python_wheels/` for offline installation at boot time
- Copy the `pyfuse3` wheel file from the directory stated in the build logs: `cp /root/.cache/pip/wheels/b0/c1/b2/bd1f9969742c3b690d74ae13233287eec544c5a9135497443e/pyfuse3-3.3.0-cp312-cp312-linux_x86_64.whl /boot/python_wheels/`
- The log line would have looked something like `Created wheel for pyfuse3: filename=pyfuse3-3.3.0-cp312-cp312-linux_x86_64.whl size=1283409 sha256=8471a14517ee73366b6b7fd7dc2921c57327830f0bb2d58a4569abf0e2ca50ad Stored in directory: /root/.cache/pip/wheels/b0/c1/b2/bd1f9969742c3b690d74ae13233287eec544c5a9135497443e`
- Note that renaming needs to follow some strict guidelines https://peps.python.org/pep-0491/#file-name-convention or Pip will error out with responses such as `ERROR: pyfuse3-3.3.0-cp312-cp312-linux_x86_64-kubed20231210.whl is not a valid wheel filename` or `ERROR: pyfuse3-3.3.0-cp312-cp312-linux_x86_64_kubed20231210.whl is not a supported wheel on this platform`
- Add a line to the `/boot/config/go` file to automatically install `pyfuse3` at boot, or just run it ad hoc to confirm it works: `/usr/bin/pip3 install /boot/python_wheels/pyfuse3* --no-index --find-links file:///boot/python_wheels`

## 2023-12-10 Borg 1.2.7 Update

- OK, I'm now running Python 3.12.1 as compiled above
- I also have pyfuse3 installed via Pip in preparation
- (Sidenote) https://forums.unraid.net/topic/129200-plug-in-nerdtools/?do=findComment&comment=1291205 are my old instructions for compiling Borg
- (Sidenote) there's now a SlackBuild for installing Borg on Slackware, updated in 2023Q3, which is more recent than the last time I updated this: https://slackbuilds.org/slackbuilds/15.0/system/borgbackup/borgbackup.SlackBuild
- Generally following https://borgbackup.readthedocs.io/en/stable/installation.html#pip-installation
- First it notes to follow https://borgbackup.readthedocs.io/en/stable/installation.html#source-install and get some Slackware dependencies installed
- Intentionally skipping the installation of `libxxhash`, `libzstd`, and `liblz4`, as the desire is to use the bundled code instead of the system-provided libraries
- The instructions call for `acl` and `pkg-config` only, but the others are needed based on trial and error:
```
acl-2.3.1-x86_64-1.txz
binutils-2.41-x86_64-1.txz
gcc-13.2.0-x86_64-1.txz
glibc-2.37-x86_64-2.txz
kernel-headers-6.1.64-x86-1.txz
pkg-config-0.29.2-x86_64-4.txz
```
- Then it asks to install some Pip dependencies JUST for the build process: `wheel`, `setuptools`, `pkgconfig`
- This can be done with `pip3 download pkgconfig setuptools wheel && pip3 install *.whl`:
```
pkgconfig-1.5.5-py3-none-any.whl
setuptools-69.0.2-py3-none-any.whl
wheel-0.42.0-py3-none-any.whl
```
- Now run `pip3 install "borgbackup[pyfuse3]"` to install borgbackup and build the wheel. Note that this adds `pyfuse3` integration, which can be skipped by running the simplified install command `pip3 install borgbackup`
- It installs a few pip3 packages as dependencies:
```
msgpack-1.0.7-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
packaging-23.2-py3-none-any.whl
Successfully installed borgbackup-1.2.7 msgpack-1.0.7 packaging-23.2
```
- Permanently fetch these packages (other than borgbackup) with `pip3 download msgpack==1.0.7 packaging==23.2` and put them in `/boot/python_wheels/` for offline installation at boot time
- Copy the `borgbackup` wheel file from the directory stated in the build logs: `cp /root/.cache/pip/wheels/64/90/02/c60bc19558d5e1e7c5ed13041bc27d25d41518ddfdb2def852/borgbackup-1.2.7-cp312-cp312-linux_x86_64.whl /boot/python_wheels/`
- The log line would have looked something like `Created wheel for borgbackup: filename=borgbackup-1.2.7-cp312-cp312-linux_x86_64.whl size=6248226 sha256=830ec3fbcc5c74922f7e269e8378aa08052c8c9cec51f407c53166d989d62e7c Stored in directory: /root/.cache/pip/wheels/64/90/02/c60bc19558d5e1e7c5ed13041bc27d25d41518ddfdb2def852`
- Note that renaming needs to follow some strict guidelines https://peps.python.org/pep-0491/#file-name-convention or Pip will error out
- Add a line to the `/boot/config/go` file to automatically install `borgbackup` at boot, or just run it ad hoc to confirm it works: `/usr/bin/pip3 install /boot/python_wheels/borgbackup* --no-index --find-links file:///boot/python_wheels`
- The final list of wheel files ready to install at boot time:
```
attrs-23.1.0-py3-none-any.whl
borgbackup-1.2.7-cp312-cp312-linux_x86_64.whl
idna-3.6-py3-none-any.whl
msgpack-1.0.7-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
outcome-1.3.0.post0-py2.py3-none-any.whl
packaging-23.2-py3-none-any.whl
pyfuse3-3.3.0-cp312-cp312-linux_x86_64.whl
sniffio-1.3.0-py3-none-any.whl
sortedcontainers-2.4.0-py2.py3-none-any.whl
trio-0.23.1-py3-none-any.whl
```
Attachments: python3-3.12.1-x86_64-1-kubed20231210.txz, pyfuse3-3.3.0-cp312-cp312-linux_x86_64.whl, borgbackup-1.2.7-cp312-cp312-linux_x86_64.whl
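The renaming errors quoted in the notes above come from appending a marker to the platform tag. Per the PEP 491 filename convention linked there, a custom marker belongs in the optional build-tag slot, between the version and the python tag, and must start with a digit. A hedged sketch (the helper function and the `1kubed20231210` tag are my own illustration, not from the original post):

```shell
# Insert a PEP 491 build tag (must start with a digit) between the version
# and the python tag, e.g. foo-1.0-cp312-... -> foo-1.0-1mytag-cp312-...
tag_wheel() {
    local whl="$1" build="$2"
    local base="${whl%.whl}"
    local dist_ver="${base%-*-*-*}"     # strip python/abi/platform tags
    local tags="${base#"$dist_ver"-}"   # the three trailing tags
    echo "${dist_ver}-${build}-${tags}.whl"
}
```

For example, `tag_wheel pyfuse3-3.3.0-cp312-cp312-linux_x86_64.whl 1kubed20231210` produces a filename of the form Pip accepts, unlike the two rejected names quoted above.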
  7. There's been no activity on the GitHub repository since I posted my PR three months ago: https://github.com/UnRAIDES/unRAID-NerdTools/pull/84. I suspect the maintainers UnRAIDES and/or EUGENI_CAT have abandoned this effort (August 24 is the last date EUGENI_CAT logged into this forum).
  8. I'm not aware of any reason for it not to be added. The repo owner seems to be MIA: not responding to the PR, not updating the repo, not commenting here. I have notes in my PR on installing the necessary tools to compile on Unraid: https://github.com/UnRAIDES/unRAID-NerdTools/pull/84 I also made an application I call DiffLens, which uses Blake3 hashes instead of MD5 hashes to do file integrity validation on Unraid shares.
  9. I can't say for certain; this is not a scenario I can test, as another VM on the machine is providing me network access to Unraid. I want to emphasize, though, that in both of these cases Unraid ran fine as a VM for 3-5 years, and it's only in the past few months that I've been seeing this.
  10. I tried searching around but couldn't find much on this. As long as I don't reboot Unraid, it won't have parity errors. Most recently, I ran the parity check three times in a row without rebooting, and it reported 0 sync errors each time. I then rebooted (safely, stopping the array and then rebooting) and started another parity check, and now there are errors. I see a consistent 1025 errors on this particular Unraid box whenever errors do pop up, which is suspicious. Looking at the parity check history, there are only ever errors after a reboot.

Parity history, showing the last three runs had zero errors: I then immediately rebooted and started another parity check, taking this screenshot:

This is a Supermicro motherboard with ECC RAM. Unraid is running as an ESXi 6.7 VM, with the Intel SATA controller passed through to the VM. I have a second Unraid server with the same setup (albeit newer hardware and newer ESXi), and it does something similar: 0 errors on repeated parity check operations, but the second I reboot and run a parity check, it starts finding errors. Both Unraid systems have been running as VMs for a few years and did not always have this issue. I've tried running New Permissions in case something wacky happened to some of the files, but that did not help.

Diagnostics from both systems are attached; the parity check errors can be seen in one of them. I can grab new/better diagnostics later if need be. I'm looking for help with troubleshooting next steps, as this leaves me less confident in restoring valid data should a drive fail. I did find this blog post https://blog.insanegenius.com/2020/01/10/unraid-repeat-parity-errors-on-reboot/ which reports the same errors I see, "Jan 3 10:03:07 Server-2 kernel: md: recovery thread: P corrected, sector=1962934168", but I don't think it's relevant in this case, as I'm using the SATA ports directly on the motherboards without any LSI or SAS/HBA cards. diagnostics.zip
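For anyone digging through the diagnostics, the corrected-sector lines like the one quoted from the blog post can be pulled out of a syslog with a small filter. A sketch (the helper function is my own; the log format is the `md: recovery thread: P corrected, sector=` form quoted above):

```shell
# List the unique corrected parity sectors recorded in a syslog file.
parity_corrections() {
    grep -o 'md: recovery thread: P corrected, sector=[0-9]*' "$1" | sort -u
}
```

Piping the result through `wc -l` makes it easy to check whether the suspicious 1025-error figure corresponds to the same set of sectors after each reboot.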
  11. I have Time Machine working on Unraid 6.12.3 and macOS Ventura 13.5.1, and wanted to make sure the historical threads were updated with what my solution ended up being. Adding to both the SMB Extras global settings AND the smb-fruit settings ended up being necessary.
  12. I have Time Machine fully working on Unraid 6.12.3 and macOS Ventura 13.5.1, and wanted to make sure the historical threads were updated with what my solution ended up being. Adding to both the SMB Extras global settings AND the smb-fruit settings ended up being necessary.
  13. I wanted to share that I too have mostly figured out the Samba macOS nuance. I'm on Unraid 6.12.3 with macOS Ventura 13.5.1.

To start, setting `fruit:metadata = stream` in the SMB Extras in the Unraid UI was the single biggest contributor to getting things working. Here's exactly what I have, in its entirety:
[Global]
fruit:metadata = stream
Note that I don't use Unassigned Devices, which I think would add to these lines. After adding this and stopping/starting the array, pre-existing Time Machine backups were NOT working reliably, so I also had to create new Time Machine backups from scratch. I kept the old sparsebundles around just in case. Once new initial backups were made successfully, one of my MacBooks was able to back up reliably on a daily cadence. It's been running this way for a couple of months.

Meanwhile, another of my MacBooks refused to work well with Time Machine, making one successful backup every few weeks, contingent on a recent Unraid reboot. I couldn't deal with this, so I factory reset it (reinstalling macOS) and created an additional new Time Machine backup on Unraid. Then it worked flawlessly.

Then one of my MacBooks died, so I needed to restore data from Time Machine. I first tried to connect to Unraid and mount the sparsebundle through Finder, but it would time out, beachball, and never end up working. I was, however, able to get it mounted and accessible through the Terminal/CLI using the command `hdiutil attach /Volumes/path/to/sparsebundle`, and with that, access the list of Time Machine snapshots and the files I wanted to recover. Then I tried to use Apple's Migration Assistant to attempt a full restore from a Time Machine backup. I was able to connect to the Unraid share, and it listed the sparsebundles, but it would get stuck at "Loading Backup..." indefinitely.
I moved some of the other computers' sparsebundles out of the share so it could focus on just the one sparsebundle I wanted, but even after waiting 24 hours it would still say it was loading backups. Looking at the Open Files plugin's tab in Unraid, I would see it reading one band file at a time. After enough of this, I tried to access a different sparsebundle that had only two backups instead of months of backups, and "Loading Backups..." went away within 10 minutes; I was able to proceed with the Time Machine restoration, albeit slowly, and not with the data I wanted.

This did clue me in to something, though. Using `find /path/to/sparsebundle/bands/ -type f | wc -l` to get the file count inside the sparsebundle, the one that made it through Migration Assistant had only 111 files, while the one that stalled for 24 hours had over 9000.

I then went back to the Unraid SMB settings and fiddled around a bit more. I found, as others did, that changing the following settings in smb-fruit.conf caused big improvements. The defaults for these settings are `yes`, so I changed them to `no`:
readdir_attr:aapl_rsize = no
readdir_attr:aapl_finder_info = no
readdir_attr:aapl_max_access = no
As the Samba vfs_fruit man page https://www.samba.org/samba/docs/current/man-html/vfs_fruit.8.html notes, `readdir_attr:aapl_max_access = no` is probably the most significant of these; the setting is described as: "Return the user's effective maximum permissions in SMB2 FIND responses. This is an expensive computation. Enabled by default." My suspicion is that the thousands of band files that make up a sparsebundle get bottlenecked when read through Samba, causing Migration Assistant to fail.
After adding these lines to `/etc/samba/smb-fruit.conf`, copying the updated file over to `/boot/config/smb-fruit.conf`, and stopping and starting the array, I confirmed the settings were applied with `testparm -s`, looking at the output:
[global]
~~~shortened~~~
fruit:metadata = stream
fruit:nfs_aces = No
~~~shortened~~~
[TimeMachine]
path = /mnt/user/TimeMachine
valid users = backup
vfs objects = catia fruit streams_xattr
write list = backup
fruit:time machine max size = 1250000M
fruit:time machine = yes
readdir_attr:aapl_max_access = no
readdir_attr:aapl_finder_info = no
readdir_attr:aapl_rsize = no
fruit:encoding = native

Now that the new settings were in place, Migration Assistant got through the "Loading Backups" stage within a minute or two, and I was able to fully restore the old backup sparsebundle with thousands of files.

I know there's some nuance around Apple/fruit settings depending on the first device to connect to Samba, so this entire experiment took place with only Macs connecting to Unraid. I have not yet repeated the experiment with Windows connecting first or in parallel, but I hope the behavior is the same, as I cannot guarantee Macs will always connect before Windows computers on my network.

Anyway, I wanted to share, as I avoided updating Unraid 6.9.2 for literal years to keep a working Time Machine backup. I jumped for joy at the macOS improvements forum post a year ago only to find it didn't help in any way, and was again excited to update to 6.12, just to find it STILL didn't work reliably with default settings. Very disappointing, LimeTech. And a huge thanks to the folks in these threads who have shared their updates and what has or has not worked for them. Let's keep that tradition going, as it's clear we are on our own here.

Some Time Machine related posts from over the years; I'll make update posts in each directing here.

TLDR: Working Time Machine integration.
Adding `fruit:metadata = stream` to the global settings, and then `readdir_attr:aapl_max_access = no`, `readdir_attr:aapl_finder_info = no`, and `readdir_attr:aapl_rsize = no` to the smb-fruit settings, allowed me to run Time Machine backups AND restore from or mount them using Finder and Migration Assistant.
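The three smb-fruit overrides can be appended and persisted with a couple of commands. A hedged sketch using the file paths from the post (the helper function name is mine; run this with the array stopped and verify with `testparm -s` afterwards):

```shell
# Append the three readdir_attr overrides described above to a smb-fruit.conf.
append_fruit_overrides() {
    local conf="$1" opt
    for opt in aapl_rsize aapl_finder_info aapl_max_access; do
        echo "readdir_attr:${opt} = no" >> "$conf"
    done
}

# On Unraid this would be applied to the live file and then persisted across
# reboots by copying it to the flash drive, per the post above:
#   append_fruit_overrides /etc/samba/smb-fruit.conf
#   cp /etc/samba/smb-fruit.conf /boot/config/smb-fruit.conf
```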
  14. cross-linking to the other feature requests asking for the same thing
  15. cross-linking to the other feature requests asking for the same thing