xhaloz

Members
  • Posts: 142
  • Joined
  • Last visited

Converted

  • Gender: Undisclosed

xhaloz's Achievements

Apprentice (3/14)

11 Reputation

  1. Anyone know why this happens with the latest update (05-comfy-ui)? Log below; a possible fix is sketched after this list.

04/15/2024 08:40:42 PM Prestartup times for custom nodes:
04/15/2024 08:40:42 PM 0.9 seconds: /config/05-comfy-ui/ComfyUI/custom_nodes/ComfyUI-Manager
04/15/2024 08:40:42 PM
04/15/2024 08:40:54 PM Total VRAM 12037 MB, total RAM 96462 MB
04/15/2024 08:40:54 PM Set vram state to: NORMAL_VRAM
04/15/2024 08:40:54 PM Device: cuda:0 NVIDIA GeForce RTX 3060 : cudaMallocAsync
04/15/2024 08:40:54 PM VAE dtype: torch.bfloat16
04/15/2024 08:41:02 PM Using pytorch cross attention
04/15/2024 08:41:09 PM Traceback (most recent call last):
04/15/2024 08:41:09 PM   File "/config/05-comfy-ui/ComfyUI/nodes.py", line 1864, in load_custom_node
04/15/2024 08:41:09 PM     module_spec.loader.exec_module(module)
04/15/2024 08:41:09 PM   File "<frozen importlib._bootstrap_external>", line 940, in exec_module
04/15/2024 08:41:09 PM   File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
04/15/2024 08:41:09 PM   File "/config/05-comfy-ui/ComfyUI/custom_nodes/ComfyUI-Manager/__init__.py", line 18, in <module>
04/15/2024 08:41:09 PM     from .glob import manager_core as core
04/15/2024 08:41:09 PM   File "/config/05-comfy-ui/ComfyUI/custom_nodes/ComfyUI-Manager/glob/manager_core.py", line 9, in <module>
04/15/2024 08:41:09 PM     import git
04/15/2024 08:41:09 PM ModuleNotFoundError: No module named 'git'
04/15/2024 08:41:09 PM
04/15/2024 08:41:09 PM Cannot import /config/05-comfy-ui/ComfyUI/custom_nodes/ComfyUI-Manager module for custom nodes: No module named 'git'
04/15/2024 08:41:09 PM
04/15/2024 08:41:09 PM Import times for custom nodes:
04/15/2024 08:41:09 PM 0.0 seconds: /config/05-comfy-ui/ComfyUI/custom_nodes/websocket_image_save.py
04/15/2024 08:41:09 PM 0.1 seconds (IMPORT FAILED): /config/05-comfy-ui/ComfyUI/custom_nodes/ComfyUI-Manager
04/15/2024 08:41:09 PM
04/15/2024 08:41:09 PM Setting output directory to: /config/outputs/05-comfy-ui
04/15/2024 08:41:09 PM Starting server
04/15/2024 08:41:09 PM
04/15/2024 08:41:09 PM To see the GUI go to: http://0.0.0.0:9000
  2. Oh, also: https://github.com/facefusion/facefusion uses PyTorch now instead of TensorFlow, but the build within the hollafain one still uses TensorFlow for some reason, even though the script pulls from the original repo, which creates that issue as well. I'll check this out later tonight and see if I can push a change to their GitHub.
  3. No, but I did find a solution that needs to be tested. Apparently when you install extensions they install onnxruntime, which creates a conflict with the existing onnx files, and when that happens CUDA is disabled. But again, I have not tested this out. If I do I will report back here and tag you. (A quick way to check which providers are active is sketched after this list.)
  4. Bruh, you are a champion. Thank you and happy holidays!
  5. OK, I've done some digging. If I add FaceFusion via the Docker settings or within the automatic1111 extensions, it does not detect the GPU and therefore does not give me a "CUDA" option. The container works fine with the GPU for generating art, and the device shows up in nvidia-smi when you docker exec into the container. However, if I activate the venv within the Docker container and run python3 -c "import tensorflow; print(tensorflow.config.experimental.list_physical_devices('GPU'))" it returns []. For some reason TensorFlow does not detect the GPU, and FaceFusion needs that detection to offer the CUDA option. Any thoughts? (A short diagnostic is sketched after this list.)
  6. I know you mentioned some models are not production ready, FaceFusion included. Is that warning the reason I don't see "CUDA" as a provider in the FaceFusion web UI?
  7. Thank you for the awesome plugin and keeping the nerd packages alive. Could you please add ripgrep?
  8. Question: if I set the bonding mode to No as indicated, how do you access VMs now? They pull a 122.x network address because br0 goes away when you use that setting.
  9. I am having this same issue. Any custom Docker containers I create will not start after I reboot the server; it says the network ID was not found and I have to recreate the containers. Did you ever find a fix for this?
  10. I can get this up and running by modifying a shell file within the Docker container itself. However, I am curious why the environment variable BLUEIRIS_VERSION=4 does not work. (A generic example of passing the variable into the container is sketched after this list.)
  11. Jesus christ, thank you so much. I was going nuts. I used a password manager to generate the DB password and it had some symbols in it, including a '#'. I appreciate you!!
  12. You can reduce the size by converting to a QCOW2 image instead. It is MUCH smaller than the .img file above. The command for that is:

qemu-img convert -f vmdk -O qcow2 ./GNS3_VM-disk1.vmdk ./GNS3VM-disk1.qcow2

I've been running GNS3 for about two years now with this method. Here are my file sizes for reference:

1.5G  ./GNS3VM-disk1.qcow2
42M   ./GNS3VM-disk2.qcow2
532M  ./GNS3_VM-disk1.vmdk
1.9M  ./GNS3_VM-disk2.vmdk

As for the VM settings, the only things I changed were these: the BIOS stays SeaBIOS, the Machine is i440fx-4.2, and the Primary vDisk Bus is now SCSI instead of SATA. Let me know if you get stuck! (A quick way to verify the converted image is sketched after this list.)
  13. Yeah, I can provide my borg script here; if you need help with it, let me know. Borg makes a local backup and rclone clones it off-site, which gives you three copies of your data, two of them local. The script also will not re-run if rclone hasn't finished its last operation (slow internet) or if a parity sync is running. The key to keeping Borg from constantly re-checking everything is --files-cache=mtime,size: I was noticing that every time I ran Borg it would re-index files that hadn't changed, and this option fixed it, which has to do with unRAID's constantly changing inode values. The Borg docs are very good (https://borgbackup.readthedocs.io/en/stable/). Let me know if you get stuck. Obviously this script won't work until you set up your repository (a minimal setup command is sketched after this list).

#!/bin/sh

LOGFILE="/boot/logs/TDS-Log.txt"
LOGFILE2="/boot/logs/Borg-RClone-Log.txt"

# Exit if borg/rclone is already running
if pgrep "borg" > /dev/null || pgrep "rclone" > /dev/null
then
    echo "$(date "+%m-%d-%Y %T") : Backup already running, exiting" 2>&1 | tee -a $LOGFILE
    exit
fi

# Exit if a parity sync is running
#PARITYCHK=$(/root/mdcmd status | egrep STARTED)
#if [[ $PARITYCHK == *"STARTED"* ]]; then
#    echo "Parity check running, exiting"
#    exit
#fi

# This is the location your Borg program will store the backup data to
export BORG_REPO='/mnt/disks/Backups/Borg/'

# This is the location you want Rclone to send the BORG_REPO to
export CLOUDDEST='GDrive:/Backups/borg/TDS-Repo-V2/'

# Setting this so you won't be asked for your repository passphrase:
export BORG_PASSPHRASE='<MYENCRYPTIONKEYPASSWORD>'
# ...or this to ask an external program to supply the passphrase (I leave this blank):
#export BORG_PASSCOMMAND=''

# I store the cache on the cache drive instead of /tmp so Borg has persistent records after a reboot.
export BORG_CACHE_DIR='/mnt/user/appdata/borg/cache/'
export BORG_BASE_DIR='/mnt/user/appdata/borg/'

# Back up the most important directories into an archive
# (I keep a list of excluded directories in the Excluded.txt file)
SECONDS=0
echo "$(date "+%m-%d-%Y %T") : Borg backup has started" 2>&1 | tee -a $LOGFILE

borg create \
    --verbose \
    --info \
    --list \
    --filter AMEx \
    --files-cache=mtime,size \
    --stats \
    --show-rc \
    --compression lz4 \
    --exclude-caches \
    --exclude-from /mnt/disks/Backups/Borg/Excluded.txt \
    \
    $BORG_REPO::'{hostname}-{now}' \
    \
    /mnt/user/Archive \
    /mnt/disks/Backups/unRAID-Auto-Backup \
    /mnt/user/Backups \
    /mnt/user/Nextcloud \
    /mnt/user/system/ \
    >> $LOGFILE2 2>&1

backup_exit=$?

# Use the `prune` subcommand to maintain 7 daily, 4 weekly and 6 monthly
# archives of THIS machine. The '{hostname}-' prefix is very important to
# limit prune's operation to this machine's archives and not apply to
# other machines' archives also:
#echo "$(date "+%m-%d-%Y %T") : Borg pruning has started" 2>&1 | tee -a $LOGFILE

borg prune \
    --list \
    --prefix '{hostname}-' \
    --show-rc \
    --keep-daily 7 \
    --keep-weekly 4 \
    --keep-monthly 6 \
    >> $LOGFILE2 2>&1

prune_exit=$?
#echo "$(date "+%m-%d-%Y %T") : Borg pruning has completed" 2>&1 | tee -a $LOGFILE

# Use the highest exit code as the global exit code
global_exit=$(( backup_exit > prune_exit ? backup_exit : prune_exit ))

# Execute if no errors
if [ ${global_exit} -eq 0 ]; then
    borgstart=$SECONDS
    echo "$(date "+%m-%d-%Y %T") : Borg backup completed in $(($borgstart/ 3600))h:$(($borgstart% 3600/60))m:$(($borgstart% 60))s" 2>&1 | tee -a $LOGFILE

    # Reset timer
    SECONDS=0
    echo "$(date "+%m-%d-%Y %T") : Rclone Borg sync has started" >> $LOGFILE
    rclone sync $BORG_REPO $CLOUDDEST -P --stats 1s -v 2>&1 | tee -a $LOGFILE2
    rclonestart=$SECONDS
    echo "$(date "+%m-%d-%Y %T") : Rclone Borg sync completed in $(($rclonestart/ 3600))h:$(($rclonestart% 3600/60))m:$(($rclonestart% 60))s" 2>&1 | tee -a $LOGFILE

# All other errors
else
    echo "$(date "+%m-%d-%Y %T") : Borg has errors code:" $global_exit 2>&1 | tee -a $LOGFILE
fi

exit ${global_exit}
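
Notes on the posts above

On the ComfyUI-Manager log in post 1: the traceback ends in ModuleNotFoundError: No module named 'git', and that module is provided by the GitPython package, so the likely fix is installing it into the Python environment ComfyUI actually runs from. A minimal sketch, assuming the container's interpreter is plain python3 (if the image uses a venv, substitute that venv's python):

# Run inside the 05-comfy-ui container, with the same interpreter that starts ComfyUI.
python3 -m pip install GitPython
# Then restart the container so ComfyUI-Manager can import 'git' on startup.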
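On the onnxruntime conflict in post 3: the untested idea there is that an extension pulls in the CPU-only onnxruntime package next to a GPU build, which hides the CUDA provider. A diagnostic sketch, not a fix; the reinstall step is an assumption and just as untested as the original suggestion:

# List the execution providers the active onnxruntime build exposes.
python3 -c "import onnxruntime; print(onnxruntime.get_available_providers())"
# If 'CUDAExecutionProvider' is missing, one thing to try (untested) is keeping only the GPU build:
python3 -m pip uninstall -y onnxruntime
python3 -m pip install onnxruntime-gpu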
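On the TensorFlow check in post 5: an empty list from list_physical_devices('GPU') usually means the installed wheel was not built with CUDA or cannot find the CUDA libraries. A small follow-up diagnostic, assuming a TensorFlow 2.x build recent enough to expose tf.sysconfig.get_build_info():

# Show whether this TensorFlow wheel was built with CUDA and which CUDA/cuDNN versions it expects.
python3 -c "import tensorflow as tf; print(tf.sysconfig.get_build_info())"
# Compare against what the container actually provides.
nvidia-smi
python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"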
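On post 10: an environment variable like BLUEIRIS_VERSION=4 only takes effect if it is passed into the container and the image's startup script actually reads it. A generic sketch of passing and verifying it; the image name below is a placeholder, not the real container:

# Hypothetical image name; substitute the actual BlueIris container image and your usual options.
docker run -d --name blueiris \
  -e BLUEIRIS_VERSION=4 \
  some-repo/blueiris:latest
# Confirm the variable made it into the running container:
docker exec blueiris env | grep BLUEIRIS_VERSION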
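On the QCOW2 conversion in post 12: a quick sanity check of the converted disk before attaching it to the VM, using the same paths as the convert command above:

# Show format, virtual size and actual on-disk size of the converted image.
qemu-img info ./GNS3VM-disk1.qcow2
# Optional integrity check of the qcow2 metadata.
qemu-img check ./GNS3VM-disk1.qcow2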
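On the borg script in post 13: as noted there, the script assumes the repository already exists. A minimal one-time setup using the same BORG_REPO path from the script; the encryption mode here is an assumption, so pick whichever mode the Borg docs recommend for your setup:

# One-time: create the local repository the script backs up into.
borg init --encryption=repokey /mnt/disks/Backups/Borg/
# Optional: confirm the repo is readable before scheduling the script.
borg list /mnt/disks/Backups/Borg/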