Vaseer

Members

  • Posts: 106
  • Joined
  • Last visited
  • Gender: Undisclosed

Vaseer's Achievements

Apprentice (3/14)

Reputation: 6

  1. I would like to set up Nextcloud with an SSD for cache (small and frequently accessed/changed files) and an HDD in the array for large files. I have searched this forum and used my Googling skills, but didn't find anything for my planned configuration. My current Nextcloud configuration uses 2x 6TB HDDs, mounted as unassigned devices in BTRFS RAID1. After ~2 years only 10% of the available space is used, mostly by family pictures and some other large and rarely accessed files. My plan is to use 2x 1TB SSDs (or 2TB) in a RAID1 cache pool and 1x 6TB HDD in the array: the SSDs for small and frequently accessed/changed files, the HDD for large and rarely accessed files. Is this doable with native unRAID/Nextcloud options?
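     A sketch of one way this might be wired up, assuming the External storage app is enabled in Nextcloud and using example paths/names that are not from my setup: keep the container's /data on a cache-pool (SSD) share, map the array (HDD) share into the container as a second path, and expose it inside Nextcloud as "Local" external storage via occ:
       # example mappings: /data <--> /mnt/cache/nextcloud-data (SSD pool), /archive <--> /mnt/user/nextcloud-archive (array HDD)
       docker exec -it nextcloud occ files_external:create /Archive local null::null -c datadir=/archive
       docker exec -it nextcloud occ files_external:list   # verify the new mount shows up
     Small and frequently changed files would then live on the SSD-backed data directory, while large files dropped into the /Archive folder end up on the array HDD.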
  2. I would like to double-check my calculations for a planned UPS. My unRAID server:
     PSU: Seasonic X-Series X-650 KM3 650W ATX 2.3 (SS-650KM3)
     CPU: AMD Ryzen 7 2700
     GPU: none
     Drives: 1x M.2 SSD, 2x SATA SSD, currently 10 HDDs (max 15)
     HBA: LSI 9211-8i
     Use case: NAS storage for Kodi (no transcoding done on unRAID) and personal files, plus a Docker and VM platform. VMs: currently none; planned is OPNsense and optionally 1 or 2 more. Docker: Nextcloud, UniFi controller, YT downloader, PiHole... nothing too power hungry.
     My unRAID server sits in a rack with other network gear, some of which I would also like to connect to the UPS. The peak power consumption of all devices in the rack, measured over the past couple of months (several parity checks ran in that period, so they are included), is 255W, and the power meter reports a power factor of 0.8. I am planning to add up to 5 more HDDs and 2 or 3 VMs, so the power draw will increase, but I think it shouldn't go over 300W peak, since not all devices will be connected to the UPS. I was looking for a UPS rated for 500W (or more) and, if my calculation is correct, 625VA (worked numbers below). If I understand correctly, this is the maximum power draw the UPS can handle across both the battery-protected and the unprotected outputs? I found the APC BX1600MI-GR, which looks interesting, especially with more W/VA and a longer run time. My plan is to run unRAID no longer than 3-5 minutes on the UPS; if power isn't back within that period, it is either a major problem or a planned power disconnect. I appreciate any feedback, so I will know whether I am looking in the right direction. Please let me know if I missed anything.
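     For reference, the arithmetic behind the 625VA figure (apparent power = real power / power factor):
       500 W / 0.8 = 625 VA   (the rating I was searching for)
       300 W / 0.8 = 375 VA   (estimated future peak of the devices on the UPS)
     So a 500W/625VA unit already covers the estimated ~300W/375VA load with some headroom, and the BX1600MI-GR (nominally 1600VA) sits well above it.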
  3. I found the cause of my problem - my OneDrive account is blocked by MS and all files on OneDrive are read-only. It is an account bought on eBay for a couple of €, so this is something I was expecting to happen, but it lasted quite a while (I bought the account in 2019).
  4. For a couple of days I have had a problem with Duplicati backing up my data to Office 365 OneDrive. I am getting this error in the Duplicati UI:
     Failed to connect: Forbidden: Forbidden error from request https://graph.microsoft.com/v1.0/me/drive/root:/Duplicati System.Net.HttpWebResponse { "error": { "code": "accessDenied", "message": "Database Is Read Only", "innerError": { "code": "serviceReadOnly", "date": "...", "request-id": "...", "client-request-id": "..." } } }
     and over the mail notification:
     Failed: Forbidden: Forbidden error from request https://graph.microsoft.com/v1.0/me/drive/root:/Duplicati:/children System.Net.HttpWebResponse { "error": { "code": "accessDenied", "message": "Database Is Read Only", "innerError": { "code": "serviceReadOnly", "date": "...", "request-id": "...", "client-request-id": "..." } } }
     Details:
     Duplicati.Library.Backend.MicrosoftGraph.MicrosoftGraphException: Forbidden: Forbidden error from request https://graph.microsoft.com/v1.0/me/drive/root:/Duplicati:/children System.Net.HttpWebResponse { "error": { "code": "accessDenied", "message": "Database Is Read Only", "innerError": { "code": "serviceReadOnly", "date": "...", "request-id": "...", "client-request-id": "..." } } }
       at Duplicati.Library.Main.BackendManager.List () [0x00049] in <e60bc008dd1b454d861cfacbdd3760b9>:0
       at Duplicati.Library.Main.Operation.RecreateDatabaseHandler.DoRun (Duplicati.Library.Main.Database.LocalDatabase dbparent, System.Boolean updating, Duplicati.Library.Utility.IFilter filter, Duplicati.Library.Main.Operation.RecreateDatabaseHandler+NumberedFilterFilelistDelegate filelistfilter, Duplicati.Library.Main.Operation.RecreateDatabaseHandler+BlockVolumePostProcessor blockprocessor) [0x00084] in <e60bc008dd1b454d861cfacbdd3760b9>:0
       at Duplicati.Library.Main.Operation.RecreateDatabaseHandler.Run (System.String path, Duplicati.Library.Utility.IFilter filter, Duplicati.Library.Main.Operation.RecreateDatabaseHandler+NumberedFilterFilelistDelegate filelistfilter, Duplicati.Library.Main.Operation.RecreateDatabaseHandler+BlockVolumePostProcessor blockprocessor) [0x00037] in <e60bc008dd1b454d861cfacbdd3760b9>:0
       at Duplicati.Library.Main.Operation.RepairHandler.RunRepairLocal (Duplicati.Library.Utility.IFilter filter) [0x000ba] in <e60bc008dd1b454d861cfacbdd3760b9>:0
       at Duplicati.Library.Main.Operation.RepairHandler.Run (Duplicati.Library.Utility.IFilter filter) [0x00158] in <e60bc008dd1b454d861cfacbdd3760b9>:0
       at Duplicati.Library.Main.Controller+<>c__DisplayClass18_0.<Repair>b__0 (Duplicati.Library.Main.RepairResults result) [0x0001c] in <e60bc008dd1b454d861cfacbdd3760b9>:0
       at Duplicati.Library.Main.Controller.RunAction[T] (T result, System.String[]& paths, Duplicati.Library.Utility.IFilter& filter, System.Action`1[T] method) [0x0011c] in <e60bc008dd1b454d861cfacbdd3760b9>:0
     Duplicati is on the latest version (updated during the weekend; I don't have the exact version and/or access to the server right now). The problem is only with Office 365 OneDrive; other destinations (for example Mega) are working fine. From the error text I think the problem is on the OneDrive side, but I am posting here in case anyone has had a problem like this before. Regarding OneDrive: I can log in to OneDrive and upload data manually, and I have regenerated the login key/token for Duplicati, but the problem persists. Any help and advice is appreciated! Thank you!
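     One way to check whether the restriction is on the OneDrive/Graph side rather than in Duplicati is to repeat the failing request by hand with curl (ACCESS_TOKEN below is a placeholder for a valid Microsoft Graph token, e.g. one copied from Graph Explorer):
       curl -H "Authorization: Bearer ACCESS_TOKEN" "https://graph.microsoft.com/v1.0/me/drive/root:/Duplicati:/children"
     If this also comes back with accessDenied / serviceReadOnly, the account itself is restricted and no Duplicati setting will help.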
  5. I had the same problem. My solution was a manual upgrade to version 18.0.13 (from 17.0.x). I did it as described in the first post of this thread, option 3, manual upgrade using occ. All commands are the same, except that docker exec -it nextcloud wget https://download.nextcloud.com/server/releases/latest.tar.bz2 -P /config must be replaced with docker exec -it nextcloud wget https://download.nextcloud.com/server/releases/latest-18.tar.bz2 -P /config (both commands shown below for clarity). The reason is that you can't skip major versions (i.e. if your Nextcloud instance is on version 17.x, you must first upgrade to version 18.x). Hope it helps.
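     For clarity, the only change compared to the linked instructions is the download URL; every other step stays exactly as written there:
       # command from the guide (always fetches the newest major release):
       docker exec -it nextcloud wget https://download.nextcloud.com/server/releases/latest.tar.bz2 -P /config
       # replacement when coming from 17.x, so the upgrade lands on the latest 18.x first:
       docker exec -it nextcloud wget https://download.nextcloud.com/server/releases/latest-18.tar.bz2 -P /config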
  6. I didn't find any information (searched the unRAID forum and Googled it) to answer this question, so I am asking again and adding some new test results. Uploading a ~5GB test file to Nextcloud over the local network (screenshots showing docker.img usage before the upload and a couple of seconds after it finished were attached). Is this the normal/correct way of uploading files to Nextcloud?
     NC container mappings:
     /data <--> /mnt/disks/nextcloud/nextcloud-data/
     /config <--> /mnt/cache/docker/appdata/nextcloud
     unRAID version: 6.6.6
     Edit: Found it! This only happens when I upload files from my Fedora PC via a WebDAV connection (davs://[email protected]/remote.php/webdav). If I upload the file via the browser, the docker.img size doesn't change. Is this a bug or expected behavior?
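     In case it helps anyone hitting the same behaviour: my guess (not verified) is that the WebDAV upload is buffered in a temporary directory inside the container, which lives in docker.img unless it is mapped out. A sketch of one thing to try, with an example path that is not from my actual setup (how exactly occ is invoked depends on the container image):
       # add an extra path mapping to the Nextcloud container, e.g. host /mnt/cache/appdata/nextcloud-tmp <--> container /nc-tmp,
       # then point Nextcloud's temp directory at it ('tempdirectory' is a standard config.php option):
       docker exec -it nextcloud occ config:system:set tempdirectory --value /nc-tmp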
  7. I have path mappings configured for all containers, and this happens when I upload a ~5 GB file to Nextcloud (screenshots showing docker.img utilization before the upload and seconds after it finished were attached). After 10-20 seconds the values return to the same state as before the upload. Is this container specific (if so, I will ask in the NC thread) or could something be wrong with my docker configuration? My unRAID version is 6.6.6.
  8. To clarify the question: I am using the Transmission container, which uses the mapped volume /downloads <--> /mnt/disks/downloads for transferred data. docker.img is on the SSD cache drive; /mnt/disks/downloads is an unassigned HDD. In Transmission I see a cumulative DL/UL data size of ~10 TB, which, if my calculations are correct (see below), corresponds to the cache SSD S.M.A.R.T. attribute "246 Total host sector write", which is 21877021800. SSD sector sizes: 512 bytes logical, 4096 bytes physical. In addition to Transmission's data, I also saw the docker.img size increase when I uploaded some large files (a couple of GB) to Nextcloud.
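     The arithmetic behind "~10 TB corresponds to attribute 246", assuming the attribute counts 512-byte logical sectors:
       21,877,021,800 sectors x 512 bytes = 11,201,035,161,600 bytes ≈ 11.2 TB (≈ 10.2 TiB)
     which is in the same ballpark as the ~10 TB of cumulative traffic Transmission reports.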
  9. Got so used to the docker-shell command that I totally forgot it is not built in to unRAID as a default command 😀 I recommend watching the Spaceinvader One tutorial, where he explains how to get and enable the command.
  10. This still works. Instead of docker exec -it nextcloud bash you can use the command docker-shell, which gives you a list of all Docker containers. Press the number next to the Nextcloud Docker and you will get a Nextcloud shell. All other commands are still the same for this version of NC.
  11. Today I was uploading some files to my Nextcloud and one of them was 6 GB in size. While uploading this file, I got an email notice from the unRAID server: "Docker image disk utilization of 72%". When the upload finished, I got a new email notice: "Docker image disk utilization returned to normal level". This made me curious about how file transfers, or rather file writes, to Nextcloud storage actually work. I always thought files were written directly to the Nextcloud HDDs, but it seems they are initially stored in the Nextcloud Docker instance, which is on the cache SSD, and only then written to the Nextcloud storage HDDs. Is this the proper way for file transfers to Nextcloud, or did I do something wrong with my configuration?
  12. I did a WebUI upgrade from 16.0.1 to 17.0.1 and got the same error as @T0a and @PSYCHOPATHiO. My solution was a manual upgrade as described in this post. Nextcloud is now upgraded to version 17.0.1 and working. Edit: typo
  13. I have set the transcoding temporary path to /ramtranscode (mapping it to /tmp in the docker config) and restarted the Emby docker and the Kodi client, but the problem still persists. I changed one of the users' Vero 4K configurations to use Emby in native mode (not the add-on) and all video files work fine. I was reading around and got the information that transcoding is only done when the Emby add-on on the client (Vero, Kodi) tells the server that it cannot play the original file. Most of the "problematic" files are in MKV HEVC format. The most interesting part is that in native mode (with or without using Emby on the client) all video files work fine. Where/how does the Emby add-on get the information that the client can't play the original file? I can't say for certain, but something must have changed in Emby (server or client add-on), because I noticed the same problem with video files that were working fine 1 or 2 months back. Setting "Enable hardware acceleration when available" to No doesn't resolve the problem. For all Emby users (on the server) I have the same Media Playback configuration:
     YES - Allow media playback
     NO - Allow audio playback that requires transcoding
     NO - Allow video playback that requires transcoding
     YES - Allow video playback that requires conversion without re-encoding
     If my information is correct, there is no way to completely disable transcoding in Emby (to always stream the original file)?
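     For reference, the mapping described above is simply an extra volume in the Emby container's docker config (the container-side name /ramtranscode is just what I picked):
       -v /tmp:/ramtranscode
     with Emby's "Transcoding temporary path" set to /ramtranscode in the server settings, so transcode files land on the host's /tmp (in RAM on unRAID) instead of inside docker.img.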
  14. Transcoding happens with Kodi clients and all of them have the same problem. The clients are: Vero 4K, Windows 10 PC (Ryzen 3 2200G), Ubuntu PC (3rd-gen i7 with an Nvidia GPU)...