All Activity

  1. Past hour
  2. [Support] - Domoticz

    I haven't actually run Domoticz on another system, but I might look at firing it up on a Linux VM; my list of modifications after a pull is getting a bit long. Or maybe it should move onto a Pi. Sent from my SM-G935F using Tapatalk
  3. [Support] - airsonic

    It's not going to work, as unRAID doesn't have drivers for your audio.
  4. Hi, can you please upgrade the webgrab docker? I get an error:
  5. Alright, that did the trick. I've started a backup of my cache on another drive (there are files there that don't belong to a particular share, as I'm using the cache for Transmission and such). When I come back home later today I'll simply stop the array, switch the drive to the 1TB, and restore the backup folders onto the new cache. Does this sound good? Is there anything else I need to take care of?
  6. [Support] - airsonic

    Jukebox (playing the sound locally on the machine where airsonic is running) doesn't work for me. If I configure the jukebox player and start a song I get this error in the log: Error in jukebox: java.lang.IllegalArgumentException: No line matching interface SourceDataLine supporting format PCM_SIGNED 44100.0 Hz, 16bit, stereo, 4 bytes/frame, big-endian is supported
  7. Great stuff - thank you dlandon.
  8. An HDD can only handle about 100 I/O requests per second. If it spins at 7200 rpm, that is 120 revolutions/second, which means the HDD first has to move the heads to the correct cylinder and then wait for the correct sector to rotate in under the head. A normal SSD can handle more than 50,000 IOPS, and some M.2-based SSDs can handle over 400,000 IOPS. This means that moving from an HDD to an SSD makes a huge difference in how many file accesses Plex can do per second. When it comes to transfer rates, the difference is smaller: a SATA-connected SSD only manages about 2-3 times the transfer rate of an HDD. But it's normally the seek speed that matters when you browse for information.
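The arithmetic behind those figures can be sketched as follows. The 7200 rpm value comes from the post above; the 8.5 ms average seek time is an assumed typical figure, not something the poster stated:

```python
# Back-of-envelope HDD access time and random IOPS for a 7200 rpm drive.
rpm = 7200
revs_per_sec = rpm / 60                      # 120 revolutions/second
rot_latency_ms = 0.5 / revs_per_sec * 1000   # on average, wait half a revolution
seek_ms = 8.5                                # assumed typical average seek time

access_ms = seek_ms + rot_latency_ms         # one random access = seek + rotation
iops = 1000 / access_ms

print(f"rotational latency: {rot_latency_ms:.2f} ms")
print(f"access time: {access_ms:.2f} ms -> ~{iops:.0f} random IOPS")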
  9. Today
  10. Diagnosing server load

    Why am I not getting email notifications of your replies? (That comment is not directed at you, just thinking out loud.) Thanks for the tip; I'm pretty sure it's the NZBGet docker, as it's the only container doing anything each day. I also installed iotop to help me. Thanks for your help, I appreciate it.
  11. SSD Model String help request

    Binary file (standard input) matches
    Model Number: WDC WD30EFRX-68EUZN0
    Model Number: WDC WD20EARX-00PASB0
    Model Number: WDC WD30EFRX-68EUZN0
    Model Number: WDC WD20EARS-00MVWB0
    Model Number: ST3000VN000-1H4167
    Model Number: WDC WD20EARX-00PASB0
    Model Number: WDC WD30EFRX-68AX9N0
    Model Number: ST31000524AS
    Model Number: WDC WD30EFRX-68EUZN0
    Model Number: Crucial_CT525MX300SSD1
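The "Binary file (standard input) matches" line means grep decided the piped stream contained binary data and suppressed the matching lines. A small sketch of the workaround using grep's `-a` (treat-as-text) flag; the dummy data here is an assumption, with a NUL byte standing in for the binary content of a diagnostics dump:

```python
import subprocess

# A NUL byte in the stream makes grep treat stdin as binary and print only
# "Binary file (standard input) matches"; "-a" forces text mode so the
# matching lines themselves are printed.
data = b"Model Number: WDC WD30EFRX-68EUZN0\x00\nModel Number: Crucial_CT525MX300SSD1\n"
result = subprocess.run(["grep", "-a", "Model Number"],
                        input=data, capture_output=True)
print(result.stdout.decode(errors="replace"))
```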
  12. [Support] - Letsencrypt (Nginx)

    OK, so I found that UPnP was forcing a separate port 80 config, which caused the conflict. I cleared the conflict and now have a cert. In configuring the reverse proxy, any http(s):// request pulls the main HTML and not the service.
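That symptom usually means requests aren't matching any proxied `server`/`location` block, so nginx falls back to the default site and serves the main index page. A minimal sketch of a subdomain proxy for the LetsEncrypt (nginx) container; the hostname, upstream address, and port are illustrative assumptions, not the poster's actual config:

```nginx
# Hypothetical subdomain proxy: sonarr.example.com -> container on port 8989
server {
    listen 443 ssl;
    server_name sonarr.example.com;

    location / {
        proxy_pass http://192.168.1.10:8989;   # assumed upstream address
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

If no `server_name` matches the requested host, nginx uses its default server, which is consistent with always seeing the main HTML.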
  13. This generally means a write to the disk failed and it needs to be rebuilt.
  14. Removing share from cache

    Prefer will keep data on the cache until there is not enough space left, and then write to the array. Yes will use the cache until the mover runs. No will only use array disks.
  15. This is my first time using unRAID, and the estimated finish time seems crazy long: 10 days 3 hours 59 minutes.

    Background: I precleared 2x 4TB drives and a 2TB drive, with no SMART issues on any. I then set the parity to one of the 4TB drives, added the other 4TB and the 2TB drive as data drives, then hit start. It said that the parity was faulty and started the "Parity-Sync/Data-Rebuild in progress." I thought that was a little odd since I'd done the preclear on all the drives; I would have thought it would be smart enough to recognize the signature that the preclear put on and just start up in seconds. (It's supposed to be 'even-based parity', and it should see that all the drives have been zeroed out, which is the point of preclear, I thought.)

    Anyway, now it says that the estimated speed is 4.4 MB/sec. It's stuck at 98.3 GB (2.5%) and isn't really moving. When I first started it (over 8 hours ago) it estimated it would be done in 7-ish hours. Doing 98 GB in 8+ hours is excessively slow. I then thought maybe the fact that the drives weren't formatted was the problem, so I hit the 'format drives' checkbox and confirmed, but that seems to have done nothing (they still show as unmountable, no filesystem).

    Should I reboot it? What should I do? Only put one drive in at a time? The preclear showed only good things for all 3 drives. Please advise.
  16. 10Gbe weird behavior...

    You may further check whether a lot of packet errors are recorded by unRAID in "Dashboard" -> "Network" -> "System status (pull down - error)". I have a similar setup, but with an i5-3570K as the PC. With a single-session file transfer (SMB) also at ~300 MB/s, the overall speed increases with more simultaneous sessions, but the PC will stutter/lag. BTW, my unRAID NVMe / system has never had a share eject or crash. (Remark: an iperf test at both ends can reach 900 MB/s+ with no lag.)
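Alongside the Dashboard view, the same per-interface error counters can be read straight from sysfs on the unRAID box. A minimal sketch; `lo` is used only so this runs anywhere, substitute the 10GbE interface name (e.g. `eth1`):

```python
from pathlib import Path

# Per-interface error/drop counters exposed by the Linux kernel in sysfs.
iface = "lo"  # stand-in; use your 10GbE NIC, e.g. "eth1"
stats_dir = Path("/sys/class/net") / iface / "statistics"

counters = {name: int((stats_dir / name).read_text())
            for name in ("rx_errors", "tx_errors", "rx_dropped", "tx_dropped")}
for name, value in counters.items():
    print(f"{name}: {value}")
```

Counters that climb during a heavy transfer would point at the cable or NIC rather than the cache drive.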
  17. I installed a new SAS controller card; beforehand, all drives were plugged into the mainboard. I cross-flashed a Dell PERC H310 to LSI 9211 IT mode, plugged all my drives into it, and booted up. Well, it didn't like my SSD cache drive: it rejected it and put it in Unassigned Devices. So I plugged that back into the mainboard and it was recognized again. But now some of my dockers are not working properly, specifically PlexPy and the Tautulli app; it could be more, those are just the two I've noticed so far. I'm afraid it created a new appdata directory and files when it auto-started the array after the first reboot. How or what do I do to fix the dockers? Thanks in advance.
  18. Removing share from cache

    Just out of curiosity; I'm sure the information is around here somewhere, but I watched spaceone and he said to use prefer. My question is: what is the difference? Prefer seems to hold the data. Does yes only hold the data temporarily? Does no hold anything? I'm just trying to understand this. Thanks again for everything.
  19. 10Gbe weird behavior...

    Hi all! So I've got my unRAID 6.3.5 system up and running, and it's configured thusly (short profile below, full profile attached):

    Model: Custom
    M/B: Gigabyte Technology Co., Ltd. - Z77X-UP5 TH-CF
    CPU: Intel® Core™ i7-3770K CPU @ 3.50GHz
    HVM: Disabled
    IOMMU: Disabled
    Cache: 128 kB, 1024 kB, 8192 kB
    Memory: 32 GB (max. installable capacity 32 GB)
    Network: eth0: 1000 Mb/s, full duplex, mtu 1500; eth1: 10000 Mb/s, full duplex, mtu 1500
    Kernel: Linux 4.9.30-unRAID x86_64
    OpenSSL: 1.0.2k

    I've currently got two NICs, as you can see. The first is set up for general network access and is connected to the outside world through the standard gigabit switch in my office. The second NIC (10GbE) is configured with a static IP and is connected directly to my new iMac Pro, which has its own static address on the same network. This makes it possible for me to transfer files directly between the two computers at speeds in excess of 250 MB/s.

    I've tried using AFP, SMB, and NFS, but as soon as I push the network hard, the shares disappear. In the case of SMB and AFP shares, it causes the computer to crash. NFS doesn't cause a crash, but it does make things a little unhappy. If I turn off my cache drive (512 GB NVMe drive attached on the PCIe bus with a 4cx adaptor), the shares aren't ejected and don't suffer any issues, presumably because I'm not pushing the NIC too hard. If I turn the cache drive on (or to prefer or only, for example), the system gets all upset, eventually causing the share to eject.

    I'm making this post because I suspect the Occam's razor answer here is that I'm using a bad Ethernet cable and there's significant packet loss at high transfer speeds. BUT I want to make sure I'm not missing anything else. I'm not in the office for a few days, so it's going to take a while to get an answer. But if anyone has any suggestions/ideas/experience I haven't got, please share. Thanks in advance! Nikki.

    Profile.xml
  20. @csmccarron I followed everything and was able to find the server. It's only under the LAN tab; does this mean that the forwarding isn't working and my friends can't find the server? When I connect to it I get the following message. My game is also v276.42.
  21. [Support] - Organizr

    I've made some guides here
  22. FileBot containers

    @coppit Thanks for the follow up. It has been some time since I have messed with this with the holidays and work. I will try and get some more information for you in the coming days. I will first try to remove the container completely and reinstall. As you mentioned this may also be another problem. Thanks again!
  23. Removing share from cache

    I just started the mover. Once it gets done, I'll change it to no. I hope it works. I'll let you know. Thanks squid. I appreciate your help.
  24. Disk name changed

    OK, bear with me here! This was my first server, an R710, and I upgraded the RAID controller to an H700 to make it 4TB+ compatible. Then I wanted to try unRAID. I know it was wrong at the time, but I couldn't get the drives to show up individually (or at all, from memory). What I ended up doing through the RAID config was creating a virtual drive for each drive, and this made them show up in unRAID. Like I said, it probably isn't the ideal way, but it worked and it was a great way to test unRAID. Safe to say I was hooked on unRAID, so I just left everything as is, got everything installed, and never looked back until last night, when a drive failed.

    Now, I should have checked the wiki first, but I didn't. I shut down and pulled the drive. Turns out that wasn't the right drive. Put it back in and pulled another drive; that wasn't it either. Third time's a charm: I pulled it and then thought, hey, I should just put it back in in case it wasn't seated right or something. So I slid it back in and booted, and it's fine. The RAID config has now given these 3 drives new names for the virtual drives (the numbers at the end have changed, PERC_H700_00e9755f0db018542100ef70de90f648_36848f690de70ef00215418b00d5f75e9 for example), and now unRAID just shows up 'wrong' when I try to assign the drives. Also, one of the drives is the parity drive! I can't find a way to rename the drives to what they should be, nor can I force unRAID to just accept it, unless there's another way I'm not aware of.

    Do I mount these drives as new drives, or will that erase them? I just need the backups off it, because I only just started doing backups and hadn't set a spot to move them out of the array yet, of course. At this point I assume I'll have to wipe everything and start over. Any help would be appreciated.
  25. [Support] - Letsencrypt (Nginx)

    I'm still getting timeouts when it's trying to validate. It's so close, and I've absolutely verified that port 80 externally shows the ACME challenge server from my phone's LTE connection. Of course, that only runs for a few moments, but I definitely see it. No idea why it might be timing out, though. Domain is, subdomains are a few domains I want (plex, etc). Results are:

    IMPORTANT NOTES:
    - The following errors were reported by the server:
      Domain: davos.mysubdomain.duckdns.org
      Type: connection
      Detail: Fetching mysubdomain.duckdns.org
      Type: connection
      Detail: Fetching sonarr.mysubdomain.duckdns.org
      Type: connection
      Detail: Fetching radarr.mysubdomain.duckdns.org
      Type: connection
      Detail: Fetching plex.mysubdomain.duckdns.org
      Type: connection
      Detail: Fetching

    I verified that port 80 is not blocked by forwarding 80:80 on my router temporarily, and yep, there was my unRAID config. What's going on here? Like I said, I've confirmed the server itself is accessible on port 80 from an external connection, so the only thing I can think of is the paths are borked -- how would I go about validating that things are where they're supposed to be?
  26. [Support] - Letsencrypt (Nginx)

    If your provider blocks port 80, the only other way at the moment is the DNS challenge; I suggest you read the forum from this post. However, it requires you to use a DNS provider with an API, such as Cloudflare, and two scripts specific to your DNS provider.
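For reference, the mechanism those scripts wrap is certbot's DNS-01 challenge. A hypothetical sketch using certbot's Cloudflare plugin; the domain, file paths, and credentials are illustrative assumptions and this is not runnable without a real Cloudflare-managed domain:

```shell
# /config/cloudflare.ini -- credentials read by certbot's Cloudflare DNS plugin
#   dns_cloudflare_email   = you@example.com
#   dns_cloudflare_api_key = <your global API key>

# Issue a certificate via the DNS-01 challenge; ports 80/443 never need to be open,
# because validation happens through a TXT record the plugin creates via the API.
certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials /config/cloudflare.ini \
  -d example.com -d sonarr.example.com
```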

Copyright © 2005-2017 Lime Technology, Inc. unRAID® is a registered trademark of Lime Technology, Inc.