Tinlad

Members

  • Posts: 9
  • Joined
  • Last visited
  • Gender: Undisclosed

Tinlad's Achievements

Noob (1/14)

Reputation: 0

  1. Yep, just spent two hours trying to figure out why I lost all external DNS after updating before discovering this. My new, working config.gateway.json looks like this (there's a quick validation sketch after this list):
     {
         "service": {
             "dns": {
                 "forwarding": {
                     "options": [
                         "rebind-domain-ok=/unraid.net/",
                         "all-servers",
                         "cname=unifi.mydomain,unifi.local,unifi",
                         "server=1.1.1.1",
                         "server=1.0.0.1"
                     ]
                 }
             }
         }
     }
  2. OK, some progress. Two things I've done:
     1) Updated to 6.8-rc5, on the basis that there were some comments about 6.7 having issues with disk performance in certain scenarios. This bumped my parity check speed up to about 55 MB/s (from about 30 MB/s).
     2) NCQ was set to 'off' and nr_requests was set to 128. I've now set both of these to 'Auto' (see the sysfs note after this list). Parity check speed is now up to about 65 MB/s.
     I ran DiskSpeed after each change, and nothing has changed from my post above. I'm still only getting about 65 MB/s from my spinning disks and 150 MB/s from my SSDs. johnnie.black, I agree that this seems to be a controller-related issue. It seems highly unlikely to be a drive issue, given it's affecting all of them. I don't have another controller (or any other drives) available to test, unfortunately. Is there anything in the BIOS settings I should be looking for? Anything in the UnRAID settings or tunables that would affect every disk on the controller?
  3. Thanks for your reply johnnie.black. I've run the DiskSpeed docker and got the results below (x2 repeats). I stopped all my other dockers, there were no VMs running, and nothing on the network should have been accessing the server and competing for throughput. And here's the controller benchmark: Disk 1 is the 1 TB Red. That seems low to me...
  4. Hello. Earlier in the year I swapped out the mobo/CPU/RAM in my server:
     - From: Core i5-4570S (Asus H87I-Plus), 16GB
     - To: Xeon D-1520 (AsrockRack D1520D4I), 16GB ECC
     The disks remain identical:
     - 2TB WD Red parity
     - 2TB + 1TB WD Reds data
     - 2x 240GB OCZ SSD cache pool
     They are now connected via a mini-SAS to 4x SATA cable, plus one directly in a SATA port on the board. All are reporting that they are connected at SATA 3.0 (6 Gbps). I noticed today that, since this change, parity checks have been taking about three times as long! Below is my history - you can clearly see when I changed the hardware in April.
     2019-10-01, 16:16:11   16 hr, 16 min, 9 sec    34.2 MB/s    OK   0
     2019-09-01, 17:44:43   17 hr, 44 min, 42 sec   31.3 MB/s    OK   0
     2019-08-01, 16:28:53   16 hr, 28 min, 52 sec   33.7 MB/s    OK   0
     2019-07-01, 16:24:50   16 hr, 24 min, 49 sec   33.9 MB/s    OK   0
     2019-06-01, 16:13:10   16 hr, 13 min, 9 sec    34.3 MB/s    OK   0
     2019-05-01, 16:40:44   16 hr, 40 min, 43 sec   33.3 MB/s    OK   0
     2019-04-01, 05:30:55   5 hr, 30 min, 54 sec    100.8 MB/s   OK   0
     2019-02-01, 05:30:07   5 hr, 30 min, 6 sec     101.0 MB/s   OK   0
     2019-01-01, 05:30:06   5 hr, 30 min, 5 sec     101.0 MB/s   OK   0
     2018-12-01, 05:30:06   5 hr, 30 min, 5 sec     101.0 MB/s   OK   0
     2018-11-01, 05:30:33   5 hr, 30 min, 32 sec    100.9 MB/s   OK   0
     I've tried to figure out whether it's just a parity check issue or a more general disk issue. I'm not a Linux guru, so I'm not sure if this is the best way to test this, but the outputs of the following dd commands seem to indicate I'm only getting 25-30 MB/s write speed on my data disks (and a much more reasonable 175 MB/s to my cache pool)? (See the raw-read note after this list.)
     root@Enthalpy:~# dd if=/dev/zero of=/mnt/disk1/test bs=1G count=20 oflag=dsync
     20+0 records in
     20+0 records out
     21474836480 bytes (21 GB, 20 GiB) copied, 858.472 s, 25.0 MB/s
     root@Enthalpy:~# dd if=/dev/zero of=/mnt/disk2/test bs=1G count=20 oflag=dsync
     20+0 records in
     20+0 records out
     21474836480 bytes (21 GB, 20 GiB) copied, 715.322 s, 30.0 MB/s
     root@Enthalpy:~# dd if=/dev/zero of=/mnt/cache/test bs=1G count=10 oflag=dsync
     10+0 records in
     10+0 records out
     10737418240 bytes (11 GB, 10 GiB) copied, 61.4057 s, 175 MB/s
     Based on some Googling I've tried disabling hot plugging and SATA Aggressive Link Power Management in the BIOS, but it's not made any difference. Diagnostics are attached. I'd be grateful for any ideas, and am happy to try further diagnostics. Thanks!
     enthalpy-diagnostics-20191101-1035.zip
  5. I also have this behaviour (also having upgraded from 6.1.9 to 6.2b21). This is both on a pre-existing W10 VM I had and on a new W10 VM I created post-upgrade. Don't have access to the machine at present but can provide logs/diagnostics if required later.
  6. As previously mentioned in this thread, I have this hardware providing passthrough. If I remember correctly, VT-d and VT-x are separate entries in the BIOS. They're in completely different sections though - one is in 'System Agent' and the other in 'CPU' settings. (There's a quick way to confirm both from Linux; see the sketch after this list.)
  7. This board features the ICH7 southbridge, which I believe means that you'll run into the 2TB volume limit.
  8. I'm not using the Q25 (I'm not the OP, sorry if my post made it seem like I was!) - I'm using a Fractal Design Node 304. I chose the 750 Ti specifically for its compact dimensions (<7" long) and low TDP - it gets all its power from the PCIe slot without an additional cable and runs very cool and quiet. It fits in the case with lots of room to spare. I'd expect it to work very well in the Q25 too. I have the VM running Steam with In-Home Streaming enabled, so I can play on my laptop with the server doing all the hard work!
  9. Posting this here just as a record, as I couldn't find a definite answer elsewhere: I've got the H87I-Plus paired with an i5 4570S (which is VT-d capable, unlike the i3 being discussed here) and can confirm that device passthrough works with this motherboard. Currently passing a 750ti through to a Windows 8.1 VM on 6.0-rc5. I couldn't find any clear information online on whether it would work with the H87 chipset (and this board specifically, despite VT-d appearing in the BIOS), so I just wanted to get this information online for anyone in the same position in the future! Sorry if there's a more appropriate place for it. (See the IOMMU group listing sketch after this list.)
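
A note on the config.gateway.json in post 1: a single syntax error in that file can take down DNS on the gateway after a provision, so it's worth validating the JSON before the controller re-applies it. A minimal sketch, assuming jq or Python 3 is available on whatever machine holds the file; the filename is the only thing taken from the post itself:

    # Either command exits non-zero and prints an error if the JSON is malformed
    jq empty config.gateway.json && echo "JSON looks valid"
    python3 -m json.tool config.gateway.json > /dev/null && echo "JSON looks valid"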
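
On the NCQ and nr_requests tunables mentioned in post 2: these map onto standard Linux block-device settings, so they can be inspected from the console as a sanity check. A rough sketch, with sdb used purely as an example device name; Unraid's own Disk Settings may re-apply its values when the array starts:

    cat /sys/block/sdb/device/queue_depth   # 1 effectively disables NCQ; >1 enables it
    cat /sys/block/sdb/queue/nr_requests    # request queue size for the device
    # Experimenting by hand (as root); the values below are illustrative, not recommendations
    echo 31  > /sys/block/sdb/device/queue_depth
    echo 128 > /sys/block/sdb/queue/nr_requests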
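
Regarding the dd tests in post 4: writes to /mnt/diskN go through Unraid's parity calculation, so low write numbers there don't necessarily point at the controller. A read-only test against the raw device bypasses both parity and the filesystem and isolates drive-plus-controller throughput. A sketch only, with example device names and assuming hdparm is present; check which /dev/sdX is which before running anything:

    # Read-only, so safe for array disks; compare against the drive's rated sequential speed
    hdparm -t /dev/sdb
    dd if=/dev/sdb of=/dev/null bs=1M count=4096 iflag=direct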
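
On the separate VT-x and VT-d BIOS entries in post 6: both can also be confirmed from a running Linux host, which helps when the BIOS menus are ambiguous. A small sketch using standard kernel interfaces, nothing board-specific assumed:

    grep -cE 'vmx|svm' /proc/cpuinfo     # non-zero: VT-x / AMD-V is exposed to the OS
    dmesg | grep -iE 'dmar|iommu'        # look for lines indicating the IOMMU (VT-d) is enabled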
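
Following on from post 9: one way to confirm that a board's VT-d implementation is usable for passthrough is to list the IOMMU groups and check that the GPU sits in its own (or at least a sensible) group. This is a generic sketch of the commonly used loop, not something specific to the H87I-Plus:

    for g in /sys/kernel/iommu_groups/*; do
      echo "IOMMU group ${g##*/}:"
      for d in "$g"/devices/*; do
        lspci -nns "${d##*/}"    # show each PCI device in the group
      done
    done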