TimSmall

  1. I haven't had any problems personally, whereas I have had trouble with Marvell and LSI (and the occasional bug with the 4 port PCI-X Silicon Image chips, but otherwise the Silicon Image chips have been good). ASMedia are owned by Asus, and they also design the integrated SATA controllers on newer AMD chipsets (see Q6 at https://www.anandtech.com/show/11177/making-amd-tick-a-very-zen-interview-with-dr-lisa-su-ceo ). I have even considered making my own 16 port SATA cards using 8 of the ASM1061 chips with a PCIe switch chip onboard for a large commercial project. It's a shame that they only do a 2 port standalone version. This open source NAS design chose the ASMedia chips: https://lwn.net/Articles/743609/
  2. I've found these (IOCrest/Syba SY-PEX40039 - ASM1061 chipset, AHCI) to be reliable - on par with the Silicon Image 3132, and better than the Marvell cards (I've had various reliability problems with Marvell-based cards, which appear to have been down to bugs in the chip implementations). You can also get no-name cards using the same chip for about half that price (avoid the ones with both eSATA ports and internal SATA ports on them), e.g. https://www.ebay.co.uk/itm/323172773980 (but I've not tried these, only the IOCrest-branded ones).
  3. Quoting the original question: "I'm after picking up one of these off eBay for £25 (I think that's a good deal, I can't find much to compare it to). I know it's supposedly plug and play, but I was just wondering, is this still true? Or if anyone has this card, do you have any tips when using it?" I eventually gave up on these after intermittent problems, and the same goes for all Marvell controllers (both AHCI and not), as I couldn't get 100% reliability out of any of them. The only controllers I currently use are: 1. Intel and AMD AHCI motherboard controllers; 2. Silicon Image 3132 (had some trouble with interrupt delivery on the 3124s, but generally less trouble than the Marvells); 3. ASMedia 1062. I even considered trying to commission the design of a card hosting multiple ASMedia chips (they're very cheap, but only 2-port) and a PCIe bridge chip to make an 8+ port card, but I don't run enough storage servers to make this practical. Newer Marvell chips may be better (but the ones I've tried have been so consistently bad, I've not bothered trying). If I were Backblaze or Amazon, I would have done this already! (Perhaps they have.) Tim.
  4. In limited testing, I've found ASMedia ASM1061-based cards to be quick - given the limitation of PCIe 2.0 x1 - and (most importantly) reliable (or at least better than all the Marvell ones I've used). Has anyone seen any problems with these? I think there's scope to create an open-design controller card which uses a bunch of these and a PCIe switch chip to create a multi-port card - e.g. 12-port SATA with a PCIe 2.0 x4 host interface. If the price were reasonably low (e.g. sub $200), would anyone here be interested in such a design?
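The bandwidth arithmetic behind the 12-port idea above can be sketched as follows (a rough back-of-envelope sketch; the ~500 MB/s usable-per-lane figure for PCIe 2.0 after 8b/10b encoding is my approximation, not a measured number):

```shell
# Back-of-envelope bandwidth for a hypothetical 12-port, PCIe 2.0 x4 card.
# PCIe 2.0 runs at 5 GT/s per lane; 8b/10b encoding leaves roughly
# 500 MB/s of usable bandwidth per lane.
LANES=4
PER_LANE_MB=500
PORTS=12
TOTAL_MB=$((LANES * PER_LANE_MB))      # total host-side bandwidth
PER_PORT_MB=$((TOTAL_MB / PORTS))      # worst case, all ports streaming at once
echo "${TOTAL_MB} MB/s total, ~${PER_PORT_MB} MB/s per port"
```

So even with every port streaming simultaneously, each port would still see more than a typical spinning disk can sustain.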
  5. I've found the Marvell-based cards (e.g. RocketRAID) to be less reliable than the 3132 under Linux. The only problems I'm aware of with the 3132s are that some implementations are cheap/poor (e.g. I'd steer clear of all the ones I've seen which offer the ability to switch between SATA and eSATA using a jumper). I have debugged the problems with the Marvell AHCI cards extensively (I'm a developer with a couple of drivers in the Linux kernel - very much a 3rd-tier kernel developer, but one nonetheless), and came to the conclusion that there were design faults in the chips themselves on more than one card (at the very least, you'll need to disable NCQ). I admin a JBOD server which I built for a client with 27 drives, and I've replaced all of the AHCI Marvell cards with Silicon Image 3132s and 3124s. What problems are you seeing with the 3132s? If you have anything which is reproducible, then post to the linux-ide mailing list... Cheers, Tim.
  6. I don't trust any of the AHCI Marvell stuff (e.g. RR 620) - in my experience Marvell's tech guys don't answer emails from Linux kernel developers if there is any mention of bugs in their chips, and you need to sign an NDA to see the errata list for their chipsets (in contrast to the full-disclosure policy of Intel etc.). Sadly, the only stable add-in cards I've found are the Silicon Image ones. If you do go for a Marvell AHCI card, then you'll need to disable NCQ support, I think. http://marc.info/?l=linux-ide&m=130768923727513&w=2
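For what it's worth, NCQ can be switched off per-disk at runtime by dropping the queue depth to 1. A minimal sketch (the `SYSFS_ROOT` override and the `disable_ncq` name are my additions so it can be exercised outside a real /sys; on a live system it needs root):

```shell
# disable_ncq: set a disk's queue depth to 1, which disables NCQ.
# Pass the kernel block-device name, e.g. "sdb".
disable_ncq() {
    sysfs="${SYSFS_ROOT:-/sys}"   # override point for testing (my assumption)
    echo 1 > "$sysfs/block/$1/device/queue_depth"
}
# On a real box: disable_ncq sdb
# Or disable NCQ on all ports at boot with the libata.force=noncq kernel option.
```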
  7. See my previous post on this. The 3132 only has two ports - the esata+sata cards have sets of jumpers to select which port is used (int or ext) for each of the two ports. The actual sata signal goes through these jumpers and gets degraded as it does so. As a result, I'd recommend steering clear of these "designs". Tim.
  8. WD ship the Marvell 88SE9125-based RocketRAID 620WDA with their 3TB drives. They can be had at around $25 US, I think. I have one, and initial testing shows good reliability, but I've (accidentally - thanks for a crappy flash script, Marvell) put a non-Highpoint firmware on it. Max throughput is a lot higher than the 3132s. I'll report back here if I have any further conclusions. Tim.
  9. Those jumpers actually divert the SATA signals themselves (i.e. the SATA data goes through those jumpers). That's a really bogus way of doing an internal/external switch, and it leads to signal degradation which can result in data loss. In fact I have a card which does this myself (it looks identical to the pic), and it is unreliable except with very short SATA cables. Probably OK if you remove the jumpers and pins and solder it, but I'd recommend just getting a similar, less screwed-up card design instead, e.g. http://cgi.ebay.co.uk/PCIE-PCI-E-SATA-2-PORT-RAID-CARD-SIL3132-WORK-WINDOWS-7-/160595468201 (not that I've ever bought from that supplier myself - I've just looked at the pic, and it doesn't have those stupid jumpers). Tim.
  10. Personally, I've had loads of trouble from LSI cards, and really poor support from them. YMMV.
  11. smartctl -x /dev/sdb etc. will often show up PHY errors caused by bad cables... e.g.:

      SATA Phy Event Counters (GP Log 0x11)
      ID      Size  Value  Description
      0x0001  2     0      Command failed due to ICRC error
      0x0002  2     0      R_ERR response for data FIS
      0x0003  2     0      R_ERR response for device-to-host data FIS
      0x0004  2     0      R_ERR response for host-to-device data FIS
      0x0005  2     0      R_ERR response for non-data FIS
      0x0006  2     0      R_ERR response for device-to-host non-data FIS
      0x0007  2     0      R_ERR response for host-to-device non-data FIS
      0x000a  2     8      Device-to-host register FISes sent due to a COMRESET
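A quick way to pick out the non-zero counters from that output (a sketch; `phy_errors` is a name I've made up, and it just matches the `0x...` counter lines that `smartctl -x` prints):

```shell
# phy_errors: read `smartctl -x` output on stdin and print any SATA PHY
# event counter line with a non-zero value. Non-zero ICRC / R_ERR counts
# usually point at a bad cable or connector; a handful of COMRESET FISes
# (ID 0x000a) are normal.
phy_errors() {
    awk '/^0x00/ && $3 != 0 { print }'
}
# Usage: smartctl -x /dev/sdb | phy_errors
```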
  12. Yep. It'd be good to see something using a few more PCIe lanes too. The Marvell guy doesn't state how many SATA ports this has, but I'm guessing it'll be a 4-port 6Gbps, PCIe 2.0 two-lane (x2) chip. You could of course use third-party PCI Express switch chips such as those made by PLX to put multiple controller chips on the one card (like HighPoint have done with two 2-port controllers on an x4 card). You could e.g. put 4 of these Marvell chips on one card and give it a PCIe x8 interface (via a suitable PLX switch chip). Assuming they are 4-port, you'd then have 16 ports on one card. It would of course be possible to go up to a 32-port x16 card by the same technique. Tim.
  13. Run lspci -vv as root. LnkSta shows the current link status, LnkCap shows the max the device is capable of... Tim.
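To avoid scrolling through the full lspci -vv dump, those two lines can be filtered out directly (a trivial sketch; `link_status` is my name for it):

```shell
# link_status: filter `lspci -vv` output down to the PCIe link lines.
# LnkCap is the maximum the device can negotiate; LnkSta is what the link
# actually trained to. A card running at a lower speed or width than its
# LnkCap is worth investigating (slot wiring, riser, signal problems).
link_status() {
    grep -E 'LnkCap:|LnkSta:'
}
# Usage: sudo lspci -vv | link_status
```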
  14. Thanks for the recommendation - I'm using 2TB Hitachi 5K3000s exclusively. The CPU is a 34w TDP Core i3 3100T, so current draw outside of the drives is not going to be much. Tim.
  15. Hmm, that's a PITA, and does make PUIS pretty much useless. It'd be nice if the kernel dealt with that more gracefully - e.g. by having a mechanism to set the maximum number of drives which are allowed to be brought out of a spun-down state at the same time. That'd require a few days of coding, which I don't have time for at the moment - and I'm not going to get much in the way of savings on a single server either... Actually, on this box I plan on having multiple arrays - all the disks in a particular array will be spun up together, but the savings will still be significant, as you say... Yep, I've dealt with such issues in the past, and was planning on doing some supply testing with a current clamp and a digital scope. Incidentally, I know that some WD drives have an option to limit the max spin-up current, but I can't find any way to do this with Hitachi drives (albeit I did find a Hitachi patent mentioning this feature) - unless anyone knows any different? Tim.
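In the meantime, staggered spin-up can be approximated from userspace by touching each drive in turn, something like this sketch (`spin_up_all`, the device names, and the 8-second delay are my assumptions; any small read will pull a drive out of standby):

```shell
# spin_up_all: read one sector from each named device in sequence, with a
# pause between drives, so the motors don't all draw inrush current at once.
spin_up_all() {
    for d in "$@"; do
        # A single small read forces the drive out of a spun-down state.
        dd if="$d" of=/dev/null bs=512 count=1 2>/dev/null || return 1
        sleep "${SPINUP_DELAY:-8}"   # time for the motor to reach full speed
    done
}
# Example (device names are placeholders): spin_up_all /dev/sdb /dev/sdc /dev/sdd
```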