Zonediver

After changing the Cache-SSD, errors in syslog


Posted (edited)

Hi folks,

I swapped my cache SSD yesterday for a new Samsung 860 PRO (ata16), and now I have errors in the syslog:


 
Jun 10 20:01:03 unraid kernel: ata16.00: exception Emask 0x0 SAct 0x7fff00 SErr 0x0 action 0x6 frozen
Jun 10 20:01:03 unraid kernel: ata16.00: failed command: SEND FPDMA QUEUED
Jun 10 20:01:03 unraid kernel: ata16.00: failed command: SEND FPDMA QUEUED
Jun 10 20:01:03 unraid kernel: ata16.00: failed command: SEND FPDMA QUEUED
Jun 10 20:01:03 unraid kernel: ata16.00: failed command: SEND FPDMA QUEUED
Jun 10 20:01:03 unraid kernel: ata16.00: failed command: SEND FPDMA QUEUED
Jun 10 20:01:03 unraid kernel: ata16.00: failed command: SEND FPDMA QUEUED
Jun 10 20:01:03 unraid kernel: ata16.00: failed command: SEND FPDMA QUEUED
Jun 10 20:01:03 unraid kernel: ata16.00: failed command: SEND FPDMA QUEUED
Jun 10 20:01:03 unraid kernel: ata16.00: failed command: SEND FPDMA QUEUED
Jun 10 20:01:03 unraid kernel: ata16.00: failed command: WRITE FPDMA QUEUED
Jun 10 20:01:03 unraid kernel: ata16.00: failed command: SEND FPDMA QUEUED
Jun 10 20:01:03 unraid kernel: ata16.00: failed command: SEND FPDMA QUEUED
Jun 10 20:01:03 unraid kernel: ata16.00: failed command: SEND FPDMA QUEUED
Jun 10 20:01:03 unraid kernel: ata16.00: failed command: SEND FPDMA QUEUED
Jun 10 20:01:03 unraid kernel: ata16.00: failed command: SEND FPDMA QUEUED
Jun 10 20:01:03 unraid kernel: ata16: hard resetting link

This error block appeared four times.

Is there a known issue with this SSD?

 

From tomshardware.com:

In the past, Samsung's SATA SSDs couldn't execute queued TRIM commands in a Linux environment. The company fixed the issue in the 860 series. The 860 Pro is also the first consumer SSD advertised for use in a NAS environment.
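As a side note, what TRIM features a drive actually advertises can be read from its IDENTIFY data with hdparm. This is just a sketch; `/dev/sdb` is a placeholder for the cache SSD:

```shell
# List the TRIM-related capability lines from the drive's IDENTIFY data.
# /dev/sdb is a placeholder; run as root against the actual cache SSD.
hdparm -I /dev/sdb 2>/dev/null | grep -i "trim" \
  || echo "hdparm unavailable or no such device"
```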

 

My trim scheduler runs at 20:00, and the first error happened at 20:00. I just ran a manual test and trim is working, so I have no idea what this error is...
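For reference, the manual test was just a one-off TRIM pass from the console. Sketch only; `/mnt/cache` is the usual unRAID cache mount point and may differ on other setups:

```shell
# Run an explicit TRIM pass on the cache filesystem and report how much
# space was trimmed (requires root; the mount point is an assumption).
fstrim -v /mnt/cache 2>&1 || echo "fstrim failed: not mounted or TRIM unsupported"
```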

Edited by Zonediver


Try googling "failed command: SEND FPDMA QUEUED". You might want to include the SSD maker and model to limit the number of hits.


Try a different SATA cable; Samsung SSDs are very picky and prefer high-quality cables.


OK... the SATA cable again... strange - the cable on this SSD is only 13 months old. Since unRAID v6, these "cable issues" have been a recurring oddity, but OK, I'll change it and see what happens next.

Thanks for your help.

3 minutes ago, Zonediver said:

Since unRAID v6, these "cable issues" have been a recurring oddity

It has nothing to do with v6; it has to do with poor-quality cables, and Samsung SSDs in particular are very picky.

Posted (edited)
4 minutes ago, johnnie.black said:

It has nothing to do with v6; it has to do with poor-quality cables, and Samsung SSDs in particular are very picky.

 

I know - it's strange that cable quality has seemingly dropped so much over the last seven years. The question now is: what counts as a good-quality cable?

But I suspect no one can answer that...

If someone knows a high-quality cable manufacturer, please let me know ;-)

Edited by Zonediver

1 hour ago, Zonediver said:

I know - it's strange that cable quality has seemingly dropped so much over the last seven years. The question now is: what counts as a good-quality cable?

 

It isn't the cable quality that has dropped; it's the requirements on the cables that have increased.

 

The original SATA rev 1.0 cables needed to handle 1.5 Gbit/s transfers.

 

Then the SATA rev 2.0 standard introduced a higher transfer rate of 3 Gbit/s.

 

And now the cables need to handle SATA rev 3.0, which runs at 6 Gbit/s when connecting an SSD.

 

And since the power needed increases with the signal frequency, they can't use very high voltages for the signals - with high voltages (which handle external noise better), the driver circuit feeding the cable would consume huge amounts of power. So the signals use only a small voltage swing.

Posted (edited)

Ok... it happened again - see this:

 

Jun 11 20:00:35 unraid kernel: ata16.00: exception Emask 0x0 SAct 0x7fff8001 SErr 0x0 action 0x6 frozen
Jun 11 20:00:35 unraid kernel: ata16.00: failed command: WRITE FPDMA QUEUED
Jun 11 20:00:35 unraid kernel: ata16.00: cmd 61/b0:00:d8:33:0d/01:00:17:00:00/40 tag 0 ncq dma 221184 out
Jun 11 20:00:35 unraid kernel:         res 40/00:ff:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Jun 11 20:00:35 unraid kernel: ata16.00: status: { DRDY }
Jun 11 20:00:35 unraid kernel: ata16.00: failed command: SEND FPDMA QUEUED
Jun 11 20:00:35 unraid kernel: ata16.00: cmd 64/01:78:00:00:00/00:00:00:00:00/a0 tag 15 ncq dma 512 out
Jun 11 20:00:35 unraid kernel:         res 40/00:01:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
Jun 11 20:00:35 unraid kernel: ata16.00: status: { DRDY }
Jun 11 20:00:35 unraid kernel: ata16.00: failed command: SEND FPDMA QUEUED
Jun 11 20:00:35 unraid kernel: ata16.00: cmd 64/01:80:00:00:00/00:00:00:00:00/a0 tag 16 ncq dma 512 out
Jun 11 20:00:35 unraid kernel:         res 40/00:01:01:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
Jun 11 20:00:35 unraid kernel: ata16.00: status: { DRDY }
Jun 11 20:00:35 unraid kernel: ata16.00: failed command: SEND FPDMA QUEUED
Jun 11 20:00:35 unraid kernel: ata16.00: cmd 64/01:88:00:00:00/00:00:00:00:00/a0 tag 17 ncq dma 512 out
Jun 11 20:00:35 unraid kernel:         res 40/00:00:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
Jun 11 20:00:35 unraid kernel: ata16.00: status: { DRDY }
Jun 11 20:00:35 unraid kernel: ata16.00: failed command: SEND FPDMA QUEUED
Jun 11 20:00:35 unraid kernel: ata16.00: cmd 64/01:90:00:00:00/00:00:00:00:00/a0 tag 18 ncq dma 512 out
Jun 11 20:00:35 unraid kernel:         res 40/00:01:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
Jun 11 20:00:35 unraid kernel: ata16.00: status: { DRDY }
Jun 11 20:00:35 unraid kernel: ata16.00: failed command: SEND FPDMA QUEUED
Jun 11 20:00:35 unraid kernel: ata16.00: cmd 64/01:98:00:00:00/00:00:00:00:00/a0 tag 19 ncq dma 512 out
Jun 11 20:00:35 unraid kernel:         res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Jun 11 20:00:35 unraid kernel: ata16.00: status: { DRDY }
Jun 11 20:00:35 unraid kernel: ata16.00: failed command: SEND FPDMA QUEUED
Jun 11 20:00:35 unraid kernel: ata16.00: cmd 64/01:a0:00:00:00/00:00:00:00:00/a0 tag 20 ncq dma 512 out
Jun 11 20:00:35 unraid kernel:         res 40/00:ff:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Jun 11 20:00:35 unraid kernel: ata16.00: status: { DRDY }
Jun 11 20:00:35 unraid kernel: ata16.00: failed command: SEND FPDMA QUEUED
Jun 11 20:00:35 unraid kernel: ata16.00: cmd 64/01:a8:00:00:00/00:00:00:00:00/a0 tag 21 ncq dma 512 out
Jun 11 20:00:35 unraid kernel:         res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Jun 11 20:00:35 unraid kernel: ata16.00: status: { DRDY }
Jun 11 20:00:35 unraid kernel: ata16.00: failed command: SEND FPDMA QUEUED
Jun 11 20:00:35 unraid kernel: ata16.00: cmd 64/01:b0:00:00:00/00:00:00:00:00/a0 tag 22 ncq dma 512 out
Jun 11 20:00:35 unraid kernel:         res 40/00:01:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
Jun 11 20:00:35 unraid kernel: ata16.00: status: { DRDY }
Jun 11 20:00:35 unraid kernel: ata16.00: failed command: SEND FPDMA QUEUED
Jun 11 20:00:35 unraid kernel: ata16.00: cmd 64/01:b8:00:00:00/00:00:00:00:00/a0 tag 23 ncq dma 512 out
Jun 11 20:00:35 unraid kernel:         res 40/00:01:01:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
Jun 11 20:00:35 unraid kernel: ata16.00: status: { DRDY }
Jun 11 20:00:35 unraid kernel: ata16.00: failed command: SEND FPDMA QUEUED
Jun 11 20:00:35 unraid kernel: ata16.00: cmd 64/01:c0:00:00:00/00:00:00:00:00/a0 tag 24 ncq dma 512 out
Jun 11 20:00:35 unraid kernel:         res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Jun 11 20:00:35 unraid kernel: ata16.00: status: { DRDY }
Jun 11 20:00:35 unraid kernel: ata16.00: failed command: SEND FPDMA QUEUED
Jun 11 20:00:35 unraid kernel: ata16.00: cmd 64/01:c8:00:00:00/00:00:00:00:00/a0 tag 25 ncq dma 512 out
Jun 11 20:00:35 unraid kernel:         res 40/00:ff:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Jun 11 20:00:35 unraid kernel: ata16.00: status: { DRDY }
Jun 11 20:00:35 unraid kernel: ata16.00: failed command: SEND FPDMA QUEUED
Jun 11 20:00:35 unraid kernel: ata16.00: cmd 64/01:d0:00:00:00/00:00:00:00:00/a0 tag 26 ncq dma 512 out
Jun 11 20:00:35 unraid kernel:         res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Jun 11 20:00:35 unraid kernel: ata16.00: status: { DRDY }
Jun 11 20:00:35 unraid kernel: ata16.00: failed command: SEND FPDMA QUEUED
Jun 11 20:00:35 unraid kernel: ata16.00: cmd 64/01:d8:00:00:00/00:00:00:00:00/a0 tag 27 ncq dma 512 out
Jun 11 20:00:35 unraid kernel:         res 40/00:01:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
Jun 11 20:00:35 unraid kernel: ata16.00: status: { DRDY }
Jun 11 20:00:35 unraid kernel: ata16.00: failed command: SEND FPDMA QUEUED
Jun 11 20:00:35 unraid kernel: ata16.00: cmd 64/01:e0:00:00:00/00:00:00:00:00/a0 tag 28 ncq dma 512 out
Jun 11 20:00:35 unraid kernel:         res 40/00:01:01:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
Jun 11 20:00:35 unraid kernel: ata16.00: status: { DRDY }
Jun 11 20:00:35 unraid kernel: ata16.00: failed command: READ FPDMA QUEUED
Jun 11 20:00:35 unraid kernel: ata16.00: cmd 60/08:e8:d8:09:fc/00:00:16:00:00/40 tag 29 ncq dma 4096 in
Jun 11 20:00:35 unraid kernel:         res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Jun 11 20:00:35 unraid kernel: ata16.00: status: { DRDY }
Jun 11 20:00:35 unraid kernel: ata16.00: failed command: WRITE FPDMA QUEUED
Jun 11 20:00:35 unraid kernel: ata16.00: cmd 61/b8:f0:20:2e:0d/05:00:17:00:00/40 tag 30 ncq dma 749568 out
Jun 11 20:00:35 unraid kernel:         res 40/00:01:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Jun 11 20:00:35 unraid kernel: ata16.00: status: { DRDY }
Jun 11 20:00:35 unraid kernel: ata16: hard resetting link
Jun 11 20:00:36 unraid kernel: ata16: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Jun 11 20:00:36 unraid kernel: ata16.00: supports DRM functions and may not be fully accessible
Jun 11 20:00:36 unraid kernel: ata16.00: supports DRM functions and may not be fully accessible
Jun 11 20:00:36 unraid kernel: ata16.00: configured for UDMA/133
Jun 11 20:00:36 unraid kernel: ata16: EH complete

...NCQ??? It's off - the tunable (enable NCQ) is set to No.

Edited by Zonediver

2 minutes ago, Zonediver said:

...NCQ and DRM???

 

NCQ is Native Command Queueing.

It lets the host send multiple commands to the drive at once.

The drive may then reorder the commands and handle them in the most efficient order,

then send back an ack/nack for each individual queued command.
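A quick way to see whether NCQ is actually in effect on a standard Linux system is the per-device queue depth the kernel exposes via sysfs (`sdb` here is a placeholder device name):

```shell
# Print the effective NCQ queue depth for a drive. A depth of 1 means
# NCQ is effectively off; 31/32 means it is active. Writing 1 to this
# file (as root) disables NCQ at runtime.
dev=sdb
qd="/sys/block/$dev/device/queue_depth"
if [ -r "$qd" ]; then
  cat "$qd"
else
  echo "no SATA device $dev on this system"
fi
```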

 

DRM is Digital Rights Management.

I don't think it's relevant here.

I'd read it as a note that if there is DRM-protected content on the drive, you haven't supplied an encryption key that would let you access it.

Posted (edited)
5 minutes ago, pwm said:

 

NCQ is Native Command Queueing.

It lets the host send multiple commands to the drive at once.

The drive may then reorder the commands and handle them in the most efficient order,

then send back an ack/nack for each individual queued command.

 

DRM is Digital Rights Management.

I don't think it's relevant here.

I'd read it as a note that if there is DRM-protected content on the drive, you haven't supplied an encryption key that would let you access it.

 

I know all this - but NCQ is off and I don't use DRM...

Edited by Zonediver

4 minutes ago, Zonediver said:

 

I know all this - but NCQ is off and I don't use DRM...

 

Have you actually turned off NCQ? If you use AHCI, NCQ is normally on without you enabling anything.

 

And I believe the DRM line is just informational - that the drive supports it.

1 minute ago, pwm said:

 

Have you actually turned off NCQ? If you use AHCI, NCQ is normally on without you enabling anything.

 

And I believe the DRM line is just informational - that the drive supports it.

 

NCQ is OFF

12 minutes ago, Zonediver said:

 

NCQ is OFF

 

Where have you verified this?

 

I only found a very old syslog you posted last year, and I don't know if it's from the same system.

But at boot, it logged:

Aug  5 02:11:21 unraid kernel: ata11.00: 5860533168 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
Aug  5 02:11:21 unraid kernel: ata12.00: 5860533168 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
Aug  5 02:11:21 unraid kernel: ata14.00: 5860533168 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
Aug  5 02:11:21 unraid kernel: ata9.00: 7814037168 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
Aug  5 02:11:21 unraid kernel: ata13.00: 7814037168 sectors, multi 16: LBA48 NCQ (depth 31/32), AA

So that system, at that time, did use NCQ.
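For the record, the NCQ depth negotiated for the current boot can be pulled straight from the log the same way (the syslog path is as on unRAID; adjust if your distribution logs elsewhere):

```shell
# Show what NCQ depth the kernel negotiated for each ATA device at boot.
grep -i "NCQ (depth" /var/log/syslog 2>/dev/null \
  || echo "no NCQ lines found in /var/log/syslog"
```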

Posted (edited)
5 minutes ago, pwm said:

 

Where have you verified this?

 

I only found a very old syslog you posted last year, and I don't know if it's from the same system.

But at boot, it logged:


Aug  5 02:11:21 unraid kernel: ata11.00: 5860533168 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
Aug  5 02:11:21 unraid kernel: ata12.00: 5860533168 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
Aug  5 02:11:21 unraid kernel: ata14.00: 5860533168 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
Aug  5 02:11:21 unraid kernel: ata9.00: 7814037168 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
Aug  5 02:11:21 unraid kernel: ata13.00: 7814037168 sectors, multi 16: LBA48 NCQ (depth 31/32), AA

So that system, at that time, did use NCQ.

 

It's set to "No" - but maybe something is wrong...

 

NCQ.jpg

Edited by Zonediver


OK. But the drive boots with NCQ enabled.

The question is how soon the unRAID setting takes effect and turns off NCQ.

And what happens to the NCQ state after a link reset.

1 minute ago, pwm said:

OK. But the drive boots with NCQ enabled.

The question is how soon the unRAID setting takes effect and turns off NCQ.

And what happens to the NCQ state after a link reset.

 

The strange thing is that it only happens at 20:00, when the scheduler trims the SSD. I changed the time to 19:00 and everything worked fine - I'm confused...

I'm not a programmer or code analyst, so I can't tell what this error means...


I think you have already found the root of the problem.

Does any read/write activity happen at 20:00 but not at 19:00? This looks like a compatibility problem with the new hardware; it may take an SSD firmware update to fix.
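If it does come down to firmware, the installed revision can be read with smartctl and compared against what Samsung currently offers for the 860 PRO (sketch; `/dev/sdb` is a placeholder device):

```shell
# Print the model and firmware revision reported by the drive's
# SMART/identify data. /dev/sdb is a placeholder; run as root.
smartctl -i /dev/sdb 2>/dev/null | grep -E "Device Model|Firmware Version" \
  || echo "smartctl unavailable or no such device"
```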

Posted (edited)
8 minutes ago, Benson said:

I think you have already found the root of the problem.

Does any read/write activity happen at 20:00 but not at 19:00? This looks like a compatibility problem with the new hardware; it may take an SSD firmware update to fix.

 

Nothing happens at 20:00 except the trim schedule - the 860 PRO is brand new, and there is no newer firmware at the moment.

The only other thing running today at 20:00 was my sister watching Plex, and the temp folder for transcoding is on the SSD.

The old Transcend SSD370S never had problems with transcoding and trim at the same time.

Edited by Zonediver

Posted (edited)
3 hours ago, Zonediver said:

I tried changing the time to 19:00 and everything worked fine

 

But what does this mean?

 

14 hours ago, Zonediver said:

From tomshardware.com:

In the past, Samsung's SATA SSDs couldn't execute queued TRIM commands in a Linux environment. The company fixed the issue in the 860 series. The 860 Pro is also the first consumer SSD advertised for use in a NAS environment

 

If Samsung had this issue before, why not simply assume it still exists, even though they say it was fixed in the 860 series? (In fact, I only just learned about that issue.)

Edited by Benson

Posted (edited)
8 minutes ago, Benson said:

 

But what does this mean?

 

It means I changed the trim schedule from 20:00 to 19:00 and it worked - but my sister wasn't watching Plex at that time.

I bought this SSD because my old one is out of warranty and the new one has a high TBW rating - I didn't know about any problems with Samsung SSDs and unRAID.

I only found that article today, but I had already bought the 860 PRO three days ago.

Edited by Zonediver




Copyright © 2005-2018 Lime Technology, Inc.
unRAID® is a registered trademark of Lime Technology, Inc.