attempting task abort with block: remove artifical max_hw_sectors cap
From: Stefan Priebe - Profihost AG @ 2015-01-06 11:16 UTC
To: linux-ide
Dear list,
while testing 595b8ecd47e4dde64f704bf78d8c9b97e070ac67 ([PATCH] block:
remove artifical max_hw_sectors cap), I ran into a problem with some
directly attached Crucial M500 SSDs.
With the patch applied, I see the following max_sectors_kb value:
# cat /sys/block/sdi/queue/max_sectors_kb
16383
But this results in the following error messages (at random times, most
probably under high load):
Write(10): 2a 00 04 f9 db f0 00 0c d8 00
scsi target0:0:7: handle(0x000a), sas_address(0x4433221107000000), phy(7)
scsi target0:0:7: enclosure_logical_id(0x5003048016aee700), slot(5)
sd 0:0:7:0: task abort: SUCCESS scmd(ffff88018c6b4700)
sd 0:0:7:0: attempting task abort! scmd(ffff8807a12e6800)
sd 0:0:7:0: [sdh] CDB: Read(10): 28 00 15 d4 f9 60 00 00 10 00
scsi target0:0:7: handle(0x000a), sas_address(0x4433221107000000), phy(7)
scsi target0:0:7: enclosure_logical_id(0x5003048016aee700), slot(5)
sd 0:0:7:0: task abort: SUCCESS scmd(ffff8807a12e6800)
sd 0:0:7:0: attempting task abort! scmd(ffff88078d989800)
sd 0:0:7:0: [sdh] CDB: Read(10): 28 00 15 aa d2 f9 00 00 20 00
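For reference, max_sectors_kb is writable at runtime, so the value can
be capped back by hand for testing (512 KB is what I believe the
pre-patch default was):

# echo 512 > /sys/block/sdi/queue/max_sectors_kb
# cat /sys/block/sdi/queue/max_sectors_kb
512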
Stefan
Re: attempting task abort with block: remove artifical max_hw_sectors cap
From: Christoph Hellwig @ 2015-01-07 14:27 UTC
To: Stefan Priebe - Profihost AG; +Cc: linux-ide, linux-scsi, Sreekanth Reddy
Hi Stefan,
On Tue, Jan 06, 2015 at 12:16:35PM +0100, Stefan Priebe - Profihost AG wrote:
> while testing 595b8ecd47e4dde64f704bf78d8c9b97e070ac67 ([PATCH] block:
> remove artifical max_hw_sectors cap), I ran into a problem with some
> directly attached Crucial M500 SSDs.
>
> With the patch applied, I see the following max_sectors_kb value:
> # cat /sys/block/sdi/queue/max_sectors_kb
> 16383
>
> But this results in the following error messages (at random times, most
> probably under high load):
>
> Write(10): 2a 00 04 f9 db f0 00 0c d8 00
> scsi target0:0:7: handle(0x000a), sas_address(0x4433221107000000), phy(7)
> scsi target0:0:7: enclosure_logical_id(0x5003048016aee700), slot(5)
Seems like I misdirected you to only linux-ide: from this printk it
looks like the SSD is attached to a SAS HBA using the mpt2sas or
mpt3sas drivers.
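As a rough sanity check (a sketch, assuming 512-byte logical blocks):
bytes 7-8 of the quoted Write(10) CDB are the big-endian transfer
length, so that aborted command was a fairly large write:

$ echo $((0x0cd8))                # transfer length in sectors
3288
$ echo $((0x0cd8 * 512 / 1024))   # i.e. per-command size in KB
1644

That is well above the previous default cap (512 KB, if I remember
correctly), which fits the theory that the drive chokes on large I/Os.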
But 16383 kilobytes is 32766 sectors, which is still only half of what
ATA disks should support. It would be interesting to know whether other
people also see an issue with too-large I/Os on an M500 SSD when
attached to an ATA controller.
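For the arithmetic (a quick sketch; 65536 sectors is the ceiling of a
16-bit ATA sector count, where 0 encodes 65536):

$ echo $((16383 * 1024 / 512))    # 16383 KB in 512-byte sectors
32766
$ echo $((65536 * 512 / 1024))    # full 16-bit ATA sector count in KB
32768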
> sd 0:0:7:0: task abort: SUCCESS scmd(ffff88018c6b4700)
> sd 0:0:7:0: attempting task abort! scmd(ffff8807a12e6800)
> sd 0:0:7:0: [sdh] CDB: Read(10): 28 00 15 d4 f9 60 00 00 10 00
> scsi target0:0:7: handle(0x000a), sas_address(0x4433221107000000), phy(7)
> scsi target0:0:7: enclosure_logical_id(0x5003048016aee700), slot(5)
> sd 0:0:7:0: task abort: SUCCESS scmd(ffff8807a12e6800)
> sd 0:0:7:0: attempting task abort! scmd(ffff88078d989800)
> sd 0:0:7:0: [sdh] CDB: Read(10): 28 00 15 aa d2 f9 00 00 20 00
>
> Stefan
---end quoted text---