From: Chris Friesen <chris.friesen@windriver.com>
To: Jens Axboe <axboe@kernel.dk>, lkml <linux-kernel@vger.kernel.org>,
	<linux-scsi@vger.kernel.org>, Mike Snitzer <snitzer@redhat.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>
Subject: Re: absurdly high "optimal_io_size" on Seagate SAS disk
Date: Thu, 6 Nov 2014 11:16:53 -0600	[thread overview]
Message-ID: <545BAD05.3050800@windriver.com> (raw)
In-Reply-To: <545BA625.40308@windriver.com>

On 11/06/2014 10:47 AM, Chris Friesen wrote:
> Hi,
>
> I'm running a modified 3.4-stable kernel on relatively recent x86
> server-class hardware.
>
> I recently installed a Seagate ST900MM0026 (900GB 2.5in 10K SAS drive)
> and it's reporting a value of 4294966784 for optimal_io_size.  The other
> parameters look normal though:
>
> /sys/block/sda/queue/hw_sector_size:512
> /sys/block/sda/queue/logical_block_size:512
> /sys/block/sda/queue/max_segment_size:65536
> /sys/block/sda/queue/minimum_io_size:512
> /sys/block/sda/queue/optimal_io_size:4294966784

<snip>

> According to the manual, the ST900MM0026 has a 512 byte physical sector
> size.
>
> Is this a drive firmware bug?  Or a bug in the SAS driver?  Or is there
> a valid reason for a single drive to report such a huge value?
>
> Would it make sense for the kernel to do some sort of sanity checking on
> this value?
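
For what it's worth, the number itself looks suspicious: 4294966784 is 
exactly 2^32 - 512 (0xFFFFFE00).  If I'm remembering sd.c correctly, the 
sysfs value is just the OPTIMAL TRANSFER LENGTH field from the Block 
Limits VPD page multiplied by the logical block size and stored in a 
32-bit queue limit, so the drive would be reporting either 0x7FFFFF 
blocks (which gives that number exactly) or 0xFFFFFFFF blocks (which 
wraps to the same number in 32 bits).  Quick userspace demo of the 
arithmetic -- this is only a sketch of my understanding, not the actual 
kernel code:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t block_size = 512;

	/* Drive reporting 0x7FFFFF blocks: product fits in 32 bits. */
	uint32_t io_opt_a = UINT32_C(0x7FFFFF) * block_size;

	/* Drive reporting 0xFFFFFFFF blocks: 32-bit multiply wraps. */
	uint32_t io_opt_b = UINT32_C(0xFFFFFFFF) * block_size;

	printf("0x7FFFFF   * 512 = %" PRIu32 "\n", io_opt_a);
	printf("0xFFFFFFFF * 512 = %" PRIu32 " (wrapped)\n", io_opt_b);
	/* Both print 4294966784, matching the sysfs value above. */
	return 0;
}

If someone has sg3_utils handy, something like "sg_vpd --page=bl 
/dev/sda" should dump the raw Block Limits page and show which value 
the firmware is actually reporting (assuming I have the option right).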

Looks like this sort of thing has been seen before with other drives 
(one of which is from the same family as mine):

http://www.spinics.net/lists/linux-scsi/msg65292.html

http://iamlinux.technoyard.in/blog/why-is-my-ssd-disk-not-reconized-by-the-rhel6-anaconda-installer/

Perhaps the ST900MM0026 should be blacklisted as well?

Or maybe the SCSI code should do a variation on Mike Snitzer's original 
patch and just ignore any values above some reasonable threshold?  (And 
then we could remove the blacklist on the ST900MM0006.)
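
Purely as an untested sketch of what I mean (the helper is hypothetical 
and the threshold is a number I made up; the other names are from my 
memory of sd_read_block_limits() in drivers/scsi/sd.c):

/*
 * Hypothetical helper that sd_read_block_limits() could call instead
 * of feeding the VPD value straight into blk_queue_io_opt().
 * SD_MAX_SANE_IO_OPT is an arbitrary illustrative threshold.
 */
#define SD_MAX_SANE_IO_OPT	(128 * 1024 * 1024)	/* 128 MB */

static void sd_set_io_opt(struct scsi_disk *sdkp, unsigned char *buffer,
			  unsigned int sector_sz)
{
	unsigned int io_opt = get_unaligned_be32(&buffer[12]) * sector_sz;

	/* Ignore values too large to be a plausible I/O hint. */
	if (io_opt > SD_MAX_SANE_IO_OPT) {
		sd_printk(KERN_NOTICE, sdkp,
			  "ignoring bogus optimal transfer size %u bytes\n",
			  io_opt);
		io_opt = 0;
	}

	blk_queue_io_opt(sdkp->disk->queue, io_opt);
}

That way drives that report garbage just fall back to "no hint" instead 
of each needing a blacklist entry.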

Chris

Thread overview: 20+ messages
2014-11-06 16:47 absurdly high "optimal_io_size" on Seagate SAS disk Chris Friesen
2014-11-06 17:16 ` Chris Friesen [this message]
2014-11-06 17:34   ` Martin K. Petersen
2014-11-06 17:45     ` Chris Friesen
2014-11-06 18:12       ` Martin K. Petersen
2014-11-06 18:15         ` Jens Axboe
2014-11-06 19:14         ` Chris Friesen
2014-11-07  1:56           ` Martin K. Petersen
2014-11-07  5:35             ` Chris Friesen
2014-11-07 15:18               ` Dale R. Worley
2014-11-07 16:25               ` Martin K. Petersen
2014-11-07 17:42                 ` Martin K. Petersen
2014-11-07 17:51                   ` Chris Friesen
2014-11-07 18:03                     ` Martin K. Petersen
2014-11-07 18:48                 ` Chris Friesen
2014-11-07 19:17                   ` Martin K. Petersen
2014-11-07 21:04                     ` Chris Friesen
2014-11-07 17:10             ` Elliott, Robert (Server Storage)
2014-11-07 17:40               ` Martin K. Petersen
2014-11-07 20:15               ` Douglas Gilbert
