public inbox for linux-scsi@vger.kernel.org
From: James Bottomley <James.Bottomley@suse.de>
To: "Miller, Mike (OS Dev)" <Mike.Miller@hp.com>
Cc: "scameron@beardog.cce.hp.com" <scameron@beardog.cce.hp.com>,
	"linux-scsi@vger.kernel.org" <linux-scsi@vger.kernel.org>
Subject: RE: 16 commands per lun limitation bug?
Date: Wed, 10 Feb 2010 17:22:40 -0500	[thread overview]
Message-ID: <1265840560.2769.584.camel@mulgrave.site> (raw)
In-Reply-To: <0F5B06BAB751E047AB5C87D1F77A778869C9E3AA47@GVW0547EXC.americas.hpqcorp.net>

On Wed, 2010-02-10 at 22:19 +0000, Miller, Mike (OS Dev) wrote:
> 
> > -----Original Message-----
> > From: James Bottomley [mailto:James.Bottomley@suse.de] 
> > Sent: Wednesday, February 10, 2010 3:59 PM
> > To: scameron@beardog.cce.hp.com
> > Cc: linux-scsi@vger.kernel.org; Miller, Mike (OS Dev)
> > Subject: Re: 16 commands per lun limitation bug?
> > 
> > On Wed, 2010-02-10 at 14:19 -0600, scameron@beardog.cce.hp.com wrote:
> > > We have seen the number of commands per LUN that are sent to the
> > > low-level SCSI driver limited to 16 commands per LUN (seemingly
> > > artificially, well below our can_queue and cmd_per_lun limits of
> > > 1020).
> > > 
> > > 2.6.29 does not exhibit this bad behavior.
> > > 2.6.30, 2.6.31, and 2.6.32 (2.6.32.1 through 2.6.32.8) do exhibit
> > > this bad behavior.
> > > 2.6.31-rc1 does not exhibit this bad behavior.
> > 
> > I can't think of any reason for this.  Best guess at the fix would
> > be the new queue-full ramp-up/ramp-down code, but no clue as to what
> > the problem is ... no other drivers seem to have noticed the
> > performance problems this would likely cause ... and 2.6.32 is
> > becoming the standard enterprise kernel.
> > 
> James,
> I'm not sure there's much hardware out there that's capable of queuing
> up so many commands without choking the disks. I _think_ in most cases
> if you queue up, say, 64 commands on a single SCSI disk, that's just
> too much. But with Smart Array we can queue up to 1024 commands (on
> most controllers), and those are then distributed across all the
> drives in the array(s). IOW, we're thinking not many people would have
> noticed such a change. Hope this makes sense.
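
[Editor's note: Mike's distribution argument can be sketched with a quick back-of-the-envelope calculation. This is a hedged illustration; the even split and the drive count of 16 are assumptions, and only the ~1024-command controller queue comes from the message above.]

```python
# Hedged illustration of why a large controller-level queue may go
# unnoticed at the per-disk level: the controller's outstanding
# commands are spread across all member drives of the array(s).
def per_drive_depth(controller_queue: int, drives: int) -> int:
    """Naive even split of outstanding commands across member drives."""
    return controller_queue // drives

# With ~1024 commands over, say, 16 drives, each disk sees about 64
# commands in flight -- which Mike suggests is already near the
# practical limit for a single SCSI disk.
print(per_drive_depth(1024, 16))  # prints 64
```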

Fibre Channel drivers talking to FC arrays would regard a depth of 16
as "fiddling small change", to quote Hitchhiker's.  If any of the FC
drivers were limited in this way, we'd see substantial enterprise
performance drops ... which I haven't actually heard about yet.
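
[Editor's note: the queue-full ramp-up/ramp-down code James suspects adjusts the per-LUN queue depth in response to QUEUE FULL status from the device. The toy model below is an editor's sketch, not the kernel's actual tracking logic; the halving policy and the 128-completion ramp-up period are invented for illustration. It shows the mechanism by which such logic, if buggy, could leave the depth pinned far below cmd_per_lun.]

```python
# Toy model (NOT the kernel code) of per-LUN queue-depth tracking:
# a QUEUE FULL status ramps the allowed depth down, and a run of
# clean completions slowly ramps it back up.  A defect in logic of
# this shape could pin the effective depth well below cmd_per_lun.
class LunQueue:
    def __init__(self, max_depth=1020, min_depth=1):
        self.max_depth = max_depth      # cmd_per_lun-style ceiling
        self.min_depth = min_depth
        self.depth = max_depth          # current allowed queue depth
        self.good_completions = 0

    def on_queue_full(self):
        # Ramp down: halve the allowed depth (policy is illustrative).
        self.depth = max(self.min_depth, self.depth // 2)
        self.good_completions = 0

    def on_completion(self, ramp_up_period=128):
        # Ramp up: after enough clean completions, allow one more command.
        self.good_completions += 1
        if self.good_completions >= ramp_up_period and self.depth < self.max_depth:
            self.depth += 1
            self.good_completions = 0

q = LunQueue()
q.on_queue_full()           # depth: 1020 -> 510
for _ in range(128):
    q.on_completion()       # depth: 510 -> 511 after the ramp-up period
```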

James



Thread overview: 5+ messages
2010-02-10 20:19 16 commands per lun limitation bug? scameron
2010-02-10 21:59 ` James Bottomley
2010-02-10 22:19   ` Miller, Mike (OS Dev)
2010-02-10 22:22     ` James Bottomley [this message]
2010-02-11  2:27       ` scameron
