From: James Bottomley <James.Bottomley@steeleye.com>
To: Jeff Garzik <jgarzik@pobox.com>
Cc: Anton Blanchard <anton@samba.org>, Jens Axboe <axboe@suse.de>,
Andrew Vasquez <andrew.vasquez@qlogic.com>,
SCSI Mailing List <linux-scsi@vger.kernel.org>
Subject: Re: PATCH [5/15] qla2xxx: SG tablesize update
Date: 14 Mar 2004 17:27:09 -0500 [thread overview]
Message-ID: <1079303231.2848.69.camel@mulgrave> (raw)
In-Reply-To: <20040314204158.GB1463@havoc.gtf.org>
On Sun, 2004-03-14 at 15:41, Jeff Garzik wrote:
> So, the block layer supports this hardware situation just fine... it's a
> matter of getting SCSI to understand that, I suppose :)
We already do all of that, if you look ...
The place where all of this becomes rather suboptimal for SCSI is on an
SMP system, where multiple queues for different devices race on multiple
CPUs and then have to be coalesced into a single queue for the host
adapter. It causes cache-line bouncing like you wouldn't believe...
> The issue of hardware-global resources constraining multiple device
> queues is one that requires a bit of thought. Not impossible... I did
> it in my Carmel driver. But so far, I have not seen any _real_
> solutions excepts for mine. All the others have been hacks -- such as
> "hardware queue size 1024 for all devices, so limit each device
> to '1024 / n_devices' requests at a time."
We do this too.
If you look in the scsi_host structure you see can_queue, which is the
maximum number of total requests we allow to the entire host. Then
there's queue_depth in scsi_device, which is the maximum number of
requests to the individual devices. We even have a very simplistic
fairness scheme to prevent one device from hogging all of the host queue
elements.
For dynamic resource situations we have the queuecommand return codes
SCSI_MLQUEUE_HOST_BUSY, which means the entire host is temporarily out
of resources and causes the mid layer to hold off all commands for that
host until one returns from any device on the host, and
SCSI_MLQUEUE_DEVICE_BUSY, which means the device queue is temporarily
out of resources and the mid layer holds off all commands to that
device until one returns from that device.
James
Thread overview: 22+ messages
2004-03-14 8:24 PATCH [5/15] qla2xxx: SG tablesize update Andrew Vasquez
2004-03-14 14:49 ` James Bottomley
2004-03-14 14:51 ` Jens Axboe
2004-03-14 14:59 ` James Bottomley
2004-03-14 15:15 ` Jens Axboe
2004-03-14 15:18 ` Anton Blanchard
2004-03-14 15:31 ` James Bottomley
2004-03-14 15:47 ` Anton Blanchard
2004-03-14 15:55 ` James Bottomley
2004-03-14 16:01 ` Anton Blanchard
2004-03-14 20:41 ` Jeff Garzik
2004-03-14 22:27 ` James Bottomley [this message]
2004-03-15 16:12 ` Jeff Garzik
2004-03-14 20:36 ` Jeff Garzik
2004-03-14 22:31 ` James Bottomley
2004-03-15 16:09 ` Jeff Garzik
-- strict thread matches above, loose matches on Subject: below --
2004-03-15 23:43 Andrew Vasquez
2004-03-16 3:37 ` James Bottomley
2004-03-16 6:40 ` Jeremy Higdon
2004-03-16 11:32 ` Anton Blanchard
2004-03-16 21:49 ` James Bottomley
2004-03-16 22:09 Andrew Vasquez