From: Patrick Mansfield <patmans@us.ibm.com>
To: James Bottomley <James.Bottomley@SteelEye.com>
Cc: SCSI Mailing List <linux-scsi@vger.kernel.org>
Subject: Re: [PATCH] 6/7 add and use a per-scsi_device queue_lock
Date: Thu, 27 Mar 2003 16:30:47 -0800 [thread overview]
Message-ID: <20030327163047.A14782@beaverton.ibm.com> (raw)
In-Reply-To: <1048781376.1789.49.camel@mulgrave>; from James.Bottomley@SteelEye.com on Thu, Mar 27, 2003 at 10:09:34AM -0600
On Thu, Mar 27, 2003 at 10:09:34AM -0600, James Bottomley wrote:
> On Tue, 2003-03-25 at 20:01, Patrick Mansfield wrote:
> > The above doesn't happen because of the while() check: if anyone gets put
> > on or re-added to the starved list, we drop out of the while loop. The
> > current code also allows multiple IO completions to run starved queues in
> > parallel (I doubt it helps any, but I have no evidence one way or the
> > other).
>
> Well, not necessarily. Dropping out of the while loop assumes that
> host_busy goes below can_queue. However, if something else is sourcing
(I assume you meant hits can_queue.)
> requests, it may get in between dropping the host_lock and acquiring the
> queue_lock. In this case, running the block queues would simply re-add
> the request and keep host_busy at can_queue, thus we could get into a
> loop.
I don't see how new requests can cause a loop there, since a new request
will increment host_busy.
Or do you mean new IO can prevent other IO? It's possible for a new IO
(if only one request is sent) to sneak in and prevent a starved queue
from running (even with a list_splice algorithm).
Or do you mean the following case?
An IO completion with highly unlikely synchronization could lead to
a loop, for example:
    cpu1                              cpu2
t1: in scsi_queue_next_request:
    pulled sdev off the starved
    list, called __blk_run_queue,
    one IO was sent, sdev was put
    back on the starved_list
t2:                                   got host_lock, decremented
                                      host_busy
t3: waits for host_lock just
    at the end of the while loop
                                      releases host_lock
    go to t1                          calls scsi_queue_next_request
                                      and generally does nothing
                                      since cpu1 ran the queue(s)
                                      go to t2
It is probably impossible to sustain the above synchronization; I don't
know whether the irqs would route to another cpu in such a synchronized
fashion (using cpu1, but then cpu2 for other interrupts from the same
adapter).
Using list_splice (or somehow limiting it to one iteration across all
sdev's) would prevent the above from occurring.
> > > I'd also like to see avoidance of the locking hierarchy where possible.
> > > i.e. in the scsi_request_fn for instance, can we move the actions that
> > > need the host_lock outside the queue_lock?
> >
> > I tried coding to avoid the lock hierarchy but did not quite make it.
>
> OK.
>
> What about making the host processing in the scsi_request_fn exceptional
> (i.e. putting it outside the queue lock after the request has been
> dequeued). That way we'd have to do a queue push back if it was
> unacceptable (and get the counter decrements correct)?
I missed the dequeueing step.
> Hmm, moving them should be OK. The !req case probably wants to be
> treated the same as starvation (except you have to beware the
> device_busy == 0 && host_busy == 0 case where the queue needs plugging).
>
OK, I'll try to move those up, and do host locking/checking after the
blkdev_dequeue_request, and then push req back on the queue if the host
can't queue.
> > There is still an issue with the single_lun checks, where we allow
> > continued IO to a device that has device_busy set, but not for a device
> > that has device_busy == 0 and target_busy != 0; so we have to get the
> > lock for device_busy, and the lock for target busy. I could keep the lock
> > hierarchy only in single_lun cases, or add another target lock that would
> > be part of a lock hierarchy.
>
> That's probably OK, since these checks are now exceptional (we only get
> into the logic if the flag is set).
OK - so I will keep the lock hierarchy for single_lun.
> > Should I add a target lock?
>
> Not unless you can provide an *extremely* good justification for it.
Good, no target lock.
> > Related note: the new workq code in blk probably messes up our device_blocked
> > and host_blocked handling, since there is now a 3 millisecond timeout (see
> blk_plug_queue). device_blocked and host_blocked should really use some
> > type of timeout mechanism. Maybe a new blk_plug_timeout(). This might also
> > allow us to get rid of the one remaining recursive call into
> > scsi_request_fn.
>
> Since we never gave any guarantees other than "eventually" about the
> restart time, I don't necessarily think it does.
It won't fail, but we will quickly retry IO after a host busy or queue
full (when host_busy or device_busy == 0).
-- Patrick Mansfield
Thread overview: 34+ messages
2003-03-25 1:53 [PATCH] 0/7 per scsi_device queue lock patches Patrick Mansfield
2003-03-25 1:54 ` [PATCH] 1/7 starved changes - use a list_head for starved queue's Patrick Mansfield
2003-03-25 2:02 ` [PATCH] 2/7 add missing scsi_queue_next_request calls Patrick Mansfield
2003-03-25 2:02 ` [PATCH] 3/7 consolidate single_lun code Patrick Mansfield
2003-03-25 2:03 ` [PATCH] 4/7 cleanup/consolidate code in scsi_request_fn Patrick Mansfield
2003-03-25 2:03 ` [PATCH] 5/7 alloc a request_queue on each scsi_alloc_sdev call Patrick Mansfield
2003-03-25 2:03 ` [PATCH] 6/7 add and use a per-scsi_device queue_lock Patrick Mansfield
2003-03-25 2:04 ` [PATCH] 7/7 fix single_lun code for " Patrick Mansfield
2003-03-25 21:23 ` Luben Tuikov
2003-03-26 21:47 ` Patrick Mansfield
2003-03-26 22:12 ` Luben Tuikov
2003-03-25 21:03 ` [PATCH] 6/7 add and use a " Luben Tuikov
2003-03-26 21:33 ` Patrick Mansfield
2003-03-25 21:20 ` James Bottomley
2003-03-26 2:01 ` Patrick Mansfield
2003-03-27 16:09 ` James Bottomley
2003-03-28 0:30 ` Patrick Mansfield [this message]
2003-03-25 7:12 ` [PATCH] 5/7 alloc a request_queue on each scsi_alloc_sdev call Christoph Hellwig
2003-03-25 7:18 ` Jens Axboe
2003-03-25 21:32 ` [PATCH] 4/7 cleanup/consolidate code in scsi_request_fn Luben Tuikov
2003-03-26 0:58 ` Patrick Mansfield
2003-03-26 17:07 ` Luben Tuikov
2003-03-26 17:13 ` Patrick Mansfield
2003-03-26 17:25 ` Luben Tuikov
2003-03-25 20:36 ` [PATCH] 3/7 consolidate single_lun code Luben Tuikov
2003-03-26 19:11 ` Patrick Mansfield
2003-03-26 22:05 ` Luben Tuikov
2003-03-27 22:43 ` Patrick Mansfield
2003-03-28 15:09 ` Luben Tuikov
2003-03-28 20:06 ` Patrick Mansfield
2003-03-25 20:50 ` Luben Tuikov
2003-03-25 19:41 ` [PATCH] 2/7 add missing scsi_queue_next_request calls Luben Tuikov
2003-03-25 19:39 ` [PATCH] 1/7 starved changes - use a list_head for starved queue's Luben Tuikov
2003-03-27 16:14 ` [PATCH] 0/7 per scsi_device queue lock patches James Bottomley