public inbox for linux-scsi@vger.kernel.org
From: James Bottomley <James.Bottomley@steeleye.com>
To: Patrick Mansfield <patmans@us.ibm.com>
Cc: SCSI Mailing List <linux-scsi@vger.kernel.org>
Subject: Re: [PATCH] 6/7 add and use a per-scsi_device queue_lock
Date: 27 Mar 2003 10:09:34 -0600	[thread overview]
Message-ID: <1048781376.1789.49.camel@mulgrave> (raw)
In-Reply-To: <20030325180133.A1874@beaverton.ibm.com>

On Tue, 2003-03-25 at 20:01, Patrick Mansfield wrote: 
> On Tue, Mar 25, 2003 at 03:20:32PM -0600, James Bottomley wrote:
> > and leads to awful problems
> > for your starved_list: the host_lock is actually protecting the starved
> > list, so the way you currently have it, you have to drop the host lock
> > to acquire the queue_lock to run the block queue.  Unfortunately, doing
> > this could lead to the device being readded when running the queue. 
> > Thus, the queue_next_request can spin removing and adding the device to
> > the starved list---probably working from a copy of the starved list will
> > help with this.
> 
> The above doesn't happen because of the while() check: if anyone gets put
> on or re-added to the starved list, we drop out of the while loop. The
> current code also allows multiple IO completions to run starved queues in
> parallel (I doubt it helps any, but I have no evidence one way or the
> other).

Well, not necessarily.  Dropping out of the while loop assumes that
host_busy goes below can_queue.  However, if something else is sourcing
requests, it may get in between dropping the host_lock and acquiring the
queue_lock.  In this case, running the block queues would simply re-add
the request and keep host_busy at can_queue, thus we could get into a
loop. 
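
To make the "work from a copy of the starved list" suggestion concrete, here is a userspace sketch (not the kernel code; the names starved_entry and run_starved_queues are illustrative) of the splice-to-a-private-list pattern: the whole list is moved to a local head under the lock, then walked unlocked, so a device that re-adds itself lands on the live list and is picked up on the next pass instead of making this pass spin:

```c
/* Userspace sketch of processing the starved list from a copy.
 * In the kernel this would be list_splice_init() under host_lock;
 * here a simple singly-linked list and a pthread mutex stand in. */
#include <pthread.h>
#include <stdlib.h>

struct starved_entry {
	int id;
	struct starved_entry *next;
};

static pthread_mutex_t host_lock = PTHREAD_MUTEX_INITIALIZER;
static struct starved_entry *starved_list;	/* protected by host_lock */

static void starved_add(int id)
{
	struct starved_entry *e = malloc(sizeof(*e));

	e->id = id;
	pthread_mutex_lock(&host_lock);
	e->next = starved_list;
	starved_list = e;
	pthread_mutex_unlock(&host_lock);
}

/* Returns the number of entries processed in this pass. */
static int run_starved_queues(void)
{
	struct starved_entry *local, *e;
	int processed = 0;

	/* Splice the whole list to a private head under the lock... */
	pthread_mutex_lock(&host_lock);
	local = starved_list;
	starved_list = NULL;
	pthread_mutex_unlock(&host_lock);

	/* ...then walk the copy unlocked; anything re-added goes onto
	 * the (now empty) live list and waits for the next pass. */
	while ((e = local)) {
		local = e->next;
		processed++;
		free(e);
	}
	return processed;
}
```

The key property is that a re-add during the walk cannot extend the current pass, which bounds the loop regardless of what other request sources do.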

> > I'd also like to see avoidance of the locking hierarchy where possible. 
> > i.e. in the scsi_request_fn for instance, can we move the actions that
> > need the host_lock outside the queue_lock?
> 
> I tried coding to avoid the lock hierarchy but did not quite make it.

OK.

What about making the host processing in scsi_request_fn exceptional
(i.e. putting it outside the queue lock, after the request has been
dequeued)?  That way we'd have to push the request back onto the queue
if it was unacceptable (and get the counter decrements correct).
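
A minimal sketch of that ordering, with plain ints standing in for the locked counters (try_issue, fake_host, and fake_dev are illustrative names, and the real code would hold queue_lock and host_lock around the respective halves):

```c
/* Sketch of "dequeue first, check host outside the queue lock":
 * the device count is taken early; if the host turns out to be at
 * can_queue, the exceptional path undoes the count and the caller
 * pushes the request back onto the block queue. */
struct fake_host { int host_busy, can_queue; };
struct fake_dev  { int device_busy, queue_depth; };

/* Returns 1 if the request may be issued, 0 if it must be requeued. */
static int try_issue(struct fake_host *h, struct fake_dev *d)
{
	/* Done under queue_lock in the real code: */
	if (d->device_busy >= d->queue_depth)
		return 0;
	d->device_busy++;

	/* Done under host_lock, after dropping queue_lock: */
	if (h->host_busy >= h->can_queue) {
		/* Exceptional path: undo the count, requeue. */
		d->device_busy--;
		return 0;
	}
	h->host_busy++;
	return 1;
}
```

Getting the decrement on the failure path right is exactly the "counter decrements correct" part: without it, a requeued request would leak a device_busy count.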

> Without a lock hierarchy, (in scsi_request_fn) I have to lock, check for
> resources, increment host_busy or device_busy, and unlock.
> 
> Then, if unable to get a host resource, I have to decrement device_busy.
> 
> For the blk_queue_empty(q) or the "if (!req)" cases, I have to decrement
> both device_busy and host_busy.
> 
> I thought (past tense) moving up blk_queue_empty(q) and the "if (!req)"
> checks would cause problems by not decrementing device_blocked and
> host_blocked, and that there is no way to handle host_busy being
> decremented back to 0 there.
> 
> On further thought, those checks can be moved up before checking any of
> the sdev or shost values (if blk_queue_empty, we don't care about
> device_blocked or host_blocked). Agree? If so that issue goes away.

Hmm, moving them should be OK.  The !req case probably wants to be
treated the same as starvation (except you have to beware the
device_busy == 0 && host_busy == 0 case where the queue needs plugging).
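
The dangerous corner case can be sketched like this (fake_state and handle_empty_queue are illustrative; plugged stands in for a call like blk_plug_device()):

```c
/* Sketch of the !req path treated like starvation: both counters
 * are dropped, and if device_busy and host_busy both reach zero
 * there is no in-flight completion left to restart the queue, so
 * it must be plugged explicitly here. */
struct fake_state { int device_busy, host_busy, plugged; };

static void handle_empty_queue(struct fake_state *s)
{
	s->device_busy--;
	s->host_busy--;
	/* Beware device_busy == 0 && host_busy == 0: nothing will
	 * run the queue again, so plug it now. */
	if (s->device_busy == 0 && s->host_busy == 0)
		s->plugged = 1;
}
```

If anything else is still in flight (either counter nonzero), its completion will run the queue, so no plug is needed on that path.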

> There is still an issue with the single_lun checks, where we allow
> continued IO to a device that has device_busy set, but not for a device
> that has device_busy == 0 and target_busy != 0; so we have to get the
> lock for device_busy, and the lock for target busy. I could keep the lock
> hierarchy only in single_lun cases, or add another target lock that would
> be part of a lock hierarchy.

That's probably OK, since these checks are now exceptional (we only get
into the logic if the flag is set).

> Should I add a target lock? 

Not unless you can provide an *extremely* good justification for it.

> Excluding the single_lun case, do you think it's worthwhile to avoid the
> lock hierarchy?

As long as it doesn't contort the code, yes.  Lock hierarchies are
accidents waiting to happen, so taking reasonable pains to avoid them is
good.

> Related note: the new workq code in blk probably messes up our device_blocked
> and host_blocked handling, since there is now a 3 millisecond timeout (see
> blk_plug_queue). device_blocked and host_blocked should really use some
> type of timeout mechanism. Maybe a new blk_plug_timeout(). This might also
> allow us to get rid of the one remaining recursive call into
> scsi_request_fn.

Since we never gave any guarantees other than "eventually" about the
restart time, I don't necessarily think it does.

James




Thread overview: 34+ messages
2003-03-25  1:53 [PATCH] 0/7 per scsi_device queue lock patches Patrick Mansfield
2003-03-25  1:54 ` [PATCH] 1/7 starved changes - use a list_head for starved queue's Patrick Mansfield
2003-03-25  2:02   ` [PATCH] 2/7 add missing scsi_queue_next_request calls Patrick Mansfield
2003-03-25  2:02     ` [PATCH] 3/7 consolidate single_lun code Patrick Mansfield
2003-03-25  2:03       ` [PATCH] 4/7 cleanup/consolidate code in scsi_request_fn Patrick Mansfield
2003-03-25  2:03         ` [PATCH] 5/7 alloc a request_queue on each scsi_alloc_sdev call Patrick Mansfield
2003-03-25  2:03           ` [PATCH] 6/7 add and use a per-scsi_device queue_lock Patrick Mansfield
2003-03-25  2:04             ` [PATCH] 7/7 fix single_lun code for " Patrick Mansfield
2003-03-25 21:23               ` Luben Tuikov
2003-03-26 21:47                 ` Patrick Mansfield
2003-03-26 22:12                   ` Luben Tuikov
2003-03-25 21:03             ` [PATCH] 6/7 add and use a " Luben Tuikov
2003-03-26 21:33               ` Patrick Mansfield
2003-03-25 21:20             ` James Bottomley
2003-03-26  2:01               ` Patrick Mansfield
2003-03-27 16:09                 ` James Bottomley [this message]
2003-03-28  0:30                   ` Patrick Mansfield
2003-03-25  7:12           ` [PATCH] 5/7 alloc a request_queue on each scsi_alloc_sdev call Christoph Hellwig
2003-03-25  7:18             ` Jens Axboe
2003-03-25 21:32         ` [PATCH] 4/7 cleanup/consolidate code in scsi_request_fn Luben Tuikov
2003-03-26  0:58           ` Patrick Mansfield
2003-03-26 17:07             ` Luben Tuikov
2003-03-26 17:13               ` Patrick Mansfield
2003-03-26 17:25                 ` Luben Tuikov
2003-03-25 20:36       ` [PATCH] 3/7 consolidate single_lun code Luben Tuikov
2003-03-26 19:11         ` Patrick Mansfield
2003-03-26 22:05           ` Luben Tuikov
2003-03-27 22:43             ` Patrick Mansfield
2003-03-28 15:09               ` Luben Tuikov
2003-03-28 20:06                 ` Patrick Mansfield
2003-03-25 20:50       ` Luben Tuikov
2003-03-25 19:41     ` [PATCH] 2/7 add missing scsi_queue_next_request calls Luben Tuikov
2003-03-25 19:39   ` [PATCH] 1/7 starved changes - use a list_head for starved queue's Luben Tuikov
2003-03-27 16:14 ` [PATCH] 0/7 per scsi_device queue lock patches James Bottomley
