public inbox for linux-scsi@vger.kernel.org
From: Patrick Mansfield <patmans@us.ibm.com>
To: Luben Tuikov <luben@splentec.com>
Cc: linux-scsi@vger.kernel.org
Subject: Re: [PATCH] 3/7 consolidate single_lun code
Date: Wed, 26 Mar 2003 11:11:46 -0800	[thread overview]
Message-ID: <20030326111146.A3548@beaverton.ibm.com> (raw)
In-Reply-To: <3E80BDC3.7000503@splentec.com>; from luben@splentec.com on Tue, Mar 25, 2003 at 03:36:19PM -0500

On Tue, Mar 25, 2003 at 03:36:19PM -0500, Luben Tuikov wrote:
> Patrick Mansfield wrote:
> > Consolidate scsi single_lun code.
> > 

> >  /*
> > + * Called for single_lun devices. The target associated with current_sdev can
> > + * only handle one active command at a time. Scan through all of the luns
> > + * on the same target as current_sdev, return 1 if any are active.
> > + */
> 
> Here is part of your comment:
> 	A single_lun (single_lu:1) marked target can handle a single
> 	active LU at a time.

OK - I'll change "one active command" to "one active LU". Also, it does not
actually check current_sdev->device_busy; I'll clarify that in the comment.

> > +static int scsi_single_lun_check(struct scsi_device *current_sdev)
> 
> Here is the name:
> 
> static int scsi_single_lun_active(struct scsi_device *dev);
> 
> The reason is that this makes sense:
> 
> 	if (scsi_single_lun_active(dev)) {
> 		/* then there is an active lu */
> 	} else {
> 		/* No, there are no active devices or
> 		the target is NOT a single lun. */
> 	}
> 
> or the negation:
> 
> 	if (!scsi_single_lun_active(dev)) {
> 		/* no active lun devices or target is not a single
> 		LU, -- either way we can continue here: */
> 		...

OK - change the name to scsi_single_lun_active.

I was trying to keep the main-path code fast, so we do not call out to any
functions in the non-single_lun case. We can change
scsi_queue_next_request to be consistent with this approach: have one
function to handle leftover IO, one for the single_lun case, and one for
starved queues, and only call these functions when necessary.

> > + */
> > +static void scsi_single_lun_run(struct scsi_device *current_sdev,
> > +				 struct request_queue *q)
> 
> Now here I've got a problem.  This just pulls out the old logic
> into a separate fn.

That is the main intention of this particular patch - move the code out
of the way. Then the later patches add a per scsi_device queue_lock, and
then fix the single_lun code to work with the new locking.

> I'd rather you did this: in the request fn put the single_lun device
> into the starved devices list, just as you'd do for other devices.

> If you want you to discern between devices, you can add single_lun
> devices to the front and fully capable devices into the tail of the
> starved devices list -- this would make the starved devices list
> into a priority queue.

But the last single_lun LU that ran should have priority over any other
LUs on that same target (really it should get some slice of IO time,
rather than being allowed to issue IO until it is no longer busy), and
separately, the first starved device should have priority over all other
starved devices. I can't do both (simply) with one list.

single_lun devices are likely slow (CDROM), and we really don't want to
give them priority over other starved devices.

> static inline void scsi_run_starved_devs(struct Scsi_Host *shost)
> {
> 	unsigned long flags;
> 	LIST_HEAD(starved);
> 	
> 	spin_lock_irqsave(&shost->starved_devs_lock, flags);
> 	list_splice_init(&shost->starved_devs, &starved);
> 	spin_unlock_irqrestore(&shost->starved_devs_lock, flags);
> 	
> 	while (!list_empty(&starved)) {
> 		struct scsi_device *dev;
> 
> 		if (shost->host_blocked || shost->host_self_blocked ||
> 		    (shost->can_queue > 0 &&
> 		     shost->host_busy >= shost->can_queue))
> 			break;
> 	
> 		dev = list_entry(starved.next, struct scsi_device, starved_entry);
> 		list_del_init(&dev->starved_entry);
> 		spin_lock_irqsave(dev->request_queue->queue_lock, flags);
> 		__blk_run_queue(dev->request_queue);
> 		spin_unlock_irqrestore(dev->request_queue->queue_lock, flags);
> 	}
> 	
> 	if (!list_empty(&starved)) {
> 		spin_lock_irqsave(&shost->starved_devs_lock, flags);
> 		list_splice_init(&starved, &shost->starved_devs);
> 		spin_unlock_irqrestore(&shost->starved_devs_lock, flags);
> 	}
> }

I did not take the list_splice approach mainly because I wanted to
allow multiple CPUs to run the starved queues (for the host_blocked cases,
not can_queue). I don't have any data to back this up, and it probably
does not matter.

When the can_queue limit is hit, with continuous IO across multiple LUs
on a host, we will (likely) send only one more IO (run one starved
queue) and then stop - so the last few lines of the above code would
always be executed.

Also, the same lock has to be used to protect the shost starved list
and each scsi_device->starved_entry. In the above code, the first
list_splice_init touches shost->starved_devs->next->prev -
corresponding to a scsi_device->starved_entry->prev - while holding
starved_devs_lock, but the following list_del_init is done without
holding that lock.

> Now since __blk_run_queue(q) calls the scsi_request_fn() it MAY
> add the device back to the starved list, but that's no problem
> as it is already empty.  If the host blocks in due time, then
> the left-over devices are added to the front and we get
> prioritization by time (submittal).

Yes, that happens whether we splice out a list or not.

> And this is how scsi_queue_next_request() might look like:
> (i.e. a more general form)
> 

> 	scsi_run_starved_devs(shost);
> 
> 	spin_lock_irqsave(q->queue_lock, flags);
> 	if (cmd != NULL) {
> 
> 		/*
> 		 * For some reason, we are not done with this request.
> 		 * This happens for I/O errors in the middle of the request,
> 		 * in which case we need to request the blocks that come after
> 		 * the bad sector.
> 		 */
> 		cmd->request->special = cmd;
> 		if (blk_rq_tagged(cmd->request))
> 			blk_queue_end_tag(q, cmd->request);
> 
> 		/*
> 		 * set REQ_SPECIAL - we have a command
> 		 * clear REQ_DONTPREP - we assume the sg table has been
> 		 *	nuked so we need to set it up again.
> 		 */
> 		cmd->request->flags |= REQ_SPECIAL;
> 		cmd->request->flags &= ~REQ_DONTPREP;
> 		__elv_add_request(q, cmd->request, 0, 0);
> 	}
> 
> 	__blk_run_queue(q);
> 
> 	spin_unlock_irqrestore(q->queue_lock, flags);

I did not move starved or single_lun handling before the incomplete IO
retry, as the retry should have higher priority.

-- Patrick Mansfield

