From: Luben Tuikov <luben@splentec.com>
To: Patrick Mansfield <patmans@us.ibm.com>
Cc: linux-scsi@vger.kernel.org
Subject: Re: [PATCH] 3/7 consolidate single_lun code
Date: Wed, 26 Mar 2003 17:05:09 -0500
Message-ID: <3E822415.2080306@splentec.com>
In-Reply-To: <20030326111146.A3548@beaverton.ibm.com>
Patrick Mansfield wrote:
>
> But the last single_lun LU that ran should have priority over any other
> LU on that same target (really, it should get some slice of IO time
> rather than being allowed to issue IO until it is no longer busy), and,
> separately, the first starved device should have priority over all other
> starved devices; I can't do that (simply) with one list.
>
> single_lun devices are likely slow (CDROM), and we really don't want to
> give them priority over other starved devices.
So, using the starved_list as a priority queue will not work?
Given: the consumer of the starved_list takes from its front.
``The last single_lun LU that ran'' has priority over all others
*IF* it is added at the front (LIFO), since any later one of the
same sort (single_lun) will also be added at the front.
The *first* starved device DOES have priority over all others
*IF* it is added at the tail (FIFO), since all others of the same
sort (non-single_lun) will *also* be added at the tail.
So, for single_lun devices our priority queue (the starved_list)
behaves as LIFO, and for non-single_lun devices it behaves as FIFO.
The bottom line is that you get one universal logic for single_lun
and non-single_lun devices simply by choosing where you insert
into the starved list.
The insertion is, of course, done in scsi_request_fn.
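Something like this untested sketch (the function name and the
single_lun flag are my inventions; the list/lock/member names just
follow your code quoted below):

	static void starve_device(struct Scsi_Host *shost,
				  struct scsi_device *sdev)
	{
		unsigned long flags;

		spin_lock_irqsave(&shost->starved_devs_lock, flags);
		if (list_empty(&sdev->starved_entry)) {
			if (sdev->single_lun)
				/* LIFO: the last single_lun LU that
				 * ran wins. */
				list_add(&sdev->starved_entry,
					 &shost->starved_devs);
			else
				/* FIFO: the first starved device wins. */
				list_add_tail(&sdev->starved_entry,
					      &shost->starved_devs);
		}
		spin_unlock_irqrestore(&shost->starved_devs_lock, flags);
	}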
(cont'd)
>
>> static inline void scsi_run_starved_devs(struct Scsi_Host *shost)
>> {
>> 	unsigned long flags;
>> 	LIST_HEAD(starved);
>>
>> 	/* Move the host's starved list onto a local, private list. */
>> 	spin_lock_irqsave(&shost->starved_devs_lock, flags);
>> 	list_splice_init(&shost->starved_devs, &starved);
>> 	spin_unlock_irqrestore(&shost->starved_devs_lock, flags);
>>
>> 	while (!list_empty(&starved)) {
>> 		struct scsi_device *dev;
>>
>> 		/* Stop if the host cannot take more commands now. */
>> 		if (shost->host_blocked || shost->host_self_blocked ||
>> 		    (shost->can_queue > 0 &&
>> 		     shost->host_busy >= shost->can_queue))
>> 			break;
>>
>> 		dev = list_entry(starved.next, struct scsi_device,
>> 				 starved_entry);
>> 		list_del_init(&dev->starved_entry);
>> 		spin_lock_irqsave(dev->request_queue->queue_lock, flags);
>> 		__blk_run_queue(dev->request_queue);
>> 		spin_unlock_irqrestore(dev->request_queue->queue_lock,
>> 				       flags);
>> 	}
>>
>> 	/* Put anything we could not run back on the host's list. */
>> 	if (!list_empty(&starved)) {
>> 		spin_lock_irqsave(&shost->starved_devs_lock, flags);
>> 		list_splice_init(&starved, &shost->starved_devs);
>> 		spin_unlock_irqrestore(&shost->starved_devs_lock, flags);
>> 	}
>> }
>
>
> I did not take the list_splice approach mainly because I wanted to
> allow multiple CPUs to run the starved queues (in the host_blocked
> cases, not can_queue). I don't have any data to back this up, and it
> probably does not matter.
>
> When the can_queue limit is hit, with continuous IO across multiple LUs
> on a host, we will (likely) send only one more IO (run one starved
> queue) and stop; the last few lines of the above code would always be
> executed.
>
> Also, the same lock has to be used to protect the scsi_host's
> starved_devs list and the scsi_device's starved_entry. In the above
> code, the first list_splice_init touches shost->starved_devs.next->prev,
> which is some scsi_device's starved_entry.prev, while holding
> starved_devs_lock, but the following list_del_init is done holding the
> queue_lock.
We're working with the local list now!
After the splice, that starved_entry is on the local ``starved''
list, and shost->starved_devs is EMPTY, so no other CPU can reach it.
By the time of the ``following list_del_init'', dev->starved_entry is
on the local ``starved'' list, which is why no lock is needed there.
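To spell it out (this is just list_splice_init()'s documented
behaviour from include/linux/list.h, nothing specific to the patch):

	LIST_HEAD(starved);	/* on-stack, private to this code path */

	spin_lock_irqsave(&shost->starved_devs_lock, flags);
	list_splice_init(&shost->starved_devs, &starved);
	spin_unlock_irqrestore(&shost->starved_devs_lock, flags);

	/*
	 * list_splice_init() re-initializes the source head, so
	 * list_empty(&shost->starved_devs) is true here, and every
	 * spliced starved_entry is reachable only through the local
	 * list; list_del_init() on it needs no starved_devs_lock.
	 */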
--
Luben