From: Luben Tuikov <luben@splentec.com>
To: Patrick Mansfield <patmans@us.ibm.com>
Cc: linux-scsi@vger.kernel.org
Subject: Re: [PATCH] 3/7 consolidate single_lun code
Date: Fri, 28 Mar 2003 10:09:22 -0500 [thread overview]
Message-ID: <3E8465A2.7020501@splentec.com> (raw)
In-Reply-To: <20030327144326.A14457@beaverton.ibm.com>
Patrick Mansfield wrote:
> On Wed, Mar 26, 2003 at 05:05:09PM -0500, Luben Tuikov wrote:
>
>>Patrick Mansfield wrote:
>>
[This is your premise:]
>>>But the last single_lun LU that ran should have priority over any other
>>>LU's on that same target (well it should really get some slice of IO time,
>>>rather than allowing IO til it is no longer busy), and separately, the
>>>first starved device should have priority over all other starved devices,
>>>I can't do that (simply) with one list.
>>Given: consumer of starved_list takes from its front.
Here I'm quoting the single lun part of your premise above:
>>``the last single_lun LU that ran'' has priority over all others,
>>*IF* added at the front (LIFO), since any later one of the same
>>sort (single lun) will be added at the front.
>
>
> Yes if using the list_splice code, and if it is added even for cases when
> it is able to send IO. This would differ from the current code - given 3
> single_lun devices on a target, one can be prevented from sending IO (in
> addition to all cases where one single_lun device can prevent IO to
> other single_lun devices).
>
Here I'm quoting the second part of your premise above:
>>The *first* starved device DOES HAVE priority over all others,
>>*IF* added at the tail (FIFO), since all others of the same
>>sort (non-single lun) will *also* be added at the tail.
>
>
> Then it has priority over other starved devices, but not over single_lun
> devices put on the head. So a single_lun device put on the head of the
> starved_list could potentially prevent other non-single_lun devices on the
> same scsi_host from sending IO.
You see, all I'm doing is trying to come up with a suitable ADT
and algorithm for the premise which _you_ gave.
The problem is that you have no fixed arbitration criterion between single
lun and fully capable targets. If you did, it would be a small matter to
get the single list working as a priority queue.
I.e. what I'm asking is this: _even_ if you had 2 lists, one
for single lun and another for fully capable targets,
how would you arbitrate between the two in scsi_request_fn()?
Because if you can tell me how, then a single traversal of the
starved list, used as a priority queue as I described, will work.
What I'm trying to say is that there exists an ordering, i.e. an
insertion mechanism, for the problem at hand.
I.e. you know that the last finished single lun target is at the front,
the last finished fully capable target is at the tail, and
you can arbitrate any which way you want just from a single
list traversal:
- last finished lu on a single lu target (front)
- last finished lu on a fully cap. target (tail)
- first finished lu on a single lu target (last single lun, F->T)
- first finished lu on a fully cap. target (last f. cap. lun, T->F)
- any other starved lu (single lu or fully cap.; arb.)
> We could add another list_head for the single_lun, but I don't think it
> is worth the extra code, data, and effort.
Right. It is *not* worth it, mainly because I think that the task
can be accomplished with a single list.
>>>Also the same lock has to be used to protect the scsi_host->starved_list
>>>and scsi_device->starved_entry. In the above code the first
>>>list_splice_init above touches the shost->starved_devs->next->prev,
>>>corresponding to a scsi_device->starved_entry->prev, while holding
>>>starved_devs_lock but the following list_del_init is done holding the
>>>queue_lock.
>>
>>We're working with the local list now!
>>
>>So that starved_entry is in the local starved list now,
>>and shost->starved_list is EMPTY!
>>
>>On the ``following list_del_init'' dev->starved_entry is in
>>the local ``starved'' list!
>
>
> dev->starved_entry is still visible and used in scsi_request_fn, removing
> it from the starved list does not prevent scsi_request_fn from being
> called and referencing it (if we have two locks and no lock hierarchy in
> scsi_request_fn).
Well, this is just unfortunate. This ``design feature'' is not very
good. It is *still* my conviction that a scsi_cmnd has to have
only *one* list entry and belong to only *one* list at a time,
with that list representing its state. I think I'm repeating myself.
> I'm not against using the list_splice, it does not clean up the
> locking, but it probably fixes James issue (possible loop).
--
Luben
Thread overview: 34+ messages
2003-03-25 1:53 [PATCH] 0/7 per scsi_device queue lock patches Patrick Mansfield
2003-03-25 1:54 ` [PATCH] 1/7 starved changes - use a list_head for starved queue's Patrick Mansfield
2003-03-25 2:02 ` [PATCH] 2/7 add missing scsi_queue_next_request calls Patrick Mansfield
2003-03-25 2:02 ` [PATCH] 3/7 consolidate single_lun code Patrick Mansfield
2003-03-25 2:03 ` [PATCH] 4/7 cleanup/consolidate code in scsi_request_fn Patrick Mansfield
2003-03-25 2:03 ` [PATCH] 5/7 alloc a request_queue on each scsi_alloc_sdev call Patrick Mansfield
2003-03-25 2:03 ` [PATCH] 6/7 add and use a per-scsi_device queue_lock Patrick Mansfield
2003-03-25 2:04 ` [PATCH] 7/7 fix single_lun code for " Patrick Mansfield
2003-03-25 21:23 ` Luben Tuikov
2003-03-26 21:47 ` Patrick Mansfield
2003-03-26 22:12 ` Luben Tuikov
2003-03-25 21:03 ` [PATCH] 6/7 add and use a " Luben Tuikov
2003-03-26 21:33 ` Patrick Mansfield
2003-03-25 21:20 ` James Bottomley
2003-03-26 2:01 ` Patrick Mansfield
2003-03-27 16:09 ` James Bottomley
2003-03-28 0:30 ` Patrick Mansfield
2003-03-25 7:12 ` [PATCH] 5/7 alloc a request_queue on each scsi_alloc_sdev call Christoph Hellwig
2003-03-25 7:18 ` Jens Axboe
2003-03-25 21:32 ` [PATCH] 4/7 cleanup/consolidate code in scsi_request_fn Luben Tuikov
2003-03-26 0:58 ` Patrick Mansfield
2003-03-26 17:07 ` Luben Tuikov
2003-03-26 17:13 ` Patrick Mansfield
2003-03-26 17:25 ` Luben Tuikov
2003-03-25 20:36 ` [PATCH] 3/7 consolidate single_lun code Luben Tuikov
2003-03-26 19:11 ` Patrick Mansfield
2003-03-26 22:05 ` Luben Tuikov
2003-03-27 22:43 ` Patrick Mansfield
2003-03-28 15:09 ` Luben Tuikov [this message]
2003-03-28 20:06 ` Patrick Mansfield
2003-03-25 20:50 ` Luben Tuikov
2003-03-25 19:41 ` [PATCH] 2/7 add missing scsi_queue_next_request calls Luben Tuikov
2003-03-25 19:39 ` [PATCH] 1/7 starved changes - use a list_head for starved queue's Luben Tuikov
2003-03-27 16:14 ` [PATCH] 0/7 per scsi_device queue lock patches James Bottomley