public inbox for linux-scsi@vger.kernel.org
From: Patrick Mansfield <patmans@us.ibm.com>
To: Luben Tuikov <luben@splentec.com>
Cc: linux-scsi@vger.kernel.org
Subject: Re: [PATCH] 3/7 consolidate single_lun code
Date: Fri, 28 Mar 2003 12:06:44 -0800	[thread overview]
Message-ID: <20030328120644.A20061@beaverton.ibm.com> (raw)
In-Reply-To: <3E8465A2.7020501@splentec.com>; from luben@splentec.com on Fri, Mar 28, 2003 at 10:09:22AM -0500

On Fri, Mar 28, 2003 at 10:09:22AM -0500, Luben Tuikov wrote:
> Patrick Mansfield wrote:
> > On Wed, Mar 26, 2003 at 05:05:09PM -0500, Luben Tuikov wrote:

> You see, all I'm doing is trying to come up with a suitable ADT
> and algorithm for the premise which _you_ gave.
> 
> The problem is that you have no fixed arbitration criterion between single
> lun and fully capable targets.  If you did, it would be a small matter to
> get the single list working as a priority queue.
> 
> I.e. what I'm asking is this: _even_ if you had 2 lists, one
> for single lun and another for fully capable targets,
> how would you arbitrate between the two in scsi_request_fn()?

OK, I see your point, and I don't have an answer.

So we should be able to put single_lun sdevs on the starved_list and
maintain the same priority (current scsi-locking-2.5 with all 7 patches
applied runs single_lun queues first, then runs the starved queues; it puts
all devices on the starved_list if can_queue or other host limits are hit;
no device is put on the starved_list unless it is unable to send IO).

I'm working on getting rid of the lock hierarchy now, so I would rather not
touch any more of the single_lun code (I would rather such devices went
away). If you or anyone else can submit a patch (against scsi-lock-2.5) for
the single_lun code to use the starved_list, that would be nice :)

I tested the single_lun code by booting with scsi_dev_flags=vend:prod:0x10
for some of the disks on my system; alternatively, you can modify the
scsi_scan.c table.

I'm not sure we can ever prioritize the starved_list (whether it holds
sdevs or their request functions) without locking out either the single
lun devices or the starved devices.

We might need a per-target or per-host queue (of requests, not of sdevs)
to fix it properly. A per-target (or per-host?) queue might also allow an
easier fix to prevent one single_lun device from starving all the others.
I would rather treat that as broken hardware or a bad configuration; the
fix is to get rid of the single_lun hardware, or to attach only one such
target to an adapter.

In any case, the starved_list patch is an improvement over the previous
handling of the starved cases, and for the single_lun cases we should be
no worse off than before.

-- Patrick Mansfield


Thread overview: 34+ messages
2003-03-25  1:53 [PATCH] 0/7 per scsi_device queue lock patches Patrick Mansfield
2003-03-25  1:54 ` [PATCH] 1/7 starved changes - use a list_head for starved queue's Patrick Mansfield
2003-03-25  2:02   ` [PATCH] 2/7 add missing scsi_queue_next_request calls Patrick Mansfield
2003-03-25  2:02     ` [PATCH] 3/7 consolidate single_lun code Patrick Mansfield
2003-03-25  2:03       ` [PATCH] 4/7 cleanup/consolidate code in scsi_request_fn Patrick Mansfield
2003-03-25  2:03         ` [PATCH] 5/7 alloc a request_queue on each scsi_alloc_sdev call Patrick Mansfield
2003-03-25  2:03           ` [PATCH] 6/7 add and use a per-scsi_device queue_lock Patrick Mansfield
2003-03-25  2:04             ` [PATCH] 7/7 fix single_lun code for " Patrick Mansfield
2003-03-25 21:23               ` Luben Tuikov
2003-03-26 21:47                 ` Patrick Mansfield
2003-03-26 22:12                   ` Luben Tuikov
2003-03-25 21:03             ` [PATCH] 6/7 add and use a " Luben Tuikov
2003-03-26 21:33               ` Patrick Mansfield
2003-03-25 21:20             ` James Bottomley
2003-03-26  2:01               ` Patrick Mansfield
2003-03-27 16:09                 ` James Bottomley
2003-03-28  0:30                   ` Patrick Mansfield
2003-03-25  7:12           ` [PATCH] 5/7 alloc a request_queue on each scsi_alloc_sdev call Christoph Hellwig
2003-03-25  7:18             ` Jens Axboe
2003-03-25 21:32         ` [PATCH] 4/7 cleanup/consolidate code in scsi_request_fn Luben Tuikov
2003-03-26  0:58           ` Patrick Mansfield
2003-03-26 17:07             ` Luben Tuikov
2003-03-26 17:13               ` Patrick Mansfield
2003-03-26 17:25                 ` Luben Tuikov
2003-03-25 20:36       ` [PATCH] 3/7 consolidate single_lun code Luben Tuikov
2003-03-26 19:11         ` Patrick Mansfield
2003-03-26 22:05           ` Luben Tuikov
2003-03-27 22:43             ` Patrick Mansfield
2003-03-28 15:09               ` Luben Tuikov
2003-03-28 20:06                 ` Patrick Mansfield [this message]
2003-03-25 20:50       ` Luben Tuikov
2003-03-25 19:41     ` [PATCH] 2/7 add missing scsi_queue_next_request calls Luben Tuikov
2003-03-25 19:39   ` [PATCH] 1/7 starved changes - use a list_head for starved queue's Luben Tuikov
2003-03-27 16:14 ` [PATCH] 0/7 per scsi_device queue lock patches James Bottomley
