public inbox for linux-scsi@vger.kernel.org
From: James Bottomley <James.Bottomley@steeleye.com>
To: Luben Tuikov <luben@splentec.com>
Cc: SCSI Mailing List <linux-scsi@vger.kernel.org>
Subject: Re: [PATCH / RFC] scsi_error handler update. (1/4)
Date: 14 Feb 2003 16:20:39 -0500	[thread overview]
Message-ID: <1045257640.1726.23.camel@mulgrave> (raw)
In-Reply-To: <3E4D44EB.1090402@splentec.com>

On Fri, 2003-02-14 at 14:35, Luben Tuikov wrote:

> Now, we have:
> 
> struct scsi_core {
> 	struct list_head entry;
> 
> 	struct list_head portal_list;
> 	spinlock_t       portal_list_lock;
> 	atomic_t         portal_list_count;
> 	...
> };
> 
> struct scsi_portal {
> 	struct list_head entry;
> 
> 	struct list_head target_list;
> 	spinlock_t       target_list_lock;
> 	atomic_t         target_list_count;
> 	...
> };
> 
> struct scsi_target {
> 	struct list_head entry;
> 
> 	struct list_head lun_list;
> 	spinlock_t       lun_list_lock;
> 	atomic_t         lun_list_count;
> 	...
> };
> 
> struct scsi_lun {
> 	struct list_head entry;
> 	
> 	/* pending execution status by the device server */
> 	struct list_head pending_cmd_q;
> 	spinlock_t       pending_cmd_q_lock;
> 	atomic_t         pending_cmd_q_count;
> 	...
> };
> 
> struct scsi_command {
> 	struct list_head entry;
> 	...
> };
> 
> Where each predecessor can contain 0/1:N of its successor.

I don't like this on several grounds:

1. You have a really complex locking hierarchy.  It may be
philosophical, but I regard locking hierarchies as evil.  The only good
they can possibly serve is if you give a clear demonstration that they
avoid hot lock contention.  Otherwise, they tend to create more problems
than they solve.

2. The data structures you propose don't match reality.  The reality is
that we will have to cope with SPI for a while yet, and it doesn't have
scsi_portal.

3. I don't see where the mid-layer has a use for the data.  By and
large, we queue at the LUN level and we control parameters at the host
level.  The only current times we need to even associate puns and luns
is for error handling (reset non-locality) and for certain times when
the host is negotiating transfer parameters.  The current sibling list
may be a bit ugly, but it serves its purpose.

> Points:
> 
> 1) If an object of type struct scsi_core exists, then this means
> that SCSI Core is loaded, thus its representation in sysfs (and SCSI
> Core itself, i.e. it represents itself... :-) ).
> ... this was the reason I wanted scsi_core in current-scsi-core,
> alas, no one saw it...

> ((Is anyone dreaming of multiple scsi_core's, say on a NUMA machine...?))
> 
> 2) struct scsi_portal represents a service delivery subsystem type,
> e.g. SPI, USB, FC, iSCSI, SAS, etc.  Nitty-gritty of the *actual*
> delivery subsystem should have their own sub-sub structure, since
> struct scsi_portal unifies them all for scsi_core and scsi_target's
> sake.
> 
> 3) struct scsi_target: this is an *important* point: those appear
> and disappear asynchronously at the scsi_portal's request!  That is,
> once a scsi_portal is registered with scsi_core and operational,
> it will do something like:
> 	/* async: new device appeared ... */
> 	struct scsi_target *t;
> 
> 	t = scsi_get_target(this_portal); /* get a struct from scsi_core */
> 	if (t) {
> 		this_portal_conf_target(portal_target_representation, t);
> 		scsi_register_target(t);
> 	}
> 
> The reason for this is that the service delivery subsystem *knows*
> how to discover targets, scsi_core does not -- SCSI architecture.
> New_scsi_core will discover luns present on targets.
> 
> This also means that a scsi_portal will have an interface exported
> via sysfs so that one can control how/when/where/etc targets are
> discovered -- this is important for FC and iSCSI people.
> 
> 4) struct scsi_lun: this is the actual device, and can be any
> kind of device at all.  It will have more command queues than this.
> But, the only queue where commands will actually ``pend'' is
> the pending device queue.  A done_q will most likely exist
> in scsi_core.  An incoming_q will exist in either the lun
> struct or the target struct, and will represent a command
> which is being worked upon, i.e. someone is currently setting
> the cdb, and other members; as soon as this is done, it will
> go on the pending_q and *then* queuecommand() is called;
> this fixes some races (been there, done that).
> 
> The locks could be a macro so that they could be either
> a spinlock or rw_lock, as I have it for my own mini-scsi-core.
> 
> Furthermore, using this layout there are no ether-pointers
> on which SCSI Core structurally depends.
> 
> The drawback is that LLDDs are not compatible with this
> new-scsi-core structure.
> 
> There are a few more quirks in scsi_command, but I'll leave those
> out for now, until I hear any kudos, comments and flames
> regarding this layout.
> 
> BTW, if you like it, please say so, *NOT* privately to me,
> but here publicly on this list -- this will not make you any
> less of a man! :-)
> 
> Happy Valentine's everyone,
> -- 
> Luben
> 
> 
> 




Thread overview: 47+ messages
2003-02-11  8:13 [PATCH / RFC] scsi_error handler update. (1/4) Mike Anderson
2003-02-11  8:15 ` [PATCH / RFC] scsi_error handler update. (2/4) Mike Anderson
2003-02-11  8:17   ` [PATCH / RFC] scsi_error handler update. (3/4) Mike Anderson
2003-02-11  8:19     ` [PATCH / RFC] scsi_error handler update. (4/4) Mike Anderson
2003-02-11 22:38     ` [PATCH / RFC] scsi_error handler update. (3/4) James Bottomley
2003-02-12  7:16       ` Mike Anderson
2003-02-12 14:26         ` Luben Tuikov
2003-02-12 14:37         ` James Bottomley
2003-02-12 22:34     ` James Bottomley
2003-02-13  8:24       ` Mike Anderson
2003-02-11 16:49 ` [PATCH / RFC] scsi_error handler update. (1/4) Luben Tuikov
2003-02-11 17:22   ` Mike Anderson
2003-02-11 19:05     ` Luben Tuikov
2003-02-11 20:14       ` Luben Tuikov
2003-02-11 21:14       ` Mike Anderson
     [not found]       ` <3E495862.3050709@splentec.com>
2003-02-11 21:20         ` Mike Anderson
2003-02-11 21:22           ` Luben Tuikov
2003-02-11 22:41             ` Christoph Hellwig
2003-02-12 20:10               ` Luben Tuikov
2003-02-12 20:46                 ` Christoph Hellwig
2003-02-12 21:23                   ` Mike Anderson
2003-02-12 22:15                     ` Luben Tuikov
2003-02-12 21:46                   ` Luben Tuikov
2003-02-13 15:47                     ` Christoph Hellwig
2003-02-13 18:55                       ` Luben Tuikov
2003-02-14  0:24                         ` Doug Ledford
2003-02-14 16:38                           ` Patrick Mansfield
2003-02-14 16:58                           ` Mike Anderson
2003-02-14 18:50                             ` Doug Ledford
2003-02-14 19:35                             ` Luben Tuikov
2003-02-14 21:20                               ` James Bottomley [this message]
2003-02-17 17:20                                 ` Luben Tuikov
2003-02-17 17:58                                   ` James Bottomley
2003-02-17 18:29                                     ` Luben Tuikov
2003-02-18  5:37                                       ` Andre Hedrick
2003-02-18 19:46                                         ` Luben Tuikov
2003-02-18 22:16                                           ` Andre Hedrick
2003-02-18 23:35                                             ` Luben Tuikov
2003-02-17 20:17                                   ` Doug Ledford
2003-02-17 20:19                                     ` Matthew Jacob
2003-02-17 21:12                                     ` Luben Tuikov
2003-02-17 17:35                                 ` Luben Tuikov
2003-02-14 21:27                               ` James Bottomley
2003-02-17 17:28                                 ` Luben Tuikov
2003-02-16  4:23                               ` Andre Hedrick
2003-02-11 18:00 ` Patrick Mansfield
2003-02-11 18:44   ` Mike Anderson
