linux-scsi.vger.kernel.org archive mirror
From: James Bottomley <jejb@linux.vnet.ibm.com>
To: Matthew Wilcox <willy@infradead.org>, Hannes Reinecke <hare@suse.de>
Cc: Douglas Gilbert <dgilbert@interlog.com>,
	linux-scsi@vger.kernel.org, martin.petersen@oracle.com
Subject: Re: [RFC v2 1/6] scsi: xarray hctl
Date: Tue, 26 May 2020 12:28:19 -0700
Message-ID: <1590521299.11810.45.camel@linux.vnet.ibm.com>
In-Reply-To: <20200526183920.GI17206@bombadil.infradead.org>

On Tue, 2020-05-26 at 11:39 -0700, Matthew Wilcox wrote:
> On Tue, May 26, 2020 at 07:53:35PM +0200, Hannes Reinecke wrote:
> > On 5/26/20 4:27 PM, Matthew Wilcox wrote:
> > > Ah, OK.  I think for these arrays you'd be better off accepting
> > > the cost of an extra 4 bytes in the struct scsi_device rather
> > > than the cost of storing the scsi_device at the LUN.
> > > 
> > > Let's just work an example where you have a 64-bit LUN with 4
> > > ranges, each of 64 entries (this is almost a best-case scenario
> > > for the XArray). [0,63], 2^62+[0,63], 2^63+[0,63],
> > > 2^63+2^62+[0,63].
> > > 
> > > If we store them sequentially in an allocating XArray, we take up
> > > 256 * 4 bytes = 1kB extra space in the scsi_device.  The XArray
> > > will allocate four nodes plus one node to hold the four nodes,
> > > which is 5 * 576 bytes (2880 bytes), for a total of 3904 bytes.
> > > 
> > > Storing them at their LUN will allocate a top-level node which
> > > covers bits 60-66, then four nodes each covering bits 54-59,
> > > another four nodes covering bits 48-53, four nodes for 42-47,
> > > ...  I make it 41 nodes, coming to 23616 bytes.  And the pointer
> > > chase to get to each LUN is ten deep.  It'll mostly be cached,
> > > but still ...
> > > 
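A minimal sketch of the two layouts being compared, using the kernel
XArray API; the array and function names are illustrative only, not
taken from the RFC patch:

#include <linux/xarray.h>
#include <scsi/scsi_device.h>

static DEFINE_XARRAY_ALLOC(sdevs_dense);   /* option A: allocated ids */
static DEFINE_XARRAY(sdevs_by_lun);        /* option B: keyed by LUN  */

static int store_sdev_sketch(struct scsi_device *sdev, u64 lun)
{
	u32 id;
	int err;

	/*
	 * Option A: xa_alloc() packs entries at the lowest free index,
	 * so four ranges of 64 LUNs stay in a handful of nodes; the
	 * price is ~4 bytes per scsi_device to remember "id".
	 */
	err = xa_alloc(&sdevs_dense, &id, sdev, xa_limit_32b, GFP_KERNEL);
	if (err)
		return err;

	/*
	 * Option B: storing at the raw 64-bit LUN spreads entries over
	 * a tree roughly ten levels deep for the high LUN ranges above.
	 */
	return xa_err(xa_store(&sdevs_by_lun, lun, sdev, GFP_KERNEL));
}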
> > 
> > Which is my worry, too.
> > In the end we have a massively large array space (128-bit if we
> > take the numbers as they stand today), of which only a _tiny_
> > fraction is actually allocated.
> 
> In your scheme, yes.  Do you ever need to look up a scsi_device from
> a scsi_host with only the C:T:L as a key, in a situation where speed
> matters?  Everything I've seen so far is about iterating every
> sdev in an shost, but maybe this is a "when you have a hammer"
> situation.
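
A sketch of the iteration pattern referred to here, assuming a
hypothetical "sdev_xa" XArray member in Scsi_Host (today's host
structure has no such field):

#include <linux/xarray.h>
#include <scsi/scsi_device.h>
#include <scsi/scsi_host.h>

/* Assumes a hypothetical XArray member "sdev_xa" in Scsi_Host. */
static void for_each_sdev_sketch(struct Scsi_Host *shost,
				 void (*fn)(struct scsi_device *))
{
	struct scsi_device *sdev;
	unsigned long idx;

	/* xa_for_each() visits only present entries, so a sparse key
	 * space costs nothing extra at iteration time. */
	xa_for_each(&shost->sdev_xa, idx, sdev)
		fn(sdev);
}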

I don't believe we ever do.  We've arranged pretty much everything so
the SCSI device is immediately obtainable from the enclosing structure
(sysfs, rw, ioctl, interrupt from device, etc.).  The only time we do a
direct lookup by H:C:T:L is in the very old SCSI proc API, which we're
trying to deprecate.
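
For reference, that keyed lookup corresponds to today's
scsi_device_lookup(); a rough sketch of how a legacy caller uses it
(the wrapper name is illustrative):

#include <linux/errno.h>
#include <scsi/scsi_device.h>
#include <scsi/scsi_host.h>

/* Roughly what the legacy proc interface does for a keyed lookup. */
static int act_on_hctl_sketch(struct Scsi_Host *shost, uint channel,
			      uint id, u64 lun)
{
	struct scsi_device *sdev;

	/* scsi_device_lookup() scans the host's device list under the
	 * host lock and takes a reference on a match. */
	sdev = scsi_device_lookup(shost, channel, id, lun);
	if (!sdev)
		return -ENODEV;

	/* ... act on sdev ... */

	scsi_device_put(sdev);
	return 0;
}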

James

Thread overview: 25+ messages
2020-05-24 15:58 [RFC v2 0/6] scsi: rework mid-layer with xarrays Douglas Gilbert
2020-05-24 15:58 ` [RFC v2 1/6] scsi: xarray hctl Douglas Gilbert
2020-05-25  6:57   ` Hannes Reinecke
2020-05-25 16:30     ` Douglas Gilbert
2020-05-25 17:40       ` Matthew Wilcox
2020-05-26  2:01         ` Douglas Gilbert
2020-05-26  3:01           ` Matthew Wilcox
2020-05-26  7:24           ` Hannes Reinecke
2020-05-26  6:21         ` Hannes Reinecke
2020-05-26 14:27           ` Matthew Wilcox
2020-05-26 17:53             ` Hannes Reinecke
2020-05-26 18:39               ` Matthew Wilcox
2020-05-26 19:28                 ` James Bottomley [this message]
2020-05-26 19:10               ` Douglas Gilbert
2020-05-26 20:27                 ` Douglas Gilbert
2020-05-27  2:53                   ` Douglas Gilbert
2020-05-24 15:58 ` [RFC v2 2/6] scsi: xarray, iterations on scsi_target Douglas Gilbert
2020-05-25  7:06   ` Hannes Reinecke
2020-05-24 15:58 ` [RFC v2 3/6] scsi: xarray mark sdev_del state Douglas Gilbert
2020-05-25  7:00   ` Hannes Reinecke
2020-05-25 16:47     ` Douglas Gilbert
2020-05-24 15:58 ` [RFC v2 4/6] scsi: improve scsi_device_lookup Douglas Gilbert
2020-05-25  7:07   ` Hannes Reinecke
2020-05-24 15:58 ` [RFC v2 5/6] scsi: add __scsi_target_lookup Douglas Gilbert
2020-05-24 15:58 ` [RFC v2 6/6] scsi: count number of targets Douglas Gilbert

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the mbox file for this message, import it into your mail client,
  and reply-to-all from there.

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=1590521299.11810.45.camel@linux.vnet.ibm.com \
    --to=jejb@linux.vnet.ibm.com \
    --cc=dgilbert@interlog.com \
    --cc=hare@suse.de \
    --cc=jejb@linux.ibm.com \
    --cc=linux-scsi@vger.kernel.org \
    --cc=martin.petersen@oracle.com \
    --cc=willy@infradead.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

Be sure your reply has a Subject: header at the top and a blank line
before the message body.