public inbox for linux-scsi@vger.kernel.org
From: Patrick Mansfield <patmans@us.ibm.com>
To: James Bottomley <James.Bottomley@steeleye.com>
Cc: "Philip R. Auld" <pauld@egenera.com>,
	Simon Kelley <simon@thekelleys.org.uk>,
	SCSI Mailing List <linux-scsi@vger.kernel.org>,
	dm-devel@sistina.com
Subject: Re: Is there a grand plan for FC failover?
Date: Wed, 28 Jan 2004 16:55:34 -0800	[thread overview]
Message-ID: <20040128165534.A9202@beaverton.ibm.com> (raw)
In-Reply-To: <1075328066.2534.10.camel@mulgrave>; from James.Bottomley@steeleye.com on Wed, Jan 28, 2004 at 05:14:26PM -0500

On Wed, Jan 28, 2004 at 05:14:26PM -0500, James Bottomley wrote:
> On Wed, 2004-01-28 at 15:47, Patrick Mansfield wrote:
> > [cc-ing dm-devel]
> > 
> > My two main issues with dm multipath versus scsi core multipath are:
> > 
> > 1) It does not handle character devices.
> 
> Multi-path character devices are pretty much corner cases.  It's not
> clear to me that you need to handle them in kernel at all.  Things like
> multi-path tape often come with an application that's perfectly happy to
> take the presentation of two or more tape devices.  

I have not seen such applications. Standard applications like tar and cpio
are not going to work well.

If you plug a single-ported tape drive or other SCSI device into a fibre
channel SAN, it will show up multiple times; the hardware itself need not
be multiported.

> Do we have a
> character device example we need to support as a single device?

Not that I know of, but I have not worked in this area recently. I assume
there are also fibre attached media changers.

BTW, we need some sort of udev rules so we can have a multi-path device
(sd part, not dm part) actually show up multiple times.
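Something along these lines might do it, purely as a sketch -- the match
keys and naming scheme here are illustrative, not a tested rule:

```
# Hypothetical udev rule: give each SCSI path to the same device its own
# persistent node, keyed by the kernel name plus a per-device identifier.
# Syntax and the ID_SERIAL variable are assumptions, not verified.
KERNEL=="sd*", SUBSYSTEM=="block", \
    SYMLINK+="disk/by-path/%k-id-$env{ID_SERIAL}"
```

The idea being that each sd node a path produces stays visible and
addressable, rather than being hidden behind a single dm node.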

> > 2) It does not have the information available about the state of the
> > scsi_device or scsi_host (for path selection), or about the elevator.
> 
> Well, this is one of those abstraction case things.  Can we make the
> information generic enough that the pathing layer makes the right
> decisions without worrying about what the underlying internals are? 

I don't think the current interfaces and passing up error codes will be
enough. For example: a QUEUE FULL on a given path (aka scsi_device) when
there is no other IO on that device could lead to starvation, similar to
one node in a cluster starving the other nodes out.

Limiting IO via some sort of queue_depth in dm would help solve this
particular problem, but there is nothing in place today for dm to have its
own request queue or to be request based. Limiting the number of bios to
an arbitrary value would suck, and sdev->queue_depth is not visible to dm
today.

> That's where enhancements to the fastfail layer come in.  I believe we
> can get the fastfail information to the point where we can use it to
> make good decisions regardless of underlying transport (or even
> subsystem).

> > If we end up passing all the scsi information up to dm, and it does the
> > same things that we already do in scsi (or in block), what is the point of
> > putting the code into a separate layer?
> 
> It's for interpretation by those modular add-ons that are allowed to
> cater to specific devices.

I'm not sure what you mean - adding code or data that is only ever used
by dm is wasted if you're not using dm.

> > More scsi fastfail like code is still needed - probably for all the cases
> > where scsi_dev_queue_ready and scsi_host_queue_ready return 0 - and more.
> > For example, should we somehow make sdev->queue_depth available to dm?
> 
> I agree.  We only have the basics at the moment.  Expanding the error
> indications is a necessary next step.

Yes, I was looking into this for use with changes Mike C is working on -
pass up an error via end_that_request_first or such.

> We had the "where does the elevator go" discussion at the OLS bof.  I
> think I heard agreement that the current situation of between dm and
> block is suboptimal and that we'd like a true coalescing elevator above
> dm with a vestigial one for the mid-layer to use for queueing below.  I
> think this is a requirement for dm multipath to work well, but it's not
> a requirement for it actually to work.

If the performance is bad enough, it doesn't matter if it works.

-- Patrick Mansfield


Thread overview: 28+ messages
2004-01-26 14:18 Is there a grand plan for FC failover? Simon Kelley
2004-01-26 15:37 ` James Bottomley
2004-01-28 15:02   ` Philip R. Auld
2004-01-28 16:57     ` James Bottomley
2004-01-28 18:00       ` Philip R. Auld
2004-01-28 20:47         ` Patrick Mansfield
2004-01-28 22:14           ` James Bottomley
2004-01-29  0:55             ` Patrick Mansfield [this message]
2004-01-30 19:48               ` [dm-devel] " Joe Thornber
2004-01-31  9:30                 ` Jens Axboe
2004-01-31 16:59                   ` Philip R. Auld
2004-01-31 17:42                     ` Jens Axboe
2004-02-12 15:17                       ` Philip R. Auld
2004-02-12 15:28                         ` Arjan van de Ven
2004-02-12 16:03                           ` Philip R. Auld
2004-01-28 22:37         ` Mike Christie
2004-01-29 15:24           ` Philip R. Auld
2004-01-29 16:00             ` James Bottomley
2004-01-29 23:25               ` Mike Christie
  -- strict thread matches above, loose matches on Subject: below --
2004-01-28 21:02 Smart, James
2004-01-28 22:16 ` James Bottomley
2004-01-29 14:49   ` Philip R. Auld
2004-01-29 15:05     ` James Bottomley
2004-01-29 17:35 Smart, James
2004-01-29 18:31 ` Mike Anderson
2004-01-29 18:31 ` James Bottomley
2004-01-29 18:41 Smart, James
2004-01-29 19:37 Smart, James
