linux-nvme.lists.infradead.org archive mirror
From: hare@suse.de (Hannes Reinecke)
Subject: [PATCH 2/2] nvme-multipath: round-robin I/O policy
Date: Wed, 21 Nov 2018 12:09:06 +0100	[thread overview]
Message-ID: <b3adb4fe-486f-481d-4dc9-13718d90e91d@suse.de> (raw)
In-Reply-To: <8a583536-151e-6f68-f4f9-98d8c4b853dd@broadcom.com>

On 11/20/18 9:41 PM, James Smart wrote:
> 
> 
> On 11/20/2018 12:30 PM, Hannes Reinecke wrote:
>>> So you never consider non-optimized paths here?
>>>
>> Yes, correct. Two reasons:
>> - We try to optimize for performance. Selecting a non-optimized path 
>> by definition doesn't provide that.
>> - The selection algorithm becomes overly complex, and will require an 
>> even more complex mechanism to figure out at which point the use of a 
>> non-optimized path becomes feasible.
> 
> so what do you do in an out-of-optimized-paths scenario? I would 
> assume the same logic should be applicable, only looking for 
> non-optimized paths. And if both optimized and non-optimized are 
> out of paths, then it should revert to the same logic, just with 
> inaccessible paths. I believe the last sentence is required for 
> current device(s) out there to induce storage failover.
> 
The idea is to do round-robin load balancing for optimal paths, and fall 
back to a single path once we run out of optimal paths.

As already explained, one could implement a round-robin algorithm for 
non-optimal paths, too, but the question then arises of how to handle 
scenarios where we have both.
Clearly, in the optimal case we should be scheduling between optimal 
paths only.
But what happens when the optimal paths go down?
Shall we wait until all optimal paths are down, and only then switch 
over to the non-optimal ones?
But when doing that we might end up in a situation where we have only 
one optimal path, and several non-optimal ones.
And as the underlying rationale was that several paths provide better 
performance than a single one, there will be a cut-over point where 
several non-optimal paths might provide better performance than a 
single optimal one.
But that point would be pretty much implementation-defined, and hard to 
quantify.

Hence I settled on the rather trivial algorithm of considering optimal 
paths only.

Cheers,

Hannes


Thread overview: 15+ messages
2018-11-15 12:29 [PATCH 0/2] nvme-multipath: round-robin I/O policy Hannes Reinecke
2018-11-15 12:29 ` [PATCH 1/2] nvme-multipath: add 'iopolicy' subsystem attribute Hannes Reinecke
2018-11-15 17:35   ` Sagi Grimberg
2018-11-16  8:07     ` Hannes Reinecke
2018-11-15 12:29 ` [PATCH 2/2] nvme-multipath: round-robin I/O policy Hannes Reinecke
2018-11-20 16:42   ` Christoph Hellwig
2018-11-20 20:30     ` Hannes Reinecke
2018-11-21  8:28       ` Christoph Hellwig
2018-11-21 11:24         ` Hannes Reinecke
     [not found]       ` <8a583536-151e-6f68-f4f9-98d8c4b853dd@broadcom.com>
2018-11-21 11:09         ` Hannes Reinecke [this message]
2018-11-22 13:52         ` Hannes Reinecke
2018-11-16  8:26 ` [PATCH 0/2] " Christoph Hellwig
2018-11-20 16:02   ` Hannes Reinecke
2018-11-20 16:19     ` Christoph Hellwig
2018-12-05 20:05 ` Ewan D. Milne
