Linux-NVME Archive on lore.kernel.org
From: Mohamed Khalfella <mkhalfella@purestorage.com>
To: Daniel Wagner <dwagner@suse.de>
Cc: Daniel Wagner <wagi@kernel.org>, Christoph Hellwig <hch@lst.de>,
	Sagi Grimberg <sagi@grimberg.me>, Keith Busch <kbusch@kernel.org>,
	Hannes Reinecke <hare@suse.de>,
	John Meneghini <jmeneghi@redhat.com>,
	randyj@purestorage.com, linux-nvme@lists.infradead.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH RFC 3/3] nvme: delay failover by command quiesce timeout
Date: Wed, 16 Apr 2025 06:39:09 -0700	[thread overview]
Message-ID: <20250416133909.GH1868505-mkhalfella@purestorage.com> (raw)
In-Reply-To: <22e48664-63f3-4cc0-8b99-f56e98204e5b@flourine.local>

On 2025-04-16 08:57:19 +0200, Daniel Wagner wrote:
> On Tue, Apr 15, 2025 at 05:17:38PM -0700, Mohamed Khalfella wrote:
> > Help me see this:
> > 
> > - nvme_failover_req() is the only place reqs are added to failover_list.
> > - nvme_decide_disposition() returns FAILOVER only if req has REQ_NVME_MPATH set.
> > 
> > How/where do admin requests get REQ_NVME_MPATH set?
> 
> Admin commands don't set REQ_NVME_MPATH. This is what the current code
> does and I have deliberately decided not to touch this with this RFC.
> 
> Given how much discussion the CQT/CCR feature triggers, I don't think
> it's a good idea to add this topic to this discussion.
> 

The point is that holding requests at nvme_failover_req() does not cover
admin requests. Do you plan to add support for holding admin requests in
the next revision of these patches?

> > > > - What about requests that do not go through nvme_failover_req(), like
> > > >   passthrough requests, do we not want to hold these requests until it
> > > >   is safe for them to be retried?
> > > 
> > > Passthrough commands should fail immediately. Userland is in charge here,
> > > not the kernel. At least this is what should happen here.
> > > 
> > > > - In case of controller reset or delete if nvme_disable_ctrl()
> > > >   successfully disables the controller, then we do not want to add
> > > >   canceled requests to failover_list, right? Does this implementation
> > > >   consider this case?
> > > 
> > > Not sure. I've tested a few things but I am pretty sure this RFC is far
> > > from being complete.
> > 
> > I think it does not, and maybe it should handle this case. Otherwise every
> > controller reset/delete will end up holding requests unnecessarily.
> 
> Yes, this is one of the problems with the failover queue. It could be
> solved by really starting to track the delay timeout for each command.
> But that is a lot of logic and complexity. Thus, during the discussion
> at LSFMM, everyone, including me, said the failover queue idea should
> not be our first choice.

Got it. I assume this will be addressed in the next revision?



Thread overview: 41+ messages
2025-03-24 12:07 [PATCH RFC 0/3] nvme: add support for command quiesce timeout Daniel Wagner
2025-03-24 12:07 ` [PATCH RFC 1/3] nvmet: add command quiesce time Daniel Wagner
2025-04-01  9:33   ` Hannes Reinecke
2025-04-10  9:00   ` Mohamed Khalfella
2025-04-16 11:37     ` Daniel Wagner
2025-03-24 12:07 ` [PATCH RFC 2/3] nvme: store cqt value into nvme ctrl object Daniel Wagner
2025-04-01  9:34   ` Hannes Reinecke
2025-03-24 12:07 ` [PATCH RFC 3/3] nvme: delay failover by command quiesce timeout Daniel Wagner
2025-04-01  9:37   ` Hannes Reinecke
2025-04-15 12:00     ` Daniel Wagner
2025-04-01 13:32   ` Nilay Shroff
2025-04-15 12:05     ` Daniel Wagner
2025-04-10  8:51   ` Mohamed Khalfella
2025-04-14 22:28     ` Sagi Grimberg
2025-04-15 12:11       ` Daniel Wagner
2025-04-15 21:07         ` Sagi Grimberg
2025-04-15 23:02           ` Randy Jennings
2025-04-15 23:35             ` Sagi Grimberg
2025-04-15 23:57               ` Randy Jennings
2025-04-16 22:15                 ` Sagi Grimberg
2025-04-17  0:47                   ` Randy Jennings
2025-04-15 12:17     ` Daniel Wagner
2025-04-15 22:56       ` Randy Jennings
2025-04-16  6:39         ` Daniel Wagner
2025-04-16  0:17       ` Mohamed Khalfella
2025-04-16  6:57         ` Daniel Wagner
2025-04-16 13:39           ` Mohamed Khalfella [this message]
2025-04-16  0:40       ` Mohamed Khalfella
2025-04-16  8:30         ` Daniel Wagner
2025-04-16 13:53           ` Mohamed Khalfella
2025-04-16 22:21             ` Sagi Grimberg
2025-04-16 22:59               ` Mohamed Khalfella
2025-04-17  7:28                 ` Hannes Reinecke
2025-04-10 16:07   ` Jiewei Ke
2025-04-10 17:13   ` Jiewei Ke
2025-04-13 22:03   ` Sagi Grimberg
2025-04-16  8:51     ` Daniel Wagner
2025-04-16  0:23   ` Mohamed Khalfella
2025-04-16 11:33     ` Daniel Wagner
     [not found] <8F2489FD-1663-4A52-A50B-F15046AC2878@163.com>
2025-04-15 12:34 ` Daniel Wagner
2025-04-15 15:08   ` Jiewei Ke
