linux-kernel.vger.kernel.org archive mirror
From: Christoph Hellwig <hch@lst.de>
To: Joel Granados <joel.granados@kernel.org>
Cc: Keith Busch <kbusch@kernel.org>, Jens Axboe <axboe@kernel.dk>,
	Christoph Hellwig <hch@lst.de>, Sagi Grimberg <sagi@grimberg.me>,
	Klaus Jensen <k.jensen@samsung.com>,
	linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH RFC 0/8] nvme: Add Controller Data Queue to the nvme driver
Date: Mon, 14 Jul 2025 15:02:31 +0200	[thread overview]
Message-ID: <20250714130230.GA7752@lst.de> (raw)
In-Reply-To: <20250714-jag-cdq-v1-0-01e027d256d5@kernel.org>

On Mon, Jul 14, 2025 at 11:15:31AM +0200, Joel Granados wrote:
> Motivation
> ==========
> The main motivation is to enable Controller Data Queues as described in
> revision 2.2 of the NVMe base specification. This series places the
> kernel as an intermediary between the NVMe controller producing CDQ
> entries and the user space process consuming them. It is general enough
> to encompass different use cases that require controller-initiated
> communication delivered outside the regular I/O traffic streams (such
> as LBA tracking).

That's rather blurbish.  The only use case for CDQs in NVMe 2.2 is
tracking of dirty LBAs for live migration, and the live migration
feature in 2.2 is completely broken because the hyperscalers wanted
to win a point.  So for CDQs to be useful in Linux we'll need the
proper live migration support that is still under heavy development.
With that I'd very much expect the kernel to manage the CDQs just like
any other queue, rather than expose them through a random user ioctl.
So what would be the use case for a user-controlled CDQ?



Thread overview: 12+ messages
2025-07-14  9:15 [PATCH RFC 0/8] nvme: Add Controller Data Queue to the nvme driver Joel Granados
2025-07-14  9:15 ` [PATCH RFC 1/8] nvme: Add CDQ command definitions for contiguous PRPs Joel Granados
2025-07-14  9:15 ` [PATCH RFC 2/8] nvme: Add cdq data structure to nvme_ctrl Joel Granados
2025-07-14  9:15 ` [PATCH RFC 3/8] nvme: Add file descriptor to read CDQs Joel Granados
2025-07-14  9:15 ` [PATCH RFC 4/8] nvme: Add function to create a CDQ Joel Granados
2025-07-14  9:15 ` [PATCH RFC 5/8] nvme: Add function to delete CDQ Joel Granados
2025-07-14  9:15 ` [PATCH RFC 6/8] nvme: Add a release ops to cdq file ops Joel Granados
2025-07-14  9:15 ` [PATCH RFC 7/8] nvme: Add Controller Data Queue (CDQ) ioctl command Joel Granados
2025-07-14  9:15 ` [PATCH RFC 8/8] nvme: Connect CDQ ioctl to nvme driver Joel Granados
2025-07-14 13:02 ` Christoph Hellwig [this message]
2025-07-18 11:33   ` [PATCH RFC 0/8] nvme: Add Controller Data Queue to the " Joel Granados
2025-07-21  6:26     ` Christoph Hellwig
