From: Jason Gunthorpe <jgg@ziepe.ca>
To: Joel Granados <joel.granados@kernel.org>
Cc: Keith Busch <kbusch@kernel.org>, Jens Axboe <axboe@kernel.dk>,
Christoph Hellwig <hch@lst.de>, Sagi Grimberg <sagi@grimberg.me>,
Chaitanya Kulkarni <chaitanyak@nvidia.com>,
linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH RFC 0/5] nvme: Controller Data Queue (CDQ) support
Date: Fri, 24 Apr 2026 10:06:15 -0300 [thread overview]
Message-ID: <20260424130615.GW3611611@ziepe.ca> (raw)
In-Reply-To: <20260424-jag-cdq-lkml-v1-0-d773343a717c@kernel.org>
On Fri, Apr 24, 2026 at 01:37:50PM +0200, Joel Granados wrote:
> There is however, no clear consensus on how NVMe Live Migration should
> land in the Linux kernel. The 2022 discussion [1] explored a VFIO-based
> approach but reached no conclusion, likely because the specification was
> not yet mature.
Yes, it was paused until the spec matures; then I expect it to go
forward.
> To move CDQ forward, I would like to understand where the LM logic belongs. I
> currently see two options (of which I have no particular preference):
>
> 1. VFIO: Implement NVMe LM following the VFIO state machine, similar to what
> was proposed in 2022.
> 2. VM manager interface: Bypass VFIO and implement LM logic in the interface
> between the VM manager (e.g., QEMU) and the NVMe driver.
I imagined it to be split between VFIO for the PCI and volatile guest
state, and something else for the namespace setup and media migration.
Media migration is only needed for local drives, so there are use cases
that don't need this component.
We have many drivers fitting into the VFIO scheme now, and good VMM
coverage; I don't see a reason to throw it out.
> One aspect that has not received much attention in previous discussions
> is namespace migration as prior work focused on migrating state and not
> the actual data. Migrating potential terabytes is IMO a distinct use
> case worth considering.
Yes
Though IDK if just plumbing the entire CDQ to userspace is the right
choice for NVMe. We don't know what future specs will add to CDQ, so it
may not be appropriate to treat it so insecurely.
Jason
Thread overview: 10+ messages
2026-04-24 11:37 [PATCH RFC 0/5] nvme: Controller Data Queue (CDQ) support Joel Granados
2026-04-24 11:37 ` [PATCH RFC 1/5] nvme: Add CDQ data structures to nvme spec header Joel Granados
2026-04-24 11:37 ` [PATCH RFC 2/5] nvme: Add CDQ data structures to host driver Joel Granados
2026-04-24 11:37 ` [PATCH RFC 3/5] nvme: Add NVME_AER_ONE_SHOT callback handler Joel Granados
2026-04-24 11:37 ` [PATCH RFC 4/5] nvme: Implement CDQ core functionality Joel Granados
2026-04-24 11:37 ` [PATCH RFC 5/5] nvme: Add CDQ ioctl interface Joel Granados
2026-04-24 13:06 ` Jason Gunthorpe [this message]
2026-04-24 13:24 ` [PATCH RFC 0/5] nvme: Controller Data Queue (CDQ) support Christoph Hellwig
2026-04-27 18:24 ` Joel Granados
2026-04-27 18:59 ` Joel Granados