* [PATCH RFC 0/8] nvme: Add Controller Data Queue to the nvme driver
@ 2025-07-14 9:15 Joel Granados
2025-07-14 9:15 ` [PATCH RFC 1/8] nvme: Add CDQ command definitions for contiguous PRPs Joel Granados
` (8 more replies)
0 siblings, 9 replies; 12+ messages in thread
From: Joel Granados @ 2025-07-14 9:15 UTC (permalink / raw)
To: Keith Busch, Jens Axboe, Christoph Hellwig, Sagi Grimberg
Cc: Klaus Jensen, linux-nvme, linux-kernel, Joel Granados
This series introduces support for Controller Data Queues (CDQs) in the
NVMe driver. CDQs allow an NVMe controller to post information to the
host through a single completion queue. This series adds data structures,
helpers, and the user interface required to create, read, and delete CDQs.
Motivation
==========
The main motivation is to enable Controller Data Queues as described in
revision 2.2 of the NVMe base specification. This series places the
kernel as an intermediary between the NVMe controller producing CDQ
entries and the user space process consuming them. It is general enough
to encompass different use cases that require controller-initiated
communication delivered outside the regular I/O traffic streams (for
example, LBA tracking).
What is done
============
* Added nvme_admin_cdq opcode and NVME_FEAT_CDQ feature flag
* Defined a new struct nvme_cdq command for create/delete operations
* Added a cdq_nvme_queue struct that holds the CDQ state
* Added an xarray for each nvme_ctrl that holds a reference to all
controller CDQs.
* Added a new ioctl (NVME_IOCTL_ADMIN_CDQ) and argument struct
(nvme_cdq_cmd) for CDQ creation
* Added helpers for consuming CDQs: nvme_cdq_{next,send_feature_id,traverse}
* Added helpers for CDQ admin: nvme_cdq_{free,alloc,create,delete}
In summary, this series implements creation, consumption, and cleanup of
Controller Data Queues, providing a file-descriptor based interface for
user space to read CDQ entries.
CDQ life cycle
==============
To create a CDQ, user space defines the number of entries, the entry size,
the location of the phase tag (8.1.6.2 NVMe base spec), the MOS field
(5.1.4 NVMe base spec) and, if necessary, the CQS field (5.1.4.1.1 NVMe
base spec). All of these are passed through the NVME_IOCTL_ADMIN_CDQ
ioctl, which allocates the CDQ memory, connects the controller to it, and
returns the CDQ ID (assigned by the controller) and a CDQ file descriptor
(CDQ FD).
The CDQ FD is used to consume entries through the read system call. On
every read, all available (new) entries are copied from the internal
kernel CDQ buffer to the user space buffer.
The CDQ ID, on the other hand, is meant for interactions that fall
outside CDQ creation and consumption. In these cases the caller is
expected to send NVMe commands down through one of the already available
mechanisms (such as the NVME_IOCTL_ADMIN_CMD ioctl).
CDQ data structures and memory are cleaned up when the release file
operation is called on the FD, which happens when the FD is closed or
when the user process is killed.
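For concreteness, here is a rough user-space sketch of that life cycle.
It assumes the struct nvme_cdq_cmd and NVME_IOCTL_ADMIN_CDQ definitions
added later in this series; the device path, queue geometry, MOS/CQS
values and phase-tag location are illustrative placeholders rather than
values taken from the spec.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/nvme_ioctl.h>

#define ENTRY_NR	128
#define ENTRY_NBYTE	16

int main(void)
{
	struct nvme_cdq_cmd cmd = {
		.entry_nr    = ENTRY_NR,
		.entry_nbyte = ENTRY_NBYTE,
		.mos         = 0,	/* management operation specific (placeholder) */
		.cqs         = 0,	/* create queue specific (placeholder) */
		.cdqp_offset = 0,	/* byte within an entry that holds the phase tag */
		.cdqp_mask   = 0x1,	/* bit within that byte */
	};
	char buf[ENTRY_NR * ENTRY_NBYTE];
	ssize_t n;
	int ctrl_fd;

	ctrl_fd = open("/dev/nvme0", O_RDWR);
	if (ctrl_fd < 0 || ioctl(ctrl_fd, NVME_IOCTL_ADMIN_CDQ, &cmd) < 0) {
		perror("cdq create");
		return 1;
	}
	printf("created CDQ %u, read fd %d\n", cmd.cdq_id, cmd.read_fd);

	/* every read copies all currently available (new) entries */
	while ((n = read(cmd.read_fd, buf, sizeof(buf))) > 0)
		printf("consumed %zd entries\n", n / ENTRY_NBYTE);

	/* closing the read fd tears down the CDQ and frees its memory */
	close(cmd.read_fd);
	close(ctrl_fd);
	return 0;
}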
Testing
=======
The User Data Migration Queue (5.1.4.1.1 NVMe base spec) implemented in
the QEMU NVMe device [1] is used for testing purposes. CDQ creation,
consumption and deletion are demonstrated by calling a CDQ example in
libvfn [2] (a low-level NVMe/PCIe library) from within QEMU. For brevity,
I have *not* included any of the testing commands, but I can provide them
if needed.
Questions
=========
Here are some questions that were on my mind.
1. I have used an ioctl for the CDQ creation. Any better alternatives?
2. The deletion is handled by closing the file descriptor. Should this
be handled by the ioctl?
Any feedback, questions or comments are greatly appreciated.
Best
[1] https://github.com/SamsungDS/qemu/tree/nvme.tp4159
[2] https://github.com/Joelgranados/libvfn/blob/jag/cdq/examples/cdq.c
Signed-off-by: Joel Granados <joel.granados@kernel.org>
---
Joel Granados (8):
nvme: Add CDQ command definitions for contiguous PRPs
nvme: Add cdq data structure to nvme_ctrl
nvme: Add file descriptor to read CDQs
nvme: Add function to create a CDQ
nvme: Add function to delete CDQ
nvme: Add a release ops to cdq file ops
nvme: Add Controller Data Queue (CDQ) ioctl command
nvme: Connect CDQ ioctl to nvme driver
drivers/nvme/host/core.c | 253 ++++++++++++++++++++++++++++++++++++++++
drivers/nvme/host/ioctl.c | 47 +++++++-
drivers/nvme/host/nvme.h | 20 ++++
include/linux/nvme.h | 30 +++++
include/uapi/linux/nvme_ioctl.h | 12 ++
5 files changed, 361 insertions(+), 1 deletion(-)
---
base-commit: 0ff41df1cb268fc69e703a08a57ee14ae967d0ca
change-id: 20250624-jag-cdq-691ed7e68c1c
Best regards,
--
Joel Granados <joel.granados@kernel.org>
* [PATCH RFC 1/8] nvme: Add CDQ command definitions for contiguous PRPs
2025-07-14 9:15 [PATCH RFC 0/8] nvme: Add Controller Data Queue to the nvme driver Joel Granados
@ 2025-07-14 9:15 ` Joel Granados
2025-07-14 9:15 ` [PATCH RFC 2/8] nvme: Add cdq data structure to nvme_ctrl Joel Granados
` (7 subsequent siblings)
8 siblings, 0 replies; 12+ messages in thread
From: Joel Granados @ 2025-07-14 9:15 UTC (permalink / raw)
To: Keith Busch, Jens Axboe, Christoph Hellwig, Sagi Grimberg
Cc: Klaus Jensen, linux-nvme, linux-kernel, Joel Granados
Add struct nvme_cdq, which handles the creation and deletion operations,
to the nvme_command union. NVME_FEAT_CDQ is added to the feature flags
with a value of 0x21, and nvme_admin_cdq is added to the NVMe admin
opcodes with a value of 0x45. Add support for contiguous PRPs only; the
non-contiguous case described in the NVMe spec can be added later.
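For reference, a create command built with these definitions would be
filled roughly as follows (a sketch mirroring the later patches in this
series; the DMA address, queue geometry and MOS/CQS values are
placeholders):

static void cdq_init_create_cmd(struct nvme_command *c, dma_addr_t dma_addr,
				u32 entry_nr, u32 entry_nbyte,
				u16 mos, u16 cqs)
{
	memset(c, 0, sizeof(*c));
	c->cdq.opcode = nvme_admin_cdq;
	c->cdq.sel = NVME_CDQ_SEL_CREATE_CDQ;
	c->cdq.mos = cpu_to_le16(mos);
	/* contiguous PRPs, the only mode supported in this series */
	c->cdq.create.cdq_flags = cpu_to_le16(NVME_CDQ_CFG_PC_CONT);
	c->cdq.create.cqs = cpu_to_le16(cqs);
	c->cdq.prp1 = cpu_to_le64(dma_addr);
	/* CDQSIZE is expressed in dwords */
	c->cdq.cdqsize = cpu_to_le32((entry_nr * entry_nbyte) >> 2);
}

The delete case instead sets sel = NVME_CDQ_SEL_DELETE_CDQ and
delete.cdqid.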
Signed-off-by: Joel Granados <joel.granados@kernel.org>
---
drivers/nvme/host/core.c | 1 +
include/linux/nvme.h | 30 ++++++++++++++++++++++++++++++
2 files changed, 31 insertions(+)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 6b04473c0ab73c61e208bb8fc230c2f9b65c69bc..7be6b42a1adcc3fdb3cec2e2d0e73fcf74244590 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -5133,6 +5133,7 @@ static inline void _nvme_check_size(void)
BUILD_BUG_ON(sizeof(struct nvme_rotational_media_log) != 512);
BUILD_BUG_ON(sizeof(struct nvme_dbbuf) != 64);
BUILD_BUG_ON(sizeof(struct nvme_directive_cmd) != 64);
+ BUILD_BUG_ON(sizeof(struct nvme_cdq) != 64);
BUILD_BUG_ON(sizeof(struct nvme_feat_host_behavior) != 512);
}
diff --git a/include/linux/nvme.h b/include/linux/nvme.h
index 2479ed10f53e37055973ea3c899060913923fa62..a2012ec00e60c2f0de1b06599ba39481eebe4263 100644
--- a/include/linux/nvme.h
+++ b/include/linux/nvme.h
@@ -1240,6 +1240,7 @@ enum nvme_admin_opcode {
nvme_admin_virtual_mgmt = 0x1c,
nvme_admin_nvme_mi_send = 0x1d,
nvme_admin_nvme_mi_recv = 0x1e,
+ nvme_admin_cdq = 0x45,
nvme_admin_dbbuf = 0x7C,
nvme_admin_format_nvm = 0x80,
nvme_admin_security_send = 0x81,
@@ -1309,6 +1310,7 @@ enum {
NVME_FEAT_PLM_WINDOW = 0x14,
NVME_FEAT_HOST_BEHAVIOR = 0x16,
NVME_FEAT_SANITIZE = 0x17,
+ NVME_FEAT_CDQ = 0x21,
NVME_FEAT_SW_PROGRESS = 0x80,
NVME_FEAT_HOST_ID = 0x81,
NVME_FEAT_RESV_MASK = 0x82,
@@ -1514,6 +1516,33 @@ struct nvme_directive_cmd {
__u32 rsvd16[3];
};
+struct nvme_cdq {
+ __u8 opcode;
+ __u8 flags;
+ __u16 command_id;
+ __u32 rsvd1[5];
+ __le64 prp1;
+ __u32 rsvd8[2];
+#define NVME_CDQ_SEL_CREATE_CDQ 0x0
+#define NVME_CDQ_SEL_DELETE_CDQ 0x1
+ __u8 sel;
+ __u8 rsvd10;
+ __le16 mos;
+ union {
+ struct {
+#define NVME_CDQ_CFG_PC_CONT (1 << 0)
+ __le16 cdq_flags;
+ __le16 cqs;
+ } create;
+ struct {
+ __le16 cdqid;
+ __le16 rsvd;
+ } delete;
+ };
+ __le32 cdqsize;
+ __u32 rsvd13[2];
+};
+
/*
* Fabrics subcommands.
*/
@@ -1923,6 +1952,7 @@ struct nvme_command {
struct nvmf_auth_receive_command auth_receive;
struct nvme_dbbuf dbbuf;
struct nvme_directive_cmd directive;
+ struct nvme_cdq cdq;
};
};
--
2.47.2
* [PATCH RFC 2/8] nvme: Add cdq data structure to nvme_ctrl
2025-07-14 9:15 [PATCH RFC 0/8] nvme: Add Controller Data Queue to the nvme driver Joel Granados
2025-07-14 9:15 ` [PATCH RFC 1/8] nvme: Add CDQ command definitions for contiguous PRPs Joel Granados
@ 2025-07-14 9:15 ` Joel Granados
2025-07-14 9:15 ` [PATCH RFC 3/8] nvme: Add file descriptor to read CDQs Joel Granados
` (6 subsequent siblings)
8 siblings, 0 replies; 12+ messages in thread
From: Joel Granados @ 2025-07-14 9:15 UTC (permalink / raw)
To: Keith Busch, Jens Axboe, Christoph Hellwig, Sagi Grimberg
Cc: Klaus Jensen, linux-nvme, linux-kernel, Joel Granados
Add a CDQ xarray to nvme_ctrl, allowing several CDQs per controller (as
permitted by the specification). Also add struct cdq_nvme_queue, which
holds a pointer to its controller (*ctrl), a pointer to the entry memory
(*entries), the number and size of entries, the current entry and phase
bit value, the location of the phase bit within an entry, the DMA
address, the CDQ ID and the file pointer through which the queue is read.
Signed-off-by: Joel Granados <joel.granados@kernel.org>
---
drivers/nvme/host/core.c | 1 +
drivers/nvme/host/nvme.h | 15 +++++++++++++++
2 files changed, 16 insertions(+)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 7be6b42a1adcc3fdb3cec2e2d0e73fcf74244590..9b2de74d62f7a65aea2d28bbbed6681195d9afcd 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -4868,6 +4868,7 @@ int nvme_init_ctrl(struct nvme_ctrl *ctrl, struct device *dev,
mutex_init(&ctrl->scan_lock);
INIT_LIST_HEAD(&ctrl->namespaces);
xa_init(&ctrl->cels);
+ xa_init(&ctrl->cdqs);
ctrl->dev = dev;
ctrl->ops = ops;
ctrl->quirks = quirks;
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 8fc4683418a3a6929311c7b56da90ebcbbe16d86..800970a0bb87f7a3b6e855f56a2493a7deed1ecd 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -411,6 +411,7 @@ struct nvme_ctrl {
enum nvme_ctrl_type cntrltype;
enum nvme_dctype dctype;
u16 awupf; /* 0's based value. */
+ struct xarray cdqs; /* Controller Data Queue */
};
static inline enum nvme_ctrl_state nvme_ctrl_state(struct nvme_ctrl *ctrl)
@@ -553,6 +554,20 @@ static inline bool nvme_ns_has_pi(struct nvme_ns_head *head)
return head->pi_type && head->ms == head->pi_size;
}
+struct cdq_nvme_queue {
+ struct nvme_ctrl *ctrl;
+ void *entries;
+ u32 entry_nbyte;
+ u32 entry_nr;
+ u32 curr_entry;
+ u8 curr_cdqp;
+ uint cdqp_offset;
+ uint cdqp_mask;
+ dma_addr_t entries_dma_addr;
+ u16 cdq_id;
+ struct file *filep;
+};
+
struct nvme_ctrl_ops {
const char *name;
struct module *module;
--
2.47.2
* [PATCH RFC 3/8] nvme: Add file descriptor to read CDQs
2025-07-14 9:15 [PATCH RFC 0/8] nvme: Add Controller Data Queue to the nvme driver Joel Granados
2025-07-14 9:15 ` [PATCH RFC 1/8] nvme: Add CDQ command definitions for contiguous PRPs Joel Granados
2025-07-14 9:15 ` [PATCH RFC 2/8] nvme: Add cdq data structure to nvme_ctrl Joel Granados
@ 2025-07-14 9:15 ` Joel Granados
2025-07-14 9:15 ` [PATCH RFC 4/8] nvme: Add function to create a CDQ Joel Granados
` (5 subsequent siblings)
8 siblings, 0 replies; 12+ messages in thread
From: Joel Granados @ 2025-07-14 9:15 UTC (permalink / raw)
To: Keith Busch, Jens Axboe, Christoph Hellwig, Sagi Grimberg
Cc: Klaus Jensen, linux-nvme, linux-kernel, Joel Granados
The file descriptor provided by nvme_cdq_fd() is to be used to consume
the entries in the newly created CDQ. This commit adds both the creation
of the file descriptor and the mechanism to read entry data and copy it
back to user space.
All available entries are consumed on every read. Phase bits and the
current head are updated before sending the CDQ feature ID, which tells
the controller that the entries have been consumed.
nvme_cdq_fd() is not called anywhere yet, as this is a preparation
commit for when the CDQ create and delete operations are added.
Signed-off-by: Joel Granados <joel.granados@kernel.org>
---
drivers/nvme/host/core.c | 91 ++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 91 insertions(+)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 9b2de74d62f7a65aea2d28bbbed6681195d9afcd..8517253002941e1f892e62bb7dacac40395b16d9 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -23,6 +23,7 @@
#include <linux/pm_qos.h>
#include <linux/ratelimit.h>
#include <linux/unaligned.h>
+#include <linux/anon_inodes.h>
#include "nvme.h"
#include "fabrics.h"
@@ -1228,6 +1229,96 @@ u32 nvme_passthru_start(struct nvme_ctrl *ctrl, struct nvme_ns *ns, u8 opcode)
}
EXPORT_SYMBOL_NS_GPL(nvme_passthru_start, "NVME_TARGET_PASSTHRU");
+/* Returns true if curr_entry forwarded by 1 */
+static bool nvme_cdq_next(struct cdq_nvme_queue *cdq)
+{
+ void *curr_entry = cdq->entries + (cdq->curr_entry * cdq->entry_nbyte);
+ u8 phase_bit = (*(u8 *)(curr_entry + cdq->cdqp_offset) & cdq->cdqp_mask);
+ /* if different, then it's new */
+ if (phase_bit != cdq->curr_cdqp) {
+ cdq->curr_entry = (cdq->curr_entry + 1) % cdq->entry_nr;
+ if (unlikely(cdq->curr_entry == 0))
+ cdq->curr_cdqp = ~cdq->curr_cdqp & 0x1;
+ return true;
+ }
+ return false;
+}
+
+static int nvme_cdq_send_feature_id(struct cdq_nvme_queue *cdq)
+{
+ struct nvme_command c = { };
+
+ c.features.opcode = nvme_admin_set_features;
+ c.features.fid = cpu_to_le32(NVME_FEAT_CDQ);
+ c.features.dword11 = cdq->cdq_id;
+ c.features.dword12 = cpu_to_le32(cdq->curr_entry);
+
+ return nvme_submit_sync_cmd(cdq->ctrl->admin_q, &c, NULL, 0);
+}
+
+/*
+ * Traverse the CDQ until max entries are reached or until the entry phase
+ * bit is the same as the current phase bit.
+ *
+ * cdq : Controller Data Queue
+ * count_nbyte : Count bytes to "traverse" before sending feature id
+ * priv_data : argument for consume
+ */
+static size_t nvme_cdq_traverse(struct cdq_nvme_queue *cdq, size_t count_nbyte,
+ void *priv_data)
+{
+ int ret;
+ char __user *to_buf = priv_data;
+ size_t tx_nbyte, target_nbyte = 0;
+ size_t orig_tail_nbyte = (cdq->entry_nr - cdq->curr_entry) * cdq->entry_nbyte;
+ void *from_buf = cdq->entries + (cdq->curr_entry * cdq->entry_nbyte);
+
+ while (target_nbyte < count_nbyte && nvme_cdq_next(cdq))
+ target_nbyte += cdq->entry_nbyte;
+ tx_nbyte = min(orig_tail_nbyte, target_nbyte);
+
+ if (copy_to_user(to_buf, from_buf, tx_nbyte))
+ return -EFAULT;
+
+ if (tx_nbyte < target_nbyte) {
+ /* Copy the entries that have been wrapped around */
+ from_buf = cdq->entries;
+ to_buf += tx_nbyte;
+ if (copy_to_user(to_buf, from_buf, target_nbyte - tx_nbyte))
+ return -EFAULT;
+ }
+
+ ret = nvme_cdq_send_feature_id(cdq);
+ if (ret < 0)
+ return ret;
+
+ return tx_nbyte;
+}
+
+static ssize_t nvme_cdq_fops_read(struct file *filep, char __user *buf,
+ size_t count, loff_t *ppos)
+{
+ struct cdq_nvme_queue *cdq = filep->private_data;
+ size_t nbytes = round_down(count, cdq->entry_nbyte);
+
+ if (*ppos)
+ return -ESPIPE;
+
+ if (count < cdq->entry_nbyte)
+ return -EINVAL;
+
+ if (nbytes > (cdq->entry_nr * cdq->entry_nbyte))
+ return -EINVAL;
+
+ return nvme_cdq_traverse(cdq, nbytes, buf);
+}
+
+static const struct file_operations cdq_fops = {
+ .owner = THIS_MODULE,
+ .open = nonseekable_open,
+ .read = nvme_cdq_fops_read,
+};
+
void nvme_passthru_end(struct nvme_ctrl *ctrl, struct nvme_ns *ns, u32 effects,
struct nvme_command *cmd, int status)
{
--
2.47.2
* [PATCH RFC 4/8] nvme: Add function to create a CDQ
2025-07-14 9:15 [PATCH RFC 0/8] nvme: Add Controller Data Queue to the nvme driver Joel Granados
` (2 preceding siblings ...)
2025-07-14 9:15 ` [PATCH RFC 3/8] nvme: Add file descriptor to read CDQs Joel Granados
@ 2025-07-14 9:15 ` Joel Granados
2025-07-14 9:15 ` [PATCH RFC 5/8] nvme: Add function to delete CDQ Joel Granados
` (4 subsequent siblings)
8 siblings, 0 replies; 12+ messages in thread
From: Joel Granados @ 2025-07-14 9:15 UTC (permalink / raw)
To: Keith Busch, Jens Axboe, Christoph Hellwig, Sagi Grimberg
Cc: Klaus Jensen, linux-nvme, linux-kernel, Joel Granados
On CDQ initialization:
* The memory holding the CDQ structure (struct cdq_nvme_queue) is
allocated
* DMA-able memory for the entries is allocated
* A CDQ create command is sent to the controller
* The newly created CDQ is stored in ctrl->cdqs
* A CDQ file descriptor is created
* The CDQ ID returned by the controller and the file descriptor value
are returned to the user
Signed-off-by: Joel Granados <joel.granados@kernel.org>
---
drivers/nvme/host/core.c | 116 +++++++++++++++++++++++++++++++++++++++++++++++
drivers/nvme/host/nvme.h | 4 ++
2 files changed, 120 insertions(+)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 8517253002941e1f892e62bb7dacac40395b16d9..81b7183a4e3167290e68dc2eb26a8dbcd88c7924 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1319,6 +1319,122 @@ static const struct file_operations cdq_fops = {
.read = nvme_cdq_fops_read,
};
+static int nvme_cdq_fd(struct cdq_nvme_queue *cdq, int *fdno)
+{
+ int ret = 0;
+ struct file *filep;
+
+ *fdno = -1;
+
+ if (cdq->filep)
+ return -EINVAL;
+
+ filep = anon_inode_getfile("[cdq-readfd]", &cdq_fops, cdq, O_RDWR);
+ if (IS_ERR(filep)) {
+ ret = PTR_ERR(filep);
+ goto out;
+ }
+
+ *fdno = get_unused_fd_flags(O_CLOEXEC | O_RDONLY | O_DIRECT);
+ if (*fdno < 0) {
+ ret = *fdno;
+ goto out_fput;
+ }
+
+ fd_install(*fdno, filep);
+ cdq->filep = filep;
+
+ return 0;
+
+out_fput:
+ put_unused_fd(*fdno);
+ fput(filep);
+out:
+ return ret;
+}
+
+static int nvme_cdq_alloc(struct nvme_ctrl *ctrl, struct cdq_nvme_queue **cdq,
+ u32 entry_nr, u32 entry_nbyte)
+{
+ struct cdq_nvme_queue *ret_cdq = kzalloc(sizeof(*ret_cdq), GFP_KERNEL);
+
+ if (!ret_cdq)
+ return -ENOMEM;
+
+ ret_cdq->entries = dma_alloc_coherent(ctrl->dev,
+ entry_nr * entry_nbyte,
+ &ret_cdq->entries_dma_addr,
+ GFP_KERNEL);
+ if (!ret_cdq->entries) {
+ kfree(ret_cdq);
+ return -ENOMEM;
+ }
+
+ *cdq = ret_cdq;
+
+ return 0;
+}
+
+static void nvme_cdq_free(struct nvme_ctrl *ctrl, struct cdq_nvme_queue *cdq)
+{
+ dma_free_coherent(ctrl->dev, cdq->entry_nr * cdq->entry_nbyte,
+ cdq->entries, cdq->entries_dma_addr);
+ kfree(cdq);
+}
+
+
+int nvme_cdq_create(struct nvme_ctrl *ctrl, struct nvme_command *c,
+ const u32 entry_nr, const u32 entry_nbyte,
+ uint cdqp_offset, uint cdqp_mask,
+ u16 *cdq_id, int *cdq_fd)
+{
+ int ret, fdno;
+ struct cdq_nvme_queue *cdq, *xa_ret;
+ union nvme_result result = { };
+
+ ret = nvme_cdq_alloc(ctrl, &cdq, entry_nr, entry_nbyte);
+ if (ret)
+ return ret;
+ c->cdq.prp1 = cdq->entries_dma_addr;
+
+ ret = __nvme_submit_sync_cmd(ctrl->admin_q, c, &result, NULL, 0, NVME_QID_ANY, 0);
+ if (ret)
+ goto err_cdq_free;
+
+ cdq->cdq_id = le16_to_cpu(result.u16);
+ cdq->entry_nbyte = entry_nbyte;
+ cdq->entry_nr = entry_nr;
+ cdq->ctrl = ctrl;
+ cdq->cdqp_offset = cdqp_offset;
+ cdq->cdqp_mask = cdqp_mask;
+
+ xa_ret = xa_store(&ctrl->cdqs, cdq->cdq_id, cdq, GFP_KERNEL);
+ if (xa_is_err(xa_ret)) {
+ ret = xa_err(xa_ret);
+ goto err_cdq_free;
+ }
+
+ ret = nvme_cdq_fd(cdq, &fdno);
+ if (ret)
+ goto err_cdq_erase;
+
+ *cdq_id = cdq->cdq_id;
+ *cdq_fd = fdno;
+
+ return 0;
+
+err_cdq_erase:
+ xa_erase(&ctrl->cdqs, cdq->cdq_id);
+
+err_cdq_free:
+ cdq_id = NULL;
+ cdq_fd = NULL;
+ nvme_cdq_free(ctrl, cdq);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(nvme_cdq_create);
+
void nvme_passthru_end(struct nvme_ctrl *ctrl, struct nvme_ns *ns, u32 effects,
struct nvme_command *cmd, int status)
{
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 800970a0bb87f7a3b6e855f56a2493a7deed1ecd..ddec5b44fe022831280458ed9fc1cb1ed11633b7 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -1207,6 +1207,10 @@ u32 nvme_passthru_start(struct nvme_ctrl *ctrl, struct nvme_ns *ns, u8 opcode);
int nvme_execute_rq(struct request *rq, bool at_head);
void nvme_passthru_end(struct nvme_ctrl *ctrl, struct nvme_ns *ns, u32 effects,
struct nvme_command *cmd, int status);
+int nvme_cdq_create(struct nvme_ctrl *ctrl, struct nvme_command *c,
+ const u32 entry_nr, const u32 entry_nbyte,
+ uint cdqp_offset, uint cdqp_mask,
+ u16 *cdq_id, int *cdq_fd);
struct nvme_ctrl *nvme_ctrl_from_file(struct file *file);
struct nvme_ns *nvme_find_get_ns(struct nvme_ctrl *ctrl, unsigned nsid);
bool nvme_get_ns(struct nvme_ns *ns);
--
2.47.2
* [PATCH RFC 5/8] nvme: Add function to delete CDQ
2025-07-14 9:15 [PATCH RFC 0/8] nvme: Add Controller Data Queue to the nvme driver Joel Granados
` (3 preceding siblings ...)
2025-07-14 9:15 ` [PATCH RFC 4/8] nvme: Add function to create a CDQ Joel Granados
@ 2025-07-14 9:15 ` Joel Granados
2025-07-14 9:15 ` [PATCH RFC 6/8] nvme: Add a release ops to cdq file ops Joel Granados
` (3 subsequent siblings)
8 siblings, 0 replies; 12+ messages in thread
From: Joel Granados @ 2025-07-14 9:15 UTC (permalink / raw)
To: Keith Busch, Jens Axboe, Christoph Hellwig, Sagi Grimberg
Cc: Klaus Jensen, linux-nvme, linux-kernel, Joel Granados
The delete function removes the CDQ from the xarray, submits an NVMe
command informing the controller of the deletion, and frees the memory
held by the CDQ. nvme_cdq_delete() is called for all remaining CDQs in
nvme_free_ctrl().
Signed-off-by: Joel Granados <joel.granados@kernel.org>
---
drivers/nvme/host/core.c | 36 ++++++++++++++++++++++++++++++++++++
drivers/nvme/host/nvme.h | 1 +
2 files changed, 37 insertions(+)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 81b7183a4e3167290e68dc2eb26a8dbcd88c7924..427e482530bdb5c7124d1230f35693ba756ce4d9 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1435,6 +1435,41 @@ int nvme_cdq_create(struct nvme_ctrl *ctrl, struct nvme_command *c,
}
EXPORT_SYMBOL_GPL(nvme_cdq_create);
+int nvme_cdq_delete(struct nvme_ctrl *ctrl, const u16 cdq_id)
+{
+ int ret;
+ struct cdq_nvme_queue *cdq;
+ struct nvme_command c = { };
+
+ cdq = xa_erase(&ctrl->cdqs, cdq_id);
+ if (!cdq)
+ return -EINVAL;
+
+ c.cdq.opcode = nvme_admin_cdq;
+ c.cdq.sel = NVME_CDQ_SEL_DELETE_CDQ;
+ c.cdq.delete.cdqid = cdq->cdq_id;
+
+ ret = __nvme_submit_sync_cmd(ctrl->admin_q, &c, NULL, NULL, 0, NVME_QID_ANY, 0);
+ if (ret)
+ return ret;
+
+ nvme_cdq_free(ctrl, cdq);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(nvme_cdq_delete);
+
+static void nvme_free_cdqs(struct nvme_ctrl *ctrl)
+{
+ struct cdq_nvme_queue *cdq;
+ unsigned long i;
+
+ xa_for_each(&ctrl->cdqs, i, cdq)
+ nvme_cdq_delete(ctrl, i);
+
+ xa_destroy(&ctrl->cdqs);
+}
+
void nvme_passthru_end(struct nvme_ctrl *ctrl, struct nvme_ns *ns, u32 effects,
struct nvme_command *cmd, int status)
{
@@ -5029,6 +5064,7 @@ static void nvme_free_ctrl(struct device *dev)
if (!subsys || ctrl->instance != subsys->instance)
ida_free(&nvme_instance_ida, ctrl->instance);
nvme_free_cels(ctrl);
+ nvme_free_cdqs(ctrl);
nvme_mpath_uninit(ctrl);
cleanup_srcu_struct(&ctrl->srcu);
nvme_auth_stop(ctrl);
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index ddec5b44fe022831280458ed9fc1cb1ed11633b7..07a1a9e4772281d68d0ade0423372a35f9f7055e 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -1211,6 +1211,7 @@ int nvme_cdq_create(struct nvme_ctrl *ctrl, struct nvme_command *c,
const u32 entry_nr, const u32 entry_nbyte,
uint cdqp_offset, uint cdqp_mask,
u16 *cdq_id, int *cdq_fd);
+int nvme_cdq_delete(struct nvme_ctrl *ctrl, const u16 cdq_id);
struct nvme_ctrl *nvme_ctrl_from_file(struct file *file);
struct nvme_ns *nvme_find_get_ns(struct nvme_ctrl *ctrl, unsigned nsid);
bool nvme_get_ns(struct nvme_ns *ns);
--
2.47.2
* [PATCH RFC 6/8] nvme: Add a release ops to cdq file ops
2025-07-14 9:15 [PATCH RFC 0/8] nvme: Add Controller Data Queue to the nvme driver Joel Granados
` (4 preceding siblings ...)
2025-07-14 9:15 ` [PATCH RFC 5/8] nvme: Add function to delete CDQ Joel Granados
@ 2025-07-14 9:15 ` Joel Granados
2025-07-14 9:15 ` [PATCH RFC 7/8] nvme: Add Controller Data Queue (CDQ) ioctl command Joel Granados
` (2 subsequent siblings)
8 siblings, 0 replies; 12+ messages in thread
From: Joel Granados @ 2025-07-14 9:15 UTC (permalink / raw)
To: Keith Busch, Jens Axboe, Christoph Hellwig, Sagi Grimberg
Cc: Klaus Jensen, linux-nvme, linux-kernel, Joel Granados
When user space calls close on the file descriptor or crashes,
nvme_cdq_fops_release will ensure everything related to the CDQ is
properly released.
Signed-off-by: Joel Granados <joel.granados@kernel.org>
---
drivers/nvme/host/core.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 427e482530bdb5c7124d1230f35693ba756ce4d9..4745b961c6b874375ff4399c104f312b5ac608b8 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1313,10 +1313,18 @@ static ssize_t nvme_cdq_fops_read(struct file *filep, char __user *buf,
return nvme_cdq_traverse(cdq, nbytes, buf);
}
+static int nvme_cdq_fops_release(struct inode *inode, struct file *filep)
+{
+ struct cdq_nvme_queue *cdq = filep->private_data;
+
+ return nvme_cdq_delete(cdq->ctrl, cdq->cdq_id);
+}
+
static const struct file_operations cdq_fops = {
.owner = THIS_MODULE,
.open = nonseekable_open,
.read = nvme_cdq_fops_read,
+ .release = nvme_cdq_fops_release,
};
static int nvme_cdq_fd(struct cdq_nvme_queue *cdq, int *fdno)
--
2.47.2
* [PATCH RFC 7/8] nvme: Add Controller Data Queue (CDQ) ioctl command
2025-07-14 9:15 [PATCH RFC 0/8] nvme: Add Controller Data Queue to the nvme driver Joel Granados
` (5 preceding siblings ...)
2025-07-14 9:15 ` [PATCH RFC 6/8] nvme: Add a release ops to cdq file ops Joel Granados
@ 2025-07-14 9:15 ` Joel Granados
2025-07-14 9:15 ` [PATCH RFC 8/8] nvme: Connect CDQ ioctl to nvme driver Joel Granados
2025-07-14 13:02 ` [PATCH RFC 0/8] nvme: Add Controller Data Queue to the " Christoph Hellwig
8 siblings, 0 replies; 12+ messages in thread
From: Joel Granados @ 2025-07-14 9:15 UTC (permalink / raw)
To: Keith Busch, Jens Axboe, Christoph Hellwig, Sagi Grimberg
Cc: Klaus Jensen, linux-nvme, linux-kernel, Joel Granados
New ioctl to create a CDQ.
Creating a CDQ:
Set the following members:
* entry_nr: Number of CDQ entries
* entry_nbyte: Size in bytes of each CDQ entry
* cqs: Create Queue Specific. Value depends on CDQ type
* mos: Management Operation Specific. Value depends on CDQ type
* cdqp_{offset,mask}: Location of the CDQ Phase tag bit within an entry
Return:
* cdq_id: The ID set by the controller for the created CDQ
* read_fd: The file descriptor that can be used to read the CDQ
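A minimal sketch of how user space might fill these members (ctrl_fd is
assumed to be an open controller character device; the queue geometry
and phase-tag location are illustrative placeholders):

	struct nvme_cdq_cmd cmd = {
		.entry_nr    = 128,
		.entry_nbyte = 16,
		.mos         = 0,	/* depends on the CDQ type */
		.cqs         = 0,	/* depends on the CDQ type */
		.cdqp_offset = 0,	/* byte within an entry holding the phase tag */
		.cdqp_mask   = 0x1,	/* bit within that byte */
	};

	if (ioctl(ctrl_fd, NVME_IOCTL_ADMIN_CDQ, &cmd) == 0)
		printf("cdq_id=%u read_fd=%d\n", cmd.cdq_id, cmd.read_fd);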
Signed-off-by: Joel Granados <joel.granados@kernel.org>
---
include/uapi/linux/nvme_ioctl.h | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/include/uapi/linux/nvme_ioctl.h b/include/uapi/linux/nvme_ioctl.h
index 2f76cba6716637baff53e167a6141b68420d75c3..dc434628acc8462877e774a1eec9242a5df8a08a 100644
--- a/include/uapi/linux/nvme_ioctl.h
+++ b/include/uapi/linux/nvme_ioctl.h
@@ -92,6 +92,17 @@ struct nvme_uring_cmd {
__u32 rsvd2;
};
+struct nvme_cdq_cmd {
+ __u32 entry_nr;
+ __u32 entry_nbyte;
+ __u16 cdq_id;
+ __u16 cqs;
+ __u16 mos;
+ __u32 cdqp_offset;
+ __u32 cdqp_mask;
+ int read_fd;
+};
+
#define nvme_admin_cmd nvme_passthru_cmd
#define NVME_IOCTL_ID _IO('N', 0x40)
@@ -104,6 +115,7 @@ struct nvme_uring_cmd {
#define NVME_IOCTL_ADMIN64_CMD _IOWR('N', 0x47, struct nvme_passthru_cmd64)
#define NVME_IOCTL_IO64_CMD _IOWR('N', 0x48, struct nvme_passthru_cmd64)
#define NVME_IOCTL_IO64_CMD_VEC _IOWR('N', 0x49, struct nvme_passthru_cmd64)
+#define NVME_IOCTL_ADMIN_CDQ _IOR('N', 0x50, struct nvme_cdq_cmd)
/* io_uring async commands: */
#define NVME_URING_CMD_IO _IOWR('N', 0x80, struct nvme_uring_cmd)
--
2.47.2
* [PATCH RFC 8/8] nvme: Connect CDQ ioctl to nvme driver
2025-07-14 9:15 [PATCH RFC 0/8] nvme: Add Controller Data Queue to the nvme driver Joel Granados
` (6 preceding siblings ...)
2025-07-14 9:15 ` [PATCH RFC 7/8] nvme: Add Controller Data Queue (CDQ) ioctl command Joel Granados
@ 2025-07-14 9:15 ` Joel Granados
2025-07-14 13:02 ` [PATCH RFC 0/8] nvme: Add Controller Data Queue to the " Christoph Hellwig
8 siblings, 0 replies; 12+ messages in thread
From: Joel Granados @ 2025-07-14 9:15 UTC (permalink / raw)
To: Keith Busch, Jens Axboe, Christoph Hellwig, Sagi Grimberg
Cc: Klaus Jensen, linux-nvme, linux-kernel, Joel Granados
When deleting, nvme_cdq_delete() is called directly, as no additional
preparation is needed. For creation, construct the NVMe admin command
before sending it down to the driver; this sets mos and cqs among other
fields. Once the controller has returned, set cdq_id and read_fd for the
ioctl caller.
Signed-off-by: Joel Granados <joel.granados@kernel.org>
---
drivers/nvme/host/ioctl.c | 47 ++++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 46 insertions(+), 1 deletion(-)
diff --git a/drivers/nvme/host/ioctl.c b/drivers/nvme/host/ioctl.c
index ca86d3bf7ea49d0ec812640a6c0267a5aad40b79..6ab42381b6fe4e88bae341874b111ed4b7ade397 100644
--- a/drivers/nvme/host/ioctl.c
+++ b/drivers/nvme/host/ioctl.c
@@ -378,6 +378,46 @@ static int nvme_user_cmd64(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
return status;
}
+static int nvme_user_cdq(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
+ struct nvme_cdq_cmd __user *ucmd, unsigned int flags,
+ bool open_for_write)
+{
+ int status;
+ u16 cdq_id = 0;
+ int cdq_fd = 0;
+ struct nvme_command c = {};
+ struct nvme_cdq_cmd cmd = {};
+
+ if (copy_from_user(&cmd, ucmd, sizeof(cmd)))
+ return -EFAULT;
+
+ if (cmd.cdqp_offset >= cmd.entry_nbyte)
+ return -EINVAL;
+
+ c.cdq.opcode = nvme_admin_cdq;
+ c.cdq.sel = NVME_CDQ_SEL_CREATE_CDQ;
+ c.cdq.mos = cpu_to_le16(cmd.mos);
+ c.cdq.create.cdq_flags = cpu_to_le16(NVME_CDQ_CFG_PC_CONT);
+ c.cdq.create.cqs = cpu_to_le16(cmd.cqs);
+ /* >>2: size is in dwords */
+ c.cdq.cdqsize = (cmd.entry_nbyte * cmd.entry_nr) >> 2;
+
+ status = nvme_cdq_create(ctrl, &c,
+ cmd.entry_nr, cmd.entry_nbyte,
+ cmd.cdqp_offset, cmd.cdqp_mask,
+ &cdq_id, &cdq_fd);
+ if (status)
+ return status;
+
+ cmd.cdq_id = cdq_id;
+ cmd.read_fd = cdq_fd;
+
+ if (copy_to_user(ucmd, &cmd, sizeof(cmd)))
+ return -EFAULT;
+
+ return status;
+}
+
struct nvme_uring_data {
__u64 metadata;
__u64 addr;
@@ -541,7 +581,8 @@ static int nvme_uring_cmd_io(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
static bool is_ctrl_ioctl(unsigned int cmd)
{
- if (cmd == NVME_IOCTL_ADMIN_CMD || cmd == NVME_IOCTL_ADMIN64_CMD)
+ if (cmd == NVME_IOCTL_ADMIN_CMD || cmd == NVME_IOCTL_ADMIN64_CMD ||
+ cmd == NVME_IOCTL_ADMIN_CDQ)
return true;
if (is_sed_ioctl(cmd))
return true;
@@ -556,6 +597,8 @@ static int nvme_ctrl_ioctl(struct nvme_ctrl *ctrl, unsigned int cmd,
return nvme_user_cmd(ctrl, NULL, argp, 0, open_for_write);
case NVME_IOCTL_ADMIN64_CMD:
return nvme_user_cmd64(ctrl, NULL, argp, 0, open_for_write);
+ case NVME_IOCTL_ADMIN_CDQ:
+ return nvme_user_cdq(ctrl, NULL, argp, 0, open_for_write);
default:
return sed_ioctl(ctrl->opal_dev, cmd, argp);
}
@@ -874,6 +917,8 @@ long nvme_dev_ioctl(struct file *file, unsigned int cmd,
return -EACCES;
nvme_queue_scan(ctrl);
return 0;
+ case NVME_IOCTL_ADMIN_CDQ:
+ return nvme_user_cdq(ctrl, NULL, argp, 0, open_for_write);
default:
return -ENOTTY;
}
--
2.47.2
* Re: [PATCH RFC 0/8] nvme: Add Controller Data Queue to the nvme driver
2025-07-14 9:15 [PATCH RFC 0/8] nvme: Add Controller Data Queue to the nvme driver Joel Granados
` (7 preceding siblings ...)
2025-07-14 9:15 ` [PATCH RFC 8/8] nvme: Connect CDQ ioctl to nvme driver Joel Granados
@ 2025-07-14 13:02 ` Christoph Hellwig
2025-07-18 11:33 ` Joel Granados
8 siblings, 1 reply; 12+ messages in thread
From: Christoph Hellwig @ 2025-07-14 13:02 UTC (permalink / raw)
To: Joel Granados
Cc: Keith Busch, Jens Axboe, Christoph Hellwig, Sagi Grimberg,
Klaus Jensen, linux-nvme, linux-kernel
On Mon, Jul 14, 2025 at 11:15:31AM +0200, Joel Granados wrote:
> Motivation
> ==========
> The main motivation is to enable Controller Data Queues as described in
> the 2.2 revision of the NVME base specification. This series places the
> kernel as an intermediary between the NVME controller producing CDQ
> entries and the user space process consuming them. It is general enough
> to encompass different use cases that require controller initiated
> communication delivered outside the regular I/O traffic streams (like
> LBA tracking for example).
That's rather blurbish. The only use case for CDQs in NVMe 2.2 is
tracking of dirty LBAs for live migration, and the live migration
feature in 2.2 is completely broken because the hyperscalers wanted
to win a point. So for CDQs to be useful in Linux we'll need the
proper live migration support that is still under heavy development.
With that I'd very much expect the kernel to manage the CDQs just like
any other queue, and not through a random user ioctl. So what would be
the use case for a user-controlled CDQ?
* Re: [PATCH RFC 0/8] nvme: Add Controller Data Queue to the nvme driver
2025-07-14 13:02 ` [PATCH RFC 0/8] nvme: Add Controller Data Queue to the " Christoph Hellwig
@ 2025-07-18 11:33 ` Joel Granados
2025-07-21 6:26 ` Christoph Hellwig
0 siblings, 1 reply; 12+ messages in thread
From: Joel Granados @ 2025-07-18 11:33 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Keith Busch, Jens Axboe, Sagi Grimberg, Klaus Jensen, linux-nvme,
linux-kernel
On Mon, Jul 14, 2025 at 03:02:31PM +0200, Christoph Hellwig wrote:
> On Mon, Jul 14, 2025 at 11:15:31AM +0200, Joel Granados wrote:
> > Motivation
> > ==========
> > The main motivation is to enable Controller Data Queues as described in
> > the 2.2 revision of the NVME base specification. This series places the
> > kernel as an intermediary between the NVME controller producing CDQ
> > entries and the user space process consuming them. It is general enough
> > to encompass different use cases that require controller initiated
> > communication delivered outside the regular I/O traffic streams (like
> > LBA tracking for example).
Thx for the feedback. Much appreciated.
>
> That's rather blurbish. The only use case for CDQs in NVMe 2.2 is
> tracking of dirty LBAs for live migration, and the live migration
Yes, that is my understanding of nvme 2.2 as well.
> feature in 2.2 is completely broken because the hyperscalers wanted
> to win a point. So for CDQs to be useful in Linux we'll need the
> proper live migration still under heavy development. With that I'd
Do you mean in the specification body or patch series in the mailing
lists?
> very much expect the kernel to manage the CDQs just like any other
> queue, and not a random user ioctl.
This is a great segue to a question: if a CDQ is like any other queue,
what is the best way of handling the lack of CDQ submission queues?
Something like snooping all submissions for these CDQs and triggering a
CDQ consume on every submission?
I went with the ioctl as the fastest way to get it to work; I might
explore what having it as just another queue would look like.
> So what would be the use case for a user controlled CDQ?
Do you mean a hypothetical list besides LM in NVME 2.2?
Best
--
Joel Granados
* Re: [PATCH RFC 0/8] nvme: Add Controller Data Queue to the nvme driver
2025-07-18 11:33 ` Joel Granados
@ 2025-07-21 6:26 ` Christoph Hellwig
0 siblings, 0 replies; 12+ messages in thread
From: Christoph Hellwig @ 2025-07-21 6:26 UTC (permalink / raw)
To: Joel Granados
Cc: Christoph Hellwig, Keith Busch, Jens Axboe, Sagi Grimberg,
Klaus Jensen, linux-nvme, linux-kernel
On Fri, Jul 18, 2025 at 01:33:34PM +0200, Joel Granados wrote:
> > to win a point. So for CDQs to be useful in Linux we'll need the
> > proper live migration still under heavy development. With that I'd
> Do you mean in the specification body or patch series in the mailing
> lists?
Actual code. As I said I very much expect CDQ creation and usage
to be kernel driven for live migration.
> > very much expect the kernel to manage the CDQs just like any other
> > queue, and not a random user ioctl.
> This is a great segue to a question: If CDQ is like any other queue,
> what is the best way of handling the lack of CDQ submission queues?
> Something like snooping all submissions for these CDQs and triggering a
> CDQ consume on every submission?
I don't understand this question and the proposed answer at all.
> I went with the ioctl as the faster way to get it to work;
Get _what_ to work?
> I might
> explore what having it as just another queue would look like.
>
> > So what would be the use case for a user controlled CDQ?
> Do you mean a hypothetical list besides LM in NVME 2.2?
As outlined in the last two mails, I don't see how live migration would
work with user-controlled CDQs. Maybe I'm wrong, but nothing in this
thread seems to even try to explain how that would work.