From: Sagi Grimberg <sagi@grimberg.me>
To: Shivam Kumar <kumar.shivam43666@gmail.com>, hch@lst.de
Cc: mlombard@arkamax.eu, kch@nvidia.com,
linux-nvme@lists.infradead.org, kbusch@kernel.org,
gregkh@linuxfoundation.org, security@kernel.org
Subject: Re: [PATCH v3] nvmet-tcp: set and enforce a default MDTS for TCP transport
Date: Sun, 10 May 2026 23:42:48 +0300 [thread overview]
Message-ID: <d06002f7-2da6-44bd-a0df-d9f2ef87a66c@grimberg.me> (raw)
In-Reply-To: <20260508203912.320938-1-kumar.shivam43666@gmail.com>
On 08/05/2026 23:39, Shivam Kumar wrote:
> Unlike other fabrics transports, the TCP target does not set a default
> Maximum Data Transfer Size. With the configfs MDTS entry defaulting to 0
> (no limit), a remote attacker can send a CapsuleCmd with an arbitrarily
> large SGL length, causing sgl_alloc() in nvmet_tcp_map_data() to attempt
> an excessive kernel allocation that triggers the OOM killer.
>
> Set a default MDTS of 9 (2 MiB) for TCP. Enforce the limit server-side
> in nvmet_tcp_map_data() by rejecting commands whose SGL length exceeds
> the configured MDTS, returning NVME_SC_INVALID_FIELD as required by the
> NVMe specification. Admins can still adjust via the configfs mdts
> attribute if needed.
>
> Signed-off-by: Shivam Kumar <kumar.shivam43666@gmail.com>
> ---
> drivers/nvme/target/tcp.c | 10 ++++++++++
> 1 file changed, 10 insertions(+)
>
> diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
> index 164a564ba3b4..098cf877c358 100644
> --- a/drivers/nvme/target/tcp.c
> +++ b/drivers/nvme/target/tcp.c
> @@ -25,6 +25,7 @@
> #define NVMET_TCP_DEF_INLINE_DATA_SIZE (4 * PAGE_SIZE)
> #define NVMET_TCP_MAXH2CDATA 0x400000 /* 16M arbitrary limit */
> #define NVMET_TCP_BACKLOG 128
> +#define NVMET_TCP_DEF_MDTS 9 /* 2 MiB (2^(12+9)) */
>
> static int param_store_val(const char *str, int *val, int min, int max)
> {
> @@ -422,6 +423,13 @@ static int nvmet_tcp_map_data(struct nvmet_tcp_cmd *cmd)
> if (!len)
> return 0;
>
> + /* Enforce MDTS: abort commands exceeding the advertised limit */
> + if (cmd->req.port->mdts) {
> + u8 mdts = cmd->req.port->mdts;
> + if (mdts < 20 && len > (1U << (12 + mdts)))
> + return NVME_SC_INVALID_FIELD | NVME_STATUS_DNR;
> + }
> +
> if (sgl->type == ((NVME_SGL_FMT_DATA_DESC << 4) |
> NVME_SGL_FMT_OFFSET)) {
> if (!nvme_is_write(cmd->req.cmd))
> @@ -2077,6 +2085,8 @@ static int nvmet_tcp_add_port(struct nvmet_port *nport)
> INIT_WORK(&port->accept_work, nvmet_tcp_accept_work);
> if (port->nport->inline_data_size < 0)
> port->nport->inline_data_size = NVMET_TCP_DEF_INLINE_DATA_SIZE;
> + if (nport->mdts < 0)
> + nport->mdts = NVMET_TCP_DEF_MDTS;
>
> ret = sock_create(port->addr.ss_family, SOCK_STREAM,
> IPPROTO_TCP, &port->sock);
Shivam,

I think what we want is to cap the TCP transport at a sane limit,
similar to what the nvmet-rdma driver does.

Also, we probably want it to be at least consistent with the (rather
arbitrary) MAXH2CDATA...
e.g. something like:
--
diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
index 20f150d17a96..32f33e5dbfdb 100644
--- a/drivers/nvme/target/tcp.c
+++ b/drivers/nvme/target/tcp.c
@@ -24,6 +24,7 @@
#define NVMET_TCP_DEF_INLINE_DATA_SIZE (4 * PAGE_SIZE)
#define NVMET_TCP_MAXH2CDATA 0x400000 /* 16M arbitrary limit */
+#define NVMET_TCP_MAX_MDTS 12
#define NVMET_TCP_BACKLOG 128
static int param_store_val(const char *str, int *val, int min, int max)
@@ -2220,6 +2221,11 @@ static ssize_t nvmet_tcp_host_port_addr(struct nvmet_ctrl *ctrl,
(struct sockaddr *)&queue->sockaddr_peer);
}
+static u8 nvmet_tcp_get_mdts(const struct nvmet_ctrl *ctrl)
+{
+ return NVMET_TCP_MAX_MDTS;
+}
+
static const struct nvmet_fabrics_ops nvmet_tcp_ops = {
.owner = THIS_MODULE,
.type = NVMF_TRTYPE_TCP,
@@ -2231,6 +2237,7 @@ static const struct nvmet_fabrics_ops nvmet_tcp_ops = {
.install_queue = nvmet_tcp_install_queue,
.disc_traddr = nvmet_tcp_disc_port_addr,
.host_traddr = nvmet_tcp_host_port_addr,
+ .get_mdts = nvmet_tcp_get_mdts,
};
static int __init nvmet_tcp_init(void)
--
Thread overview:
[not found] <2026031805-stretch-skid-ae5b@gregkh>
2026-03-19 1:26 ` [PATCH] nvmet-tcp: bound sgl->length check in nvmet_tcp_map_data() Shivam Kumar
2026-03-19 7:59 ` Maurizio Lombardi
2026-03-19 18:00 ` Shivam Kumar
2026-03-20 7:48 ` Christoph Hellwig
2026-04-05 19:46 ` Shivam Kumar
2026-04-07 6:30 ` Christoph Hellwig
2026-04-08 6:30 ` Maurizio Lombardi
2026-04-09 6:08 ` Christoph Hellwig
2026-04-26 23:57 ` Shivam Kumar
2026-04-27 0:44 ` [PATCH v2] nvmet-tcp: set a default MDTS of 2 MiB for TCP transport Shivam Kumar
2026-04-28 6:07 ` Maurizio Lombardi
2026-04-29 1:05 ` Shivam Kumar
2026-05-08 20:39 ` [PATCH v3] nvmet-tcp: set and enforce a default MDTS " Shivam Kumar
2026-05-10 20:42 ` Sagi Grimberg [this message]
2026-05-11 6:36 ` Christoph Hellwig
2026-05-11 8:01 ` Sagi Grimberg
2026-05-11 8:04 ` Christoph Hellwig