public inbox for linux-kernel@vger.kernel.org
* [PATCH] nvmet: auth: validate dhchap id list lengths (KASAN: slab-out-of-bounds)
@ 2026-02-11  6:58 YunJe Shin
  2026-02-12  1:49 ` yunje shin
  0 siblings, 1 reply; 17+ messages in thread
From: YunJe Shin @ 2026-02-11  6:58 UTC (permalink / raw)
  To: Hannes Reinecke, Christoph Hellwig, Sagi Grimberg,
	Chaitanya Kulkarni
  Cc: Keith Busch, linux-nvme, linux-kernel, ioerts

Validate DH-HMAC-CHAP hash/DH list lengths before indexing the idlist
halves to prevent out-of-bounds reads.

KASAN report:
[   37.160829] Call Trace:
[   37.160831]  <TASK>
[   37.160832]  dump_stack_lvl+0x5f/0x80
[   37.160837]  print_report+0xd1/0x640
[   37.160842]  ? __pfx__raw_spin_lock_irqsave+0x10/0x10
[   37.160846]  ? kfree+0x137/0x390
[   37.160850]  ? kasan_complete_mode_report_info+0x2a/0x200
[   37.160854]  kasan_report+0xe5/0x120
[   37.160856]  ? nvmet_execute_auth_send+0x19a9/0x1f00
[   37.160860]  ? nvmet_execute_auth_send+0x19a9/0x1f00
[   37.160863]  __asan_report_load1_noabort+0x18/0x20
[   37.160866]  nvmet_execute_auth_send+0x19a9/0x1f00
[   37.160870]  nvmet_tcp_io_work+0x17a8/0x2720
[   37.160874]  ? __pfx_nvmet_tcp_io_work+0x10/0x10
[   37.160877]  process_one_work+0x5e9/0x1020
[   37.160881]  ? __kasan_check_write+0x18/0x20
[   37.160885]  worker_thread+0x446/0xc80
[   37.160889]  ? __pfx_worker_thread+0x10/0x10
[   37.160891]  kthread+0x2d7/0x3c0
[   37.160894]  ? __pfx_kthread+0x10/0x10
[   37.160897]  ret_from_fork+0x39f/0x5d0
[   37.160900]  ? __pfx_ret_from_fork+0x10/0x10
[   37.160903]  ? __kasan_check_read+0x15/0x20
[   37.160906]  ? __switch_to+0xb45/0xf90
[   37.160910]  ? __switch_to_asm+0x39/0x70
[   37.160914]  ? __pfx_kthread+0x10/0x10
[   37.160916]  ret_from_fork_asm+0x1a/0x30
[   37.160920]  </TASK>
[   37.160921] 
[   37.174141] Allocated by task 11:
[   37.174377]  kasan_save_stack+0x3d/0x60
[   37.174697]  kasan_save_track+0x18/0x40
[   37.175043]  kasan_save_alloc_info+0x3b/0x50
[   37.175420]  __kasan_kmalloc+0x9c/0xa0
[   37.175762]  __kmalloc_noprof+0x197/0x480
[   37.176117]  nvmet_execute_auth_send+0x39e/0x1f00
[   37.176529]  nvmet_tcp_io_work+0x17a8/0x2720
[   37.176912]  process_one_work+0x5e9/0x1020
[   37.177275]  worker_thread+0x446/0xc80
[   37.177616]  kthread+0x2d7/0x3c0
[   37.177906]  ret_from_fork+0x39f/0x5d0
[   37.178238]  ret_from_fork_asm+0x1a/0x30
[   37.178591] 
[   37.178735] The buggy address belongs to the object at ffff88800aecc800
[   37.178735]  which belongs to the cache kmalloc-96 of size 96
[   37.179790] The buggy address is located 0 bytes to the right of
[   37.179790]  allocated 72-byte region [ffff88800aecc800, ffff88800aecc848)
[   37.180931] 
[   37.181079] The buggy address belongs to the physical page:
[   37.181572] page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0xaecc
[   37.182393] flags: 0x100000000000000(node=0|zone=1)
[   37.182819] page_type: f5(slab)
[   37.183080] raw: 0100000000000000 ffff888006c41280 dead000000000122 0000000000000000
[   37.183730] raw: 0000000000000000 0000000000200020 00000000f5000000 0000000000000000
[   37.184333] page dumped because: kasan: bad access detected
[   37.184783] 
[   37.184918] Memory state around the buggy address:
[   37.185315]  ffff88800aecc700: fa fb fb fb fb fb fb fb fb fb fb fb fc fc fc fc
[   37.185835]  ffff88800aecc780: fa fb fb fb fb fb fb fb fb fb fb fb fc fc fc fc
[   37.186336] >ffff88800aecc800: 00 00 00 00 00 00 00 00 00 fc fc fc fc fc fc fc
[   37.186839]                                               ^
[   37.187255]  ffff88800aecc880: fa fb fb fb fb fb fb fb fb fb fb fb fc fc fc fc
[   37.187763]  ffff88800aecc900: fa fb fb fb fb fb fb fb fb fb fb fb fc fc fc fc
[   37.188261] ==================================================================
[   37.188938] ==================================================================

Fixes: db1312dd95488 ("nvmet: implement basic In-Band Authentication")
Signed-off-by: YunJe Shin <ioerts@kookmin.ac.kr>
---
 drivers/nvme/target/fabrics-cmd-auth.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/target/fabrics-cmd-auth.c b/drivers/nvme/target/fabrics-cmd-auth.c
index 5946681cb0e3..8ad3255aec4a 100644
--- a/drivers/nvme/target/fabrics-cmd-auth.c
+++ b/drivers/nvme/target/fabrics-cmd-auth.c
@@ -36,6 +36,7 @@ static u8 nvmet_auth_negotiate(struct nvmet_req *req, void *d)
 	struct nvmet_ctrl *ctrl = req->sq->ctrl;
 	struct nvmf_auth_dhchap_negotiate_data *data = d;
 	int i, hash_id = 0, fallback_hash_id = 0, dhgid, fallback_dhgid;
+	size_t idlist_half;
 
 	pr_debug("%s: ctrl %d qid %d: data sc_d %d napd %d authid %d halen %d dhlen %d\n",
 		 __func__, ctrl->cntlid, req->sq->qid,
@@ -72,6 +73,15 @@ static u8 nvmet_auth_negotiate(struct nvmet_req *req, void *d)
 	    NVME_AUTH_DHCHAP_AUTH_ID)
 		return NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD;
 
+	/*
+	 * idlist[0..idlist_half-1]: hash IDs
+	 * idlist[idlist_half..]: DH group IDs
+	 */
+	idlist_half = sizeof(data->auth_protocol[0].dhchap.idlist) / 2;
+	if (data->auth_protocol[0].dhchap.halen > idlist_half ||
+	    data->auth_protocol[0].dhchap.dhlen > idlist_half)
+		return NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD;
+
 	for (i = 0; i < data->auth_protocol[0].dhchap.halen; i++) {
 		u8 host_hmac_id = data->auth_protocol[0].dhchap.idlist[i];
 
@@ -98,7 +108,8 @@ static u8 nvmet_auth_negotiate(struct nvmet_req *req, void *d)
 	dhgid = -1;
 	fallback_dhgid = -1;
 	for (i = 0; i < data->auth_protocol[0].dhchap.dhlen; i++) {
-		int tmp_dhgid = data->auth_protocol[0].dhchap.idlist[i + 30];
+		int tmp_dhgid =
+			data->auth_protocol[0].dhchap.idlist[i + idlist_half];
 
 		if (tmp_dhgid != ctrl->dh_gid) {
 			dhgid = tmp_dhgid;
-- 
2.43.0


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* Re: [PATCH] nvmet: auth: validate dhchap id list lengths (KASAN: slab-out-of-bounds)
  2026-02-11  6:58 [PATCH] nvmet: auth: validate dhchap id list lengths (KASAN: slab-out-of-bounds) YunJe Shin
@ 2026-02-12  1:49 ` yunje shin
  2026-02-18  4:04   ` yunje shin
  0 siblings, 1 reply; 17+ messages in thread
From: yunje shin @ 2026-02-12  1:49 UTC (permalink / raw)
  To: Hannes Reinecke, Christoph Hellwig, Sagi Grimberg,
	Chaitanya Kulkarni
  Cc: Keith Busch, linux-nvme, linux-kernel, ioerts

The function nvmet_auth_negotiate() parses the idlist array in the
struct nvmf_auth_dhchap_protocol_descriptor payload. This array is 60
bytes and is logically divided into two 30-byte halves: the first half
for HMAC IDs and the second half for DH group IDs. The current code
uses a hardcoded +30 offset for the DH list, but does not validate
halen and dhlen against the per-half bounds. As a result, if a
malicious host sends halen or dhlen larger than 30, the loops can read
beyond the intended half of idlist, and for sufficiently large values
read past the 60-byte array into adjacent slab memory, triggering the
observed KASAN slab-out-of-bounds read.

This patch fixes the issue by:
    - Computing the half-size from sizeof(idlist) (idlist_half)
      instead of hardcoding 30
    - Validating that both halen and dhlen are within idlist_half
    - Replacing the hardcoded DH offset with idlist_half

Thanks,
Yunje Shin


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH] nvmet: auth: validate dhchap id list lengths (KASAN: slab-out-of-bounds)
  2026-02-12  1:49 ` yunje shin
@ 2026-02-18  4:04   ` yunje shin
  2026-03-08 15:09     ` yunje shin
  0 siblings, 1 reply; 17+ messages in thread
From: yunje shin @ 2026-02-18  4:04 UTC (permalink / raw)
  To: Hannes Reinecke, Christoph Hellwig, Sagi Grimberg,
	Chaitanya Kulkarni
  Cc: Keith Busch, linux-nvme, linux-kernel, ioerts

I've confirmed that the issue is still present and the KASAN
slab-out-of-bounds read is still reproducible. Please let me know if
there are any concerns or if a v2 is needed.

Thanks, Yunje Shin


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH] nvmet: auth: validate dhchap id list lengths (KASAN: slab-out-of-bounds)
  2026-02-18  4:04   ` yunje shin
@ 2026-03-08 15:09     ` yunje shin
  2026-03-09 18:04       ` Chris Leech
  0 siblings, 1 reply; 17+ messages in thread
From: yunje shin @ 2026-03-08 15:09 UTC (permalink / raw)
  To: Hannes Reinecke
  Cc: Keith Busch, Chaitanya Kulkarni, Sagi Grimberg, Christoph Hellwig,
	linux-nvme, linux-kernel, ioerts

Just following up on this patch in case it got buried.
The KASAN slab-out-of-bounds read is still reproducible on my side.
I'd appreciate any feedback.

Thanks,
Yunje Shin


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH] nvmet: auth: validate dhchap id list lengths (KASAN: slab-out-of-bounds)
  2026-03-08 15:09     ` yunje shin
@ 2026-03-09 18:04       ` Chris Leech
  2026-03-10 17:48         ` yunje shin
  0 siblings, 1 reply; 17+ messages in thread
From: Chris Leech @ 2026-03-09 18:04 UTC (permalink / raw)
  To: yunje shin
  Cc: Hannes Reinecke, Keith Busch, Chaitanya Kulkarni, Sagi Grimberg,
	Christoph Hellwig, linux-nvme, linux-kernel, ioerts

While validating halen and dhlen is a good idea, I don't understand the
reasoning behind the idlist_half calculations. idlist is a fixed-size
60-byte array, and the DH IDs always start 30 bytes in.

How did you trigger the KASAN issue?  Are you injecting an invalid
dhlen?  What is the host side?  The Linux host driver has a hard-coded
halen of 3 and dhlen of 6.

- Chris

> > > > [   37.177616]  kthread+0x2d7/0x3c0
> > > > [   37.177906]  ret_from_fork+0x39f/0x5d0
> > > > [   37.178238]  ret_from_fork_asm+0x1a/0x30
> > > > [   37.178591]
> > > > [   37.178735] The buggy address belongs to the object at ffff88800aecc800
> > > > [   37.178735]  which belongs to the cache kmalloc-96 of size 96
> > > > [   37.179790] The buggy address is located 0 bytes to the right of
> > > > [   37.179790]  allocated 72-byte region [ffff88800aecc800, ffff88800aecc848)
> > > > [   37.180931]
> > > > [   37.181079] The buggy address belongs to the physical page:
> > > > [   37.181572] page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0xaecc
> > > > [   37.182393] flags: 0x100000000000000(node=0|zone=1)
> > > > [   37.182819] page_type: f5(slab)
> > > > [   37.183080] raw: 0100000000000000 ffff888006c41280 dead000000000122 0000000000000000
> > > > [   37.183730] raw: 0000000000000000 0000000000200020 00000000f5000000 0000000000000000
> > > > [   37.184333] page dumped because: kasan: bad access detected
> > > > [   37.184783]
> > > > [   37.184918] Memory state around the buggy address:
> > > > [   37.185315]  ffff88800aecc700: fa fb fb fb fb fb fb fb fb fb fb fb fc fc fc fc
> > > > [   37.185835]  ffff88800aecc780: fa fb fb fb fb fb fb fb fb fb fb fb fc fc fc fc
> > > > [   37.186336] >ffff88800aecc800: 00 00 00 00 00 00 00 00 00 fc fc fc fc fc fc fc
> > > > [   37.186839]                                               ^
> > > > [   37.187255]  ffff88800aecc880: fa fb fb fb fb fb fb fb fb fb fb fb fc fc fc fc
> > > > [   37.187763]  ffff88800aecc900: fa fb fb fb fb fb fb fb fb fb fb fb fc fc fc fc
> > > > [   37.188261] ==================================================================
> > > > [   37.188938] ==================================================================
> > > >
> > > > Fixes: db1312dd95488 ("nvmet: implement basic In-Band Authentication")
> > > > Signed-off-by: YunJe Shin <ioerts@kookmin.ac.kr>
> > > > ---
> > > >  drivers/nvme/target/fabrics-cmd-auth.c | 13 ++++++++++++-
> > > >  1 file changed, 12 insertions(+), 1 deletion(-)
> > > >
> > > > diff --git a/drivers/nvme/target/fabrics-cmd-auth.c b/drivers/nvme/target/fabrics-cmd-auth.c
> > > > index 5946681cb0e3..8ad3255aec4a 100644
> > > > --- a/drivers/nvme/target/fabrics-cmd-auth.c
> > > > +++ b/drivers/nvme/target/fabrics-cmd-auth.c
> > > > @@ -36,6 +36,7 @@ static u8 nvmet_auth_negotiate(struct nvmet_req *req, void *d)
> > > >         struct nvmet_ctrl *ctrl = req->sq->ctrl;
> > > >         struct nvmf_auth_dhchap_negotiate_data *data = d;
> > > >         int i, hash_id = 0, fallback_hash_id = 0, dhgid, fallback_dhgid;
> > > > +       size_t idlist_half;
> > > >
> > > >         pr_debug("%s: ctrl %d qid %d: data sc_d %d napd %d authid %d halen %d dhlen %d\n",
> > > >                  __func__, ctrl->cntlid, req->sq->qid,
> > > > @@ -72,6 +73,15 @@ static u8 nvmet_auth_negotiate(struct nvmet_req *req, void *d)
> > > >             NVME_AUTH_DHCHAP_AUTH_ID)
> > > >                 return NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD;
> > > >
> > > > +       /*
> > > > +        * idlist[0..idlist_half-1]: hash IDs
> > > > +        * idlist[idlist_half..]: DH group IDs
> > > > +        */
> > > > +       idlist_half = sizeof(data->auth_protocol[0].dhchap.idlist) / 2;
> > > > +       if (data->auth_protocol[0].dhchap.halen > idlist_half ||
> > > > +           data->auth_protocol[0].dhchap.dhlen > idlist_half)
> > > > +               return NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD;
> > > > +
> > > >         for (i = 0; i < data->auth_protocol[0].dhchap.halen; i++) {
> > > >                 u8 host_hmac_id = data->auth_protocol[0].dhchap.idlist[i];
> > > >
> > > > @@ -98,7 +108,8 @@ static u8 nvmet_auth_negotiate(struct nvmet_req *req, void *d)
> > > >         dhgid = -1;
> > > >         fallback_dhgid = -1;
> > > >         for (i = 0; i < data->auth_protocol[0].dhchap.dhlen; i++) {
> > > > -               int tmp_dhgid = data->auth_protocol[0].dhchap.idlist[i + 30];
> > > > +               int tmp_dhgid =
> > > > +                       data->auth_protocol[0].dhchap.idlist[i + idlist_half];
> > > >
> > > >                 if (tmp_dhgid != ctrl->dh_gid) {
> > > >                         dhgid = tmp_dhgid;
> > > > --
> > > > 2.43.0
> > > >
> 


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH] nvmet: auth: validate dhchap id list lengths(KASAN: slab-out-of-bounds)
  2026-03-09 18:04       ` Chris Leech
@ 2026-03-10 17:48         ` yunje shin
  2026-03-10 17:52           ` yunje shin
  0 siblings, 1 reply; 17+ messages in thread
From: yunje shin @ 2026-03-10 17:48 UTC (permalink / raw)
  To: Chris Leech, Hannes Reinecke, Keith Busch
  Cc: Chaitanya Kulkarni, Sagi Grimberg, Christoph Hellwig, linux-nvme,
	linux-kernel, ioerts

Test environment:
  - Kernel: v7.0-rc3 (mainline, commit torvalds/linux v7.0-rc3)
  - Config: CONFIG_KASAN=y, CONFIG_KASAN_GENERIC=y,
            CONFIG_NVME_TARGET=y, CONFIG_NVME_TARGET_TCP=y,
            CONFIG_NVME_TARGET_AUTH=y, CONFIG_CRYPTO_DH=y,
            CONFIG_CRYPTO_HMAC=y, CONFIG_CRYPTO_SHA256=y
  - Boot:   QEMU x86_64, 4G RAM, KVM, slub_debug=FZP

KASAN report from v7.0-rc3:

[    4.240693] ==================================================================
[    4.241646] BUG: KASAN: slab-out-of-bounds in nvmet_execute_auth_send+0x19b8/0x2090
[    4.242874] Read of size 1 at addr ffff8881045754e8 by task kworker/1:1H/41
[    4.243796]
[    4.244015] CPU: 1 UID: 0 PID: 41 Comm: kworker/1:1H Not tainted 7.0.0-rc3 #2 PREEMPT(lazy)
[    4.244025] Hardware name: QEMU Ubuntu 24.04 PC v2 (i440FX + PIIX, arch_caps fix, 1996), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
[    4.244030] Workqueue: nvmet_tcp_wq nvmet_tcp_io_work
[    4.244047] Call Trace:
[    4.244065]  <TASK>
[    4.244071]  dump_stack_lvl+0x53/0x70
[    4.244110]  print_report+0xd0/0x660
[    4.244142]  ? __pfx__raw_spin_lock_irqsave+0x10/0x10
[    4.244155]  ? nvmet_execute_auth_send+0x19b8/0x2090
[    4.244160]  kasan_report+0xce/0x100
[    4.244164]  ? nvmet_execute_auth_send+0x19b8/0x2090
[    4.244170]  nvmet_execute_auth_send+0x19b8/0x2090
[    4.244176]  nvmet_tcp_io_work+0x1709/0x2200
[    4.244181]  ? srso_alias_return_thunk+0x5/0xfbef5
[    4.244196]  ? srso_alias_return_thunk+0x5/0xfbef5
[    4.244201]  ? __pfx_nvmet_tcp_io_work+0x10/0x10
[    4.244206]  process_one_work+0x5e7/0xfe0
[    4.244227]  ? srso_alias_return_thunk+0x5/0xfbef5
[    4.244231]  ? assign_work+0x11d/0x370
[    4.244235]  worker_thread+0x446/0xd00
[    4.244241]  ? __pfx_worker_thread+0x10/0x10
[    4.244246]  ? __pfx_worker_thread+0x10/0x10
[    4.244250]  kthread+0x2c6/0x3b0
[    4.244259]  ? recalc_sigpending+0x15c/0x1e0
[    4.244266]  ? __pfx_kthread+0x10/0x10
[    4.244270]  ret_from_fork+0x38d/0x5c0
[    4.244283]  ? __pfx_ret_from_fork+0x10/0x10
[    4.244287]  ? srso_alias_return_thunk+0x5/0xfbef5
[    4.244291]  ? __switch_to+0x534/0xea0
[    4.244300]  ? __switch_to_asm+0x39/0x70
[    4.244305]  ? __switch_to_asm+0x33/0x70
[    4.244309]  ? __pfx_kthread+0x10/0x10
[    4.244312]  ret_from_fork_asm+0x1a/0x30
[    4.244320]  </TASK>
[    4.244322]
[    4.261451] Allocated by task 41:
[    4.261716]  kasan_save_stack+0x33/0x60
[    4.262034]  kasan_save_track+0x14/0x30
[    4.262338]  __kasan_kmalloc+0x8f/0xa0
[    4.262634]  __kmalloc_noprof+0x18e/0x480
[    4.262960]  nvmet_execute_auth_send+0x3be/0x2090
[    4.263339]  nvmet_tcp_io_work+0x1709/0x2200
[    4.263681]  process_one_work+0x5e7/0xfe0
[    4.263997]  worker_thread+0x446/0xd00
[    4.264327]  kthread+0x2c6/0x3b0
[    4.264591]  ret_from_fork+0x38d/0x5c0
[    4.264891]  ret_from_fork_asm+0x1a/0x30
[    4.265211]
[    4.265342] The buggy address belongs to the object at ffff8881045754a0
[    4.265342]  which belongs to the cache kmalloc-96 of size 96
[    4.266291] The buggy address is located 0 bytes to the right of
[    4.266291]  allocated 72-byte region [ffff8881045754a0, ffff8881045754e8)
[    4.267277]
[    4.267408] The buggy address belongs to the physical page:
[    4.267840] page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x104575
[    4.268473] flags: 0x200000000000000(node=0|zone=2)
[    4.268855] page_type: f5(slab)
[    4.269120] raw: 0200000000000000 ffff888100042340 dead000000000100 dead000000000122
[    4.269714] raw: 0000000000000000 0000000000150015 00000000f5000000 0000000000000000
[    4.270337] page dumped because: kasan: bad access detected
[    4.270769]
[    4.270899] Memory state around the buggy address:
[    4.271284]  ffff888104575380: fc fc fc fc fc fc fc fc fc fc fc fc 00 00 00 00
[    4.271854]  ffff888104575400: 00 00 00 00 00 00 fc fc fc fc fc fc fc fc fc fc
[    4.272418] >ffff888104575480: fc fc fc fc 00 00 00 00 00 00 00 00 00 fc fc fc
[    4.272971]                                                           ^
[    4.273488]  ffff888104575500: fc fc fc fc fc fc fc fc fc fc fc fc fa fb fb fb
[    4.274053]  ffff888104575580: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
[    4.275336] Disabling lock debugging due to kernel taint

On Tue, Mar 10, 2026 at 3:04 AM Chris Leech <cleech@redhat.com> wrote:
>
> While validating halen and dhlen is a good idea, I don't understand the
> reasoning behind the idlist_half calculations. idlist is a fixed-size
> 60-byte array, and the DH IDs always start 30 bytes in.
>
> How did you trigger the KASAN issue?  Are you injecting an invalid
> dhlen?  What is the host side, as the linux host driver has a hard coded
> halen of 3 and dhlen of 6.
>
> - Chris
>
> On Mon, Mar 09, 2026 at 12:09:01AM +0900, yunje shin wrote:
> > Just following up on this patch in case it got buried.
> > The KASAN slab-out-of-bounds read is still reproducible on my side.
> > I'd appreciate any feedback.
> >
> > Thanks,
> > Yunje Shin
> >
> > On Wed, Feb 18, 2026 at 1:04 PM yunje shin <yjshin0438@gmail.com> wrote:
> > >
> > > I've confirmed that the issue is still present and the KASAN
> > > slab-out-of-bounds read is still reproducible. Please let me know if
> > > there are any concerns or if a v2 is needed.
> > >
> > > Thanks, Yunje Shin
> > >
> > > On Thu, Feb 12, 2026 at 10:49 AM yunje shin <yjshin0438@gmail.com> wrote:
> > > >
> > > > The function nvmet_auth_negotiate() parses the idlist array in the
> > > > struct nvmf_auth_dhchap_protocol_descriptor payload. This array is 60
> > > > bytes and is logically divided into two 30-byte halves: the first half
> > > > for HMAC IDs and the second half for DH group IDs. The current code
> > > > uses a hardcoded +30 offset for the DH list, but does not validate
> > > > halen and dhlen against the per-half bounds. As a result, if a
> > > > malicious host sends halen or dhlen larger than 30, the loops can read
> > > > beyond the intended half of idlist, and for sufficiently large values
> > > > read past the 60-byte array into adjacent slab memory, triggering the
> > > > observed KASAN slab-out-of-bounds read.
> > > >
> > > > This patch fixes the issue by:
> > > >     - Computing the half-size from sizeof(idlist) (idlist_half)
> > > > instead of hardcoding 30
> > > >     - Validating both halen and dhlen are within idlist_half
> > > >     - Replacing the hardcoded DH offset with idlist_half
> > > >
> > > > Thanks,
> > > > Yunje Shin
> > > >
> > > > On Wed, Feb 11, 2026 at 3:59 PM YunJe Shin <yjshin0438@gmail.com> wrote:
> > > > >
> > > > > Validate DH-HMAC-CHAP hash/DH list lengths before indexing the idlist halves to prevent out-of-bounds reads.
> > > > >
> > > > > KASAN report:
> > > > > [   37.160829] Call Trace:
> > > > > [   37.160831]  <TASK>
> > > > > [   37.160832]  dump_stack_lvl+0x5f/0x80
> > > > > [   37.160837]  print_report+0xd1/0x640
> > > > > [   37.160842]  ? __pfx__raw_spin_lock_irqsave+0x10/0x10
> > > > > [   37.160846]  ? kfree+0x137/0x390
> > > > > [   37.160850]  ? kasan_complete_mode_report_info+0x2a/0x200
> > > > > [   37.160854]  kasan_report+0xe5/0x120
> > > > > [   37.160856]  ? nvmet_execute_auth_send+0x19a9/0x1f00
> > > > > [   37.160860]  ? nvmet_execute_auth_send+0x19a9/0x1f00
> > > > > [   37.160863]  __asan_report_load1_noabort+0x18/0x20
> > > > > [   37.160866]  nvmet_execute_auth_send+0x19a9/0x1f00
> > > > > [   37.160870]  nvmet_tcp_io_work+0x17a8/0x2720
> > > > > [   37.160874]  ? __pfx_nvmet_tcp_io_work+0x10/0x10
> > > > > [   37.160877]  process_one_work+0x5e9/0x1020
> > > > > [   37.160881]  ? __kasan_check_write+0x18/0x20
> > > > > [   37.160885]  worker_thread+0x446/0xc80
> > > > > [   37.160889]  ? __pfx_worker_thread+0x10/0x10
> > > > > [   37.160891]  kthread+0x2d7/0x3c0
> > > > > [   37.160894]  ? __pfx_kthread+0x10/0x10
> > > > > [   37.160897]  ret_from_fork+0x39f/0x5d0
> > > > > [   37.160900]  ? __pfx_ret_from_fork+0x10/0x10
> > > > > [   37.160903]  ? __kasan_check_read+0x15/0x20
> > > > > [   37.160906]  ? __switch_to+0xb45/0xf90
> > > > > [   37.160910]  ? __switch_to_asm+0x39/0x70
> > > > > [   37.160914]  ? __pfx_kthread+0x10/0x10
> > > > > [   37.160916]  ret_from_fork_asm+0x1a/0x30
> > > > > [   37.160920]  </TASK>
> > > > > [   37.160921]
> > > > > [   37.174141] Allocated by task 11:
> > > > > [   37.174377]  kasan_save_stack+0x3d/0x60
> > > > > [   37.174697]  kasan_save_track+0x18/0x40
> > > > > [   37.175043]  kasan_save_alloc_info+0x3b/0x50
> > > > > [   37.175420]  __kasan_kmalloc+0x9c/0xa0
> > > > > [   37.175762]  __kmalloc_noprof+0x197/0x480
> > > > > [   37.176117]  nvmet_execute_auth_send+0x39e/0x1f00
> > > > > [   37.176529]  nvmet_tcp_io_work+0x17a8/0x2720
> > > > > [   37.176912]  process_one_work+0x5e9/0x1020
> > > > > [   37.177275]  worker_thread+0x446/0xc80
> > > > > [   37.177616]  kthread+0x2d7/0x3c0
> > > > > [   37.177906]  ret_from_fork+0x39f/0x5d0
> > > > > [   37.178238]  ret_from_fork_asm+0x1a/0x30
> > > > > [   37.178591]
> > > > > [   37.178735] The buggy address belongs to the object at ffff88800aecc800
> > > > > [   37.178735]  which belongs to the cache kmalloc-96 of size 96
> > > > > [   37.179790] The buggy address is located 0 bytes to the right of
> > > > > [   37.179790]  allocated 72-byte region [ffff88800aecc800, ffff88800aecc848)
> > > > > [   37.180931]
> > > > > [   37.181079] The buggy address belongs to the physical page:
> > > > > [   37.181572] page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0xaecc
> > > > > [   37.182393] flags: 0x100000000000000(node=0|zone=1)
> > > > > [   37.182819] page_type: f5(slab)
> > > > > [   37.183080] raw: 0100000000000000 ffff888006c41280 dead000000000122 0000000000000000
> > > > > [   37.183730] raw: 0000000000000000 0000000000200020 00000000f5000000 0000000000000000
> > > > > [   37.184333] page dumped because: kasan: bad access detected
> > > > > [   37.184783]
> > > > > [   37.184918] Memory state around the buggy address:
> > > > > [   37.185315]  ffff88800aecc700: fa fb fb fb fb fb fb fb fb fb fb fb fc fc fc fc
> > > > > [   37.185835]  ffff88800aecc780: fa fb fb fb fb fb fb fb fb fb fb fb fc fc fc fc
> > > > > [   37.186336] >ffff88800aecc800: 00 00 00 00 00 00 00 00 00 fc fc fc fc fc fc fc
> > > > > [   37.186839]                                               ^
> > > > > [   37.187255]  ffff88800aecc880: fa fb fb fb fb fb fb fb fb fb fb fb fc fc fc fc
> > > > > [   37.187763]  ffff88800aecc900: fa fb fb fb fb fb fb fb fb fb fb fb fc fc fc fc
> > > > > [   37.188261] ==================================================================
> > > > > [   37.188938] ==================================================================
> > > > >
> > > > > Fixes: db1312dd95488 ("nvmet: implement basic In-Band Authentication")
> > > > > Signed-off-by: YunJe Shin <ioerts@kookmin.ac.kr>
> > > > > ---
> > > > >  drivers/nvme/target/fabrics-cmd-auth.c | 13 ++++++++++++-
> > > > >  1 file changed, 12 insertions(+), 1 deletion(-)
> > > > >
> > > > > diff --git a/drivers/nvme/target/fabrics-cmd-auth.c b/drivers/nvme/target/fabrics-cmd-auth.c
> > > > > index 5946681cb0e3..8ad3255aec4a 100644
> > > > > --- a/drivers/nvme/target/fabrics-cmd-auth.c
> > > > > +++ b/drivers/nvme/target/fabrics-cmd-auth.c
> > > > > @@ -36,6 +36,7 @@ static u8 nvmet_auth_negotiate(struct nvmet_req *req, void *d)
> > > > >         struct nvmet_ctrl *ctrl = req->sq->ctrl;
> > > > >         struct nvmf_auth_dhchap_negotiate_data *data = d;
> > > > >         int i, hash_id = 0, fallback_hash_id = 0, dhgid, fallback_dhgid;
> > > > > +       size_t idlist_half;
> > > > >
> > > > >         pr_debug("%s: ctrl %d qid %d: data sc_d %d napd %d authid %d halen %d dhlen %d\n",
> > > > >                  __func__, ctrl->cntlid, req->sq->qid,
> > > > > @@ -72,6 +73,15 @@ static u8 nvmet_auth_negotiate(struct nvmet_req *req, void *d)
> > > > >             NVME_AUTH_DHCHAP_AUTH_ID)
> > > > >                 return NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD;
> > > > >
> > > > > +       /*
> > > > > +        * idlist[0..idlist_half-1]: hash IDs
> > > > > +        * idlist[idlist_half..]: DH group IDs
> > > > > +        */
> > > > > +       idlist_half = sizeof(data->auth_protocol[0].dhchap.idlist) / 2;
> > > > > +       if (data->auth_protocol[0].dhchap.halen > idlist_half ||
> > > > > +           data->auth_protocol[0].dhchap.dhlen > idlist_half)
> > > > > +               return NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD;
> > > > > +
> > > > >         for (i = 0; i < data->auth_protocol[0].dhchap.halen; i++) {
> > > > >                 u8 host_hmac_id = data->auth_protocol[0].dhchap.idlist[i];
> > > > >
> > > > > @@ -98,7 +108,8 @@ static u8 nvmet_auth_negotiate(struct nvmet_req *req, void *d)
> > > > >         dhgid = -1;
> > > > >         fallback_dhgid = -1;
> > > > >         for (i = 0; i < data->auth_protocol[0].dhchap.dhlen; i++) {
> > > > > -               int tmp_dhgid = data->auth_protocol[0].dhchap.idlist[i + 30];
> > > > > +               int tmp_dhgid =
> > > > > +                       data->auth_protocol[0].dhchap.idlist[i + idlist_half];
> > > > >
> > > > >                 if (tmp_dhgid != ctrl->dh_gid) {
> > > > >                         dhgid = tmp_dhgid;
> > > > > --
> > > > > 2.43.0
> > > > >
> >
>

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH] nvmet: auth: validate dhchap id list lengths(KASAN: slab-out-of-bounds)
  2026-03-10 17:48         ` yunje shin
@ 2026-03-10 17:52           ` yunje shin
  2026-03-10 18:07             ` Chris Leech
  0 siblings, 1 reply; 17+ messages in thread
From: yunje shin @ 2026-03-10 17:52 UTC (permalink / raw)
  To: Chris Leech, Hannes Reinecke, Keith Busch
  Cc: Chaitanya Kulkarni, Sagi Grimberg, Christoph Hellwig, linux-nvme,
	linux-kernel, ioerts

Thanks for the review.

Yes, I triggered the KASAN issue by injecting an invalid dhlen. The
reproduction steps are:
1. Connect to the NVMe/TCP target on port 4420 (ICReq + Fabrics CONNECT).
2. Send AUTH_SEND with a crafted NEGOTIATE payload where dhlen=200.
3. The kernel target code in nvmet_auth_negotiate() then iterates

    for (i = 0; i < data->auth_protocol[0].dhchap.dhlen; i++) {
        int tmp_dhgid = data->auth_protocol[0].dhchap.idlist[i + 30];

With dhlen=200, this reads idlist[30..229], but idlist[] is only 60
bytes (indices 0..59). The accesses at indices 60 and beyond read past
the kmalloc'd slab object into adjacent slab memory, which KASAN
catches as a slab-out-of-bounds read.

While the standard Linux NVMe host driver does use hardcoded halen=3
and dhlen=6, the NVMe target is network-facing and must validate all
fields from the wire. A malicious or non-standard host can send
arbitrary values. The same applies to halen — if halen > 30, the
first loop also reads out of bounds.

Regarding idlist_half — yes, idlist is currently a fixed 60-byte array
and the DH offset is always 30. I derived it from sizeof(idlist)
rather than hardcoding 30 so that the bounds check and the DH offset
stay consistent with the array definition. If the struct ever changes,
the validation adapts automatically instead of silently going stale.

Thanks,
Yunje Shin

On Wed, Mar 11, 2026 at 2:48 AM yunje shin <yjshin0438@gmail.com> wrote:
>
> Test environment:
>   - Kernel: v7.0-rc3 (mainline, commit torvalds/linux v7.0-rc3)
>   - Config: CONFIG_KASAN=y, CONFIG_KASAN_GENERIC=y,
>             CONFIG_NVME_TARGET=y, CONFIG_NVME_TARGET_TCP=y,
>             CONFIG_NVME_TARGET_AUTH=y, CONFIG_CRYPTO_DH=y,
>             CONFIG_CRYPTO_HMAC=y, CONFIG_CRYPTO_SHA256=y
>   - Boot:   QEMU x86_64, 4G RAM, KVM, slub_debug=FZP
>
> KASAN report from v7.0-rc3:
>
> [    4.240693] ==================================================================
> [    4.241646] BUG: KASAN: slab-out-of-bounds in
> nvmet_execute_auth_send+0x19b8/0x2090
> [    4.242874] Read of size 1 at addr ffff8881045754e8 by task kworker/1:1H/41
> [    4.243796]
> [    4.244015] CPU: 1 UID: 0 PID: 41 Comm: kworker/1:1H Not tainted
> 7.0.0-rc3 #2 PREEMPT(lazy)
> [    4.244025] Hardware name: QEMU Ubuntu 24.04 PC v2 (i440FX + PIIX,
> arch_caps fix, 1996), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
> [    4.244030] Workqueue: nvmet_tcp_wq nvmet_tcp_io_work
> [    4.244047] Call Trace:
> [    4.244065]  <TASK>
> [    4.244071]  dump_stack_lvl+0x53/0x70
> [    4.244110]  print_report+0xd0/0x660
> [    4.244142]  ? __pfx__raw_spin_lock_irqsave+0x10/0x10
> [    4.244155]  ? nvmet_execute_auth_send+0x19b8/0x2090
> [    4.244160]  kasan_report+0xce/0x100
> [    4.244164]  ? nvmet_execute_auth_send+0x19b8/0x2090
> [    4.244170]  nvmet_execute_auth_send+0x19b8/0x2090
> [    4.244176]  nvmet_tcp_io_work+0x1709/0x2200
> [    4.244181]  ? srso_alias_return_thunk+0x5/0xfbef5
> [    4.244196]  ? srso_alias_return_thunk+0x5/0xfbef5
> [    4.244201]  ? __pfx_nvmet_tcp_io_work+0x10/0x10
> [    4.244206]  process_one_work+0x5e7/0xfe0
> [    4.244227]  ? srso_alias_return_thunk+0x5/0xfbef5
> [    4.244231]  ? assign_work+0x11d/0x370
> [    4.244235]  worker_thread+0x446/0xd00
> [    4.244241]  ? __pfx_worker_thread+0x10/0x10
> [    4.244246]  ? __pfx_worker_thread+0x10/0x10
> [    4.244250]  kthread+0x2c6/0x3b0
> [    4.244259]  ? recalc_sigpending+0x15c/0x1e0
> [    4.244266]  ? __pfx_kthread+0x10/0x10
> [    4.244270]  ret_from_fork+0x38d/0x5c0
> [    4.244283]  ? __pfx_ret_from_fork+0x10/0x10
> [    4.244287]  ? srso_alias_return_thunk+0x5/0xfbef5
> [    4.244291]  ? __switch_to+0x534/0xea0
> [    4.244300]  ? __switch_to_asm+0x39/0x70
> [    4.244305]  ? __switch_to_asm+0x33/0x70
> [    4.244309]  ? __pfx_kthread+0x10/0x10
> [    4.244312]  ret_from_fork_asm+0x1a/0x30
> [    4.244320]  </TASK>
> [    4.244322]
> [    4.261451] Allocated by task 41:
> [    4.261716]  kasan_save_stack+0x33/0x60
> [    4.262034]  kasan_save_track+0x14/0x30
> [    4.262338]  __kasan_kmalloc+0x8f/0xa0
> [    4.262634]  __kmalloc_noprof+0x18e/0x480
> [    4.262960]  nvmet_execute_auth_send+0x3be/0x2090
> [    4.263339]  nvmet_tcp_io_work+0x1709/0x2200
> [    4.263681]  process_one_work+0x5e7/0xfe0
> [    4.263997]  worker_thread+0x446/0xd00
> [    4.264327]  kthread+0x2c6/0x3b0
> [    4.264591]  ret_from_fork+0x38d/0x5c0
> [    4.264891]  ret_from_fork_asm+0x1a/0x30
> [    4.265211]
> [    4.265342] The buggy address belongs to the object at ffff8881045754a0
> [    4.265342]  which belongs to the cache kmalloc-96 of size 96
> [    4.266291] The buggy address is located 0 bytes to the right of
> [    4.266291]  allocated 72-byte region [ffff8881045754a0, ffff8881045754e8)
> [    4.267277]
> [    4.267408] The buggy address belongs to the physical page:
> [    4.267840] page: refcount:0 mapcount:0 mapping:0000000000000000
> index:0x0 pfn:0x104575
> [    4.268473] flags: 0x200000000000000(node=0|zone=2)
> [    4.268855] page_type: f5(slab)
> [    4.269120] raw: 0200000000000000 ffff888100042340 dead000000000100
> dead000000000122
> [    4.269714] raw: 0000000000000000 0000000000150015 00000000f5000000
> 0000000000000000
> [    4.270337] page dumped because: kasan: bad access detected
> [    4.270769]
> [    4.270899] Memory state around the buggy address:
> [    4.271284]  ffff888104575380: fc fc fc fc fc fc fc fc fc fc fc fc
> 00 00 00 00
> [    4.271854]  ffff888104575400: 00 00 00 00 00 00 fc fc fc fc fc fc
> fc fc fc fc
> [    4.272418] >ffff888104575480: fc fc fc fc 00 00 00 00 00 00 00 00
> 00 fc fc fc
> [    4.272971]                                                           ^
> [    4.273488]  ffff888104575500: fc fc fc fc fc fc fc fc fc fc fc fc
> fa fb fb fb
> [    4.274053]  ffff888104575580: fb fb fb fb fb fb fb fb fc fc fc fc
> fc fc fc fc
> [    4.274607] ==================================================================
> [    4.275336] Disabling lock debugging due to kernel taint
>
> On Tue, Mar 10, 2026 at 3:04 AM Chris Leech <cleech@redhat.com> wrote:
> >
> > While validating halen and dhlen is a good idea, I don't understand the
> > reasoning behind the idlist_half calculations. idlist is a fixed-size
> > 60-byte array, and the DH IDs always start 30 bytes in.
> >
> > How did you trigger the KASAN issue?  Are you injecting an invalid
> > dhlen?  What is the host side, as the linux host driver has a hard coded
> > halen of 3 and dhlen of 6.
> >
> > - Chris
> >
> > On Mon, Mar 09, 2026 at 12:09:01AM +0900, yunje shin wrote:
> > > Just following up on this patch in case it got buried.
> > > The KASAN slab-out-of-bounds read is still reproducible on my side.
> > > I'd appreciate any feedback.
> > >
> > > Thanks,
> > > Yunje Shin
> > >
> > > On Wed, Feb 18, 2026 at 1:04 PM yunje shin <yjshin0438@gmail.com> wrote:
> > > >
> > > > I've confirmed that the issue is still present and the KASAN
> > > > slab-out-of-bounds read is still reproducible. Please let me know if
> > > > there are any concerns or if a v2 is needed.
> > > >
> > > > Thanks, Yunje Shin
> > > >
> > > > On Thu, Feb 12, 2026 at 10:49 AM yunje shin <yjshin0438@gmail.com> wrote:
> > > > >
> > > > > The function nvmet_auth_negotiate() parses the idlist array in the
> > > > > struct nvmf_auth_dhchap_protocol_descriptor payload. This array is 60
> > > > > bytes and is logically divided into two 30-byte halves: the first half
> > > > > for HMAC IDs and the second half for DH group IDs. The current code
> > > > > uses a hardcoded +30 offset for the DH list, but does not validate
> > > > > halen and dhlen against the per-half bounds. As a result, if a
> > > > > malicious host sends halen or dhlen larger than 30, the loops can read
> > > > > beyond the intended half of idlist, and for sufficiently large values
> > > > > read past the 60-byte array into adjacent slab memory, triggering the
> > > > > observed KASAN slab-out-of-bounds read.
> > > > >
> > > > > This patch fixes the issue by:
> > > > >     - Computing the half-size from sizeof(idlist) (idlist_half)
> > > > > instead of hardcoding 30
> > > > >     - Validating both halen and dhlen are within idlist_half
> > > > >     - Replacing the hardcoded DH offset with idlist_half
> > > > >
> > > > > Thanks,
> > > > > Yunje Shin
> > > > >
> > > > > On Wed, Feb 11, 2026 at 3:59 PM YunJe Shin <yjshin0438@gmail.com> wrote:
> > > > > >
> > > > > > Validate DH-HMAC-CHAP hash/DH list lengths before indexing the idlist halves to prevent out-of-bounds reads.
> > > > > >
> > > > > > KASAN report:
> > > > > > [   37.160829] Call Trace:
> > > > > > [   37.160831]  <TASK>
> > > > > > [   37.160832]  dump_stack_lvl+0x5f/0x80
> > > > > > [   37.160837]  print_report+0xd1/0x640
> > > > > > [   37.160842]  ? __pfx__raw_spin_lock_irqsave+0x10/0x10
> > > > > > [   37.160846]  ? kfree+0x137/0x390
> > > > > > [   37.160850]  ? kasan_complete_mode_report_info+0x2a/0x200
> > > > > > [   37.160854]  kasan_report+0xe5/0x120
> > > > > > [   37.160856]  ? nvmet_execute_auth_send+0x19a9/0x1f00
> > > > > > [   37.160860]  ? nvmet_execute_auth_send+0x19a9/0x1f00
> > > > > > [   37.160863]  __asan_report_load1_noabort+0x18/0x20
> > > > > > [   37.160866]  nvmet_execute_auth_send+0x19a9/0x1f00
> > > > > > [   37.160870]  nvmet_tcp_io_work+0x17a8/0x2720
> > > > > > [   37.160874]  ? __pfx_nvmet_tcp_io_work+0x10/0x10
> > > > > > [   37.160877]  process_one_work+0x5e9/0x1020
> > > > > > [   37.160881]  ? __kasan_check_write+0x18/0x20
> > > > > > [   37.160885]  worker_thread+0x446/0xc80
> > > > > > [   37.160889]  ? __pfx_worker_thread+0x10/0x10
> > > > > > [   37.160891]  kthread+0x2d7/0x3c0
> > > > > > [   37.160894]  ? __pfx_kthread+0x10/0x10
> > > > > > [   37.160897]  ret_from_fork+0x39f/0x5d0
> > > > > > [   37.160900]  ? __pfx_ret_from_fork+0x10/0x10
> > > > > > [   37.160903]  ? __kasan_check_read+0x15/0x20
> > > > > > [   37.160906]  ? __switch_to+0xb45/0xf90
> > > > > > [   37.160910]  ? __switch_to_asm+0x39/0x70
> > > > > > [   37.160914]  ? __pfx_kthread+0x10/0x10
> > > > > > [   37.160916]  ret_from_fork_asm+0x1a/0x30
> > > > > > [   37.160920]  </TASK>
> > > > > > [   37.160921]
> > > > > > [   37.174141] Allocated by task 11:
> > > > > > [   37.174377]  kasan_save_stack+0x3d/0x60
> > > > > > [   37.174697]  kasan_save_track+0x18/0x40
> > > > > > [   37.175043]  kasan_save_alloc_info+0x3b/0x50
> > > > > > [   37.175420]  __kasan_kmalloc+0x9c/0xa0
> > > > > > [   37.175762]  __kmalloc_noprof+0x197/0x480
> > > > > > [   37.176117]  nvmet_execute_auth_send+0x39e/0x1f00
> > > > > > [   37.176529]  nvmet_tcp_io_work+0x17a8/0x2720
> > > > > > [   37.176912]  process_one_work+0x5e9/0x1020
> > > > > > [   37.177275]  worker_thread+0x446/0xc80
> > > > > > [   37.177616]  kthread+0x2d7/0x3c0
> > > > > > [   37.177906]  ret_from_fork+0x39f/0x5d0
> > > > > > [   37.178238]  ret_from_fork_asm+0x1a/0x30
> > > > > > [   37.178591]
> > > > > > [   37.178735] The buggy address belongs to the object at ffff88800aecc800
> > > > > > [   37.178735]  which belongs to the cache kmalloc-96 of size 96
> > > > > > [   37.179790] The buggy address is located 0 bytes to the right of
> > > > > > [   37.179790]  allocated 72-byte region [ffff88800aecc800, ffff88800aecc848)
> > > > > > [   37.180931]
> > > > > > [   37.181079] The buggy address belongs to the physical page:
> > > > > > [   37.181572] page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0xaecc
> > > > > > [   37.182393] flags: 0x100000000000000(node=0|zone=1)
> > > > > > [   37.182819] page_type: f5(slab)
> > > > > > [   37.183080] raw: 0100000000000000 ffff888006c41280 dead000000000122 0000000000000000
> > > > > > [   37.183730] raw: 0000000000000000 0000000000200020 00000000f5000000 0000000000000000
> > > > > > [   37.184333] page dumped because: kasan: bad access detected
> > > > > > [   37.184783]
> > > > > > [   37.184918] Memory state around the buggy address:
> > > > > > [   37.185315]  ffff88800aecc700: fa fb fb fb fb fb fb fb fb fb fb fb fc fc fc fc
> > > > > > [   37.185835]  ffff88800aecc780: fa fb fb fb fb fb fb fb fb fb fb fb fc fc fc fc
> > > > > > [   37.186336] >ffff88800aecc800: 00 00 00 00 00 00 00 00 00 fc fc fc fc fc fc fc
> > > > > > [   37.186839]                                               ^
> > > > > > [   37.187255]  ffff88800aecc880: fa fb fb fb fb fb fb fb fb fb fb fb fc fc fc fc
> > > > > > [   37.187763]  ffff88800aecc900: fa fb fb fb fb fb fb fb fb fb fb fb fc fc fc fc
> > > > > > [   37.188261] ==================================================================
> > > > > > [   37.188938] ==================================================================
> > > > > >
> > > > > > Fixes: db1312dd95488 ("nvmet: implement basic In-Band Authentication")
> > > > > > Signed-off-by: YunJe Shin <ioerts@kookmin.ac.kr>
> > > > > > ---
> > > > > >  drivers/nvme/target/fabrics-cmd-auth.c | 13 ++++++++++++-
> > > > > >  1 file changed, 12 insertions(+), 1 deletion(-)
> > > > > >
> > > > > > diff --git a/drivers/nvme/target/fabrics-cmd-auth.c b/drivers/nvme/target/fabrics-cmd-auth.c
> > > > > > index 5946681cb0e3..8ad3255aec4a 100644
> > > > > > --- a/drivers/nvme/target/fabrics-cmd-auth.c
> > > > > > +++ b/drivers/nvme/target/fabrics-cmd-auth.c
> > > > > > @@ -36,6 +36,7 @@ static u8 nvmet_auth_negotiate(struct nvmet_req *req, void *d)
> > > > > >         struct nvmet_ctrl *ctrl = req->sq->ctrl;
> > > > > >         struct nvmf_auth_dhchap_negotiate_data *data = d;
> > > > > >         int i, hash_id = 0, fallback_hash_id = 0, dhgid, fallback_dhgid;
> > > > > > +       size_t idlist_half;
> > > > > >
> > > > > >         pr_debug("%s: ctrl %d qid %d: data sc_d %d napd %d authid %d halen %d dhlen %d\n",
> > > > > >                  __func__, ctrl->cntlid, req->sq->qid,
> > > > > > @@ -72,6 +73,15 @@ static u8 nvmet_auth_negotiate(struct nvmet_req *req, void *d)
> > > > > >             NVME_AUTH_DHCHAP_AUTH_ID)
> > > > > >                 return NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD;
> > > > > >
> > > > > > +       /*
> > > > > > +        * idlist[0..idlist_half-1]: hash IDs
> > > > > > +        * idlist[idlist_half..]: DH group IDs
> > > > > > +        */
> > > > > > +       idlist_half = sizeof(data->auth_protocol[0].dhchap.idlist) / 2;
> > > > > > +       if (data->auth_protocol[0].dhchap.halen > idlist_half ||
> > > > > > +           data->auth_protocol[0].dhchap.dhlen > idlist_half)
> > > > > > +               return NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD;
> > > > > > +
> > > > > >         for (i = 0; i < data->auth_protocol[0].dhchap.halen; i++) {
> > > > > >                 u8 host_hmac_id = data->auth_protocol[0].dhchap.idlist[i];
> > > > > >
> > > > > > @@ -98,7 +108,8 @@ static u8 nvmet_auth_negotiate(struct nvmet_req *req, void *d)
> > > > > >         dhgid = -1;
> > > > > >         fallback_dhgid = -1;
> > > > > >         for (i = 0; i < data->auth_protocol[0].dhchap.dhlen; i++) {
> > > > > > -               int tmp_dhgid = data->auth_protocol[0].dhchap.idlist[i + 30];
> > > > > > +               int tmp_dhgid =
> > > > > > +                       data->auth_protocol[0].dhchap.idlist[i + idlist_half];
> > > > > >
> > > > > >                 if (tmp_dhgid != ctrl->dh_gid) {
> > > > > >                         dhgid = tmp_dhgid;
> > > > > > --
> > > > > > 2.43.0
> > > > > >
> > >
> >

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH] nvmet: auth: validate dhchap id list lengths(KASAN: slab-out-of-bounds)
  2026-03-10 17:52           ` yunje shin
@ 2026-03-10 18:07             ` Chris Leech
  2026-03-10 19:06               ` yunje shin
  0 siblings, 1 reply; 17+ messages in thread
From: Chris Leech @ 2026-03-10 18:07 UTC (permalink / raw)
  To: yunje shin
  Cc: Hannes Reinecke, Keith Busch, Chaitanya Kulkarni, Sagi Grimberg,
	Christoph Hellwig, linux-nvme, linux-kernel, ioerts

On Wed, Mar 11, 2026 at 02:52:36AM +0900, yunje shin wrote:
> Thanks for the review.
> 
> Yes, I triggered the KASAN issue by injecting an invalid dhlen. The
> reproduction steps are:
> 1. Connect to the NVMe/TCP target on port 4420 (ICReq + Fabrics CONNECT).
> 2. Send AUTH_SEND with a crafted NEGOTIATE payload where dhlen=200.
> 3. The kernel target code in nvmet_auth_negotiate() then iterates
> 
>     for (i = 0; i < data->auth_protocol[0].dhchap.dhlen; i++) {
>         int tmp_dhgid = data->auth_protocol[0].dhchap.idlist[i + 30];
> 
> With dhlen=200, this reads idlist[30..229], but idlist[] is only 60
> bytes (indices 0..59). The accesses at indices 60 and beyond read past
> the kmalloc'd slab object into adjacent slab memory, which KASAN
> catches as a slab-out-of-bounds read.

Thank you, I appreciate understanding how this was triggered.
 
> While the standard Linux NVMe host driver does use hardcoded halen=3
> and dhlen=6, the NVMe target is network-facing and must validate all
> fields from the wire. A malicious or non-standard host can send
> arbitrary values. The same applies to halen — if halen > 30, the
> first loop also reads out of bounds.

Yes, this code absolutely should validate halen and dhlen bounds.

> Regarding idlist_half — yes, idlist is currently a fixed 60-byte array
> and the DH offset is always 30. I derived it from sizeof(idlist)
> rather than hardcoding 30 so that the bounds check and the DH offset
> stay consistent with the array definition. If the struct ever changes,
> the validation adapts automatically instead of silently going stale.

The 60-byte idlist (and 30:30 split) are part of the NVMe specification.
It's the maximum amount of space while keeping to a 64-byte struct.

I'd rather see this made clearer with a define for the limit, but not
adding code that appears to calculate it at runtime.

Thanks,
- Chris



* Re: [PATCH] nvmet: auth: validate dhchap id list lengths(KASAN: slab-out-of-bounds)
  2026-03-10 18:07             ` Chris Leech
@ 2026-03-10 19:06               ` yunje shin
  2026-03-10 20:34                 ` Chris Leech
  2026-03-12  7:01                 ` Hannes Reinecke
  0 siblings, 2 replies; 17+ messages in thread
From: yunje shin @ 2026-03-10 19:06 UTC (permalink / raw)
  To: Chris Leech
  Cc: Hannes Reinecke, Keith Busch, Chaitanya Kulkarni, Sagi Grimberg,
	Christoph Hellwig, linux-nvme, linux-kernel, ioerts

Thank you for the clarification regarding the 64-byte structural
constraints. If this approach looks good to you, I will format it
properly with an updated commit message and send out a formal v2
patch.

diff --git a/drivers/nvme/target/fabrics-cmd-auth.c b/drivers/nvme/target/fabrics-cmd-auth.c
index 5946681cb0e3..acba4878a873 100644
--- a/drivers/nvme/target/fabrics-cmd-auth.c
+++ b/drivers/nvme/target/fabrics-cmd-auth.c
@@ -72,6 +72,14 @@ static u8 nvmet_auth_negotiate(struct nvmet_req *req, void *d)
 	    NVME_AUTH_DHCHAP_AUTH_ID)
 		return NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD;
 
+	/*
+	 * idlist[0..29]: hash IDs
+	 * idlist[30..59]: DH group IDs
+	 */
+	if (data->auth_protocol[0].dhchap.halen > NVME_AUTH_DHCHAP_MAX_HASH_IDS ||
+	    data->auth_protocol[0].dhchap.dhlen > NVME_AUTH_DHCHAP_MAX_DH_IDS)
+		return NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD;
+
 	for (i = 0; i < data->auth_protocol[0].dhchap.halen; i++) {
 		u8 host_hmac_id = data->auth_protocol[0].dhchap.idlist[i];
 
@@ -97,7 +105,7 @@ static u8 nvmet_auth_negotiate(struct nvmet_req *req, void *d)
 	dhgid = -1;
 	fallback_dhgid = -1;
 	for (i = 0; i < data->auth_protocol[0].dhchap.dhlen; i++) {
-		int tmp_dhgid = data->auth_protocol[0].dhchap.idlist[i + 30];
+		int tmp_dhgid = data->auth_protocol[0].dhchap.idlist[i + NVME_AUTH_DHCHAP_MAX_HASH_IDS];
 
 		if (tmp_dhgid != ctrl->dh_gid) {
 			dhgid = tmp_dhgid;
diff --git a/include/linux/nvme.h b/include/linux/nvme.h
index b09dcaf5bcbc..ea0393ab16fc 100644
--- a/include/linux/nvme.h
+++ b/include/linux/nvme.h
@@ -1824,6 +1824,8 @@ struct nvmf_auth_dhchap_protocol_descriptor {
 	__u8		dhlen;
 	__u8		idlist[60];
 };
+#define NVME_AUTH_DHCHAP_MAX_HASH_IDS 30
+#define NVME_AUTH_DHCHAP_MAX_DH_IDS 30
 
 enum {
 	NVME_AUTH_DHCHAP_AUTH_ID	= 0x01,
-- 
2.43.0

Thanks
Yunje Shin.

On Wed, Mar 11, 2026 at 3:07 AM Chris Leech <cleech@redhat.com> wrote:
>
> On Wed, Mar 11, 2026 at 02:52:36AM +0900, yunje shin wrote:
> > Thanks for the review.
> >
> > Yes, I triggered the KASAN issue by injecting an invalid dhlen. The
> > reproduction steps are:
> > 1. Connect to the NVMe/TCP target on port 4420 (ICReq + Fabrics CONNECT).
> > 2. Send AUTH_SEND with a crafted NEGOTIATE payload where dhlen=200.
> > 3. The kernel target code in nvmet_auth_negotiate() then iterates
> >
> >     for (i = 0; i < data->auth_protocol[0].dhchap.dhlen; i++) {
> >         int tmp_dhgid = data->auth_protocol[0].dhchap.idlist[i + 30];
> >
> > With dhlen=200, this reads idlist[30..229], but idlist[] is only 60
> > bytes (indices 0..59). The accesses at indices 60 and beyond read past
> > the kmalloc'd slab object into adjacent slab memory, which KASAN
> > catches as a slab-out-of-bounds read.
>
> Thank you, I appreciate understanding how this was triggered.
>
> > While the standard Linux NVMe host driver does use hardcoded halen=3
> > and dhlen=6, the NVMe target is network-facing and must validate all
> > fields from the wire. A malicious or non-standard host can send
> > arbitrary values. The same applies to halen — if halen > 30, the
> > first loop also reads out of bounds.
>
> Yes, this code absolutely should validate halen and dhlen bounds.
>
> > Regarding idlist_half — yes, idlist is currently a fixed 60-byte array
> > and the DH offset is always 30. I derived it from sizeof(idlist)
> > rather than hardcoding 30 so that the bounds check and the DH offset
> > stay consistent with the array definition. If the struct ever changes,
> > the validation adapts automatically instead of silently going stale.
>
> The 60-byte idlist (and 30:30 split) are part of the NVMe specification.
> It's the maximum amount of space while keeping to a 64-byte struct.
>
> I'd rather see this made clearer with a define for the limit, but not
> adding code that appears to calculate it at runtime.
>
> Thanks,
> - Chris
>


* Re: [PATCH] nvmet: auth: validate dhchap id list lengths(KASAN: slab-out-of-bounds)
  2026-03-10 19:06               ` yunje shin
@ 2026-03-10 20:34                 ` Chris Leech
  2026-03-12  7:01                 ` Hannes Reinecke
  1 sibling, 0 replies; 17+ messages in thread
From: Chris Leech @ 2026-03-10 20:34 UTC (permalink / raw)
  To: yunje shin
  Cc: Hannes Reinecke, Keith Busch, Chaitanya Kulkarni, Sagi Grimberg,
	Christoph Hellwig, linux-nvme, linux-kernel, ioerts

On Wed, Mar 11, 2026 at 04:06:44AM +0900, yunje shin wrote:
> Thank you for the clarification regarding the 64-byte structural
> constraints. If this approach looks good to you, I will format it
> properly with an updated commit message and send out a formal v2
> patch.

Yes, that looks much better to me.  Thanks!

- Chris

> diff --git a/drivers/nvme/target/fabrics-cmd-auth.c
> b/drivers/nvme/target/fabrics-cmd-auth.c
> index 5946681cb0e3..acba4878a873 100644
> --- a/drivers/nvme/target/fabrics-cmd-auth.c
> +++ b/drivers/nvme/target/fabrics-cmd-auth.c
> @@ -72,6 +72,14 @@ static u8 nvmet_auth_negotiate(struct nvmet_req
> *req, void *d)
>       NVME_AUTH_DHCHAP_AUTH_ID)
>   return NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD;
> 
> + /*
> + * idlist[0..29]: hash IDs
> + * idlist[30..59]: DH group IDs
> + */
> + if (data->auth_protocol[0].dhchap.halen > NVME_AUTH_DHCHAP_MAX_HASH_IDS ||
> +     data->auth_protocol[0].dhchap.dhlen > NVME_AUTH_DHCHAP_MAX_DH_IDS)
> + return NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD;
> +
>   for (i = 0; i < data->auth_protocol[0].dhchap.halen; i++) {
>   u8 host_hmac_id = data->auth_protocol[0].dhchap.idlist[i];
> 
> @@ -97,7 +105,7 @@ static u8 nvmet_auth_negotiate(struct nvmet_req
> *req, void *d)
>   dhgid = -1;
>   fallback_dhgid = -1;
>   for (i = 0; i < data->auth_protocol[0].dhchap.dhlen; i++) {
> - int tmp_dhgid = data->auth_protocol[0].dhchap.idlist[i + 30];
> + int tmp_dhgid = data->auth_protocol[0].dhchap.idlist[i +
> NVME_AUTH_DHCHAP_MAX_HASH_IDS];
> 
>   if (tmp_dhgid != ctrl->dh_gid) {
>   dhgid = tmp_dhgid;
> diff --git a/include/linux/nvme.h b/include/linux/nvme.h
> index b09dcaf5bcbc..ea0393ab16fc 100644
> --- a/include/linux/nvme.h
> +++ b/include/linux/nvme.h
> @@ -1824,6 +1824,8 @@ struct nvmf_auth_dhchap_protocol_descriptor {
>   __u8 dhlen;
>   __u8 idlist[60];
>  };
> +#define NVME_AUTH_DHCHAP_MAX_HASH_IDS 30
> +#define NVME_AUTH_DHCHAP_MAX_DH_IDS 30
> 
>  enum {
>   NVME_AUTH_DHCHAP_AUTH_ID = 0x01,
> -- 
> 2.43.0
> 
> Thanks
> Yunje Shin.
> 
> On Wed, Mar 11, 2026 at 3:07 AM Chris Leech <cleech@redhat.com> wrote:
> >
> > On Wed, Mar 11, 2026 at 02:52:36AM +0900, yunje shin wrote:
> > > Thanks for the review.
> > >
> > > Yes, I triggered the KASAN issue by injecting an invalid dhlen. The
> > > reproduction steps are:
> > > 1. Connect to the NVMe/TCP target on port 4420 (ICReq + Fabrics CONNECT).
> > > 2. Send AUTH_SEND with a crafted NEGOTIATE payload where dhlen=200.
> > > 3. The kernel target code in nvmet_auth_negotiate() then iterates
> > >
> > >     for (i = 0; i < data->auth_protocol[0].dhchap.dhlen; i++) {
> > >         int tmp_dhgid = data->auth_protocol[0].dhchap.idlist[i + 30];
> > >
> > > With dhlen=200, this reads idlist[30..229], but idlist[] is only 60
> > > bytes (indices 0..59). The accesses at indices 60 and beyond read past
> > > the kmalloc'd slab object into adjacent slab memory, which KASAN
> > > catches as a slab-out-of-bounds read.
> >
> > Thank you, I appreciate understanding how this was triggered.
> >
> > > While the standard Linux NVMe host driver does use hardcoded halen=3
> > > and dhlen=6, the NVMe target is network-facing and must validate all
> > > fields from the wire. A malicious or non-standard host can send
> > > arbitrary values. The same applies to halen — if halen > 30, the
> > > first loop also reads out of bounds.
> >
> > Yes, this code absolutely should validate halen and dhlen bounds.
> >
> > > Regarding idlist_half — yes, idlist is currently a fixed 60-byte array
> > > and the DH offset is always 30. I derived it from sizeof(idlist)
> > > rather than hardcoding 30 so that the bounds check and the DH offset
> > > stay consistent with the array definition. If the struct ever changes,
> > > the validation adapts automatically instead of silently going stale.
> >
> > The 60-byte idlist (and 30:30 split) are part of the NVMe specification.
> > It's the maximum amount of space while keeping to a 64-byte struct.
> >
> > I'd rather see this made clearer with a define for the limit, but not
> > adding code that appears to calculate it at runtime.
> >
> > Thanks,
> > - Chris
> >
> 



* Re: [PATCH] nvmet: auth: validate dhchap id list lengths(KASAN: slab-out-of-bounds)
  2026-03-10 19:06               ` yunje shin
  2026-03-10 20:34                 ` Chris Leech
@ 2026-03-12  7:01                 ` Hannes Reinecke
  2026-03-13  5:24                   ` [PATCH v2] nvmet: auth: validate dhchap id list lengths YunJe Shin
  1 sibling, 1 reply; 17+ messages in thread
From: Hannes Reinecke @ 2026-03-12  7:01 UTC (permalink / raw)
  To: yunje shin, Chris Leech
  Cc: Keith Busch, Chaitanya Kulkarni, Sagi Grimberg, Christoph Hellwig,
	linux-nvme, linux-kernel, ioerts

On 3/10/26 20:06, yunje shin wrote:
> Thank you for the clarification regarding the 64-byte structural
> constraints. If this approach looks good to you, I will format it
> properly with an updated commit message and send out a formal v2
> patch.
> 
> diff --git a/drivers/nvme/target/fabrics-cmd-auth.c
> b/drivers/nvme/target/fabrics-cmd-auth.c
> index 5946681cb0e3..acba4878a873 100644
> --- a/drivers/nvme/target/fabrics-cmd-auth.c
> +++ b/drivers/nvme/target/fabrics-cmd-auth.c
> @@ -72,6 +72,14 @@ static u8 nvmet_auth_negotiate(struct nvmet_req
> *req, void *d)
>        NVME_AUTH_DHCHAP_AUTH_ID)
>    return NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD;
> 
> + /*
> + * idlist[0..29]: hash IDs
> + * idlist[30..59]: DH group IDs
> + */
> + if (data->auth_protocol[0].dhchap.halen > NVME_AUTH_DHCHAP_MAX_HASH_IDS ||
> +     data->auth_protocol[0].dhchap.dhlen > NVME_AUTH_DHCHAP_MAX_DH_IDS)
> + return NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD;
> +
>    for (i = 0; i < data->auth_protocol[0].dhchap.halen; i++) {
>    u8 host_hmac_id = data->auth_protocol[0].dhchap.idlist[i];
> 
> @@ -97,7 +105,7 @@ static u8 nvmet_auth_negotiate(struct nvmet_req
> *req, void *d)
>    dhgid = -1;
>    fallback_dhgid = -1;
>    for (i = 0; i < data->auth_protocol[0].dhchap.dhlen; i++) {
> - int tmp_dhgid = data->auth_protocol[0].dhchap.idlist[i + 30];
> + int tmp_dhgid = data->auth_protocol[0].dhchap.idlist[i +
> NVME_AUTH_DHCHAP_MAX_HASH_IDS];
> 
>    if (tmp_dhgid != ctrl->dh_gid) {
>    dhgid = tmp_dhgid;
> diff --git a/include/linux/nvme.h b/include/linux/nvme.h
> index b09dcaf5bcbc..ea0393ab16fc 100644
> --- a/include/linux/nvme.h
> +++ b/include/linux/nvme.h
> @@ -1824,6 +1824,8 @@ struct nvmf_auth_dhchap_protocol_descriptor {
>    __u8 dhlen;
>    __u8 idlist[60];
>   };
> +#define NVME_AUTH_DHCHAP_MAX_HASH_IDS 30
> +#define NVME_AUTH_DHCHAP_MAX_DH_IDS 30
> 
>   enum {
>    NVME_AUTH_DHCHAP_AUTH_ID = 0x01,

Yes, this is far better.

You can add:

Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich


* [PATCH v2] nvmet: auth: validate dhchap id list lengths
  2026-03-12  7:01                 ` Hannes Reinecke
@ 2026-03-13  5:24                   ` YunJe Shin
  2026-03-13 15:30                     ` Chris Leech
  2026-03-17 14:51                     ` Christoph Hellwig
  0 siblings, 2 replies; 17+ messages in thread
From: YunJe Shin @ 2026-03-13  5:24 UTC (permalink / raw)
  To: hare, cleech
  Cc: hch, ioerts, kbusch, kch, linux-kernel, linux-nvme, sagi,
	yjshin0438, stable

From: Yunje Shin <ioerts@kookmin.ac.kr>

The function nvmet_auth_negotiate() parses the idlist array in the
struct nvmf_auth_dhchap_protocol_descriptor payload. This array is 60
bytes and is logically divided into two 30-byte halves: the first half
for HMAC IDs and the second half for DH group IDs. The current code
uses a hardcoded +30 offset for the DH list, but does not validate
halen and dhlen against the per-half bounds. As a result, if a
malicious host sends halen or dhlen larger than 30, the loop can
read past the 60-byte array into adjacent slab memory, triggering a
KASAN slab-out-of-bounds read.

KASAN splat:
[    4.241646] BUG: KASAN: slab-out-of-bounds in nvmet_execute_auth_send+0x19b8/0x2090
[    4.242874] Read of size 1 at addr ffff8881045754e8 by task kworker/1:1H/41
[    4.265342] The buggy address belongs to the cache kmalloc-96 of size 96
[    4.266291]  allocated 72-byte region [ffff8881045754a0, ffff8881045754e8)
[    4.270337] page dumped because: kasan: bad access detected

This patch fixes the issue by introducing NVME_AUTH_DHCHAP_MAX_HASH_IDS
and NVME_AUTH_DHCHAP_MAX_DH_IDS defined as 30, which explicitly indicates
the maximum boundaries allowed per NVMe specification. The lengths halen
and dhlen are validated against these boundaries before processing,
preventing the out-of-bounds reads.

Fixes: db1312dd95488 ("nvmet: implement basic In-Band Authentication")
Cc: stable@kernel.org
Signed-off-by: Yunje Shin <ioerts@kookmin.ac.kr>
Reviewed-by: Hannes Reinecke <hare@suse.de>
---
v2:
    - Replaced the runtime 'sizeof' calculation (idlist_half) with explicit 
      NVME_AUTH_DHCHAP_MAX_HASH_IDS and NVME_AUTH_DHCHAP_MAX_DH_IDS macros
      to clearly reflect the 30:30 split limit per Chris Leech's feedback.

 drivers/nvme/target/fabrics-cmd-auth.c | 11 ++++++++++-
 include/linux/nvme.h                   |  2 ++
 2 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/target/fabrics-cmd-auth.c b/drivers/nvme/target/fabrics-cmd-auth.c
index 5946681cb0e3..acba4878a873 100644
--- a/drivers/nvme/target/fabrics-cmd-auth.c
+++ b/drivers/nvme/target/fabrics-cmd-auth.c
@@ -72,6 +72,14 @@ static u8 nvmet_auth_negotiate(struct nvmet_req *req, void *d)
 	    NVME_AUTH_DHCHAP_AUTH_ID)
 		return NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD;
 
+	/*
+	 * idlist[0..29]: hash IDs
+	 * idlist[30..59]: DH group IDs
+	 */
+	if (data->auth_protocol[0].dhchap.halen > NVME_AUTH_DHCHAP_MAX_HASH_IDS ||
+	    data->auth_protocol[0].dhchap.dhlen > NVME_AUTH_DHCHAP_MAX_DH_IDS)
+		return NVME_AUTH_DHCHAP_FAILURE_INCORRECT_PAYLOAD;
+
 	for (i = 0; i < data->auth_protocol[0].dhchap.halen; i++) {
 		u8 host_hmac_id = data->auth_protocol[0].dhchap.idlist[i];
 
@@ -97,7 +105,8 @@ static u8 nvmet_auth_negotiate(struct nvmet_req *req, void *d)
 	dhgid = -1;
 	fallback_dhgid = -1;
 	for (i = 0; i < data->auth_protocol[0].dhchap.dhlen; i++) {
-		int tmp_dhgid = data->auth_protocol[0].dhchap.idlist[i + 30];
+		int tmp_dhgid =
+			data->auth_protocol[0].dhchap.idlist[i + NVME_AUTH_DHCHAP_MAX_HASH_IDS];
 
 		if (tmp_dhgid != ctrl->dh_gid) {
 			dhgid = tmp_dhgid;
diff --git a/include/linux/nvme.h b/include/linux/nvme.h
index b09dcaf5bcbc..ea0393ab16fc 100644
--- a/include/linux/nvme.h
+++ b/include/linux/nvme.h
@@ -1824,6 +1824,8 @@ struct nvmf_auth_dhchap_protocol_descriptor {
 	__u8		dhlen;
 	__u8		idlist[60];
 };
+#define NVME_AUTH_DHCHAP_MAX_HASH_IDS 30
+#define NVME_AUTH_DHCHAP_MAX_DH_IDS 30
 
 enum {
 	NVME_AUTH_DHCHAP_AUTH_ID	= 0x01,
-- 
2.43.0


* Re: [PATCH v2] nvmet: auth: validate dhchap id list lengths
  2026-03-13  5:24                   ` [PATCH v2] nvmet: auth: validate dhchap id list lengths YunJe Shin
@ 2026-03-13 15:30                     ` Chris Leech
  2026-03-17 14:51                     ` Christoph Hellwig
  1 sibling, 0 replies; 17+ messages in thread
From: Chris Leech @ 2026-03-13 15:30 UTC (permalink / raw)
  To: YunJe Shin
  Cc: hare, hch, ioerts, kbusch, kch, linux-kernel, linux-nvme, sagi,
	stable

On Fri, Mar 13, 2026 at 02:24:09PM +0900, YunJe Shin wrote:
> From: Yunje Shin <ioerts@kookmin.ac.kr>
> 
> The function nvmet_auth_negotiate() parses the idlist array in the
> struct nvmf_auth_dhchap_protocol_descriptor payload. This array is 60
> bytes and is logically divided into two 30-byte halves: the first half
> for HMAC IDs and the second half for DH group IDs. The current code
> uses a hardcoded +30 offset for the DH list, but does not validate
> halen and dhlen against the per-half bounds. As a result, if a
> malicious host sends halen or dhlen larger than 30, the loop can
> read past the 60-byte array into adjacent slab memory, triggering a
> KASAN slab-out-of-bounds read.
> 
> KASAN splat:
> [    4.241646] BUG: KASAN: slab-out-of-bounds in nvmet_execute_auth_send+0x19b8/0x2090
> [    4.242874] Read of size 1 at addr ffff8881045754e8 by task kworker/1:1H/41
> [    4.265342] The buggy address belongs to the cache kmalloc-96 of size 96
> [    4.266291]  allocated 72-byte region [ffff8881045754a0, ffff8881045754e8)
> [    4.270337] page dumped because: kasan: bad access detected
> 
> This patch fixes the issue by introducing NVME_AUTH_DHCHAP_MAX_HASH_IDS
> and NVME_AUTH_DHCHAP_MAX_DH_IDS defined as 30, which explicitly indicates
> the maximum boundaries allowed per NVMe specification. The lengths halen
> and dhlen are validated against these boundaries before processing,
> preventing the out-of-bounds reads.
> 
> Fixes: db1312dd95488 ("nvmet: implement basic In-Band Authentication")
> Cc: stable@kernel.org
> Signed-off-by: Yunje Shin <ioerts@kookmin.ac.kr>
> Reviewed-by: Hannes Reinecke <hare@suse.de>
> ---
> v2:
>     - Replaced the runtime 'sizeof' calculation (idlist_half) with explicit 
>       NVME_AUTH_DHCHAP_MAX_HASH_IDS and NVME_AUTH_DHCHAP_MAX_DH_IDS macros
>       to clearly reflect the 30:30 split limit per Chris Leech's feedback.

Reviewed-by: Chris Leech <cleech@redhat.com>



* Re: [PATCH v2] nvmet: auth: validate dhchap id list lengths
  2026-03-13  5:24                   ` [PATCH v2] nvmet: auth: validate dhchap id list lengths YunJe Shin
  2026-03-13 15:30                     ` Chris Leech
@ 2026-03-17 14:51                     ` Christoph Hellwig
  2026-03-17 16:55                       ` yunje shin
  1 sibling, 1 reply; 17+ messages in thread
From: Christoph Hellwig @ 2026-03-17 14:51 UTC (permalink / raw)
  To: YunJe Shin
  Cc: hare, cleech, hch, ioerts, kbusch, kch, linux-kernel, linux-nvme,
	sagi, stable

On Fri, Mar 13, 2026 at 02:24:09PM +0900, YunJe Shin wrote:
> +	/*
> +	 * idlist[0..29]: hash IDs
> +	 * idlist[30..59]: DH group IDs
> +	 */
> +	if (data->auth_protocol[0].dhchap.halen > NVME_AUTH_DHCHAP_MAX_HASH_IDS ||
> +	    data->auth_protocol[0].dhchap.dhlen > NVME_AUTH_DHCHAP_MAX_DH_IDS)

Overly long lines. A local variable for data->auth_protocol[0].dhchap
would really help with readability here.

> diff --git a/include/linux/nvme.h b/include/linux/nvme.h
> index b09dcaf5bcbc..ea0393ab16fc 100644
> --- a/include/linux/nvme.h
> +++ b/include/linux/nvme.h
> @@ -1824,6 +1824,8 @@ struct nvmf_auth_dhchap_protocol_descriptor {
>  	__u8		dhlen;
>  	__u8		idlist[60];
>  };
> +#define NVME_AUTH_DHCHAP_MAX_HASH_IDS 30
> +#define NVME_AUTH_DHCHAP_MAX_DH_IDS 30

Tabs before the values.  Bonus points for a reference to the relevant
spec.



* Re: [PATCH v2] nvmet: auth: validate dhchap id list lengths
  2026-03-17 14:51                     ` Christoph Hellwig
@ 2026-03-17 16:55                       ` yunje shin
  2026-03-20  7:49                         ` Christoph Hellwig
  0 siblings, 1 reply; 17+ messages in thread
From: yunje shin @ 2026-03-17 16:55 UTC (permalink / raw)
  To: Christoph Hellwig, hare, cleech
  Cc: ioerts, kbusch, kch, linux-kernel, linux-nvme, sagi, stable

On Tue, Mar 17, 2026 at 11:51 PM Christoph Hellwig <hch@lst.de> wrote:
> Overly long lines. A local variable for data->auth_protocol[0].dhchap
> would really help with readability here.
...
> Tabs before the values.  Bonus points for a reference to the relevant
> spec.

I will fix the tab alignment for the macros and add a spec reference
in v3.

Regarding the local variable — I understand the readability concern,
but data->auth_protocol[0].dhchap is currently used without a local
variable in 8 places in target/fabrics-cmd-auth.c (lines 42-44, 71,
75-76, 100-101) and 12 places in host/auth.c (lines 145-156).
Adding one only for the bounds check felt inconsistent, so I kept
the existing style. Happy to hear your thoughts on this.


* Re: [PATCH v2] nvmet: auth: validate dhchap id list lengths
  2026-03-17 16:55                       ` yunje shin
@ 2026-03-20  7:49                         ` Christoph Hellwig
  2026-03-20  8:13                           ` yunje shin
  0 siblings, 1 reply; 17+ messages in thread
From: Christoph Hellwig @ 2026-03-20  7:49 UTC (permalink / raw)
  To: yunje shin
  Cc: Christoph Hellwig, hare, cleech, ioerts, kbusch, kch,
	linux-kernel, linux-nvme, sagi, stable

On Wed, Mar 18, 2026 at 01:55:13AM +0900, yunje shin wrote:
> Regarding the local variable — I understand the readability concern,
> but data->auth_protocol[0].dhchap is currently used without a local
> variable in 8 places in target/fabrics-cmd-auth.c (lines 42-44, 71,
> 75-76, 100-101) and 12 places in host/auth.c (lines 145-156).
> Adding one only for the bounds check felt inconsistent, so I kept
> the existing style. Happy to hear your thoughts on this.

Bonus points for fixing all of them up :)


* Re: [PATCH v2] nvmet: auth: validate dhchap id list lengths
  2026-03-20  7:49                         ` Christoph Hellwig
@ 2026-03-20  8:13                           ` yunje shin
  0 siblings, 0 replies; 17+ messages in thread
From: yunje shin @ 2026-03-20  8:13 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: hare, cleech, ioerts, kbusch, kch, linux-kernel, linux-nvme, sagi,
	stable, alistair23

On Tue, Mar 17, 2026 at 11:51 PM Christoph Hellwig <hch@lst.de> wrote:
>
> On Fri, Mar 13, 2026 at 02:24:09PM +0900, YunJe Shin wrote:
> > +#define NVME_AUTH_DHCHAP_MAX_HASH_IDS 30
> > +#define NVME_AUTH_DHCHAP_MAX_DH_IDS 30
>
> Tabs before the values.  Bonus points for a reference to the relevant
> spec.
>

Thanks. Since Alistair has already sent the nvme.h macro patch with
the spec reference, I'll drop the header changes from my patch and
send v3.


end of thread, other threads:[~2026-03-20  8:14 UTC | newest]

Thread overview: 17+ messages
2026-02-11  6:58 [PATCH] nvmet: auth: validate dhchap id list lengths(KASAN: slab-out-of-bounds) YunJe Shin
2026-02-12  1:49 ` yunje shin
2026-02-18  4:04   ` yunje shin
2026-03-08 15:09     ` yunje shin
2026-03-09 18:04       ` Chris Leech
2026-03-10 17:48         ` yunje shin
2026-03-10 17:52           ` yunje shin
2026-03-10 18:07             ` Chris Leech
2026-03-10 19:06               ` yunje shin
2026-03-10 20:34                 ` Chris Leech
2026-03-12  7:01                 ` Hannes Reinecke
2026-03-13  5:24                   ` [PATCH v2] nvmet: auth: validate dhchap id list lengths YunJe Shin
2026-03-13 15:30                     ` Chris Leech
2026-03-17 14:51                     ` Christoph Hellwig
2026-03-17 16:55                       ` yunje shin
2026-03-20  7:49                         ` Christoph Hellwig
2026-03-20  8:13                           ` yunje shin
