Linux PCI subsystem development
From: sashiko-bot@kernel.org
To: "Nathan Lynch via B4 Relay" <devnull+nathan.lynch.amd.com@kernel.org>
Cc: dmaengine@vger.kernel.org, linux-pci@vger.kernel.org,
	Frank.Li@kernel.org, vkoul@kernel.org
Subject: Re: [PATCH v2 11/23] dmaengine: sdxi: Add client context alloc and release APIs
Date: Wed, 13 May 2026 04:46:37 +0000	[thread overview]
Message-ID: <20260513044638.691DBC2BCFD@smtp.kernel.org> (raw)
In-Reply-To: <20260511-sdxi-base-v2-11-889cfed17e3f@amd.com>

Thank you for your contribution! Sashiko AI review found 5 potential issues to consider:
- [Critical] Missing hardware synchronization and premature context teardown causes DMA Use-After-Free.
- [High] Forcefully freeing client contexts in `sdxi_unregister()` breaks ownership boundaries, causing Double-Free/UAF.
- [High] XArray ID reservation via a NULL entry relies on subtle semantics; verify concurrent contexts cannot receive the same ID, which would corrupt the L1 table.
- [High] Lockless `xa_for_each()` iteration over `client_cxts` races with synchronous deletion, leading to UAF.
- [Medium] Premature publication of context in `sdxi_cxt_new()` exposes partially-initialized objects.
--

commit 441cde89ecc6054dc9032c18c35585d44565e4ea
Author: Nathan Lynch <nathan.lynch@amd.com>

dmaengine: sdxi: Add client context alloc and release APIs

This commit exposes sdxi_cxt_new() and sdxi_cxt_exit() for creating and
releasing SDXI contexts. It tracks client contexts in a device-wide
allocating xarray and introduces a local sdxi_cxt_id class for early ID
allocation.

> diff --git a/drivers/dma/sdxi/context.c b/drivers/dma/sdxi/context.c
> index c0b55c945cc4e..c0b294836edeb 100644
> --- a/drivers/dma/sdxi/context.c
> +++ b/drivers/dma/sdxi/context.c
[ ... ]
> @@ -258,6 +282,17 @@ static int sdxi_publish_cxt(const struct sdxi_cxt *cxt)
[ ... ]
> +/* Invalidate a context. */
> +static void sdxi_rescind_cxt(struct sdxi_cxt *cxt)
> +{
> +	u8 l1_idx = ID_TO_L1_INDEX(cxt->id);
> +	struct sdxi_cxt_L1_ent *ent = &cxt->sdxi->L1_table->entry[l1_idx];
> +
> +	invalidate_L1_entry(ent);
> +	invalidate_cxtl_ctl(cxt->cxt_ctl);
> +	/* todo: need to send DSC_CXT_UPD to admin */

Since the hardware is not notified to halt processing here, and the context's
DMA memory is freed in sdxi_free_cxt() right after this returns, will the
active hardware continue writing to the freed physical memory pages?
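
One way to close that window, sketched here with hypothetical helper names (sdxi_admin_cxt_update() and sdxi_admin_sync() are assumptions, not functions from this series), is to notify the function and wait for acknowledgement before any context DMA memory is released:

```c
/* Sketch only: the two admin helpers below are hypothetical. */
static void sdxi_rescind_cxt(struct sdxi_cxt *cxt)
{
	u8 l1_idx = ID_TO_L1_INDEX(cxt->id);
	struct sdxi_cxt_L1_ent *ent = &cxt->sdxi->L1_table->entry[l1_idx];

	invalidate_L1_entry(ent);
	invalidate_cxtl_ctl(cxt->cxt_ctl);

	/*
	 * Tell the function the context is gone and wait for it to
	 * acknowledge before the caller frees the context's DMA pages.
	 */
	sdxi_admin_cxt_update(cxt->sdxi, cxt->id);	/* DSC_CXT_UPD */
	sdxi_admin_sync(cxt->sdxi);	/* fence vs. in-flight work */
}
```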

> +}
[ ... ]
> @@ -288,3 +323,90 @@ int sdxi_admin_cxt_init(struct sdxi_dev *sdxi)
[ ... ]
> +static struct sdxi_cxt_id sdxi_cxt_id_ctor(struct sdxi_dev *sdxi)
> +{
> +	struct xa_limit limit = XA_LIMIT(1, sdxi->max_cxtid);
> +	u32 index;
> +
> +	return (struct sdxi_cxt_id) {
> +		.sdxi = sdxi,
> +		.index = xa_alloc(&sdxi->client_cxts, &index, NULL,
> +				  limit, GFP_KERNEL) ? 0 : (u16)index,

Does passing NULL as the entry to xa_alloc() reserve the ID as intended?

Under mainline XArray semantics it appears to: xa_alloc() converts a NULL
entry to XA_ZERO_ENTRY internally, so the index is marked in use and
concurrent calls to sdxi_cxt_new() cannot be handed the same ID. Note,
though, that xa_load() on a reserved index returns NULL, so any lockless
reader must tolerate NULL entries for IDs that are allocated but not yet
assigned.

For the same reason, the WARN_ON() in sdxi_cxt_id_assign() below should
stay quiet: xa_store() over a reserved (zero) entry reports the previous
entry to its caller as NULL.
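
For reference, a minimal sketch of the reserve-then-store pattern the
constructor appears to rely on, assuming mainline XArray behavior (a NULL
entry passed to xa_alloc() is stored as XA_ZERO_ENTRY, which reserves the
index but reads back as NULL):

```c
/* Sketch, assuming mainline XArray semantics. */
u32 id;
int err;

/* Reserves the index: concurrent xa_alloc() calls cannot return it. */
err = xa_alloc(&sdxi->client_cxts, &id, NULL,
	       XA_LIMIT(1, sdxi->max_cxtid), GFP_KERNEL);
if (err)
	return err;

/* xa_load(&sdxi->client_cxts, id) returns NULL until assignment. */

/*
 * Replacing the reserved (zero) entry: the previous entry is
 * reported to callers as NULL, so this WARN_ON() does not fire.
 */
WARN_ON(xa_store(&sdxi->client_cxts, id, cxt, GFP_KERNEL));
```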

> +	};
> +}
[ ... ]
> +/*
> + * Transfer ownership of the id to the context object, recording the
> + * context pointer in the device's client_cxt xarray. sdxi_cxt_free()
> + * is responsible for releasing the id from now on.
> + */
> +static void sdxi_cxt_id_assign(struct sdxi_cxt *cxt, struct sdxi_cxt_id *cxt_id)
> +{
> +	/* We reserved the space in the constructor so this should not fail. */
> +	WARN_ON(xa_store(&cxt_id->sdxi->client_cxts,
> +			 cxt_id->index, cxt, GFP_KERNEL));
> +	cxt->id = cxt_id->index;
> +	cxt_id->index = 0;
> +}
> +
> +/*
> + * Allocate a context for in-kernel use. Starting the context is the
> + * caller's responsibility.
> + */
> +struct sdxi_cxt *sdxi_cxt_new(struct sdxi_dev *sdxi)
> +{
> +	/*
> +	 * Ensure an ID is available before allocating memory for the
> +	 * context and its control structures.
> +	 */
> +	CLASS(sdxi_cxt_id, id)(sdxi);
> +	if (!sdxi_cxt_id_valid(&id))
> +		return NULL;
> +
> +	struct sdxi_cxt *cxt __free(sdxi_cxt) = sdxi_alloc_cxt(sdxi);
> +	if (!cxt)
> +		return NULL;
> +
> +	sdxi_cxt_id_assign(cxt, &id);

Is it safe to publish the context in the xarray here before it is fully
initialized?

If a lockless reader traverses the xarray, could it fetch this partially
initialized context?

Also, if sdxi_publish_cxt() below fails, the cleanup handler will free the
context, potentially exposing a freed pointer in the xarray.
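
One pattern that would avoid both problems, sketched under the assumption
that the ID can stay reserved-but-NULL in the xarray until the context is
ready (helper names follow the patch; the reordering is hypothetical):

```c
/* Sketch: publish to hardware first, store in the xarray last. */
struct sdxi_cxt *sdxi_cxt_new(struct sdxi_dev *sdxi)
{
	CLASS(sdxi_cxt_id, id)(sdxi);
	if (!sdxi_cxt_id_valid(&id))
		return NULL;

	struct sdxi_cxt *cxt __free(sdxi_cxt) = sdxi_alloc_cxt(sdxi);
	if (!cxt)
		return NULL;

	cxt->id = id.index;
	cxt->db = sdxi->dbs + cxt->id * sdxi->db_stride;

	if (sdxi_publish_cxt(cxt))
		return NULL;	/* reserved ID released by the destructor */

	/* Only now is the fully-initialized context visible to readers. */
	sdxi_cxt_id_assign(cxt, &id);
	return_ptr(cxt);
}
```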

> +
> +	cxt->db = sdxi->dbs + cxt->id * sdxi->db_stride;
> +
> +	if (sdxi_publish_cxt(cxt))
> +		return NULL;
> +
> +	return_ptr(cxt);
> +}
[ ... ]
> diff --git a/drivers/dma/sdxi/device.c b/drivers/dma/sdxi/device.c
> index 8e621875b10bd..cc289b271ae22 100644
> --- a/drivers/dma/sdxi/device.c
> +++ b/drivers/dma/sdxi/device.c
> @@ -338,6 +340,12 @@ int sdxi_register(struct device *dev, const struct sdxi_bus_ops *ops)
>  void sdxi_unregister(struct device *dev)
>  {
>  	struct sdxi_dev *sdxi = dev_get_drvdata(dev);
> +	struct sdxi_cxt *cxt;
> +	unsigned long index;
> +
> +	xa_for_each(&sdxi->client_cxts, index, cxt)
> +		sdxi_cxt_exit(cxt);

Could this lockless xa_for_each() iteration race with a concurrent deletion?

Since xa_for_each() drops the RCU read lock internally for the loop body and
sdxi_free_cxt() destroys the context synchronously using kfree(), it seems a
concurrent call to sdxi_cxt_exit() could free the context while this loop is
processing it.

Additionally, since sdxi_cxt_new() explicitly transfers ownership to the
caller, does forcefully freeing the client contexts here violate that
ownership contract?

When the DMA provider eventually cleans up its channels after
sdxi_unregister() completes, it might call sdxi_cxt_exit() on the
already-freed context pointers.

> +	xa_destroy(&sdxi->client_cxts);
>  
>  	sdxi_dev_stop(sdxi);

By tearing down the contexts before sdxi_dev_stop() halts the hardware,
could this allow the active hardware to continue writing to the freed context
DMA memory?
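
One possible ordering, sketched with the caveat that the right fix depends
on who owns client contexts (the xa_lock usage and the warn-instead-of-free
policy here are assumptions): quiesce the hardware first, then let the
contexts' owners release them.

```c
/* Sketch: stop the function before any context memory can be freed. */
void sdxi_unregister(struct device *dev)
{
	struct sdxi_dev *sdxi = dev_get_drvdata(dev);
	struct sdxi_cxt *cxt;
	unsigned long index;

	sdxi_dev_stop(sdxi);

	/*
	 * If contexts remain, their owners (e.g. the dmaengine
	 * provider) should release them; warn rather than freeing
	 * behind their backs.
	 */
	xa_lock(&sdxi->client_cxts);
	xa_for_each(&sdxi->client_cxts, index, cxt)
		dev_warn(dev, "context %lu still active at unregister\n",
			 index);
	xa_unlock(&sdxi->client_cxts);

	xa_destroy(&sdxi->client_cxts);
	/* remainder of teardown */
}
```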

-- 
Sashiko AI review · https://sashiko.dev/#/patchset/20260511-sdxi-base-v2-0-889cfed17e3f@amd.com?part=11

Thread overview: 41+ messages
2026-05-11 19:16 [PATCH v2 00/23] dmaengine: Smart Data Accelerator Interface (SDXI) basic support Nathan Lynch via B4 Relay
2026-05-11 19:16 ` [PATCH v2 01/23] PCI: Add SNIA SDXI accelerator sub-class Nathan Lynch via B4 Relay
2026-05-11 20:48   ` Frank Li
2026-05-12 23:50   ` sashiko-bot
2026-05-11 19:16 ` [PATCH v2 02/23] MAINTAINERS: Add entry for SDXI driver Nathan Lynch via B4 Relay
2026-05-11 19:16 ` [PATCH v2 03/23] dmaengine: sdxi: Add PCI initialization Nathan Lynch via B4 Relay
2026-05-11 21:22   ` Frank Li
2026-05-13  0:05   ` sashiko-bot
2026-05-11 19:16 ` [PATCH v2 04/23] dmaengine: sdxi: Feature discovery and initial configuration Nathan Lynch via B4 Relay
2026-05-11 21:30   ` Frank Li
2026-05-13  0:33   ` sashiko-bot
2026-05-11 19:16 ` [PATCH v2 05/23] dmaengine: sdxi: Configure context tables Nathan Lynch via B4 Relay
2026-05-13  1:12   ` sashiko-bot
2026-05-11 19:16 ` [PATCH v2 06/23] dmaengine: sdxi: Allocate DMA pools Nathan Lynch via B4 Relay
2026-05-13  1:30   ` sashiko-bot
2026-05-11 19:16 ` [PATCH v2 07/23] dmaengine: sdxi: Allocate administrative context Nathan Lynch via B4 Relay
2026-05-13  2:20   ` sashiko-bot
2026-05-11 19:16 ` [PATCH v2 08/23] dmaengine: sdxi: Install " Nathan Lynch via B4 Relay
2026-05-13  3:17   ` sashiko-bot
2026-05-11 19:16 ` [PATCH v2 09/23] dmaengine: sdxi: Start functions on probe, stop on remove Nathan Lynch via B4 Relay
2026-05-13  3:35   ` sashiko-bot
2026-05-11 19:16 ` [PATCH v2 10/23] dmaengine: sdxi: Complete administrative context jump start Nathan Lynch via B4 Relay
2026-05-13  3:54   ` sashiko-bot
2026-05-11 19:16 ` [PATCH v2 11/23] dmaengine: sdxi: Add client context alloc and release APIs Nathan Lynch via B4 Relay
2026-05-13  4:46   ` sashiko-bot [this message]
2026-05-11 19:16 ` [PATCH v2 12/23] dmaengine: sdxi: Add descriptor ring management Nathan Lynch via B4 Relay
2026-05-13  5:21   ` sashiko-bot
2026-05-11 19:16 ` [PATCH v2 13/23] dmaengine: sdxi: Add unit tests for descriptor ring reservations Nathan Lynch via B4 Relay
2026-05-13  5:48   ` sashiko-bot
2026-05-11 19:16 ` [PATCH v2 14/23] dmaengine: sdxi: Attach descriptor ring state to contexts Nathan Lynch via B4 Relay
2026-05-11 19:16 ` [PATCH v2 15/23] dmaengine: sdxi: Per-context access key (AKey) table entry allocator Nathan Lynch via B4 Relay
2026-05-11 19:16 ` [PATCH v2 16/23] dmaengine: sdxi: Generic descriptor manipulation helpers Nathan Lynch via B4 Relay
2026-05-11 19:16 ` [PATCH v2 17/23] dmaengine: sdxi: Add completion status block API Nathan Lynch via B4 Relay
2026-05-11 19:16 ` [PATCH v2 18/23] dmaengine: sdxi: Encode context start, stop, and sync descriptors Nathan Lynch via B4 Relay
2026-05-11 19:16 ` [PATCH v2 19/23] dmaengine: sdxi: Provide context start and stop APIs Nathan Lynch via B4 Relay
2026-05-11 19:16 ` [PATCH v2 20/23] dmaengine: sdxi: Encode nop, copy, and interrupt descriptors Nathan Lynch via B4 Relay
2026-05-11 19:16 ` [PATCH v2 21/23] dmaengine: sdxi: Add unit tests for descriptor encoding Nathan Lynch via B4 Relay
2026-05-11 19:16 ` [PATCH v2 22/23] dmaengine: sdxi: MSI/MSI-X vector allocation and mapping Nathan Lynch via B4 Relay
2026-05-11 19:16 ` [PATCH v2 23/23] dmaengine: sdxi: Add DMA engine provider Nathan Lynch via B4 Relay
2026-05-11 20:47   ` Frank Li
2026-05-11 22:28     ` Lynch, Nathan
