From: Jonathan Cameron <jonathan.cameron@huawei.com>
To: Nathan Lynch via B4 Relay <devnull+nathan.lynch.amd.com@kernel.org>
Cc: <nathan.lynch@amd.com>, Vinod Koul <vkoul@kernel.org>,
Wei Huang <wei.huang2@amd.com>,
Mario Limonciello <mario.limonciello@amd.com>,
"Bjorn Helgaas" <bhelgaas@google.com>,
<linux-pci@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
<dmaengine@vger.kernel.org>
Subject: Re: [PATCH RFC 08/13] dmaengine: sdxi: Context creation/removal, descriptor submission
Date: Mon, 15 Sep 2025 15:12:57 +0100 [thread overview]
Message-ID: <20250915151257.0000253b@huawei.com> (raw)
In-Reply-To: <20250905-sdxi-base-v1-8-d0341a1292ba@amd.com>
On Fri, 05 Sep 2025 13:48:31 -0500
Nathan Lynch via B4 Relay <devnull+nathan.lynch.amd.com@kernel.org> wrote:
> From: Nathan Lynch <nathan.lynch@amd.com>
>
> Add functions for creating and removing SDXI contexts and submitting
> descriptors against them.
>
> An SDXI function supports one or more contexts, each of which has its
> own descriptor ring and associated state. Each context has a 16-bit
> index. A special context is installed at index 0 and is used for
> configuring other contexts and performing administrative actions.
>
> The creation of each context entails the allocation of the following
> control structure hierarchy:
>
> * Context L1 Table slot
> * Access key (AKey) table
> * Context control block
> * Descriptor ring
> * Write index
> * Context status block
>
> Co-developed-by: Wei Huang <wei.huang2@amd.com>
> Signed-off-by: Wei Huang <wei.huang2@amd.com>
> Signed-off-by: Nathan Lynch <nathan.lynch@amd.com>
Some superficial stuff inline.
I haven't yet reread the spec against this (and it's been a while
since I looked at sdxi!) but overall seems reasonable.
> ---
> drivers/dma/sdxi/context.c | 547 +++++++++++++++++++++++++++++++++++++++++++++
> drivers/dma/sdxi/context.h | 28 +++
> 2 files changed, 575 insertions(+)
>
> diff --git a/drivers/dma/sdxi/context.c b/drivers/dma/sdxi/context.c
> new file mode 100644
> index 0000000000000000000000000000000000000000..50eae5b3b303d67891113377e2df209d199aa533
> --- /dev/null
> +++ b/drivers/dma/sdxi/context.c
> @@ -0,0 +1,547 @@
> +
> +static struct sdxi_cxt *alloc_cxt(struct sdxi_dev *sdxi)
> +{
> + struct sdxi_cxt *cxt;
> + u16 id, l2_idx, l1_idx;
> +
> + if (sdxi->cxt_count >= sdxi->max_cxts)
> + return NULL;
> +
> + /* search for an empty context slot */
> + for (id = 0; id < sdxi->max_cxts; id++) {
> + l2_idx = ID_TO_L2_INDEX(id);
> + l1_idx = ID_TO_L1_INDEX(id);
> +
> + if (sdxi->cxt_array[l2_idx] == NULL) {
> + int sz = sizeof(struct sdxi_cxt *) * L1_TABLE_ENTRIES;
> + struct sdxi_cxt **ptr = kzalloc(sz, GFP_KERNEL);
> +
> + sdxi->cxt_array[l2_idx] = ptr;
> + if (!(sdxi->cxt_array[l2_idx]))
> + return NULL;
> + }
> +
> + cxt = (sdxi->cxt_array)[l2_idx][l1_idx];
> + /* found one empty slot */
> + if (!cxt)
> + break;
> + }
> +
> + /* nothing found, bail... */
> + if (id == sdxi->max_cxts)
> + return NULL;
> +
> + /* alloc context and initialize it */
> + cxt = kzalloc(sizeof(struct sdxi_cxt), GFP_KERNEL);
sizeof(*cxt) is usually preferred, as it saves anyone having to check that the
types match.
> + if (!cxt)
> + return NULL;
> +
> + cxt->akey_table = dma_alloc_coherent(sdxi_to_dev(sdxi),
> + sizeof(*cxt->akey_table),
> + &cxt->akey_table_dma, GFP_KERNEL);
> + if (!cxt->akey_table) {
> + kfree(cxt);
Similar to below. You could use DEFINE_FREE() to automatically clean up cxt
on error.
> + return NULL;
> + }
> +
> + cxt->sdxi = sdxi;
> + cxt->id = id;
> + cxt->db_base = sdxi->dbs_bar + id * sdxi->db_stride;
> + cxt->db = sdxi->dbs + id * sdxi->db_stride;
> +
> + sdxi->cxt_array[l2_idx][l1_idx] = cxt;
> + sdxi->cxt_count++;
> +
> + return cxt;
> +}
> +
> +struct sdxi_cxt *sdxi_working_cxt_init(struct sdxi_dev *sdxi,
> + enum sdxi_cxt_id id)
> +{
> + struct sdxi_cxt *cxt;
> + struct sdxi_sq *sq;
> +
> + cxt = sdxi_cxt_alloc(sdxi);
> + if (!cxt) {
> + sdxi_err(sdxi, "failed to alloc a new context\n");
> + return NULL;
> + }
> +
> + /* check if context ID matches */
> + if (id < SDXI_ANY_CXT_ID && cxt->id != id) {
> + sdxi_err(sdxi, "failed to alloc a context with id=%d\n", id);
> + goto err_cxt_id;
> + }
> +
> + sq = sdxi_sq_alloc_default(cxt);
> + if (!sq) {
> + sdxi_err(sdxi, "failed to alloc a submission queue (sq)\n");
> + goto err_sq_alloc;
> + }
> +
> + return cxt;
> +
> +err_sq_alloc:
> +err_cxt_id:
> + sdxi_cxt_free(cxt);
Might be worth defining a DEFINE_FREE() so you can use return_ptr(cxt); instead
of return cxt;. That would let you simply return on any error.
> +
> + return NULL;
> +}
> +
> +static const char *cxt_sts_state_str(enum cxt_sts_state state)
> +{
> + static const char *const context_states[] = {
> + [CXTV_STOP_SW] = "stopped (software)",
> + [CXTV_RUN] = "running",
> + [CXTV_STOPG_SW] = "stopping (software)",
> + [CXTV_STOP_FN] = "stopped (function)",
> + [CXTV_STOPG_FN] = "stopping (function)",
> + [CXTV_ERR_FN] = "error",
> + };
> + const char *str = "unknown";
> +
> + switch (state) {
> + case CXTV_STOP_SW:
> + case CXTV_RUN:
> + case CXTV_STOPG_SW:
> + case CXTV_STOP_FN:
> + case CXTV_STOPG_FN:
> + case CXTV_ERR_FN:
> + str = context_states[state];
I'd add a default to make it explicit that there are other states. If there
aren't, then just return here and skip the return below. The compiler should
be able to see whether you've handled them all and complain loudly if a new
state is added that you haven't handled.
> + }
> +
> + return str;
> +}
> +
> +int sdxi_submit_desc(struct sdxi_cxt *cxt, const struct sdxi_desc *desc)
> +{
> + struct sdxi_sq *sq;
> + __le64 __iomem *db;
> + __le64 *ring_base;
> + u64 ring_entries;
> + __le64 *rd_idx;
> + __le64 *wr_idx;
> +
> + sq = cxt->sq;
> + ring_entries = sq->ring_entries;
> + ring_base = sq->desc_ring[0].qw;
> + rd_idx = &sq->cxt_sts->read_index;
> + wr_idx = sq->write_index;
> + db = cxt->db;
I'm not sure the local variables add anything, but if you really want
to keep them, then at least combine them with the declarations:
	struct sdxi_sq *sq = cxt->sq;
	__le64 __iomem *db = cxt->db;
just to keep the code more compact. Personally I'd just have a local sq
and do the rest in the call:
	return sdxi_enqueue(desc->qw, 1, sq->desc_ring[0].qw,
etc.
> +
> + return sdxi_enqueue(desc->qw, 1, ring_base, ring_entries,
> + rd_idx, wr_idx, db);
> +}
> +
> +static void sdxi_cxt_shutdown(struct sdxi_cxt *target_cxt)
> +{
> + unsigned long deadline = jiffies + msecs_to_jiffies(1000);
> + struct sdxi_cxt *admin_cxt = target_cxt->sdxi->admin_cxt;
> + struct sdxi_dev *sdxi = target_cxt->sdxi;
> + struct sdxi_cxt_sts *sts = target_cxt->sq->cxt_sts;
> + struct sdxi_desc desc;
> + u16 cxtid = target_cxt->id;
> + struct sdxi_cxt_stop params = {
> + .range = sdxi_cxt_range(cxtid),
> + };
> + enum cxt_sts_state state = sdxi_cxt_sts_state(sts);
> + int err;
> +
> + sdxi_dbg(sdxi, "%s entry: context state: %s",
> + __func__, cxt_sts_state_str(state));
> +
> + err = sdxi_encode_cxt_stop(&desc, ¶ms);
> + if (err)
> + return;
> +
> + err = sdxi_submit_desc(admin_cxt, &desc);
> + if (err)
> + return;
> +
> + sdxi_dbg(sdxi, "shutting down context %u\n", cxtid);
> +
> + do {
> + enum cxt_sts_state state = sdxi_cxt_sts_state(sts);
> +
> + sdxi_dbg(sdxi, "context %u state: %s", cxtid,
> + cxt_sts_state_str(state));
> +
> + switch (state) {
> + case CXTV_ERR_FN:
> + sdxi_err(sdxi, "context %u went into error state while stopping\n",
> + cxtid);
> + fallthrough;
I'd just return unless a later patch adds something more interesting to the next
cases.
> + case CXTV_STOP_SW:
> + case CXTV_STOP_FN:
> + return;
> + case CXTV_RUN:
> + case CXTV_STOPG_SW:
> + case CXTV_STOPG_FN:
> + /* transitional states */
> + fsleep(1000);
> + break;
> + default:
> + sdxi_err(sdxi, "context %u in unknown state %u\n",
> + cxtid, state);
> + return;
> + }
> + } while (time_before(jiffies, deadline));
> +
> + sdxi_err(sdxi, "stopping context %u timed out (state = %u)\n",
> + cxtid, sdxi_cxt_sts_state(sts));
> +}
> +
> +void sdxi_working_cxt_exit(struct sdxi_cxt *cxt)
> +{
> + struct sdxi_sq *sq;
> +
> + if (!cxt)
Superficially this looks like defensive programming that we don't need,
as it makes no sense to call this if cxt is NULL. Add a comment if
there is a path where this actually happens.
> + return;
> +
> + sq = cxt->sq;
> + if (!sq)
Add a comment on why this might happen, or drop the check.
> + return;
> +
> + sdxi_cxt_shutdown(cxt);
> +
> + sdxi_sq_free(sq);
> +
> + sdxi_cxt_free(cxt);
> +}