From mboxrd@z Thu Jan 1 00:00:00 1970
From: Nathan Lynch via B4 Relay
Date: Fri, 10 Apr 2026 08:07:26 -0500
Subject: [PATCH 16/23] dmaengine: sdxi: Generic descriptor manipulation helpers
X-Mailing-List: dmaengine@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Message-Id: <20260410-sdxi-base-v1-16-1d184cb5c60a@amd.com>
References: <20260410-sdxi-base-v1-0-1d184cb5c60a@amd.com>
In-Reply-To: <20260410-sdxi-base-v1-0-1d184cb5c60a@amd.com>
To: Vinod Koul
Cc: Wei Huang, Mario Limonciello, Bjorn Helgaas, Jonathan Cameron,
 Stephen Bates, PradeepVineshReddy.Kodamati@amd.com, John.Kariuki@amd.com,
 linux-pci@vger.kernel.org, linux-kernel@vger.kernel.org,
 dmaengine@vger.kernel.org, Nathan Lynch
X-Mailer: b4 0.15.2
Reply-To: nathan.lynch@amd.com

From: Nathan Lynch

Introduce small helper functions for manipulating certain common
properties of descriptors after their operation-specific encoding has
been performed but before they are submitted.

sdxi_desc_set_csb() associates an optional completion status block
with a descriptor.
sdxi_desc_set_fence() forces retirement of any prior descriptors in
the ring before the target descriptor is executed. This is useful for
interrupt descriptors that signal the completion of an operation.

sdxi_desc_set_sequential() ensures that all writes from prior
descriptor operations in the same context are made globally visible
before any writes from the target descriptor become globally visible.

sdxi_desc_make_valid() sets the descriptor validity bit, transferring
ownership of the descriptor from software to the SDXI implementation.
(The implementation is allowed to execute the descriptor at this
point, but the caller is still obligated to push the doorbell to
ensure that execution occurs.)

Each of the preceding functions warns if invoked on a descriptor that
has already been released to the SDXI implementation (i.e. had its
validity bit set).

Co-developed-by: Wei Huang
Signed-off-by: Wei Huang
Signed-off-by: Nathan Lynch
---
 drivers/dma/sdxi/descriptor.h | 64 +++++++++++++++++++++++++++++++++++++++++++
 drivers/dma/sdxi/hw.h         |  9 ++++++
 2 files changed, 73 insertions(+)

diff --git a/drivers/dma/sdxi/descriptor.h b/drivers/dma/sdxi/descriptor.h
new file mode 100644
index 000000000000..c0f01b1be726
--- /dev/null
+++ b/drivers/dma/sdxi/descriptor.h
@@ -0,0 +1,64 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef DMA_SDXI_DESCRIPTOR_H
+#define DMA_SDXI_DESCRIPTOR_H
+
+/*
+ * Facilities for encoding SDXI descriptors.
+ *
+ * Copyright Advanced Micro Devices, Inc.
+ */
+
+#include <linux/bitfield.h>
+#include <linux/ratelimit.h>
+#include <linux/types.h>
+#include <asm/barrier.h>
+
+#include "hw.h"
+
+static inline void sdxi_desc_vl_expect(const struct sdxi_desc *desc, bool expected)
+{
+	u8 vl = FIELD_GET(SDXI_DSC_VL, le32_to_cpu(desc->opcode));
+
+	WARN_RATELIMIT(vl != expected, "expected vl=%u but got %u\n", expected, vl);
+}
+
+static inline void sdxi_desc_set_csb(struct sdxi_desc *desc, dma_addr_t addr)
+{
+	sdxi_desc_vl_expect(desc, 0);
+	desc->csb_ptr = cpu_to_le64(FIELD_PREP(SDXI_DSC_CSB_PTR, addr >> 5));
+}
+
+static inline void sdxi_desc_make_valid(struct sdxi_desc *desc)
+{
+	u32 opcode = le32_to_cpu(desc->opcode);
+
+	sdxi_desc_vl_expect(desc, 0);
+	FIELD_MODIFY(SDXI_DSC_VL, &opcode, 1);
+	/*
+	 * Once vl is set, no more modifications to the descriptor
+	 * payload are allowed. Ensure the vl update is ordered after
+	 * all other initialization of the descriptor.
+	 */
+	dma_wmb();
+	WRITE_ONCE(desc->opcode, cpu_to_le32(opcode));
+}
+
+static inline void sdxi_desc_set_fence(struct sdxi_desc *desc)
+{
+	u32 opcode = le32_to_cpu(desc->opcode);
+
+	sdxi_desc_vl_expect(desc, 0);
+	FIELD_MODIFY(SDXI_DSC_FE, &opcode, 1);
+	desc->opcode = cpu_to_le32(opcode);
+}
+
+static inline void sdxi_desc_set_sequential(struct sdxi_desc *desc)
+{
+	u32 opcode = le32_to_cpu(desc->opcode);
+
+	sdxi_desc_vl_expect(desc, 0);
+	FIELD_MODIFY(SDXI_DSC_SE, &opcode, 1);
+	desc->opcode = cpu_to_le32(opcode);
+}
+
+#endif /* DMA_SDXI_DESCRIPTOR_H */
diff --git a/drivers/dma/sdxi/hw.h b/drivers/dma/sdxi/hw.h
index 46424376f26f..cb1bed2f83f2 100644
--- a/drivers/dma/sdxi/hw.h
+++ b/drivers/dma/sdxi/hw.h
@@ -140,6 +140,15 @@ struct sdxi_desc {
 		__u8 operation[52];
 		__le64 csb_ptr;
 	);
+
+/* For opcode field */
+#define SDXI_DSC_VL	BIT(0)
+#define SDXI_DSC_SE	BIT(1)
+#define SDXI_DSC_FE	BIT(2)
+
+/* For csb_ptr field */
+#define SDXI_DSC_CSB_PTR	GENMASK_ULL(63, 5)
+
 	};
 } __packed;
 static_assert(sizeof(struct sdxi_desc) == 64);
-- 
2.53.0