From mboxrd@z Thu Jan 1 00:00:00 1970
From: Leon Romanovsky
To: Christoph Hellwig, Jens Axboe
Cc: Leon Romanovsky, Jason Gunthorpe, Keith Busch, linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org, Sagi Grimberg
Subject: [PATCH 2/4] blk-mq-dma: unify DMA unmap routine
Date: Mon, 13 Oct 2025 18:34:10 +0300
Message-ID: <04baf1fdff8a04197d5f64c2653c29e7482a2840.1760369219.git.leon@kernel.org>

From: Leon Romanovsky

Combine the regular and metadata-integrity DMA unmapping flows into a single
blk_rq_dma_unmap(). This allows us to handle the addition of the new
DMA_ATTR_MMIO flow without adding extra function parameters.
Signed-off-by: Leon Romanovsky
---
 block/blk-mq-dma.c            | 29 +++++++++++++++++++++++++++++
 include/linux/blk-integrity.h |  3 +--
 include/linux/blk-mq-dma.h    | 35 ++---------------------------------
 3 files changed, 32 insertions(+), 35 deletions(-)

diff --git a/block/blk-mq-dma.c b/block/blk-mq-dma.c
index 4ba7b0323da4..0648bcb705ff 100644
--- a/block/blk-mq-dma.c
+++ b/block/blk-mq-dma.c
@@ -260,6 +260,35 @@ bool blk_rq_dma_map_iter_next(struct request *req, struct device *dma_dev,
 }
 EXPORT_SYMBOL_GPL(blk_rq_dma_map_iter_next);
 
+/**
+ * blk_rq_dma_unmap - try to DMA unmap a request
+ * @req: request to unmap
+ * @dma_dev: device to unmap from
+ * @state: DMA IOVA state
+ * @mapped_len: number of bytes to unmap
+ *
+ * Returns %false if the callers need to manually unmap every DMA segment
+ * mapped using @iter or %true if no work is left to be done.
+ */
+bool blk_rq_dma_unmap(struct request *req, struct device *dma_dev,
+		struct dma_iova_state *state, size_t mapped_len)
+{
+	struct bio_integrity_payload *bip = bio_integrity(req->bio);
+
+	if ((!bip && req->cmd_flags & REQ_P2PDMA) ||
+	    bio_integrity_flagged(req->bio, BIP_P2P_DMA))
+		return true;
+
+	if (dma_use_iova(state)) {
+		dma_iova_destroy(dma_dev, state, mapped_len, rq_dma_dir(req),
+				 0);
+		return true;
+	}
+
+	return !dma_need_unmap(dma_dev);
+}
+EXPORT_SYMBOL(blk_rq_dma_unmap);
+
 static inline struct scatterlist *
 blk_next_sg(struct scatterlist **sg, struct scatterlist *sglist)
 {
diff --git a/include/linux/blk-integrity.h b/include/linux/blk-integrity.h
index b659373788f6..4a0e65f00bd6 100644
--- a/include/linux/blk-integrity.h
+++ b/include/linux/blk-integrity.h
@@ -32,8 +32,7 @@ static inline bool blk_rq_integrity_dma_unmap(struct request *req,
 		struct device *dma_dev, struct dma_iova_state *state,
 		size_t mapped_len)
 {
-	return blk_dma_unmap(req, dma_dev, state, mapped_len,
-			bio_integrity(req->bio)->bip_flags & BIP_P2P_DMA);
+	return blk_rq_dma_unmap(req, dma_dev, state, mapped_len);
 }
 
 int blk_rq_count_integrity_sg(struct request_queue *, struct bio *);
diff --git a/include/linux/blk-mq-dma.h b/include/linux/blk-mq-dma.h
index 51829958d872..e93e167ec551 100644
--- a/include/linux/blk-mq-dma.h
+++ b/include/linux/blk-mq-dma.h
@@ -29,6 +29,8 @@ bool blk_rq_dma_map_iter_start(struct request *req, struct device *dma_dev,
 		struct dma_iova_state *state, struct blk_dma_iter *iter);
 bool blk_rq_dma_map_iter_next(struct request *req, struct device *dma_dev,
 		struct dma_iova_state *state, struct blk_dma_iter *iter);
+bool blk_rq_dma_unmap(struct request *req, struct device *dma_dev,
+		struct dma_iova_state *state, size_t mapped_len);
 
 /**
  * blk_rq_dma_map_coalesce - were all segments coalesced?
@@ -42,37 +44,4 @@ static inline bool blk_rq_dma_map_coalesce(struct dma_iova_state *state)
 	return dma_use_iova(state);
 }
 
-/**
- * blk_dma_unmap - try to DMA unmap a request
- * @req: request to unmap
- * @dma_dev: device to unmap from
- * @state: DMA IOVA state
- * @mapped_len: number of bytes to unmap
- * @is_p2p: true if mapped with PCI_P2PDMA_MAP_BUS_ADDR
- *
- * Returns %false if the callers need to manually unmap every DMA segment
- * mapped using @iter or %true if no work is left to be done.
- */
-static inline bool blk_dma_unmap(struct request *req, struct device *dma_dev,
-		struct dma_iova_state *state, size_t mapped_len, bool is_p2p)
-{
-	if (is_p2p)
-		return true;
-
-	if (dma_use_iova(state)) {
-		dma_iova_destroy(dma_dev, state, mapped_len, rq_dma_dir(req),
-				0);
-		return true;
-	}
-
-	return !dma_need_unmap(dma_dev);
-}
-
-static inline bool blk_rq_dma_unmap(struct request *req, struct device *dma_dev,
-		struct dma_iova_state *state, size_t mapped_len)
-{
-	return blk_dma_unmap(req, dma_dev, state, mapped_len,
-			req->cmd_flags & REQ_P2PDMA);
-}
-
 #endif /* BLK_MQ_DMA_H */
-- 
2.51.0