From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, David Stevens,
	Christoph Hellwig, Robin Murphy, Joerg Roedel, Sasha Levin
Subject: [PATCH 5.15 602/917] iommu/dma: Fix arch_sync_dma for map
Date: Mon, 15 Nov 2021 18:01:37 +0100
Message-Id: <20211115165449.174936174@linuxfoundation.org>
In-Reply-To: <20211115165428.722074685@linuxfoundation.org>
References: <20211115165428.722074685@linuxfoundation.org>

From: David Stevens

[ Upstream commit 06e620345d544e559b2961cb5a676ec9c80c8950 ]

When calling arch_sync_dma, we need to pass it the memory that's
actually being used for dma. When using swiotlb bounce buffers, this is
the bounce buffer. Move arch_sync_dma into the __iommu_dma_map_swiotlb
helper, so it can use the bounce buffer address if necessary.

Now that iommu_dma_map_sg delegates to a function which takes care of
architectural syncing in the untrusted device case, the call to
iommu_dma_sync_sg_for_device can be moved so it only occurs for trusted
devices. Doing the sync for untrusted devices before mapping never
really worked, since it needs to be able to target swiotlb buffers.

This also moves the architectural sync to before the call to
__iommu_dma_map, to guarantee that untrusted devices can't see stale
data they shouldn't see.
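As an illustration only (this sketch is not part of the patch, and
bounce_buffer_phys() is a hypothetical stand-in for the swiotlb bounce
mapping step), the ordering the change establishes inside
__iommu_dma_map_swiotlb() is roughly:

	/*
	 * Rough sketch of the resulting order of operations;
	 * bounce_buffer_phys() is a hypothetical helper, not a real
	 * kernel function.
	 */
	static dma_addr_t map_sketch(struct device *dev, phys_addr_t phys,
				     size_t size, enum dma_data_direction dir,
				     unsigned long attrs, int prot, u64 dma_mask)
	{
		bool coherent = dev_is_dma_coherent(dev);

		if (dev_is_untrusted(dev))
			/* phys now points at the bounce slot, not the page */
			phys = bounce_buffer_phys(dev, phys, size, dir, attrs);

		/* sync the memory the device will actually use for DMA ... */
		if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
			arch_sync_dma_for_device(phys, size, dir);

		/* ... and only then install the IOVA mapping */
		return __iommu_dma_map(dev, phys, size, prot, dma_mask);
	}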
Fixes: 82612d66d51d ("iommu: Allow the iommu/dma api to use bounce buffers")
Signed-off-by: David Stevens
Reviewed-by: Christoph Hellwig
Reviewed-by: Robin Murphy
Link: https://lore.kernel.org/r/20210929023300.335969-3-stevensd@google.com
Signed-off-by: Joerg Roedel
Signed-off-by: Sasha Levin
---
 drivers/iommu/dma-iommu.c | 16 +++++++---------
 1 file changed, 7 insertions(+), 9 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index c4d205b63c582..19bebacbf1780 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -593,6 +593,9 @@ static dma_addr_t __iommu_dma_map_swiotlb(struct device *dev, phys_addr_t phys,
 		memset(padding_start, 0, padding_size);
 	}
 
+	if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
+		arch_sync_dma_for_device(phys, org_size, dir);
+
 	iova = __iommu_dma_map(dev, phys, aligned_size, prot, dma_mask);
 	if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(dev, phys))
 		swiotlb_tbl_unmap_single(dev, phys, org_size, dir, attrs);
@@ -860,14 +863,9 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
 {
 	phys_addr_t phys = page_to_phys(page) + offset;
 	bool coherent = dev_is_dma_coherent(dev);
-	dma_addr_t dma_handle;
 
-	dma_handle = __iommu_dma_map_swiotlb(dev, phys, size, dma_get_mask(dev),
+	return __iommu_dma_map_swiotlb(dev, phys, size, dma_get_mask(dev),
 			coherent, dir, attrs);
-	if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
-	    dma_handle != DMA_MAPPING_ERROR)
-		arch_sync_dma_for_device(phys, size, dir);
-	return dma_handle;
 }
 
 static void iommu_dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
@@ -1012,12 +1010,12 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
 		goto out;
 	}
 
-	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
-		iommu_dma_sync_sg_for_device(dev, sg, nents, dir);
-
 	if (dev_is_untrusted(dev))
 		return iommu_dma_map_sg_swiotlb(dev, sg, nents, dir, attrs);
 
+	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
+		iommu_dma_sync_sg_for_device(dev, sg, nents, dir);
+
 	/*
 	 * Work out how much IOVA space we need, and align the segments to
 	 * IOVA granules for the IOMMU driver to handle.  With some clever
-- 
2.33.0