Date: Thu, 12 Mar 2026 18:50:38 +0200
From: Leon Romanovsky
To: Jason Gunthorpe
Cc: Marek Szyprowski, Robin Murphy, "Michael S. Tsirkin", Petr Tesarik,
	Jonathan Corbet, Shuah Khan, Jason Wang, Xuan Zhuo, Eugenio Pérez,
	Steven Rostedt, Masami Hiramatsu, Mathieu Desnoyers, Joerg Roedel,
	Will Deacon, Andrew Morton, iommu@lists.linux.dev,
	linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
	virtualization@lists.linux.dev, linux-rdma@vger.kernel.org,
	linux-trace-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH v2 8/8] mm/hmm: Indicate that HMM requires DMA coherency
Message-ID: <20260312165038.GB12611@unreal>
References: <20260311-dma-debug-overlap-v2-0-e00bc2ca346d@nvidia.com>
 <20260311-dma-debug-overlap-v2-8-e00bc2ca346d@nvidia.com>
 <20260312122645.GG1469476@ziepe.ca>
In-Reply-To: <20260312122645.GG1469476@ziepe.ca>

On Thu, Mar 12, 2026 at 09:26:45AM -0300, Jason Gunthorpe wrote:
> On Wed, Mar 11, 2026 at 09:08:51PM +0200, Leon Romanovsky wrote:
> > From: Leon Romanovsky
> >
> > HMM mirroring can work on coherent systems without SWIOTLB path only.
> > Until introduction of DMA_ATTR_REQUIRE_COHERENT, there was no reliable
> > way to indicate that and various approximation was done:
>
> HMM is fundamentally about allowing a sophisticated device to
> independently DMA to a process's memory concurrently with the CPU
> accessing the same memory. It is similar to SVA but does not rely on
> IOMMU support.
> Since the entire use model is concurrent access to the
> same memory, it becomes fatally broken as a uAPI if SWIOTLB is
> replacing the memory, or the CPU caches are incoherent with DMA.
>
> Till now there was no reliable way to indicate that, and various
> approximations were used:
>
> > int hmm_dma_map_alloc(struct device *dev, struct hmm_dma_map *map,
> > 		       size_t nr_entries, size_t dma_entry_size)
> > {
> > <...>
> > 	/*
> > 	 * The HMM API violates our normal DMA buffer ownership rules and can't
> > 	 * transfer buffer ownership. The dma_addressing_limited() check is a
> > 	 * best approximation to ensure no swiotlb buffering happens.
> > 	 */
> > 	dma_need_sync = !dev->dma_skip_sync;
> > 	if (dma_need_sync || dma_addressing_limited(dev))
> > 		return -EOPNOTSUPP;
>
> Can it get dropped now then?

Better not; it lets us reject the caller much earlier than the DMA
mapping flow would. It is much saner to fail during UMEM ODP creation
than to start failing on ODP page faults.

> > So let's mark mapped buffers with the DMA_ATTR_REQUIRE_COHERENT
> > attribute to prevent DMA debugging warnings for cache-overlapped
> > entries.
>
> Well, that isn't the main motivation: this prevents silent data
> corruption if someone tries to use HMM in a system with SWIOTLB or
> incoherent DMA.
>
> Looks OK otherwise
>
> Reviewed-by: Jason Gunthorpe
>
> Jason