From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 13 May 2026 13:58:43 +0000
From: Mostafa Saleh
To: "Aneesh Kumar K.V (Arm)"
Cc: iommu@lists.linux.dev, linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, linux-coco@lists.linux.dev,
	Robin Murphy, Marek Szyprowski, Will Deacon, Marc Zyngier,
	Steven Price, Suzuki K Poulose, Catalin Marinas, Jiri Pirko,
	Jason Gunthorpe, Petr Tesarik, Alexey Kardashevskiy,
	Dan Williams, Xu Yilun, linuxppc-dev@lists.ozlabs.org,
	linux-s390@vger.kernel.org, Madhavan Srinivasan,
	Michael Ellerman, Nicholas Piggin, "Christophe Leroy (CS GROUP)",
	Alexander Gordeev, Gerald Schaefer, Heiko Carstens,
	Vasily Gorbik, Christian Borntraeger, Sven Schnelle, x86@kernel.org
Subject: Re: [PATCH v4 02/13] dma-direct: use DMA_ATTR_CC_SHARED in alloc/free paths
Message-ID:
References: <20260512090408.794195-1-aneesh.kumar@kernel.org>
	<20260512090408.794195-3-aneesh.kumar@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20260512090408.794195-3-aneesh.kumar@kernel.org>

On Tue, May 12, 2026 at 02:33:57PM +0530, Aneesh Kumar K.V (Arm) wrote:
> Propagate force_dma_unencrypted() into DMA_ATTR_CC_SHARED in the
> dma-direct allocation path and use the attribute to drive the related
> decisions.
>
> This updates dma_direct_alloc(), dma_direct_free(), and
> dma_direct_alloc_pages() to fold the forced unencrypted case into attrs.
>
> Signed-off-by: Aneesh Kumar K.V (Arm)
> ---
>  kernel/dma/direct.c | 44 ++++++++++++++++++++++++++++++++++++--------
>  1 file changed, 36 insertions(+), 8 deletions(-)
>
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index b958f150718a..0c2e1f8436ce 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -201,16 +201,31 @@ void *dma_direct_alloc(struct device *dev, size_t size,
>  		dma_addr_t *dma_handle, gfp_t gfp, unsigned long attrs)
>  {
>  	bool remap = false, set_uncached = false;
> -	bool mark_mem_decrypt = true;
> +	bool mark_mem_decrypt = false;
>  	struct page *page;
>  	void *ret;
>
> +	/*
> +	 * DMA_ATTR_CC_SHARED is not a caller-visible dma_alloc_*()
> +	 * attribute. The direct allocator uses it internally after it has
> +	 * decided that the backing pages must be shared/decrypted, so the
> +	 * rest of the allocation path can consistently select DMA addresses,
> +	 * choose compatible pools and restore encryption on free.
> +	 */
> +	if (attrs & DMA_ATTR_CC_SHARED)
> +		return NULL;
> +
> +	if (force_dma_unencrypted(dev)) {
> +		attrs |= DMA_ATTR_CC_SHARED;
> +		mark_mem_decrypt = true;
> +	}
> +
>  	size = PAGE_ALIGN(size);
>  	if (attrs & DMA_ATTR_NO_WARN)
>  		gfp |= __GFP_NOWARN;
>
> -	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
> -	    !force_dma_unencrypted(dev) && !is_swiotlb_for_alloc(dev))
> +	if (((attrs & (DMA_ATTR_NO_KERNEL_MAPPING | DMA_ATTR_CC_SHARED)) ==
> +	     DMA_ATTR_NO_KERNEL_MAPPING) && !is_swiotlb_for_alloc(dev))
>  		return dma_direct_alloc_no_mapping(dev, size, dma_handle, gfp);
>
>  	if (!dev_is_dma_coherent(dev)) {
> @@ -244,7 +259,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
>  	 * Remapping or decrypting memory may block, allocate the memory from
>  	 * the atomic pools instead if we aren't allowed block.
>  	 */
> -	if ((remap || force_dma_unencrypted(dev)) &&
> +	if ((remap || (attrs & DMA_ATTR_CC_SHARED)) &&
>  	    dma_direct_use_pool(dev, gfp))
>  		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
>
> @@ -318,11 +333,20 @@ void *dma_direct_alloc(struct device *dev, size_t size,
>  void dma_direct_free(struct device *dev, size_t size,
>  		void *cpu_addr, dma_addr_t dma_addr, unsigned long attrs)
>  {
> -	bool mark_mem_encrypted = true;
> +	bool mark_mem_encrypted = false;
>  	unsigned int page_order = get_order(size);
>
> -	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
> -	    !force_dma_unencrypted(dev) && !is_swiotlb_for_alloc(dev)) {
> +	/*
> +	 * if the device had requested for an unencrypted buffer,
> +	 * convert it to encrypted on free
> +	 */
> +	if (force_dma_unencrypted(dev)) {
> +		attrs |= DMA_ATTR_CC_SHARED;
> +		mark_mem_encrypted = true;
> +	}
> +
> +	if (((attrs & (DMA_ATTR_NO_KERNEL_MAPPING | DMA_ATTR_CC_SHARED)) ==
> +	     DMA_ATTR_NO_KERNEL_MAPPING) && !is_swiotlb_for_alloc(dev)) {
>  		/* cpu_addr is a struct page cookie, not a kernel address */
>  		dma_free_contiguous(dev, cpu_addr, size);
>  		return;
> @@ -365,10 +389,14 @@ void dma_direct_free(struct device *dev, size_t size,
>  struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
>  		dma_addr_t *dma_handle, enum dma_data_direction dir, gfp_t gfp)
>  {
> +	unsigned long attrs = 0;
>  	struct page *page;
>  	void *ret;
>
> -	if (force_dma_unencrypted(dev) && dma_direct_use_pool(dev, gfp))
> +	if (force_dma_unencrypted(dev))
> +		attrs |= DMA_ATTR_CC_SHARED;
> +
> +	if ((attrs & DMA_ATTR_CC_SHARED) && dma_direct_use_pool(dev, gfp))
>  		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);

What about dma_direct_free_pages()? Nothing inside it uses attrs, but it is
quite similar to dma_direct_alloc_pages().

Also, at this point, shouldn't this patch also remove the
force_dma_unencrypted() calls from dma_set_decrypted() and
dma_set_encrypted()?

Thanks,
Mostafa

>
>  	if (is_swiotlb_for_alloc(dev)) {
> --
> 2.43.0
>