From: Aneesh Kumar K.V <aneesh.kumar@kernel.org>
To: Steven Price <steven.price@arm.com>,
	linux-kernel@vger.kernel.org, iommu@lists.linux.dev,
	linux-coco@lists.linux.dev
Cc: Catalin Marinas <catalin.marinas@arm.com>,
	will@kernel.org, maz@kernel.org, tglx@linutronix.de,
	robin.murphy@arm.com, suzuki.poulose@arm.com,
	akpm@linux-foundation.org, jgg@ziepe.ca
Subject: Re: [PATCH v2 1/4] swiotlb: dma: its: Enforce host page-size alignment for shared buffers
Date: Mon, 22 Dec 2025 21:12:31 +0530	[thread overview]
Message-ID: <yq5apl86tteg.fsf@kernel.org> (raw)
In-Reply-To: <4a34ed21-f1e0-4991-a367-d6d2f9ad705f@arm.com>

Steven Price <steven.price@arm.com> writes:

> On 21/12/2025 16:09, Aneesh Kumar K.V (Arm) wrote:
>> When running private-memory guests, the guest kernel must apply
>> additional constraints when allocating buffers that are shared with the
>> hypervisor.
>> 
>> These shared buffers are also accessed by the host kernel and therefore
>> must be aligned to the host’s page size.
>> 
>> On non-secure hosts, set_guest_memory_attributes() tracks memory at the
>> host PAGE_SIZE granularity. This creates a mismatch when the guest
>> applies attributes at 4K boundaries while the host uses 64K pages. In
>> such cases, the call returns -EINVAL, preventing the conversion of
>> memory regions from private to shared.
>> 
>> Architectures such as Arm can tolerate realm physical address space PFNs
>> being mapped as shared memory, as incorrect accesses are detected and
>> reported as GPC faults. However, relying on this mechanism is unsafe and
>> can still lead to kernel crashes.
>> 
>> This is particularly likely when guest_memfd allocations are mmapped and
>> accessed from userspace. Once exposed to userspace, we cannot guarantee
>> that applications will only access the intended 4K shared region rather
>> than the full 64K page mapped into their address space. Such userspace
>> addresses may also be passed back into the kernel and accessed via the
>> linear map, resulting in a GPC fault and a kernel crash.
>> 
>> With CCA, although Stage-2 mappings managed by the RMM still operate at
>> a 4K granularity, shared pages must nonetheless be aligned to the
>> host-managed page size to avoid the issues described above.
>> 
>> Introduce a new helper, mem_encrypt_align(), to allow callers to enforce
>> the required alignment for shared buffers.
>> 
>> The architecture-specific implementation of mem_encrypt_align() will be
>> provided in a follow-up patch.
>> 
>> Signed-off-by: Aneesh Kumar K.V (Arm) <aneesh.kumar@kernel.org>
>> ---
>>  arch/arm64/include/asm/mem_encrypt.h |  6 ++++++
>>  arch/arm64/mm/mem_encrypt.c          |  6 ++++++
>>  drivers/irqchip/irq-gic-v3-its.c     |  7 ++++---
>>  include/linux/mem_encrypt.h          |  7 +++++++
>>  kernel/dma/contiguous.c              | 10 ++++++++++
>>  kernel/dma/direct.c                  |  6 ++++++
>>  kernel/dma/pool.c                    |  6 ++++--
>>  kernel/dma/swiotlb.c                 | 18 ++++++++++++------
>>  8 files changed, 55 insertions(+), 11 deletions(-)
>> 
>> diff --git a/arch/arm64/include/asm/mem_encrypt.h b/arch/arm64/include/asm/mem_encrypt.h
>> index d77c10cd5b79..b7ac143b81ce 100644
>> --- a/arch/arm64/include/asm/mem_encrypt.h
>> +++ b/arch/arm64/include/asm/mem_encrypt.h
>> @@ -17,6 +17,12 @@ int set_memory_encrypted(unsigned long addr, int numpages);
>>  int set_memory_decrypted(unsigned long addr, int numpages);
>>  bool force_dma_unencrypted(struct device *dev);
>>  
>> +#define mem_encrypt_align mem_encrypt_align
>> +static inline size_t mem_encrypt_align(size_t size)
>> +{
>> +	return size;
>> +}
>> +
>>  int realm_register_memory_enc_ops(void);
>>  
>>  /*
>> diff --git a/arch/arm64/mm/mem_encrypt.c b/arch/arm64/mm/mem_encrypt.c
>> index 645c099fd551..deb364eadd47 100644
>> --- a/arch/arm64/mm/mem_encrypt.c
>> +++ b/arch/arm64/mm/mem_encrypt.c
>> @@ -46,6 +46,12 @@ int set_memory_decrypted(unsigned long addr, int numpages)
>>  	if (likely(!crypt_ops) || WARN_ON(!PAGE_ALIGNED(addr)))
>>  		return 0;
>>  
>> +	if (WARN_ON(!IS_ALIGNED(addr, mem_encrypt_align(PAGE_SIZE))))
>> +		return 0;
>> +
>> +	if (WARN_ON(!IS_ALIGNED(numpages << PAGE_SHIFT, mem_encrypt_align(PAGE_SIZE))))
>> +		return 0;
>> +
>>  	return crypt_ops->decrypt(addr, numpages);
>>  }
>>  EXPORT_SYMBOL_GPL(set_memory_decrypted);
>> diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
>> index 467cb78435a9..ffb8ef3a1eb3 100644
>> --- a/drivers/irqchip/irq-gic-v3-its.c
>> +++ b/drivers/irqchip/irq-gic-v3-its.c
>> @@ -213,16 +213,17 @@ static gfp_t gfp_flags_quirk;
>>  static struct page *its_alloc_pages_node(int node, gfp_t gfp,
>>  					 unsigned int order)
>>  {
>> +	unsigned int new_order;
>>  	struct page *page;
>>  	int ret = 0;
>>  
>> -	page = alloc_pages_node(node, gfp | gfp_flags_quirk, order);
>> -
>> +	new_order = get_order(mem_encrypt_align((PAGE_SIZE << order)));
>> +	page = alloc_pages_node(node, gfp | gfp_flags_quirk, new_order);
>>  	if (!page)
>>  		return NULL;
>>  
>>  	ret = set_memory_decrypted((unsigned long)page_address(page),
>> -				   1 << order);
>> +				   1 << new_order);
>>  	/*
>>  	 * If set_memory_decrypted() fails then we don't know what state the
>>  	 * page is in, so we can't free it. Instead we leak it.
>
> Don't you also need to update its_free_pages() in a similar manner so
> that the set_memory_encrypted()/free_pages() calls are done with the
> same order argument?
>

Yes, agreed, good point. The free path needs to mirror the allocation
path, so its_free_pages() should use the same recomputed order when
calling set_memory_encrypted() and free_pages(). I will update it
accordingly to keep the two paths symmetric. I also noticed that
swiotlb needs a similar change.

-aneesh



Thread overview: 18+ messages
2025-12-21 16:09 [PATCH v2 0/4] Enforce host page-size alignment for shared buffers Aneesh Kumar K.V (Arm)
2025-12-21 16:09 ` [PATCH v2 1/4] swiotlb: dma: its: " Aneesh Kumar K.V (Arm)
2025-12-22 14:49   ` Steven Price
2025-12-22 15:42     ` Aneesh Kumar K.V [this message]
2026-01-06  1:16   ` Jason Gunthorpe
2026-01-06  6:37     ` Aneesh Kumar K.V
2025-12-21 16:09 ` [PATCH v2 2/4] coco: guest: arm64: Fetch host IPA change alignment via RHI hostconf Aneesh Kumar K.V (Arm)
2025-12-21 16:09 ` [PATCH v2 3/4] coco: host: arm64: Handle hostconf RHI calls in kernel Aneesh Kumar K.V (Arm)
2025-12-21 20:10   ` Suzuki K Poulose
2025-12-22 14:37     ` Aneesh Kumar K.V
2025-12-23 19:56       ` Suzuki K Poulose
2025-12-21 16:09 ` [PATCH v2 4/4] dma: direct: set decrypted flag for remapped dma allocations Aneesh Kumar K.V (Arm)
2025-12-22 15:05   ` Suzuki K Poulose
2025-12-23  8:18     ` Aneesh Kumar K.V
2025-12-26  8:59       ` Aneesh Kumar K.V
2026-03-11 12:24         ` Mostafa Saleh
2026-01-06  1:11 ` [PATCH v2 0/4] Enforce host page-size alignment for shared buffers Jason Gunthorpe
2026-01-06  6:39   ` Aneesh Kumar K.V
