From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 28 Apr 2026 14:31:14 +0100
Message-ID: <86v7dbyzx9.wl-maz@kernel.org>
From: Marc Zyngier <maz@kernel.org>
To: "Aneesh Kumar K.V" <aneesh.kumar@kernel.org>
Cc: linux-kernel@vger.kernel.org, iommu@lists.linux.dev,
	linux-coco@lists.linux.dev, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.linux.dev, Catalin Marinas <catalin.marinas@arm.com>,
	Jason Gunthorpe <jgg@ziepe.ca>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	Robin Murphy <robin.murphy@arm.com>, Steven Price <steven.price@arm.com>,
	Suzuki K Poulose <suzuki.poulose@arm.com>,
	Thomas Gleixner <tglx@kernel.org>, Will Deacon <will@kernel.org>
Subject: Re: [PATCH v4 2/3] swiotlb: dma: its: Enforce host page-size alignment for shared buffers
References: <20260427063108.909019-1-aneesh.kumar@kernel.org>
	<20260427063108.909019-3-aneesh.kumar@kernel.org>
	<86zf2ozrb8.wl-maz@kernel.org>
On Tue, 28 Apr 2026 13:20:53 +0100,
Aneesh Kumar K.V wrote:
> 
> Marc Zyngier writes:
> 
> > On Mon, 27 Apr 2026 07:31:07 +0100,
> > "Aneesh Kumar K.V (Arm)" wrote:
> >> 
> >> When running private-memory guests, the guest kernel must apply additional
> >> constraints when allocating buffers that are shared with the hypervisor.
> >> 
> >> These shared buffers are also accessed by the host kernel and therefore
> >> must be aligned to the host's page size, and have a size that is a multiple
> >> of the host page size.
> >> 
> >> On non-secure hosts, set_guest_memory_attributes() tracks memory at the
> >> host PAGE_SIZE granularity. This creates a mismatch when the guest applies
> >> attributes at 4K boundaries while the host uses 64K pages. In such cases,
> >> the set_guest_memory_attributes() call returns -EINVAL, preventing the
> >> conversion of memory regions from private to shared.
> >> 
> >> Architectures such as Arm can tolerate realm physical address space
> >> (protected memory) PFNs being mapped as shared memory, as incorrect
> >> accesses are detected and reported as GPC faults.
> >> However, relying on this
> >> mechanism is unsafe and can still lead to kernel crashes.
> >> 
> >> This is particularly likely when guest_memfd allocations are mmapped and
> >> accessed from userspace. Once exposed to userspace, we cannot guarantee
> >> that applications will only access the intended 4K shared region rather
> >> than the full 64K page mapped into their address space. Such userspace
> >> addresses may also be passed back into the kernel and accessed via the
> >> linear map, resulting in a GPC fault and a kernel crash.
> >> 
> >> With CCA, although Stage-2 mappings managed by the RMM still operate at a
> >> 4K granularity, shared pages must nonetheless be aligned to the
> >> host-managed page size and sized as whole host pages to avoid the issues
> >> described above.
> > 
> > I thought that was being fixed, and that there was now a strong
> > guarantee that RMM and host are aligned on the page size. Even more,
> > S2 is totally irrelevant here. The only thing that matters is the host
> > page size vs the guest page size. Nothing else.
> > 
> 
> Yes, the latest RMM update includes the ability to change the granule
> size.
> 
> The section above in the commit message was intended to explain that the
> S2 mapping size is irrelevant. I agree it is not clear as written, so I
> will reword it to improve clarity.

Even better, remove it. Nothing CCA-specific should be in this patch.

[...]

> >>  	static struct gen_pool *itt_pool;
> >> @@ -268,11 +272,13 @@ static void *itt_alloc_pool(int node, int size)
> >>  		if (addr)
> >>  			break;
> >>  
> >> -		page = its_alloc_pages_node(node, GFP_KERNEL | __GFP_ZERO, 0);
> >> +		page = its_alloc_pages_node(node, GFP_KERNEL | __GFP_ZERO,
> >> +					    get_order(mem_decrypt_granule_size()));
> > 
> > You already taught its_alloc_pages_node() about the decrypt granule
> > size stuff. I don't think we need to see more of it (and you don't
> > mess with the call that is just above it).
> > 
> >>  		if (!page)
> >>  			break;
> >>  
> >> -		gen_pool_add(itt_pool, (unsigned long)page_address(page), PAGE_SIZE, node);
> >> +		gen_pool_add(itt_pool, (unsigned long)page_address(page),
> >> +			     mem_decrypt_granule_size(), node);
> > 
> > I'd rather see something like mem_decrypt_align(PAGE_SIZE), which
> > keeps the intent clear.
> > 
> 
> The helper was added based on feedback from a previous version. I assume
> you are suggesting that only this caller should switch?

I don't know what you mean by 'this'. What I'd like to see is this
last hunk changed to:

	gen_pool_add(itt_pool, (unsigned long)page_address(page),
		     mem_decrypt_align(PAGE_SIZE), node);

and the previous hunk simply dropped.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.