Date: Mon, 10 Jun 2024 18:27:22 +0100
From: Catalin Marinas
To: Steven Price
Cc: kvm@vger.kernel.org, kvmarm@lists.linux.dev, Suzuki K Poulose,
 Marc Zyngier, Will Deacon, James Morse, Oliver Upton, Zenghui Yu,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 Joey Gouly, Alexandru Elisei, Christoffer Dall, Fuad Tabba,
 linux-coco@lists.linux.dev, Ganapatrao Kulkarni
Subject: Re: [PATCH v3 09/14] arm64: Enable memory encrypt for Realms
References: <20240605093006.145492-1-steven.price@arm.com>
 <20240605093006.145492-10-steven.price@arm.com>
In-Reply-To: <20240605093006.145492-10-steven.price@arm.com>

On Wed, Jun 05, 2024 at 10:30:01AM +0100, Steven Price wrote:
> +static int __set_memory_encrypted(unsigned long addr,
> +				  int numpages,
> +				  bool encrypt)
> +{
> +	unsigned long set_prot = 0, clear_prot = 0;
> +	phys_addr_t start, end;
> +	int ret;
> +
> +	if (!is_realm_world())
> +		return 0;
> +
> +	if (!__is_lm_address(addr))
> +		return -EINVAL;
> +
> +	start = __virt_to_phys(addr);
> +	end = start + numpages * PAGE_SIZE;
> +
> +	/*
> +	 * Break the mapping before we make any changes to avoid stale TLB
> +	 * entries or Synchronous External Aborts caused by RIPAS_EMPTY
> +	 */
> +	ret = __change_memory_common(addr, PAGE_SIZE * numpages,
> +				     __pgprot(0),
> +				     __pgprot(PTE_VALID));
> +
> +	if (encrypt) {
> +		clear_prot = PROT_NS_SHARED;
> +		ret =
> +		      rsi_set_memory_range_protected(start, end);
> +	} else {
> +		set_prot = PROT_NS_SHARED;
> +		ret = rsi_set_memory_range_shared(start, end);
> +	}
> +
> +	if (ret)
> +		return ret;
> +
> +	set_prot |= PTE_VALID;
> +
> +	return __change_memory_common(addr, PAGE_SIZE * numpages,
> +				     __pgprot(set_prot),
> +				     __pgprot(clear_prot));
> +}

This works: it does break-before-make and also rejects vmalloc() ranges
(for the time being).

One particular aspect I don't like is doing the TLBI twice. It's
sufficient to do it once, when you first make the pte invalid. We could
infer this in __change_memory_common() when set_mask has PTE_VALID: the
call sites are restricted to this file, so just add a comment. An
alternative would be to add a bool flush argument to this function.

-- 
Catalin