Subject: Re: [PATCH v3 09/14] arm64: Enable memory encrypt for Realms
From: Steven Price
Date: Thu, 27 Jun 2024 15:34:15 +0100
To: Catalin Marinas
Cc: kvm@vger.kernel.org, kvmarm@lists.linux.dev, Suzuki K Poulose, Marc Zyngier, Will Deacon, James Morse, Oliver Upton, Zenghui Yu, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Joey Gouly, Alexandru Elisei, Christoffer Dall, Fuad Tabba, linux-coco@lists.linux.dev, Ganapatrao Kulkarni
References: <20240605093006.145492-1-steven.price@arm.com> <20240605093006.145492-10-steven.price@arm.com>

On 10/06/2024 18:27, Catalin Marinas wrote:
> On Wed, Jun 05, 2024 at 10:30:01AM +0100, Steven Price wrote:
>> +static int __set_memory_encrypted(unsigned long addr,
>> +				  int numpages,
>> +				  bool encrypt)
>> +{
>> +	unsigned long set_prot = 0, clear_prot = 0;
>> +	phys_addr_t start, end;
>> +	int ret;
>> +
>> +	if (!is_realm_world())
>> +		return 0;
>> +
>> +	if (!__is_lm_address(addr))
>> +		return -EINVAL;
>> +
>> +	start = __virt_to_phys(addr);
>> +	end = start + numpages * PAGE_SIZE;
>> +
>> +	/*
>> +	 * Break the mapping before we make any changes to avoid stale TLB
>> +	 * entries or Synchronous External Aborts caused by RIPAS_EMPTY
>> +	 */
>> +	ret = __change_memory_common(addr, PAGE_SIZE * numpages,
>> +				     __pgprot(0),
>> +				     __pgprot(PTE_VALID));
>> +
>> +	if (encrypt) {
>> +		clear_prot = PROT_NS_SHARED;
>> +		ret = rsi_set_memory_range_protected(start, end);
>> +	} else {
>> +		set_prot = PROT_NS_SHARED;
>> +		ret = rsi_set_memory_range_shared(start, end);
>> +	}
>> +
>> +	if (ret)
>> +		return ret;
>> +
>> +	set_prot |= PTE_VALID;
>> +
>> +	return __change_memory_common(addr, PAGE_SIZE * numpages,
>> +				     __pgprot(set_prot),
>> +				     __pgprot(clear_prot));
>> +}
>
> This works, does break-before-make and also rejects vmalloc() ranges
> (for the time being).
>
> One particular aspect I don't like is doing the TLBI twice. It's
> sufficient to do it when you first make the pte invalid. We could guess
> this in __change_memory_common() if set_mask has PTE_VALID. The call
> sites are restricted to this file, just add a comment. An alternative
> would be to add a bool flush argument to this function.

I'm always a bit scared of changing this sort of thing, but AFAICT the
below should be safe:

-	flush_tlb_kernel_range(start, start + size);
+	/*
+	 * If the memory is being made valid without changing any other bits
+	 * then a TLBI isn't required as a non-valid entry cannot be cached in
+	 * the TLB.
+	 */
+	if (set_mask != PTE_VALID || clear_mask)
+		flush_tlb_kernel_range(start, start + size);

It will affect users of set_memory_valid() by removing the TLBI when
marking memory as valid. I'll add this change as a separate patch so it
can be reverted easily if there's something I've overlooked.

Thanks,
Steve