From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 14 Jan 2026 09:27:46 +0000
From: Vincent Donnefort
To: Petteri Kangaslampi
Cc: kvmarm@lists.linux.dev, Marc Zyngier, Oliver Upton, Joey Gouly,
	Suzuki K Poulose, Zenghui Yu, Catalin Marinas, Will Deacon,
	linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 1/1] KVM: arm64: Calculate hyp VA size only once
Message-ID:
References: <20260113194409.2970324-1-pekangas@google.com>
 <20260113194409.2970324-2-pekangas@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20260113194409.2970324-2-pekangas@google.com>

On Tue, Jan 13, 2026 at 07:44:09PM +0000, Petteri Kangaslampi wrote:
> Calculate the hypervisor's VA size only once to maintain consistency
> between the memory layout and MMU initialization logic. Previously the
> two would be inconsistent when the kernel is configured for less than
> IDMAP_VA_BITS of VA space.
> 
> Signed-off-by: Petteri Kangaslampi

Tested-by: Vincent Donnefort

> ---
>  arch/arm64/include/asm/kvm_mmu.h |  3 ++-
>  arch/arm64/kvm/arm.c             |  4 ++--
>  arch/arm64/kvm/mmu.c             | 28 ++++----------------------
>  arch/arm64/kvm/va_layout.c       | 33 +++++++++++++++++++++++++++-----
>  4 files changed, 36 insertions(+), 32 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
> index 2dc5e6e742bb..d968aca0461a 100644
> --- a/arch/arm64/include/asm/kvm_mmu.h
> +++ b/arch/arm64/include/asm/kvm_mmu.h
> @@ -103,6 +103,7 @@ alternative_cb_end
>  void kvm_update_va_mask(struct alt_instr *alt,
>  			__le32 *origptr, __le32 *updptr, int nr_inst);
>  void kvm_compute_layout(void);
> +u32 kvm_hyp_va_bits(void);
>  void kvm_apply_hyp_relocations(void);
> 
>  #define __hyp_pa(x) (((phys_addr_t)(x)) + hyp_physvirt_offset)
> @@ -185,7 +186,7 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu);
> 
>  phys_addr_t kvm_mmu_get_httbr(void);
>  phys_addr_t kvm_get_idmap_vector(void);
> -int __init kvm_mmu_init(u32 *hyp_va_bits);
> +int __init kvm_mmu_init(u32 hyp_va_bits);
> 
>  static inline void *__kvm_vector_slot2addr(void *base,
>  					   enum arm64_hyp_spectre_vector slot)
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 4f80da0c0d1d..4703f0e15102 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -2568,7 +2568,7 @@ static void pkvm_hyp_init_ptrauth(void)
>  /* Inits Hyp-mode on all online CPUs */
>  static int __init init_hyp_mode(void)
>  {
> -	u32 hyp_va_bits;
> +	u32 hyp_va_bits = kvm_hyp_va_bits();
>  	int cpu;
>  	int err = -ENOMEM;
> 
> @@ -2582,7 +2582,7 @@ static int __init init_hyp_mode(void)
>  	/*
>  	 * Allocate Hyp PGD and setup Hyp identity mapping
>  	 */
> -	err = kvm_mmu_init(&hyp_va_bits);
> +	err = kvm_mmu_init(hyp_va_bits);
>  	if (err)
>  		goto out_err;
> 
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 48d7c372a4cd..d5a506c99f73 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -2284,11 +2284,9 @@ static struct kvm_pgtable_mm_ops kvm_hyp_mm_ops = {
>  	.virt_to_phys = kvm_host_pa,
>  };
> 
> -int __init kvm_mmu_init(u32 *hyp_va_bits)
> +int __init kvm_mmu_init(u32 hyp_va_bits)
>  {
>  	int err;
> -	u32 idmap_bits;
> -	u32 kernel_bits;
> 
>  	hyp_idmap_start = __pa_symbol(__hyp_idmap_text_start);
>  	hyp_idmap_start = ALIGN_DOWN(hyp_idmap_start, PAGE_SIZE);
> @@ -2302,25 +2300,7 @@ int __init kvm_mmu_init(u32 *hyp_va_bits)
>  	 */
>  	BUG_ON((hyp_idmap_start ^ (hyp_idmap_end - 1)) & PAGE_MASK);
> 
> -	/*
> -	 * The ID map is always configured for 48 bits of translation, which
> -	 * may be fewer than the number of VA bits used by the regular kernel
> -	 * stage 1, when VA_BITS=52.
> -	 *
> -	 * At EL2, there is only one TTBR register, and we can't switch between
> -	 * translation tables *and* update TCR_EL2.T0SZ at the same time. Bottom
> -	 * line: we need to use the extended range with *both* our translation
> -	 * tables.
> -	 *
> -	 * So use the maximum of the idmap VA bits and the regular kernel stage
> -	 * 1 VA bits to assure that the hypervisor can both ID map its code page
> -	 * and map any kernel memory.
> -	 */
> -	idmap_bits = IDMAP_VA_BITS;
> -	kernel_bits = vabits_actual;
> -	*hyp_va_bits = max(idmap_bits, kernel_bits);
> -
> -	kvm_debug("Using %u-bit virtual addresses at EL2\n", *hyp_va_bits);
> +	kvm_debug("Using %u-bit virtual addresses at EL2\n", hyp_va_bits);
>  	kvm_debug("IDMAP page: %lx\n", hyp_idmap_start);
>  	kvm_debug("HYP VA range: %lx:%lx\n",
>  		  kern_hyp_va(PAGE_OFFSET),
> @@ -2345,7 +2325,7 @@ int __init kvm_mmu_init(u32 *hyp_va_bits)
>  		goto out;
>  	}
> 
> -	err = kvm_pgtable_hyp_init(hyp_pgtable, *hyp_va_bits, &kvm_hyp_mm_ops);
> +	err = kvm_pgtable_hyp_init(hyp_pgtable, hyp_va_bits, &kvm_hyp_mm_ops);
>  	if (err)
>  		goto out_free_pgtable;
> 
> @@ -2354,7 +2334,7 @@ int __init kvm_mmu_init(u32 *hyp_va_bits)
>  		goto out_destroy_pgtable;
> 
>  	io_map_base = hyp_idmap_start;
> -	__hyp_va_bits = *hyp_va_bits;
> +	__hyp_va_bits = hyp_va_bits;
>  	return 0;
> 
>  out_destroy_pgtable:
> diff --git a/arch/arm64/kvm/va_layout.c b/arch/arm64/kvm/va_layout.c
> index 91b22a014610..2346f9435a71 100644
> --- a/arch/arm64/kvm/va_layout.c
> +++ b/arch/arm64/kvm/va_layout.c
> @@ -46,9 +46,31 @@ static void init_hyp_physvirt_offset(void)
>  	hyp_physvirt_offset = (s64)__pa(kern_va) - (s64)hyp_va;
>  }
> 
> +/*
> + * Calculate the actual VA size used by the hypervisor
> + */
> +__init u32 kvm_hyp_va_bits(void)
> +{
> +	/*
> +	 * The ID map is always configured for 48 bits of translation, which may
> +	 * be different from the number of VA bits used by the regular kernel
> +	 * stage 1.
> +	 *
> +	 * At EL2, there is only one TTBR register, and we can't switch between
> +	 * translation tables *and* update TCR_EL2.T0SZ at the same time. Bottom
> +	 * line: we need to use the extended range with *both* our translation
> +	 * tables.
> +	 *
> +	 * So use the maximum of the idmap VA bits and the regular kernel stage
> +	 * 1 VA bits as the hypervisor VA size to assure that the hypervisor can
> +	 * both ID map its code page and map any kernel memory.
> +	 */
> +	return max(IDMAP_VA_BITS, vabits_actual);
> +}
> +
>  /*
>   * We want to generate a hyp VA with the following format (with V ==
> - * vabits_actual):
> + * hypervisor VA bits):
>   *
>   * 63 ... V | V-1 | V-2 .. tag_lsb | tag_lsb - 1 .. 0
>   * ---------------------------------------------------------
> @@ -61,10 +83,11 @@ __init void kvm_compute_layout(void)
>  {
>  	phys_addr_t idmap_addr = __pa_symbol(__hyp_idmap_text_start);
>  	u64 hyp_va_msb;
> +	u32 hyp_va_bits = kvm_hyp_va_bits();
> 
>  	/* Where is my RAM region? */
> -	hyp_va_msb = idmap_addr & BIT(vabits_actual - 1);
> -	hyp_va_msb ^= BIT(vabits_actual - 1);
> +	hyp_va_msb = idmap_addr & BIT(hyp_va_bits - 1);
> +	hyp_va_msb ^= BIT(hyp_va_bits - 1);
> 
>  	tag_lsb = fls64((u64)phys_to_virt(memblock_start_of_DRAM()) ^
>  			(u64)(high_memory - 1));
> @@ -72,9 +95,9 @@
>  	va_mask = GENMASK_ULL(tag_lsb - 1, 0);
>  	tag_val = hyp_va_msb;
> 
> -	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && tag_lsb != (vabits_actual - 1)) {
> +	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && tag_lsb != (hyp_va_bits - 1)) {
>  		/* We have some free bits to insert a random tag. */
> -		tag_val |= get_random_long() & GENMASK_ULL(vabits_actual - 2, tag_lsb);
> +		tag_val |= get_random_long() & GENMASK_ULL(hyp_va_bits - 2, tag_lsb);
>  	}
>  	tag_val >>= tag_lsb;
> 
> -- 
> 2.52.0.457.g6b5491de43-goog
> 