Date: Tue, 04 Mar 2025 15:25:28 +0000
Message-ID: <86frjsq0lz.wl-maz@kernel.org>
From: Marc Zyngier <maz@kernel.org>
To: Fuad Tabba <tabba@google.com>
Cc: kvmarm@lists.linux.dev, kvm@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	Joey Gouly <joey.gouly@arm.com>,
	Suzuki K Poulose <suzuki.poulose@arm.com>,
	Oliver Upton <oliver.upton@linux.dev>,
	Zenghui Yu <yuzenghui@huawei.com>,
	Mark Rutland <mark.rutland@arm.com>
Subject: Re: [PATCH 03/18] KVM: arm64: Handle trapping of FEAT_LS64* instructions
References: <20250210184150.2145093-1-maz@kernel.org>
	<20250210184150.2145093-4-maz@kernel.org>
Hi Fuad,

On Tue, 04 Mar 2025 14:36:19 +0000,
Fuad Tabba <tabba@google.com> wrote:
> 
> Hi Marc,
> 
> On Mon, 10 Feb 2025 at 18:42, Marc Zyngier <maz@kernel.org> wrote:
> >
> > We generally don't expect FEAT_LS64* instructions to trap, unless
> > they are trapped by a guest hypervisor.
> >
> > Otherwise, this is just the guest playing tricks on us by using
> > an instruction that isn't advertised, which we handle with a well
> > deserved UNDEF.
> >
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> >  arch/arm64/kvm/handle_exit.c | 64 ++++++++++++++++++++++++++++++++++++
> >  1 file changed, 64 insertions(+)
> >
> > diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
> > index 512d152233ff2..4f8354bf7dc5f 100644
> > --- a/arch/arm64/kvm/handle_exit.c
> > +++ b/arch/arm64/kvm/handle_exit.c
> > @@ -294,6 +294,69 @@ static int handle_svc(struct kvm_vcpu *vcpu)
> >  	return 1;
> >  }
> >
> > +static int handle_ls64b(struct kvm_vcpu *vcpu)
> > +{
> > +	struct kvm *kvm = vcpu->kvm;
> > +	u64 esr = kvm_vcpu_get_esr(vcpu);
> > +	u64 iss = ESR_ELx_ISS(esr);
> > +	bool allowed;
> > +
> > +	switch (iss) {
> > +	case ESR_ELx_ISS_ST64BV:
> > +		allowed = kvm_has_feat(kvm, ID_AA64ISAR1_EL1, LS64, LS64_V);
> > +		break;
> > +	case ESR_ELx_ISS_ST64BV0:
> > +		allowed = kvm_has_feat(kvm, ID_AA64ISAR1_EL1, LS64, LS64_ACCDATA);
> > +		break;
> > +	case ESR_ELx_ISS_LDST64B:
> > +		allowed = kvm_has_feat(kvm, ID_AA64ISAR1_EL1, LS64, LS64);
> > +		break;
> > +	default:
> > +		/* Clearly, we're missing something. */
> > +		goto unknown_trap;
> > +	}
> > +
> > +	if (!allowed)
> > +		goto undef;
> > +
> > +	if (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu)) {
> > +		u64 hcrx = __vcpu_sys_reg(vcpu, HCRX_EL2);
> > +		bool fwd;
> > +
> > +		switch (iss) {
> > +		case ESR_ELx_ISS_ST64BV:
> > +			fwd = !(hcrx & HCRX_EL2_EnASR);
> > +			break;
> > +		case ESR_ELx_ISS_ST64BV0:
> > +			fwd = !(hcrx & HCRX_EL2_EnAS0);
> > +			break;
> > +		case ESR_ELx_ISS_LDST64B:
> > +			fwd = !(hcrx & HCRX_EL2_EnALS);
> > +			break;
> > +		default:
> > +			/* We don't expect to be here */
> > +			fwd = false;
> > +		}
> > +
> > +		if (fwd) {
> > +			kvm_inject_nested_sync(vcpu, esr);
> > +			return 1;
> > +		}
> > +	}
> > +
> > +unknown_trap:
> > +	/*
> > +	 * If we land here, something must be very wrong, because we
> > +	 * have no idea why we trapped at all. Warn and undef as a
> > +	 * fallback.
> > +	 */
> > +	WARN_ON(1);
> 
> nit: should this be WARN_ONCE() instead?
> 
> > +
> > +undef:
> > +	kvm_inject_undefined(vcpu);
> > +	return 1;
> > +}
> 
> I'm wondering if this can be simplified by having one switch()
> statement that toggles both allowed and fwd (or maybe even only fwd),
> and then inject depending on that, e.g.,
> 
> +static int handle_ls64b(struct kvm_vcpu *vcpu)
> +{
> +	struct kvm *kvm = vcpu->kvm;
> +	bool is_nv = vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu);
> +	u64 hcrx = __vcpu_sys_reg(vcpu, HCRX_EL2);
> +	u64 esr = kvm_vcpu_get_esr(vcpu);
> +	u64 iss = ESR_ELx_ISS(esr);
> +	bool fwd = false;
> +
> +	switch (iss) {
> +	case ESR_ELx_ISS_ST64BV:
> +		fwd = kvm_has_feat(kvm, ID_AA64ISAR1_EL1, LS64, LS64_V) &&
> +		      !(hcrx & HCRX_EL2_EnASR);
> +		break;
> ...
> +	default:
> +		WARN_ONCE(1);
> +	}
> +
> +	if (is_nv && fwd)
> +		kvm_inject_nested_sync(vcpu, esr);
> +	else
> +		kvm_inject_undefined(vcpu);
> +
> +	return 1;
> +}
> 
> I think this has the same effect as the code above.

Yeah, I think something like that could work.
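Just to make sure we're looking at the same shape, here's a completely
untested sketch of it, with the ST64BV0/LDST64B arms filled in the same
way as your ST64BV one (and using WARN_ON_ONCE() rather than
WARN_ONCE(), since the latter wants a format string):

static int handle_ls64b(struct kvm_vcpu *vcpu)
{
	struct kvm *kvm = vcpu->kvm;
	bool is_nv = vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu);
	u64 hcrx = __vcpu_sys_reg(vcpu, HCRX_EL2);
	u64 esr = kvm_vcpu_get_esr(vcpu);
	u64 iss = ESR_ELx_ISS(esr);
	bool fwd = false;

	switch (iss) {
	case ESR_ELx_ISS_ST64BV:
		fwd = kvm_has_feat(kvm, ID_AA64ISAR1_EL1, LS64, LS64_V) &&
		      !(hcrx & HCRX_EL2_EnASR);
		break;
	case ESR_ELx_ISS_ST64BV0:
		fwd = kvm_has_feat(kvm, ID_AA64ISAR1_EL1, LS64, LS64_ACCDATA) &&
		      !(hcrx & HCRX_EL2_EnAS0);
		break;
	case ESR_ELx_ISS_LDST64B:
		fwd = kvm_has_feat(kvm, ID_AA64ISAR1_EL1, LS64, LS64) &&
		      !(hcrx & HCRX_EL2_EnALS);
		break;
	default:
		/* No idea why we trapped: warn once and UNDEF below */
		WARN_ON_ONCE(1);
	}

	/* Forward to the guest hypervisor if it didn't enable the access */
	if (is_nv && fwd)
		kvm_inject_nested_sync(vcpu, esr);
	else
		kvm_inject_undefined(vcpu);

	return 1;
}

Note that this reads HCRX_EL2 even when !NV, which is where the init
question below comes in.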
I was initially trying to handle all 4 (allowed, fwd) states, but
obviously some of these states cannot exist (you don't forward
something that isn't allowed in the first place).

My only concern here is that we don't have clear rules on the init of
EL2 guest registers such as HCRX_EL2 when !NV, but that's something
that can easily be done (or even worked around locally). Maybe we
should introduce the dreaded notion of "effective value" for EL2
registers when NV is not enabled...

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.