Date: Fri, 20 Feb 2026 17:12:52 -0800
In-Reply-To: <20260206190851.860662-10-yosry.ahmed@linux.dev>
References: <20260206190851.860662-1-yosry.ahmed@linux.dev> <20260206190851.860662-10-yosry.ahmed@linux.dev>
Subject: Re: [PATCH v5 09/26] KVM: nSVM: Call enter_guest_mode() before switching to VMCB02
From: Sean Christopherson
To: Yosry Ahmed
Cc: Paolo Bonzini, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, stable@vger.kernel.org
Content-Type: text/plain; charset="us-ascii"

On Fri, Feb 06, 2026, Yosry Ahmed wrote:
> In preparation for moving more changes that rely on is_guest_mode()
> before switching to VMCB02, move entering guest mode a bit earlier.
>
> Nothing between the new callsite(s) and the old ones relies on
> is_guest_mode(), so this should be safe.
>
> No functional change intended.
>
> Cc: stable@vger.kernel.org
> Signed-off-by: Yosry Ahmed
> ---
>  arch/x86/kvm/svm/nested.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
> index 29069fc5e8cb..607d99172e2b 100644
> --- a/arch/x86/kvm/svm/nested.c
> +++ b/arch/x86/kvm/svm/nested.c
> @@ -741,9 +741,6 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm,
>
>  	nested_svm_transition_tlb_flush(vcpu);
>
> -	/* Enter Guest-Mode */
> -	enter_guest_mode(vcpu);
> -
>  	/*
>  	 * Filled at exit: exit_code, exit_info_1, exit_info_2, exit_int_info,
>  	 * exit_int_info_err, next_rip, insn_len, insn_bytes.
> @@ -944,6 +941,8 @@ int enter_svm_guest_mode(struct kvm_vcpu *vcpu, u64 vmcb12_gpa,
>
>  	WARN_ON(svm->vmcb == svm->nested.vmcb02.ptr);
>
> +	enter_guest_mode(vcpu);
> +
>  	nested_svm_copy_common_state(svm->vmcb01.ptr, svm->nested.vmcb02.ptr);
>
>  	svm_switch_vmcb(svm, &svm->nested.vmcb02);
> @@ -1890,6 +1889,7 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
>  	svm_copy_vmrun_state(&svm->vmcb01.ptr->save, save);
>  	nested_copy_vmcb_control_to_cache(svm, ctl);
>
> +	enter_guest_mode(vcpu);
>  	svm_switch_vmcb(svm, &svm->nested.vmcb02);
>  	nested_vmcb02_prepare_control(svm, svm->vmcb->save.rip, svm->vmcb->save.cs.base);

LOL, guess what!  Today ends in 'y', which means there's an nSVM bug!  It's a
super minor one though, especially in the broader context; I just happened to
see it when looking at this patch.

As per 3f6821aa147b ("KVM: x86: Forcibly leave nested if RSM to L2 hits shutdown"),
shutdown on RSM is supposed to hit L1, not L2.  But if enter_svm_guest_mode()
fails, svm_leave_smm() bails without leaving guest mode.  Syzkaller probably
hasn't found the bug because nested_run_pending doesn't get set, but it's still
technically wrong.

Of course, as the comment in emulator_leave_smm() says, the *entire* RSM flow
is wrong, because it's not a VM-Enter/VMRUN, it's something else entirely.
Anyways, I don't think there's anything to do in this series, but at some point
we should probably do:

diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index a2452b8ec49d..5cc9ad9b750d 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4877,13 +4877,15 @@ static int svm_leave_smm(struct kvm_vcpu *vcpu, const union kvm_smram *smram)
 	vmcb12 = map.hva;
 
 	nested_copy_vmcb_control_to_cache(svm, &vmcb12->control);
 	nested_copy_vmcb_save_to_cache(svm, &vmcb12->save);
+
 	ret = enter_svm_guest_mode(vcpu, smram64->svm_guest_vmcb_gpa, vmcb12, false);
 	if (ret)
-		goto unmap_save;
+		goto leave_nested;
 
 	svm->nested.nested_run_pending = 1;
+	goto unmap_save;
 
+leave_nested:
+	svm_leave_nested(vcpu);
 unmap_save:
 	kvm_vcpu_unmap(vcpu, &map_save);
 unmap_map: