From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 23 Oct 2025 08:14:04 -0700
Precedence: bulk
X-Mailing-List: loongarch@lists.linux.dev
Mime-Version: 1.0
References: <20251017003244.186495-1-seanjc@google.com> <20251017003244.186495-20-seanjc@google.com>
Subject: Re: [PATCH v3 19/25] KVM: TDX: Assert that mmu_lock is held for write when removing S-EPT entries
From: Sean Christopherson
To: Yan Zhao
Cc: Marc Zyngier, Oliver Upton, Tianrui Zhao, Bibo Mao, Huacai Chen,
	Madhavan Srinivasan, Anup Patel, Paul Walmsley, Palmer Dabbelt,
	Albert Ou, Christian Borntraeger, Janosch Frank, Claudio Imbrenda,
	Paolo Bonzini, "Kirill A. Shutemov",
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	kvm@vger.kernel.org, loongarch@lists.linux.dev,
	linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
	x86@kernel.org, linux-coco@lists.linux.dev,
	linux-kernel@vger.kernel.org, Ira Weiny, Kai Huang, Michael Roth,
	Vishal Annapurve, Rick Edgecombe, Ackerley Tng, Binbin Wu
Content-Type: text/plain; charset="us-ascii"

On Thu, Oct 23, 2025, Yan Zhao wrote:
> On Thu, Oct 16, 2025 at 05:32:37PM -0700, Sean Christopherson wrote:
> > Unconditionally assert that mmu_lock is held for write when removing S-EPT
> > entries, not just when removing S-EPT entries triggers certain conditions,
> > e.g. needs to do TDH_MEM_TRACK or kick vCPUs out of the guest.
> > Conditionally asserting implies that it's safe to hold mmu_lock for read
> > when those paths aren't hit, which is simply not true, as KVM doesn't
> > support removing S-EPT entries under read-lock.
> > 
> > Only two paths lead to remove_external_spte(), and both paths asserts that
> > mmu_lock is held for write (tdp_mmu_set_spte() via lockdep, and
> > handle_removed_pt() via KVM_BUG_ON()).
> > 
> > Deliberately leave lockdep assertions in the "no vCPUs" helpers to document
> > that wait_for_sept_zap is guarded by holding mmu_lock for write.
> > 
> > Signed-off-by: Sean Christopherson
> > ---
> >  arch/x86/kvm/vmx/tdx.c | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> > 
> > diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
> > index e517ad3d5f4f..f6782b0ffa98 100644
> > --- a/arch/x86/kvm/vmx/tdx.c
> > +++ b/arch/x86/kvm/vmx/tdx.c
> > @@ -1711,8 +1711,6 @@ static void tdx_track(struct kvm *kvm)
> >  	if (unlikely(kvm_tdx->state != TD_STATE_RUNNABLE))
> >  		return;
> >  
> > -	lockdep_assert_held_write(&kvm->mmu_lock);
> Could we also deliberately leave lockdep assertion for tdx_track()?

Can do.

> This is because if we allow removing S-EPT entries while holding mmu_lock for
> read in future, tdx_track() needs to be protected by a separate spinlock to
> ensure serialization of tdh_mem_track() and vCPUs kick-off (kicking off vCPUs
> must follow each tdh_mem_track() to unblock the next tdh_mem_track()).

Does this look/sound right?

From: Sean Christopherson
Date: Thu, 28 Aug 2025 17:06:17 -0700
Subject: [PATCH] KVM: TDX: Assert that mmu_lock is held for write when
 removing S-EPT entries

Unconditionally assert that mmu_lock is held for write when removing S-EPT
entries, not just when removing S-EPT entries triggers certain conditions,
e.g. needs to do TDH_MEM_TRACK or kick vCPUs out of the guest.
Conditionally asserting implies that it's safe to hold mmu_lock for read
when those paths aren't hit, which is simply not true, as KVM doesn't
support removing S-EPT entries under read-lock.

Only two paths lead to remove_external_spte(), and both paths assert that
mmu_lock is held for write (tdp_mmu_set_spte() via lockdep, and
handle_removed_pt() via KVM_BUG_ON()).
Deliberately leave lockdep assertions in the "no vCPUs" helpers to document
that wait_for_sept_zap is guarded by holding mmu_lock for write, and keep
the conditional assert in tdx_track() as well, but with a comment to help
explain why holding mmu_lock for write matters (above and beyond
tdx_sept_remove_private_spte()'s requirements).

Signed-off-by: Sean Christopherson
---
 arch/x86/kvm/vmx/tdx.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index dca9e2561270..899051c64faa 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1715,6 +1715,11 @@ static void tdx_track(struct kvm *kvm)
 	if (unlikely(kvm_tdx->state != TD_STATE_RUNNABLE))
 		return;
 
+	/*
+	 * The full sequence of TDH.MEM.TRACK and forcing vCPUs out of guest
+	 * mode must be serialized, as TDH.MEM.TRACK will fail if the previous
+	 * tracking epoch hasn't completed.
+	 */
 	lockdep_assert_held_write(&kvm->mmu_lock);
 
 	err = tdh_mem_track(&kvm_tdx->td);
@@ -1762,6 +1767,8 @@ static void tdx_sept_remove_private_spte(struct kvm *kvm, gfn_t gfn,
 	gpa_t gpa = gfn_to_gpa(gfn);
 	u64 err, entry, level_state;
 
+	lockdep_assert_held_write(&kvm->mmu_lock);
+
 	/*
 	 * HKID is released after all private pages have been removed, and set
 	 * before any might be populated.  Warn if zapping is attempted when

base-commit: 69564844a116861ebea4396894005c8b4e48f870
-- 