Date: Thu, 25 Jul 2024 11:07:53 -0700
From: David Matlack
To: James Houghton
Cc: Andrew Morton, Paolo Bonzini, Ankit Agrawal, Axel Rasmussen,
	Catalin Marinas, David Rientjes, James Morse, Jason Gunthorpe,
	Jonathan Corbet, Marc Zyngier, Oliver Upton, Raghavendra Rao Ananta,
	Ryan Roberts, Sean Christopherson, Shaoqin Huang, Suzuki K Poulose,
	Wei Xu, Will Deacon, Yu Zhao, Zenghui Yu, kvmarm@lists.linux.dev,
	kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH v6 02/11] KVM: x86: Relax locking for kvm_test_age_gfn and kvm_age_gfn
References: <20240724011037.3671523-1-jthoughton@google.com>
 <20240724011037.3671523-3-jthoughton@google.com>
In-Reply-To: <20240724011037.3671523-3-jthoughton@google.com>

On 2024-07-24 01:10 AM, James Houghton wrote:
> Walk the TDP MMU in an RCU read-side critical section. This requires a
> way to do RCU-safe walking of the tdp_mmu_roots; do this with a new
> macro. The PTE modifications are now done atomically, and
> kvm_tdp_mmu_spte_need_atomic_write() has been updated to account for
> the fact that kvm_age_gfn() can now locklessly update the Accessed bit
> and the R/X bits.
>
> If the cmpxchg for marking the SPTE for access tracking fails, we
> simply retry if the SPTE is still a leaf PTE. If it isn't, we return
> false to continue the walk.
>
> Harvesting age information from the shadow MMU is still done while
> holding the MMU write lock.
>
> Suggested-by: Yu Zhao
> Signed-off-by: James Houghton

Aside from the comment fixes below,

Reviewed-by: David Matlack

> ---
>  arch/x86/include/asm/kvm_host.h |  1 +
>  arch/x86/kvm/Kconfig            |  1 +
>  arch/x86/kvm/mmu/mmu.c          | 10 ++++-
>  arch/x86/kvm/mmu/tdp_iter.h     | 27 +++++++------
>  arch/x86/kvm/mmu/tdp_mmu.c      | 67 +++++++++++++++++++++++++--------
>  5 files changed, 77 insertions(+), 29 deletions(-)

[...]
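For readers following along, the lockless aging flow described above amounts
to roughly the following. This is a simplified sketch, not the patch's actual
code: the function name tdp_mmu_age_spte() and the exact sequence of helper
calls are illustrative.

static bool tdp_mmu_age_spte(struct tdp_iter *iter)
{
	u64 old_spte = iter->old_spte;
	u64 new_spte;

	for (;;) {
		/*
		 * Bail if the SPTE is no longer a present leaf; returning
		 * false tells the caller to continue the walk.
		 */
		if (!is_shadow_present_pte(old_spte) ||
		    !is_last_spte(old_spte, iter->level))
			return false;

		if (spte_ad_enabled(old_spte)) {
			/*
			 * With A/D bits enabled, the Accessed bit is an
			 * independent bit, so an atomic AND suffices; no
			 * cmpxchg (and thus no retry) is needed.
			 */
			tdp_mmu_clear_spte_bits_atomic(iter->sptep,
						       shadow_accessed_mask);
			return true;
		}

		/*
		 * Access-track SPTEs stash the protection bits elsewhere in
		 * the SPTE, so the whole value must be swapped via cmpxchg.
		 */
		new_spte = mark_spte_for_access_track(old_spte);
		if (try_cmpxchg64(rcu_dereference(iter->sptep),
				  &old_spte, new_spte))
			return true;
		/* Lost a race; try_cmpxchg64() refreshed old_spte, retry. */
	}
}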
> --- a/arch/x86/kvm/mmu/tdp_iter.h
> +++ b/arch/x86/kvm/mmu/tdp_iter.h
> @@ -25,6 +25,13 @@ static inline u64 kvm_tdp_mmu_write_spte_atomic(tdp_ptep_t sptep, u64 new_spte)
>  	return xchg(rcu_dereference(sptep), new_spte);
>  }
>  
> +static inline u64 tdp_mmu_clear_spte_bits_atomic(tdp_ptep_t sptep, u64 mask)
> +{
> +	atomic64_t *sptep_atomic = (atomic64_t *)rcu_dereference(sptep);
> +
> +	return (u64)atomic64_fetch_and(~mask, sptep_atomic);
> +}
> +
>  static inline void __kvm_tdp_mmu_write_spte(tdp_ptep_t sptep, u64 new_spte)
>  {
>  	KVM_MMU_WARN_ON(is_ept_ve_possible(new_spte));
> @@ -32,10 +39,11 @@ static inline void __kvm_tdp_mmu_write_spte(tdp_ptep_t sptep, u64 new_spte)
>  }
>  
>  /*
> - * SPTEs must be modified atomically if they are shadow-present, leaf
> - * SPTEs, and have volatile bits, i.e. has bits that can be set outside
> - * of mmu_lock. The Writable bit can be set by KVM's fast page fault
> - * handler, and Accessed and Dirty bits can be set by the CPU.
> + * SPTEs must be modified atomically if they have bits that can be set outside
> + * of the mmu_lock. This can happen for any shadow-present leaf SPTEs, as the
> + * Writable bit can be set by KVM's fast page fault handler, the Accessed and
> + * Dirty bits can be set by the CPU, and the Accessed and R/X bits can be

"R/X bits" should be "W/R/X bits".

> + * cleared by age_gfn_range.

nit: "age_gfn_range()"
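One more aside, not a requested change: since atomic64_fetch_and() returns the
value prior to the AND, the new helper naturally supports test-and-clear
semantics. A hypothetical wrapper (illustrative only, not part of this patch)
could look like:

static bool tdp_mmu_test_and_clear_accessed(tdp_ptep_t sptep)
{
	u64 old_spte;

	/*
	 * Atomically clear the Accessed bit. The fetch-and-AND returns the
	 * prior SPTE value, so the caller learns whether the bit was set,
	 * all without taking mmu_lock.
	 */
	old_spte = tdp_mmu_clear_spte_bits_atomic(sptep, shadow_accessed_mask);

	return old_spte & shadow_accessed_mask;
}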