Date: Thu, 25 Jul 2024 09:39:32 -0700
From: David Matlack <dmatlack@google.com>
To: James Houghton <jthoughton@google.com>
Cc: Andrew Morton, Paolo Bonzini, Ankit Agrawal, Axel Rasmussen,
	Catalin Marinas, David Rientjes, James Morse, Jason Gunthorpe,
	Jonathan Corbet, Marc Zyngier, Oliver Upton, Raghavendra Rao Ananta,
	Ryan Roberts, Sean Christopherson, Shaoqin Huang, Suzuki K Poulose,
	Wei Xu, Will Deacon, Yu Zhao, Zenghui Yu, kvmarm@lists.linux.dev,
	kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH v6 01/11] KVM: Add lockless memslot walk to KVM
In-Reply-To: <20240724011037.3671523-2-jthoughton@google.com>
References: <20240724011037.3671523-1-jthoughton@google.com>
	<20240724011037.3671523-2-jthoughton@google.com>

On 2024-07-24 01:10 AM, James Houghton wrote:
> Provide flexibility to the architecture to synchronize as optimally as
> they can instead of always taking the MMU lock for writing.
>
> Architectures that do their own locking must select
> CONFIG_KVM_MMU_NOTIFIER_YOUNG_LOCKLESS.
>
> The immediate application is to allow architectures to implement the
> test/clear_young MMU notifiers more cheaply.
>
> Suggested-by: Yu Zhao
> Signed-off-by: James Houghton

Aside from the cleanup suggestion (which should be in separate patches
anyway):

Reviewed-by: David Matlack

> ---
>  include/linux/kvm_host.h |  1 +
>  virt/kvm/Kconfig         |  3 +++
>  virt/kvm/kvm_main.c      | 26 +++++++++++++++++++-------
>  3 files changed, 23 insertions(+), 7 deletions(-)
>
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 689e8be873a7..8cd80f969cff 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -266,6 +266,7 @@ struct kvm_gfn_range {
>  	gfn_t end;
>  	union kvm_mmu_notifier_arg arg;
>  	bool may_block;
> +	bool lockless;
>  };
>  bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range);
>  bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range);
> diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
> index b14e14cdbfb9..632334861001 100644
> --- a/virt/kvm/Kconfig
> +++ b/virt/kvm/Kconfig
> @@ -100,6 +100,9 @@ config KVM_GENERIC_MMU_NOTIFIER
>  	select MMU_NOTIFIER
>  	bool
>
> +config KVM_MMU_NOTIFIER_YOUNG_LOCKLESS
> +	bool
> +
>  config KVM_GENERIC_MEMORY_ATTRIBUTES
>  	depends on KVM_GENERIC_MMU_NOTIFIER
>  	bool
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index d0788d0a72cc..33f8997a5c29 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -555,6 +555,7 @@ struct kvm_mmu_notifier_range {
>  	on_lock_fn_t on_lock;
>  	bool flush_on_ret;
>  	bool may_block;
> +	bool lockless;
>  };
>
>  /*
> @@ -609,6 +610,10 @@ static __always_inline kvm_mn_ret_t __kvm_handle_hva_range(struct kvm *kvm,
>  			IS_KVM_NULL_FN(range->handler)))
>  		return r;
>
> +	/* on_lock will never be called for lockless walks */
> +	if (WARN_ON_ONCE(range->lockless && !IS_KVM_NULL_FN(range->on_lock)))
> +		return r;
> +
>  	idx = srcu_read_lock(&kvm->srcu);
>
>  	for (i = 0; i < kvm_arch_nr_memslot_as_ids(kvm); i++) {
> @@ -640,15 +645,18 @@ static __always_inline kvm_mn_ret_t __kvm_handle_hva_range(struct kvm *kvm,
>  			gfn_range.start = hva_to_gfn_memslot(hva_start, slot);
>  			gfn_range.end = hva_to_gfn_memslot(hva_end + PAGE_SIZE - 1, slot);
>  			gfn_range.slot = slot;
> +			gfn_range.lockless = range->lockless;
>
>  			if (!r.found_memslot) {
>  				r.found_memslot = true;
> -				KVM_MMU_LOCK(kvm);
> -				if (!IS_KVM_NULL_FN(range->on_lock))
> -					range->on_lock(kvm);
> -
> -				if (IS_KVM_NULL_FN(range->handler))
> -					goto mmu_unlock;
> +				if (!range->lockless) {
> +					KVM_MMU_LOCK(kvm);
> +					if (!IS_KVM_NULL_FN(range->on_lock))
> +						range->on_lock(kvm);
> +
> +					if (IS_KVM_NULL_FN(range->handler))
> +						goto mmu_unlock;
> +				}
>  			}
>  			r.ret |= range->handler(kvm, &gfn_range);
>  		}
> @@ -658,7 +666,7 @@ static __always_inline kvm_mn_ret_t __kvm_handle_hva_range(struct kvm *kvm,
>  		kvm_flush_remote_tlbs(kvm);
>
>  mmu_unlock:
> -	if (r.found_memslot)
> +	if (r.found_memslot && !range->lockless)
>  		KVM_MMU_UNLOCK(kvm);
>
>  	srcu_read_unlock(&kvm->srcu, idx);
> @@ -679,6 +687,8 @@ static __always_inline int kvm_handle_hva_range(struct mmu_notifier *mn,
>  		.on_lock = (void *)kvm_null_fn,
>  		.flush_on_ret = true,
>  		.may_block = false,
> +		.lockless =
> +			IS_ENABLED(CONFIG_KVM_MMU_NOTIFIER_YOUNG_LOCKLESS),
>  	};
>
>  	return __kvm_handle_hva_range(kvm, &range).ret;
> @@ -697,6 +707,8 @@ static __always_inline int kvm_handle_hva_range_no_flush(struct mmu_notifier *mn
>  		.on_lock = (void *)kvm_null_fn,
>  		.flush_on_ret = false,
>  		.may_block = false,
> +		.lockless =
> +			IS_ENABLED(CONFIG_KVM_MMU_NOTIFIER_YOUNG_LOCKLESS),

kvm_handle_hva_range{,_no_flush}() have very generic names but they're
intimately tied to the "young" notifiers, whereas __kvm_handle_hva_range()
is the truly generic handler function. This is arguably a pre-existing
issue, but adding CONFIG_KVM_MMU_NOTIFIER_YOUNG_LOCKLESS makes these
functions even more intimately tied to the "young" notifiers.

We could rename kvm_handle_hva_range{,_no_flush}(), but I think the
cleanest thing to do might be to just drop them entirely and move their
contents into their callers (there are only 2 callers of these 3
functions). That will create a little duplication but IMO will make the
code easier to read. And then we can also rename __kvm_handle_hva_range()
to kvm_handle_hva_range().

e.g. Something like this as the end result:

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 86fb2b560d98..0146c83e24bd 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -590,8 +590,8 @@ static void kvm_null_fn(void)
 	     node;							\
 	     node = interval_tree_iter_next(node, start, last))	\

-static __always_inline kvm_mn_ret_t __kvm_handle_hva_range(struct kvm *kvm,
-			const struct kvm_mmu_notifier_range *range)
+static __always_inline kvm_mn_ret_t kvm_handle_hva_range(struct kvm *kvm,
+			const struct kvm_mmu_notifier_range *range)
 {
 	struct kvm_mmu_notifier_return r = {
 		.ret = false,
@@ -674,48 +674,6 @@ static __always_inline kvm_mn_ret_t __kvm_handle_hva_range(struct kvm *kvm,
 	return r;
 }

-static __always_inline int kvm_handle_hva_range(struct mmu_notifier *mn,
-						unsigned long start,
-						unsigned long end,
-						gfn_handler_t handler)
-{
-	struct kvm *kvm = mmu_notifier_to_kvm(mn);
-	const struct kvm_mmu_notifier_range range = {
-		.start = start,
-		.end = end,
-		.handler = handler,
-		.on_lock = (void *)kvm_null_fn,
-		.flush_on_ret = true,
-		.may_block = false,
-		.lockless =
-			IS_ENABLED(CONFIG_KVM_MMU_NOTIFIER_YOUNG_LOCKLESS),
-	};
-
-	return __kvm_handle_hva_range(kvm, &range).ret;
-}
-
-static __always_inline int kvm_handle_hva_range_no_flush(struct mmu_notifier *mn,
-						unsigned long start,
-						unsigned long end,
-						gfn_handler_t handler,
-						bool fast_only)
-{
-	struct kvm *kvm = mmu_notifier_to_kvm(mn);
-	const struct kvm_mmu_notifier_range range = {
-		.start = start,
-		.end = end,
-		.handler = handler,
-		.on_lock = (void *)kvm_null_fn,
-		.flush_on_ret = false,
-		.may_block = false,
-		.lockless =
-			IS_ENABLED(CONFIG_KVM_MMU_NOTIFIER_YOUNG_LOCKLESS),
-		.arg.fast_only = fast_only,
-	};
-
-	return __kvm_handle_hva_range(kvm, &range).ret;
-}
-
 void kvm_mmu_invalidate_begin(struct kvm *kvm)
 {
 	lockdep_assert_held_write(&kvm->mmu_lock);
@@ -808,7 +766,7 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
 	 * that guest memory has been reclaimed.  This needs to be done *after*
 	 * dropping mmu_lock, as x86's reclaim path is slooooow.
 	 */
-	if (__kvm_handle_hva_range(kvm, &hva_range).found_memslot)
+	if (kvm_handle_hva_range(kvm, &hva_range).found_memslot)
 		kvm_arch_guest_memory_reclaimed(kvm);

 	return 0;
@@ -854,7 +812,7 @@ static void kvm_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn,
 	};
 	bool wake;

-	__kvm_handle_hva_range(kvm, &hva_range);
+	kvm_handle_hva_range(kvm, &hva_range);

 	/* Pairs with the increment in range_start(). */
 	spin_lock(&kvm->mn_invalidate_lock);
@@ -876,6 +834,17 @@ static int kvm_mmu_notifier_clear_flush_young(struct mmu_notifier *mn,
 					      unsigned long start,
 					      unsigned long end)
 {
+	struct kvm *kvm = mmu_notifier_to_kvm(mn);
+	const struct kvm_mmu_notifier_range range = {
+		.start = start,
+		.end = end,
+		.handler = kvm_age_gfn,
+		.on_lock = (void *)kvm_null_fn,
+		.flush_on_ret = true,
+		.may_block = false,
+		.lockless = IS_ENABLED(CONFIG_KVM_MMU_NOTIFIER_YOUNG_LOCKLESS),
+	};
+
 	trace_kvm_age_hva(start, end, false);

 	return kvm_handle_hva_range(mn, start, end, kvm_age_gfn);
@@ -887,6 +856,18 @@ static int kvm_mmu_notifier_clear_young(struct mmu_notifier *mn,
 					unsigned long end,
 					bool fast_only)
 {
+	struct kvm *kvm = mmu_notifier_to_kvm(mn);
+	const struct kvm_mmu_notifier_range range = {
+		.start = start,
+		.end = end,
+		.handler = kvm_age_gfn,
+		.on_lock = (void *)kvm_null_fn,
+		.flush_on_ret = false,
+		.may_block = false,
+		.lockless = IS_ENABLED(CONFIG_KVM_MMU_NOTIFIER_YOUNG_LOCKLESS),
+		.arg.fast_only = fast_only,
+	};
+
 	trace_kvm_age_hva(start, end, fast_only);

 	/*
@@ -902,8 +883,7 @@ static int kvm_mmu_notifier_clear_young(struct mmu_notifier *mn,
 	 * cadence. If we find this inaccurate, we might come up with a
 	 * more sophisticated heuristic later.
 	 */
-	return kvm_handle_hva_range_no_flush(mn, start, end, kvm_age_gfn,
-					     fast_only);
+	return kvm_handle_hva_range(kvm, &range).ret;
 }

 static int kvm_mmu_notifier_test_young(struct mmu_notifier *mn,
@@ -911,6 +891,18 @@ static int kvm_mmu_notifier_test_young(struct mmu_notifier *mn,
 				       unsigned long address,
 				       bool fast_only)
 {
+	struct kvm *kvm = mmu_notifier_to_kvm(mn);
+	const struct kvm_mmu_notifier_range range = {
+		.start = address,
+		.end = address + 1,
+		.handler = kvm_test_age_gfn,
+		.on_lock = (void *)kvm_null_fn,
+		.flush_on_ret = false,
+		.may_block = false,
+		.lockless = IS_ENABLED(CONFIG_KVM_MMU_NOTIFIER_YOUNG_LOCKLESS),
+		.arg.fast_only = fast_only,
+	};
+
 	trace_kvm_test_age_hva(address, fast_only);

 	return kvm_handle_hva_range_no_flush(mn, address, address + 1,

> 	};
>
> 	return __kvm_handle_hva_range(kvm, &range).ret;
> --
> 2.46.0.rc1.232.g9752f9e123-goog
>
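
As a purely illustrative sketch (not from this series or the cleanup
above): an architecture that does its own synchronization would select
KVM_MMU_NOTIFIER_YOUNG_LOCKLESS from its KVM Kconfig entry and could then
branch on the new range->lockless flag in its aging handler. The
arch_age_gfn_locked()/arch_age_gfn_lockless() helpers here are
hypothetical placeholders, not real functions:

	bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
	{
		/*
		 * On the lockless path the common code holds only SRCU,
		 * so the arch must provide its own synchronization.
		 */
		if (range->lockless)
			return arch_age_gfn_lockless(kvm, range);

		/* Otherwise common code holds mmu_lock (KVM_MMU_LOCK) here. */
		return arch_age_gfn_locked(kvm, range);
	}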