From: David Matlack <dmatlack@google.com>
To: Vipin Sharma <vipinsh@google.com>
Cc: Ben Gardon <bgardon@google.com>,
seanjc@google.com, pbonzini@redhat.com, kvm@vger.kernel.org,
linux-kernel@vger.kernel.org
Subject: Re: [Patch v3 1/9] KVM: x86/mmu: Repurpose KVM MMU shrinker to purge shadow page caches
Date: Thu, 29 Dec 2022 13:15:44 -0800 [thread overview]
Message-ID: <Y64DgHuPd8oTPSm5@google.com> (raw)
In-Reply-To: <CAHVum0efHuRmER-whXnwHYMsBLBcb-mgDu+uogCJbMhz2e0_MA@mail.gmail.com>
On Wed, Dec 28, 2022 at 02:07:49PM -0800, Vipin Sharma wrote:
> On Tue, Dec 27, 2022 at 10:37 AM Ben Gardon <bgardon@google.com> wrote:
> > On Wed, Dec 21, 2022 at 6:35 PM Vipin Sharma <vipinsh@google.com> wrote:
> > >
> > > Tested this change by running dirty_log_perf_test while dropping cache
> > > via "echo 2 > /proc/sys/vm/drop_caches" at 1 second interval
> > > continuously. There were WARN_ON(!mc->nobjs) messages printed in kernel
> > > logs from kvm_mmu_memory_cache_alloc(), which is expected.
> >
> > Oh, that's not a good thing. I don't think we want to be hitting those
> > warnings. For one, kernel warnings should not be expected behavior,
> > probably for many reasons, but at least because Syzbot will find it.
> > In this particular case, we don't want to hit that because in that
> > case we'll try to do a GFP_ATOMIC, which can fail, and if it fails,
> > we'll BUG:
> >
> > void *kvm_mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc)
> > {
> > 	void *p;
> >
> > 	if (WARN_ON(!mc->nobjs))
> > 		p = mmu_memory_cache_alloc_obj(mc, GFP_ATOMIC | __GFP_ACCOUNT);
> > 	else
> > 		p = mc->objects[--mc->nobjs];
> > 	BUG_ON(!p);
> > 	return p;
> > }
> >
> > Perhaps the risk of actually panicking is small, but it probably
> > indicates that we need better error handling around failed allocations
> > from the cache.
> > Or, the slightly less elegant approach might be to just hold the cache
> > lock around the cache topup and use of pages from the cache, but
> > adding better error handling would probably be cleaner.
>
> I was counting on the fact that the shrinker will ideally run only in
> extreme cases, i.e. when the host is running low on memory. So, this
> WARN_ON will only rarely be hit. I was not aware of Syzbot; it seems
> like it will be a concern if it does this kind of testing.
In an extreme low-memory situation, forcing vCPUs to do GFP_ATOMIC
allocations to handle page faults is risky. Plus, it's a waste of time to
free that memory since it's just going to get immediately reallocated.
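
For illustration, a minimal user-space sketch of the "better error
handling" Ben suggested (names like obj_cache and cache_alloc_or_fail are
hypothetical, not the actual KVM structures): instead of WARN_ON plus a
GFP_ATOMIC fallback that can BUG, an empty cache propagates failure so the
caller can bail out of the fault and retry after a re-topup:

```c
#include <assert.h>
#include <stddef.h>

/* User-space model of a per-vCPU object cache. */
struct obj_cache {
	void *objects[8];
	int nobjs;
};

/*
 * Error-propagating alternative to the WARN_ON + GFP_ATOMIC fallback:
 * if the cache is empty (e.g. the shrinker purged it), return NULL so
 * the caller can abort the page fault and retry after topping up,
 * instead of risking a BUG_ON on a failed atomic allocation.
 */
static void *cache_alloc_or_fail(struct obj_cache *mc)
{
	if (!mc->nobjs)
		return NULL;	/* caller retries after a topup */
	return mc->objects[--mc->nobjs];
}
```

The caller-side contract would be the usual KVM pattern: treat NULL as
-ENOMEM, drop back out of the fault path, and re-enter after refilling.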
>
> I thought about keeping a mutex, taking it during topup and releasing
> it after the whole operation is done, but I decided against it as the
> mutex would be held for a long time and might block the memory shrinker.
> I am not sure, though, if this is a valid concern.
Use mutex_trylock() to skip any vCPUs that are currently handling page
faults.
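
Roughly like this user-space sketch (hypothetical names;
pthread_mutex_trylock standing in for the kernel's mutex_trylock()): the
shrinker attempts each per-vCPU cache lock and simply skips caches whose
vCPU is mid-fault, so vCPUs never wait on the shrinker:

```c
#include <assert.h>
#include <pthread.h>

/* User-space model of a per-vCPU cache guarded by a mutex. */
struct vcpu_cache {
	pthread_mutex_t lock;
	int nobjs;
};

/*
 * Shrinker pass: try each cache's lock. A cache whose vCPU currently
 * holds the lock (i.e. is handling a page fault) is skipped rather than
 * waited on. Returns the number of objects purged.
 */
static int shrink_caches(struct vcpu_cache *caches, int n)
{
	int freed = 0;
	int i;

	for (i = 0; i < n; i++) {
		if (pthread_mutex_trylock(&caches[i].lock) != 0)
			continue;	/* vCPU busy: skip, don't block */
		freed += caches[i].nobjs;
		caches[i].nobjs = 0;
		pthread_mutex_unlock(&caches[i].lock);
	}
	return freed;
}
```

That keeps the shrinker non-blocking while still letting it reclaim from
every idle vCPU.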