From: Ackerley Tng <ackerleytng@google.com>
To: Patrick Roy <patrick.roy@linux.dev>,
David Hildenbrand <david@redhat.com>,
Will Deacon <will@kernel.org>
Cc: Dave Hansen <dave.hansen@intel.com>,
"Roy, Patrick" <roypat@amazon.co.uk>,
"pbonzini@redhat.com" <pbonzini@redhat.com>,
"corbet@lwn.net" <corbet@lwn.net>,
"maz@kernel.org" <maz@kernel.org>,
"oliver.upton@linux.dev" <oliver.upton@linux.dev>,
"joey.gouly@arm.com" <joey.gouly@arm.com>,
"suzuki.poulose@arm.com" <suzuki.poulose@arm.com>,
"yuzenghui@huawei.com" <yuzenghui@huawei.com>,
"catalin.marinas@arm.com" <catalin.marinas@arm.com>,
"tglx@linutronix.de" <tglx@linutronix.de>,
"mingo@redhat.com" <mingo@redhat.com>,
"bp@alien8.de" <bp@alien8.de>,
"dave.hansen@linux.intel.com" <dave.hansen@linux.intel.com>,
"x86@kernel.org" <x86@kernel.org>,
"hpa@zytor.com" <hpa@zytor.com>,
"luto@kernel.org" <luto@kernel.org>,
"peterz@infradead.org" <peterz@infradead.org>,
"willy@infradead.org" <willy@infradead.org>,
"akpm@linux-foundation.org" <akpm@linux-foundation.org>,
"lorenzo.stoakes@oracle.com" <lorenzo.stoakes@oracle.com>,
"Liam.Howlett@oracle.com" <Liam.Howlett@oracle.com>,
"vbabka@suse.cz" <vbabka@suse.cz>,
"rppt@kernel.org" <rppt@kernel.org>,
"surenb@google.com" <surenb@google.com>,
"mhocko@suse.com" <mhocko@suse.com>,
"song@kernel.org" <song@kernel.org>,
"jolsa@kernel.org" <jolsa@kernel.org>,
"ast@kernel.org" <ast@kernel.org>,
"daniel@iogearbox.net" <daniel@iogearbox.net>,
"andrii@kernel.org" <andrii@kernel.org>,
"martin.lau@linux.dev" <martin.lau@linux.dev>,
"eddyz87@gmail.com" <eddyz87@gmail.com>,
"yonghong.song@linux.dev" <yonghong.song@linux.dev>,
"john.fastabend@gmail.com" <john.fastabend@gmail.com>,
"kpsingh@kernel.org" <kpsingh@kernel.org>,
"sdf@fomichev.me" <sdf@fomichev.me>,
"haoluo@google.com" <haoluo@google.com>,
"jgg@ziepe.ca" <jgg@ziepe.ca>,
"jhubbard@nvidia.com" <jhubbard@nvidia.com>,
"peterx@redhat.com" <peterx@redhat.com>,
"jannh@google.com" <jannh@google.com>,
"pfalcato@suse.de" <pfalcato@suse.de>,
"shuah@kernel.org" <shuah@kernel.org>,
"seanjc@google.com" <seanjc@google.com>,
"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
"linux-doc@vger.kernel.org" <linux-doc@vger.kernel.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"linux-arm-kernel@lists.infradead.org"
<linux-arm-kernel@lists.infradead.org>,
"kvmarm@lists.linux.dev" <kvmarm@lists.linux.dev>,
"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
"linux-mm@kvack.org" <linux-mm@kvack.org>,
"bpf@vger.kernel.org" <bpf@vger.kernel.org>,
"linux-kselftest@vger.kernel.org"
<linux-kselftest@vger.kernel.org>,
"Cali, Marco" <xmarcalx@amazon.co.uk>,
"Kalyazin, Nikita" <kalyazin@amazon.co.uk>,
"Thomson, Jack" <jackabt@amazon.co.uk>,
"derekmn@amazon.co.uk" <derekmn@amazon.co.uk>,
"tabba@google.com" <tabba@google.com>
Subject: Re: [PATCH v7 06/12] KVM: guest_memfd: add module param for disabling TLB flushing
Date: Fri, 07 Nov 2025 07:29:35 -0800
Message-ID: <diqzqzu9dfog.fsf@google.com>
In-Reply-To: <d25340e3-2017-4614-a472-c5c7244c7ce4@linux.dev>
Patrick Roy <patrick.roy@linux.dev> writes:
> Hey all,
>
> sorry it took me a while to get back to this, turns out moving
> internationally is more time consuming than I expected.
>
> On Mon, 2025-09-29 at 12:20 +0200, David Hildenbrand wrote:
>> On 27.09.25 09:38, Patrick Roy wrote:
>>> On Fri, 2025-09-26 at 21:09 +0100, David Hildenbrand wrote:
>>>> On 26.09.25 12:53, Will Deacon wrote:
>>>>> On Fri, Sep 26, 2025 at 10:46:15AM +0100, Patrick Roy wrote:
>>>>>> On Thu, 2025-09-25 at 21:13 +0100, David Hildenbrand wrote:
>>>>>>> On 25.09.25 21:59, Dave Hansen wrote:
>>>>>>>> On 9/25/25 12:20, David Hildenbrand wrote:
>>>>>>>>> On 25.09.25 20:27, Dave Hansen wrote:
>>>>>>>>>> On 9/24/25 08:22, Roy, Patrick wrote:
>>>>>>>>>>> Add an option to not perform TLB flushes after direct map manipulations.
>>>>>>>>>>
>>>>>>>>>> I'd really prefer this be left out for now. It's a massive can of worms.
>>>>>>>>>> Let's agree on something that works and has well-defined behavior before
>>>>>>>>>> we go breaking it on purpose.
>>>>>>>>>
>>>>>>>>> May I ask what the big concern here is?
>>>>>>>>
>>>>>>>> It's not a _big_ concern.
>>>>>>>
>>>>>>> Oh, I read "can of worms" and thought there was something seriously problematic :)
>>>>>>>
>>>>>>>> I just think we want to start on something
>>>>>>>> like this as simple, secure, and deterministic as possible.
>>>>>>>
>>>>>>> Yes, I agree. And it should be the default. Less secure would have to be opt-in and documented thoroughly.
>>>>>>
>>>>>> Yes, I am definitely happy to have the 100% secure behavior be the
>>>>>> default, and the skipping of TLB flushes be an opt-in, with thorough
>>>>>> documentation!
>>>>>>
>>>>>> But I would like to include the "skip tlb flushes" option as part of
>>>>>> this patch series straight away, because as I was alluding to in the
>>>>>> commit message, with TLB flushes this is not usable for Firecracker for
>>>>>> performance reasons :(
>>>>>
>>>>> I really don't want that option for arm64. If we're going to bother
>>>>> unmapping from the linear map, we should invalidate the TLB.
>>>>
>>>> Reading "TLB flushes result in a up to 40x elongation of page faults in
>>>> guest_memfd (scaling with the number of CPU cores), or a 5x elongation
>>>> of memory population,", I can understand why one would want that optimization :)
>>>>
>>>> @Patrick, couldn't we use fallocate() to preallocate memory and batch the TLB flush within such an operation?
>>>>
>>>> That is, we wouldn't flush after each individual direct-map modification but after multiple ones that are part of a single operation, like fallocate() of a larger range.
>>>>
>>>> Likely wouldn't make all use cases happy.
>>>>
>>>
>>> For Firecracker, we rely a lot on not preallocating _all_ VM memory, and
>>> trying to ensure only the actual "working set" of a VM is faulted in (we
>>> pack a lot more VMs onto a physical host than there is actual physical
>>> memory available). For VMs that are restored from a snapshot, we know
>>> pretty well what memory needs to be faulted in (that's where @Nikita's
>>> write syscall comes in), so there we could try such an optimization. But
>>> for everything else we very much rely on the on-demand nature of guest
>>> memory allocation (and hence direct map removal). And even right now,
>>> the long pole performance-wise is these on-demand faults, so really, we
>>> don't want them to become even slower :(
>>
>> Makes sense. I guess even without support for large folios one could implement a kind of "fault-around": for example, on access to one addr, allocate+prepare all pages in the same 2M chunk, flushing the TLB only once after adjusting all the direct map entries.
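
Something like the below is how I read the fault-around idea (very rough
sketch; gmem_alloc_folio() and the rest of the plumbing are made up for
illustration, only set_direct_map_valid_noflush() and
flush_tlb_kernel_range() are the interfaces this series already touches):

/*
 * Illustrative only: on a fault at @index, populate the whole 2M-aligned
 * chunk of the file and do a single TLB flush for it, instead of one
 * flush per 4K page.
 */
#define GMEM_FAULT_AROUND_PAGES (SZ_2M / PAGE_SIZE)

static int gmem_populate_chunk(struct inode *inode, pgoff_t index)
{
        pgoff_t start = ALIGN_DOWN(index, GMEM_FAULT_AROUND_PAGES);
        unsigned long flush_start = ULONG_MAX, flush_end = 0;
        pgoff_t i;

        for (i = start; i < start + GMEM_FAULT_AROUND_PAGES; i++) {
                /* Hypothetical helper: allocate + prepare the folio at index i. */
                struct folio *folio = gmem_alloc_folio(inode, i);
                unsigned long addr;

                if (IS_ERR(folio))
                        /* Simplified: a real version would still flush what was zapped. */
                        return PTR_ERR(folio);

                addr = (unsigned long)folio_address(folio);
                /* Zap the direct map entry but defer the TLB flush (errors elided). */
                set_direct_map_valid_noflush(folio_page(folio, 0), 1, false);
                flush_start = min(flush_start, addr);
                flush_end = max(flush_end, addr + folio_size(folio));
        }

        /*
         * One flush covering all 512 pages. As Patrick notes below, the pages
         * are not physically contiguous, so this covering range can be much
         * larger than 2M; flush_tlb_all() might be the simpler choice here.
         */
        flush_tlb_kernel_range(flush_start, flush_end);
        return 0;
}
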
>>
>>>
>>> Also, can we really batch multiple TLB flushes as you suggest? Even if
>>> pages are at consecutive indices in guest_memfd, they're not guaranteed
>>> to be contiguous physically, e.g. we couldn't just coalesce multiple
>>> TLB flushes into a single TLB flush of a larger range.
>>
>> Well, there is the option of just flushing the complete TLB of course :) When trying to flush a range you would indeed run into the problem of flushing an ever-growing range.
>
> In the last guest_memfd upstream call (over a week ago now), we've
> discussed the option of batching and deferring TLB flushes, while
> providing a sort of "deadline" at which a TLB flush will
> deterministically be done. E.g. guest_memfd would keep a counter of how
> many pages got zapped from the direct map, and do a flush of a range that
> contains all zapped pages every 512 allocated pages (and to ensure the
> flushes even happen in a timely manner if no allocations happen for a
> long time, also every, say, 5 seconds or something like that). Would
> that work for everyone? I briefly tested the performance of
> batch-flushes with secretmem in QEMU, and it's within 30% of the "no
> TLB flushes at all" solution in a simple benchmark that just memsets
> 2GiB of memory.
>
> I think something like this, together with the batch-flushing at the end
> of fallocate() / write() as David suggested above, should work for
> Firecracker.
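
For the counter/deadline scheme, roughly what I picture is the below
(illustrative only; none of these names exist, where the bookkeeping would
live in guest_memfd is an open question, and init/teardown of the mutex,
work item and start = ULONG_MAX is omitted):

#define GMEM_FLUSH_BATCH    512
#define GMEM_FLUSH_DEADLINE (5 * HZ)

struct gmem_tlb_batch {
        struct mutex lock;
        unsigned long nr_zapped;   /* pages zapped since the last flush */
        unsigned long start, end;  /* direct map range covering them */
        struct delayed_work work;  /* deadline flush */
};

static void gmem_tlb_batch_flush(struct gmem_tlb_batch *b)
{
        if (!b->nr_zapped)
                return;
        /* Range covering all zapped pages; could also be flush_tlb_all(). */
        flush_tlb_kernel_range(b->start, b->end);
        b->nr_zapped = 0;
        b->start = ULONG_MAX;
        b->end = 0;
}

/* Called after zapping a just-allocated page from the direct map. */
static void gmem_tlb_batch_add(struct gmem_tlb_batch *b, struct page *page)
{
        unsigned long addr = (unsigned long)page_address(page);

        mutex_lock(&b->lock);
        b->start = min(b->start, addr);
        b->end = max(b->end, addr + PAGE_SIZE);
        if (++b->nr_zapped >= GMEM_FLUSH_BATCH)
                gmem_tlb_batch_flush(b);
        else
                /* Deadline runs from the first unflushed zap. */
                schedule_delayed_work(&b->work, GMEM_FLUSH_DEADLINE);
        mutex_unlock(&b->lock);
}

static void gmem_tlb_batch_workfn(struct work_struct *work)
{
        struct gmem_tlb_batch *b = container_of(to_delayed_work(work),
                                                struct gmem_tlb_batch, work);

        mutex_lock(&b->lock);
        gmem_tlb_batch_flush(b);
        mutex_unlock(&b->lock);
}

That way there is a deterministic bound on how long stale direct map TLB
entries can stick around: 512 further allocations or the deadline,
whichever comes first.
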
>
>>> There are probably other things we can try. Backing guest_memfd with
>>> hugepages would reduce the number of TLB flushes by 512x (although not all
>>> users of Firecracker at Amazon [can] use hugepages).
>>
>> Right.
>>
>>>
>>> And I do still wonder if it's possible to have "async TLB flushes" where
>>> we simply don't wait for the IPI (x86 terminology, not sure what the
>>> mechanism on arm64 is). Looking at
>>> smp_call_function_many_cond()/invlpgb_kernel_range_flush() on x86, it
>>> seems so? Although it seems like on ARM it's actually just handled by a
>>> single instruction (TLBI) and not some inter-processor communication
>>> thingy. Maybe there's a variant that's faster / better for this use case?
>>
>> Right, some architectures (and IIRC also x86 with some extension) are able to flush remote TLBs without IPIs.
>>
>> Doing a quick search, there seems to be some research on async TLB flushing, e.g., [1].
>>
>> In the context here, I wonder whether an async TLB flush would be
>> significantly better than not doing an explicit TLB flush: in both
>> cases, it's not really deterministic when the relevant TLB entries
>> will vanish: with the async variant it might happen faster on average
>> I guess.
>
> I actually did end up playing around with this a while ago, and it made
> things slightly better performance-wise, but it was still too bad to be
> useful :(
>
Would it help if we added a guest_memfd ioctl that lets userspace zap a
range from the direct map, so that TLB flushes can be batched?

Could usage be something like:

0. Create guest_memfd with GUEST_MEMFD_FLAG_NO_DIRECT_MAP.
1. write() entire VM memory to guest_memfd.
2. ioctl(guest_memfd, KVM_GUEST_MEMFD_ZAP_DIRECT_MAP, { offset, len })
3. vcpu_run()

This way, we could zap and flush the TLB once for the entire range of
{ offset, len } instead of once per fault.
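
To make that concrete, a rough userspace sketch: GUEST_MEMFD_FLAG_NO_DIRECT_MAP
is the flag from this series, but KVM_GUEST_MEMFD_ZAP_DIRECT_MAP, its argument
struct and the ioctl number are made up here, and step 1 assumes guest_memfd
gains write()/pwrite() support as in Nikita's series.

#include <linux/kvm.h>          /* KVMIO */
#include <stdint.h>
#include <sys/ioctl.h>
#include <unistd.h>

/* Hypothetical ioctl + argument struct, purely for illustration. */
struct kvm_gmem_zap_direct_map {
        uint64_t offset;
        uint64_t len;
};
#define KVM_GUEST_MEMFD_ZAP_DIRECT_MAP \
        _IOW(KVMIO, 0xd8, struct kvm_gmem_zap_direct_map)

/*
 * gmem_fd was created with KVM_CREATE_GUEST_MEMFD and
 * GUEST_MEMFD_FLAG_NO_DIRECT_MAP set (step 0 above).
 */
static int populate_then_zap(int gmem_fd, const void *image, size_t size)
{
        struct kvm_gmem_zap_direct_map zap = { .offset = 0, .len = size };

        /* Step 1: populate guest memory up front, e.g. from a snapshot. */
        if (pwrite(gmem_fd, image, size, 0) != (ssize_t)size)
                return -1;

        /*
         * Step 2: zap the direct map for the whole populated range; the
         * kernel can then flush the TLB once for the range rather than
         * once per faulted page.
         */
        if (ioctl(gmem_fd, KVM_GUEST_MEMFD_ZAP_DIRECT_MAP, &zap))
                return -1;

        /* Step 3: only now start KVM_RUN on the vcpus. */
        return 0;
}
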
Not-yet-allocated folios would still get zapped (and flushed) once per
fault though.

Maybe this won't help much if the intention is to allow on-demand
loading of memory, since the faults will come to guest_memfd on a
per-folio basis.
>>
>> [1] https://cs.yale.edu/homes/abhishek/kumar-taco20.pdf
>>
>
> Best,
> Patrick