From: "Edgecombe, Rick P" <rick.p.edgecombe@intel.com>
To: "ackerleytng@google.com" <ackerleytng@google.com>,
"Zhao, Yan Y" <yan.y.zhao@intel.com>
Cc: "Du, Fan" <fan.du@intel.com>,
"Li, Xiaoyao" <xiaoyao.li@intel.com>,
"Shutemov, Kirill" <kirill.shutemov@intel.com>,
"Hansen, Dave" <dave.hansen@intel.com>,
"david@redhat.com" <david@redhat.com>,
"Li, Zhiquan1" <zhiquan1.li@intel.com>,
"vbabka@suse.cz" <vbabka@suse.cz>,
"tabba@google.com" <tabba@google.com>,
"thomas.lendacky@amd.com" <thomas.lendacky@amd.com>,
"michael.roth@amd.com" <michael.roth@amd.com>,
"seanjc@google.com" <seanjc@google.com>,
"Weiny, Ira" <ira.weiny@intel.com>,
"Peng, Chao P" <chao.p.peng@intel.com>,
"binbin.wu@linux.intel.com" <binbin.wu@linux.intel.com>,
"Yamahata, Isaku" <isaku.yamahata@intel.com>,
"pbonzini@redhat.com" <pbonzini@redhat.com>,
"quic_eberman@quicinc.com" <quic_eberman@quicinc.com>,
"Annapurve, Vishal" <vannapurve@google.com>,
"jroedel@suse.de" <jroedel@suse.de>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"Miao, Jun" <jun.miao@intel.com>,
"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
"pgonda@google.com" <pgonda@google.com>,
"x86@kernel.org" <x86@kernel.org>
Subject: Re: [RFC PATCH 08/21] KVM: TDX: Increase/decrease folio ref for huge pages
Date: Wed, 2 Jul 2025 23:51:52 +0000
Message-ID: <a9affa03c7cdc8109d0ed6b5ca30ec69269e2f34.camel@intel.com>
In-Reply-To: <diqz7c0q48g7.fsf@ackerleytng-ctop.c.googlers.com>
On Wed, 2025-07-02 at 13:57 -0700, Ackerley Tng wrote:
> >
> > If a poisoned page continued to be used, it's a bit weird, no?
>
> Do you mean "continued to be used" in the sense that it is present in a
> filemap and belongs to a (guest_memfd) inode?
I mean any way in which it might get read or written to again.
>
> A poisoned page is not faulted in anywhere, and in that sense the page
> is not "used". In the case of regular poisoning as in a call to
> memory_failure(), the page is unmapped from the page tables. If that
> page belongs to guest_memfd, in today's code [2], guest_memfd
> intentionally does not truncate it from the filemap. For guest_memfd,
> handling the HWpoison at fault time is by design; keeping it present in
> the filemap is by design.
I thought I read that you would allow it to be re-used. I see that the code
already checks for poison in the kvm_gmem_get_pfn() path and the mmap() path. So
it will just sit in the fd and not be handed out again. I think it's ok. Well,
as long as conversion to shared doesn't involve zeroing...?
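FWIW, the kind of check I mean looks roughly like the sketch below (modeled on
the existing poison check in the kvm_gmem_get_pfn() path; the function name is
made up for illustration and error handling is trimmed):

        static struct folio *kvm_gmem_get_folio_checked(struct inode *inode,
                                                        pgoff_t index)
        {
                struct folio *folio = filemap_grab_folio(inode->i_mapping, index);

                if (IS_ERR(folio))
                        return folio;

                /*
                 * The poisoned folio stays in the filemap, but is never
                 * handed out to the guest or to userspace again.
                 */
                if (folio_test_hwpoison(folio)) {
                        folio_unlock(folio);
                        folio_put(folio);
                        return ERR_PTR(-EHWPOISON);
                }
                return folio;
        }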
>
> In the case of TDX unmap failures leading to HWpoison, the only place it
> may remain mapped is in the Secure-EPTs. I use "may" because I'm not
> sure about how badly the unmap failed. But either way, the TD gets
> bugged, all vCPUs of the TD are stopped, so the HWpoison-ed page is no
> longer "used".
>
> [2]
> https://github.com/torvalds/linux/blob/b4911fb0b060899e4eebca0151eb56deb86921ec/virt/kvm/guest_memfd.c#L334
Yes, I saw that. It looks like special error case treatment for the state we are
setting up.
>
> > It could take an #MC for another reason from userspace and the handling
> > code would see the page flag is already set. If it doesn't already trip up
> > some MM code somewhere, it might put undue burden on the memory failure
> > code to have to expect repeated poisoning of the same memory.
> >
>
> If it does take another #MC and go to memory_failure(), memory_failure()
> already checks for the HWpoison flag being set [3]. This is handled by
> killing the process. There is similar handling for a HugeTLB
> folio. We're not introducing anything new by using HWpoison; we're
> buying into the HWpoison framework, which already handles seeing a
> HWpoison when handling a poison.
Do you see another user that sets the poison flag manually, as proposed here?
(i.e. not through the memory failure handlers)
>
> [3]
> https://github.com/torvalds/linux/blob/b4911fb0b060899e4eebca0151eb56deb86921ec/mm/memory-failure.c#L2270
>
> > >
> > > > What about a kvm_gmem_buggy_cleanup() instead of the system wide one.
> > > > KVM calls it and then proceeds to bug the TD only from the KVM side.
> > > > It's not as safe for the system, because who knows what a buggy TDX
> > > > module could do. But TDX module could also be buggy without the kernel
> > > > catching wind of it.
> > > >
> > > > Having a single callback to basically bug the fd would solve the atomic
> > > > context issue. Then guestmemfd could dump the entire fd into
> > > > memory_failure() instead of returning the pages. And developers could
> > > > respond by fixing the bug.
> > > >
> > >
> > > This could work too.
> > >
> > > I'm in favor of buying into the HWpoison system though, since we're
> > > quite sure this is fair use of HWpoison.
> >
> > Do you mean manually setting the poison flag, or calling into
> > memory_failure(), and friends?
>
> I mean manually setting the poison flag.
>
> * If regular 4K page, set the flag.
> * If THP page (not (yet) supported by guest_memfd), set the poison flag on
>   the specific subpage causing the error, and in addition set THP's
>   has_hwpoison flag.
> * If HugeTLB page, call folio_set_hugetlb_hwpoison() on the subpage.
>
> This is already the process in memory_failure() and perhaps some
> refactoring could be done.
>
> I think calling memory_failure() would do too much, since in addition to
> setting the flag, memory_failure() also sometimes does freeing and may
> kill processes, and triggers the users of the page to further handle the
> HWpoison.
It definitely seems like there is more involved than setting the flag. That
means for our case we should try to understand what we are skipping and how it
fits with the rest of the kernel. Is any code that checks for poison assuming
that the memory_failure() work has already been done? Stuff like that.
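To make sure we're picturing the same thing, I read "manually setting the
poison flag" as roughly the sketch below (untested; gmem_mark_hwpoison() is a
made-up name, the helpers are the ones Ackerley listed plus the usual page-flag
accessors, and it deliberately skips the accounting/unmapping/killing that
memory_failure() would also do, which is exactly the delta I'd like to
understand):

        static void gmem_mark_hwpoison(struct folio *folio, struct page *page)
        {
                if (folio_test_hugetlb(folio)) {
                        /* Record which subpage is bad (return code ignored here). */
                        folio_set_hugetlb_hwpoison(folio, page);
                } else if (folio_test_large(folio)) {
                        /* THP: flag the exact subpage plus the folio-wide marker. */
                        SetPageHWPoison(page);
                        folio_set_has_hwpoisoned(folio);
                } else {
                        SetPageHWPoison(page);
                }
        }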
>
> > If we set them manually, we need to make sure that it does not have
> > side effects on the machine check handler. It seems risky/messy to me. But
> > Kirill didn't seem worried.
> >
>
> I believe the memory_failure() is called from the machine check handler:
>
> DEFINE_IDTENTRY_MCE(exc_machine_check)
> -> exc_machine_check_kernel()
> -> do_machine_check()
> -> kill_me_now() or kill_me_maybe()
> -> memory_failure()
>
> (I might have quoted just one of the paths and I'll have to look into it
> more.)
It looked that way to me too. But it works from other contexts. See
MADV_HWPOISON (which is for testing).
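e.g. something like the following from a privileged process drives
memory_failure() from plain syscall context, with no #MC involved (needs
CONFIG_MEMORY_FAILURE and CAP_SYS_ADMIN; quick sketch only):

        #include <sys/mman.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
                long psz = sysconf(_SC_PAGESIZE);
                char *p = mmap(NULL, psz, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

                if (p == MAP_FAILED)
                        return 1;
                p[0] = 1;       /* fault the page in first */

                /* Injects poison on the backing page via memory_failure(). */
                if (madvise(p, psz, MADV_HWPOISON))
                        perror("madvise(MADV_HWPOISON)");
                return 0;
        }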
>
> For now, IIUC setting the poison flag is a subset of memory_failure(), which
> is a subset of what the machine check handler does.
>
> memory_failure() handles an already poisoned page, so I don't see any
> side effects.
>
> I'm happy that Kirill didn't seem worried :) Rick, let me know if you
> see any specific risks.
>
> > Maybe we could bring the poison page flag up to DavidH and see if there is
> > any concern before going down this path too far?
> >
>
> I can do that. David's cc-ed on this email, and I hope to get a chance
> to talk about handling HWpoison (generally, not TDX specifically) at the
> guest_memfd bi-weekly upstream call on 2025-07-10 so I can bring this up
> too.
Ok sounds good. Should we just continue the discussion there? I can try to
attend.
>
> > >
> > > Are you saying kvm_gmem_buggy_cleanup() will just set the HWpoison flag
> > > on the parts of the folios in trouble?
> >
> > I was saying kvm_gmem_buggy_cleanup() can set a bool on the fd, similar to
> > VM_BUG_ON() setting vm_dead.
>
> Setting a bool on the fd is a possible option too. Comparing an
> inode-level boolean and HWpoison, I still prefer HWpoison because
>
> 1. HWpoison gives us more information about which (sub)folio was
> poisoned. We can think of the bool on the fd as an fd-wide
> poisoning. If we don't know which subpage has an error, we're forced
> to leak the entire fd when the inode is released, which could be a
> huge amount of memory leaked.
> 2. HWpoison is already checked on faults, so there is no need to add an
> extra check on a bool
> 3. For HugeTLB, HWpoison will have to be summarized/itemized on merge/split
>    to handle regular non-TDX related HWpoisons, so no additional code there.
>
> > After an invalidate, if gmem sees this, it needs to assume everything
> > failed, and invalidate everything and poison all guest memory. The point
> > was to have the simplest possible handling for a rare error.
>
> I agree a bool will probably result in fewer lines of code being changed
> and could be a fair first cut, but I feel like we would very quickly
> need another patch series to get more granular information and not have
> to leak an entire fd worth of memory.
We will only leak an entire VM's worth of memory if there is a bug, and I'm not
sure what form that bug would take. The kernel doesn't usually have a lot of
defensive code to handle bugs elsewhere, unless it's to help debugging. And
especially for other platform software (BIOS, etc.), the kernel should try to
stay out of the job of maintaining code to work around unfixed bugs. Here we
are working around *potential* bugs.
So another *possible* solution is to expect the TDX module/KVM to work. Kill
the TD, return success to the invalidation, and hope that it doesn't do
anything to those zombie mappings. It will likely work, and is probably much
more likely to work than some other warning cases in the kernel. As far as
debugging goes, if strange crashes are observed after a splat, it can be a good
hint.
Unless Yan has been holding on to some specific case to worry about that makes
this error condition a more expected state. That could change things.
>
> Along these lines, Yan seems to prefer setting HWpoison on the entire
> folio without going into the details of the exact subfolios being
> poisoned. I think this is a possible in-between solution that doesn't
> require leaking the entire fd worth of memory, but it still leaks more
> than just where the actual error happened.
>
> I'm willing to go with just setting HWpoison on the entire large folio
> as a first cut and leak more memory than necessary (because if we don't
> know which subpage it is, we are forced to leak everything to be safe).
Leaking more memory than necessary in a bug case seems totally ok to me.
>
> However, this patch series needs a large page provider in guest_memfd, and
> will only land either after THP or HugeTLB support lands in
> guest_memfd.
>
> For now if you're testing on guest_memfd+HugeTLB,
> folio_set_hugetlb_hwpoison() already exists, why not use it?
>
> > Although it's only a proposal. The TDX emergency shutdown option may be
> > simpler still. But killing all TDs is not ideal. So I thought we could at
> > least consider other options.
> >
> > If we have a solution where TDX needs to do something complicated because
> > of its specialness, it may get NAKed.
>
> Using HWpoison is generic, since guest_memfd needs to handle HWpoison
> for regular memory errors anyway. Even if it is not a final solution, it
> should be good enough, if not for this patch series to merge, at least
> for the next RFC of this patch series. :)
Yes, maybe. If we have a normal, easy, non-imposing solution for handling the
error then I won't object.