From: "Edgecombe, Rick P" <rick.p.edgecombe@intel.com>
To: "Zhao, Yan Y" <yan.y.zhao@intel.com>
Cc: "quic_eberman@quicinc.com" <quic_eberman@quicinc.com>,
"Li, Xiaoyao" <xiaoyao.li@intel.com>,
"Huang, Kai" <kai.huang@intel.com>, "Du, Fan" <fan.du@intel.com>,
"Hansen, Dave" <dave.hansen@intel.com>,
"david@redhat.com" <david@redhat.com>,
"thomas.lendacky@amd.com" <thomas.lendacky@amd.com>,
"vbabka@suse.cz" <vbabka@suse.cz>,
"Li, Zhiquan1" <zhiquan1.li@intel.com>,
"Shutemov, Kirill" <kirill.shutemov@intel.com>,
"michael.roth@amd.com" <michael.roth@amd.com>,
"seanjc@google.com" <seanjc@google.com>,
"Weiny, Ira" <ira.weiny@intel.com>,
"Peng, Chao P" <chao.p.peng@intel.com>,
"pbonzini@redhat.com" <pbonzini@redhat.com>,
"Yamahata, Isaku" <isaku.yamahata@intel.com>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"binbin.wu@linux.intel.com" <binbin.wu@linux.intel.com>,
"ackerleytng@google.com" <ackerleytng@google.com>,
"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
"Annapurve, Vishal" <vannapurve@google.com>,
"tabba@google.com" <tabba@google.com>,
"jroedel@suse.de" <jroedel@suse.de>,
"Miao, Jun" <jun.miao@intel.com>,
"pgonda@google.com" <pgonda@google.com>,
"x86@kernel.org" <x86@kernel.org>
Subject: Re: [RFC PATCH 09/21] KVM: TDX: Enable 2MB mapping size after TD is RUNNABLE
Date: Wed, 2 Jul 2025 00:18:48 +0000
Message-ID: <f13239faa54062feb9937fe9fc6d087ffcca7ac6.camel@intel.com>
In-Reply-To: <aGR5ZYpn3xyfOZhS@yzhao56-desk.sh.intel.com>
On Wed, 2025-07-02 at 08:12 +0800, Yan Zhao wrote:
> > Then (3) could be skipped in the case of ability to demote under read lock?
> >
> > I noticed that the other lpage_info updaters took the mmu write lock, and I
> > wasn't sure why. We shouldn't take a lock that we don't actually need just
> > for safety margin or to copy other code.
> Using the write mmu_lock is for a reason.
>
> In the 3 steps:
> 1. set lpage_info
> 2. demote if needed
> 3. go to fault handler
>
> Step 2 requires holding the write mmu_lock before invoking
> kvm_split_boundary_leafs(). The write mmu_lock may also be dropped and
> re-acquired inside kvm_split_boundary_leafs(), e.g. for memory allocation.
>
> If step 1 is done with the read mmu_lock, other vCPUs could still fault in
> at 2MB level after the demote in step 2.
> Luckily, current TDX doesn't support promotion.
> But we can avoid wading into this complex situation by holding the write
> mmu_lock in step 1.
I don't think "some code might race in the future" is a good reason to take
the write lock.
>
> > > > For most accept cases, we could fault in the PTEs on the read lock.
> > > > And in the future we could
> > >
> > > The actual mapping at 4KB level is still done with the read mmu_lock in
> > > __vmx_handle_ept_violation().
> > >
> > > > have a demote that could work under the read lock, as we talked. So
> > > > kvm_split_boundary_leafs() could often be unneeded, or work under the
> > > > read lock when needed.
> > > Could we leave the "demote under read lock" as a future optimization?
> >
> > We could add it to the list. If we have a TDX module that supports demote
> > with a single SEAMCALL, then we don't have the rollback problem. The
> > optimization could utilize that. That said, we should focus on the
> > optimizations that make the biggest difference to real TDs. Your data
> > suggests this might not be the case today.
> Ok.
>
> > > > What is the problem in hugepage_set_guest_inhibit() that requires the
> > > > write lock?
> > > As above, to avoid the other vCPUs reading a stale mapping level and
> > > splitting under the read mmu_lock.
> >
> > We need the mmu write lock for demote, but as long as the order is:
> > 1. set lpage_info
> > 2. demote if needed
> > 3. go to fault handler
> >
> > Then (3) should have what it needs even if another fault races (1).
> See the above comment for why we need to hold write mmu_lock for 1.
>
> Besides, as we need the write mmu_lock anyway for 2 (i.e. hold the write
> mmu_lock before walking the SPTEs to check if there's any existing mapping),
> I don't see any performance impact from holding the write mmu_lock for 1.
It's a maintainability problem too. Someday someone may want to remove it and
scratch their head wondering what race they are missing.
>
>
> > > As guest_inhibit is set one-way, we could test it using
> > > hugepage_test_guest_inhibit() without holding the lock. The chance of
> > > holding the write mmu_lock for hugepage_set_guest_inhibit() is then
> > > greatly reduced (in my testing, 11 times during VM boot).
> > >
> > > > But in any case, it seems like we have *a* solution here. It doesn't
> > > > seem like there are any big downsides. Should we close it?
> > > I think it's good, as long as Sean doesn't disagree :)
> >
> > He seemed onboard. Let's close it. We can even discuss lpage_info update
> > locking on v2.
> Ok. I'll use write mmu_lock for updating lpage_info in v2 first.
Specifically, why do the other lpage_info updating functions take the mmu
write lock? Are you sure there is no other reason?