linux-perf-users.vger.kernel.org archive mirror
From: "Mi, Dapeng" <dapeng1.mi@linux.intel.com>
To: Sean Christopherson <seanjc@google.com>
Cc: Zide Chen <zide.chen@intel.com>,
	Mingwei Zhang <mizhang@google.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Xiong Zhang <xiong.y.zhang@intel.com>,
	Kan Liang <kan.liang@intel.com>,
	Zhenyu Wang <zhenyuw@linux.intel.com>,
	Manali Shukla <manali.shukla@amd.com>,
	Sandipan Das <sandipan.das@amd.com>,
	Jim Mattson <jmattson@google.com>,
	Stephane Eranian <eranian@google.com>,
	Ian Rogers <irogers@google.com>,
	Namhyung Kim <namhyung@kernel.org>,
	gce-passthrou-pmu-dev@google.com,
	Samantha Alt <samantha.alt@intel.com>,
	Zhiyuan Lv <zhiyuan.lv@intel.com>,
	Yanfei Xu <yanfei.xu@intel.com>,
	Like Xu <like.xu.linux@gmail.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Raghavendra Rao Ananta <rananta@google.com>,
	kvm@vger.kernel.org, linux-perf-users@vger.kernel.org
Subject: Re: [RFC PATCH v3 37/58] KVM: x86/pmu: Switch IA32_PERF_GLOBAL_CTRL at VM boundary
Date: Tue, 19 Nov 2024 13:20:47 +0800	[thread overview]
Message-ID: <67013550-9739-4943-812f-4ba6f01e4fb4@linux.intel.com> (raw)
In-Reply-To: <Zzvt_fNw0U34I9bJ@google.com>


On 11/19/2024 9:46 AM, Sean Christopherson wrote:
> On Fri, Oct 25, 2024, Dapeng Mi wrote:
>> On 10/25/2024 4:26 AM, Chen, Zide wrote:
>>> On 7/31/2024 9:58 PM, Mingwei Zhang wrote:
>>>
>>>> @@ -7295,6 +7299,46 @@ static void atomic_switch_perf_msrs(struct vcpu_vmx *vmx)
>>>>  					msrs[i].host, false);
>>>>  }
>>>>  
>>>> +static void save_perf_global_ctrl_in_passthrough_pmu(struct vcpu_vmx *vmx)
>>>> +{
>>>> +	struct kvm_pmu *pmu = vcpu_to_pmu(&vmx->vcpu);
>>>> +	int i;
>>>> +
>>>> +	if (vm_exit_controls_get(vmx) & VM_EXIT_SAVE_IA32_PERF_GLOBAL_CTRL) {
>>>> +		pmu->global_ctrl = vmcs_read64(GUEST_IA32_PERF_GLOBAL_CTRL);
>>> As commented in patch 26, compared with MSR auto save/store area
>>> approach, the exec control way needs one relatively expensive VMCS read
>>> on every VM exit.
>> Anyway, let us have an evaluation and let the data speak.
> No, drop the unconditional VMREAD and VMWRITE, one way or another.  No benchmark
> will notice ~50 extra cycles, but if we write poor code for every feature, those
> 50 cycles per feature add up.
>
> Furthermore, checking to see if the CPU supports the load/save VMCS controls at
> runtime is beyond ridiculous.  The mediated PMU requires ***VERSION 4***; if a CPU
> supports PMU version 4 and doesn't support the VMCS controls, KVM should yell and
> disable the passthrough PMU.  The amount of complexity added here to support a
> CPU that should never exist is silly.
>
>>>> +static void load_perf_global_ctrl_in_passthrough_pmu(struct vcpu_vmx *vmx)
>>>> +{
>>>> +	struct kvm_pmu *pmu = vcpu_to_pmu(&vmx->vcpu);
>>>> +	u64 global_ctrl = pmu->global_ctrl;
>>>> +	int i;
>>>> +
>>>> +	if (vm_entry_controls_get(vmx) & VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL) {
>>>> +		vmcs_write64(GUEST_IA32_PERF_GLOBAL_CTRL, global_ctrl);
>>> ditto.
>>>
>>> We may optimize it by introducing a new flag pmu->global_ctrl_dirty and
>>> update GUEST_IA32_PERF_GLOBAL_CTRL only when it's needed.  But this
>>> makes the code even more complicated.
> I haven't looked at surrounding code too much, but I guarantee there's _zero_
> reason to eat a VMWRITE+VMREAD on every transition.  If, emphasis on *if*, KVM
> accesses PERF_GLOBAL_CTRL frequently, e.g. on most exits, then add a VCPU_EXREG_XXX
> and let KVM's caching infrastructure do the heavy lifting.  Don't reinvent the
> wheel.  But first, convince the world that KVM actually accesses the MSR somewhat
> frequently.

Sean, let me give more background here.

VMX supports two ways to save/restore the guest PERF_GLOBAL_CTRL MSR. One is
to use the VM_EXIT_SAVE_IA32_PERF_GLOBAL_CTRL/VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL
controls to save/restore the guest PERF_GLOBAL_CTRL value to/from the VMCS
guest-state area. The other is to use the VMCS MSR auto-load/store area.

Currently we prefer the former as long as the HW supports it, because the MSR
auto-load/store feature has a limitation: when there are multiple MSRs, the
MSRs are saved/restored in the order of MSR index. The SDM suggests that
PERF_GLOBAL_CTRL should always be written last, after all other PMU MSRs have
been programmed. So if there are PMU MSRs with a larger index than
PERF_GLOBAL_CTRL (which will be the case with arch-PerfMon v6+, where all PMU
MSRs in the new MSR range have a larger index than PERF_GLOBAL_CTRL), those
PMU MSRs would be restored after PERF_GLOBAL_CTRL, breaking the rule. Of
course, the MSR auto-load/store area would still work today, since
PERF_GLOBAL_CTRL is the only PMU MSR saved/restored in the current
implementation.

The PERF_GLOBAL_CTRL MSR can be accessed frequently by the guest perf/PMU
driver, e.g. on each task switch, so in the mediated vPMU proposal
PERF_GLOBAL_CTRL is passed through to reduce the performance impact when the
guest owns all of the PMU HW resources. If the guest owns only part of the
PMU HW resources, PERF_GLOBAL_CTRL is intercepted instead.

I suppose KVM doesn't need to access PERF_GLOBAL_CTRL in passthrough mode.
This piece of code was intended just for the PERF_GLOBAL_CTRL interception
case, but thinking about it again, it looks unnecessary to save/restore
PERF_GLOBAL_CTRL via the VMCS at all, since KVM always maintains the guest
PERF_GLOBAL_CTRL value anyway. This part of the code can be optimized.


>
> I'll do a more thorough review of this series in the coming weeks (days?).  I
> singled out this one because I happened to stumble across the code when digging
> into something else.
Thanks, looking forward to your further comments.



Thread overview: 183+ messages
2024-08-01  4:58 [RFC PATCH v3 00/58] Mediated Passthrough vPMU 3.0 for x86 Mingwei Zhang
2024-08-01  4:58 ` [RFC PATCH v3 01/58] sched/core: Move preempt_model_*() helpers from sched.h to preempt.h Mingwei Zhang
2024-08-01  4:58 ` [RFC PATCH v3 02/58] sched/core: Drop spinlocks on contention iff kernel is preemptible Mingwei Zhang
2024-08-01  4:58 ` [RFC PATCH v3 03/58] perf/x86: Do not set bit width for unavailable counters Mingwei Zhang
2024-08-01  4:58 ` [RFC PATCH v3 04/58] x86/msr: Define PerfCntrGlobalStatusSet register Mingwei Zhang
2024-08-01  4:58 ` [RFC PATCH v3 05/58] x86/msr: Introduce MSR_CORE_PERF_GLOBAL_STATUS_SET Mingwei Zhang
2024-08-01  4:58 ` [RFC PATCH v3 06/58] perf: Support get/put passthrough PMU interfaces Mingwei Zhang
2024-09-06 10:59   ` Mi, Dapeng
2024-09-06 15:40     ` Liang, Kan
2024-09-09 22:17       ` Namhyung Kim
2024-08-01  4:58 ` [RFC PATCH v3 07/58] perf: Skip pmu_ctx based on event_type Mingwei Zhang
2024-10-11 11:18   ` Peter Zijlstra
2024-08-01  4:58 ` [RFC PATCH v3 08/58] perf: Clean up perf ctx time Mingwei Zhang
2024-10-11 11:39   ` Peter Zijlstra
2024-08-01  4:58 ` [RFC PATCH v3 09/58] perf: Add a EVENT_GUEST flag Mingwei Zhang
2024-08-21  5:27   ` Mi, Dapeng
2024-08-21 13:16     ` Liang, Kan
2024-10-11 11:41     ` Peter Zijlstra
2024-10-11 13:16       ` Liang, Kan
2024-10-11 18:42   ` Peter Zijlstra
2024-10-11 19:49     ` Liang, Kan
2024-10-14 10:55       ` Peter Zijlstra
2024-10-14 11:14   ` Peter Zijlstra
2024-10-14 15:06     ` Liang, Kan
2024-12-13  9:37   ` Sandipan Das
2024-12-13 16:26     ` Liang, Kan
2024-08-01  4:58 ` [RFC PATCH v3 10/58] perf: Add generic exclude_guest support Mingwei Zhang
2024-10-14 11:20   ` Peter Zijlstra
2024-10-14 15:27     ` Liang, Kan
2024-08-01  4:58 ` [RFC PATCH v3 11/58] x86/irq: Factor out common code for installing kvm irq handler Mingwei Zhang
2024-08-01  4:58 ` [RFC PATCH v3 12/58] perf: core/x86: Register a new vector for KVM GUEST PMI Mingwei Zhang
2024-09-09 22:11   ` Colton Lewis
2024-09-10  4:59     ` Mi, Dapeng
2024-09-10 16:45       ` Colton Lewis
2024-08-01  4:58 ` [RFC PATCH v3 13/58] KVM: x86/pmu: Register KVM_GUEST_PMI_VECTOR handler Mingwei Zhang
2024-08-01  4:58 ` [RFC PATCH v3 14/58] perf: Add switch_interrupt() interface Mingwei Zhang
2024-09-19  6:02   ` Manali Shukla
2024-09-19 13:00     ` Liang, Kan
2024-09-20  5:09       ` Manali Shukla
2024-09-23 18:49         ` Mingwei Zhang
2024-09-24 16:55           ` Manali Shukla
2024-10-14 11:59           ` Peter Zijlstra
2024-10-14 16:15             ` Liang, Kan
2024-10-14 17:45               ` Peter Zijlstra
2024-10-15 15:59                 ` Liang, Kan
2024-10-14 11:56   ` Peter Zijlstra
2024-10-14 15:40     ` Liang, Kan
2024-10-14 17:47       ` Peter Zijlstra
2024-10-14 17:51       ` Peter Zijlstra
2024-10-14 12:03   ` Peter Zijlstra
2024-10-14 15:51     ` Liang, Kan
2024-10-14 17:49       ` Peter Zijlstra
2024-10-15 13:23         ` Liang, Kan
2024-10-14 13:52   ` Peter Zijlstra
2024-10-14 15:57     ` Liang, Kan
2024-08-01  4:58 ` [RFC PATCH v3 15/58] perf/x86: Support switch_interrupt interface Mingwei Zhang
2024-09-09 22:11   ` Colton Lewis
2024-09-10  5:00     ` Mi, Dapeng
2024-10-24 19:45     ` Chen, Zide
2024-10-25  0:52       ` Mi, Dapeng
2024-08-01  4:58 ` [RFC PATCH v3 16/58] perf/x86: Forbid PMI handler when guest own PMU Mingwei Zhang
2024-09-02  7:56   ` Mi, Dapeng
2024-08-01  4:58 ` [RFC PATCH v3 17/58] perf: core/x86: Plumb passthrough PMU capability from x86_pmu to x86_pmu_cap Mingwei Zhang
2024-08-01  4:58 ` [RFC PATCH v3 18/58] KVM: x86/pmu: Introduce enable_passthrough_pmu module parameter Mingwei Zhang
2024-11-19 14:30   ` Sean Christopherson
2024-11-20  3:21     ` Mi, Dapeng
2024-11-20 17:06       ` Sean Christopherson
2025-01-15  0:17       ` Mingwei Zhang
2025-01-15  2:52         ` Mi, Dapeng
2024-08-01  4:58 ` [RFC PATCH v3 19/58] KVM: x86/pmu: Plumb through pass-through PMU to vcpu for Intel CPUs Mingwei Zhang
2024-11-19 14:54   ` Sean Christopherson
2024-11-20  3:47     ` Mi, Dapeng
2024-11-20 16:45       ` Sean Christopherson
2024-11-21  0:29         ` Mi, Dapeng
2024-08-01  4:58 ` [RFC PATCH v3 20/58] KVM: x86/pmu: Always set global enable bits in passthrough mode Mingwei Zhang
2024-11-19 15:37   ` Sean Christopherson
2024-11-20  5:19     ` Mi, Dapeng
2024-11-20 17:09       ` Sean Christopherson
2024-11-21  0:37         ` Mi, Dapeng
2024-08-01  4:58 ` [RFC PATCH v3 21/58] KVM: x86/pmu: Add a helper to check if passthrough PMU is enabled Mingwei Zhang
2024-08-01  4:58 ` [RFC PATCH v3 22/58] KVM: x86/pmu: Add host_perf_cap and initialize it in kvm_x86_vendor_init() Mingwei Zhang
2024-11-19 15:43   ` Sean Christopherson
2024-11-20  5:21     ` Mi, Dapeng
2024-08-01  4:58 ` [RFC PATCH v3 23/58] KVM: x86/pmu: Allow RDPMC pass through when all counters exposed to guest Mingwei Zhang
2024-11-19 16:32   ` Sean Christopherson
2024-11-20  5:31     ` Mi, Dapeng
2025-01-22  5:08       ` Mi, Dapeng
2024-08-01  4:58 ` [RFC PATCH v3 24/58] KVM: x86/pmu: Introduce macro PMU_CAP_PERF_METRICS Mingwei Zhang
2024-11-19 17:03   ` Sean Christopherson
2024-11-20  5:44     ` Mi, Dapeng
2024-11-20 17:21       ` Sean Christopherson
2024-08-01  4:58 ` [RFC PATCH v3 25/58] KVM: x86/pmu: Introduce PMU operator to check if rdpmc passthrough allowed Mingwei Zhang
2024-11-19 17:32   ` Sean Christopherson
2024-11-20  6:22     ` Mi, Dapeng
2024-08-01  4:58 ` [RFC PATCH v3 26/58] KVM: x86/pmu: Manage MSR interception for IA32_PERF_GLOBAL_CTRL Mingwei Zhang
2024-08-06  7:04   ` Mi, Dapeng
2024-10-24 20:26   ` Chen, Zide
2024-10-25  2:36     ` Mi, Dapeng
2024-11-19 18:16   ` Sean Christopherson
2024-11-20  7:56     ` Mi, Dapeng
2024-08-01  4:58 ` [RFC PATCH v3 27/58] KVM: x86/pmu: Create a function prototype to disable MSR interception Mingwei Zhang
2024-10-24 19:58   ` Chen, Zide
2024-10-25  2:50     ` Mi, Dapeng
2024-11-19 18:17   ` Sean Christopherson
2024-11-20  7:57     ` Mi, Dapeng
2024-08-01  4:58 ` [RFC PATCH v3 28/58] KVM: x86/pmu: Add intel_passthrough_pmu_msrs() to pass-through PMU MSRs Mingwei Zhang
2024-11-19 18:24   ` Sean Christopherson
2024-11-20 10:12     ` Mi, Dapeng
2024-11-20 18:32       ` Sean Christopherson
2024-08-01  4:58 ` [RFC PATCH v3 29/58] KVM: x86/pmu: Avoid legacy vPMU code when accessing global_ctrl in passthrough vPMU Mingwei Zhang
2024-08-01  4:58 ` [RFC PATCH v3 30/58] KVM: x86/pmu: Exclude PMU MSRs in vmx_get_passthrough_msr_slot() Mingwei Zhang
2024-09-02  7:51   ` Mi, Dapeng
2024-08-01  4:58 ` [RFC PATCH v3 31/58] KVM: x86/pmu: Add counter MSR and selector MSR index into struct kvm_pmc Mingwei Zhang
2024-11-19 18:58   ` Sean Christopherson
2024-11-20 11:50     ` Mi, Dapeng
2024-11-20 17:30       ` Sean Christopherson
2024-11-21  0:56         ` Mi, Dapeng
2024-08-01  4:58 ` [RFC PATCH v3 32/58] KVM: x86/pmu: Introduce PMU operation prototypes for save/restore PMU context Mingwei Zhang
2024-08-01  4:58 ` [RFC PATCH v3 33/58] KVM: x86/pmu: Implement the save/restore of PMU state for Intel CPU Mingwei Zhang
2024-08-06  7:27   ` Mi, Dapeng
2024-08-01  4:58 ` [RFC PATCH v3 34/58] KVM: x86/pmu: Make check_pmu_event_filter() an exported function Mingwei Zhang
2024-08-01  4:58 ` [RFC PATCH v3 35/58] KVM: x86/pmu: Allow writing to event selector for GP counters if event is allowed Mingwei Zhang
2024-08-01  4:58 ` [RFC PATCH v3 36/58] KVM: x86/pmu: Allow writing to fixed counter selector if counter is exposed Mingwei Zhang
2024-09-02  7:59   ` Mi, Dapeng
2024-08-01  4:58 ` [RFC PATCH v3 37/58] KVM: x86/pmu: Switch IA32_PERF_GLOBAL_CTRL at VM boundary Mingwei Zhang
2024-10-24 20:26   ` Chen, Zide
2024-10-25  2:51     ` Mi, Dapeng
2024-11-19  1:46       ` Sean Christopherson
2024-11-19  5:20         ` Mi, Dapeng [this message]
2024-11-19 13:44           ` Sean Christopherson
2024-11-20  2:08             ` Mi, Dapeng
2024-10-31  3:14   ` Mi, Dapeng
2024-08-01  4:58 ` [RFC PATCH v3 38/58] KVM: x86/pmu: Exclude existing vLBR logic from the passthrough PMU Mingwei Zhang
2024-11-20 18:42   ` Sean Christopherson
2024-11-21  1:13     ` Mi, Dapeng
2024-08-01  4:58 ` [RFC PATCH v3 39/58] KVM: x86/pmu: Notify perf core at KVM context switch boundary Mingwei Zhang
2024-08-01  4:58 ` [RFC PATCH v3 40/58] KVM: x86/pmu: Grab x86 core PMU for passthrough PMU VM Mingwei Zhang
2024-11-20 18:46   ` Sean Christopherson
2024-11-21  2:04     ` Mi, Dapeng
2024-08-01  4:58 ` [RFC PATCH v3 41/58] KVM: x86/pmu: Add support for PMU context switch at VM-exit/enter Mingwei Zhang
2024-10-24 19:57   ` Chen, Zide
2024-10-25  2:55     ` Mi, Dapeng
2024-08-01  4:58 ` [RFC PATCH v3 42/58] KVM: x86/pmu: Introduce PMU operator to increment counter Mingwei Zhang
2024-08-01  4:58 ` [RFC PATCH v3 43/58] KVM: x86/pmu: Introduce PMU operator for setting counter overflow Mingwei Zhang
2024-10-25 16:16   ` Chen, Zide
2024-10-27 12:06     ` Mi, Dapeng
2024-11-20 18:48     ` Sean Christopherson
2024-11-21  2:05       ` Mi, Dapeng
2024-08-01  4:58 ` [RFC PATCH v3 44/58] KVM: x86/pmu: Implement emulated counter increment for passthrough PMU Mingwei Zhang
2024-11-20 20:13   ` Sean Christopherson
2024-11-21  2:27     ` Mi, Dapeng
2024-08-01  4:58 ` [RFC PATCH v3 45/58] KVM: x86/pmu: Update pmc_{read,write}_counter() to disconnect perf API Mingwei Zhang
2024-11-20 20:19   ` Sean Christopherson
2024-11-21  2:52     ` Mi, Dapeng
2024-08-01  4:58 ` [RFC PATCH v3 46/58] KVM: x86/pmu: Disconnect counter reprogram logic from passthrough PMU Mingwei Zhang
2024-11-20 20:40   ` Sean Christopherson
2024-11-21  3:02     ` Mi, Dapeng
2024-08-01  4:58 ` [RFC PATCH v3 47/58] KVM: nVMX: Add nested virtualization support for " Mingwei Zhang
2024-11-20 20:52   ` Sean Christopherson
2024-11-21  3:14     ` Mi, Dapeng
2024-08-01  4:58 ` [RFC PATCH v3 48/58] perf/x86/intel: Support PERF_PMU_CAP_PASSTHROUGH_VPMU Mingwei Zhang
2024-08-02 17:50   ` Liang, Kan
2024-08-01  4:58 ` [RFC PATCH v3 49/58] KVM: x86/pmu/svm: Set passthrough capability for vcpus Mingwei Zhang
2024-08-01  4:58 ` [RFC PATCH v3 50/58] KVM: x86/pmu/svm: Set enable_passthrough_pmu module parameter Mingwei Zhang
2024-08-01  4:59 ` [RFC PATCH v3 51/58] KVM: x86/pmu/svm: Allow RDPMC pass through when all counters exposed to guest Mingwei Zhang
2024-08-01  4:59 ` [RFC PATCH v3 52/58] KVM: x86/pmu/svm: Implement callback to disable MSR interception Mingwei Zhang
2024-11-20 21:02   ` Sean Christopherson
2024-11-21  3:24     ` Mi, Dapeng
2024-08-01  4:59 ` [RFC PATCH v3 53/58] KVM: x86/pmu/svm: Set GuestOnly bit and clear HostOnly bit when guest write to event selectors Mingwei Zhang
2024-11-20 21:38   ` Sean Christopherson
2024-11-21  3:26     ` Mi, Dapeng
2024-08-01  4:59 ` [RFC PATCH v3 54/58] KVM: x86/pmu/svm: Add registers to direct access list Mingwei Zhang
2024-08-01  4:59 ` [RFC PATCH v3 55/58] KVM: x86/pmu/svm: Implement handlers to save and restore context Mingwei Zhang
2024-08-01  4:59 ` [RFC PATCH v3 56/58] KVM: x86/pmu/svm: Wire up PMU filtering functionality for passthrough PMU Mingwei Zhang
2024-11-20 21:39   ` Sean Christopherson
2024-11-21  3:29     ` Mi, Dapeng
2024-08-01  4:59 ` [RFC PATCH v3 57/58] KVM: x86/pmu/svm: Implement callback to increment counters Mingwei Zhang
2024-08-01  4:59 ` [RFC PATCH v3 58/58] perf/x86/amd: Support PERF_PMU_CAP_PASSTHROUGH_VPMU for AMD host Mingwei Zhang
2024-09-11 10:45 ` [RFC PATCH v3 00/58] Mediated Passthrough vPMU 3.0 for x86 Ma, Yongwei
2024-11-19 14:00 ` Sean Christopherson
2024-11-20  2:31   ` Mi, Dapeng
2024-11-20 11:55 ` Mi, Dapeng
2024-11-20 18:34   ` Sean Christopherson
