From: Brendan Gregg
Subject: Re: [PATCH v4 2/2] x86/VPMU: implement ipc and arch filter flags
Date: Mon, 4 Jan 2016 17:37:00 -0800
To: "Tian, Kevin"
Cc: "Nakajima, Jun", Andrew Cooper, dietmar.hahn@ts.fujitsu.com, xen-devel@lists.xen.org, Jan Beulich, Boris Ostrovsky
List-Id: xen-devel@lists.xenproject.org

Sorry for the delay...

On Thu, Dec 17, 2015 at 10:12 PM, Tian, Kevin wrote:
> > From: Boris Ostrovsky [mailto:boris.ostrovsky@oracle.com]
> > Sent: Tuesday, December 08, 2015 3:14 AM
> >
> > On 11/30/2015 07:39 PM, Brendan Gregg wrote:
> > > This introduces a way to have a restricted VPMU, by specifying one of two
> > > predefined groups of PMCs to make available. For secure environments, this
> > > allows the VPMU to be used without needing to enable all PMCs.
> > >
> > > Signed-off-by: Brendan Gregg <bgregg@netflix.com>
> > > Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> >
> > This needs to be reviewed also by Intel maintainers (copied). Plus x86
> > maintainers.
> >
> > -boris

[...]
> > > diff --git a/xen/arch/x86/cpu/vpmu_intel.c b/xen/arch/x86/cpu/vpmu_intel.c
> > > index 8d83a1a..a6c5545 100644
> > > --- a/xen/arch/x86/cpu/vpmu_intel.c
> > > +++ b/xen/arch/x86/cpu/vpmu_intel.c
> > > @@ -602,12 +602,19 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content,
> > >                      "MSR_PERF_GLOBAL_STATUS(0x38E)!\n");
> > >          return -EINVAL;
> > >      case MSR_IA32_PEBS_ENABLE:
> > > +        if ( vpmu_features & (XENPMU_FEATURE_IPC_ONLY |
> > > +             XENPMU_FEATURE_ARCH_ONLY) )
> > > +            return -EINVAL;
> > >          if ( msr_content & 1 )
> > >              gdprintk(XENLOG_WARNING, "Guest is trying to enable PEBS, "
> > >                       "which is not supported.\n");
> > >          core2_vpmu_cxt->pebs_enable = msr_content;
> > >          return 0;
> > >      case MSR_IA32_DS_AREA:
> > > +        if ( (vpmu_features & (XENPMU_FEATURE_IPC_ONLY |
> > > +              XENPMU_FEATURE_ARCH_ONLY)) &&
> > > +             !(vpmu_features & XENPMU_FEATURE_INTEL_BTS) )
> > > +            return -EINVAL;
>
> should the check be made just based on BTS?

Ah, yes. The BTS check was added after the new modes, but it should be standalone. I don't think anything else uses DS_AREA other than BTS.
> > >          if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_DS) )
> > >          {
> > >              if ( !is_canonical_address(msr_content) )
> > > @@ -652,12 +659,55 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content,
> > >          tmp = msr - MSR_P6_EVNTSEL(0);
> > >          if ( tmp >= 0 && tmp < arch_pmc_cnt )
> > >          {
> > > +            bool_t blocked = 0;
> > > +            uint64_t umaskevent;
> > >              struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
> > >                  vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
> > >
> > >              if ( msr_content & ARCH_CTRL_MASK )
> > >                  return -EINVAL;
> > >
> > > +            /* PMC filters */
> > > +            umaskevent = msr_content & MSR_IA32_CMT_EVTSEL_UE_MASK;
> > > +            if ( vpmu_features & XENPMU_FEATURE_IPC_ONLY ||
> > > +                 vpmu_features & XENPMU_FEATURE_ARCH_ONLY )
> > > +            {
> > > +                blocked = 1;
> > > +                switch ( umaskevent )
> > > +                {
> > > +                /*
> > > +                 * See the Pre-Defined Architectural Performance Events table
> > > +                 * from the Intel 64 and IA-32 Architectures Software
> > > +                 * Developer's Manual, Volume 3B, System Programming Guide,
> > > +                 * Part 2.
> > > +                 */
> > > +                case 0x003c:        /* unhalted core cycles */
>
> Better to copy the same wording from the SDM, e.g. "UnHalted Core Cycles". Same for below.

Ok, yes.
> > > +                case 0x013c:        /* unhalted ref cycles */
> > > +                case 0x00c0:        /* instruction retired */
> > > +                    blocked = 0;
> > > +                default:
> > > +                    break;
> > > +                }
> > > +            }
> > > +
> > > +            if ( vpmu_features & XENPMU_FEATURE_ARCH_ONLY )
> > > +            {
> > > +                /* additional counters beyond IPC only; blocked already set */
> > > +                switch ( umaskevent )
> > > +                {
> > > +                case 0x4f2e:        /* LLC reference */
> > > +                case 0x412e:        /* LLC misses */
> > > +                case 0x00c4:        /* branch instruction retired */
> > > +                case 0x00c5:        /* branch */
> > > +                    blocked = 0;
> > > +                default:
> > > +                    break;
> > > +                }
> > > +            }
> > > +
> > > +            if ( blocked )
> > > +                return -EINVAL;
> > > +
> > >              if ( has_hvm_container_vcpu(v) )
> > >                  vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL,
> > >                                     &core2_vpmu_cxt->global_ctrl);
> > > diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
> > > index b8ad93c..0542064 100644
> > > --- a/xen/include/asm-x86/msr-index.h
> > > +++ b/xen/include/asm-x86/msr-index.h
> > > @@ -328,6 +328,7 @@
> > >
> > >  /* Platform Shared Resource MSRs */
> > >  #define MSR_IA32_CMT_EVTSEL             0x00000c8d
> > > +#define MSR_IA32_CMT_EVTSEL_UE_MASK     0x0000ffff
> > >  #define MSR_IA32_CMT_CTR                0x00000c8e
> > >  #define MSR_IA32_PSR_ASSOC              0x00000c8f
> > >  #define MSR_IA32_PSR_L3_QOS_CFG         0x00000c81
> > > diff --git a/xen/include/public/pmu.h b/xen/include/public/pmu.h
> > > index 7753df0..f9ad7b4 100644
> > > --- a/xen/include/public/pmu.h
> > > +++ b/xen/include/public/pmu.h
> > > @@ -84,9 +84,19 @@ DEFINE_XEN_GUEST_HANDLE(xen_pmu_params_t);
> > >
> > >  /*
> > >   * PMU features:
> > > - * - XENPMU_FEATURE_INTEL_BTS: Intel BTS support (ignored on AMD)
> > > + * - XENPMU_FEATURE_INTEL_BTS:  Intel BTS support (ignored on AMD)
> > > + * - XENPMU_FEATURE_IPC_ONLY:   Restrict PMC to the most minimum set possible.
>
> PMC -> PMCs

Ok.

> > > + *                              Instructions, cycles, and ref cycles. Can be
> > > + *                              used to calculate instructions-per-cycle (IPC)
> > > + *                              (ignored on AMD).
> > > + * - XENPMU_FEATURE_ARCH_ONLY:  Restrict PMCs to the Intel Pre-Defined
> > > + *                              Architecteral Performance Events exposed by
>
> Architecteral -> Architectural

Ok.

> > > + *                              cpuid and listed in the Intel developer's manual
> > > + *                              (ignored on AMD).
> > >   */
> > > -#define XENPMU_FEATURE_INTEL_BTS  1
> > > +#define XENPMU_FEATURE_INTEL_BTS  (1<<0)
> > > +#define XENPMU_FEATURE_IPC_ONLY   (1<<1)
> > > +#define XENPMU_FEATURE_ARCH_ONLY  (1<<2)
> > >
> > >  /*
> > >   * Shared PMU data between hypervisor and PV(H) domains.

Thanks for checking! New patch (v5) coming...

Brendan

--
Brendan Gregg, Senior Performance Architect, Netflix