From: Wei Huang
Subject: Re: [PATCH V1 2/4] KVM: x86/vPMU: Convert pmu.c code into Intel specific code
Date: Tue, 04 Nov 2014 12:38:55 -0600
Message-ID: <54591D3F.6000707@redhat.com>
References: <1414771534-29411-1-git-send-email-wei@redhat.com> <1414771534-29411-3-git-send-email-wei@redhat.com> <20141103183606.GA31554@potion.brq.redhat.com>
In-Reply-To: <20141103183606.GA31554@potion.brq.redhat.com>
To: Radim Krčmář
Cc: kvm@vger.kernel.org, pbonzini@redhat.com, gleb@kernel.org

On 11/03/2014 12:36 PM, Radim Krčmář wrote:
> 2014-10-31 12:05-0400, Wei Huang:
>> This patch converts the existing pmu.c into Intel-specific code and
>> hooks it up with the PMU interface using the following steps:
>>
>> - Convert pmu.c to pmu_intel.c; all public PMU functions are renamed
>>   and hooked up with the newly defined intel_pmu_ops.
>> - Create a corresponding pmu_amd.c file with empty functions for the
>>   AMD arch.
>> - The PMU function pointer, kvm_pmu_ops, is initialized by calling
>>   kvm_x86_ops->get_pmu_ops().
>> - To reduce the code size, Intel and AMD modules are now generated
>>   from their corresponding arch and PMU files; in the meantime, due
>>   to this arrangement, several functions are exposed as public ones
>>   to allow calling from PMU code.
>
> (Patch would have been easier to review as two patches, where the first
> one just renames pmu to pmu_intel and the second one does the split.)

Will do in next spin.

>
>> Signed-off-by: Wei Huang
>> ---
>
> ---
> The rest is an idea for consideration ...
> Please consider everything from now to be in parentheses
> = ignore if you are not enjoying shuffling code around.
>
>> +
>> +static unsigned find_arch_event(struct kvm_pmu *pmu, u8 event_select,
>> +				u8 unit_mask)
>
> The prototype could be extended with
>   struct kvm_event_hw_type_mapping, size_t nr_events
> and reused for AMD as well.
>
>> +{
>> +	int i;
>> +
>> +	for (i = 0; i < ARRAY_SIZE(intel_arch_events); i++)
>> +		if (intel_arch_events[i].eventsel == event_select
>> +		    && intel_arch_events[i].unit_mask == unit_mask
>> +		    && (pmu->available_event_types & (1 << i)))
>
> pmu->available_event_types would be -1 on AMD.
>
>> +			break;
>> +
>> +	if (i == ARRAY_SIZE(intel_arch_events))
>> +		return PERF_COUNT_HW_MAX;
>> +
>> +	return intel_arch_events[i].event_type;
>> +}
>
>> +static void reprogram_counter(struct kvm_pmc *pmc, u32 type,
>> +		unsigned config, bool exclude_user, bool exclude_kernel,
>> +		bool intr, bool in_tx, bool in_tx_cp)
>
> This function could drop 'bool in_tx' and 'bool in_tx_cp', because it
> already accepts config, so these flags can already be included there.
> It has only one caller that uses them anyway.
>
>> +{
>> +	struct perf_event *event;
>> +	struct perf_event_attr attr = {
>> +		.type = type,
>> +		.size = sizeof(attr),
>> +		.pinned = true,
>> +		.exclude_idle = true,
>> +		.exclude_host = 1,
>> +		.exclude_user = exclude_user,
>> +		.exclude_kernel = exclude_kernel,
>> +		.config = config,
>> +	};
>> +	if (in_tx)
>> +		attr.config |= HSW_IN_TX;
>> +	if (in_tx_cp)
>> +		attr.config |= HSW_IN_TX_CHECKPOINTED;
>
> And after dropping this, it is identical to AMD.

Thanks. I will consolidate the ideas above into next spin.

>
>> +
>> +	attr.sample_period = (-pmc->counter) & pmc_bitmask(pmc);
>> +
>> +	event = perf_event_create_kernel_counter(&attr, -1, current, intr ?
>> +						 kvm_perf_overflow_intr :
>> +						 kvm_perf_overflow, pmc);
>> +	if (IS_ERR(event)) {
>> +		printk_once("kvm: pmu event creation failed %ld\n",
>> +			    PTR_ERR(event));
>> +		return;
>> +	}
>> +
>> +	pmc->perf_event = event;
>> +	clear_bit(pmc->idx, (unsigned long *)&pmc->vcpu->arch.pmu.reprogram_pmi);
>> +}
>> +
>> +static void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
>> +{
>
> Because the two functions that this one uses have been merged, we could
> propagate the changes and reuse this one with AMD as well.
>
>> +	unsigned config, type = PERF_TYPE_RAW;
>> +	u8 event_select, unit_mask;
>> +
>> +	if (eventsel & ARCH_PERFMON_EVENTSEL_PIN_CONTROL)
>> +		printk_once("kvm pmu: pin control bit is ignored\n");
>> +
>> +	pmc->eventsel = eventsel;
>> +
>> +	stop_counter(pmc);
>> +
>> +	if (!(eventsel & ARCH_PERFMON_EVENTSEL_ENABLE) || !pmc_enabled(pmc))
>> +		return;
>> +
>> +	event_select = eventsel & ARCH_PERFMON_EVENTSEL_EVENT;
>> +	unit_mask = (eventsel & ARCH_PERFMON_EVENTSEL_UMASK) >> 8;
>> +
>> +	if (!(eventsel & (ARCH_PERFMON_EVENTSEL_EDGE |
>> +			  ARCH_PERFMON_EVENTSEL_INV |
>> +			  ARCH_PERFMON_EVENTSEL_CMASK |
>> +			  HSW_IN_TX |
>> +			  HSW_IN_TX_CHECKPOINTED))) {
>> +		config = find_arch_event(&pmc->vcpu->arch.pmu, event_select,
>> +					 unit_mask);
>> +		if (config != PERF_COUNT_HW_MAX)
>> +			type = PERF_TYPE_HARDWARE;
>> +	}
>> +
>> +	if (type == PERF_TYPE_RAW)
>> +		config = eventsel & X86_RAW_EVENT_MASK;
>> +
>> +	reprogram_counter(pmc, type, config,
>> +			  !(eventsel & ARCH_PERFMON_EVENTSEL_USR),
>> +			  !(eventsel & ARCH_PERFMON_EVENTSEL_OS),
>> +			  eventsel & ARCH_PERFMON_EVENTSEL_INT,
>> +			  (eventsel & HSW_IN_TX),
>> +			  (eventsel & HSW_IN_TX_CHECKPOINTED));
>> +}