From: "Liang, Kan" <kan.liang@linux.intel.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Mingwei Zhang <mizhang@google.com>,
Sean Christopherson <seanjc@google.com>,
Paolo Bonzini <pbonzini@redhat.com>,
Xiong Zhang <xiong.y.zhang@intel.com>,
Dapeng Mi <dapeng1.mi@linux.intel.com>,
Kan Liang <kan.liang@intel.com>,
Zhenyu Wang <zhenyuw@linux.intel.com>,
Manali Shukla <manali.shukla@amd.com>,
Sandipan Das <sandipan.das@amd.com>,
Jim Mattson <jmattson@google.com>,
Stephane Eranian <eranian@google.com>,
Ian Rogers <irogers@google.com>,
Namhyung Kim <namhyung@kernel.org>,
gce-passthrou-pmu-dev@google.com,
Samantha Alt <samantha.alt@intel.com>,
Zhiyuan Lv <zhiyuan.lv@intel.com>,
Yanfei Xu <yanfei.xu@intel.com>, maobibo <maobibo@loongson.cn>,
Like Xu <like.xu.linux@gmail.com>,
kvm@vger.kernel.org, linux-perf-users@vger.kernel.org
Subject: Re: [PATCH v2 07/54] perf: Add generic exclude_guest support
Date: Thu, 13 Jun 2024 09:37:36 -0400
Message-ID: <3755c323-6244-4e75-9e79-679bd05b13a4@linux.intel.com>
In-Reply-To: <20240613091507.GA17707@noisy.programming.kicks-ass.net>
On 2024-06-13 5:15 a.m., Peter Zijlstra wrote:
> On Wed, Jun 12, 2024 at 09:38:06AM -0400, Liang, Kan wrote:
>> On 2024-06-12 7:17 a.m., Peter Zijlstra wrote:
>>> On Tue, Jun 11, 2024 at 09:27:46AM -0400, Liang, Kan wrote:
>>>> diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
>>>> index dd4920bf3d1b..68c8b93c4e5c 100644
>>>> --- a/include/linux/perf_event.h
>>>> +++ b/include/linux/perf_event.h
>>>> @@ -945,6 +945,7 @@ struct perf_event_context {
>>>> u64 time;
>>>> u64 timestamp;
>>>> u64 timeoffset;
>>>> + u64 timeguest;
>>>>
>>>> /*
>>>> * These fields let us detect when two contexts have both
>>>
>>>> @@ -651,10 +653,26 @@ __perf_update_times(struct perf_event *event, u64 now, u64 *enabled, u64 *runnin
>>>>
>>>> static void perf_event_update_time(struct perf_event *event)
>>>> {
>>>> - u64 now = perf_event_time(event);
>>>> + u64 now;
>>>> +
>>>> + /* Never count the time of an active guest into an exclude_guest event. */
>>>> + if (event->ctx->timeguest &&
>>>> + event->pmu->capabilities & PERF_PMU_CAP_PASSTHROUGH_VPMU) {
>>>> + /*
>>>> + * If a guest is running, use the timestamp while entering the guest.
>>>> + * If the guest is leaving, reset the event timestamp.
>>>> + */
>>>> + if (__this_cpu_read(perf_in_guest))
>>>> + event->tstamp = event->ctx->timeguest;
>>>> + else
>>>> + event->tstamp = event->ctx->time;
>>>> + return;
>>>> + }
>>>>
>>>> + now = perf_event_time(event);
>>>> __perf_update_times(event, now, &event->total_time_enabled,
>>>> &event->total_time_running);
>>>> +
>>>> event->tstamp = now;
>>>> }
>>>
>>> So I really don't like this much,
>>
>> An alternative I can imagine is to maintain a dedicated timeline for
>> the PASSTHROUGH PMUs. For that, we would probably need two new timelines:
>> one for normal events and one for cgroup events. That sounds too complex.
>
> I'm afraid we might have to. Specifically, the below:
>
>> diff --git a/kernel/events/core.c b/kernel/events/core.c
>> index 019c237dd456..6c46699c6752 100644
>> --- a/kernel/events/core.c
>> +++ b/kernel/events/core.c
>> @@ -665,7 +665,7 @@ static void perf_event_update_time(struct perf_event *event)
>> if (__this_cpu_read(perf_in_guest))
>> event->tstamp = event->ctx->timeguest;
>> else
>> - event->tstamp = event->ctx->time;
>> + event->tstamp = perf_event_time(event);
>> return;
>> }
>
> is still broken in that it (ab)uses event state to track time, and this
> goes sideways in case of event overcommit, because then
> ctx_sched_{out,in}() will not visit all events.
>
> We've run into that before. Time-keeping really should be per context or
> we'll get a ton of pain.
>
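To make the accounting concrete (my numbers, purely illustrative): if
ctx->time has advanced 100ms since the context started, and 30ms of that
elapsed with a guest running (ctx->guest_time == 30ms), then an
exclude_guest event reads 100 - 30 = 70ms of enabled time. Because the
subtraction lives entirely in per-context state, an overcommitted event
that ctx_sched_out() never visited still computes the correct value the
next time perf_event_time() is called on it.
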
> I've ended up with the (uncompiled) below. Yes, it is unfortunate, but
> aside from a few cleanups (we could introduce a struct time_ctx { u64
> time, stamp, offset }; and fold a bunch of code), this is more or less
> the best we can do, I'm afraid.
Sure. I will try the code below and implement the cleanup patch as well.
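
For the cleanup, I am thinking of something along these lines (a
minimal, uncompiled sketch of the struct time_ctx idea you mention; the
helper name is a placeholder):

struct time_ctx {
	u64 time;
	u64 stamp;
	u64 offset;
};

static __always_inline void __update_time_ctx(struct time_ctx *t, u64 now, bool adv)
{
	if (adv)
		t->time += now - t->stamp;	/* accumulate elapsed time */
	t->stamp = now;
	/* single offset for lockless reads, see update_context_time() */
	WRITE_ONCE(t->offset, t->time - t->stamp);
}

__update_context_time(), __update_context_guest_time() and
__update_cgrp_time() would then all reduce to calls on their respective
struct time_ctx instance.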
Thanks,
Kan
>
> ---
>
> --- a/include/linux/perf_event.h
> +++ b/include/linux/perf_event.h
> @@ -947,7 +947,9 @@ struct perf_event_context {
> u64 time;
> u64 timestamp;
> u64 timeoffset;
> - u64 timeguest;
> + u64 guest_time;
> + u64 guest_timestamp;
> + u64 guest_timeoffset;
>
> /*
> * These fields let us detect when two contexts have both
> @@ -1043,6 +1045,9 @@ struct perf_cgroup_info {
> u64 time;
> u64 timestamp;
> u64 timeoffset;
> + u64 guest_time;
> + u64 guest_timestamp;
> + u64 guest_timeoffset;
> int active;
> };
>
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -638,26 +638,9 @@ __perf_update_times(struct perf_event *e
>
> static void perf_event_update_time(struct perf_event *event)
> {
> - u64 now;
> -
> - /* Never count the time of an active guest into an exclude_guest event. */
> - if (event->ctx->timeguest &&
> - event->pmu->capabilities & PERF_PMU_CAP_PASSTHROUGH_VPMU) {
> - /*
> - * If a guest is running, use the timestamp while entering the guest.
> - * If the guest is leaving, reset the event timestamp.
> - */
> - if (__this_cpu_read(perf_in_guest))
> - event->tstamp = event->ctx->timeguest;
> - else
> - event->tstamp = event->ctx->time;
> - return;
> - }
> -
> - now = perf_event_time(event);
> + u64 now = perf_event_time(event);
> __perf_update_times(event, now, &event->total_time_enabled,
> &event->total_time_running);
> -
> event->tstamp = now;
> }
>
> @@ -780,19 +763,33 @@ static inline int is_cgroup_event(struct
> static inline u64 perf_cgroup_event_time(struct perf_event *event)
> {
> struct perf_cgroup_info *t;
> + u64 time;
>
> t = per_cpu_ptr(event->cgrp->info, event->cpu);
> - return t->time;
> + time = t->time;
> + if (event->attr.exclude_guest)
> + time -= t->guest_time;
> + return time;
> }
>
> static inline u64 perf_cgroup_event_time_now(struct perf_event *event, u64 now)
> {
> struct perf_cgroup_info *t;
> + u64 time, guest_time;
>
> t = per_cpu_ptr(event->cgrp->info, event->cpu);
> - if (!__load_acquire(&t->active))
> - return t->time;
> - now += READ_ONCE(t->timeoffset);
> + if (!__load_acquire(&t->active)) {
> + time = t->time;
> + if (event->attr.exclude_guest)
> + time -= t->guest_time;
> + return time;
> + }
> +
> + time = now + READ_ONCE(t->timeoffset);
> + if (event->attr.exclude_guest && __this_cpu_read(perf_in_guest)) {
> + guest_time = now + READ_ONCE(t->guest_timeoffset);
> + time -= guest_time;
> + }
> - return now;
> + return time;
> }
>
> @@ -807,6 +804,17 @@ static inline void __update_cgrp_time(st
> WRITE_ONCE(info->timeoffset, info->time - info->timestamp);
> }
>
> +static inline void __update_cgrp_guest_time(struct perf_cgroup_info *info, u64 now, bool adv)
> +{
> + if (adv)
> + info->guest_time += now - info->guest_timestamp;
> + info->guest_timestamp = now;
> + /*
> + * see update_context_time()
> + */
> + WRITE_ONCE(info->guest_timeoffset, info->guest_time - info->guest_timestamp);
> +}
> +
> static inline void update_cgrp_time_from_cpuctx(struct perf_cpu_context *cpuctx, bool final)
> {
> struct perf_cgroup *cgrp = cpuctx->cgrp;
> @@ -821,6 +829,8 @@ static inline void update_cgrp_time_from
> info = this_cpu_ptr(cgrp->info);
>
> __update_cgrp_time(info, now, true);
> + if (__this_cpu_read(perf_in_guest))
> + __update_cgrp_guest_time(info, now, true);
> if (final)
> __store_release(&info->active, 0);
> }
> @@ -1501,14 +1511,39 @@ static void __update_context_time(struct
> WRITE_ONCE(ctx->timeoffset, ctx->time - ctx->timestamp);
> }
>
> +static void __update_context_guest_time(struct perf_event_context *ctx, bool adv)
> +{
> + u64 now = ctx->timestamp; /* must be called after __update_context_time(); */
> +
> + lockdep_assert_held(&ctx->lock);
> +
> + if (adv)
> + ctx->guest_time += now - ctx->guest_timestamp;
> + ctx->guest_timestamp = now;
> +
> + /*
> + * The above: time' = time + (now - timestamp), can be re-arranged
> + * into: time` = now + (time - timestamp), which gives a single value
> + * offset to compute future time without locks on.
> + *
> + * See perf_event_time_now(), which can be used from NMI context where
> + * it's (obviously) not possible to acquire ctx->lock in order to read
> + * both the above values in a consistent manner.
> + */
> + WRITE_ONCE(ctx->guest_timeoffset, ctx->guest_time - ctx->guest_timestamp);
> +}
> +
> static void update_context_time(struct perf_event_context *ctx)
> {
> __update_context_time(ctx, true);
> + if (__this_cpu_read(perf_in_guest))
> + __update_context_guest_time(ctx, true);
> }
>
> static u64 perf_event_time(struct perf_event *event)
> {
> struct perf_event_context *ctx = event->ctx;
> + u64 time;
>
> if (unlikely(!ctx))
> return 0;
> @@ -1516,12 +1551,17 @@ static u64 perf_event_time(struct perf_e
> if (is_cgroup_event(event))
> return perf_cgroup_event_time(event);
>
> - return ctx->time;
> + time = ctx->time;
> + if (event->attr.exclude_guest)
> + time -= ctx->guest_time;
> +
> + return time;
> }
>
> static u64 perf_event_time_now(struct perf_event *event, u64 now)
> {
> struct perf_event_context *ctx = event->ctx;
> + u64 time, guest_time;
>
> if (unlikely(!ctx))
> return 0;
> @@ -1529,11 +1569,19 @@ static u64 perf_event_time_now(struct pe
> if (is_cgroup_event(event))
> return perf_cgroup_event_time_now(event, now);
>
> - if (!(__load_acquire(&ctx->is_active) & EVENT_TIME))
> - return ctx->time;
> + if (!(__load_acquire(&ctx->is_active) & EVENT_TIME)) {
> + time = ctx->time;
> + if (event->attr.exclude_guest)
> + time -= ctx->guest_time;
> + return time;
> + }
>
> - now += READ_ONCE(ctx->timeoffset);
> - return now;
> + time = now + READ_ONCE(ctx->timeoffset);
> + if (event->attr.exclude_guest && __this_cpu_read(perf_in_guest)) {
> + guest_time = now + READ_ONCE(ctx->guest_timeoffset);
> + time -= guest_time;
> + }
> + return time;
> }
>
> static enum event_type_t get_event_type(struct perf_event *event)
> @@ -3340,9 +3388,14 @@ ctx_sched_out(struct perf_event_context
> * would only update time for the pinned events.
> */
> if (is_active & EVENT_TIME) {
> + bool stop;
> +
> + stop = !((ctx->is_active & event_type) & EVENT_ALL) &&
> + ctx == &cpuctx->ctx;
> +
> /* update (and stop) ctx time */
> update_context_time(ctx);
> - update_cgrp_time_from_cpuctx(cpuctx, ctx == &cpuctx->ctx);
> + update_cgrp_time_from_cpuctx(cpuctx, stop);
> /*
> * CPU-release for the below ->is_active store,
> * see __load_acquire() in perf_event_time_now()
> @@ -3366,8 +3419,12 @@ ctx_sched_out(struct perf_event_context
> * with PERF_PMU_CAP_PASSTHROUGH_VPMU.
> */
> is_active = EVENT_ALL;
> - } else
> + __update_context_guest_time(ctx, false);
> + perf_cgroup_set_guest_timestamp(cpuctx);
> + barrier();
> + } else {
> is_active ^= ctx->is_active; /* changed bits */
> + }
>
> list_for_each_entry(pmu_ctx, &ctx->pmu_ctx_list, pmu_ctx_entry) {
> if (perf_skip_pmu_ctx(pmu_ctx, event_type))
> @@ -3866,10 +3923,15 @@ static inline void group_update_userpage
> event_update_userpage(event);
> }
>
> +struct merge_sched_data {
> + int can_add_hw;
> + enum event_type_t event_type;
> +};
> +
> static int merge_sched_in(struct perf_event *event, void *data)
> {
> struct perf_event_context *ctx = event->ctx;
> - int *can_add_hw = data;
> + struct merge_sched_data *msd = data;
>
> if (event->state <= PERF_EVENT_STATE_OFF)
> return 0;
> @@ -3881,18 +3943,18 @@ static int merge_sched_in(struct perf_ev
> * Don't schedule in any exclude_guest events of PMU with
> * PERF_PMU_CAP_PASSTHROUGH_VPMU, while a guest is running.
> */
> - if (__this_cpu_read(perf_in_guest) &&
> - event->pmu->capabilities & PERF_PMU_CAP_PASSTHROUGH_VPMU &&
> - event->attr.exclude_guest)
> + if (event->attr.exclude_guest && __this_cpu_read(perf_in_guest) &&
> + (event->pmu->capabilities & PERF_PMU_CAP_PASSTHROUGH_VPMU) &&
> + !(msd->event_type & EVENT_GUEST))
> return 0;
>
> - if (group_can_go_on(event, *can_add_hw)) {
> + if (group_can_go_on(event, msd->can_add_hw)) {
> if (!group_sched_in(event, ctx))
> list_add_tail(&event->active_list, get_event_list(event));
> }
>
> if (event->state == PERF_EVENT_STATE_INACTIVE) {
> - *can_add_hw = 0;
> + msd->can_add_hw = 0;
> if (event->attr.pinned) {
> perf_cgroup_event_disable(event, ctx);
> perf_event_set_state(event, PERF_EVENT_STATE_ERROR);
> @@ -3911,11 +3973,15 @@ static int merge_sched_in(struct perf_ev
>
> static void pmu_groups_sched_in(struct perf_event_context *ctx,
> struct perf_event_groups *groups,
> - struct pmu *pmu)
> + struct pmu *pmu,
> + enum event_type_t event_type)
> {
> - int can_add_hw = 1;
> + struct merge_sched_data msd = {
> + .can_add_hw = 1,
> + .event_type = event_type,
> + };
> visit_groups_merge(ctx, groups, smp_processor_id(), pmu,
> - merge_sched_in, &can_add_hw);
> + merge_sched_in, &msd);
> }
>
> static void ctx_groups_sched_in(struct perf_event_context *ctx,
> @@ -3927,14 +3993,14 @@ static void ctx_groups_sched_in(struct p
> list_for_each_entry(pmu_ctx, &ctx->pmu_ctx_list, pmu_ctx_entry) {
> if (perf_skip_pmu_ctx(pmu_ctx, event_type))
> continue;
> - pmu_groups_sched_in(ctx, groups, pmu_ctx->pmu);
> + pmu_groups_sched_in(ctx, groups, pmu_ctx->pmu, event_type);
> }
> }
>
> static void __pmu_ctx_sched_in(struct perf_event_context *ctx,
> struct pmu *pmu)
> {
> - pmu_groups_sched_in(ctx, &ctx->flexible_groups, pmu);
> + pmu_groups_sched_in(ctx, &ctx->flexible_groups, pmu, 0);
> }
>
> static void
> @@ -3949,6 +4015,8 @@ ctx_sched_in(struct perf_event_context *
> return;
>
> if (!(is_active & EVENT_TIME)) {
> + /* EVENT_TIME should be active while the guest runs */
> + WARN_ON_ONCE(event_type & EVENT_GUEST);
> /* start ctx time */
> __update_context_time(ctx, false);
> perf_cgroup_set_timestamp(cpuctx);
> @@ -3979,8 +4047,11 @@ ctx_sched_in(struct perf_event_context *
> * the exclude_guest events.
> */
> update_context_time(ctx);
> - } else
> + update_cgrp_time_from_cpuctx(cpuctx, false);
> + barrier();
> + } else {
> is_active ^= ctx->is_active; /* changed bits */
> + }
>
> /*
> * First go through the list and put on any pinned groups
> @@ -5832,25 +5903,20 @@ void perf_guest_enter(void)
>
> perf_ctx_lock(cpuctx, cpuctx->task_ctx);
>
> - if (WARN_ON_ONCE(__this_cpu_read(perf_in_guest))) {
> - perf_ctx_unlock(cpuctx, cpuctx->task_ctx);
> - return;
> - }
> + if (WARN_ON_ONCE(__this_cpu_read(perf_in_guest)))
> + goto unlock;
>
> perf_ctx_disable(&cpuctx->ctx, EVENT_GUEST);
> ctx_sched_out(&cpuctx->ctx, EVENT_GUEST);
> - /* Set the guest start time */
> - cpuctx->ctx.timeguest = cpuctx->ctx.time;
> perf_ctx_enable(&cpuctx->ctx, EVENT_GUEST);
> if (cpuctx->task_ctx) {
> perf_ctx_disable(cpuctx->task_ctx, EVENT_GUEST);
> task_ctx_sched_out(cpuctx->task_ctx, EVENT_GUEST);
> - cpuctx->task_ctx->timeguest = cpuctx->task_ctx->time;
> perf_ctx_enable(cpuctx->task_ctx, EVENT_GUEST);
> }
>
> __this_cpu_write(perf_in_guest, true);
> -
> +unlock:
> perf_ctx_unlock(cpuctx, cpuctx->task_ctx);
> }
>
> @@ -5862,24 +5928,21 @@ void perf_guest_exit(void)
>
> perf_ctx_lock(cpuctx, cpuctx->task_ctx);
>
> - if (WARN_ON_ONCE(!__this_cpu_read(perf_in_guest))) {
> - perf_ctx_unlock(cpuctx, cpuctx->task_ctx);
> - return;
> - }
> -
> - __this_cpu_write(perf_in_guest, false);
> + if (WARN_ON_ONCE(!__this_cpu_read(perf_in_guest)))
> + goto unlock;
>
> perf_ctx_disable(&cpuctx->ctx, EVENT_GUEST);
> ctx_sched_in(&cpuctx->ctx, EVENT_GUEST);
> - cpuctx->ctx.timeguest = 0;
> perf_ctx_enable(&cpuctx->ctx, EVENT_GUEST);
> if (cpuctx->task_ctx) {
> perf_ctx_disable(cpuctx->task_ctx, EVENT_GUEST);
> ctx_sched_in(cpuctx->task_ctx, EVENT_GUEST);
> - cpuctx->task_ctx->timeguest = 0;
> perf_ctx_enable(cpuctx->task_ctx, EVENT_GUEST);
> }
>
> + __this_cpu_write(perf_in_guest, false);
> +
> +unlock:
> perf_ctx_unlock(cpuctx, cpuctx->task_ctx);
> }
>
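For reference, the KVM side is expected to bracket the guest run with
these two calls. A hypothetical sketch based on the rest of the series,
not part of this patch:

	/* vCPU run loop, sketch only */
	perf_guest_enter();	/* sched out exclude_guest events, start guest time */
	/* ... load guest PMU state, VM-enter, run guest, VM-exit ... */
	perf_guest_exit();	/* stop guest time, sched exclude_guest events back in */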