From: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Michael Ellerman
Cc: linuxppc-dev@lists.ozlabs.org, linux-s390@vger.kernel.org, sparclinux@vger.kernel.org
Subject: [PATCH 08/10] perf: Add return value to __perf_event_read()
Date: Sun, 26 Jul 2015 22:40:36 -0700
Message-Id: <1437975638-789-9-git-send-email-sukadev@linux.vnet.ibm.com>
In-Reply-To: <1437975638-789-1-git-send-email-sukadev@linux.vnet.ibm.com>
References: <1437975638-789-1-git-send-email-sukadev@linux.vnet.ibm.com>
List-Id: Linux on PowerPC Developers Mail List

Add a return value to __perf_event_read(). The return value will be
needed later, when perf_read_group() implements the ability to read
several counters in a PERF_PMU_TXN_READ transaction.
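As an illustrative sketch only (an assumption about how a caller might
look, not the actual perf_read_group() from the later patch in this
series), the propagated return value would let a group read bail out as
soon as one cross-CPU read fails:

/*
 * Hypothetical caller: not part of this patch, shown only to motivate
 * the return value.  Field and helper names follow the current code
 * (group_leader, sibling_list, group_entry, perf_event_read()).
 */
static int perf_read_group_sketch(struct perf_event *event)
{
	struct perf_event *leader = event->group_leader, *sub;
	int ret;

	/* Read the leader first; propagate any error from the IPI-based read. */
	ret = perf_event_read(leader, true);
	if (ret)
		return ret;

	/* Then read each sibling, again stopping at the first failure. */
	list_for_each_entry(sub, &leader->sibling_list, group_entry) {
		ret = perf_event_read(sub, false);
		if (ret)
			return ret;
	}

	return 0;
}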
Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
---
 kernel/events/core.c | 22 +++++++++++++++++++---
 1 file changed, 19 insertions(+), 3 deletions(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index f38fe0b..951d835 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -3174,12 +3174,18 @@ void perf_event_exec(void)
 	rcu_read_unlock();
 }
 
+struct perf_read_data {
+	struct perf_event *event;
+	int ret;
+};
+
 /*
  * Cross CPU call to read the hardware event
  */
 static void __perf_event_read(void *info)
 {
-	struct perf_event *event = info;
+	struct perf_read_data *data = info;
+	struct perf_event *event = data->event;
 	struct perf_event_context *ctx = event->ctx;
 	struct perf_cpu_context *cpuctx = __get_cpu_context(ctx);
 
@@ -3201,6 +3207,8 @@ static void __perf_event_read(void *info)
 	update_event_times(event);
 	if (event->state == PERF_EVENT_STATE_ACTIVE)
 		event->pmu->read(event);
+
+	data->ret = 0;
 	raw_spin_unlock(&ctx->lock);
 }
 
@@ -3214,13 +3222,21 @@ static inline u64 perf_event_count(struct perf_event *event)
 
 static int perf_event_read(struct perf_event *event, bool group)
 {
+	int ret = 0;
+
 	/*
 	 * If event is enabled and currently active on a CPU, update the
 	 * value in the event structure:
 	 */
 	if (event->state == PERF_EVENT_STATE_ACTIVE) {
+		struct perf_read_data data = {
+			.event = event,
+			.ret = 0,
+		};
+
 		smp_call_function_single(event->oncpu,
-					 __perf_event_read, event, 1);
+					 __perf_event_read, &data, 1);
+		ret = data.ret;
 	} else if (event->state == PERF_EVENT_STATE_INACTIVE) {
 		struct perf_event_context *ctx = event->ctx;
 		unsigned long flags;
@@ -3244,7 +3260,7 @@ static int perf_event_read(struct perf_event *event, bool group)
 		raw_spin_unlock_irqrestore(&ctx->lock, flags);
 	}
 
-	return 0;
+	return ret;
 }
 
 /*
-- 
1.7.9.5