From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: from mail.linuxfoundation.org ([140.211.169.12]:36028 "EHLO
	mail.linuxfoundation.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S934458AbdERNTI (ORCPT );
	Thu, 18 May 2017 09:19:08 -0400
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Sasha Levin,
	"Peter Zijlstra (Intel)", Arnaldo Carvalho de Melo,
	Frederic Weisbecker, Jiri Olsa, Linus Torvalds, Stephane Eranian,
	Thomas Gleixner, Vince Weaver, Ingo Molnar, Amit Pundir
Subject: [PATCH 3.18 33/49] perf: Fix race in swevent hash
Date: Thu, 18 May 2017 15:16:42 +0200
Message-Id: <20170518131644.445904679@linuxfoundation.org>
In-Reply-To: <20170518131643.028057293@linuxfoundation.org>
References: <20170518131643.028057293@linuxfoundation.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Sender: stable-owner@vger.kernel.org
List-ID: 

3.18-stable review patch. If anyone has any objections, please let me know.

------------------

From: Peter Zijlstra

commit 12ca6ad2e3a896256f086497a7c7406a547ee373 upstream.

There's a race on CPU unplug where we free the swevent hash array
while it can still have events on. This will result in a
use-after-free which is BAD.

Simply do not free the hash array on unplug. This leaves the thing
around and no use-after-free takes place.

When the last swevent dies, we do a for_each_possible_cpu() iteration
anyway to clean these up, at which time we'll free it, so no leakage
will occur.

Reported-by: Sasha Levin
Tested-by: Sasha Levin
Signed-off-by: Peter Zijlstra (Intel)
Cc: Arnaldo Carvalho de Melo
Cc: Frederic Weisbecker
Cc: Jiri Olsa
Cc: Linus Torvalds
Cc: Peter Zijlstra
Cc: Stephane Eranian
Cc: Thomas Gleixner
Cc: Vince Weaver
Signed-off-by: Ingo Molnar
Signed-off-by: Amit Pundir
Signed-off-by: Greg Kroah-Hartman

---
 kernel/events/core.c | 20 +-------------------
 1 file changed, 1 insertion(+), 19 deletions(-)

--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -5851,9 +5851,6 @@ struct swevent_htable {
 
 	/* Recursion avoidance in each contexts */
 	int				recursion[PERF_NR_CONTEXTS];
-
-	/* Keeps track of cpu being initialized/exited */
-	bool				online;
 };
 
 static DEFINE_PER_CPU(struct swevent_htable, swevent_htable);
@@ -6111,14 +6108,8 @@ static int perf_swevent_add(struct perf_
 	hwc->state = !(flags & PERF_EF_START);
 
 	head = find_swevent_head(swhash, event);
-	if (!head) {
-		/*
-		 * We can race with cpu hotplug code. Do not
-		 * WARN if the cpu just got unplugged.
-		 */
-		WARN_ON_ONCE(swhash->online);
+	if (WARN_ON_ONCE(!head))
 		return -EINVAL;
-	}
 
 	hlist_add_head_rcu(&event->hlist_entry, head);
 
@@ -6185,7 +6176,6 @@ static int swevent_hlist_get_cpu(struct
 	int err = 0;
 
 	mutex_lock(&swhash->hlist_mutex);
-
 	if (!swevent_hlist_deref(swhash) && cpu_online(cpu)) {
 		struct swevent_hlist *hlist;
 
@@ -8342,7 +8332,6 @@ static void perf_event_init_cpu(int cpu)
 	struct swevent_htable *swhash = &per_cpu(swevent_htable, cpu);
 
 	mutex_lock(&swhash->hlist_mutex);
-	swhash->online = true;
 	if (swhash->hlist_refcount > 0) {
 		struct swevent_hlist *hlist;
 
@@ -8395,14 +8384,7 @@ static void perf_event_exit_cpu_context(
 
 static void perf_event_exit_cpu(int cpu)
 {
-	struct swevent_htable *swhash = &per_cpu(swevent_htable, cpu);
-
 	perf_event_exit_cpu_context(cpu);
-
-	mutex_lock(&swhash->hlist_mutex);
-	swhash->online = false;
-	swevent_hlist_release(swhash);
-	mutex_unlock(&swhash->hlist_mutex);
 }
 #else
 static inline void perf_event_exit_cpu(int cpu) { }
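
[Reviewer note, not part of the patch: the "for_each_possible_cpu() iteration"
the changelog relies on is the swevent hlist put path that runs when the last
software event goes away. A simplified sketch of that path follows; the helper
names (swevent_hlist_put, swevent_hlist_put_cpu) and bodies are paraphrased
from the surrounding core.c code rather than quoted from this patch, so treat
it purely as an illustration.]

static void swevent_hlist_put_cpu(int cpu)
{
	struct swevent_htable *swhash = &per_cpu(swevent_htable, cpu);

	mutex_lock(&swhash->hlist_mutex);
	/* Last user of this CPU's hash: free it here rather than on unplug. */
	if (!--swhash->hlist_refcount)
		swevent_hlist_release(swhash);
	mutex_unlock(&swhash->hlist_mutex);
}

static void swevent_hlist_put(void)
{
	int cpu;

	/*
	 * Walk every possible CPU, not just the online ones, so a hash
	 * left behind by an unplugged CPU still gets freed here.
	 */
	for_each_possible_cpu(cpu)
		swevent_hlist_put_cpu(cpu);
}

Because this teardown covers offline CPUs as well, dropping the release from
perf_event_exit_cpu() trades the use-after-free for, at worst, a hash table
that lingers until the last swevent is destroyed.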