Date: Thu, 12 Mar 2015 16:15:07 +0100
From: Peter Zijlstra
To: Alexei Starovoitov
Cc: Ingo Molnar, Steven Rostedt, Namhyung Kim,
	Arnaldo Carvalho de Melo, Jiri Olsa, Masami Hiramatsu,
	"David S. Miller", Daniel Borkmann,
	linux-api@vger.kernel.org, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH v6 tip 2/8] tracing: attach BPF programs to kprobes
Message-ID: <20150312151507.GI2896@worktop.programming.kicks-ass.net>
References: <1426047534-8148-1-git-send-email-ast@plumgrid.com>
	<1426047534-8148-3-git-send-email-ast@plumgrid.com>
In-Reply-To: <1426047534-8148-3-git-send-email-ast@plumgrid.com>

On Tue, Mar 10, 2015 at 09:18:48PM -0700, Alexei Starovoitov wrote:
> +unsigned int trace_call_bpf(struct bpf_prog *prog, void *ctx)
> +{
> +	unsigned int ret;
> +	int cpu;
> +
> +	if (in_nmi())		/* not supported yet */
> +		return 1;
> +
> +	preempt_disable_notrace();
> +
> +	cpu = raw_smp_processor_id();
> +	if (unlikely(per_cpu(bpf_prog_active, cpu)++ != 0)) {
> +		/* since some bpf program is already running on this cpu,
> +		 * don't call into another bpf program (same or different)
> +		 * and don't send kprobe event into ring-buffer,
> +		 * so return zero here
> +		 */
> +		ret = 0;
> +		goto out;
> +	}
> +
> +	rcu_read_lock();

You've so far tried very hard to not get into tracing; and then you call
rcu_read_lock() :-)

So either document why this isn't a problem, provide
rcu_read_lock_notrace() or switch to RCU-sched and thereby avoid the
problem.

> +	ret = BPF_PROG_RUN(prog, ctx);
> +	rcu_read_unlock();
> +
> + out:
> +	per_cpu(bpf_prog_active, cpu)--;
> +	preempt_enable_notrace();
> +
> +	return ret;
> +}
> +EXPORT_SYMBOL_GPL(trace_call_bpf);