Date: Wed, 4 Jan 2017 11:01:02 +0100
From: Peter Zijlstra
To: Masami Hiramatsu
Cc: Ingo Molnar, Josh Poimboeuf, linux-kernel@vger.kernel.org,
	Ananth N Mavinakayanahalli, Thomas Gleixner, "H. Peter Anvin",
	Andrey Konovalov, Steven Rostedt
Subject: Re: [PATCH tip/master v3] kprobes: extable: Identify kprobes' insn-slots as kernel text area
Message-ID: <20170104100102.GE25813@worktop.programming.kicks-ass.net>
References: <20161226133012.347f7e45dbf8a8d671ea07fb@kernel.org>
	<148281924021.12148.14275351848773920571.stgit@devbox>
	<20170103105402.GB25813@worktop.programming.kicks-ass.net>
	<20170104140604.5e2d53c69580f5c67ea6cd62@kernel.org>
In-Reply-To: <20170104140604.5e2d53c69580f5c67ea6cd62@kernel.org>

On Wed, Jan 04, 2017 at 02:06:04PM +0900, Masami Hiramatsu wrote:
> On Tue, 3 Jan 2017 11:54:02 +0100
> Peter Zijlstra wrote:
> 
> > How many entries should one expect on that list? I spent quite a bit of
> > time reducing the cost of is_module_text_address() a while back, and I
> > see that both ftrace (which actually needs this to be fast) and now
> > kprobes have linear list walks in here.
> 
> It depends on how many probes are used and optimized. However, in most
> cases there should be one entry (unless the user defines more than 32
> optimized probes on x86; in my experience that is a very rare case. :) )

OK, that's good :-)

> > I'm assuming the ftrace thing to be mostly empty, since I never saw it
> > in my benchmarks back then, but it is something Steve should look at, I
> > suppose.
> > 
> > Similarly, the changelog here should include some talk about worst-case
> > costs.
> 
> Would you have any good benchmark to measure it?

Not trivially so; what I did was cobble together a debugfs file that
measures the average PMI time in perf_sample_event_took(), and a module
that has a 10-deep callchain around a while(1) loop. Then perf record
with callchains for a few seconds.

Generating the callchain does the unwinder thing and ends up calling
is_kernel_address() a lot.

The case I worked on was 0 modules vs 100+ modules in a distro build,
which was fairly obviously painful back then, since
is_module_text_address() used a linear lookup.

I'm not sure I still have all those bits, but I can dig around a bit if
you're interested.
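[Editor's note] The linear list walk under discussion — checking whether an address falls inside one of the kprobe insn-slot pages — follows a common pattern that can be sketched in userspace C. The struct and function names below are illustrative, not the kernel's actual identifiers:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-in for the kernel's list of insn-slot pages. */
struct slot_page {
	uintptr_t base;         /* start of the executable slot page */
	size_t size;            /* typically one page of slots       */
	struct slot_page *next; /* singly linked, walked linearly    */
};

/*
 * Linear walk, O(n) in the number of slot pages -- the worst-case cost
 * Peter is asking about.  Per Masami, n is usually 1, so the common
 * case is a single range check.
 */
static bool addr_in_insn_slots(const struct slot_page *head, uintptr_t addr)
{
	for (const struct slot_page *p = head; p; p = p->next)
		if (addr >= p->base && addr < p->base + p->size)
			return true;
	return false;
}
```

Since the unwinder performs a lookup like this once per frame per sample, the total cost scales roughly with list length times unwind depth times sample rate, which is why the list staying short matters.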
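[Editor's note] The benchmark module Peter describes — a 10-deep callchain spinning in a loop — can be sketched roughly as below. This is a userspace analogue with hypothetical function names and a bounded loop so it terminates; the actual thing was a kernel module spinning in while(1) while perf sampled it:

```c
#include <assert.h>

/* Keep every frame out of line so the unwinder sees all ten. */
#define NOINLINE __attribute__((noinline))

static volatile unsigned long spins;

/*
 * Deepest frame: in the real module this would be while (1);
 * bounded here so the sketch runs to completion.
 */
static NOINLINE void depth10(void)
{
	for (int i = 0; i < 1000000; i++)
		spins++;
}
static NOINLINE void depth9(void) { depth10(); }
static NOINLINE void depth8(void) { depth9(); }
static NOINLINE void depth7(void) { depth8(); }
static NOINLINE void depth6(void) { depth7(); }
static NOINLINE void depth5(void) { depth6(); }
static NOINLINE void depth4(void) { depth5(); }
static NOINLINE void depth3(void) { depth4(); }
static NOINLINE void depth2(void) { depth3(); }
static NOINLINE void depth1(void) { depth2(); }
```

Sampling something shaped like this with perf record -g forces the unwinder to validate each of the ten return addresses on every sample, which is what makes the per-address text-lookup cost visible in the PMI time.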