From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 8 Jul 2016 16:48:38 +0200
From: Ingo Molnar
To: Josh Poimboeuf
Cc: Byungchul Park, peterz@infradead.org, linux-kernel@vger.kernel.org,
 walken@google.com, Frédéric Weisbecker, Peter Zijlstra
Subject: Re: [PATCH 1/2] x86/dumpstack: Optimize save_stack_trace
Message-ID: <20160708144838.GA17466@gmail.com>
References: <1467628075-7289-1-git-send-email-byungchul.park@lge.com>
 <20160707101740.GF2279@X58A-UD3R>
 <20160708100819.GA17300@gmail.com>
 <20160708142929.lvxgapbxfv5wfbk2@treble>
In-Reply-To: <20160708142929.lvxgapbxfv5wfbk2@treble>
User-Agent: Mutt/1.5.24 (2015-08-30)
List-ID: <linux-kernel.vger.kernel.org>

* Josh Poimboeuf wrote:

> On Fri, Jul 08, 2016 at 12:08:19PM +0200, Ingo Molnar wrote:
> >
> > * Byungchul Park wrote:
> >
> > > On Mon, Jul 04, 2016 at 07:27:54PM +0900, Byungchul Park wrote:
> > > > I suggested this patch on https://lkml.org/lkml/2016/6/20/22. However,
> > > > I want to proceed separately since it's somewhat independent from each
> > > > other. Frankly speaking, I want this patchset to be accepted first so
> > > > that the crossrelease feature can use this optimized
> > > > save_stack_trace_norm(), which makes crossrelease work smoothly.
> > >
> > > What do you think about this way to improve it?
> >
> > I like both of your improvements, the speedup is impressive:
> >
> >   [    2.327597] save_stack_trace() takes 87114 ns
> >   ...
> >   [    2.781694] save_stack_trace() takes 20044 ns
> >   ...
> >   [    3.103264] save_stack_trace takes 3821 (sched_lock)
> >
> > Could you please also measure call-graph recording (perf record -g): how much
> > faster does it get with your patches, and what are our remaining performance
> > hot spots?
> >
> > Could you please also merge your patches to the latest -tip tree, because this
> > commit I merged earlier today:
> >
> >   81c2949f7fdc x86/dumpstack: Add show_stack_regs() and use it
> >
> > conflicts with your patches. (I'll push this commit out later today.)
> >
> > Also, could you please rename the _norm names to _fast or so, to signal that
> > this is a faster but less reliable method to get a stack dump? Nobody knows
> > what '_norm' means, but '_fast' is pretty self-explanatory.
>
> Hm, but is the print_context_stack_bp() variant really less reliable? From
> what I can tell, its only differences vs. print_context_stack() are:
>
> - It doesn't scan the stack for "guesses" (which are 'unreliable' and
>   are ignored by the ops->address() callback anyway).
>
> - It stops if ops->address() returns an error (which in this case means
>   the array is full anyway).
>
> - It stops if the address isn't a kernel text address. I think this
>   shouldn't normally be possible unless there's some generated code like
>   BPF on the stack. Maybe it could be slightly improved for this case.
>
> So instead of adding a new save_stack_trace_fast() variant, why don't we
> just modify the existing save_stack_trace() to use
> print_context_stack_bp()?

Ok, agreed!

Thanks,

	Ingo