From: Namhyung Kim
Date: Wed, 13 Apr 2022 13:28:21 -0700
Subject: Re: [PATCH v2 0/4] Tidy up symbol end fixup
To: Ian Rogers
Cc: John Garry, Will Deacon, Mathieu Poirier, Leo Yan, Peter Zijlstra,
 Ingo Molnar, Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
 Jiri Olsa, James Clark, Alexandre Truong, German Gomez, Dave Marchevsky,
 Song Liu, Ravi Bangoria, Li Huafei, Martin Liška, William Cohen,
 Riccardo Mancini, Masami Hiramatsu, Thomas Richter, Lexi Shao,
 Remi Bernon, Michael Petlan, Denis Nikitin,
 linux-arm-kernel@lists.infradead.org, linux-perf-users, linux-kernel,
 Stephane Eranian
References: <20220412154817.2728324-1-irogers@google.com>
X-Mailing-List: linux-perf-users@vger.kernel.org

On Tue, Apr 12, 2022 at 4:48 PM Ian Rogers wrote:
>
> On Tue, Apr 12, 2022 at 3:12 PM Namhyung Kim wrote:
> >
> > Hi Ian,
> >
> > On Tue, Apr 12, 2022 at 08:48:13AM -0700, Ian Rogers wrote:
> > > Fixing up more symbol ends as introduced in:
> > > https://lore.kernel.org/lkml/20220317135536.805-1-mpetlan@redhat.com/
> > > caused perf annotate to run into memory limits - every symbol holds
> > > all the disassembled code in the annotation, and so making symbols
> > > ends further away dramatically increased memory usage (40MB to
> > > >1GB). Modify the symbol end logic so that special kernel cases aren't
> > > applied in the common case.
> > >
> > > v2. Drops a merged patch. Fixes a build issue with libbfd enabled.
> >
> > How about just like this? We can get rid of arch functions as they
> > mostly do the same thing (kernel vs module boundary check).
> >
> > Not tested. ;-)
> >
> > Thanks,
> > Namhyung
> >
> > --------------8<-------------
> >
> > diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
> > index dea0fc495185..df41d7266d91 100644
> > --- a/tools/perf/util/symbol.c
> > +++ b/tools/perf/util/symbol.c
> > @@ -35,6 +35,7 @@
> >  #include "path.h"
> >  #include 
> >  #include 
> > +#include   // page_size
> >
> >  #include 
> >  #include 
> > @@ -231,8 +226,16 @@ void symbols__fixup_end(struct rb_root_cached *symbols)
> >                 prev = curr;
> >                 curr = rb_entry(nd, struct symbol, rb_node);
> >
> > -               if (prev->end == prev->start || prev->end != curr->start)
> > -                       arch__symbols__fixup_end(prev, curr);
> > +               if (prev->end == prev->start) {
> > +                       /* Last kernel symbol mapped to end of page */
>
> I like the simpler logic but don't like applying this in symbol-elf
> given the comment says it is kernel specific - so we could keep the
> is_kernel change.

I'm fine with the change. :)

> >
> > +                       if (!strchr(prev->name, '[') != !strchr(curr->name, '['))
>
> I find this condition not to be intention revealing. On ARM there is
> also an || for the condition reversed. When this is in an is_kernel
> block then I think it is clear this is kernel hack, so I think it is
> good to comment on what the condition is for.

Yeah, usually modules are loaded after the kernel image but it seems
ARM could load them before the kernel. So I made the change not to
call strchr() again. But we might need to consider the special
"[__builtin_kprobes]" symbols.

> >
> > +                               prev->end = roundup(prev->end + 1, page_size);
>
> Currently the roundup varies per architecture, but it is not clear to
> me that it matters.

Yeah, it would be the same as the logic for the last entry to be more
conservative.

> >
> > +                       else
>
> I think we should comment here that we're extending zero sized symbols
> to the start of the next symbol.

Sounds good.

Thanks,
Namhyung

> >
> > +                               prev->end = curr->start;
> > +
> > +                       pr_debug4("%s sym:%s end:%#" PRIx64 "\n",
> > +                                 __func__, prev->name, prev->end);
> > +               }
>
> Thanks,
> Ian
>
> >  }
> >
> >  /* Last entry */
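
For readers following the discussion above, here is a small standalone C
sketch (not part of the patch and not perf's actual code) of the two details
being debated: using negated strchr() results as a logical XOR to detect a
kernel-image/module boundary, and rounding a zero-sized symbol's end up to
the next page. The symbol names, the 4096-byte page size and the local
roundup() macro are assumptions made only for this example.

/* Standalone illustration of the boundary check and page rounding
 * discussed in the thread; everything here is hypothetical. */
#include <stdio.h>
#include <string.h>
#include <stdint.h>

#define PAGE_SIZE 4096ULL
/* Same shape as the kernel's roundup(): round x up to a multiple of y. */
#define roundup(x, y) ((((x) + (y) - 1) / (y)) * (y))

/* Module symbols carry a "[module]" suffix in kallsyms; built-in kernel
 * symbols do not.  Normalizing each strchr() result with '!' turns the
 * two pointers into booleans, so "!=" acts as a logical XOR: true only
 * when exactly one of the two names looks like a module symbol. */
static int crosses_module_boundary(const char *prev_name, const char *curr_name)
{
	return !strchr(prev_name, '[') != !strchr(curr_name, '[');
}

int main(void)
{
	/* Hypothetical zero-sized symbol ending at 0x1234 on the kernel
	 * side of a kernel/module boundary. */
	uint64_t end = 0x1234;

	printf("boundary (kernel vs module): %d\n",
	       crosses_module_boundary("vmlinux_last_sym", "first_mod_sym [ext4]"));
	printf("no boundary (both kernel):   %d\n",
	       crosses_module_boundary("vmlinux_sym_a", "vmlinux_sym_b"));

	/* At a boundary the zero-sized symbol is extended to the end of
	 * its page rather than to the next symbol's start. */
	printf("end 0x%llx rounded to page:  0x%llx\n",
	       (unsigned long long)end,
	       (unsigned long long)roundup(end + 1, PAGE_SIZE));
	return 0;
}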