From: Waiman Long
Date: Tue, 07 May 2013 10:40:22 -0400
To: Jiri Olsa
CC: Arnaldo Carvalho de Melo, Stephane Eranian, Namhyung Kim,
    Peter Zijlstra, Paul Mackerras, Ingo Molnar,
    linux-kernel@vger.kernel.org, "Chandramouleeswaran, Aswin",
    "Norton, Scott J"
Subject: Re: [PATCH] perf: fix symbol processing bug and greatly improve performance
Message-ID: <51891256.2040402@hp.com>
In-Reply-To: <20130507093031.GA1076@krava.brq.redhat.com>
References: <1367847833-4932-1-git-send-email-Waiman.Long@hp.com>
 <20130507093031.GA1076@krava.brq.redhat.com>

On 05/07/2013 05:30 AM, Jiri Olsa wrote:
> On Mon, May 06, 2013 at 09:43:53AM -0400, Waiman Long wrote:
>> When "perf record" was used on a large machine with a lot of CPUs,
>> the perf post-processing time could take many minutes or even hours
>> depending on how large the resulting perf.data file was.
>>
>> While running the AIM7 1500-user high_systime workload on an 80-core
>> x86-64 system with a 3.9 kernel, the workload itself took about 2
>> minutes to run and the perf.data file had a size of 1108.746 MB.
>> However, the post-processing step took more than 10 minutes.
>>
>> With a gprof-profiled perf binary, the time spent by perf was as
>> follows:
>>
>>   %   cumulative    self                self     total
>>  time    seconds   seconds      calls  s/call    s/call  name
>>  96.90    822.10    822.10     192156    0.00      0.00  dsos__find
>>   0.81    828.96      6.86  172089958    0.00      0.00  rb_next
>>   0.41    832.44      3.48   48539289    0.00      0.00  rb_erase
>>
>> So 97% (822 seconds) of the time was spent in a single dsos__find()
>> function. After analyzing the call-graph data below:
>>
>> -----------------------------------------------
>>                 0.00  822.12  192156/192156     map__new [6]
>> [7]     96.9    0.00  822.12  192156            vdso__dso_findnew [7]
>>               822.10    0.00  192156/192156     dsos__find [8]
>>                 0.01    0.00  192156/192156     dsos__add [62]
>>                 0.01    0.00  192156/192366     dso__new [61]
>>                 0.00    0.00       1/45282525   memdup [31]
>>                 0.00    0.00  192156/192230     dso__set_long_name [91]
>> -----------------------------------------------
>>               822.10    0.00  192156/192156     vdso__dso_findnew [7]
>> [8]     96.9  822.10    0.00  192156            dsos__find [8]
>> -----------------------------------------------
>>
>> It was found that the vdso__dso_findnew() function failed to locate
>> VDSO__MAP_NAME ("[vdso]") in the dso list and had to insert a new
>> entry at the end 192156 times. This problem is due to the fact that
>> there are two types of names in a dso entry - a short name and a
>> long name. The initial dso__new() sets both the short and long names
>> to "[vdso]". After that, vdso__dso_findnew() modifies the long name
>> to something like /tmp/perf-vdso.so-NoXkDj. The dsos__find() function
>> only compares the long name. As a result, the same vdso entry is
>> duplicated many times in the dso list. This bug increases memory
>> consumption as well as slowing the symbol processing time to a crawl.
> hi,
> the issue is there and fix looks ok, thanks!
>
> though I'm not able to get vdso callchains to pop out
> even by investigating report with vdso heavy workload.
>
> I'll have a closer look..

The test machine that I used has RHEL 6.4 installed with an upstream
3.9 kernel layered on top. The kernel config is based on the 6.4
configuration file with a modification to enable the X2APIC option
needed by the machine. Other than that, I didn't make many changes to
the base configuration. I used the "-a -s" options when running
perf-record.

I don't think the vdso callchains were a major part of the workload
that I tested. I think it is the high number of CPU cores plus the
high number of users (1500) that caused the performance bottleneck to
surface. On a smaller machine, those bottlenecks may be much less
noticeable. The vdso call-chain dominates the post-processing time
because of the need to search through the whole DSO list for the vdso
library; that list can grow to 2M+ in my test case.

Regards,
Longman
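P.S. To make the failure mode concrete, here is a minimal, self-contained
C sketch of the lookup pattern described above. The names (struct dso,
dsos__find, dsos__add, vdso__dso_findnew) only loosely mirror the perf
sources and the temp-file naming is simplified, so treat it as an
illustration of the duplicate-insertion behaviour, not the actual perf code:

/*
 * Sketch of the vdso lookup bug: the lookup compares only the long
 * name, but the long name of an inserted vdso entry is renamed right
 * after insertion, so every later lookup misses and adds a duplicate.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct dso {
	char short_name[32];	/* stays "[vdso]" */
	char long_name[64];	/* later overwritten with a temp file path */
	struct dso *next;
};

static struct dso *dso_list;

/* Lookup compares only the long name -- the root of the problem. */
static struct dso *dsos__find(const char *name)
{
	struct dso *d;

	for (d = dso_list; d; d = d->next)	/* linear scan of the list */
		if (!strcmp(d->long_name, name))
			return d;
	return NULL;
}

static struct dso *dsos__add(const char *name)
{
	struct dso *d = calloc(1, sizeof(*d));

	snprintf(d->short_name, sizeof(d->short_name), "%s", name);
	snprintf(d->long_name, sizeof(d->long_name), "%s", name);
	d->next = dso_list;
	dso_list = d;
	return d;
}

/*
 * Each call looks up "[vdso]" by long name, misses (the previously
 * added entry had its long name renamed), and adds a duplicate.
 */
static struct dso *vdso__dso_findnew(int call)
{
	struct dso *d = dsos__find("[vdso]");

	if (!d) {
		d = dsos__add("[vdso]");
		/* long name becomes something like /tmp/perf-vdso.so-XXXXXX */
		snprintf(d->long_name, sizeof(d->long_name),
			 "/tmp/perf-vdso.so-%06d", call);
	}
	return d;
}

int main(void)
{
	int i, n = 0;
	struct dso *d;

	for (i = 0; i < 5; i++)
		vdso__dso_findnew(i);

	for (d = dso_list; d; d = d->next)
		n++;

	/* prints 5: one duplicate "[vdso]" entry per lookup */
	printf("dso list length: %d\n", n);
	return 0;
}

In this sketch, letting the lookup also consider the short name keeps the
list at a single vdso entry and the scan cheap; the real fix in perf may
achieve this differently.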