From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 19 Jun 2017 16:09:39 +0100
From: Mark Rutland
To: Andi Kleen
Cc: Alexey Budankov, Peter Zijlstra, Ingo Molnar,
 Arnaldo Carvalho de Melo, Alexander Shishkin, Kan Liang,
 Dmitri Prokhorov, Valery Cherepennikov, David Carrillo-Cisneros,
 Stephane Eranian, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v3 1/n] perf/core: addressing 4x slowdown during
 per-process profiling of STREAM benchmark on Intel Xeon Phi
Message-ID: <20170619150939.GA4555@leverpostej>
References: <09226446-39b9-9bd2-d60f-b9bb947987c5@linux.intel.com>
 <20170615195618.GA8807@leverpostej>
 <07a76338-4c71-569a-d36e-7d6bcd10bd74@linux.intel.com>
 <20170616090938.GB20092@leverpostej>
 <22a2dafb-de05-199b-54ed-0c3b24349826@linux.intel.com>
 <20170619124639.GA3661@leverpostej>
 <20170619133831.GB3894@leverpostej>
 <20170619145908.GA23705@tassilo.jf.intel.com>
In-Reply-To: <20170619145908.GA23705@tassilo.jf.intel.com>
List-ID: <linux-kernel.vger.kernel.org>

On Mon, Jun 19, 2017 at 07:59:08AM -0700, Andi Kleen wrote:
> > > For comparison, can you give --per-thread a go prior to these patches
> > > being applied?
> >
> > FWIW, I had a go with (an old) perf record on an arm64 system using
> > --per-thread, and I see that no samples are recorded, which seems like a
> > bug.
> >
> > With --per-thread, the slowdown was ~20%, whereas with the defaults it
> > was > 400%.
>
> I'm not sure what the point of the experiment is? It has to work
> with reasonable overhead even without --per-thread.
>
> FWIW Alexey already root caused the problem, so there's no need
> to restart the debugging.

Sure; we understand where that overhead is coming from, we have an idea
as to how to mitigate it, and we should try to make that work if we can.
I was trying to get a feel for how that compares to what we can do
today.

For other reasons (e.g. fd exhaustion), opening NR_CPUS * n events might
not be a great idea on systems with a huge number of CPUs. We might want
a heuristic in the perf tool regardless.

Thanks,
Mark.
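As a rough back-of-the-envelope for the fd-exhaustion point above: in the default (per-CPU) mode, the tool opens one event fd per CPU per requested event, whereas --per-thread opens one per monitored thread per event. This is only a sketch; every count below is hypothetical, not a measurement from the Xeon Phi system under discussion.

```python
# Hypothetical illustration of the fd cost of per-CPU vs --per-thread
# event opening. All counts are made-up example numbers.

n_cpus = 256      # hypothetical CPU count on a many-core system
n_events = 4      # events requested on the perf command line
n_threads = 8     # threads in the profiled process (hypothetical)

per_cpu_fds = n_cpus * n_events        # default mode: NR_CPUS * n events
per_thread_fds = n_threads * n_events  # --per-thread: n_threads * n events

print(per_cpu_fds)     # 1024
print(per_thread_fds)  # 32
```

With a per-process RLIMIT_NOFILE commonly defaulting to 1024, the per-CPU figure alone can exhaust the fd table on a large machine, which is the kind of case a heuristic in the tool could catch.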