Date: Wed, 25 Oct 2017 11:07:50 +0200
From: Ingo Molnar
To: Jiri Olsa
Cc: "Liang, Kan", acme@kernel.org, mingo@redhat.com,
 linux-kernel@vger.kernel.org, peterz@infradead.org, jolsa@kernel.org,
 wangnan0@huawei.com, hekuang@huawei.com, namhyung@kernel.org,
 alexander.shishkin@linux.intel.com, "Hunter, Adrian", ak@linux.intel.com
Subject: Re: [PATCH V3 0/6] event synthesization multithreading for perf record
Message-ID: <20171025090750.3kt3dtonrjl7gmgr@gmail.com>
References: <1508529934-369393-1-git-send-email-kan.liang@intel.com>
 <20171023114822.ijbixdkhysinlwqv@gmail.com>
 <37D7C6CF3E00A74B8858931C1DB2F077537D874E@SHSMSX103.ccr.corp.intel.com>
 <20171024092200.wef6b66ecmhrvaja@gmail.com>
 <20171024114755.GA2716@krava>
 <20171024125944.uswroptykcqrgjox@gmail.com>
 <20171025090034.GA27028@krava>
In-Reply-To: <20171025090034.GA27028@krava>

* Jiri Olsa wrote:

> On Tue, Oct 24, 2017 at 02:59:44PM +0200, Ingo Molnar wrote:
> >
> > * Jiri Olsa wrote:
> >
> > > I recently made some changes to threaded record, which are based
> > > on Namhyung's time* API, which is needed to read/sort the data afterwards,
> > >
> > > but I wasn't able to get any substantial and consistent reduction of LOST
> > > events, and then I got sidetracked and did not finish, but it's in here:
> >
> > So, in the context of system-wide profiling, I think the way that would
> > work best is the following:
> >
> >   thread #0 binds itself to CPU#0 (via sched_setaffinity) and creates a per-CPU event on CPU#0
> >   thread #1 binds itself to CPU#1 (via sched_setaffinity) and creates a per-CPU event on CPU#1
> >   thread #2 binds itself to CPU#2 (via sched_setaffinity) and creates a per-CPU event on CPU#2
> >
> >   etc.
> >
> > Is this how you implemented it?
>
> in a way ;-) but I made it more generic and let record create just a few
> threads and let them share a CPU subset.. so there was no binding
>
> > If the threads in the thread pool are just free-running then the scheduler
> > might not migrate them to the 'right' CPU that is streaming the perf events,
> > and there will be a lot of cross-talk between CPUs.
>
> ok, it's easy to add binding and a 1:1 thread:CPU mapping now.. I'll retry

Please Cc: me - this is a really interesting aspect of perf scalability!

Thanks,

	Ingo
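
[Editor's note: a minimal, self-contained sketch of the 1:1 thread:CPU scheme
described above - each worker pins itself to one CPU via sched_setaffinity(2)
and opens a per-CPU event there via perf_event_open(2). This is only an
illustration of the idea, not perf record's actual implementation; the helper
and variable names are made up for this example. System-wide (pid == -1)
per-CPU events typically require root or a permissive perf_event_paranoid
setting. Build with: gcc -O2 -pthread sketch.c]

#define _GNU_SOURCE
#include <sched.h>
#include <pthread.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

/* There is no glibc wrapper for perf_event_open(), so call it directly. */
static int perf_event_open(struct perf_event_attr *attr, pid_t pid,
			   int cpu, int group_fd, unsigned long flags)
{
	return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

static void *worker(void *arg)
{
	int cpu = (int)(long)arg;
	cpu_set_t set;
	struct perf_event_attr attr;
	int fd;

	/* Bind this thread to its CPU so ring-buffer reads stay CPU-local. */
	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	if (sched_setaffinity(0, sizeof(set), &set))
		return NULL;

	/* System-wide (pid == -1), per-CPU hardware cycles event on this CPU. */
	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_HARDWARE;
	attr.config = PERF_COUNT_HW_CPU_CYCLES;
	fd = perf_event_open(&attr, -1, cpu, -1, 0);
	if (fd < 0)
		return NULL;

	/* ... mmap the ring buffer and stream samples from here ... */
	close(fd);
	return NULL;
}

int main(void)
{
	int ncpus = sysconf(_SC_NPROCESSORS_ONLN);
	pthread_t *tids = calloc(ncpus, sizeof(*tids));
	int cpu;

	/* One worker per online CPU: 1:1 thread:CPU mapping. */
	for (cpu = 0; cpu < ncpus; cpu++)
		pthread_create(&tids[cpu], NULL, worker, (void *)(long)cpu);
	for (cpu = 0; cpu < ncpus; cpu++)
		pthread_join(tids[cpu], NULL);
	free(tids);
	return 0;
}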