Date: Mon, 10 Sep 2018 12:23:28 +0200
From: Jiri Olsa <jolsa@redhat.com>
To: Ingo Molnar
Cc: Alexey Budankov, Peter Zijlstra, Arnaldo Carvalho de Melo, Alexander Shishkin, Namhyung Kim, Andi Kleen, linux-kernel <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH v8 0/3]: perf: reduce data loss when profiling highly parallel CPU bound workloads
Message-ID:
<20180910102328.GC15548@krava>
References: <20180910091841.GA4664@gmail.com> <20180910095909.GA15548@krava> <20180910100303.GA101776@gmail.com> <20180910100841.GB15548@krava> <20180910101325.GA5544@gmail.com>
In-Reply-To: <20180910101325.GA5544@gmail.com>

On Mon, Sep 10, 2018 at 12:13:25PM +0200, Ingo Molnar wrote:
> 
> * Jiri Olsa wrote:
> 
> > On Mon, Sep 10, 2018 at 12:03:03PM +0200, Ingo Molnar wrote:
> > > 
> > > * Jiri Olsa wrote:
> > > 
> > > > > Per-CPU threading the record session would have so many other advantages as well (scalability,
> > > > > etc.).
> > > > > 
> > > > > Jiri did per-CPU recording patches a couple of months ago, not sure how usable they are at the
> > > > > moment?
> > > > 
> > > > it's still usable, I can rebase it and post a branch pointer,
> > > > the problem is I haven't been able to find a case with a real
> > > > performance benefit yet.. ;-)
> > > > 
> > > > perhaps because I haven't tried on server with really big cpu
> > > > numbers
> > > 
> > > Maybe Alexey could pick up from there? Your concept looked fairly mature to me
> > > and I tried it on a big-CPU box back then and there were real improvements.
> > 
> > too bad u did not share your results, it could have been already in ;-)
> 
> Yeah :-/ Had a proper round of testing on my TODO, then the big box I'd have tested it on
> broke ...
> 
> > let me rebase/repost once more and let's see
> 
> Thanks!
> 
> > I think we could benefit from both multiple threads event reading
> > and AIO writing for perf.data.. it could be merged together
> 
> So instead of AIO writing perf.data, why not just turn perf.data into a directory structure
> with per CPU files? That would allow all sorts of neat future performance features such as

that's basically what the multiple-thread record patchset does

jirka

> mmap() or splice() based zero-copy.
> 
> User-space post-processing can then read the files and put them into global order - or use the
> per CPU nature of them, which would be pretty useful too.
> 
> Also note how well this works on NUMA as well, as the backing pages would be allocated in a
> NUMA-local fashion.
> 
> I.e. the whole per-CPU threading would enable such a separation of the tracing/event streams
> and would allow true scalability.
> 
> Thanks,
> 
> 	Ingo
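For illustration, the "perf.data as a directory with per-CPU files" layout discussed in the thread could be sketched roughly as below. This is a hypothetical sketch only, not the actual patchset: the directory name, the `data.<cpu>` naming and the placeholder payload line are all assumptions made for the example.

```c
#include <assert.h>
#include <errno.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>

/*
 * Hypothetical sketch: create "dir" and one data file per CPU.
 * Returns 0 on success, -1 on failure.  In a real per-CPU threaded
 * record session, each per-CPU thread would stream the contents of
 * its mmap'ed ring buffer into its own file (with NUMA-local backing
 * pages); the PERFDATA line below is just a stand-in payload.
 */
int create_percpu_layout(const char *dir, int ncpus)
{
	char path[256];
	int cpu;

	/* perf.data becomes a directory instead of a single flat file */
	if (mkdir(dir, 0755) && errno != EEXIST)
		return -1;

	for (cpu = 0; cpu < ncpus; cpu++) {
		FILE *f;

		snprintf(path, sizeof(path), "%s/data.%d", dir, cpu);
		f = fopen(path, "w");
		if (!f)
			return -1;
		/* placeholder for this CPU's event stream */
		fprintf(f, "PERFDATA cpu %d\n", cpu);
		fclose(f);
	}
	return 0;
}
```

Because each file is owned by exactly one CPU's thread, the writes need no cross-CPU locking, and the data could later be filled in via splice()- or mmap()-based zero-copy as Ingo suggests; post-processing can then either merge the files into global order or consume them per CPU.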