Date: Tue, 11 Oct 2022 07:57:11 -0400
From: Andi Kleen
To: Namhyung Kim
Cc: Arnaldo Carvalho de Melo, Jiri Olsa, Ingo Molnar, Peter Zijlstra, LKML,
 Ian Rogers, Adrian Hunter, linux-perf-users@vger.kernel.org, Kan Liang,
 Leo Yan, Athira Rajeev, James Clark, Xing Zhengjun
Subject: Re: [RFC/PATCHSET 00/19] perf stat: Cleanup counter aggregation (v1)
References: <20221010053600.272854-1-namhyung@kernel.org>

>> My main concern would be subtle regressions, since there are so many
>> different combinations and ways to travel through the code, and a lot
>> of things are not covered by unit tests. When I worked on the code it
>> was difficult to keep it all working. I assume you have some way to
>> enumerate them all and have tested that the output is identical?
>
> Right, that's my concern too.
>
> I have tested many combinations manually and checked whether they
> produced similar results. I had a script to test many combinations,
> but had to check the output manually. But the problem is that I cannot
> test all hardware and, more importantly, it's hard to check
> programmatically whether the output is the same or not.
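For the programmatic check, something along these lines might be a
workable starting point. This is only a rough, untested sketch: the
./perf.old and ./perf.new paths, the option matrix and the output
normalization are invented for illustration, not taken from your actual
script.

#!/usr/bin/env python3
# Rough, untested sketch: run an old and a new perf binary over a small
# matrix of "perf stat" option combinations with the "dummy" software
# event (whose count never changes between runs) and diff the normalized
# CSV output.  The binary paths and the option matrix are invented for
# illustration; the system-wide modes usually need root or a relaxed
# kernel.perf_event_paranoid setting.
import itertools
import re
import subprocess
import sys

OLD, NEW = "./perf.old", "./perf.new"   # hypothetical binaries to compare
WORKLOAD = ["sleep", "0.1"]

AGGR = [[], ["-A"], ["--per-core"], ["--per-socket"]]
EXTRA = [[], ["--no-merge"], ["-r", "2"]]

def run(binary, opts):
    cmd = [binary, "stat", "-a", "-x", ";", "-e", "dummy"] + opts + ["--"] + WORKLOAD
    out = subprocess.run(cmd, capture_output=True, text=True).stderr
    # Counter-enable times and percentages still vary from run to run, so
    # blank every purely numeric field after the first one.  Crude (it also
    # hides the count column in the prefixed per-CPU/per-core layouts), but
    # good enough for a smoke test of the output format.
    norm = []
    for line in out.splitlines():
        fields = line.split(";")
        fields = fields[:1] + [re.sub(r"^[0-9.]+$", "N", f) for f in fields[1:]]
        norm.append(";".join(fields))
    return "\n".join(norm)

failures = 0
for aggr, extra in itertools.product(AGGR, EXTRA):
    opts = aggr + extra
    if run(OLD, opts) != run(NEW, opts):
        print("output differs for:", " ".join(opts) or "(defaults)")
        failures += 1
sys.exit(1 if failures else 0)

With the dummy event the counts themselves never change, so any textual
difference that survives the normalization points at a formatting or
aggregation change rather than at run-to-run noise.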
You can use the "dummy" event or some other software event (e.g. a probe
on some syscall) to get stable numbers.

I don't think we need to cover all hardware for the output options; the
different events should behave similarly. But we do need some coverage
for the different aggregation modes.

Or we could add some more tool events just for testing purposes. That
would allow covering the different core scopes etc. and would easily
allow generating known counts; see the rough sketch below.

-Andi
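An untested illustration of the "known counts" idea with the existing
duration_time tool event (the CSV field positions are an assumption and
may differ between perf versions):

#!/usr/bin/env python3
# Untested illustration of a "known count": the duration_time tool event
# for "sleep 1" should report roughly 1e9 ns regardless of how counters
# are aggregated, so the value itself can be asserted instead of only
# being compared between two perf builds.  The CSV field positions below
# are assumptions and may differ between perf versions.
import subprocess

cmd = ["perf", "stat", "-x", ";", "-e", "duration_time", "--", "sleep", "1"]
out = subprocess.run(cmd, capture_output=True, text=True).stderr

for line in out.splitlines():
    fields = line.split(";")
    # Default CSV layout: value;unit;event;...
    if len(fields) > 2 and fields[2].startswith("duration_time"):
        ns = int(fields[0].strip())
        assert 0.9e9 < ns < 1.5e9, f"unexpected duration_time: {ns} ns"
        print("duration_time looks sane:", ns, "ns")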