From: Nageswara Sastry <rnsastry@linux.ibm.com>
To: Kajol Jain <kjain@linux.ibm.com>,
	mpe@ellerman.id.au, linuxppc-dev@lists.ozlabs.org,
	nvdimm@lists.linux.dev, linux-kernel@vger.kernel.org,
	peterz@infradead.org, dan.j.williams@intel.com,
	ira.weiny@intel.com, vishal.l.verma@intel.com
Cc: santosh@fossix.org, maddy@linux.ibm.com,
	aneesh.kumar@linux.ibm.com, atrajeev@linux.vnet.ibm.com,
	vaibhav@linux.ibm.com, tglx@linutronix.de
Subject: Re: [PATCH v6 0/4] Add perf interface to expose nvdimm
Date: Fri, 25 Feb 2022 11:25:28 +0530	[thread overview]
Message-ID: <ddf18609-84ad-e263-7dff-7b2cc68557ef@linux.ibm.com> (raw)
In-Reply-To: <20220217163357.276036-1-kjain@linux.ibm.com>



On 17/02/22 10:03 pm, Kajol Jain wrote:
> This patchset adds performance stats reporting support for nvdimm.
> The added interface includes support for pmu register/unregister
> functions. A structure called nvdimm_pmu is added to carry
> arch/platform specific data such as the cpumask, the nvdimm device
> pointer and pmu event functions like event_init/add/read/del.
> Users can use the standard perf tool to access the perf events
> exposed via the pmu.
> 
> The interface also defines the supported event list and the config
> fields for the event attributes, along with their corresponding bit
> values, which are exported via sysfs. Patch 3 exposes IBM pseries
> platform nmem* device performance stats using this interface.
> 
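
For readers following along, here is a rough sketch of what the common
interface described above might look like. The field names, layout and
the register_nvdimm_pmu()/unregister_nvdimm_pmu() signatures below are
assumptions based on the cover letter text, not a verbatim copy of the
patches; see include/linux/nd.h and drivers/nvdimm/nd_perf.c in the
series for the authoritative definitions.

  /*
   * Sketch only: arch/platform code fills this in and hands it to the
   * common nvdimm code, which exposes it through the perf subsystem.
   */
  #include <linux/perf_event.h>
  #include <linux/platform_device.h>
  #include <linux/cpumask.h>
  #include <linux/cpuhotplug.h>

  struct nvdimm_pmu {
          struct pmu pmu;                /* event_init/add/del/read live here */
          struct device *dev;            /* nvdimm device pointer */
          int cpu;                       /* cpu currently designated for counting */
          struct hlist_node node;        /* cpu hotplug state list entry */
          enum cpuhp_state cpuhp_state;
          struct cpumask arch_cpumask;   /* cpumask supplied by arch/platform code */
  };

  /* Register/unregister entry points provided by the common nvdimm code. */
  int register_nvdimm_pmu(struct nvdimm_pmu *nd_pmu, struct platform_device *pdev);
  void unregister_nvdimm_pmu(struct nvdimm_pmu *nd_pmu);
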
> Results from a power9 pseries lpar with 2 nvdimm devices:
> 
> Ex: List all events with perf list
> 
> command:# perf list nmem
> 
>    nmem0/cache_rh_cnt/                                [Kernel PMU event]
>    nmem0/cache_wh_cnt/                                [Kernel PMU event]
>    nmem0/cri_res_util/                                [Kernel PMU event]
>    nmem0/ctl_res_cnt/                                 [Kernel PMU event]
>    nmem0/ctl_res_tm/                                  [Kernel PMU event]
>    nmem0/fast_w_cnt/                                  [Kernel PMU event]
>    nmem0/host_l_cnt/                                  [Kernel PMU event]
>    nmem0/host_l_dur/                                  [Kernel PMU event]
>    nmem0/host_s_cnt/                                  [Kernel PMU event]
>    nmem0/host_s_dur/                                  [Kernel PMU event]
>    nmem0/med_r_cnt/                                   [Kernel PMU event]
>    nmem0/med_r_dur/                                   [Kernel PMU event]
>    nmem0/med_w_cnt/                                   [Kernel PMU event]
>    nmem0/med_w_dur/                                   [Kernel PMU event]
>    nmem0/mem_life/                                    [Kernel PMU event]
>    nmem0/poweron_secs/                                [Kernel PMU event]
>    ...
>    nmem1/mem_life/                                    [Kernel PMU event]
>    nmem1/poweron_secs/                                [Kernel PMU event]
> 
> Patch1:
>          Introduces the nvdimm_pmu structure
> Patch2:
>          Adds a common interface to take arch/platform specific data,
>          including the nvdimm device pointer and pmu data along with
>          the pmu event functions. It also defines the supported event
>          list, adds attribute groups for format, events and cpumask,
>          and adds code for cpu hotplug support.
> Patch3:
>          Adds code in arch/powerpc/platforms/pseries/papr_scm.c to
>          expose the nmem* pmus. It fills in the nvdimm_pmu structure
>          with the pmu name, capabilities, cpumask and event functions
>          and then registers the pmu by calling register_nvdimm_pmu.
> Patch4:
>          Sysfs documentation patch
> 
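To make the Patch 3 flow above concrete, here is a minimal sketch of how
a platform driver such as papr_scm might fill in nvdimm_pmu and register
it. The papr_scm_pmu_* callbacks, the "nmem0" name and the cpumask
handling are illustrative placeholders, not the exact code from the
patch; only the standard perf capability flags and helpers are real
kernel symbols.

  /* Sketch of the platform-side registration flow described for Patch 3. */
  #include <linux/nd.h>
  #include <linux/perf_event.h>
  #include <linux/platform_device.h>
  #include <linux/topology.h>

  /* Placeholder callbacks; the real ones fetch counters from the hypervisor. */
  static int  papr_scm_pmu_event_init(struct perf_event *event) { return 0; }
  static int  papr_scm_pmu_add(struct perf_event *event, int flags) { return 0; }
  static void papr_scm_pmu_del(struct perf_event *event, int flags) { }
  static void papr_scm_pmu_read(struct perf_event *event) { }

  static int papr_scm_pmu_register(struct platform_device *pdev,
                                   struct nvdimm_pmu *nd_pmu)
  {
          /* Name, capabilities and event callbacks go into the embedded pmu. */
          nd_pmu->pmu.name = "nmem0";   /* placeholder; real code derives this
                                           from the nvdimm device */
          nd_pmu->pmu.task_ctx_nr = perf_invalid_context;
          nd_pmu->pmu.capabilities = PERF_PMU_CAP_NO_INTERRUPT |
                                     PERF_PMU_CAP_NO_EXCLUDE;
          nd_pmu->pmu.event_init = papr_scm_pmu_event_init;
          nd_pmu->pmu.add  = papr_scm_pmu_add;
          nd_pmu->pmu.del  = papr_scm_pmu_del;
          nd_pmu->pmu.read = papr_scm_pmu_read;

          /* Simplified: count on CPUs local to the device's NUMA node. */
          cpumask_copy(&nd_pmu->arch_cpumask,
                       cpumask_of_node(dev_to_node(&pdev->dev)));

          /* The common nvdimm code adds the format/events/cpumask attribute
             groups and registers the pmu with perf. */
          return register_nvdimm_pmu(nd_pmu, pdev);
  }

The cpumask step is the part relevant to the target-node vs online-node
behaviour reported later in this message.
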
> Changelog

Tested these patches with the automated tests at 
avocado-misc-tests/perf/perf_nmem.py
URL:
https://github.com/avocado-framework-tests/avocado-misc-tests/blob/master/perf/perf_nmem.py

1. On a system where the target node id and online node id were
different, 'cpumask' had no value and those tests failed.

Example:
Log from dmesg
...
papr_scm ibm,persistent-memory:ibm,pmemory@44100003: Region registered 
with target node 1 and online node 0
...

tests log:
  (1/9) perf_nmem.py:perfNMEM.test_pmu_register_dmesg: PASS (1.13 s)
  (2/9) perf_nmem.py:perfNMEM.test_sysfs: PASS (1.10 s)
  (3/9) perf_nmem.py:perfNMEM.test_pmu_count: PASS (1.07 s)
  (4/9) perf_nmem.py:perfNMEM.test_all_events: PASS (18.14 s)
  (5/9) perf_nmem.py:perfNMEM.test_all_group_events: PASS (2.18 s)
  (6/9) perf_nmem.py:perfNMEM.test_mixed_events: CANCEL: With single PMU 
mixed events test is not possible. (1.10 s)
  (7/9) perf_nmem.py:perfNMEM.test_pmu_cpumask: ERROR: invalid literal 
for int() with base 10: '' (1.10 s)
  (8/9) perf_nmem.py:perfNMEM.test_cpumask: ERROR: invalid literal for 
int() with base 10: '' (1.10 s)
  (9/9) perf_nmem.py:perfNMEM.test_cpumask_cpu_off: ERROR: invalid 
literal for int() with base 10: '' (1.07 s)

2. On a system where the target node id and online node id were the
same, 'cpumask' had a value and those tests passed.

tests log:
  (1/9) perf_nmem.py:perfNMEM.test_pmu_register_dmesg: PASS (1.16 s)
  (2/9) perf_nmem.py:perfNMEM.test_sysfs: PASS (1.10 s)
  (3/9) perf_nmem.py:perfNMEM.test_pmu_count: PASS (1.12 s)
  (4/9) perf_nmem.py:perfNMEM.test_all_events: PASS (18.10 s)
  (5/9) perf_nmem.py:perfNMEM.test_all_group_events: PASS (2.23 s)
  (6/9) perf_nmem.py:perfNMEM.test_mixed_events: CANCEL: With single PMU 
mixed events test is not possible. (1.13 s)
  (7/9) perf_nmem.py:perfNMEM.test_pmu_cpumask: PASS (1.08 s)
  (8/9) perf_nmem.py:perfNMEM.test_cpumask: PASS (1.09 s)
  (9/9) perf_nmem.py:perfNMEM.test_cpumask_cpu_off: PASS (1.62 s)

> ---
> Resend v5 -> v6
> - No logic change, just a rebase to the latest upstream; the
>    patchset was retested.
> 
> - Link to the patchset Resend v5: https://lkml.org/lkml/2021/11/15/3979
> 
> v5 -> Resend v5
> - Resend the patchset
> 
> - Link to the patchset v5: https://lkml.org/lkml/2021/9/28/643
> 
> v4 -> v5:
> - Remove multiple variables defined in the nvdimm_pmu structure,
>    including the name and pmu functions (event_init/add/del/read), as
>    they were only copied again into the pmu variable. This step is now
>    done directly in the arch specific code, as suggested by Dan Williams.
> 
> - Remove the attribute group field from the nvdimm pmu structure and
>    define these attribute groups in the common interface, which
>    includes the format and event list along with the cpumask, as
>    suggested by Dan Williams.
>    Since static definitions for the attribute groups are now added in
>    the common interface, the corresponding code is removed from papr_scm.
> 
> - Add nvdimm pmu event list with event codes in the common interface.
> 
> - Remove Acked-by/Reviewed-by/Tested-by tags as code is refactored
>    to handle review comments from Dan.
> 
> - Make the nvdimm_pmu_free_hotplug_memory function static, as reported
>    by the kernel test robot, and add the corresponding Reported-by tag.
> 
> - Link to the patchset v4: https://lkml.org/lkml/2021/9/3/45
> 
> v3 -> v4
> - Rebase code on top of current papr_scm code without any logical
>    changes.
> 
> - Added Acked-by tag from Peter Zijlstra and Reviewed-by tag
>    from Madhavan Srinivasan.
> 
> - Link to the patchset v3: https://lkml.org/lkml/2021/6/17/605
> 
> v2 -> v3
> - Added Tested-by tag.
> 
> - Fix nvdimm mailing list in the ABI Documentation.
> 
> - Link to the patchset v2: https://lkml.org/lkml/2021/6/14/25
> 
> v1 -> v2
> - Fix hotplug code by adding a pmu migration call
>    in case the current designated cpu goes offline, as
>    pointed out by Peter Zijlstra.
> 
> - Removed the return -1 part from the cpu hotplug offline
>    function.
> 
> - Link to the patchset v1: https://lkml.org/lkml/2021/6/8/500
> 
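For context on the v1 -> v2 hotplug fix mentioned above, here is a rough
sketch of what a migrate-on-offline handler typically looks like. The
handler name and the bookkeeping around nd_pmu->cpu are assumptions for
illustration; only perf_pmu_migrate_context() and the cpumask helpers
are standard kernel APIs.

  /* Sketch of a cpu hotplug offline handler for the nvdimm pmu. */
  #include <linux/nd.h>
  #include <linux/perf_event.h>
  #include <linux/cpumask.h>

  static int nvdimm_pmu_cpu_offline(unsigned int cpu, struct hlist_node *node)
  {
          struct nvdimm_pmu *nd_pmu;
          int target;

          nd_pmu = hlist_entry_safe(node, struct nvdimm_pmu, node);

          /* Nothing to do unless the departing cpu is the designated one. */
          if (cpu != nd_pmu->cpu)
                  return 0;

          /* Pick any other online cpu and move the perf context over to it. */
          target = cpumask_any_but(cpu_online_mask, cpu);
          if (target < nr_cpu_ids) {
                  nd_pmu->cpu = target;
                  perf_pmu_migrate_context(&nd_pmu->pmu, cpu, target);
          }

          return 0;
  }
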
> Kajol Jain (4):
>    drivers/nvdimm: Add nvdimm pmu structure
>    drivers/nvdimm: Add perf interface to expose nvdimm performance stats
>    powerpc/papr_scm: Add perf interface support
>    docs: ABI: sysfs-bus-nvdimm: Document sysfs event format entries for
>      nvdimm pmu
> 
>   Documentation/ABI/testing/sysfs-bus-nvdimm |  35 +++
>   arch/powerpc/include/asm/device.h          |   5 +
>   arch/powerpc/platforms/pseries/papr_scm.c  | 225 ++++++++++++++
>   drivers/nvdimm/Makefile                    |   1 +
>   drivers/nvdimm/nd_perf.c                   | 328 +++++++++++++++++++++
>   include/linux/nd.h                         |  41 +++
>   6 files changed, 635 insertions(+)
>   create mode 100644 drivers/nvdimm/nd_perf.c
> 

-- 
Thanks and Regards
R.Nageswara Sastry

Thread overview: 15+ messages
2022-02-17 16:33 [PATCH v6 0/4] Add perf interface to expose nvdimm Kajol Jain
2022-02-17 16:33 ` [PATCH v6 1/4] drivers/nvdimm: Add nvdimm pmu structure Kajol Jain
2022-02-17 16:33 ` [PATCH v6 2/4] drivers/nvdimm: Add perf interface to expose nvdimm performance stats Kajol Jain
2022-02-17 16:33 ` [PATCH v6 3/4] powerpc/papr_scm: Add perf interface support Kajol Jain
2022-02-17 16:33 ` [PATCH v6 4/4] docs: ABI: sysfs-bus-nvdimm: Document sysfs event format entries for nvdimm pmu Kajol Jain
2022-02-18 18:06 ` [PATCH v6 0/4] Add perf interface to expose nvdimm Dan Williams
2022-02-23 19:07   ` Dan Williams
2022-02-23 21:17     ` Dan Williams
2022-02-24  6:16       ` kajoljain
2022-02-25  5:55 ` Nageswara Sastry [this message]
2022-02-25  6:38   ` kajoljain
2022-02-25  7:47     ` Aneesh Kumar K V
2022-02-25  8:39       ` kajoljain
2022-02-25 11:11     ` Nageswara Sastry
2022-02-25 11:23       ` kajoljain
