From: Guenter Roeck <linux@roeck-us.net>
To: Naveen Krishna Ch <naveenkrishna.ch@gmail.com>
Cc: linux-hwmon@vger.kernel.org, Guenter Roeck <groeck7@gmail.com>
Subject: Re: FW: [PATCH 5/6] hwmon: amd_energy: dump energy counters via debugfs
Date: Tue, 8 Sep 2020 09:34:32 -0700 [thread overview]
Message-ID: <20200908163432.GA169609@roeck-us.net> (raw)
In-Reply-To: <CAHfPSqCkEetYEo2yqZC_0JpGYUYgpqh34-B6sdCWde57_1PhbA@mail.gmail.com>
On Tue, Sep 08, 2020 at 09:40:40PM +0530, Naveen Krishna Ch wrote:
> Hi Guenter
>
> On Sat, 5 Sep 2020 at 22:28, Guenter Roeck <linux@roeck-us.net> wrote:
> >
> > On 9/5/20 9:41 AM, Naveen Krishna Ch wrote:
> > > Hi Guenter,
> > >
> > >> On 9/5/20 7:32 AM, Naveen Krishna Chatradhi wrote:
> > >>> Use seq_printf to capture the core and socket energies under debugfs
> > >>> path '/sys/kernel/debug/amd_energy/'
> > >>> file cenergy_dump: To print out the core energy counters
> > >>> file senergy_dump: To print out the socket energy counters
> > >>>
> > >>> Signed-off-by: Naveen Krishna Chatradhi <nchatrad@amd.com>
> > >>
> > >> Isn't this a duplicate of other functionality available in the kernel ?
> > >> I'd have to look it up, but I am quite sure that energy values are already available. Besides that, what is the point of duplicating the hwmon attributes ?
> > >
> > > The idea is to reduce the latency of gathering all the counters
> > > (Rome has 128 cores on 2 sockets) from user space.
> > > If there is a better way to get this done, please let me know. I shall
> > > implement and resubmit.
> > >
> >
> > Isn't turbostat supposed to be able to do that ?
> Apologies, I was not aware of turbostat. I took a look at the
> turbostat code now; it is an elaborate tool that depends on msr.ko.
> At present, the tool provides a lot of information in sequence, so
> there is no duplication of code.
>
> We need this functionality for the following reasons.
> 1. Reduced latency in gathering the energy counters of all cores and packages
> 2. Possible to provide an API to the user space to integrate into
> other tools/frameworks
>
> Please let me know your opinion.
debugfs should only be used for debugging, not to create a backdoor API.
What you are looking for is a more efficient API than the hwmon API
for accessing sensor data. There is an existing API for that: iio.
You have two options: register the driver with iio directly, or, better,
implement a generic hwmon->iio bridge in the hwmon core. I have wanted
to do the latter forever, but never got around to implementing it.
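For illustration only, a rough (uncompiled) sketch of what direct iio
registration could look like; the function names and channel layout
here are my assumptions, not part of the patch or the existing driver:

```c
/* Illustrative sketch only: exposing per-core energy counters through
 * iio instead of debugfs.  Names like amd_energy_read_raw and
 * amd_energy_iio_probe are hypothetical.
 */
#include <linux/iio/iio.h>

static int amd_energy_read_raw(struct iio_dev *indio_dev,
			       struct iio_chan_spec const *chan,
			       int *val, int *val2, long mask)
{
	/* chan->channel selects the core; read the energy MSR on that
	 * core here and return the counter through *val.
	 */
	return IIO_VAL_INT;
}

static const struct iio_info amd_energy_iio_info = {
	.read_raw = amd_energy_read_raw,
};

static int amd_energy_iio_probe(struct device *dev, int nr_cpus,
				const struct iio_chan_spec *channels)
{
	struct iio_dev *indio_dev;

	indio_dev = devm_iio_device_alloc(dev, 0);
	if (!indio_dev)
		return -ENOMEM;

	indio_dev->name = "amd_energy";
	indio_dev->info = &amd_energy_iio_info;
	indio_dev->channels = channels;	/* type IIO_ENERGY, one per core */
	indio_dev->num_channels = nr_cpus;

	return devm_iio_device_register(dev, indio_dev);
}
```

User space could then read the counters through the iio buffered/sysfs
interfaces rather than a debugfs file.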
Guenter
> >
> > Guenter
> >
> > >>
> > >> Guenter
> > >>
> > >>> ---
> > >>>  drivers/hwmon/amd_energy.c | 110 +++++++++++++++++++++++++++++++++++++
> > >>>  1 file changed, 110 insertions(+)
> > >>>
> > >>> diff --git a/drivers/hwmon/amd_energy.c b/drivers/hwmon/amd_energy.c
> > >>> index c294bea56c02..2184bd4510ed 100644
> > >>> --- a/drivers/hwmon/amd_energy.c
> > >>> +++ b/drivers/hwmon/amd_energy.c
> > >>> @@ -8,6 +8,7 @@
> > >>> #include <linux/bits.h>
> > >>> #include <linux/cpu.h>
> > >>> #include <linux/cpumask.h>
> > >>> +#include <linux/debugfs.h>
> > >>> #include <linux/delay.h>
> > >>> #include <linux/device.h>
> > >>> #include <linux/hwmon.h>
> > >>> @@ -20,6 +21,7 @@
> > >>> #include <linux/platform_device.h>
> > >>> #include <linux/sched.h>
> > >>> #include <linux/slab.h>
> > >>> +#include <linux/smp.h>
> > >>> #include <linux/topology.h>
> > >>> #include <linux/types.h>
> > >>>
> > >>> @@ -57,6 +59,8 @@ struct amd_energy_data {
> > >>> int nr_socks;
> > >>> int core_id;
> > >>> char (*label)[10];
> > >>> + u64 *cdump;
> > >>> + u64 *sdump;
> > >>> };
> > >>>
> > >>> static int amd_energy_read_labels(struct device *dev,
> > >>> @@ -329,6 +333,108 @@ static int create_accumulate_status_file(struct amd_energy_data *data)
> > >>> 				 &accumulate_attr.attr);
> > >>> }
> > >>>
> > >>> +#ifdef CONFIG_DEBUG_FS
> > >>> +static void dump_on_each_cpu(void *info)
> > >>> +{
> > >>> + struct amd_energy_data *data = info;
> > >>> + int cpu = smp_processor_id();
> > >>> +
> > >>> + amd_add_delta(data, cpu, cpu, (long *)&data->cdump[cpu],
> > >>> + ENERGY_CORE_MSR);
> > >>> +}
> > >>> +
> > >>> +static int cenergy_dump_show(struct seq_file *s, void *unused)
> > >>> +{
> > >>> + struct amd_energy_data *data = s->private;
> > >>> + struct cpumask *cpus_mask;
> > >>> + int i;
> > >>> +
> > >>> + cpus_mask = kmalloc(sizeof(*cpus_mask), GFP_KERNEL);
> > >>> + memset(data->cdump, 0, (data->nr_cpus) * sizeof(u64));
> > >>> + cpumask_clear(cpus_mask);
> > >>> + for (i = 0; i < data->nr_cpus; i++) {
> > >>> + if (cpu_online(i))
> > >>> + cpumask_set_cpu(i, cpus_mask);
> > >>> + }
> > >>> +
> > >>> + on_each_cpu_mask(cpus_mask, dump_on_each_cpu, data, true);
> > >>> +
> > >>> + for (i = 0; i < data->nr_cpus; i++) {
> > >>> + if (!(i & 3))
> > >>> + seq_printf(s, "Core %3d: ", i);
> > >>> +
> > >>> + seq_printf(s, "%16llu ", data->cdump[i]);
> > >>> + if ((i & 3) == 3)
> > >>> + seq_puts(s, "\n");
> > >>> + }
> > >>> + seq_puts(s, "\n");
> > >>> +
> > >>> + kfree(cpus_mask);
> > >>> + return 0;
> > >>> +}
> > >>> +DEFINE_SHOW_ATTRIBUTE(cenergy_dump);
> > >>> +
> > >>> +static int senergy_dump_show(struct seq_file *s, void *unused)
> > >>> +{
> > >>> + struct amd_energy_data *data = s->private;
> > >>> + int i, cpu;
> > >>> +
> > >>> + for (i = 0; i < data->nr_socks; i++) {
> > >>> + cpu = cpumask_first_and(cpu_online_mask,
> > >>> + cpumask_of_node(i));
> > >>> + amd_add_delta(data, data->nr_cpus + i, cpu,
> > >>> + (long *)&data->sdump[i], ENERGY_PKG_MSR);
> > >>> + seq_printf(s, "Socket %1d: %16llu\n",
> > >>> + i, data->sdump[i]);
> > >>> + }
> > >>> +
> > >>> + return 0;
> > >>> +}
> > >>> +DEFINE_SHOW_ATTRIBUTE(senergy_dump);
> > >>> +
> > >>> +static void dump_debugfs_cleanup(void *ddir)
> > >>> +{
> > >>> + debugfs_remove_recursive(ddir);
> > >>> +}
> > >>> +
> > >>> +static int create_dump_file(struct device *dev,
> > >>> +			    struct amd_energy_data *data)
> > >>> +{
> > >>> + struct dentry *debugfs;
> > >>> + char name[] = "amd_energy";
> > >>> +
> > >>> + data->cdump = devm_kcalloc(dev, data->nr_cpus,
> > >>> + sizeof(u64), GFP_KERNEL);
> > >>> + if (!data->cdump)
> > >>> + return -ENOMEM;
> > >>> +
> > >>> + data->sdump = devm_kcalloc(dev, data->nr_socks,
> > >>> + sizeof(u64), GFP_KERNEL);
> > >>> + if (!data->sdump)
> > >>> + return -ENOMEM;
> > >>> +
> > >>> + debugfs = debugfs_create_dir(name, NULL);
> > >>> + if (debugfs) {
> > >>> + debugfs_create_file("cenergy_dump", 0440,
> > >>> + debugfs, data, &cenergy_dump_fops);
> > >>> + debugfs_create_file("senergy_dump", 0440,
> > >>> + debugfs, data, &senergy_dump_fops);
> > >>> + devm_add_action_or_reset(data->hwmon_dev,
> > >>> + dump_debugfs_cleanup, debugfs);
> > >>> + }
> > >>> +
> > >>> + return 0;
> > >>> +}
> > >>> +#else
> > >>> +
> > >>> +static int create_dump_file(struct device *dev,
> > >>> +			    struct amd_energy_data *data)
> > >>> +{
> > >>> + return 0;
> > >>> +}
> > >>> +
> > >>> +#endif //CONFIG_DEBUG_FS
> > >>> +
> > >>> static int amd_energy_probe(struct platform_device *pdev)
> > >>> {
> > >>> struct amd_energy_data *data;
> > >>> @@ -376,6 +482,10 @@ static int amd_energy_probe(struct platform_device *pdev)
> > >>> if (ret)
> > >>> return ret;
> > >>>
> > >>> + ret = create_dump_file(dev, data);
> > >>> + if (ret)
> > >>> + return ret;
> > >>> +
> > >>> return 0;
> > >>> }
> > >>>
> > >>>
> > > Naveen
> > >
> >
>
>
> --
> Shine bright,
> (: Nav :)
Thread overview: 24+ messages
2020-09-05 14:32 [PATCH 0/6] RFC: hwmon: few improvements to amd_energy driver Naveen Krishna Chatradhi
2020-09-05 14:32 ` [PATCH 1/6] hwmon: amd_energy: Move label out of accumulation structure Naveen Krishna Chatradhi
2020-09-05 14:32 ` [PATCH 2/6] hwmon: amd_energy: optimize accumulation interval Naveen Krishna Chatradhi
2020-09-05 15:11 ` Guenter Roeck
[not found] ` <DM6PR12MB4388DCF9B42BA0093774606FE82A0@DM6PR12MB4388.namprd12.prod.outlook.com>
2020-09-05 16:27 ` Naveen Krishna Ch
2020-09-05 14:32 ` [PATCH 3/6] hwmon: amd_energy: Improve the accumulation logic Naveen Krishna Chatradhi
2020-09-05 15:14 ` Guenter Roeck
[not found] ` <DM6PR12MB438850F9DFD14163F11AA946E82A0@DM6PR12MB4388.namprd12.prod.outlook.com>
2020-09-05 16:31 ` FW: " Naveen Krishna Ch
2020-09-05 14:32 ` [PATCH 4/6] hwmon: amd_energy: let user enable/disable the sw accumulation Naveen Krishna Chatradhi
2020-09-05 15:17 ` Guenter Roeck
2020-09-05 15:33 ` Guenter Roeck
[not found] ` <DM6PR12MB4388A21B749811BBE1309AA3E8290@DM6PR12MB4388.namprd12.prod.outlook.com>
2020-09-08 16:21 ` FW: " Naveen Krishna Ch
2020-09-08 16:36 ` Guenter Roeck
2020-09-05 14:32 ` [PATCH 5/6] hwmon: amd_energy: dump energy counters via debugfs Naveen Krishna Chatradhi
2020-09-05 15:19 ` Guenter Roeck
[not found] ` <DM6PR12MB4388C77E35BD61F4DC2EAEC9E82A0@DM6PR12MB4388.namprd12.prod.outlook.com>
2020-09-05 16:41 ` FW: " Naveen Krishna Ch
2020-09-05 16:58 ` Guenter Roeck
2020-09-08 16:10 ` Naveen Krishna Ch
2020-09-08 16:34 ` Guenter Roeck [this message]
2020-09-08 16:46 ` Naveen Krishna Ch
2020-09-08 17:11 ` Guenter Roeck
2020-09-25 7:23 ` Chatradhi, Naveen Krishna
2020-09-05 14:32 ` [PATCH 6/6] hwmon: (amd_energy) Update driver documentation Naveen Krishna Chatradhi
2020-09-25 7:26 ` [PATCH 0/6] RFC: hwmon: few improvements to amd_energy driver Chatradhi, Naveen Krishna