From: Toshi Kani <toshi.kani@hp.com>
To: Dan Williams <dan.j.williams@intel.com>
Cc: Jeff Moyer <jmoyer@redhat.com>,
"Rafael J. Wysocki" <rjw@rjwysocki.net>,
Linux ACPI <linux-acpi@vger.kernel.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
linux-nvdimm <linux-nvdimm@ml01.01.org>
Subject: Re: [PATCH v2 0/3] Add NUMA support for NVDIMM devices
Date: Wed, 10 Jun 2015 10:20:36 -0600 [thread overview]
Message-ID: <1433953236.23540.236.camel@misato.fc.hp.com> (raw)
In-Reply-To: <CAPcyv4h1ak0YUnS1=RZzKjgbpSAV5hrm0NCiwg_Nmp4zKob8kg@mail.gmail.com>
On Wed, 2015-06-10 at 08:57 -0700, Dan Williams wrote:
> On Wed, Jun 10, 2015 at 8:54 AM, Jeff Moyer <jmoyer@redhat.com> wrote:
> > Toshi Kani <toshi.kani@hp.com> writes:
> >
> >> Since NVDIMMs are installed on memory slots, they expose the NUMA
> >> topology of a platform. This patchset adds support of sysfs
> >> 'numa_node' to I/O-related NVDIMM devices under /sys/bus/nd/devices.
> >> This enables numactl(8) to accept 'block:' and 'file:' paths of
> >> pmem and btt devices as shown in the examples below.
> >> numactl --preferred block:pmem0 --show
> >> numactl --preferred file:/dev/pmem0s --show
> >>
> >> numactl can be used to bind an application to the locality of
> >> a target NVDIMM for better performance. Here is a result of fio
> >> benchmark to ext4/dax on an HP DL380 with 2 sockets for local and
> >> remote settings.
> >>
> >> Local [1] : 4098.3MB/s
> >> Remote [2]: 3718.4MB/s
> >>
> >> [1] numactl --preferred block:pmem0 --cpunodebind block:pmem0 fio <fs-on-pmem0>
> >> [2] numactl --preferred block:pmem1 --cpunodebind block:pmem1 fio <fs-on-pmem0>
> >
> > Did you post the patches to numactl somewhere?
> >
>
> numactl already supports this today.
Yes, numactl already supports the following generic sysfs class lookup
for numa_node. This patchset adds a numa_node attribute for NVDIMM
devices in the same sysfs format, as described in patch 3/3.
/* Generic sysfs class lookup */
static int
affinity_class(struct bitmask *mask, char *cls, const char *dev)
{
	:
	ret = sysfs_node_read(mask, "/sys/class/%s/%s/device/numa_node",
			      cls, dev);
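For illustration, the lookup above boils down to reading a small integer
from a sysfs file. A minimal standalone sketch of that idea (this is a
hypothetical helper named read_numa_node, not numactl's actual code,
and the bitmask handling is omitted):

```c
#include <stdio.h>

/*
 * Read the NUMA node of a device from sysfs, e.g.
 * read_numa_node("block", "pmem0") reads
 * /sys/class/block/pmem0/device/numa_node.
 * Returns the node number, or -1 if the file is missing,
 * unreadable, or the device has no NUMA affinity.
 */
static int read_numa_node(const char *cls, const char *dev)
{
	char path[256];
	FILE *f;
	int node = -1;

	snprintf(path, sizeof(path),
		 "/sys/class/%s/%s/device/numa_node", cls, dev);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (fscanf(f, "%d", &node) != 1)
		node = -1;
	fclose(f);
	return node;
}
```

With the numa_node attribute from patch 3/3 in place, a call like
read_numa_node("block", "pmem0") would return the node that pmem0 is
attached to, which is what lets numactl resolve 'block:' paths.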
Thanks,
-Toshi
Thread overview: 16+ messages
2015-06-09 23:10 [PATCH v2 0/3] Add NUMA support for NVDIMM devices Toshi Kani
2015-06-09 23:10 ` [PATCH v2 1/3] acpi: Add acpi_map_pxm_to_online_node() Toshi Kani
2015-06-19 0:42 ` Rafael J. Wysocki
2015-06-19 1:16 ` Toshi Kani
2015-06-09 23:10 ` [PATCH v2 2/3] libnvdimm: Set numa_node to NVDIMM devices Toshi Kani
2015-06-09 23:10 ` [PATCH v2 3/3] libnvdimm: Add sysfs " Toshi Kani
2015-06-10 15:54 ` [PATCH v2 0/3] Add NUMA support for " Jeff Moyer
2015-06-10 15:57 ` Dan Williams
2015-06-10 16:11 ` Jeff Moyer
2015-06-10 16:20 ` Elliott, Robert (Server Storage)
2015-06-10 16:37 ` Dan Williams
2015-06-10 16:20 ` Toshi Kani [this message]
2015-06-11 15:38 ` Dan Williams
2015-06-11 15:45 ` Toshi Kani
2015-06-18 20:24 ` Dan Williams
2015-06-19 0:43 ` Rafael J. Wysocki