Linux CXL
* [ndctl PATCH] daxctl: Skip over memory failure node status
       [not found] <CGME20230213213916uscas1p2ee91a53c14ec5ddcb31322212af6cdaa@uscas1p2.samsung.com>
@ 2023-02-13 21:39 ` Adam Manzanares
  2023-02-13 23:26   ` Dan Williams
  2023-02-15  0:36   ` Verma, Vishal L
  0 siblings, 2 replies; 5+ messages in thread
From: Adam Manzanares @ 2023-02-13 21:39 UTC (permalink / raw)
  To: nvdimm@lists.linux.dev, linux-cxl@vger.kernel.org
  Cc: Fan Ni, dave@stgolabs.net, vishal.l.verma@intel.com,
	Adam Manzanares

When trying to match a dax device to a memblock physical address,
memblock_in_dev() will fail if the phys_index sysfs file does
not exist in the memblock directory. The memory_failure directory
associated with a node is currently interpreted as a memblock.
Skip over the memory_failure directory within the node directory.

Signed-off-by: Adam Manzanares <a.manzanares@samsung.com>
---
 daxctl/lib/libdaxctl.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/daxctl/lib/libdaxctl.c b/daxctl/lib/libdaxctl.c
index d990479..b27a8af 100644
--- a/daxctl/lib/libdaxctl.c
+++ b/daxctl/lib/libdaxctl.c
@@ -1552,6 +1552,8 @@ static int daxctl_memory_op(struct daxctl_memory *mem, enum memory_op op)
 	errno = 0;
 	while ((de = readdir(node_dir)) != NULL) {
 		if (strncmp(de->d_name, "memory", 6) == 0) {
+			if (strncmp(de->d_name, "memory_", 7) == 0)
+				continue;
 			rc = memblock_in_dev(mem, de->d_name);
 			if (rc < 0)
 				goto out_dir;
-- 
2.39.0


* RE: [ndctl PATCH] daxctl: Skip over memory failure node status
  2023-02-13 21:39 ` [ndctl PATCH] daxctl: Skip over memory failure node status Adam Manzanares
@ 2023-02-13 23:26   ` Dan Williams
  2023-02-14  6:57     ` Adam Manzanares
  2023-02-15  0:36   ` Verma, Vishal L
  1 sibling, 1 reply; 5+ messages in thread
From: Dan Williams @ 2023-02-13 23:26 UTC (permalink / raw)
  To: Adam Manzanares, nvdimm@lists.linux.dev,
	linux-cxl@vger.kernel.org
  Cc: Fan Ni, dave@stgolabs.net, vishal.l.verma@intel.com,
	Adam Manzanares

Adam Manzanares wrote:
> When trying to match a dax device to a memblock physical address,
> memblock_in_dev() will fail if the phys_index sysfs file does
> not exist in the memblock directory. The memory_failure directory
> associated with a node is currently interpreted as a memblock.
> Skip over the memory_failure directory within the node directory.

Oh, interesting, I did not know memory_failure() added entries to sysfs.
My grep-fu is failing me though... I only found node_init_cache_dev()
that creates a file named "memory_side_cache" under a node. This fix
will work for that as well, but I am still curious where the memory
failure file originates.


* Re: [ndctl PATCH] daxctl: Skip over memory failure node status
  2023-02-13 23:26   ` Dan Williams
@ 2023-02-14  6:57     ` Adam Manzanares
  2023-02-14 21:52       ` Dan Williams
  0 siblings, 1 reply; 5+ messages in thread
From: Adam Manzanares @ 2023-02-14  6:57 UTC (permalink / raw)
  To: Dan Williams
  Cc: nvdimm@lists.linux.dev, linux-cxl@vger.kernel.org, Fan Ni,
	dave@stgolabs.net, vishal.l.verma@intel.com

On Mon, Feb 13, 2023 at 03:26:58PM -0800, Dan Williams wrote:
> Adam Manzanares wrote:
> > When trying to match a dax device to a memblock physical address,
> > memblock_in_dev() will fail if the phys_index sysfs file does
> > not exist in the memblock directory. The memory_failure directory
> > associated with a node is currently interpreted as a memblock.
> > Skip over the memory_failure directory within the node directory.
> 
> Oh, interesting, I did not know memory_failure() added entries to sysfs.
> My grep-fu is failing me though... I only found node_init_cache_dev()
> that creates a file named "memory_side_cache" under a node. This fix
> will work for that as well, but I am still curious where the memory
> failure file originates.

I found the issue on next-20230119; I suspect your grep-fu will
work fine there.


* Re: [ndctl PATCH] daxctl: Skip over memory failure node status
  2023-02-14  6:57     ` Adam Manzanares
@ 2023-02-14 21:52       ` Dan Williams
  0 siblings, 0 replies; 5+ messages in thread
From: Dan Williams @ 2023-02-14 21:52 UTC (permalink / raw)
  To: Adam Manzanares, Dan Williams
  Cc: nvdimm@lists.linux.dev, linux-cxl@vger.kernel.org, Fan Ni,
	dave@stgolabs.net, vishal.l.verma@intel.com

Adam Manzanares wrote:
> On Mon, Feb 13, 2023 at 03:26:58PM -0800, Dan Williams wrote:
> > Adam Manzanares wrote:
> > > When trying to match a dax device to a memblock physical address,
> > > memblock_in_dev() will fail if the phys_index sysfs file does
> > > not exist in the memblock directory. The memory_failure directory
> > > associated with a node is currently interpreted as a memblock.
> > > Skip over the memory_failure directory within the node directory.
> > 
> > Oh, interesting, I did not know memory_failure() added entries to sysfs.
> > My grep-fu is failing me though... I only found node_init_cache_dev()
> > that creates a file named "memory_side_cache" under a node. This fix
> > will work for that as well, but I am still curious where the memory
> > failure file originates.
> 
> I found the issue on next-20230119; I suspect your grep-fu will
> work fine there.

Yup, thanks. For those following along at home, it's these patches:

https://lore.kernel.org/all/20230120034622.2698268-1-jiaqiyan@google.com

...which has some implications for interoperating with CXL Scan Media,
which is distinct from a hardware patrol scrubber, but that's a
discussion for a different patch set.

For this patch though:

Reviewed-by: Dan Williams <dan.j.williams@intel.com>


* Re: [ndctl PATCH] daxctl: Skip over memory failure node status
  2023-02-13 21:39 ` [ndctl PATCH] daxctl: Skip over memory failure node status Adam Manzanares
  2023-02-13 23:26   ` Dan Williams
@ 2023-02-15  0:36   ` Verma, Vishal L
  1 sibling, 0 replies; 5+ messages in thread
From: Verma, Vishal L @ 2023-02-15  0:36 UTC (permalink / raw)
  To: linux-cxl@vger.kernel.org, nvdimm@lists.linux.dev,
	a.manzanares@samsung.com
  Cc: fan.ni@samsung.com, dave@stgolabs.net

On Mon, 2023-02-13 at 21:39 +0000, Adam Manzanares wrote:
> When trying to match a dax device to a memblock physical address,
> memblock_in_dev() will fail if the phys_index sysfs file does
> not exist in the memblock directory. The memory_failure directory
> associated with a node is currently interpreted as a memblock.
> Skip over the memory_failure directory within the node directory.
> 
> Signed-off-by: Adam Manzanares <a.manzanares@samsung.com>
> ---
>  daxctl/lib/libdaxctl.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/daxctl/lib/libdaxctl.c b/daxctl/lib/libdaxctl.c
> index d990479..b27a8af 100644
> --- a/daxctl/lib/libdaxctl.c
> +++ b/daxctl/lib/libdaxctl.c
> @@ -1552,6 +1552,8 @@ static int daxctl_memory_op(struct daxctl_memory *mem, enum memory_op op)
>         errno = 0;
>         while ((de = readdir(node_dir)) != NULL) {
>                 if (strncmp(de->d_name, "memory", 6) == 0) {
> +                       if (strncmp(de->d_name, "memory_", 7) == 0)
> +                               continue;
>                         rc = memblock_in_dev(mem, de->d_name);
>                         if (rc < 0)
>                                 goto out_dir;

Applied, thanks Adam and Dan!

