* [Qemu-devel] [PATCH for 3.1 v3] spapr: Fix ibm,max-associativity-domains property number of nodes
@ 2018-11-22 13:19 Serhii Popovych
2018-11-22 13:27 ` [Qemu-devel] [Qemu-ppc] " Greg Kurz
` (2 more replies)
0 siblings, 3 replies; 4+ messages in thread
From: Serhii Popovych @ 2018-11-22 13:19 UTC (permalink / raw)
To: qemu-ppc; +Cc: lvivier, qemu-devel, david
Laurent Vivier reported an off-by-one error: the maximum number of NUMA
nodes provided by qemu-kvm is one less than required according to the
description of the "ibm,max-associativity-domains" property in LoPAPR.

It appears that I misread the LoPAPR description of this property,
assuming it gives the last valid domain (NUMA node here) instead of the
maximum number of domains.
### Before hot-add
(qemu) info numa
3 nodes
node 0 cpus: 0
node 0 size: 0 MB
node 0 plugged: 0 MB
node 1 cpus:
node 1 size: 1024 MB
node 1 plugged: 0 MB
node 2 cpus:
node 2 size: 0 MB
node 2 plugged: 0 MB
$ numactl -H
available: 2 nodes (0-1)
node 0 cpus: 0
node 0 size: 0 MB
node 0 free: 0 MB
node 1 cpus:
node 1 size: 999 MB
node 1 free: 658 MB
node distances:
node 0 1
0: 10 40
1: 40 10
### Hot-add
(qemu) object_add memory-backend-ram,id=mem0,size=1G
(qemu) device_add pc-dimm,id=dimm1,memdev=mem0,node=2
(qemu) [ 87.704898] pseries-hotplug-mem: Attempting to hot-add 4 ...
<there is no "Initmem setup node 2 [mem 0xHEX-0xHEX]">
[ 87.705128] lpar: Attempting to resize HPT to shift 21
... <HPT resize messages>
### After hot-add
(qemu) info numa
3 nodes
node 0 cpus: 0
node 0 size: 0 MB
node 0 plugged: 0 MB
node 1 cpus:
node 1 size: 1024 MB
node 1 plugged: 0 MB
node 2 cpus:
node 2 size: 1024 MB
node 2 plugged: 1024 MB
$ numactl -H
available: 2 nodes (0-1)
^^^^^^^^^^^^^^^^^^^^^^^^
Still only two nodes (and memory hot-added to node 0 below)
node 0 cpus: 0
node 0 size: 1024 MB
node 0 free: 1021 MB
node 1 cpus:
node 1 size: 999 MB
node 1 free: 658 MB
node distances:
node 0 1
0: 10 40
1: 40 10
With the fix applied, numactl(8) reports 3 nodes available and the
memory is plugged into node 2, as expected.
From David Gibson:
------------------
Qemu makes a distinction between "non NUMA" (nb_numa_nodes == 0) and
"NUMA with one node" (nb_numa_nodes == 1). But from a PAPR guest's
point of view these are equivalent. I don't want to present two
different cases to the guest when we don't need to, so even though the
guest can handle it, I'd prefer we put a '1' here for both the
nb_numa_nodes == 0 and nb_numa_nodes == 1 cases.

This consolidates everything discussed previously on the mailing list.
Fixes: da9f80fbad21 ("spapr: Add ibm,max-associativity-domains property")
Reported-by: Laurent Vivier <lvivier@redhat.com>
Signed-off-by: Serhii Popovych <spopovyc@redhat.com>
---
hw/ppc/spapr.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index 7afd1a1..2ee7201 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -1033,7 +1033,7 @@ static void spapr_dt_rtas(sPAPRMachineState *spapr, void *fdt)
cpu_to_be32(0),
cpu_to_be32(0),
cpu_to_be32(0),
- cpu_to_be32(nb_numa_nodes ? nb_numa_nodes - 1 : 0),
+ cpu_to_be32(nb_numa_nodes ? nb_numa_nodes : 1),
};
_FDT(rtas = fdt_add_subnode(fdt, 0, "rtas"));
--
1.8.3.1
* Re: [Qemu-devel] [Qemu-ppc] [PATCH for 3.1 v3] spapr: Fix ibm,max-associativity-domains property number of nodes
2018-11-22 13:19 [Qemu-devel] [PATCH for 3.1 v3] spapr: Fix ibm,max-associativity-domains property number of nodes Serhii Popovych
@ 2018-11-22 13:27 ` Greg Kurz
2018-11-22 13:47 ` [Qemu-devel] " Laurent Vivier
2018-11-23 0:19 ` David Gibson
2 siblings, 0 replies; 4+ messages in thread
From: Greg Kurz @ 2018-11-22 13:27 UTC (permalink / raw)
To: Serhii Popovych; +Cc: qemu-ppc, lvivier, qemu-devel, david
On Thu, 22 Nov 2018 08:19:27 -0500
Serhii Popovych <spopovyc@redhat.com> wrote:
> [...]
>
> Fixes: da9f80fbad21 ("spapr: Add ibm,max-associativity-domains property")
> Reported-by: Laurent Vivier <lvivier@redhat.com>
> Signed-off-by: Serhii Popovych <spopovyc@redhat.com>
> ---
Reviewed-by: Greg Kurz <groug@kaod.org>
* Re: [Qemu-devel] [PATCH for 3.1 v3] spapr: Fix ibm,max-associativity-domains property number of nodes
2018-11-22 13:19 [Qemu-devel] [PATCH for 3.1 v3] spapr: Fix ibm,max-associativity-domains property number of nodes Serhii Popovych
2018-11-22 13:27 ` [Qemu-devel] [Qemu-ppc] " Greg Kurz
@ 2018-11-22 13:47 ` Laurent Vivier
2018-11-23 0:19 ` David Gibson
2 siblings, 0 replies; 4+ messages in thread
From: Laurent Vivier @ 2018-11-22 13:47 UTC (permalink / raw)
To: Serhii Popovych, qemu-ppc; +Cc: qemu-devel, david
On 22/11/2018 14:19, Serhii Popovych wrote:
> [...]
>
> Fixes: da9f80fbad21 ("spapr: Add ibm,max-associativity-domains property")
> Reported-by: Laurent Vivier <lvivier@redhat.com>
> Signed-off-by: Serhii Popovych <spopovyc@redhat.com>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
* Re: [Qemu-devel] [PATCH for 3.1 v3] spapr: Fix ibm,max-associativity-domains property number of nodes
2018-11-22 13:19 [Qemu-devel] [PATCH for 3.1 v3] spapr: Fix ibm,max-associativity-domains property number of nodes Serhii Popovych
2018-11-22 13:27 ` [Qemu-devel] [Qemu-ppc] " Greg Kurz
2018-11-22 13:47 ` [Qemu-devel] " Laurent Vivier
@ 2018-11-23 0:19 ` David Gibson
2 siblings, 0 replies; 4+ messages in thread
From: David Gibson @ 2018-11-23 0:19 UTC (permalink / raw)
To: Serhii Popovych; +Cc: qemu-ppc, lvivier, qemu-devel
On Thu, Nov 22, 2018 at 08:19:27AM -0500, Serhii Popovych wrote:
> [...]
>
> Fixes: da9f80fbad21 ("spapr: Add ibm,max-associativity-domains property")
> Reported-by: Laurent Vivier <lvivier@redhat.com>
> Signed-off-by: Serhii Popovych <spopovyc@redhat.com>
Applied, thanks.
--
David Gibson | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you. NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson
^ permalink raw reply [flat|nested] 4+ messages in thread