linuxppc-dev.lists.ozlabs.org archive mirror
* [PATCH REPOST 0/3] VPHN parsing fixes
@ 2014-11-17 17:42 Greg Kurz
  2014-11-17 17:42 ` [PATCH REPOST 1/3] powerpc/vphn: clarify the H_HOME_NODE_ASSOCIATIVITY API Greg Kurz
                   ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: Greg Kurz @ 2014-11-17 17:42 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Paul Mackerras

Repost with Cc: for Michael and Ben...

The following commit fixed an endianness issue in the VPHN code:

commit 5c9fb1899400096c6818181c525897a31d57e488
Author: Greg Kurz <gkurz@linux.vnet.ibm.com>
Date:   Wed Oct 15 12:42:58 2014 +0200

    powerpc/vphn: NUMA node code expects big-endian

It was discussed at the time that we should patch the parsing code instead
of blindly fixing all the values returned by the hypervisor. That is the
goal of this series.

I have an extra question: PAPR+ says that H_HOME_NODE_ASSOCIATIVITY is supposed
to populate registers R4 to R9 with 16-bit or 32-bit values. This means that we
could theoretically get up to 24 associativity domain numbers. According to this
comment, the code is limited to 12 though:

/*
 * 6 64-bit registers unpacked into 12 32-bit associativity values. To form
 * the complete property we have to add the length in the first cell.
 */
#define VPHN_ASSOC_BUFSIZE (VPHN_REGISTER_COUNT*sizeof(u64)/sizeof(u32) + 1)

I could find no justification for the assumption that the registers never
hold 16-bit numbers only. Have I missed something?
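
To illustrate the encoding, here is how a single returned register could be
packed (the domain numbers are made up for the example):

/*
 * R4 = 0x8001_8002_0000_0003, read as four 16-bit fields:
 *
 *   0x8001 -> high order bit set: 16-bit number, domain 0x0001
 *   0x8002 -> high order bit set: 16-bit number, domain 0x0002
 *   0x0000 -> high order bit clear: 32-bit number spanning this
 *   0x0003    field and the next one, domain 0x00000003
 *
 * If all 6 * 4 = 24 fields held 16-bit numbers, we would get the
 * 24 domain numbers mentioned above.
 */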

---

Greg Kurz (3):
      powerpc/vphn: clarify the H_HOME_NODE_ASSOCIATIVITY API
      powerpc/vphn: simplify the parsing code
      powerpc/vphn: move endianness fixing to vphn_unpack_associativity()


 arch/powerpc/mm/numa.c | 62 +++++++++++++++++++++++++++++++-------------------
 1 file changed, 39 insertions(+), 23 deletions(-)

--
Greg


* [PATCH REPOST 1/3] powerpc/vphn: clarify the H_HOME_NODE_ASSOCIATIVITY API
  2014-11-17 17:42 [PATCH REPOST 0/3] VPHN parsing fixes Greg Kurz
@ 2014-11-17 17:42 ` Greg Kurz
  2014-11-17 17:42 ` [PATCH REPOST 2/3] powerpc/vphn: simplify the parsing code Greg Kurz
  2014-11-17 17:42 ` [PATCH REPOST 3/3] powerpc/vphn: move endianness fixing to vphn_unpack_associativity() Greg Kurz
  2 siblings, 0 replies; 9+ messages in thread
From: Greg Kurz @ 2014-11-17 17:42 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Paul Mackerras

The number of values returned by the H_HOME_NODE_ASSOCIATIVITY h_call deserves
to be explicitly defined, for a better understanding of the code.

Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
---
 arch/powerpc/mm/numa.c |    8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
index b9d1dfd..1425517 100644
--- a/arch/powerpc/mm/numa.c
+++ b/arch/powerpc/mm/numa.c
@@ -1401,11 +1401,15 @@ static int update_cpu_associativity_changes_mask(void)
 	return cpumask_weight(changes);
 }
 
+/* The H_HOME_NODE_ASSOCIATIVITY h_call returns 6 64-bit registers.
+ */
+#define VPHN_REGISTER_COUNT 6
+
 /*
  * 6 64-bit registers unpacked into 12 32-bit associativity values. To form
  * the complete property we have to add the length in the first cell.
  */
-#define VPHN_ASSOC_BUFSIZE (6*sizeof(u64)/sizeof(u32) + 1)
+#define VPHN_ASSOC_BUFSIZE (VPHN_REGISTER_COUNT*sizeof(u64)/sizeof(u32) + 1)
 
 /*
  * Convert the associativity domain numbers returned from the hypervisor
@@ -1463,7 +1467,7 @@ static long hcall_vphn(unsigned long cpu, __be32 *associativity)
 	int i;
 
 	rc = plpar_hcall9(H_HOME_NODE_ASSOCIATIVITY, retbuf, flags, hwcpu);
-	for (i = 0; i < 6; i++)
+	for (i = 0; i < VPHN_REGISTER_COUNT; i++)
 		retbuf[i] = cpu_to_be64(retbuf[i]);
 	vphn_unpack_associativity(retbuf, associativity);
 


* [PATCH REPOST 2/3] powerpc/vphn: simplify the parsing code
  2014-11-17 17:42 [PATCH REPOST 0/3] VPHN parsing fixes Greg Kurz
  2014-11-17 17:42 ` [PATCH REPOST 1/3] powerpc/vphn: clarify the H_HOME_NODE_ASSOCIATIVITY API Greg Kurz
@ 2014-11-17 17:42 ` Greg Kurz
  2014-11-17 17:42 ` [PATCH REPOST 3/3] powerpc/vphn: move endianness fixing to vphn_unpack_associativity() Greg Kurz
  2 siblings, 0 replies; 9+ messages in thread
From: Greg Kurz @ 2014-11-17 17:42 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Paul Mackerras

According to PAPR+ 14.11.6.1 H_HOME_NODE_ASSOCIATIVITY, the hypervisor is
supposed to pack significant fields first and fill the remaining unused
fields with "all ones". This means that the first unused field can be viewed
as an end-of-list marker.
The "ibm,associativity" property in the DT isn't padded with ones, and no
code in arch/powerpc/mm/numa.c seems to expect the associativity array
to be padded either.

This patch simply ends the parsing when we reach the first unused field.
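
For example, with a hypothetical stream of two 16-bit domain numbers followed
by all-ones padding:

/*
 * packed fields: 0x8001 0x8002 0xffff 0xffff ...
 *
 * before: unpacked[] = { 2, 1, 2, 0xffffffff, 0xffffffff, ... }
 * after:  unpacked[] = { 2, 1, 2 }  (parsing stops at the first
 *                                    unused field)
 */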

Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
---
 arch/powerpc/mm/numa.c |   20 ++++++++------------
 1 file changed, 8 insertions(+), 12 deletions(-)

diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
index 1425517..e30c469 100644
--- a/arch/powerpc/mm/numa.c
+++ b/arch/powerpc/mm/numa.c
@@ -1417,7 +1417,7 @@ static int update_cpu_associativity_changes_mask(void)
  */
 static int vphn_unpack_associativity(const long *packed, __be32 *unpacked)
 {
-	int i, nr_assoc_doms = 0;
+	int i;
 	const __be16 *field = (const __be16 *) packed;
 
 #define VPHN_FIELD_UNUSED	(0xffff)
@@ -1425,33 +1425,29 @@ static int vphn_unpack_associativity(const long *packed, __be32 *unpacked)
 #define VPHN_FIELD_MASK		(~VPHN_FIELD_MSB)
 
 	for (i = 1; i < VPHN_ASSOC_BUFSIZE; i++) {
-		if (be16_to_cpup(field) == VPHN_FIELD_UNUSED) {
-			/* All significant fields processed, and remaining
-			 * fields contain the reserved value of all 1's.
-			 * Just store them.
+		if (be16_to_cpup(field) == VPHN_FIELD_UNUSED)
+			/* All significant fields processed.
 			 */
-			unpacked[i] = *((__be32 *)field);
-			field += 2;
-		} else if (be16_to_cpup(field) & VPHN_FIELD_MSB) {
+			break;
+
+		if (be16_to_cpup(field) & VPHN_FIELD_MSB) {
 			/* Data is in the lower 15 bits of this field */
 			unpacked[i] = cpu_to_be32(
 				be16_to_cpup(field) & VPHN_FIELD_MASK);
 			field++;
-			nr_assoc_doms++;
 		} else {
 			/* Data is in the lower 15 bits of this field
 			 * concatenated with the next 16 bit field
 			 */
 			unpacked[i] = *((__be32 *)field);
 			field += 2;
-			nr_assoc_doms++;
 		}
 	}
 
 	/* The first cell contains the length of the property */
-	unpacked[0] = cpu_to_be32(nr_assoc_doms);
+	unpacked[0] = cpu_to_be32(i - 1);
 
-	return nr_assoc_doms;
+	return i - 1;
 }
 
 /*


* [PATCH REPOST 3/3] powerpc/vphn: move endianness fixing to vphn_unpack_associativity()
  2014-11-17 17:42 [PATCH REPOST 0/3] VPHN parsing fixes Greg Kurz
  2014-11-17 17:42 ` [PATCH REPOST 1/3] powerpc/vphn: clarify the H_HOME_NODE_ASSOCIATIVITY API Greg Kurz
  2014-11-17 17:42 ` [PATCH REPOST 2/3] powerpc/vphn: simplify the parsing code Greg Kurz
@ 2014-11-17 17:42 ` Greg Kurz
  2014-11-26 23:39   ` Benjamin Herrenschmidt
  2 siblings, 1 reply; 9+ messages in thread
From: Greg Kurz @ 2014-11-17 17:42 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Paul Mackerras

The first argument to vphn_unpack_associativity() is a const long *, but the
parsing code actually expects __be64 values. This is inconsistent. We should
either pass a const __be64 * or change vphn_unpack_associativity() so that
it fixes endianness by itself.

This patch does the latter, since the caller doesn't need to know about
endianness and this allows us to fix only the significant 64-bit values.
Please note that the previous code was able to cope with 32-bit fields being
split across two consecutive 64-bit values. Since PAPR+ doesn't say this
cannot happen, the behaviour was kept. It requires extra checking to know
when fixing is needed, though.
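
As an illustration of the split case this patch keeps supporting (again with
made-up values):

/*
 * R4 = 0x8001_8002_8003_0004  <- 0x0004 has its high order bit
 * R5 = 0x0005_ffff_ffff_ffff     clear, so it starts a 32-bit
 *                                number that ends with 0x0005
 *
 * parsed domains: 0x0001, 0x0002, 0x0003, 0x00040005
 */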

Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
---
 arch/powerpc/mm/numa.c |   42 +++++++++++++++++++++++++++++-------------
 1 file changed, 29 insertions(+), 13 deletions(-)

diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
index e30c469..903ef27 100644
--- a/arch/powerpc/mm/numa.c
+++ b/arch/powerpc/mm/numa.c
@@ -1417,30 +1417,49 @@ static int update_cpu_associativity_changes_mask(void)
  */
 static int vphn_unpack_associativity(const long *packed, __be32 *unpacked)
 {
-	int i;
-	const __be16 *field = (const __be16 *) packed;
+	int i, j, k;
+	union {
+		__be64 packed[VPHN_REGISTER_COUNT];
+		__be16 field[VPHN_REGISTER_COUNT * 4];
+	} fixed;
 
 #define VPHN_FIELD_UNUSED	(0xffff)
 #define VPHN_FIELD_MSB		(0x8000)
 #define VPHN_FIELD_MASK		(~VPHN_FIELD_MSB)
 
-	for (i = 1; i < VPHN_ASSOC_BUFSIZE; i++) {
-		if (be16_to_cpup(field) == VPHN_FIELD_UNUSED)
+	for (i = 1, j = 0, k = 0; i < VPHN_ASSOC_BUFSIZE;) {
+		u16 field;
+
+		if (j % 4 == 0) {
+			fixed.packed[k] = cpu_to_be64(packed[k]);
+			k++;
+		}
+
+		field = be16_to_cpu(fixed.field[j]);
+
+		if (field == VPHN_FIELD_UNUSED)
 			/* All significant fields processed.
 			 */
 			break;
 
-		if (be16_to_cpup(field) & VPHN_FIELD_MSB) {
+		if (field & VPHN_FIELD_MSB) {
 			/* Data is in the lower 15 bits of this field */
-			unpacked[i] = cpu_to_be32(
-				be16_to_cpup(field) & VPHN_FIELD_MASK);
-			field++;
+			unpacked[i++] = cpu_to_be32(field & VPHN_FIELD_MASK);
+			j++;
 		} else {
 			/* Data is in the lower 15 bits of this field
 			 * concatenated with the next 16 bit field
 			 */
-			unpacked[i] = *((__be32 *)field);
-			field += 2;
+			if (unlikely(j % 4 == 3)) {
+				/* The next field is to be copied from the next
+				 * 64-bit input value. We must fix it now.
+				 */
+				fixed.packed[k] = cpu_to_be64(packed[k]);
+				k++;
+			}
+
+			unpacked[i++] = *((__be32 *)&fixed.field[j]);
+			j += 2;
 		}
 	}
 
@@ -1460,11 +1479,8 @@ static long hcall_vphn(unsigned long cpu, __be32 *associativity)
 	long retbuf[PLPAR_HCALL9_BUFSIZE] = {0};
 	u64 flags = 1;
 	int hwcpu = get_hard_smp_processor_id(cpu);
-	int i;
 
 	rc = plpar_hcall9(H_HOME_NODE_ASSOCIATIVITY, retbuf, flags, hwcpu);
-	for (i = 0; i < VPHN_REGISTER_COUNT; i++)
-		retbuf[i] = cpu_to_be64(retbuf[i]);
 	vphn_unpack_associativity(retbuf, associativity);
 
 	return rc;


* Re: [PATCH REPOST 3/3] powerpc/vphn: move endianness fixing to vphn_unpack_associativity()
  2014-11-17 17:42 ` [PATCH REPOST 3/3] powerpc/vphn: move endianness fixing to vphn_unpack_associativity() Greg Kurz
@ 2014-11-26 23:39   ` Benjamin Herrenschmidt
  2014-11-27  9:28     ` Greg Kurz
  0 siblings, 1 reply; 9+ messages in thread
From: Benjamin Herrenschmidt @ 2014-11-26 23:39 UTC (permalink / raw)
  To: Greg Kurz; +Cc: linuxppc-dev, Paul Mackerras

On Mon, 2014-11-17 at 18:42 +0100, Greg Kurz wrote:
> The first argument to vphn_unpack_associativity() is a const long *, but the
> parsing code actually expects __be64 values. This is inconsistent. We should
> either pass a const __be64 * or change vphn_unpack_associativity() so that
> it fixes endianness by itself.
>
> This patch does the latter, since the caller doesn't need to know about
> endianness and this allows us to fix only the significant 64-bit values.
> Please note that the previous code was able to cope with 32-bit fields being
> split across two consecutive 64-bit values. Since PAPR+ doesn't say this
> cannot happen, the behaviour was kept. It requires extra checking to know
> when fixing is needed, though.

While I agree with moving the endian fixing down, the patch makes me
nervous. Note that I don't fully understand the format of what we are
parsing here so I might be wrong but ...

>  
>  #define VPHN_FIELD_UNUSED	(0xffff)
>  #define VPHN_FIELD_MSB		(0x8000)
>  #define VPHN_FIELD_MASK		(~VPHN_FIELD_MSB)
>  
> -	for (i = 1; i < VPHN_ASSOC_BUFSIZE; i++) {
> -		if (be16_to_cpup(field) == VPHN_FIELD_UNUSED)
> +	for (i = 1, j = 0, k = 0; i < VPHN_ASSOC_BUFSIZE;) {
> +		u16 field;
> +
> +		if (j % 4 == 0) {
> +			fixed.packed[k] = cpu_to_be64(packed[k]);
> +			k++;
> +		}

So we have essentially a bunch of 16-bit fields ... the above loads and
swaps a whole 4 of them at once. However that means not only do we byteswap
them individually, but we also flip the order of the fields. Is this
ok?

> +		field = be16_to_cpu(fixed.field[j]);
> +
> +		if (field == VPHN_FIELD_UNUSED)
>  			/* All significant fields processed.
>  			 */
>  			break;

For example, we might have USED,USED,USED,UNUSED ... after the swap, we
now have UNUSED,USED,USED,USED ... and we stop parsing in the above
line on the first one. Or am I missing something ? 

> -		if (be16_to_cpup(field) & VPHN_FIELD_MSB) {
> +		if (field & VPHN_FIELD_MSB) {
>  			/* Data is in the lower 15 bits of this field */
> -			unpacked[i] = cpu_to_be32(
> -				be16_to_cpup(field) & VPHN_FIELD_MASK);
> -			field++;
> +			unpacked[i++] = cpu_to_be32(field & VPHN_FIELD_MASK);
> +			j++;
>  		} else {
>  			/* Data is in the lower 15 bits of this field
>  			 * concatenated with the next 16 bit field
>  			 */
> -			unpacked[i] = *((__be32 *)field);
> -			field += 2;
> +			if (unlikely(j % 4 == 3)) {
> +				/* The next field is to be copied from the next
> +				 * 64-bit input value. We must fix it now.
> +				 */
> +				fixed.packed[k] = cpu_to_be64(packed[k]);
> +				k++;
> +			}
> +
> +			unpacked[i++] = *((__be32 *)&fixed.field[j]);
> +			j += 2;
>  		}
>  	}
>  
> @@ -1460,11 +1479,8 @@ static long hcall_vphn(unsigned long cpu, __be32 *associativity)
>  	long retbuf[PLPAR_HCALL9_BUFSIZE] = {0};
>  	u64 flags = 1;
>  	int hwcpu = get_hard_smp_processor_id(cpu);
> -	int i;
>  
>  	rc = plpar_hcall9(H_HOME_NODE_ASSOCIATIVITY, retbuf, flags, hwcpu);
> -	for (i = 0; i < VPHN_REGISTER_COUNT; i++)
> -		retbuf[i] = cpu_to_be64(retbuf[i]);
>  	vphn_unpack_associativity(retbuf, associativity);
>  
>  	return rc;


* Re: [PATCH REPOST 3/3] powerpc/vphn: move endianness fixing to vphn_unpack_associativity()
  2014-11-26 23:39   ` Benjamin Herrenschmidt
@ 2014-11-27  9:28     ` Greg Kurz
  2014-11-28  1:49       ` Benjamin Herrenschmidt
  0 siblings, 1 reply; 9+ messages in thread
From: Greg Kurz @ 2014-11-27  9:28 UTC (permalink / raw)
  To: Benjamin Herrenschmidt; +Cc: linuxppc-dev, Paul Mackerras

On Thu, 27 Nov 2014 10:39:23 +1100
Benjamin Herrenschmidt <benh@kernel.crashing.org> wrote:

> On Mon, 2014-11-17 at 18:42 +0100, Greg Kurz wrote:
> > The first argument to vphn_unpack_associativity() is a const long *, but the
> > parsing code actually expects __be64 values. This is inconsistent. We should
> > either pass a const __be64 * or change vphn_unpack_associativity() so that
> > it fixes endianness by itself.
> >
> > This patch does the latter, since the caller doesn't need to know about
> > endianness and this allows us to fix only the significant 64-bit values.
> > Please note that the previous code was able to cope with 32-bit fields being
> > split across two consecutive 64-bit values. Since PAPR+ doesn't say this
> > cannot happen, the behaviour was kept. It requires extra checking to know
> > when fixing is needed, though.
> 
> While I agree with moving the endian fixing down, the patch makes me
> nervous. Note that I don't fully understand the format of what we are
> parsing here so I might be wrong but ...
> 

My understanding of PAPR+ is that H_HOME_NODE_ASSOCIATIVITY returns a sequence of
numbers in registers R4 to R9 (that is 64 * 6 = 384 bits). The numbers are either
16 bits long (if the high order bit is 1) or 32 bits long. The remaining unused
bits are set to 1.

Of course, in an LE guest, plpar_hcall9() stores flipped values to memory.

> >  
> >  #define VPHN_FIELD_UNUSED	(0xffff)
> >  #define VPHN_FIELD_MSB		(0x8000)
> >  #define VPHN_FIELD_MASK		(~VPHN_FIELD_MSB)
> >  
> > -	for (i = 1; i < VPHN_ASSOC_BUFSIZE; i++) {
> > -		if (be16_to_cpup(field) == VPHN_FIELD_UNUSED)
> > +	for (i = 1, j = 0, k = 0; i < VPHN_ASSOC_BUFSIZE;) {
> > +		u16 field;
> > +
> > +		if (j % 4 == 0) {
> > +			fixed.packed[k] = cpu_to_be64(packed[k]);
> > +			k++;
> > +		}
> 
> So we have essentially a bunch of 16-bit fields ... the above loads and
> swaps a whole 4 of them at once. However that means not only do we byteswap
> them individually, but we also flip the order of the fields. Is this
> ok?
> 

Yes. FWIW, it is exactly what the current code does.

> > +		field = be16_to_cpu(fixed.field[j]);
> > +
> > +		if (field == VPHN_FIELD_UNUSED)
> >  			/* All significant fields processed.
> >  			 */
> >  			break;
> 
> For example, we might have USED,USED,USED,UNUSED ... after the swap, we
> now have UNUSED,USED,USED,USED ... and we stop parsing in the above
> line on the first one. Or am I missing something ? 
> 

If we get USED,USED,USED,UNUSED from memory, that means the hypervisor
has returned UNUSED,USED,USED,USED. My point is that this cannot happen:
why would the hypervisor care to pack a sequence of useful numbers with
holes in it?
FWIW, I have never observed such a thing in a PowerVM guest... the all-ones
fields always come after the payload.

> > -		if (be16_to_cpup(field) & VPHN_FIELD_MSB) {
> > +		if (field & VPHN_FIELD_MSB) {
> >  			/* Data is in the lower 15 bits of this field */
> > -			unpacked[i] = cpu_to_be32(
> > -				be16_to_cpup(field) & VPHN_FIELD_MASK);
> > -			field++;
> > +			unpacked[i++] = cpu_to_be32(field & VPHN_FIELD_MASK);
> > +			j++;
> >  		} else {
> >  			/* Data is in the lower 15 bits of this field
> >  			 * concatenated with the next 16 bit field
> >  			 */
> > -			unpacked[i] = *((__be32 *)field);
> > -			field += 2;
> > +			if (unlikely(j % 4 == 3)) {
> > +				/* The next field is to be copied from the next
> > +				 * 64-bit input value. We must fix it now.
> > +				 */
> > +				fixed.packed[k] = cpu_to_be64(packed[k]);
> > +				k++;
> > +			}
> > +
> > +			unpacked[i++] = *((__be32 *)&fixed.field[j]);
> > +			j += 2;
> >  		}
> >  	}
> >  
> > @@ -1460,11 +1479,8 @@ static long hcall_vphn(unsigned long cpu, __be32 *associativity)
> >  	long retbuf[PLPAR_HCALL9_BUFSIZE] = {0};
> >  	u64 flags = 1;
> >  	int hwcpu = get_hard_smp_processor_id(cpu);
> > -	int i;
> >  
> >  	rc = plpar_hcall9(H_HOME_NODE_ASSOCIATIVITY, retbuf, flags, hwcpu);
> > -	for (i = 0; i < VPHN_REGISTER_COUNT; i++)
> > -		retbuf[i] = cpu_to_be64(retbuf[i]);
> >  	vphn_unpack_associativity(retbuf, associativity);
> >  
> >  	return rc;
> 
> 


* Re: [PATCH REPOST 3/3] powerpc/vphn: move endianness fixing to vphn_unpack_associativity()
  2014-11-27  9:28     ` Greg Kurz
@ 2014-11-28  1:49       ` Benjamin Herrenschmidt
  2014-11-28  8:39         ` Greg Kurz
  0 siblings, 1 reply; 9+ messages in thread
From: Benjamin Herrenschmidt @ 2014-11-28  1:49 UTC (permalink / raw)
  To: Greg Kurz; +Cc: linuxppc-dev, Paul Mackerras

On Thu, 2014-11-27 at 10:28 +0100, Greg Kurz wrote:
> On Thu, 27 Nov 2014 10:39:23 +1100
> Benjamin Herrenschmidt <benh@kernel.crashing.org> wrote:
> 
> > On Mon, 2014-11-17 at 18:42 +0100, Greg Kurz wrote:
> > > The first argument to vphn_unpack_associativity() is a const long *, but the
> > > parsing code actually expects __be64 values. This is inconsistent. We should
> > > either pass a const __be64 * or change vphn_unpack_associativity() so that
> > > it fixes endianness by itself.
> > >
> > > This patch does the latter, since the caller doesn't need to know about
> > > endianness and this allows us to fix only the significant 64-bit values.
> > > Please note that the previous code was able to cope with 32-bit fields being
> > > split across two consecutive 64-bit values. Since PAPR+ doesn't say this
> > > cannot happen, the behaviour was kept. It requires extra checking to know
> > > when fixing is needed, though.
> > 
> > While I agree with moving the endian fixing down, the patch makes me
> > nervous. Note that I don't fully understand the format of what we are
> > parsing here so I might be wrong but ...
> > 
> 
> My understanding of PAPR+ is that H_HOME_NODE_ASSOCIATIVITY returns a sequence of
> numbers in registers R4 to R9 (that is 64 * 6 = 384 bits). The numbers are either
> 16 bits long (if the high order bit is 1) or 32 bits long. The remaining unused
> bits are set to 1.

Ok, that's the bit I was missing. What we get is thus not a memory array
but a register one, which we "incorrectly" swap when writing to memory
inside plpar_hcall9().
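
Concretely, with made-up bytes for a single register:

/*
 * stream in R4:                   AA BB CC DD EE FF 00 11
 * retbuf[0] stored by an LE guest: 11 00 FF EE DD CC BB AA
 * cpu_to_be64(retbuf[0]):         AA BB CC DD EE FF 00 11  <- stream again
 */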

Now, I'm not sure that replacing:

-	for (i = 0; i < VPHN_REGISTER_COUNT; i++)
-		retbuf[i] = cpu_to_be64(retbuf[i]);

With:

+		if (j % 4 == 0) {
+			fixed.packed[k] = cpu_to_be64(packed[k]);
+			k++;
+		}

Brings any benefit in terms of readability. It makes sense to have a
"first pass" that undoes the helper swapping to re-create the original
"byte stream".

In a second pass, we parse that stream, one 16-bit field at a time, and
we could do so with a simple loop of be16_to_cpup(foo++). I wouldn't
bother with the cast to 32-bit etc... if you encounter a 32-bit case,
you just fetch another 16 bits and do value = (old << 16) | new

I think that should lead to something more readable, no?
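
For the sake of discussion, a minimal sketch of that two-pass approach could
look like this (illustrative only: it reuses the existing VPHN_* names and
elides the bounds checking a real version would need):

static int vphn_unpack_associativity(const long *packed, __be32 *unpacked)
{
	__be64 be_packed[VPHN_REGISTER_COUNT];
	const __be16 *field = (const __be16 *)be_packed;
	int i, nr = 0;

	/* First pass: undo the helper's swapping to re-create the
	 * original byte stream returned by the hypervisor.
	 */
	for (i = 0; i < VPHN_REGISTER_COUNT; i++)
		be_packed[i] = cpu_to_be64(packed[i]);

	/* Second pass: walk the stream, one 16-bit field at a time. */
	for (i = 0; i < VPHN_REGISTER_COUNT * 4; i++) {
		u16 f = be16_to_cpup(field++);
		u32 val;

		if (f == VPHN_FIELD_UNUSED)
			/* All significant fields processed. */
			break;

		if (f & VPHN_FIELD_MSB) {
			/* 16-bit number in the low order 15 bits. */
			val = f & VPHN_FIELD_MASK;
		} else {
			/* 32-bit number: low order 15 bits of this
			 * field concatenated with the next field.
			 */
			val = (u32)(f & VPHN_FIELD_MASK) << 16;
			val |= be16_to_cpup(field++);
			i++;
		}
		unpacked[++nr] = cpu_to_be32(val);
	}

	/* The first cell contains the length of the property. */
	unpacked[0] = cpu_to_be32(nr);
	return nr;
}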

> Of course, in an LE guest, plpar_hcall9() stores flipped values to memory.
> 
> > >  
> > >  #define VPHN_FIELD_UNUSED	(0xffff)
> > >  #define VPHN_FIELD_MSB		(0x8000)
> > >  #define VPHN_FIELD_MASK		(~VPHN_FIELD_MSB)
> > >  
> > > -	for (i = 1; i < VPHN_ASSOC_BUFSIZE; i++) {
> > > -		if (be16_to_cpup(field) == VPHN_FIELD_UNUSED)
> > > +	for (i = 1, j = 0, k = 0; i < VPHN_ASSOC_BUFSIZE;) {
> > > +		u16 field;
> > > +
> > > +		if (j % 4 == 0) {
> > > +			fixed.packed[k] = cpu_to_be64(packed[k]);
> > > +			k++;
> > > +		}
> > 
> > So we have essentially a bunch of 16-bit fields ... the above loads and
> > swaps a whole 4 of them at once. However that means not only do we byteswap
> > them individually, but we also flip the order of the fields. Is this
> > ok?
> > 
> 
> Yes. FWIW, it is exactly what the current code does.
> 
> > > +		field = be16_to_cpu(fixed.field[j]);
> > > +
> > > +		if (field == VPHN_FIELD_UNUSED)
> > >  			/* All significant fields processed.
> > >  			 */
> > >  			break;
> > 
> > For example, we might have USED,USED,USED,UNUSED ... after the swap, we
> > now have UNUSED,USED,USED,USED ... and we stop parsing in the above
> > line on the first one. Or am I missing something ? 
> > 
> 
> If we get USED,USED,USED,UNUSED from memory, that means the hypervisor
> has returned UNUSED,USED,USED,USED. My point is that this cannot happen:
> why would the hypervisor care to pack a sequence of useful numbers with
> holes in it?
> FWIW, I have never observed such a thing in a PowerVM guest... the all-ones
> fields always come after the payload.
> 
> > > -		if (be16_to_cpup(field) & VPHN_FIELD_MSB) {
> > > +		if (field & VPHN_FIELD_MSB) {
> > >  			/* Data is in the lower 15 bits of this field */
> > > -			unpacked[i] = cpu_to_be32(
> > > -				be16_to_cpup(field) & VPHN_FIELD_MASK);
> > > -			field++;
> > > +			unpacked[i++] = cpu_to_be32(field & VPHN_FIELD_MASK);
> > > +			j++;
> > >  		} else {
> > >  			/* Data is in the lower 15 bits of this field
> > >  			 * concatenated with the next 16 bit field
> > >  			 */
> > > -			unpacked[i] = *((__be32 *)field);
> > > -			field += 2;
> > > +			if (unlikely(j % 4 == 3)) {
> > > +				/* The next field is to be copied from the next
> > > +				 * 64-bit input value. We must fix it now.
> > > +				 */
> > > +				fixed.packed[k] = cpu_to_be64(packed[k]);
> > > +				k++;
> > > +			}
> > > +
> > > +			unpacked[i++] = *((__be32 *)&fixed.field[j]);
> > > +			j += 2;
> > >  		}
> > >  	}
> > >  
> > > @@ -1460,11 +1479,8 @@ static long hcall_vphn(unsigned long cpu, __be32 *associativity)
> > >  	long retbuf[PLPAR_HCALL9_BUFSIZE] = {0};
> > >  	u64 flags = 1;
> > >  	int hwcpu = get_hard_smp_processor_id(cpu);
> > > -	int i;
> > >  
> > >  	rc = plpar_hcall9(H_HOME_NODE_ASSOCIATIVITY, retbuf, flags, hwcpu);
> > > -	for (i = 0; i < VPHN_REGISTER_COUNT; i++)
> > > -		retbuf[i] = cpu_to_be64(retbuf[i]);
> > >  	vphn_unpack_associativity(retbuf, associativity);
> > >  
> > >  	return rc;
> > 
> > 


* Re: [PATCH REPOST 3/3] powerpc/vphn: move endianness fixing to vphn_unpack_associativity()
  2014-11-28  1:49       ` Benjamin Herrenschmidt
@ 2014-11-28  8:39         ` Greg Kurz
  2014-12-01  9:17           ` Michael Ellerman
  0 siblings, 1 reply; 9+ messages in thread
From: Greg Kurz @ 2014-11-28  8:39 UTC (permalink / raw)
  To: Benjamin Herrenschmidt; +Cc: linuxppc-dev, Paul Mackerras

On Fri, 28 Nov 2014 12:49:08 +1100
Benjamin Herrenschmidt <benh@kernel.crashing.org> wrote:

> On Thu, 2014-11-27 at 10:28 +0100, Greg Kurz wrote:
> > On Thu, 27 Nov 2014 10:39:23 +1100
> > Benjamin Herrenschmidt <benh@kernel.crashing.org> wrote:
> > 
> > > On Mon, 2014-11-17 at 18:42 +0100, Greg Kurz wrote:
> > > > The first argument to vphn_unpack_associativity() is a const long *, but the
> > > > parsing code actually expects __be64 values. This is inconsistent. We should
> > > > either pass a const __be64 * or change vphn_unpack_associativity() so that
> > > > it fixes endianness by itself.
> > > >
> > > > This patch does the latter, since the caller doesn't need to know about
> > > > endianness and this allows us to fix only the significant 64-bit values.
> > > > Please note that the previous code was able to cope with 32-bit fields being
> > > > split across two consecutive 64-bit values. Since PAPR+ doesn't say this
> > > > cannot happen, the behaviour was kept. It requires extra checking to know
> > > > when fixing is needed, though.
> > > 
> > > While I agree with moving the endian fixing down, the patch makes me
> > > nervous. Note that I don't fully understand the format of what we are
> > > parsing here so I might be wrong but ...
> > > 
> > 
> > My understanding of PAPR+ is that H_HOME_NODE_ASSOCIATIVITY returns a sequence of
> > numbers in registers R4 to R9 (that is 64 * 6 = 384 bits). The numbers are either
> > 16 bits long (if the high order bit is 1) or 32 bits long. The remaining unused
> > bits are set to 1.
> 
> Ok, that's the bit I was missing. What we get is thus not a memory array
> but a register one, which we "incorrectly" swap when writing to memory
> inside plpar_hcall9().
> 

Yes.

> Now, I'm not sure that replacing:
> 
> -	for (i = 0; i < VPHN_REGISTER_COUNT; i++)
> -		retbuf[i] = cpu_to_be64(retbuf[i]);
> 
> With:
> 
> +		if (j % 4 == 0) {
> +			fixed.packed[k] = cpu_to_be64(packed[k]);
> +			k++;
> +		}
> 
> Brings any benefit in terms of readability. It makes sense to have a
> "first pass" that undoes the helper swapping to re-create the original
> "byte stream".
> 

I was myself not quite satisfied with this change and was looking for some tips :)

> In a second pass, we parse that stream, one 16-bit field at a time, and
> we could do so with a simple loop of be16_to_cpup(foo++). I wouldn't
> bother with the cast to 32-bit etc... if you encounter a 32-bit case,
> you just fetch another 16 bits and do value = (old << 16) | new
>
> I think that should lead to something more readable, no?
> 

Of course! This is THE way to go. Thanks Ben! :)

And while we're here, I have a question about VPHN_ASSOC_BUFSIZE. The
H_HOME_NODE_ASSOCIATIVITY spec says that the stream:
- is at most 64 * 6 = 384 bits long
- may contain 16-bit numbers
- is padded with "all ones"

The stream could theoretically contain up to 384 / 16 = 24 domain numbers.

The current code expects no more than 12 domain numbers... and strangely
seems to correlate the size of the output array to the size of the input
one as noted in the comment:

 "6 64-bit registers unpacked into 12 32-bit associativity values"

My understanding is that the resulting array is be32 only because it is
supposed to look like the ibm,associativity property from the DT... and
I could find no clue that this property is limited to 12 values. Have I
missed something?

> > Of course, in an LE guest, plpar_hcall9() stores flipped values to memory.
> > 
> > > >  
> > > >  #define VPHN_FIELD_UNUSED	(0xffff)
> > > >  #define VPHN_FIELD_MSB		(0x8000)
> > > >  #define VPHN_FIELD_MASK		(~VPHN_FIELD_MSB)
> > > >  
> > > > -	for (i = 1; i < VPHN_ASSOC_BUFSIZE; i++) {
> > > > -		if (be16_to_cpup(field) == VPHN_FIELD_UNUSED)
> > > > +	for (i = 1, j = 0, k = 0; i < VPHN_ASSOC_BUFSIZE;) {
> > > > +		u16 field;
> > > > +
> > > > +		if (j % 4 == 0) {
> > > > +			fixed.packed[k] = cpu_to_be64(packed[k]);
> > > > +			k++;
> > > > +		}
> > > 
> > > So we have essentially a bunch of 16-bit fields ... the above loads and
> > > swaps a whole 4 of them at once. However that means not only do we byteswap
> > > them individually, but we also flip the order of the fields. Is this
> > > ok?
> > > 
> > 
> > Yes. FWIW, it is exactly what the current code does.
> > 
> > > > +		field = be16_to_cpu(fixed.field[j]);
> > > > +
> > > > +		if (field == VPHN_FIELD_UNUSED)
> > > >  			/* All significant fields processed.
> > > >  			 */
> > > >  			break;
> > > 
> > > For example, we might have USED,USED,USED,UNUSED ... after the swap, we
> > > now have UNUSED,USED,USED,USED ... and we stop parsing in the above
> > > line on the first one. Or am I missing something ? 
> > > 
> > 
> > If we get USED,USED,USED,UNUSED from memory, that means the hypervisor
> > has returned UNUSED,USED,USED,USED. My point is that this cannot happen:
> > why would the hypervisor care to pack a sequence of useful numbers with
> > holes in it?
> > FWIW, I have never observed such a thing in a PowerVM guest... the all-ones
> > fields always come after the payload.
> > 
> > > > -		if (be16_to_cpup(field) & VPHN_FIELD_MSB) {
> > > > +		if (field & VPHN_FIELD_MSB) {
> > > >  			/* Data is in the lower 15 bits of this field */
> > > > -			unpacked[i] = cpu_to_be32(
> > > > -				be16_to_cpup(field) & VPHN_FIELD_MASK);
> > > > -			field++;
> > > > +			unpacked[i++] = cpu_to_be32(field & VPHN_FIELD_MASK);
> > > > +			j++;
> > > >  		} else {
> > > >  			/* Data is in the lower 15 bits of this field
> > > >  			 * concatenated with the next 16 bit field
> > > >  			 */
> > > > -			unpacked[i] = *((__be32 *)field);
> > > > -			field += 2;
> > > > +			if (unlikely(j % 4 == 3)) {
> > > > +				/* The next field is to be copied from the next
> > > > +				 * 64-bit input value. We must fix it now.
> > > > +				 */
> > > > +				fixed.packed[k] = cpu_to_be64(packed[k]);
> > > > +				k++;
> > > > +			}
> > > > +
> > > > +			unpacked[i++] = *((__be32 *)&fixed.field[j]);
> > > > +			j += 2;
> > > >  		}
> > > >  	}
> > > >  
> > > > @@ -1460,11 +1479,8 @@ static long hcall_vphn(unsigned long cpu, __be32 *associativity)
> > > >  	long retbuf[PLPAR_HCALL9_BUFSIZE] = {0};
> > > >  	u64 flags = 1;
> > > >  	int hwcpu = get_hard_smp_processor_id(cpu);
> > > > -	int i;
> > > >  
> > > >  	rc = plpar_hcall9(H_HOME_NODE_ASSOCIATIVITY, retbuf, flags, hwcpu);
> > > > -	for (i = 0; i < VPHN_REGISTER_COUNT; i++)
> > > > -		retbuf[i] = cpu_to_be64(retbuf[i]);
> > > >  	vphn_unpack_associativity(retbuf, associativity);
> > > >  
> > > >  	return rc;
> > > 
> > > 
> 
> 


* Re: [PATCH REPOST 3/3] powerpc/vphn: move endianness fixing to vphn_unpack_associativity()
  2014-11-28  8:39         ` Greg Kurz
@ 2014-12-01  9:17           ` Michael Ellerman
  0 siblings, 0 replies; 9+ messages in thread
From: Michael Ellerman @ 2014-12-01  9:17 UTC (permalink / raw)
  To: Greg Kurz; +Cc: linuxppc-dev, Paul Mackerras

On Fri, 2014-11-28 at 09:39 +0100, Greg Kurz wrote:
> On Fri, 28 Nov 2014 12:49:08 +1100
> Benjamin Herrenschmidt <benh@kernel.crashing.org> wrote:
> > In a second pass, we parse that stream, one 16-bit field at a time, and
> > we could do so with a simple loop of be16_to_cpup(foo++). I wouldn't
> > bother with the cast to 32-bit etc... if you encounter a 32-bit case,
> > you just fetch another 16 bits and do value = (old << 16) | new
> >
> > I think that should lead to something more readable, no?
> 
> Of course! This is THE way to go. Thanks Ben! :)
> 
> And while we're here, I have a question about VPHN_ASSOC_BUFSIZE. The
> H_HOME_NODE_ASSOCIATIVITY spec says that the stream:
> - is at most 64 * 6 = 384 bits long

That's from "Each of the registers R4-R9 ..."

> - may contain 16-bit numbers

"... is divided into 4 fields each 2 bytes long."

> - is padded with "all ones"
> 
> The stream could theoretically contain up to 384 / 16 = 24 domain numbers.

Yes, I think that's right, based on:

"The high order bit of each 2 byte field is a length specifier:

1: The associativity domain number is contained in the low order 15 bits of the field,"

But then there's also:

"0: The associativity domain number is contained in the low order 15 bits of
the current field concatenated with the 16 bits of the next sequential field)"

> The current code expects no more than 12 domain numbers... and strangely
> seems to correlate the size of the output array to the size of the input
> one as noted in the comment:
> 
>  "6 64-bit registers unpacked into 12 32-bit associativity values"
> 
> My understanding is that the resulting array is be32 only because it is
> supposed to look like the ibm,associativity property from the DT... and
> I could find no clue that this property is limited to 12 values. Have I
> missed something?

I don't know for sure, but I strongly suspect it's just confused about the two
options above. Probably when it was tested they only ever saw 12 32-bit values,
and so that assumption was allowed to stay in the code.

I'd be quite happy if you wanted to pull the parsing logic out into a separate
file, so we could write some userspace tests of it.
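
A harness along those lines might look like this (purely hypothetical: it
assumes the parser and its VPHN_* definitions have been moved to a vphn.h
that builds in userspace, with the kernel types and be32/be64 helpers
stubbed appropriately):

/* test-vphn.c: hypothetical userspace harness, not actual kernel code */
#include <assert.h>
#include <stdio.h>
#include "vphn.h"	/* assumed to provide vphn_unpack_associativity(),
			 * the VPHN_* constants and the be32 helpers */

int main(void)
{
	/* Two 16-bit domain numbers, then all-ones padding. */
	long packed[VPHN_REGISTER_COUNT] = {
		(long)0x80018002ffffffffUL, -1, -1, -1, -1, -1
	};
	__be32 unpacked[VPHN_ASSOC_BUFSIZE] = { 0 };

	assert(vphn_unpack_associativity(packed, unpacked) == 2);
	assert(unpacked[0] == cpu_to_be32(2));
	assert(unpacked[1] == cpu_to_be32(1));
	assert(unpacked[2] == cpu_to_be32(2));
	printf("vphn parsing: ok\n");
	return 0;
}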

cheers

