linuxppc-dev.lists.ozlabs.org archive mirror
From: Greg Kurz <gkurz@linux.vnet.ibm.com>
To: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: linuxppc-dev@lists.ozlabs.org, Paul Mackerras <paulus@samba.org>
Subject: Re: [PATCH REPOST 3/3] powerpc/vphn: move endianness fixing to vphn_unpack_associativity()
Date: Fri, 28 Nov 2014 09:39:19 +0100	[thread overview]
Message-ID: <20141128093919.700f1874@bahia.local> (raw)
In-Reply-To: <1417139348.2852.17.camel@kernel.crashing.org>

On Fri, 28 Nov 2014 12:49:08 +1100
Benjamin Herrenschmidt <benh@kernel.crashing.org> wrote:

> On Thu, 2014-11-27 at 10:28 +0100, Greg Kurz wrote:
> > On Thu, 27 Nov 2014 10:39:23 +1100
> > Benjamin Herrenschmidt <benh@kernel.crashing.org> wrote:
> > 
> > > On Mon, 2014-11-17 at 18:42 +0100, Greg Kurz wrote:
> > > > The first argument to vphn_unpack_associativity() is a const long *, but the
> > > > parsing code actually expects __be64 values. This is inconsistent. We should
> > > > either pass a const __be64 * or change vphn_unpack_associativity() so that
> > > > it fixes endianness by itself.
> > > > 
> > > > This patch does the latter, since the caller doesn't need to know about
> > > > endianness and this allows fixing only the significant 64-bit values. Please
> > > > note that the previous code was able to cope with 32-bit fields being split
> > > > across two consecutive 64-bit values. Since PAPR+ doesn't say this cannot
> > > > happen, the behaviour was kept. It requires extra checking to know when fixing
> > > > is needed, though.
> > > 
> > > While I agree with moving the endian fixing down, the patch makes me
> > > nervous. Note that I don't fully understand the format of what we are
> > > parsing here so I might be wrong but ...
> > > 
> > 
> > My understanding of PAPR+ is that H_HOME_NODE_ASSOCIATIVITY returns a sequence of
> > numbers in registers R4 to R9 (that is 64 * 6 = 384 bits). The numbers are either
> > 16 bits long (if the high-order bit is 1) or 32 bits long. The remaining unused
> > bits are set to 1.
> 
> Ok, that's the bit I was missing. What we get is thus not a memory array
> but a register one, which we "incorrectly" swap when writing to memory
> inside plpar_hcall9().
> 

Yes.
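
To put hypothetical numbers on it, say R4 comes back holding four 16-bit
fields (made-up values, just for illustration):

	unsigned long reg = 0x8001800280038004UL;	/* 8001 8002 8003 8004 */
	__be64 stream;

	/* On a LE guest, the bytes of 'reg' in memory read:
	 *   04 80 03 80 02 80 01 80
	 * i.e. each 16-bit field is byteswapped *and* the field order inside
	 * the doubleword is reversed. cpu_to_be64() undoes both at once:
	 */
	stream = cpu_to_be64(reg);	/* bytes: 80 01 80 02 80 03 80 04 */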

> Now, I'm not sure that replacing:
> 
> -	for (i = 0; i < VPHN_REGISTER_COUNT; i++)
> -		retbuf[i] = cpu_to_be64(retbuf[i]);
> 
> With:
> 
> +		if (j % 4 == 0) {
> +			fixed.packed[k] = cpu_to_be64(packed[k]);
> +			k++;
> +		}
> 
> Brings any benefit in terms of readability. It makes sense to have a
> "first pass" that undoes the helper swapping to re-create the original
> "byte stream".
> 

I was myself not quite satisfied with this change and was looking for some tips :)

> In a second pass, we parse that stream, one 16-bit field at a time, and
> we could do so with a simple loop of be16_to_cpup(foo++). I wouldn't
> bother with the cast to 32-bit etc... if you encounter a 32-bit case,
> you just fetch another 16-bit field and do value = (old << 16) | new
> 
> I think that should lead to something more readable, no ?
> 

Of course! This is THE way to go. Thanks Ben! :)
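
Something like this, then? A completely untested sketch, reusing the names
from the existing code (VPHN_REGISTER_COUNT, VPHN_ASSOC_BUFSIZE and the
VPHN_FIELD_* defines), just to check I got the idea right:

static void vphn_unpack_associativity(const long *packed, __be32 *unpacked)
{
	__be64 be_packed[VPHN_REGISTER_COUNT];
	const __be16 *field = (const __be16 *)be_packed;
	int i, nr_assoc_doms = 0;

	/* First pass: undo the implicit swap done when plpar_hcall9()
	 * stored the registers to memory, so that be_packed[] holds the
	 * original byte stream again.
	 */
	for (i = 0; i < VPHN_REGISTER_COUNT; i++)
		be_packed[i] = cpu_to_be64(packed[i]);

	/* Second pass: walk the stream one 16-bit field at a time. */
	for (i = 1; i < VPHN_ASSOC_BUFSIZE; i++) {
		u16 hi = be16_to_cpup(field++);

		if (hi == VPHN_FIELD_UNUSED)
			/* All significant fields processed */
			break;

		if (hi & VPHN_FIELD_MSB) {
			/* 16-bit number: data is in the lower 15 bits */
			unpacked[i] = cpu_to_be32(hi & VPHN_FIELD_MASK);
		} else {
			/* 32-bit number: lower 15 bits of this field
			 * concatenated with the next 16-bit field
			 */
			u16 lo = be16_to_cpup(field++);

			unpacked[i] = cpu_to_be32((u32)hi << 16 | lo);
		}
		nr_assoc_doms++;
	}

	/* The first cell of the property holds the number of domains */
	unpacked[0] = cpu_to_be32(nr_assoc_doms);
}

Since the whole stream is fixed up front, the second loop no longer needs
to care about a 32-bit value crossing a 64-bit boundary.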

And while we're here, I have a question about VPHN_ASSOC_BUFSIZE. The
H_HOME_NODE_ASSOCIATIVITY spec says that the stream:
- is at most 64 * 6 = 384 bits long
- may contain 16-bit numbers
- is padded with "all ones"

The stream could theoretically contain up to 384 / 16 = 24 domain numbers.

The current code expects no more than 12 domain numbers... and, strangely,
seems to derive the size of the output array from the size of the input
one, as noted in the comment:

 "6 64-bit registers unpacked into 12 32-bit associativity values"

My understanding is that the resulting array is be32 only because it is
supposed to look like the ibm,associativity property from the DT... and
I could find no clue that this property is limited to 12 values. Have I
missed something?
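
To spell out the arithmetic (the first define is how I read the current
sizing; the second is purely hypothetical, just to show the difference):

/* current: 6 registers unpacked into at most 12 32-bit values,
 * plus the leading length cell
 */
#define VPHN_ASSOC_BUFSIZE	(6 * sizeof(u64) / sizeof(u32) + 1)	/* 13 */

/* worst case, if the stream were made of 16-bit numbers only:
 * 6 registers * 4 fields each, plus the leading length cell
 * (hypothetical name, not in the tree)
 */
#define VPHN_ASSOC_BUFSIZE_MAX	(6 * sizeof(u64) / sizeof(u16) + 1)	/* 25 */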

> > Of course, in a LE guest, plpar_hcall9() stores flipped values to memory.
> > 
> > > >  
> > > >  #define VPHN_FIELD_UNUSED	(0xffff)
> > > >  #define VPHN_FIELD_MSB		(0x8000)
> > > >  #define VPHN_FIELD_MASK		(~VPHN_FIELD_MSB)
> > > >  
> > > > -	for (i = 1; i < VPHN_ASSOC_BUFSIZE; i++) {
> > > > -		if (be16_to_cpup(field) == VPHN_FIELD_UNUSED)
> > > > +	for (i = 1, j = 0, k = 0; i < VPHN_ASSOC_BUFSIZE;) {
> > > > +		u16 field;
> > > > +
> > > > +		if (j % 4 == 0) {
> > > > +			fixed.packed[k] = cpu_to_be64(packed[k]);
> > > > +			k++;
> > > > +		}
> > > 
> > > So we have essentially a bunch of 16-bit fields ... the above loads and
> > > swaps a whole 4 of them at once. However, that means we not only byteswap
> > > them individually, but also flip the order of the fields. Is this
> > > ok?
> > > 
> > 
> > Yes. FWIW, it is exactly what the current code does.
> > 
> > > > +		field = be16_to_cpu(fixed.field[j]);
> > > > +
> > > > +		if (field == VPHN_FIELD_UNUSED)
> > > >  			/* All significant fields processed.
> > > >  			 */
> > > >  			break;
> > > 
> > > For example, we might have USED,USED,USED,UNUSED ... after the swap, we
> > > now have UNUSED,USED,USED,USED ... and we stop parsing in the above
> > > line on the first one. Or am I missing something?
> > > 
> > 
> > If we get USED,USED,USED,UNUSED from memory, that means the hypervisor
> > has returned UNUSED,USED,USED,USED. My point is that this cannot happen:
> > why would the hypervisor bother to pack a sequence of useful numbers with
> > holes in it?
> > FWIW, I have never observed such a thing in a PowerVM guest... All ones always
> > come after the payload.
> > 
> > > > -		if (be16_to_cpup(field) & VPHN_FIELD_MSB) {
> > > > +		if (field & VPHN_FIELD_MSB) {
> > > >  			/* Data is in the lower 15 bits of this field */
> > > > -			unpacked[i] = cpu_to_be32(
> > > > -				be16_to_cpup(field) & VPHN_FIELD_MASK);
> > > > -			field++;
> > > > +			unpacked[i++] = cpu_to_be32(field & VPHN_FIELD_MASK);
> > > > +			j++;
> > > >  		} else {
> > > >  			/* Data is in the lower 15 bits of this field
> > > >  			 * concatenated with the next 16 bit field
> > > >  			 */
> > > > -			unpacked[i] = *((__be32 *)field);
> > > > -			field += 2;
> > > > +			if (unlikely(j % 4 == 3)) {
> > > > +				/* The next field is to be copied from the next
> > > > +				 * 64-bit input value. We must fix it now.
> > > > +				 */
> > > > +				fixed.packed[k] = cpu_to_be64(packed[k]);
> > > > +				k++;
> > > > +			}
> > > > +
> > > > +			unpacked[i++] = *((__be32 *)&fixed.field[j]);
> > > > +			j += 2;
> > > >  		}
> > > >  	}
> > > >  
> > > > @@ -1460,11 +1479,8 @@ static long hcall_vphn(unsigned long cpu, __be32 *associativity)
> > > >  	long retbuf[PLPAR_HCALL9_BUFSIZE] = {0};
> > > >  	u64 flags = 1;
> > > >  	int hwcpu = get_hard_smp_processor_id(cpu);
> > > > -	int i;
> > > >  
> > > >  	rc = plpar_hcall9(H_HOME_NODE_ASSOCIATIVITY, retbuf, flags, hwcpu);
> > > > -	for (i = 0; i < VPHN_REGISTER_COUNT; i++)
> > > > -		retbuf[i] = cpu_to_be64(retbuf[i]);
> > > >  	vphn_unpack_associativity(retbuf, associativity);
> > > >  
> > > >  	return rc;
> > > 
> > > 
> 
> 

