From: Greg Kurz <gkurz@linux.vnet.ibm.com>
To: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: linuxppc-dev@lists.ozlabs.org, Paul Mackerras <paulus@samba.org>
Subject: Re: [PATCH REPOST 3/3] powerpc/vphn: move endianness fixing to vphn_unpack_associativity()
Date: Thu, 27 Nov 2014 10:28:12 +0100
Message-ID: <20141127102812.7d1e625b@bahia.local>
In-Reply-To: <1417045163.5089.67.camel@kernel.crashing.org>
On Thu, 27 Nov 2014 10:39:23 +1100
Benjamin Herrenschmidt <benh@kernel.crashing.org> wrote:
> On Mon, 2014-11-17 at 18:42 +0100, Greg Kurz wrote:
> > The first argument to vphn_unpack_associativity() is a const long *, but the
> > parsing code actually expects __be64 values. This is inconsistent. We should
> > either pass a const __be64 * or change vphn_unpack_associativity() so that
> > it fixes endianness by itself.
> >
> > This patch does the latter, since the caller doesn't need to know about
> > endianness and this allows fixing only the significant 64-bit values. Please
> > note that the previous code was able to cope with 32-bit fields being split
> > across two consecutive 64-bit values. Since PAPR+ doesn't say this cannot
> > happen, the behaviour was kept. It requires extra checking to know when
> > fixing is needed, though.
>
> While I agree with moving the endian fixing down, the patch makes me
> nervous. Note that I don't fully understand the format of what we are
> parsing here so I might be wrong but ...
>
My understanding of PAPR+ is that H_HOME_NODE_ASSOCIATIVITY returns a sequence of
numbers in registers R4 to R9 (that is 6 * 64 = 384 bits). The numbers are either
16 bits long (if the high-order bit is 1) or 32 bits long. The remaining unused
bits are set to 1.
Of course, in an LE guest, plpar_hcall9() stores byte-flipped values to memory.
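To illustrate the packing, here is a quick standalone demo (not the kernel parser:
the helper name and the sample payload are made up, only the VPHN_FIELD_* constants
mirror the kernel's) that unpacks the register values as the hypervisor returns
them:

#include <stdint.h>
#include <stdio.h>

#define VPHN_NR_REGS      6          /* R4..R9                        */
#define VPHN_NR_FIELDS    (VPHN_NR_REGS * 4)
#define VPHN_FIELD_UNUSED 0xffffu    /* all-ones marks the padding    */
#define VPHN_FIELD_MSB    0x8000u    /* set: 16-bit number            */
#define VPHN_FIELD_MASK   0x7fffu    /* ~VPHN_FIELD_MSB in the kernel */

/* Demo only: 'regs' holds R4..R9 as returned by the hypervisor
 * (register values, not what a LE guest would see in memory). */
static int vphn_demo_unpack(const uint64_t *regs, uint32_t *out)
{
	uint16_t field[VPHN_NR_FIELDS];
	int i, n = 0;

	/* Flatten the registers into a stream of 16-bit fields,
	 * most significant field of R4 first. */
	for (i = 0; i < VPHN_NR_REGS; i++) {
		field[4 * i + 0] = (uint16_t)(regs[i] >> 48);
		field[4 * i + 1] = (uint16_t)(regs[i] >> 32);
		field[4 * i + 2] = (uint16_t)(regs[i] >> 16);
		field[4 * i + 3] = (uint16_t)regs[i];
	}

	for (i = 0; i < VPHN_NR_FIELDS; ) {
		if (field[i] == VPHN_FIELD_UNUSED)
			break;                    /* end of the payload */
		if (field[i] & VPHN_FIELD_MSB) {
			/* 16-bit number: value in the lower 15 bits */
			out[n++] = field[i] & VPHN_FIELD_MASK;
			i += 1;
		} else {
			/* 32-bit number: this field concatenated with the
			 * next one, which may sit in the next register */
			if (i + 1 >= VPHN_NR_FIELDS)
				break;            /* malformed input */
			out[n++] = ((uint32_t)field[i] << 16) | field[i + 1];
			i += 2;
		}
	}
	return n;
}

int main(void)
{
	/* Made-up payload: two 16-bit numbers, one 32-bit number,
	 * then all-ones padding up to 384 bits. */
	uint64_t regs[VPHN_NR_REGS] = {
		0x8001800200012345ULL,
		0xffffffffffffffffULL, 0xffffffffffffffffULL,
		0xffffffffffffffffULL, 0xffffffffffffffffULL,
		0xffffffffffffffffULL,
	};
	uint32_t assoc[VPHN_NR_FIELDS];
	int i, n = vphn_demo_unpack(regs, assoc);

	for (i = 0; i < n; i++)
		printf("0x%x\n", assoc[i]);       /* 0x1, 0x2, 0x12345 */
	return 0;
}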
> >
> > #define VPHN_FIELD_UNUSED (0xffff)
> > #define VPHN_FIELD_MSB (0x8000)
> > #define VPHN_FIELD_MASK (~VPHN_FIELD_MSB)
> >
> > - for (i = 1; i < VPHN_ASSOC_BUFSIZE; i++) {
> > - if (be16_to_cpup(field) == VPHN_FIELD_UNUSED)
> > + for (i = 1, j = 0, k = 0; i < VPHN_ASSOC_BUFSIZE;) {
> > + u16 field;
> > +
> > + if (j % 4 == 0) {
> > + fixed.packed[k] = cpu_to_be64(packed[k]);
> > + k++;
> > + }
>
> So we have essentially a bunch of 16-bit fields ... the above loads and
> swaps a whole 4 of them at once. However, that means not only do we byteswap
> them individually, but we also flip the order of the fields. Is this
> ok?
>
Yes. FWIW, it is exactly what the current code does.
> > + field = be16_to_cpu(fixed.field[j]);
> > +
> > + if (field == VPHN_FIELD_UNUSED)
> > /* All significant fields processed.
> > */
> > break;
>
> For example, we might have USED,USED,USED,UNUSED ... after the swap, we
> now have UNUSED,USED,USED,USED ... and we stop parsing in the above
> line on the first one. Or am I missing something?
>
If we get USED,USED,USED,UNUSED from memory, that means the hypervisor
has returned UNUSED,USED,USED,USED. My point is that this cannot happen:
why would the hypervisor care to pack a sequence of useful numbers with
holes in it?
FWIW, I have never observed such a thing in a PowerVM guest... the all-ones
fields always come after the payload.
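To make it concrete, here is a tiny standalone demo, meant to be run on a
little-endian box (the sample value is made up and __builtin_bswap64() stands
in for cpu_to_be64()): it shows that the whole-doubleword swap restores the
hypervisor's field order, so the all-ones padding can only show up after the
used fields.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	/* Made-up register value: three used fields, then the all-ones
	 * padding, as the hypervisor would lay it out. */
	uint64_t reg = 0x800180028003ffffULL;
	unsigned char mem[8];
	int i;

	/* What plpar_hcall9() leaves in memory on a LE guest is just the
	 * native (little-endian) representation of that value... */
	memcpy(mem, &reg, sizeof(reg));
	printf("raw LE memory:");
	for (i = 0; i < 8; i++)
		printf(" %02x", mem[i]);     /* ff ff 03 80 02 80 01 80 */
	printf("\n");

	/* ...so walking it as 16-bit fields would indeed show the padding
	 * first. The whole-doubleword swap (cpu_to_be64() in the kernel)
	 * puts the bytes back in big-endian order... */
	uint64_t be = __builtin_bswap64(reg);
	memcpy(mem, &be, sizeof(be));

	/* ...and reading the 16-bit fields in memory order then yields the
	 * hypervisor's original sequence: used, used, used, unused. */
	for (i = 0; i < 4; i++) {
		uint16_t f = (uint16_t)((mem[2 * i] << 8) | mem[2 * i + 1]);
		printf("field %d: 0x%04x%s\n", i, f,
		       f == 0xffff ? " (unused)" : "");
	}
	return 0;
}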
> > - if (be16_to_cpup(field) & VPHN_FIELD_MSB) {
> > + if (field & VPHN_FIELD_MSB) {
> > /* Data is in the lower 15 bits of this field */
> > - unpacked[i] = cpu_to_be32(
> > - be16_to_cpup(field) & VPHN_FIELD_MASK);
> > - field++;
> > + unpacked[i++] = cpu_to_be32(field & VPHN_FIELD_MASK);
> > + j++;
> > } else {
> > /* Data is in the lower 15 bits of this field
> > * concatenated with the next 16 bit field
> > */
> > - unpacked[i] = *((__be32 *)field);
> > - field += 2;
> > + if (unlikely(j % 4 == 3)) {
> > + /* The next field is to be copied from the next
> > + * 64-bit input value. We must fix it now.
> > + */
> > + fixed.packed[k] = cpu_to_be64(packed[k]);
> > + k++;
> > + }
> > +
> > + unpacked[i++] = *((__be32 *)&fixed.field[j]);
> > + j += 2;
> > }
> > }
> >
> > @@ -1460,11 +1479,8 @@ static long hcall_vphn(unsigned long cpu, __be32 *associativity)
> > long retbuf[PLPAR_HCALL9_BUFSIZE] = {0};
> > u64 flags = 1;
> > int hwcpu = get_hard_smp_processor_id(cpu);
> > - int i;
> >
> > rc = plpar_hcall9(H_HOME_NODE_ASSOCIATIVITY, retbuf, flags, hwcpu);
> > - for (i = 0; i < VPHN_REGISTER_COUNT; i++)
> > - retbuf[i] = cpu_to_be64(retbuf[i]);
> > vphn_unpack_associativity(retbuf, associativity);
> >
> > return rc;
>
>
Thread overview: 9+ messages
2014-11-17 17:42 [PATCH REPOST 0/3] VPHN parsing fixes Greg Kurz
2014-11-17 17:42 ` [PATCH REPOST 1/3] powerpc/vphn: clarify the H_HOME_NODE_ASSOCIATIVITY API Greg Kurz
2014-11-17 17:42 ` [PATCH REPOST 2/3] powerpc/vphn: simplify the parsing code Greg Kurz
2014-11-17 17:42 ` [PATCH REPOST 3/3] powerpc/vphn: move endianness fixing to vphn_unpack_associativity() Greg Kurz
2014-11-26 23:39 ` Benjamin Herrenschmidt
2014-11-27 9:28 ` Greg Kurz [this message]
2014-11-28 1:49 ` Benjamin Herrenschmidt
2014-11-28 8:39 ` Greg Kurz
2014-12-01 9:17 ` Michael Ellerman