From: Nicholas Piggin <npiggin@gmail.com>
To: Paul Mackerras <paulus@ozlabs.org>
Cc: linuxppc-dev@ozlabs.org
Subject: Re: [RFC PATCH] powerpc/powernv: Provide a way to force a core into SMT4 mode
Date: Sat, 27 Jan 2018 14:47:40 +1000 [thread overview]
Message-ID: <20180127144740.060c8be5@roar.ozlabs.ibm.com> (raw)
In-Reply-To: <20180127024546.GB5360@fergus.ozlabs.ibm.com>
On Sat, 27 Jan 2018 13:45:46 +1100
Paul Mackerras <paulus@ozlabs.org> wrote:
> On Sat, Jan 27, 2018 at 10:27:35AM +1000, Nicholas Piggin wrote:
> > On Thu, 25 Jan 2018 16:05:12 +1100
> > Paul Mackerras <paulus@ozlabs.org> wrote:
> >
> > > POWER9 processors up to and including "Nimbus" v2.2 have hardware
> > > bugs relating to transactional memory and thread reconfiguration.
> > > One of these bugs has a workaround which is to get the core into
> > > SMT4 state temporarily. This workaround is only needed when
> > > running bare-metal.
> >
> > How often will this be triggered, in practice? If it's infrequent,
> > then would it be better to just do a smp_call_function on siblings
> > and get them all spinning there? I'm looking sadly at the added
> > sync...
>
> We'll need to do this every time we exit a guest vcpu and the CPU is
> in "fake suspend" state, which will be the next exit after entering
> the vcpu when its MSR[TS] = 0b01 (suspend state). If the vcpu does a
> tresume or treclaim in fake suspend state, that causes a softpatch
> interrupt; the CPU doesn't get out of fake suspend state because of
> any guest instruction, only via hypervisor action.
>
> So it could be very rare or it could be quite frequent, depending on
> how much usage the guest makes of TM and how long it spends in suspend
> state.
>
> The smp_call_function on siblings wouldn't work in the case where some
> threads are off-line, since it only works on online CPUs. Also we
> would need to spin in the function being called on the other CPUs
> (otherwise you could get the situation where they wake up serially and
> you never have 3 or 4 threads simultaneously active), which would make
> me worry about deadlocks in the case where multiple threads are
> concurrently trying to get the core into SMT4 mode.
>
> If you can think of a way to eliminate the sync without introducing a
> race, I'm all ears. I haven't been able to.
Okay, thanks for the details. Yes, it would have to be more complex than
a NULL function; I didn't realize offline CPUs would have to be involved.
I'll have a think about it.
A sync is about 1% of the stop/wake overhead, as measured on P9 here:
http://patchwork.ozlabs.org/patch/839017/
So it's not a showstopper. The approach seems like it should work, AFAICS.
Thanks,
Nick
2018-01-25 5:05 [RFC PATCH] powerpc/powernv: Provide a way to force a core into SMT4 mode Paul Mackerras
2018-01-27 0:27 ` Nicholas Piggin
2018-01-27 2:45 ` Paul Mackerras
2018-01-27 4:47 ` Nicholas Piggin [this message]
2018-01-27 1:06 ` Ram Pai
2018-01-27 2:34 ` Paul Mackerras