From: David Gibson <david@gibson.dropbear.id.au>
To: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Cc: qemu-ppc@nongnu.org, qemu-devel@nongnu.org, groug@kaod.org,
muriloo@linux.ibm.com
Subject: Re: [Qemu-devel] [QEMU-PPC] [PATCH V2 1/2] target/ppc: Don't require private l1d cache on POWER8 for cap_ppc_safe_cache
Date: Tue, 12 Jun 2018 21:19:17 +1000
Message-ID: <20180612111917.GF30690@umbus.fritz.box>
In-Reply-To: <20180612051630.17854-1-sjitindarsingh@gmail.com>
On Tue, Jun 12, 2018 at 03:16:29PM +1000, Suraj Jitindar Singh wrote:
> For cap_ppc_safe_cache to be set to workaround, we require both an l1d
> cache flush instruction and a private l1d cache.
>
> On POWER8, don't require a private l1d cache. This means a guest on a
> POWER8 machine can make use of the cache flush workarounds.
>
> Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Applied to ppc-for-3.0, thanks.
>
> ---
>
> V1 -> V2:
> - Use mfpvr() to detect host type
>
> ---
> target/ppc/kvm.c | 19 ++++++++++++++++++-
> 1 file changed, 18 insertions(+), 1 deletion(-)
>
> diff --git a/target/ppc/kvm.c b/target/ppc/kvm.c
> index 2c0c34e125..7fe9d0126b 100644
> --- a/target/ppc/kvm.c
> +++ b/target/ppc/kvm.c
> @@ -2412,11 +2412,28 @@ bool kvmppc_has_cap_mmu_hash_v3(void)
>      return cap_mmu_hash_v3;
>  }
> 
> +static bool kvmppc_power8_host(void)
> +{
> +    bool ret = false;
> +#ifdef TARGET_PPC64
> +    {
> +        uint32_t base_pvr = CPU_POWERPC_POWER_SERVER_MASK & mfpvr();
> +        ret = (base_pvr == CPU_POWERPC_POWER8E_BASE) ||
> +              (base_pvr == CPU_POWERPC_POWER8NVL_BASE) ||
> +              (base_pvr == CPU_POWERPC_POWER8_BASE);
> +    }
> +#endif /* TARGET_PPC64 */
> +    return ret;
> +}
> +
>  static int parse_cap_ppc_safe_cache(struct kvm_ppc_cpu_char c)
>  {
> +    bool l1d_thread_priv_req = !kvmppc_power8_host();
> +
>      if (~c.behaviour & c.behaviour_mask & H_CPU_BEHAV_L1D_FLUSH_PR) {
>          return 2;
> -    } else if ((c.character & c.character_mask & H_CPU_CHAR_L1D_THREAD_PRIV) &&
> +    } else if ((!l1d_thread_priv_req ||
> +                c.character & c.character_mask & H_CPU_CHAR_L1D_THREAD_PRIV) &&
>                 (c.character & c.character_mask
>                  & (H_CPU_CHAR_L1D_FLUSH_ORI30 | H_CPU_CHAR_L1D_FLUSH_TRIG2))) {
>          return 1;
--
David Gibson | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you. NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson
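
As a standalone illustration of the check the patch introduces, the sketch
below (plain C, not QEMU code) masks the host PVR down to its family bits,
compares it against the POWER8 family values, and only insists on the "L1D
is thread private" characteristic when the host is not a POWER8. The
constant values mirror QEMU's target/ppc/cpu-models.h but are reproduced
here for illustration only; the helper names and the example PVR value in
main() are hypothetical.

/*
 * Minimal, self-contained sketch (not QEMU code) of the check added by
 * the patch: mask the Processor Version Register down to its family
 * bits and compare against the POWER8 family values.  The constant
 * values mirror QEMU's target/ppc/cpu-models.h but are repeated here
 * only for illustration.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PVR_SERVER_MASK    0xFFFF0000u  /* keep the family, drop the revision */
#define PVR_POWER8E_BASE   0x004B0000u  /* POWER8E ("Murano") */
#define PVR_POWER8NVL_BASE 0x004C0000u  /* POWER8NVL */
#define PVR_POWER8_BASE    0x004D0000u  /* POWER8 ("Venice") */

static bool is_power8_family(uint32_t pvr)
{
    uint32_t base = pvr & PVR_SERVER_MASK;

    return base == PVR_POWER8E_BASE ||
           base == PVR_POWER8NVL_BASE ||
           base == PVR_POWER8_BASE;
}

/*
 * Simplified version of the decision parse_cap_ppc_safe_cache() makes
 * for the "workaround available" case: a cache flush instruction alone
 * is enough on a POWER8 host; other hosts must also report that the
 * L1D cache is private to the thread.  (The real function has a third,
 * "fully safe" level that is omitted here.)
 */
static int safe_cache_workaround(bool host_is_power8, bool l1d_thread_priv,
                                 bool has_flush_insn)
{
    bool priv_ok = host_is_power8 || l1d_thread_priv;

    return (priv_ok && has_flush_insn) ? 1 : 0;
}

int main(void)
{
    uint32_t pvr = 0x004D0200;  /* hypothetical POWER8 DD2.0 PVR value */
    bool p8 = is_power8_family(pvr);

    printf("POWER8 family host: %s\n", p8 ? "yes" : "no");
    printf("cap_ppc_safe_cache: %d\n", safe_cache_workaround(p8, false, true));
    return 0;
}

Masking with 0xFFFF0000 deliberately discards the revision half of the PVR,
so different DD levels of the same POWER8 family are treated alike, which is
why mfpvr() plus a compare against the base values is sufficient in the patch.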