From: Scott Wood <scottwood@freescale.com>
To: Mihai Caraman <mihai.caraman@freescale.com>
Cc: <kvm-ppc@vger.kernel.org>, <kvm@vger.kernel.org>,
<linuxppc-dev@lists.ozlabs.org>
Subject: Re: [PATCH v3] KVM: PPC: e500mc: Enhance tlb invalidation condition on vcpu schedule
Date: Tue, 17 Jun 2014 14:18:26 -0500
Message-ID: <1403032706.6603.776.camel@snotra.buserror.net>
In-Reply-To: <1403032176-28362-1-git-send-email-mihai.caraman@freescale.com>
On Tue, 2014-06-17 at 22:09 +0300, Mihai Caraman wrote:
> On vcpu schedule, the condition checked for tlb pollution is too loose.
> The tlb entries of a vcpu become polluted (as opposed to merely stale) only
> when a different vcpu within the same logical partition runs in-between.
> Optimize the tlb invalidation condition by keeping last_vcpu_on_cpu per
> logical partition id.
>
> With the new invalidation condition, a guest shows a 4% performance
> improvement on P5020DS while running a memory stress application with the
> cpu oversubscribed; the other guest runs a cpu intensive workload.
>
> Guest - old invalidation condition
> real 3.89
> user 3.87
> sys 0.01
>
> Guest - enhanced invalidation condition
> real 3.75
> user 3.73
> sys 0.01
>
> Host
> real 3.70
> user 1.85
> sys 0.00
>
> The memory stress application accesses 4KB pages backed by 75% of available
> TLB0 entries:
>
> char foo[ENTRIES][4096] __attribute__ ((aligned (4096)));
>
> int main()
> {
>         char bar;
>         int i, j;
>
>         for (i = 0; i < ITERATIONS; i++)
>                 for (j = 0; j < ENTRIES; j++)
>                         bar = foo[j][0];
>
>         return 0;
> }
>
> Signed-off-by: Mihai Caraman <mihai.caraman@freescale.com>
> Cc: Scott Wood <scottwood@freescale.com>
> ---
> v3:
> - use existing logic while keeping last_vcpu_per_cpu per lpid
>
> v2:
> - improve patch name and description
> - add performance results
>
>
> arch/powerpc/kvm/e500mc.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/arch/powerpc/kvm/e500mc.c b/arch/powerpc/kvm/e500mc.c
> index 17e4562..95e33e3 100644
> --- a/arch/powerpc/kvm/e500mc.c
> +++ b/arch/powerpc/kvm/e500mc.c
> @@ -110,7 +110,7 @@ void kvmppc_mmu_msr_notify(struct kvm_vcpu *vcpu, u32 old_msr)
> {
> }
>
> -static DEFINE_PER_CPU(struct kvm_vcpu *, last_vcpu_on_cpu);
> +static DEFINE_PER_CPU(struct kvm_vcpu * [KVMPPC_NR_LPIDS], last_vcpu_on_cpu);
Hmm, I didn't know you could express types like that. Is this special
syntax that only works for typeof?
No space after *
Name should be adjusted to match, something like last_vcpu_of_lpid (with
the _on_cpu being implied by the fact that it's PER_CPU).
-Scott
Thread overview: 8+ messages
2014-06-17 19:09 [PATCH v3] KVM: PPC: e500mc: Enhance tlb invalidation condition on vcpu schedule Mihai Caraman
2014-06-17 19:18 ` Scott Wood [this message]
2014-06-17 19:42 ` mihai.caraman
2014-06-17 19:47 ` Scott Wood
2014-06-17 20:02 ` mihai.caraman
2014-06-17 20:05 ` Scott Wood
2014-06-17 20:36 ` mihai.caraman
2014-06-17 20:42 ` Alexander Graf