public inbox for linux-kernel@vger.kernel.org
From: Vitaly Kuznetsov <vkuznets@redhat.com>
To: Andy Lutomirski <luto@kernel.org>
Cc: devel@linuxdriverproject.org,
	Stephen Hemminger <sthemmin@microsoft.com>,
	Jork Loeser <Jork.Loeser@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>, X86 ML <x86@kernel.org>,
	"linux-kernel\@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Steven Rostedt <rostedt@goodmis.org>,
	Ingo Molnar <mingo@redhat.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: Re: [PATCH v3 08/10] x86/hyper-v: use hypercall for remote TLB flush
Date: Thu, 13 Jul 2017 14:46:20 +0200	[thread overview]
Message-ID: <87d194mrmr.fsf@vitty.brq.redhat.com> (raw)
In-Reply-To: <CALCETrUyhjy-tCZYYjRfZcRqHdA48iYzGoAk0QrWvOeVRhSmbQ@mail.gmail.com> (Andy Lutomirski's message of "Mon, 26 Jun 2017 18:36:46 -0700")

Andy Lutomirski <luto@kernel.org> writes:

> On Tue, May 23, 2017 at 5:36 AM, Vitaly Kuznetsov <vkuznets@redhat.com> wrote:
>> Andy Lutomirski <luto@kernel.org> writes:
>>
>>>
>>> Also, can you share the benchmark you used for these patches?
>>
>> I didn't do much benchmarking while writing the patchset; mostly I was
>> running the attached dumb thrasher (32 pthreads doing mmap/munmap). On a 16 vCPU
>> Hyper-V 2016 guest I get the following (just re-did the test with
>> 4.12-rc1):
>>
>> Before the patchset:
>> # time ./pthread_mmap ./randfile
>>
>> real    3m33.118s
>> user    0m3.698s
>> sys     3m16.624s
>>
>> After the patchset:
>> # time ./pthread_mmap ./randfile
>>
>> real    2m19.920s
>> user    0m2.662s
>> sys     2m9.948s
>>
>> K. Y.'s guys at Microsoft did additional testing for the patchset on
>> different Hyper-V deployments including Azure, they may share their
>> findings too.
>
> I ran this benchmark on my big TLB patchset, mainly to make sure I
> didn't regress your test.  I seem to have sped it up by 30% or so
> instead.  I need to study this a little to figure out why, and to make
> sure the reason isn't that I'm failing to do flushes I need to do.

Got back to this and tested everything on a WS2016 Hyper-V guest (24
vCPUs) with my slightly modified benchmark. The numbers are:

1) pre-patch:

real	1m15.775s
user	0m0.850s
sys	1m31.515s

2) your 'x86/pcid' series (the PCID feature is not passed to the guest, so
this is mainly your lazy TLB optimization):

real	0m55.135s
user	0m1.168s
sys	1m3.810s

3) My 'pv tlb shootdown' patchset on top of your 'x86/pcid' series:

real	0m48.891s
user	0m1.052s
sys	0m52.591s

As far as I understand, I need to add
'setup_clear_cpu_cap(X86_FEATURE_PCID)' to my series to make things work
properly if the PCID feature ever appears in the guest.
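
A rough sketch of where such a call might land (hypothetical; the flag name and exact hook point are assumptions, not the actual patch):

```c
	/* Hypothetical sketch: if the PV TLB-flush path is enabled,
	 * hide PCID from the rest of the kernel, since the flush
	 * hypercalls as wired up here aren't PCID-aware. The
	 * 'hv_pv_flush_enabled' condition is assumed for illustration. */
	if (hv_pv_flush_enabled) {
		setup_clear_cpu_cap(X86_FEATURE_PCID);
		pv_mmu_ops.flush_tlb_others = hyperv_flush_tlb_others;
	}
```

setup_clear_cpu_cap() must run early enough (before alternatives patching and CR4 setup) for the cleared capability to take effect everywhere.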

Other than that, there is additional room for optimization:
tlb_single_page_flush_ceiling. I'm not sure the default value of 33 is
optimal with Hyper-V's PV flush, but that investigation can be done
separately.

AFAIU, with your TLB preparatory work that went into 4.13, our series
have become untangled and can go through different trees. I'll rebase
mine and send it to K. Y. to push through Greg's char-misc tree.

Is there anything blocking your PCID series from going into 4.14? It
seems to be a huge improvement for some workloads.

-- 
  Vitaly


Thread overview: 22+ messages
2017-05-19 14:09 [PATCH v3 00/10] Hyper-V: paravirtualized remote TLB flushing and hypercall improvements Vitaly Kuznetsov
2017-05-19 14:09 ` [PATCH v3 01/10] x86/hyper-v: include hyperv/ only when CONFIG_HYPERV is set Vitaly Kuznetsov
2017-05-19 14:09 ` [PATCH v3 02/10] x86/hyper-v: stash the max number of virtual/logical processor Vitaly Kuznetsov
2017-05-19 14:09 ` [PATCH v3 03/10] x86/hyper-v: make hv_do_hypercall() inline Vitaly Kuznetsov
2017-05-19 14:09 ` [PATCH v3 04/10] x86/hyper-v: fast hypercall implementation Vitaly Kuznetsov
2017-05-21  3:18   ` Andy Lutomirski
2017-05-22 10:44     ` Vitaly Kuznetsov
2017-05-22 22:04       ` Andy Lutomirski
2017-05-19 14:09 ` [PATCH v3 05/10] hyper-v: use fast hypercall for HVCALL_SIGNAL_EVENT Vitaly Kuznetsov
2017-05-19 14:09 ` [PATCH v3 06/10] x86/hyper-v: implement rep hypercalls Vitaly Kuznetsov
2017-05-19 14:09 ` [PATCH v3 07/10] hyper-v: globalize vp_index Vitaly Kuznetsov
2017-05-19 14:09 ` [PATCH v3 08/10] x86/hyper-v: use hypercall for remote TLB flush Vitaly Kuznetsov
2017-05-21  3:23   ` Andy Lutomirski
     [not found]     ` <87zie5tbmm.fsf@vitty.brq.redhat.com>
2017-05-22 14:39       ` KY Srinivasan
2017-05-22 18:28       ` Andy Lutomirski
2017-05-23 12:36         ` Vitaly Kuznetsov
2017-05-23 17:50           ` KY Srinivasan
2017-06-27  1:36           ` Andy Lutomirski
2017-07-13 12:46             ` Vitaly Kuznetsov [this message]
2017-07-14 22:26               ` Andy Lutomirski
2017-05-19 14:09 ` [PATCH v3 09/10] x86/hyper-v: support extended CPU ranges for TLB flush hypercalls Vitaly Kuznetsov
2017-05-19 14:09 ` [PATCH v3 10/10] tracing/hyper-v: trace hyperv_mmu_flush_tlb_others() Vitaly Kuznetsov
