From: Quan Xu <quan.xu0@gmail.com>
To: Wanpeng Li <kernellwp@gmail.com>
Cc: Yang Zhang <yang.zhang.wz@gmail.com>,
"Michael S. Tsirkin" <mst@redhat.com>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
kvm <kvm@vger.kernel.org>, Wanpeng Li <wanpeng.li@hotmail.com>,
Paolo Bonzini <pbonzini@redhat.com>,
Thomas Gleixner <tglx@linutronix.de>,
Radim Krcmar <rkrcmar@redhat.com>,
David Matlack <dmatlack@google.com>,
Alexander Graf <agraf@suse.de>,
Peter Zijlstra <peterz@infradead.org>,
linux-doc@vger.kernel.org
Subject: Re: [RFC PATCH v2 0/7] x86/idle: add halt poll support
Date: Thu, 14 Sep 2017 17:40:56 +0800
Message-ID: <b4ce4453-636f-ead5-8aab-e05102bcdd72@gmail.com>
In-Reply-To: <CANRm+CxZHt-y728U60=XXvsaNL_Zc8Pu5i1WiV-dpAwNnXExVA@mail.gmail.com>
On 2017/9/14 17:19, Wanpeng Li wrote:
> 2017-09-14 16:36 GMT+08:00 Quan Xu <quan.xu0@gmail.com>:
>>
>> on 2017/9/13 19:56, Yang Zhang wrote:
>>> On 2017/8/29 22:56, Michael S. Tsirkin wrote:
>>>> On Tue, Aug 29, 2017 at 11:46:34AM +0000, Yang Zhang wrote:
>>>>> Some latency-sensitive workloads see an obvious performance
>>>>> drop when running inside a VM.
>>>>
>>>> But are we trading a lot of CPU for a bit of lower latency?
>>>>
>>>>> The main reason is that the overhead
>>>>> is amplified when running inside a VM. The highest cost I have
>>>>> seen is in the idle path.
>>>>>
>>>>> This patch introduces a new mechanism to poll for a while before
>>>>> entering the idle state. If a reschedule is needed during the
>>>>> poll, we avoid going through the heavy-overhead path.
>>>>
>>>> Isn't it the job of an idle driver to find the best way to
>>>> halt the CPU?
>>>>
>>>> It looks like just by adding a cstate we can make it
>>>> halt at higher latencies only. And at lower latencies,
>>>> if it's doing a good job we can hopefully use mwait to
>>>> stop the CPU.
>>>>
>>>> In fact I have been experimenting with exactly that.
>>>> Some initial results are encouraging but I could use help
>>>> with testing and especially tuning. If you can help
>>>> pls let me know!
>>>
>>> Quan, Can you help to test it and give result? Thanks.
>>>
>> Hi, MST
>>
>> I have tested the patch "intel_idle: add pv cstates when running on kvm"
>> on a recent host that allows guests to execute mwait without an exit.
>> I have also tested our patch "[RFC PATCH v2 0/7] x86/idle: add halt poll
>> support", upstream linux, and idle=poll.
>>
>> The results are below (better than before, as I ran the test cases on a
>> more powerful machine):
>>
>> For __netperf__, the first column is the transaction rate per second,
>> the second column is CPU utilization.
>>
>> 1. upstream linux
> This "upstream linux" case disables KVM's adaptive halt-polling, as
> confirmed with Xu Quan.
upstream linux -- the source code is just upstream linux, without our
patch or MST's patch.
Yes, we disabled KVM halt-polling (halt_poll_ns=0) for _all_ of the
following cases.
Quan
> Regards,
> Wanpeng Li
>
>> 28371.7 trans/s -- 76.6 %CPU
>>
>> 2. idle=poll
>>
>> 34372 trans/s -- 999.3 %CPU
>>
>> 3. "[RFC PATCH v2 0/7] x86/idle: add halt poll support", with different
>> values of parameter 'halt_poll_threshold':
>>
>> 28362.7 trans/s -- 74.7 %CPU (halt_poll_threshold=10000)
>> 32949.5 trans/s -- 82.5 %CPU (halt_poll_threshold=20000)
>> 39717.9 trans/s -- 104.1 %CPU (halt_poll_threshold=30000)
>> 40137.9 trans/s -- 104.4 %CPU (halt_poll_threshold=40000)
>> 40079.8 trans/s -- 105.6 %CPU (halt_poll_threshold=50000)
>>
>>
>> 4. "intel_idle: add pv cstates when running on kvm"
>>
>> 33041.8 trans/s -- 999.4 %CPU
>>
>>
>> For __ctxsw__, the first column is the time per process context switch,
>> the second column is CPU utilization.
>>
>> 1. upstream linux
>>
>> 3624.19 ns/ctxsw -- 191.9 %CPU
>>
>> 2. idle=poll
>>
>> 3419.66 ns/ctxsw -- 999.2 %CPU
>>
>> 3. "[RFC PATCH v2 0/7] x86/idle: add halt poll support", with different
>> values of parameter 'halt_poll_threshold':
>>
>> 1123.40 ns/ctxsw -- 199.6 %CPU (halt_poll_threshold=10000)
>> 1127.38 ns/ctxsw -- 199.7 %CPU (halt_poll_threshold=20000)
>> 1113.58 ns/ctxsw -- 199.6 %CPU (halt_poll_threshold=30000)
>> 1117.12 ns/ctxsw -- 199.6 %CPU (halt_poll_threshold=40000)
>> 1121.62 ns/ctxsw -- 199.6 %CPU (halt_poll_threshold=50000)
>>
>> 4. "intel_idle: add pv cstates when running on kvm"
>>
>> 3427.59 ns/ctxsw -- 999.4 %CPU
>>
>> -Quan