From: Paolo Bonzini <pbonzini@redhat.com>
To: David Matlack <dmatlack@google.com>
Cc: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
kvm list <kvm@vger.kernel.org>,
riel@redhat.com, rkrcmar@redhat.com,
Marcelo Tosatti <mtosatti@redhat.com>
Subject: Re: [PATCH RFC] kvm: x86: add halt_poll module parameter
Date: Thu, 05 Feb 2015 22:24:22 +0100
Message-ID: <54D3DF86.4020300@redhat.com>
In-Reply-To: <CALzav=fj+-sr_RLP8eRnVm2dTYb8umak_XTWzpsfAO0maXDc-Q@mail.gmail.com>
On 05/02/2015 21:39, David Matlack wrote:
>> This parameter helps a lot for latency-bound workloads [...]
>> KVM's performance here is usually around 30% of bare metal,
>> or 50% if you use cache=directsync or cache=writethrough.
>> With this patch performance reaches 60-65% of bare metal and, more
>> important, 99% of what you get if you use idle=poll in the guest.
>
> I used loopback TCP_RR and loopback memcache as benchmarks for halt
> polling. I saw very similar results as you (before: 40% bare metal,
> after: 60-65% bare metal and 95% of guest idle=poll).
Good that it also works for network!
> Reviewed-by: David Matlack <dmatlack@google.com>
>
>> arch/x86/include/asm/kvm_host.h | 1 +
>> arch/x86/kvm/x86.c | 28 ++++++++++++++++++++++++----
>> include/linux/kvm_host.h | 1 +
>> virt/kvm/kvm_main.c | 22 +++++++++++++++-------
>> 4 files changed, 41 insertions(+), 11 deletions(-)
>>
>> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
>> index 848947ac6ade..a236e39cc385 100644
>> --- a/arch/x86/include/asm/kvm_host.h
>> +++ b/arch/x86/include/asm/kvm_host.h
>> @@ -655,6 +655,7 @@ struct kvm_vcpu_stat {
>> u32 irq_window_exits;
>> u32 nmi_window_exits;
>> u32 halt_exits;
>> + u32 halt_successful_poll;
>> u32 halt_wakeup;
>> u32 request_irq_exits;
>> u32 irq_exits;
>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>> index 1373e04e1f19..b7b20828f01c 100644
>> --- a/arch/x86/kvm/x86.c
>> +++ b/arch/x86/kvm/x86.c
>> @@ -96,6 +96,9 @@ EXPORT_SYMBOL_GPL(kvm_x86_ops);
>> static bool ignore_msrs = 0;
>> module_param(ignore_msrs, bool, S_IRUGO | S_IWUSR);
>>
>> +unsigned int halt_poll = 0;
>> +module_param(halt_poll, uint, S_IRUGO | S_IWUSR);
>
> Suggest encoding the units in the name. "halt_poll_cycles" in this case.
I left it out because of the parallel with ple_window/ple_gap. But I
will call it "halt_poll_ns" in the next version.
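Roughly this (untested sketch; final name and default still to be
decided):

    /* Polling window in nanoseconds; 0 disables halt polling. */
    unsigned int halt_poll_ns;
    module_param(halt_poll_ns, uint, S_IRUGO | S_IWUSR);

With S_IWUSR it also stays tunable at runtime through
/sys/module/kvm/parameters/halt_poll_ns.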
>> +
>> unsigned int min_timer_period_us = 500;
>> module_param(min_timer_period_us, uint, S_IRUGO | S_IWUSR);
>>
>> @@ -145,6 +148,7 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
>> { "irq_window", VCPU_STAT(irq_window_exits) },
>> { "nmi_window", VCPU_STAT(nmi_window_exits) },
>> { "halt_exits", VCPU_STAT(halt_exits) },
>> + { "halt_successful_poll", VCPU_STAT(halt_successful_poll) },
>> { "halt_wakeup", VCPU_STAT(halt_wakeup) },
>> { "hypercalls", VCPU_STAT(hypercalls) },
>> { "request_irq", VCPU_STAT(request_irq_exits) },
>> @@ -5819,13 +5823,29 @@ void kvm_arch_exit(void)
>> int kvm_emulate_halt(struct kvm_vcpu *vcpu)
>> {
>> ++vcpu->stat.halt_exits;
>> - if (irqchip_in_kernel(vcpu->kvm)) {
>> - vcpu->arch.mp_state = KVM_MP_STATE_HALTED;
>> - return 1;
>> - } else {
>> + if (!irqchip_in_kernel(vcpu->kvm)) {
>> vcpu->run->exit_reason = KVM_EXIT_HLT;
>> return 0;
>> }
>> +
>> + vcpu->arch.mp_state = KVM_MP_STATE_HALTED;
>> + if (halt_poll) {
>
> Would it be useful to poll in kvm_vcpu_block() for the benefit of all
> arch's?
Sure. Especially if I use time instead of cycles.
>> + u64 start, curr;
>> + rdtscll(start);
>
> Why cycles instead of time?
People's love for rdtsc grows every time you tell them it's wrong...
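More seriously: the TSC need not be synchronized across sockets or
invariant under frequency scaling on every host, so a cycle count does
not map to a fixed wall-clock budget. Something along these lines
instead, perhaps (untested sketch, assuming the parameter becomes
halt_poll_ns):

    ktime_t deadline = ktime_add_ns(ktime_get(), halt_poll_ns);

    do {
            ...
    } while (!need_resched() && ktime_before(ktime_get(), deadline));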
>> + do {
>> + /*
>> + * This sets KVM_REQ_UNHALT if an interrupt
>> + * arrives.
>> + */
>> + if (kvm_vcpu_check_block(vcpu) < 0) {
>> + ++vcpu->stat.halt_successful_poll;
>> + break;
>> + }
>> + rdtscll(curr);
>> + } while(!need_resched() && curr - start < halt_poll);
>
> I found that using need_resched() was not sufficient to prevent
> VCPUs from delaying their own progress. To test this try running with
> and without polling on a 2 VCPU VM, confined to 1 PCPU, that is running
> loopback TCP_RR in the VM. The problem goes away if you stop polling as
> soon as there are runnable threads on your cpu. (e.g. use
> "single_task_running()" instead of "!need_resched()"
> http://lxr.free-electrons.com/source/kernel/sched/core.c#L2398 ). This
> also guarantees polling only delays the idle thread.
Great, I'll include all of your suggestions in the next version of the
patch.
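For concreteness, the generic version could end up looking something
like this (hypothetical sketch only, folding in single_task_running()
and a nanosecond-based parameter as discussed above; the stat update
would stay arch-specific):

    void kvm_vcpu_block(struct kvm_vcpu *vcpu)
    {
            if (halt_poll_ns) {
                    ktime_t deadline = ktime_add_ns(ktime_get(),
                                                    halt_poll_ns);

                    do {
                            /*
                             * Sets KVM_REQ_UNHALT if a wakeup
                             * condition (e.g. a pending interrupt)
                             * arrives while we poll.
                             */
                            if (kvm_vcpu_check_block(vcpu) < 0)
                                    return;
                            cpu_relax();
                    } while (single_task_running() &&
                             ktime_before(ktime_get(), deadline));
            }

            /* Nothing arrived in time: fall back to the normal sleep. */
            ...
    }

Stopping as soon as single_task_running() is false means polling can
only ever delay the idle thread, never a runnable task.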
Paolo