From: Avi Kivity <avi@redhat.com>
To: Marcelo Tosatti <mtosatti@redhat.com>
Cc: kvm@vger.kernel.org
Subject: Re: [PATCH v2 2/3] KVM: Optimize vcpu->requests slow path slightly
Date: Thu, 31 May 2012 12:27:09 +0300
Message-ID: <4FC7396D.7050803@redhat.com>
In-Reply-To: <20120530200334.GA23297@amt.cnet>
On 05/30/2012 11:03 PM, Marcelo Tosatti wrote:
> On Sun, May 20, 2012 at 04:49:27PM +0300, Avi Kivity wrote:
>> Instead of using an atomic operation per active request, use just one
>> to get all requests at once, then check them with local ops. This
>> probably isn't any faster, since simultaneous requests are rare, but
>> it does reduce code size.
>>
>> Signed-off-by: Avi Kivity <avi@redhat.com>
>> ---
>> arch/x86/kvm/x86.c | 33 ++++++++++++++++++---------------
>> 1 file changed, 18 insertions(+), 15 deletions(-)
>>
>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>> index 953e692..c0209eb 100644
>> --- a/arch/x86/kvm/x86.c
>> +++ b/arch/x86/kvm/x86.c
>> @@ -5232,55 +5232,58 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
>>  	bool req_int_win = !irqchip_in_kernel(vcpu->kvm) &&
>>  		vcpu->run->request_interrupt_window;
>>  	bool req_immediate_exit = 0;
>> +	ulong reqs;
>>  
>>  	if (unlikely(req_int_win))
>>  		kvm_make_request(KVM_REQ_EVENT, vcpu);
>>  
>>  	if (vcpu->requests) {
>> -		if (kvm_check_request(KVM_REQ_MMU_RELOAD, vcpu))
>> +		reqs = xchg(&vcpu->requests, 0UL);
>> +
>> +		if (test_bit(KVM_REQ_MMU_RELOAD, &reqs))
>>  			kvm_mmu_unload(vcpu);
>> -		if (kvm_check_request(KVM_REQ_MIGRATE_TIMER, vcpu))
>> +		if (test_bit(KVM_REQ_MIGRATE_TIMER, &reqs))
>>  			__kvm_migrate_timers(vcpu);
>> -		if (kvm_check_request(KVM_REQ_CLOCK_UPDATE, vcpu)) {
>> +		if (test_bit(KVM_REQ_CLOCK_UPDATE, &reqs)) {
>>  			r = kvm_guest_time_update(vcpu);
>>  			if (unlikely(r))
>>  				goto out;
>>  		}
>
> Bailing out loses requests in "reqs".
Whoops, good catch.
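One way to keep the single xchg() and still not lose anything would be
to clear each bit from the local copy as it is handled, then re-post
the leftovers before bailing. Untested sketch (out_restore and the
local 'bit' are made up; the non-atomic __test_and_clear_bit() is fine
here since reqs is local):

	reqs = xchg(&vcpu->requests, 0UL);

	if (__test_and_clear_bit(KVM_REQ_MMU_RELOAD, &reqs))
		kvm_mmu_unload(vcpu);
	if (__test_and_clear_bit(KVM_REQ_MIGRATE_TIMER, &reqs))
		__kvm_migrate_timers(vcpu);
	if (__test_and_clear_bit(KVM_REQ_CLOCK_UPDATE, &reqs)) {
		r = kvm_guest_time_update(vcpu);
		if (unlikely(r))
			goto out_restore;
	}
	/* ... remaining requests handled the same way ... */

out_restore:
	/* re-post the requests we grabbed but did not get to handle */
	for_each_set_bit(bit, &reqs, BITS_PER_LONG)
		set_bit(bit, &vcpu->requests);
	goto out;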
>
> Caching the requests makes the following type of sequence behave
> strangely:
>
> 	reqs = xchg(&vcpu->requests, 0UL);
> 	if request is set
> 		request handler
> 		...
> 			set REQ_EVENT
> 		...
>
> 	prepare for guest entry
> 	vcpu->requests set
> 	bail
I don't really mind that. But I do want to reduce the overhead of a
request; they're not that rare in normal workloads.
How about
	for_each_set_bit(req, &vcpu->requests, BITS_PER_LONG) {
		clear_bit(req, &vcpu->requests);
		r = request_handlers[req](vcpu);
		if (r)
			goto out;
	}
? That makes for O(1) handling since usually we only have one request
set (KVM_REQ_EVENT). We'll make that the last one to avoid the scenario
above.
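For concreteness, the table could look something like this (handler
names are made up; void handlers such as kvm_mmu_unload() would need
thin int-returning wrappers, while kvm_guest_time_update() already has
the right signature):

	typedef int (*kvm_req_handler_t)(struct kvm_vcpu *vcpu);

	static int req_mmu_reload(struct kvm_vcpu *vcpu)
	{
		kvm_mmu_unload(vcpu);
		return 0;
	}

	static int req_migrate_timer(struct kvm_vcpu *vcpu)
	{
		__kvm_migrate_timers(vcpu);
		return 0;
	}

	static kvm_req_handler_t request_handlers[BITS_PER_LONG] = {
		[KVM_REQ_MMU_RELOAD]	= req_mmu_reload,
		[KVM_REQ_MIGRATE_TIMER]	= req_migrate_timer,
		[KVM_REQ_CLOCK_UPDATE]	= kvm_guest_time_update,
		/* ..., with KVM_REQ_EVENT renumbered highest so it runs last */
	};

A NULL entry would oops if a request without a handler sneaks in, so
either every request gets an entry or the loop checks for NULL first.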
--
error compiling committee.c: too many arguments to function