From: "Radim Krčmář" <rkrcmar@redhat.com>
To: Igor Mammedov <imammedo@redhat.com>
Cc: linux-kernel@vger.kernel.org, pbonzini@redhat.com, kvm@vger.kernel.org
Subject: Re: [PATCH 5/5] kvm: optimize GFN to memslot lookup with large slots amount
Date: Tue, 2 Dec 2014 18:33:51 +0100 [thread overview]
Message-ID: <20141202173350.GA11312@potion.brq.redhat.com> (raw)
In-Reply-To: <1417454967-4465-6-git-send-email-imammedo@redhat.com>
2014-12-01 17:29+0000, Igor Mammedov:
> The current linear search doesn't scale well when a large
> number of memslots is used and the looked-up slot is not
> near the beginning of the memslots array.
> Taking into account that memslots don't overlap, it's
> possible to switch the sort order of the memslots array from
> 'npages' to 'base_gfn' and use binary search for
> memslot lookup by GFN.
>
> As a result of switching to binary search, lookup times
> are reduced with a large number of memslots.
>
> Following is a table of search_memslot() cycles
> during WS2008R2 guest boot.
>
>                             boot,           boot + ~10 min
>                             mostly same     of using it,
>                             slot lookup     randomized lookup
>                max          average         average
>                cycles       cycles          cycles
>
> 13 slots                  : 1450         28              30
>
> 13 slots                  : 1400         30              40
> (binary search)
>
> 117 slots                 : 13000        30              460
>
> 117 slots                 : 2000         35              180
> (binary search)
>
> Signed-off-by: Igor Mammedov <imammedo@redhat.com>
> ---
Fast ... it looks like we don't even need to transform the list-in-array
into a tree-in-array to trade the division for a multiplication.
Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>
(Actually, all patches.)
> include/linux/kvm_host.h | 34 ++++++++++++++++++++++------------
> virt/kvm/kvm_main.c | 8 +++++++-
> 2 files changed, 29 insertions(+), 13 deletions(-)
>
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 1a37144..193bca6 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -354,6 +354,7 @@ struct kvm_memslots {
> /* The mapping table from slot id to the index in memslots[]. */
> short id_to_index[KVM_MEM_SLOTS_NUM];
> atomic_t lru_slot;
> + int used_slots;
> };
>
> struct kvm {
> @@ -791,19 +792,28 @@ static inline void kvm_guest_exit(void)
> static inline struct kvm_memory_slot *
> search_memslots(struct kvm_memslots *slots, gfn_t gfn)
> {
> + int start = 0, end = slots->used_slots;
> int slot = atomic_read(&slots->lru_slot);
> - struct kvm_memory_slot *memslot = &slots->memslots[slot];
> -
> - if (gfn >= memslot->base_gfn &&
> - gfn < memslot->base_gfn + memslot->npages)
> - return memslot;
> -
> - kvm_for_each_memslot(memslot, slots)
> - if (gfn >= memslot->base_gfn &&
> - gfn < memslot->base_gfn + memslot->npages) {
> - atomic_set(&slots->lru_slot, memslot - slots->memslots);
> - return memslot;
> - }
> + struct kvm_memory_slot *memslots = slots->memslots;
> +
> + if (gfn >= memslots[slot].base_gfn &&
> + gfn < memslots[slot].base_gfn + memslots[slot].npages)
> + return &memslots[slot];
> +
> + while (start < end) {
> + slot = start + (end - start) / 2;
> +
> + if (gfn >= memslots[slot].base_gfn)
(Even though division is costly, I think that checking here whether
'slot' is the one we want wouldn't help very much.)
> + end = slot;
> + else
> + start = slot + 1;
> + }
> +
> + if (gfn >= memslots[start].base_gfn &&
> + gfn < memslots[start].base_gfn + memslots[start].npages) {
> + atomic_set(&slots->lru_slot, start);
> + return &memslots[start];
> + }
>
> return NULL;
> }
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 162817f..759af659 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -679,8 +679,14 @@ static void update_memslots(struct kvm_memslots *slots,
> struct kvm_memory_slot *mslots = slots->memslots;
>
> WARN_ON(mslots[i].id != id);
> - if (!new->npages)
> + if (!new->npages) {
> new->base_gfn = 0;
> + if (mslots[i].npages)
> + slots->used_slots--;
> + } else {
> + if (!mslots[i].npages)
> + slots->used_slots++;
> + }
>
> while (i < KVM_MEM_SLOTS_NUM - 1 &&
> new->base_gfn <= mslots[i + 1].base_gfn) {
> --
> 1.8.3.1
>