From: David Hildenbrand <david@redhat.com>
To: Peter Xu <peterx@redhat.com>, qemu-devel@nongnu.org
Cc: Juraj Marcin <jmarcin@redhat.com>,
Prasad Pandit <ppandit@redhat.com>,
Julia Suvorova <jusual@redhat.com>,
Paolo Bonzini <pbonzini@redhat.com>,
Fabiano Rosas <farosas@suse.de>,
Vitaly Kuznetsov <vkuznets@redhat.com>,
Zhiyi Guo <zhguo@redhat.com>
Subject: Re: [PATCH 3/4] KVM: Dynamic sized kvm memslots array
Date: Wed, 4 Sep 2024 22:51:09 +0200 [thread overview]
Message-ID: <9c82dcd3-1326-4b78-a928-2a01b5c56c4d@redhat.com>
In-Reply-To: <4ed8cec2-413a-4254-8804-55befbcd0d00@redhat.com>
On 04.09.24 22:43, David Hildenbrand wrote:
> On 04.09.24 21:16, Peter Xu wrote:
>> Zhiyi reported an infinite loop issue in a VFIO use case. The cause of
>> that is a separate discussion; however, while investigating it I found a
>> dirty-sync slowness regression when profiling.
>>
>> Each KVMMemoryListener maintains an array of kvm memslots. Currently it is
>> statically allocated to the max supported by the kernel. However, after
>> Linux commit 4fc096a99e ("KVM: Raise the maximum number of user memslots"),
>> the reported maximum has grown large enough that it is no longer wise to
>> always statically allocate at that size.
>>
>> What's worse, the QEMU kvm code still walks all allocated memslot entries
>> for any form of lookup. This can drastically slow down all memslot
>> operations, because each such loop can run over 32K times on the new
>> kernels.
>>
>> Fix this issue by allocating the memslots array dynamically.
>
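For reference, the walk in question is essentially a linear scan over the
whole statically sized array. A minimal sketch of the pattern (simplified,
hypothetical names and fields approximating the QEMU ones, not the actual
code):

  /* Simplified stand-ins for the QEMU structures. */
  typedef struct KVMSlot {
      unsigned long start_addr;
      unsigned long memory_size;
  } KVMSlot;

  typedef struct KVMMemoryListener {
      KVMSlot *slots;
      int nr_slots;   /* today: the kernel-reported max, possibly 32K+ */
  } KVMMemoryListener;

  static KVMSlot *lookup_matching_slot(KVMMemoryListener *kml,
                                       unsigned long start_addr,
                                       unsigned long size)
  {
      /* Every lookup walks the entire allocated array. */
      for (int i = 0; i < kml->nr_slots; i++) {
          KVMSlot *slot = &kml->slots[i];

          if (slot->start_addr == start_addr &&
              slot->memory_size == size) {
              return slot;
          }
      }
      return NULL;
  }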
> Wouldn't it be sufficient to limit the walk to the actually used slots?
Ah, I remember that kvm_get_free_slot() is also rather inefficient
because we don't "close holes" when removing slots. So we would have to
walk up to the "highest slot ever used". Let me take a look at the patch.
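Something like the following is what I have in mind as the grow-on-demand
alternative. A rough sketch only (the helper names and growth handling are
made up, reusing the simplified types from the sketch above, not the code
from this patch); QEMU links against glib, so g_renew() is available:

  #include <glib.h>
  #include <string.h>

  /* Hypothetical helper: grow the slots array only when needed. */
  static void kvm_slots_grow(KVMMemoryListener *kml, int nr_slots_new)
  {
      if (nr_slots_new <= kml->nr_slots) {
          return;
      }

      kml->slots = g_renew(KVMSlot, kml->slots, nr_slots_new);
      /* Zero-fill the new tail so unused slots stay marked invalid. */
      memset(&kml->slots[kml->nr_slots], 0,
             (nr_slots_new - kml->nr_slots) * sizeof(KVMSlot));
      kml->nr_slots = nr_slots_new;
  }

  /*
   * Freed slots leave holes, so a free-slot search still has to scan from
   * the start, but now only up to the currently allocated size instead of
   * the kernel-reported maximum.
   */
  static int kvm_get_free_slot_idx(KVMMemoryListener *kml)
  {
      for (int i = 0; i < kml->nr_slots; i++) {
          if (!kml->slots[i].memory_size) {  /* size 0 == unused */
              return i;
          }
      }
      return -1;  /* no free slot; caller would grow the array */
  }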
--
Cheers,
David / dhildenb
Thread overview: 19+ messages
2024-09-04 19:16 [PATCH 0/4] KVM: Dynamic sized memslots array Peter Xu
2024-09-04 19:16 ` [PATCH 1/4] KVM: Rename KVMState->nr_slots to nr_slots_max Peter Xu
2024-09-04 20:36 ` David Hildenbrand
2024-09-04 19:16 ` [PATCH 2/4] KVM: Define KVM_MEMSLOTS_NUM_MAX_DEFAULT Peter Xu
2024-09-04 20:39 ` David Hildenbrand
2024-09-04 20:56 ` Peter Xu
2024-09-04 19:16 ` [PATCH 3/4] KVM: Dynamic sized kvm memslots array Peter Xu
2024-09-04 20:43 ` David Hildenbrand
2024-09-04 20:51 ` David Hildenbrand [this message]
2024-09-04 20:55 ` Peter Xu
2024-09-04 21:07 ` David Hildenbrand
2024-09-04 21:20 ` Peter Xu
2024-09-04 21:23 ` David Hildenbrand
2024-09-04 21:34 ` Peter Xu
2024-09-04 21:38 ` David Hildenbrand
2024-09-04 21:46 ` Peter Xu
2024-09-04 21:58 ` Peter Xu
2024-09-04 19:16 ` [PATCH 4/4] KVM: Rename KVMMemoryListener.nr_used_slots to nr_slots_used Peter Xu
2024-09-04 20:40 ` David Hildenbrand