From: Peter Xu <peterx@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>, qemu-devel@nongnu.org
Subject: Re: [PATCH] memory: batch allocate ioeventfds[] in address_space_update_ioeventfds()
Date: Tue, 18 Feb 2020 16:49:32 -0500
Message-ID: <20200218214932.GD7090@xz-x1>
In-Reply-To: <20200218182226.913977-1-stefanha@redhat.com>
On Tue, Feb 18, 2020 at 06:22:26PM +0000, Stefan Hajnoczi wrote:
> Reallocing the ioeventfds[] array each time an element is added is very
> expensive as the number of ioeventfds increases. Batch allocate instead
> to amortize the cost of realloc.
>
> This patch reduces Linux guest boot times from 362s to 140s when there
> are 2 virtio-blk devices with 1 virtqueue and 99 virtio-blk devices with
> 32 virtqueues.
>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
> memory.c | 17 ++++++++++++++---
> 1 file changed, 14 insertions(+), 3 deletions(-)
>
> diff --git a/memory.c b/memory.c
> index aeaa8dcc9e..2d6f931f8c 100644
> --- a/memory.c
> +++ b/memory.c
> @@ -794,10 +794,18 @@ static void address_space_update_ioeventfds(AddressSpace *as)
> FlatView *view;
> FlatRange *fr;
> unsigned ioeventfd_nb = 0;
> - MemoryRegionIoeventfd *ioeventfds = NULL;
> + unsigned ioeventfd_max;
> + MemoryRegionIoeventfd *ioeventfds;
> AddrRange tmp;
> unsigned i;
>
> + /*
> + * It is likely that the number of ioeventfds hasn't changed much, so use
> + * the previous size as the starting value.
> + */
> + ioeventfd_max = as->ioeventfd_nb;
> + ioeventfds = g_new(MemoryRegionIoeventfd, ioeventfd_max);
Would ioeventfd_max be cached here and never shrink, so it can only stay
the same or grow?  I'm not sure whether that's a big problem, but
considering the commit message mentions 99 virtio-blk devices with 32
queues each, I'm not sure... :)

I'm thinking maybe start with a relatively big but still bounded number
(e.g., 64), then...
> +
> view = address_space_get_flatview(as);
> FOR_EACH_FLAT_RANGE(fr, view) {
> for (i = 0; i < fr->mr->ioeventfd_nb; ++i) {
> @@ -806,8 +814,11 @@ static void address_space_update_ioeventfds(AddressSpace *as)
> int128_make64(fr->offset_in_region)));
> if (addrrange_intersects(fr->addr, tmp)) {
> ++ioeventfd_nb;
> - ioeventfds = g_realloc(ioeventfds,
> - ioeventfd_nb * sizeof(*ioeventfds));
> + if (ioeventfd_nb > ioeventfd_max) {
> + ioeventfd_max += 64;
... do an exponential increase here (max *= 2) instead, so it still
converges quickly?
Thanks,
> + ioeventfds = g_realloc(ioeventfds,
> + ioeventfd_max * sizeof(*ioeventfds));
> + }
> ioeventfds[ioeventfd_nb-1] = fr->mr->ioeventfds[i];
> ioeventfds[ioeventfd_nb-1].addr = tmp;
> }
> --
> 2.24.1
>
--
Peter Xu