From: Alexey Kardashevskiy <aik@ozlabs.ru>
To: qemu-devel@nongnu.org
Cc: Alexey Kardashevskiy <aik@ozlabs.ru>,
David Gibson <david@gibson.dropbear.id.au>,
Paolo Bonzini <pbonzini@redhat.com>
Subject: [Qemu-devel] [RFC PATCH qemu] exec: Destroy dispatch immediately
Date: Fri, 25 Aug 2017 18:31:23 +1000 [thread overview]
Message-ID: <20170825083123.47432-1-aik@ozlabs.ru> (raw)
In-Reply-To: <20170824123006.GK5379@umbus.fritz.box>

Otherwise the old dispatch trees hold way too much memory before RCU
gets a chance to free them.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---
This is a follow-up to the "Memory use with >100 virtio devices"
thread.

I assume this is a dirty hack (it does fix the problem though) and
I wonder what the proper solution would be. Thanks.

What happens here is that every virtio block device creates 2 address
spaces: one for the modern config space (called "virtio-pci-cfg-as")
and one for the bus master (a common PCI thing, named after the
device, in my case "virtio-blk-pci").
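
For reference, this is roughly where the two address spaces come from.
This is a paraphrased sketch from memory, not verbatim code from this
tree - field and function names may differ slightly:

    /* hw/virtio/virtio-pci.c, virtio_pci_realize() (sketch) */
    address_space_init(&proxy->modern_as, &proxy->modern_cfg,
                       "virtio-pci-cfg-as");

    /* hw/pci/pci.c, do_pci_register_device() (sketch) */
    address_space_init(&pci_dev->bus_master_as,
                       &pci_dev->bus_master_container_region,
                       pci_dev->name); /* e.g. "virtio-blk-pci" */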

Each address_space_init() updates the topology of _every_ address
space. Every topology update (address_space_update_topology()) creates
a new dispatch tree - an AddressSpaceDispatch with nodes (1KB) and
sections (48KB) - and destroys the old one.
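
The quadratic behaviour comes from the commit path in memory.c, which
walks every existing address space on each transaction. Abbreviated
sketch of memory_region_transaction_commit() from this era (not
verbatim, asserts and the ioeventfd path omitted):

    void memory_region_transaction_commit(void)
    {
        AddressSpace *as;

        if (memory_region_update_pending) {
            MEMORY_LISTENER_CALL_GLOBAL(begin, Forward);

            /* rebuilds the dispatch tree of every address space */
            QTAILQ_FOREACH(as, &address_spaces, address_spaces_link) {
                address_space_update_topology(as);
            }

            memory_region_update_pending = false;
            MEMORY_LISTENER_CALL_GLOBAL(commit, Forward);
        }
    }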

However, the dispatch destructor is deferred via RCU, which does not
get a chance to run until the machine is initialized; until we get
there, the memory is not returned to the pool, and this is a lot of
memory which grows as n^2.
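
A rough back-of-the-envelope for the growth (my arithmetic, using the
~49KB per dispatch from above):

    N devices -> ~2N address spaces;
    the k-th address_space_init() rebuilds the dispatch of all k
    address spaces existing so far, so before RCU ever runs we
    accumulate roughly 1 + 2 + ... + 2N = N * (2N + 1) dispatches;
    for N = 100 that is ~20000 dispatches * ~49KB =~ 1GB pending
    on RCU.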

Interestingly, mem_add() from exec.c is called twice: once as
as->dispatch_listener.region_add() and once as
as->dispatch_listener.region_nop(). I did not understand the trick,
but it does not work if I remove the .region_nop() hook.
How does it work? :)
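
For context, this is roughly how the dispatch listener is wired up in
exec.c (abbreviated sketch, not verbatim):

    void address_space_init_dispatch(AddressSpace *as)
    {
        as->dispatch = NULL;
        as->dispatch_listener = (MemoryListener) {
            .begin = mem_begin,      /* starts a fresh dispatch */
            .commit = mem_commit,    /* publishes it, frees the old one */
            .region_add = mem_add,   /* called for new sections */
            .region_nop = mem_add,   /* called for unchanged sections */
        };
        memory_listener_register(&as->dispatch_listener, as);
    }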
---
exec.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/exec.c b/exec.c
index 01ac21e3cd..ea5f3eb209 100644
--- a/exec.c
+++ b/exec.c
@@ -2707,7 +2707,7 @@ static void mem_commit(MemoryListener *listener)
     atomic_rcu_set(&as->dispatch, next);
     if (cur) {
-        call_rcu(cur, address_space_dispatch_free, rcu);
+        address_space_dispatch_free(cur);
     }
 }
--
2.11.0