From: Yang Zhong <yang.zhong@intel.com>
To: qemu-devel@nongnu.org
Cc: pbonzini@redhat.com, anthony.xu@intel.com, chao.p.peng@intel.com,
Yang Zhong <yang.zhong@intel.com>
Subject: [Qemu-devel] [PATCH v1 1/2] reduce qemu's heap Rss size from 12252kB to 2752KB
Date: Fri, 10 Mar 2017 23:14:56 +0800
Message-ID: <1489158897-9206-1-git-send-email-yang.zhong@intel.com>
There are a lot of memory allocations during QEMU bootup that are only
freed later by the RCU thread, so the heap keeps growing as allocations
happen, but it may not shrink even after the memory is released, because
some memory blocks at the top of the heap are still in use. Decreasing
the sleep interval and removing the qemu_mutex_lock_iothread()/
qemu_mutex_unlock_iothread() calls lets call_rcu_thread() free the
pending memory in time.

This patch reduces the heap Rss by around 10MB.

This patch is from Anthony Xu <anthony.xu@intel.com>.
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
---
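For context, the deferred frees referred to above go through QEMU's
call_rcu() interface, which queues the object for call_rcu_thread() to
reclaim after a grace period. A minimal sketch of that pattern follows;
the DevState object and its functions are hypothetical, and the snippet
is only meant to be built inside the QEMU tree:

#include "qemu/osdep.h"
#include "qemu/rcu.h"

/* Hypothetical object whose memory is reclaimed by the RCU thread. */
typedef struct DevState {
    struct rcu_head rcu;    /* embedded so call_rcu() can queue this object */
    char *name;
} DevState;

/* Runs in call_rcu_thread() after a grace period, not at release time,
 * which is why freed blocks can pile up on the heap until the thread
 * catches up. */
static void dev_state_reclaim(DevState *s)
{
    g_free(s->name);
    g_free(s);
}

static void dev_state_release(DevState *s)
{
    /* RCU readers may still hold a pointer, so defer the actual free. */
    call_rcu(s, dev_state_reclaim, rcu);
}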
util/rcu.c | 8 ++------
1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/util/rcu.c b/util/rcu.c
index 9adc5e4..c5c373c 100644
--- a/util/rcu.c
+++ b/util/rcu.c
@@ -167,7 +167,7 @@ void synchronize_rcu(void)
}
-#define RCU_CALL_MIN_SIZE 30
+#define RCU_CALL_MIN_SIZE 5
/* Multi-producer, single-consumer queue based on urcu/static/wfqueue.h
* from liburcu. Note that head is only used by the consumer.
@@ -241,7 +241,7 @@ static void *call_rcu_thread(void *opaque)
* added before synchronize_rcu() starts.
*/
while (n == 0 || (n < RCU_CALL_MIN_SIZE && ++tries <= 5)) {
- g_usleep(10000);
+ g_usleep(100);
if (n == 0) {
qemu_event_reset(&rcu_call_ready_event);
n = atomic_read(&rcu_call_count);
@@ -254,24 +254,20 @@ static void *call_rcu_thread(void *opaque)
atomic_sub(&rcu_call_count, n);
synchronize_rcu();
- qemu_mutex_lock_iothread();
while (n > 0) {
node = try_dequeue();
while (!node) {
- qemu_mutex_unlock_iothread();
qemu_event_reset(&rcu_call_ready_event);
node = try_dequeue();
if (!node) {
qemu_event_wait(&rcu_call_ready_event);
node = try_dequeue();
}
- qemu_mutex_lock_iothread();
}
n--;
node->func(node);
}
- qemu_mutex_unlock_iothread();
}
abort();
}
--
1.9.1
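How the before/after numbers in the subject line were collected is not
stated here, but one way to check the heap Rss of a running QEMU is to
read the [heap] entry of /proc/<pid>/smaps. A small stand-alone sketch
(not part of the patch; the file name is made up):

/* heap_rss.c: print the Rss of the [heap] mapping from /proc/<pid>/smaps.
 * Build: gcc -o heap_rss heap_rss.c     Run: ./heap_rss <pid-of-qemu>    */
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    char path[64], line[512];
    int in_heap = 0;

    snprintf(path, sizeof(path), "/proc/%s/smaps",
             argc > 1 ? argv[1] : "self");
    FILE *f = fopen(path, "r");
    if (!f) {
        perror(path);
        return 1;
    }
    while (fgets(line, sizeof(line), f)) {
        char *space = strchr(line, ' ');
        char *dash = strchr(line, '-');
        if (dash && space && dash < space) {
            /* Mapping header line ("start-end perms ..."): remember
             * whether this is the [heap] region. */
            in_heap = strstr(line, "[heap]") != NULL;
        } else if (in_heap && strncmp(line, "Rss:", 4) == 0) {
            printf("heap Rss:%s", line + 4);   /* e.g. "heap Rss: 2752 kB" */
            break;
        }
    }
    fclose(f);
    return 0;
}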