From: Peter Lieven <pl@kamp.de>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
qemu-devel@nongnu.org, Fam Zheng <famz@redhat.com>,
Peter Maydell <peter.maydell@linaro.org>
Subject: Re: [Qemu-devel] Qemu and heavily increased RSS usage
Date: Fri, 24 Jun 2016 10:45:31 +0200
Message-ID: <576CF32B.7040101@kamp.de>
In-Reply-To: <dbb956d8-fcdd-9392-8d3e-54acf4dc2cae@redhat.com>
On 24/06/2016 10:20, Paolo Bonzini wrote:
>
> On 24/06/2016 10:11, Peter Lieven wrote:
>> On 24/06/2016 06:10, Paolo Bonzini wrote:
>>>>> If it's 10M nothing. If there is a 100M regression that is also caused
>>>>> by RCU, we have to give up on it for that data structure, or mmap/munmap
>>>>> the affected data structures.
>>>> If it was only 10MB I would agree. But if I run the VM described earlier
>>>> in this thread, it goes from ~35MB with Qemu-2.2.0 to ~130-150MB with
>>>> current master. This is with the coroutine pool disabled. With the
>>>> coroutine pool it can grow to something like 300-350MB.
>>>>
>>>> Is there an easy way to determine if RCU is the problem? I have the same
>>>> symptoms, and valgrind doesn't see the allocated memory. Is it possible
>>>> to make rcu_call invoke the function directly - maybe with a lock around it
>>>> that serializes the calls? Even if it's expensive, it might show whether we
>>>> are looking in the right place.
>>> Yes, you can do that. Just make it call the function without locks, for
>>> a quick PoC it will be okay.
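For reference, a minimal sketch of what such a synchronous PoC could look like, assuming the call_rcu1() entry point in util/rcu.c (an illustration of the idea only, not the exact change that was tested):

    /* PoC only: invoke the reclaim callback immediately instead of
     * deferring it to the RCU thread.  This drops the grace period,
     * so readers may still hold references to the freed data. */
    void call_rcu1(struct rcu_head *node, void (*func)(struct rcu_head *node))
    {
        func(node);
    }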
>> Unfortunately, it leads to immediate segfaults because a lot of things seem
>> to go horribly wrong ;-)
>>
>> Do you have any other idea than reverting all the rcu patches for this section?
> Try freeing under the big QEMU lock:
>
> bool unlock = false;
>
> if (!qemu_mutex_iothread_locked()) {
>     unlock = true;
>     qemu_mutex_lock_iothread();
> }
> ...
> if (unlock) {
>     qemu_mutex_unlock_iothread();
> }
>
> afbe70535ff1a8a7a32910cc15ebecc0ba92e7da should be easy to backport.
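As an illustration, applying that pattern inside an RCU reclaim function could look roughly like this (Foo, its embedded rcu_head member, and foo_free_rcu are hypothetical names, not actual QEMU code):

    /* Hypothetical reclaim callback: free under the big QEMU lock,
     * taking it only if the current thread does not hold it yet. */
    static void foo_free_rcu(struct rcu_head *head)
    {
        Foo *foo = container_of(head, Foo, rcu);
        bool unlock = false;

        if (!qemu_mutex_iothread_locked()) {
            unlock = true;
            qemu_mutex_lock_iothread();
        }
        g_free(foo);
        if (unlock) {
            qemu_mutex_unlock_iothread();
        }
    }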
Will check this out. Meanwhile I read a little about returning RSS to the kernel, as I was wondering
why RSS and HWM are almost at the same high level. It seems that ptmalloc (the glibc default allocator)
is very reluctant to return memory to the kernel. There is indeed no guarantee that freed memory is
returned; only mmap'ed memory that is unmapped again is guaranteed to be returned.
So I tried the following without reverting anything:
MALLOC_MMAP_THRESHOLD_=4096 ./x86_64-softmmu/qemu-system-x86_64 ...
I have no idea about the performance impact yet, but it solves the issue.
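For reference, the same threshold can presumably also be set from inside the process via glibc's mallopt(3); an untested sketch:

    #include <malloc.h>
    #include <stdio.h>

    int main(void)
    {
        /* Equivalent of MALLOC_MMAP_THRESHOLD_=4096: requests of this
         * size and larger are served by mmap() and munmap()ed again on
         * free(), so that memory is guaranteed to go back to the
         * kernel.  Setting the threshold explicitly also disables
         * glibc's dynamic threshold adjustment. */
        if (mallopt(M_MMAP_THRESHOLD, 4096) != 1) {
            fprintf(stderr, "mallopt(M_MMAP_THRESHOLD) failed\n");
            return 1;
        }
        return 0;
    }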
With the default threshold, my test VM's RSS rises to 154MB:
VmHWM: 154284 kB
VmRSS: 154284 kB
With the option it looks like this:
VmHWM: 50588 kB
VmRSS: 41920 kB
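The VmHWM/VmRSS numbers above are from /proc/<pid>/status; a trivial helper to dump them from inside the process could look like this, assuming Linux procfs:

    /* Print peak (VmHWM) and current (VmRSS) resident set size of the
     * calling process, as reported by the kernel. */
    #include <stdio.h>
    #include <string.h>

    static void print_rss(void)
    {
        char line[256];
        FILE *f = fopen("/proc/self/status", "r");

        if (!f) {
            return;
        }
        while (fgets(line, sizeof(line), f)) {
            if (!strncmp(line, "VmHWM:", 6) || !strncmp(line, "VmRSS:", 6)) {
                fputs(line, stdout);
            }
        }
        fclose(f);
    }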
With jemalloc I can observe that the HWM is still high, but RSS stays below it - though still on the order of about 100MB.
Peter