From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from eggs.gnu.org ([2001:4830:134:3::10]:35951) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1bGMLh-00044p-DI for qemu-devel@nongnu.org; Fri, 24 Jun 2016 04:20:26 -0400
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1bGMLa-0003XD-87 for qemu-devel@nongnu.org; Fri, 24 Jun 2016 04:20:24 -0400
Received: from mx1.redhat.com ([209.132.183.28]:51430) by eggs.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1bGMLa-0003X4-1h for qemu-devel@nongnu.org; Fri, 24 Jun 2016 04:20:18 -0400
References: <5768F923.7040502@kamp.de> <576BF910.70304@kamp.de> <178ee05d-cb23-e1ba-5a7f-87a5caef1e91@redhat.com> <576C00D1.9020202@kamp.de> <48f0c4a6-8c26-446d-1dfd-c79da0c18707@redhat.com> <576C0C1D.9090709@kamp.de> <576C5481.6070605@kamp.de> <7575263.1646445.1466741414660.JavaMail.zimbra@redhat.com> <576CEB1D.6040609@kamp.de>
From: Paolo Bonzini
Message-ID:
Date: Fri, 24 Jun 2016 10:20:15 +0200
MIME-Version: 1.0
In-Reply-To: <576CEB1D.6040609@kamp.de>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Subject: Re: [Qemu-devel] Qemu and heavily increased RSS usage
List-Id:
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
To: Peter Lieven
Cc: "Dr. David Alan Gilbert" , qemu-devel@nongnu.org, Fam Zheng , Peter Maydell

On 24/06/2016 10:11, Peter Lieven wrote:
> On 24.06.2016 at 06:10, Paolo Bonzini wrote:
>>>> If it's 10M, nothing. If there is a 100M regression that is also caused
>>>> by RCU, we have to give up on it for that data structure, or mmap/munmap
>>>> the affected data structures.
>>> If it was only 10MB I would agree. But if I run the VM described earlier
>>> in this thread, it goes from ~35MB with QEMU 2.2.0 to ~130-150MB with
>>> current master. This is with the coroutine pool disabled. With the
>>> coroutine pool it can grow to something like 300-350MB.
>>>
>>> Is there an easy way to determine if RCU is the problem?
>>> I have the same
>>> symptoms; valgrind doesn't see the allocated memory. Is it possible
>>> to make rcu_call invoke the function directly, maybe with a lock around it
>>> that serializes the calls? Even if it's expensive, it might show whether
>>> we are searching in the right place.
>> Yes, you can do that. Just make it call the function without locks; for
>> a quick PoC it will be okay.
>
> Unfortunately, it leads to immediate segfaults because a lot of things seem
> to go horribly wrong ;-)
>
> Do you have any other idea than reverting all the RCU patches for this section?

Try freeing under the big QEMU lock:

    bool unlock = false;

    if (!qemu_mutex_iothread_locked()) {  /* take the BQL only if this thread does not hold it */
        unlock = true;
        qemu_mutex_lock_iothread();
    }
    ...
    if (unlock) {                         /* release it only if it was taken here */
        qemu_mutex_unlock_iothread();
    }

Commit afbe70535ff1a8a7a32910cc15ebecc0ba92e7da should be easy to backport.

Thanks,

Paolo

> I'm also wondering why the RSS is not returned to the kernel. One thing could
> be fragmentation....
>
> Peter