Date: Tue, 28 Jun 2016 08:29:22 -0400 (EDT)
From: Paolo Bonzini
To: Peter Lieven
Cc: qemu-devel@nongnu.org, kwolf@redhat.com, Peter Maydell, mst@redhat.com, dgilbert@redhat.com, mreitz@redhat.com, kraxel@redhat.com
Subject: Re: [Qemu-devel] [PATCH 00/15] optimize Qemu RSS usage
Message-ID: <1564831478.2624143.1467116962342.JavaMail.zimbra@redhat.com>
In-Reply-To: <57726A20.4000808@kamp.de>
References: <1467104499-27517-1-git-send-email-pl@kamp.de> <57726A20.4000808@kamp.de>

> On 28.06.2016 at 13:37, Paolo Bonzini wrote:
> > On 28/06/2016 11:01, Peter Lieven wrote:
> >> I recently found that Qemu is using several hundred megabytes of RSS
> >> memory more than older versions such as Qemu 2.2.0. So I started
> >> tracing memory allocation and found two major reasons for this.
> >>
> >> 1) We changed the qemu coroutine pool to have a per-thread and a
> >>    global release pool. The chosen pool size and the changed
> >>    algorithm can lead to up to 192 free coroutines with just a
> >>    single iothread, each coroutine in the pool holding 1 MB of
> >>    stack memory.
> >
> > But the fix, as you correctly note, is to reduce the stack size. It
> > would be nice to compile block-obj-y with -Wstack-usage=2048 too.
>
> To reveal if there are any big stack allocations in the block layer?

Yes. Most should be fixed by now, but a handful are probably still
there (definitely one in vvfat.c).

> As it seems, reducing to 64 kB breaks live migration in some
> (non-reproducible) cases.

Does it hit the guard page?

> >> 2) Between Qemu 2.2.0 and 2.3.0, RCU was introduced, which leads to
> >>    delayed freeing of memory. This leads to higher heap allocations
> >>    which cannot effectively be returned to the kernel (most likely
> >>    due to fragmentation).
> >
> > I agree that some of the exec.c allocations need some care, but I
> > would prefer to use a custom free list or lazy allocation instead
> > of mmap.
>
> This would only help if the elements from the free list were allocated
> using mmap? The issue is that RCU delays the freeing, so that the
> number of concurrent allocations is high and then a bunch is freed at
> once. If the memory were malloced, it would still have caused trouble.

The free list should improve reuse and fragmentation. I'll take a look
at lazy allocation of subpages, too.

Paolo