From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Tue, 28 Jun 2016 12:35:01 +0100
From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Message-ID: <20160628113501.GH2243@work-vm>
References: <1467104499-27517-1-git-send-email-pl@kamp.de>
 <1467104499-27517-4-git-send-email-pl@kamp.de>
 <20160628105707.GG2243@work-vm>
 <57725CB0.7090606@kamp.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <57725CB0.7090606@kamp.de>
Subject: Re: [Qemu-devel] [PATCH 03/15] coroutine-ucontext: reduce stack size to 64kB
To: Peter Lieven <pl@kamp.de>
Cc: Paolo Bonzini <pbonzini@redhat.com>, qemu-devel@nongnu.org,
 kwolf@redhat.com, peter.maydell@linaro.org, mst@redhat.com,
 mreitz@redhat.com, kraxel@redhat.com

* Peter Lieven (pl@kamp.de) wrote:
> Am 28.06.2016 um 12:57 schrieb Dr. David Alan Gilbert:
> > * Paolo Bonzini (pbonzini@redhat.com) wrote:
> > >
> > > On 28/06/2016 11:01, Peter Lieven wrote:
> > > > Evaluation with the recently introduced maximum stack size monitoring
> > > > revealed that the actual stack usage was never above 4kB, so allocating
> > > > a 1MB stack for each coroutine wastes a lot of memory. Reduce the stack
> > > > size to 64kB, which should still give enough headroom.
> > > If we make the stack this much smaller, there is a non-zero chance of
> > > smashing it. You must add a guard page if you do this (actually more
> > > than one, because QEMU will happily have stack frames as big as 16kB).
> > > The stack counts for RSS but it's not actually allocated memory, so why
> > > does it matter?
> > I think I'd be interested in seeing the /proc/.../smaps before and after
> > this change, to see whether anything is visible and whether we can see
> > the difference in RSS etc.
>
> Can you advise what in particular should be looked at in smaps?
>
> As for RSS, I can report that the long-term usage is significantly lower.
> I had the strange observation that when the VM has been running for some
> minutes, the RSS suddenly increases to the whole stack size.

You can see the Rss of each mapping; if you knew where your stacks were, it
would be easy to see whether it is the stacks that account for the Rss and
whether there is anything else odd about them.

If you set the mapping as growsdown, you can see the area that has 'gd' in
its VmFlags.

Dave

>
> Peter

--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
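
For reference, a minimal sketch of the two suggestions in the thread above: an
mprotect()ed guard page below a 64kB stack, and MAP_GROWSDOWN so the mapping
is tagged 'gd' in the VmFlags line of /proc/<pid>/smaps and is easy to find.
The helper name alloc_coroutine_stack is invented for the example; this is not
the actual QEMU coroutine-ucontext code, only an illustration of the technique
under discussion.

    /*
     * Sketch only: not the QEMU coroutine-ucontext implementation.
     * Allocates a 64kB stack with one PROT_NONE guard page below it,
     * using MAP_GROWSDOWN so the mapping carries 'gd' in the VmFlags
     * line of /proc/<pid>/smaps.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define COROUTINE_STACK_SIZE (64 * 1024)

    static void *alloc_coroutine_stack(void)
    {
        size_t page = (size_t)sysconf(_SC_PAGESIZE);
        size_t total = COROUTINE_STACK_SIZE + page;   /* stack + guard page */

        void *base = mmap(NULL, total, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS | MAP_GROWSDOWN,
                          -1, 0);
        if (base == MAP_FAILED) {
            perror("mmap");
            exit(EXIT_FAILURE);
        }

        /* Lowest page becomes the guard: overflowing the stack faults
         * immediately instead of silently smashing other memory. */
        if (mprotect(base, page, PROT_NONE) != 0) {
            perror("mprotect");
            exit(EXIT_FAILURE);
        }

        printf("stack mapping %p-%p, pid %d\n",
               base, (void *)((char *)base + total), (int)getpid());

        return (char *)base + page;   /* usable stack starts above the guard */
    }

    int main(void)
    {
        void *stack = alloc_coroutine_stack();
        (void)stack;
        pause();   /* keep the process alive so smaps can be inspected */
        return 0;
    }

While the process is paused, the printed address range can be looked up in
/proc/<pid>/smaps: its Rss line shows how much of the stack is actually
resident, and its VmFlags line should include 'gd' for the growsdown mapping.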