References: <1470158864-17651-1-git-send-email-alex.bennee@linaro.org> <1470158864-17651-14-git-send-email-alex.bennee@linaro.org> <20160802185303.GA18402@flamenco>
From: Alex Bennée
In-Reply-To: <20160802185303.GA18402@flamenco>
Date: Wed, 03 Aug 2016 09:34:07 +0100
Message-ID: <8760ri6ykg.fsf@linaro.org>
Subject: Re: [Qemu-devel] [PATCH v5 13/13] cpu-exec: replace cpu->queued_work with GArray
To: "Emilio G. Cota"
Cc: mttcg@listserver.greensocs.com, qemu-devel@nongnu.org, fred.konrad@greensocs.com, a.rigo@virtualopensystems.com, serge.fdrv@gmail.com, bobby.prani@gmail.com, mark.burton@greensocs.com, pbonzini@redhat.com, jan.kiszka@siemens.com, rth@twiddle.net, peter.maydell@linaro.org, claudio.fontana@huawei.com, Peter Crosthwaite

Emilio G. Cota writes:

> On Tue, Aug 02, 2016 at 18:27:44 +0100, Alex Bennée wrote:
>> Under times of high memory stress, the additional small mallocs made
>> by a linked list are a source of potential memory fragmentation. As
>> we have worked hard to avoid mallocs elsewhere when queuing work, we
>> might as well do the same for the list. We convert the lists to an
>> auto-resizing GArray, which resizes in steps of powers of 2.
>
> Would be nice to see numbers on how this compares to simply using
> tcmalloc/jemalloc (or the glibc allocator, really).

glib just uses the glibc malloc/realloc underneath, so it is all the
same allocator, just a different usage pattern.

I was trying to find a decent way to measure allocation usage and
fragmentation other than watching the difference in htop's memory
usage display between the two methods. Any ideas/suggestions?

>
> Thanks,
>
> 		Emilio

--
Alex Bennée
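
For concreteness, here is a minimal sketch of the two queuing patterns
being compared above. This is illustrative glib C, not the actual
patch; WorkItem, list_queue_work and array_queue_work are names
invented for the example.

#include <glib.h>

typedef void (*run_on_cpu_func)(void *data);

typedef struct {
    run_on_cpu_func func;
    void *data;
} WorkItem;

/*
 * Linked-list pattern: every queued item is a separate small
 * allocation, and every completed item a free. Under memory pressure
 * this churn of small blocks can fragment the heap.
 */
typedef struct WorkNode {
    WorkItem item;
    struct WorkNode *next;
} WorkNode;

static WorkNode *list_head;

static void list_queue_work(run_on_cpu_func func, void *data)
{
    WorkNode *node = g_new(WorkNode, 1);   /* one malloc per item */

    node->item.func = func;
    node->item.data = data;
    node->next = list_head;
    list_head = node;
}

/*
 * GArray pattern: items live inline in one buffer that glib grows by
 * realloc in power-of-2 steps, so once the array has reached its
 * working size, queuing further items makes no allocator calls at all.
 */
static GArray *work_queue;

static void array_queue_work(run_on_cpu_func func, void *data)
{
    WorkItem wi = { func, data };

    if (!work_queue) {
        work_queue = g_array_new(FALSE, TRUE, sizeof(WorkItem));
    }
    g_array_append_val(work_queue, wi);    /* copies the struct inline */
}

static void array_flush_work(void)
{
    for (guint i = 0; i < work_queue->len; i++) {
        WorkItem *wi = &g_array_index(work_queue, WorkItem, i);
        wi->func(wi->data);
    }
    g_array_set_size(work_queue, 0);       /* buffer is kept for reuse */
}

The point of the GArray pattern is that a steady-state workload reuses
the same buffer, so the (shared glibc) allocator is only involved while
the array is still growing.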