Message-ID: <4B9B4CB0.7080200@redhat.com>
Date: Sat, 13 Mar 2010 10:28:32 +0200
From: Avi Kivity
Subject: Re: [Qemu-devel] [PATCH QEMU] transparent hugepage support
In-Reply-To: <20100311160505.GG5677@random.random>
To: Andrea Arcangeli
Cc: qemu-devel@nongnu.org

On 03/11/2010 06:05 PM, Andrea Arcangeli wrote:
> On Thu, Mar 11, 2010 at 05:52:16PM +0200, Avi Kivity wrote:
>
>> That is a little wasteful. How about a hint to mmap() requesting proper
>> alignment (MAP_HPAGE_ALIGN)?
>>
> So you suggest adding a new kernel feature to mmap?
> Not sure if it's worth it, considering it'd also increase the number
> of vmas, because it would have to leave a hole. Wasting 2M-4k of
> virtual memory is likely cheaper than having one more vma in the
> rbtree for every page fault. So I think it's better to just malloc
> and adjust ourselves to the next aligned offset, which is done in
> userland by qemu_memalign, I think.

Won't we get a new vma anyway due to the madvise() call later?  But I
agree it isn't worth it.

-- 
Do not meddle in the internals of kernels, for they are subtle and quick to panic.