From: Paul Brook
Subject: Re: [Qemu-devel] [PATCH QEMU] transparent hugepage support
Date: Fri, 12 Mar 2010 17:10:54 +0000
To: Andrea Arcangeli
Cc: qemu-devel@nongnu.org, Avi Kivity
Message-Id: <201003121710.54782.paul@codesourcery.com>
In-Reply-To: <20100312165721.GU5677@random.random>
References: <20100311151427.GE5677@random.random> <201003121624.24870.paul@codesourcery.com> <20100312165721.GU5677@random.random>

> > So shouldn't [the name of] the value the kernel provides for recommended
> > alignment be equally implementation agnostic?
>
> Is sys/kernel/mm/transparent_hugepage directory implementation
> agnostic in the first place?

It's about as agnostic as MADV_HUGEPAGE :-)

> If we want to fully take advantage of the feature (i.e. NPT and qemu
> first 2M of guest physical ram where usually kernel resides) userspace
> has to know the alignment size the kernel recommends.

This is KVM specific, so my gut reaction is you should be asking KVM.

> Only thing I'm undecided about is if this should be called
> hpage_pmd_size or just hpage_size. Suppose amd/intel next year adds
> 64k pages too and the kernel decides to use them too if it fails to
> allocate a 2M page. So we escalate the fallback from 2M -> 64k -> 4k,
> and HPAGE_PMD_SIZE becomes 64k. Still qemu has to align on the max
> possible hpage_size provided by transparent hugepage. So with this new
> reasoning I think hpage_size or max_hpage_size would be better sysfs
> name for this. What do you think?

Agreed.

> hpage_size or max_hpage_size?

No particular preference. Or you could have .../page_sizes list all
available sizes, and have qemu take the first one (or last depending on
sort order).

Paul
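
For concreteness, here is a minimal userspace sketch of the scheme being
discussed: read a recommended hugepage size from sysfs, align the guest RAM
allocation to it, and mark the region with MADV_HUGEPAGE. The sysfs file name
used below ("hpage_pmd_size") is only one of the candidates under discussion,
and read_max_hpage_size()/alloc_guest_ram() are illustrative helpers, not
actual QEMU or kernel code.

    /* Sketch: align guest RAM to the kernel's recommended transparent
     * hugepage size and advise the kernel to back it with huge pages. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/mman.h>

    /* MADV_HUGEPAGE may be missing from older libc headers; the value
     * comes from the transparent hugepage kernel patches. */
    #ifndef MADV_HUGEPAGE
    #define MADV_HUGEPAGE 14
    #endif

    /* Return the largest size listed in the given sysfs file, or 0 on
     * failure.  Works for a single-value file or a space-separated
     * "page_sizes"-style list. */
    static size_t read_max_hpage_size(const char *path)
    {
        FILE *f = fopen(path, "r");
        size_t max = 0, val;

        if (!f) {
            return 0;
        }
        while (fscanf(f, "%zu", &val) == 1) {
            if (val > max) {
                max = val;
            }
        }
        fclose(f);
        return max;
    }

    static void *alloc_guest_ram(size_t ram_size)
    {
        /* Hypothetical sysfs name -- whichever name the kernel ends up
         * exporting. */
        size_t align = read_max_hpage_size(
            "/sys/kernel/mm/transparent_hugepage/hpage_pmd_size");
        void *ptr;

        if (!align) {
            align = getpagesize();  /* fall back to the base page size */
        }
        if (posix_memalign(&ptr, align, ram_size)) {
            return NULL;
        }
        /* Advisory only: harmless if the running kernel lacks THP. */
        madvise(ptr, ram_size, MADV_HUGEPAGE);
        return ptr;
    }

With a "page_sizes" list as suggested above, the same helper would simply
pick the maximum entry, so qemu would not need to care which end of the list
the kernel sorts the largest size to.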