Date: Mon, 28 Oct 2013 12:04:06 -0200
From: Marcelo Tosatti
Message-ID: <20131028140406.GA18025@amt.cnet>
References: <20131024211158.064049176@amt.cnet> <20131024211249.723543071@amt.cnet> <5269B378.6040409@redhat.com> <20131025045805.GA18280@amt.cnet> <20131025115718.15b6e788@redhat.com> <20131025133421.GA27529@amt.cnet> <20131027162044.19769397@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20131027162044.19769397@redhat.com>
Subject: Re: [Qemu-devel] [patch 2/2] i386: pc: align gpa<->hpa on 1GB boundary
To: Igor Mammedov
Cc: aarcange@redhat.com, Paolo Bonzini, qemu-devel@nongnu.org, gleb@redhat.com

On Sun, Oct 27, 2013 at 04:20:44PM +0100, Igor Mammedov wrote:
> > Yes, thought of that; unfortunately it's cumbersome to add an interface
> > for the user to supply both 2MB and 1GB hugetlbfs pages.
> Could the 2MB tails be automated? That is, if the host uses 1GB hugepages
> and there is a tail, QEMU should be able to figure out the alignment
> issues and allocate it with appropriately sized pages.

Yes, that would be ideal, but the problem with hugetlbfs is that pages are
preallocated. So in the end you would still have to expose the split of
guest RAM into 2MB and 1GB types to the user (it would be necessary for the
user to calculate the size of the hole, etc.).

> The goal is to separate the host allocation aspect from the guest-related
> one; aliasing the 32-bit hole size at the end doesn't help with that at
> all. Quite the opposite: it makes the current code more complicated and
> harder to fix in the future.

You can simply back the 1GB areas in which the hole resides with 2MB pages.
I can't see why having the tail of RAM map into the hole is problematic.

I understand your concern, but the complication is necessary: the host
virtual/physical addresses and the guest physical addresses must be aligned
on large-page boundaries.

Do you foresee any problem with memory hotplug?

We could add a warning to the memory API: if a memory region is larger than
1GB, is backed by 1GB pages, and is not properly aligned, warn.
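
To put numbers on the "tail" above (the 3.5GB hole start below is just an
example value for illustration, not what the pc machine necessarily uses):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define MB (1024ULL * 1024)
#define GB (1024 * MB)

int main(void)
{
    /* Example value only: assume the 32-bit PCI hole starts at 3.5GB,
     * so below-4G guest RAM ends there. */
    uint64_t below_4g_size = 0xe0000000ULL;

    /* Largest 1GB-aligned prefix that 1GB pages can back: 3GB here. */
    uint64_t gb_backed = below_4g_size & ~(GB - 1);
    /* Remainder that would need 2MB pages: 512MB here. */
    uint64_t tail = below_4g_size - gb_backed;

    printf("1GB-page part: %" PRIu64 " MB, 2MB-page tail: %" PRIu64 " MB\n",
           gb_backed / MB, tail / MB);
    return 0;
}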
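Roughly what I have in mind for the warning (an untested sketch;
warn_if_misaligned, the region names and how the backing page size would be
obtained are made up for illustration, this is not the existing memory API):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define GB (1024ULL * 1024 * 1024)

/* Warn when a large, 1GB-page-backed region is not aligned to its backing
 * page size, since the gpa<->hpa mapping then cannot use 1GB mappings. */
static void warn_if_misaligned(const char *name, uint64_t gpa,
                               uint64_t size, uint64_t backing_pagesize)
{
    if (backing_pagesize < GB || size < GB) {
        return;                         /* nothing to check */
    }
    if ((gpa | size) & (backing_pagesize - 1)) {
        fprintf(stderr, "warning: region %s (gpa 0x%" PRIx64 ", size 0x%"
                PRIx64 ") is backed by %" PRIu64 "MB pages but is not "
                "aligned to them; large pages will be split\n",
                name, gpa, size, backing_pagesize >> 20);
    }
}

int main(void)
{
    /* Region above 4GB, 1GB-aligned start and size: no warning. */
    warn_if_misaligned("ram-above-4g", 4 * GB, 3 * GB, GB);
    /* Region below 4GB ending at a 3.5GB hole start: size is not a
     * multiple of 1GB, so this one warns. */
    warn_if_misaligned("ram-below-4g", 0, 3 * GB + GB / 2, GB);
    return 0;
}

In the real thing the check would of course live where the region's RAM
block and its backing file are set up, using the actual hugetlbfs page size
rather than a parameter.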