From: Matt Wilson
Subject: [RFC PATCH] page_alloc: use first half of higher order chunks when halving
Date: Tue, 25 Mar 2014 13:22:04 +0200
Message-ID: <1395746524-9670-1-git-send-email-msw@linux.com>
To: xen-devel@lists.xenproject.org
Cc: Keir Fraser, Matt Wilson, Andrew Cooper, Tim Deegan, Matt Rushton, Jan Beulich
List-Id: xen-devel@lists.xenproject.org

From: Matt Rushton

This patch makes the Xen heap allocator use the first half of higher
order chunks instead of the second half when breaking them down for
smaller order allocations. Linux currently remaps the memory
overlapping PCI space one page at a time. Before this change, this
resulted in the MFNs being allocated in reverse order, which led to
discontiguous dom0 memory. That in turn forced dom0 to use bounce
buffers for DMA and resulted in poor performance. This change handles
the dom0 use case more gracefully and returns contiguous memory for
subsequent allocations.
Cc: xen-devel@lists.xenproject.org
Cc: Keir Fraser
Cc: Jan Beulich
Cc: Konrad Rzeszutek Wilk
Cc: Tim Deegan
Cc: Andrew Cooper
Signed-off-by: Matt Rushton
Signed-off-by: Matt Wilson
---
 xen/common/page_alloc.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 601319c..27e7f18 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -677,9 +677,10 @@ static struct page_info *alloc_heap_pages(
     /* We may have to halve the chunk a number of times. */
     while ( j != order )
     {
-        PFN_ORDER(pg) = --j;
-        page_list_add_tail(pg, &heap(node, zone, j));
-        pg += 1 << j;
+        struct page_info *pg2 = pg + (1 << --j);
+
+        PFN_ORDER(pg2) = j;
+        page_list_add_tail(pg2, &heap(node, zone, j));
     }
 
     ASSERT(avail[node][zone] >= request);
--
1.7.9.5