From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753946AbcDOJJE (ORCPT); Fri, 15 Apr 2016 05:09:04 -0400
Received: from outbound-smtp06.blacknight.com ([81.17.249.39]:42854 "EHLO
	outbound-smtp06.blacknight.com" rhost-flags-OK-OK-OK-OK)
	by vger.kernel.org with ESMTP id S1753787AbcDOJI6 (ORCPT);
	Fri, 15 Apr 2016 05:08:58 -0400
From: Mel Gorman
To: Andrew Morton
Cc: Vlastimil Babka, Jesper Dangaard Brouer, Linux-MM, LKML, Mel Gorman
Subject: [PATCH 18/28] mm, page_alloc: Shorten the page allocator fast path
Date: Fri, 15 Apr 2016 10:07:45 +0100
Message-Id: <1460711275-1130-6-git-send-email-mgorman@techsingularity.net>
X-Mailer: git-send-email 2.6.4
In-Reply-To: <1460711275-1130-1-git-send-email-mgorman@techsingularity.net>
References: <1460710760-32601-1-git-send-email-mgorman@techsingularity.net>
	<1460711275-1130-1-git-send-email-mgorman@techsingularity.net>
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

The page allocator fast path checks page multiple times unnecessarily. This
patch avoids all the slowpath checks if the first allocation attempt
succeeds.

Signed-off-by: Mel Gorman
---
 mm/page_alloc.c | 29 +++++++++++++++--------------
 1 file changed, 15 insertions(+), 14 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 835a1c434832..7a5f6ff4ea06 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3392,22 +3392,17 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order,
 
 	/* First allocation attempt */
 	page = get_page_from_freelist(alloc_mask, order, alloc_flags, &ac);
-	if (unlikely(!page)) {
-		/*
-		 * Runtime PM, block IO and its error handling path
-		 * can deadlock because I/O on the device might not
-		 * complete.
-		 */
-		alloc_mask = memalloc_noio_flags(gfp_mask);
-		ac.spread_dirty_pages = false;
-
-		page = __alloc_pages_slowpath(alloc_mask, order, &ac);
-	}
+	if (likely(page))
+		goto out;
 
-	if (kmemcheck_enabled && page)
-		kmemcheck_pagealloc_alloc(page, order, gfp_mask);
+	/*
+	 * Runtime PM, block IO and its error handling path can deadlock
+	 * because I/O on the device might not complete.
+	 */
+	alloc_mask = memalloc_noio_flags(gfp_mask);
+	ac.spread_dirty_pages = false;
 
-	trace_mm_page_alloc(page, order, alloc_mask, ac.migratetype);
+	page = __alloc_pages_slowpath(alloc_mask, order, &ac);
 
 	/*
 	 * When updating a task's mems_allowed, it is possible to race with
@@ -3420,6 +3415,12 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order,
 		goto retry_cpuset;
 	}
 
+out:
+	if (kmemcheck_enabled && page)
+		kmemcheck_pagealloc_alloc(page, order, gfp_mask);
+
+	trace_mm_page_alloc(page, order, alloc_mask, ac.migratetype);
+
 	return page;
 }
 EXPORT_SYMBOL(__alloc_pages_nodemask);
-- 
2.6.4
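
For readers who want the shape of the change without the surrounding kernel
context, the sketch below is a minimal stand-alone illustration, not kernel
code: the success case branches straight to a shared exit label, so the slow
path and its re-checks are only reached when the first attempt fails. The
names try_fast_alloc(), try_slow_alloc() and note_alloc() are hypothetical
stand-ins for get_page_from_freelist(), __alloc_pages_slowpath() and the
kmemcheck/tracepoint bookkeeping respectively.

/*
 * Illustrative sketch only -- not part of the patch. It mirrors the
 * restructuring: instead of wrapping the slow path in "if (!p)" and then
 * re-checking "p" for the bookkeeping calls, the fast path jumps to a
 * common exit label as soon as the first attempt succeeds.
 */
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for get_page_from_freelist(): may fail. */
static void *try_fast_alloc(size_t size)
{
	return (size < 4096) ? malloc(size) : NULL;
}

/* Hypothetical stand-in for __alloc_pages_slowpath(). */
static void *try_slow_alloc(size_t size)
{
	return malloc(size);
}

/* Hypothetical stand-in for the kmemcheck/tracepoint bookkeeping. */
static void note_alloc(void *p, size_t size)
{
	fprintf(stderr, "allocated %zu bytes at %p\n", size, p);
}

static void *alloc_example(size_t size)
{
	void *p;

	p = try_fast_alloc(size);
	if (p)
		goto out;		/* common case: skip all slow-path work */

	/* slow path, only reached when the fast attempt failed */
	p = try_slow_alloc(size);

out:
	if (p)
		note_alloc(p, size);	/* bookkeeping shared by both paths */
	return p;
}

int main(void)
{
	void *a = alloc_example(128);		/* takes the fast path */
	void *b = alloc_example(1 << 20);	/* takes the slow path */

	free(a);
	free(b);
	return 0;
}

The point of the layout, as in the patch, is that the successful first
attempt executes no conditionals beyond the one that detects success; every
check that only matters on failure lives entirely on the slow side of the
branch.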