From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 10 Aug 2016 17:14:53 +0900
From: Sergey Senozhatsky
To: js1304@gmail.com
Cc: Andrew Morton, Vlastimil Babka, Minchan Kim, Michal Hocko,
	Sergey Senozhatsky, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	Joonsoo Kim
Subject: Re: [PATCH 1/5] mm/debug_pagealloc: clean-up guard page handling code
Message-ID: <20160810081453.GB573@swordfish>
References: <1470809784-11516-1-git-send-email-iamjoonsoo.kim@lge.com>
	<1470809784-11516-2-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1470809784-11516-2-git-send-email-iamjoonsoo.kim@lge.com>
List-ID: linux-kernel@vger.kernel.org

Hello,

On (08/10/16 15:16), js1304@gmail.com wrote:
[..]
> -static inline void set_page_guard(struct zone *zone, struct page *page,
> +static inline bool set_page_guard(struct zone *zone, struct page *page,
>  				unsigned int order, int migratetype)
>  {
>  	struct page_ext *page_ext;
> 
>  	if (!debug_guardpage_enabled())
> -		return;
> +		return false;
> +
> +	if (order >= debug_guardpage_minorder())
> +		return false;
> 
>  	page_ext = lookup_page_ext(page);
>  	if (unlikely(!page_ext))
> -		return;
> +		return false;
> 
>  	__set_bit(PAGE_EXT_DEBUG_GUARD, &page_ext->flags);
> 
> @@ -656,6 +659,8 @@ static inline void set_page_guard(struct zone *zone, struct page *page,
>  	set_page_private(page, order);
>  	/* Guard pages are not available for any usage */
>  	__mod_zone_freepage_state(zone, -(1 << order), migratetype);
> +
> +	return true;
>  }
> 
>  static inline void clear_page_guard(struct zone *zone, struct page *page,
> @@ -678,8 +683,8 @@ static inline void clear_page_guard(struct zone *zone, struct page *page,
>  }
>  #else
>  struct page_ext_operations debug_guardpage_ops = { NULL, };
> -static inline void set_page_guard(struct zone *zone, struct page *page,
> -			unsigned int order, int migratetype) {}
> +static inline bool set_page_guard(struct zone *zone, struct page *page,
> +			unsigned int order, int migratetype) { return false; }
>  static inline void clear_page_guard(struct zone *zone, struct page *page,
>  			unsigned int order, int migratetype) {}
>  #endif
> @@ -1650,18 +1655,15 @@ static inline void expand(struct zone *zone, struct page *page,
>  		size >>= 1;
>  		VM_BUG_ON_PAGE(bad_range(zone, &page[size]), &page[size]);
> 
> -		if (IS_ENABLED(CONFIG_DEBUG_PAGEALLOC) &&
> -			debug_guardpage_enabled() &&
> -			high < debug_guardpage_minorder()) {
> -			/*
> -			 * Mark as guard pages (or page), that will allow to
> -			 * merge back to allocator when buddy will be freed.
> -			 * Corresponding page table entries will not be touched,
> -			 * pages will stay not present in virtual address space
> -			 */
> -			set_page_guard(zone, &page[size], high, migratetype);
> +		/*
> +		 * Mark as guard pages (or page), that will allow to
> +		 * merge back to allocator when buddy will be freed.
> +		 * Corresponding page table entries will not be touched,
> +		 * pages will stay not present in virtual address space
> +		 */
> +		if (set_page_guard(zone, &page[size], high, migratetype))
>  			continue;
> -		}

so previously IS_ENABLED(CONFIG_DEBUG_PAGEALLOC) could have optimized
out the entire branch -- no set_page_guard() invocation and checks,
right? but now we would call set_page_guard() every time?

	-ss

> +
>  		list_add(&page[size].lru, &area->free_list[migratetype]);
>  		area->nr_free++;
>  		set_page_order(&page[size], high);