From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: from mail.linuxfoundation.org ([140.211.169.12]:42304 "EHLO
	mail.linuxfoundation.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1752513AbdJSNO6 (ORCPT );
	Thu, 19 Oct 2017 09:14:58 -0400
Subject: Patch "mm/memory_hotplug: set magic number to page->freelist instead of page->lru.next" has been added to the 4.9-stable tree
To: yasu.isimatu@gmail.com, akpm@linux-foundation.org,
	alexander.levin@verizon.com, dave.hansen@linux.intel.com,
	gregkh@linuxfoundation.org, hpa@zytor.com, isimatu.yasuaki@jp.fujitsu.com,
	mgorman@techsingularity.net, mingo@redhat.com, qiuxishi@huawei.com,
	tglx@linutronix.de, torvalds@linux-foundation.org, vbabka@suse.cz
Cc: ,
From:
Date: Thu, 19 Oct 2017 15:14:28 +0200
Message-ID: <150841886842152@kroah.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=ANSI_X3.4-1968
Content-Transfer-Encoding: 8bit
Sender: stable-owner@vger.kernel.org
List-ID:

This is a note to let you know that I've just added the patch titled

    mm/memory_hotplug: set magic number to page->freelist instead of page->lru.next

to the 4.9-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     mm-memory_hotplug-set-magic-number-to-page-freelist-instead-of-page-lru.next.patch
and it can be found in the queue-4.9 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


>From foo@baz Thu Oct 19 15:04:02 CEST 2017
From: Yasuaki Ishimatsu
Date: Wed, 22 Feb 2017 15:45:13 -0800
Subject: mm/memory_hotplug: set magic number to page->freelist instead of page->lru.next

From: Yasuaki Ishimatsu


[ Upstream commit ddffe98d166f4a93d996d5aa628fd745311fc1e7 ]

To identify that page-table pages were allocated from the bootmem
allocator, a magic number is stored in page->lru.next.  But the
page->lru list is initialized in reserve_bootmem_region(), so by the
time free_pagetable() runs it can no longer find the magic number on
those pages, and it frees them with free_reserved_page() instead of
put_page_bootmem().

However, pages that were allocated from the bootmem allocator and used
as page tables have the Private flag set, so the flag must be cleared
via put_page_bootmem() before the pages are freed.

Before commit 7bfec6f47bb0 ("mm, page_alloc: check multiple page fields
with a single branch") was applied, the problem was visible:

  BUG: Bad page state in process kworker/u1024:1
  page:ffffea103cfd8040 count:0 mapcount:0 mappi
  flags: 0x6fffff80000800(private)
  page dumped because: PAGE_FLAGS_CHECK_AT_FREE flag(s) set
  bad because of flags: 0x800(private)
  Call Trace:
   [...] dump_stack+0x63/0x87
   [...] bad_page+0x114/0x130
   [...] free_pages_prepare+0x299/0x2d0
   [...] free_hot_cold_page+0x31/0x150
   [...] __free_pages+0x25/0x30
   [...] free_pagetable+0x6f/0xb4
   [...] remove_pagetable+0x379/0x7ff
   [...] vmemmap_free+0x10/0x20
   [...] sparse_remove_one_section+0x149/0x180
   [...] __remove_pages+0x2e9/0x4f0
   [...] arch_remove_memory+0x63/0xc0
   [...] remove_memory+0x8c/0xc0
   [...] acpi_memory_device_remove+0x79/0xa5
   [...] acpi_bus_trim+0x5a/0x8d
   [...] acpi_bus_trim+0x38/0x8d
   [...] acpi_device_hotplug+0x1b7/0x418
   [...] acpi_hotplug_work_fn+0x1e/0x29
   [...] process_one_work+0x152/0x400
   [...] worker_thread+0x125/0x4b0
   [...] kthread+0xd8/0xf0
   [...] ret_from_fork+0x22/0x40

Since that commit the issue still occurs, only silently.

Page-table pages allocated from the bootmem allocator never use
page->freelist until they are freed, so set the magic number in
page->freelist instead of page->lru.next.
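For reference, the following stand-alone sketch illustrates the underlying
problem.  It is a simplified user-space illustration only: the struct below is
made up and is not the real struct page layout, and the magic value is a
placeholder, not the kernel's SECTION_INFO.  The point it demonstrates is that
a tag parked in the first word of a list head is lost the moment the list head
is initialized (as reserve_bootmem_region() initializes page->lru), while a tag
parked in an otherwise unused pointer field such as freelist survives.

  /*
   * Simplified user-space illustration -- NOT the real struct page layout.
   */
  #include <stdio.h>

  #define SECTION_INFO 0xaaUL            /* placeholder magic value */

  struct list_head { struct list_head *next, *prev; };

  struct fake_page {                      /* stand-in for struct page      */
  	struct list_head lru;           /* reinitialized later           */
  	void *freelist;                 /* unused until the page is freed */
  };

  static void init_list(struct list_head *l)   /* mimics INIT_LIST_HEAD() */
  {
  	l->next = l;
  	l->prev = l;
  }

  int main(void)
  {
  	struct fake_page page = { 0 };

  	/* old scheme: stash the magic in lru.next */
  	page.lru.next = (struct list_head *)SECTION_INFO;
  	/* new scheme: stash it in freelist instead */
  	page.freelist = (void *)SECTION_INFO;

  	/* later, the lru list head gets initialized */
  	init_list(&page.lru);

  	/* lru.next now points back at the list head: the magic is gone */
  	printf("magic via lru.next:  %#lx\n", (unsigned long)page.lru.next);
  	/* freelist was never touched: the magic is intact */
  	printf("magic via freelist:  %#lx\n", (unsigned long)page.freelist);
  	return 0;
  }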
[isimatu.yasuaki@jp.fujitsu.com: fix merge issue]
Link: http://lkml.kernel.org/r/722b1cc4-93ac-dd8b-2be2-7a7e313b3b0b@gmail.com
Link: http://lkml.kernel.org/r/2c29bd9f-5b67-02d0-18a3-8828e78bbb6f@gmail.com
Signed-off-by: Yasuaki Ishimatsu
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: H. Peter Anvin
Cc: Dave Hansen
Cc: Vlastimil Babka
Cc: Mel Gorman
Cc: Xishi Qiu
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman
---
 arch/x86/mm/init_64.c |    2 +-
 mm/memory_hotplug.c   |    5 +++--
 mm/sparse.c           |    2 +-
 3 files changed, 5 insertions(+), 4 deletions(-)

--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -689,7 +689,7 @@ static void __meminit free_pagetable(str
 	if (PageReserved(page)) {
 		__ClearPageReserved(page);
 
-		magic = (unsigned long)page->lru.next;
+		magic = (unsigned long)page->freelist;
 		if (magic == SECTION_INFO || magic == MIX_SECTION_INFO) {
 			while (nr_pages--)
 				put_page_bootmem(page++);
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -179,7 +179,7 @@ static void release_memory_resource(stru
 void get_page_bootmem(unsigned long info,  struct page *page,
 		      unsigned long type)
 {
-	page->lru.next = (struct list_head *) type;
+	page->freelist = (void *)type;
 	SetPagePrivate(page);
 	set_page_private(page, info);
 	page_ref_inc(page);
@@ -189,11 +189,12 @@ void put_page_bootmem(struct page *page)
 {
 	unsigned long type;
 
-	type = (unsigned long) page->lru.next;
+	type = (unsigned long) page->freelist;
 	BUG_ON(type < MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE ||
 	       type > MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE);
 
 	if (page_ref_dec_return(page) == 1) {
+		page->freelist = NULL;
 		ClearPagePrivate(page);
 		set_page_private(page, 0);
 		INIT_LIST_HEAD(&page->lru);
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -662,7 +662,7 @@ static void free_map_bootmem(struct page
 		>> PAGE_SHIFT;
 
 	for (i = 0; i < nr_pages; i++, page++) {
-		magic = (unsigned long) page->lru.next;
+		magic = (unsigned long) page->freelist;
 
 		BUG_ON(magic == NODE_INFO);


Patches currently in stable-queue which might be from yasu.isimatu@gmail.com are

queue-4.9/mm-memory_hotplug-set-magic-number-to-page-freelist-instead-of-page-lru.next.patch