From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: from mail-pd0-f179.google.com (mail-pd0-f179.google.com [209.85.192.179])
	by kanga.kvack.org (Postfix) with ESMTP id 756BF6B0036
	for ; Thu, 26 Sep 2013 11:43:22 -0400 (EDT)
Received: by mail-pd0-f179.google.com with SMTP id v10so1311043pde.38
	for ; Thu, 26 Sep 2013 08:43:22 -0700 (PDT)
Received: by mail-pd0-f178.google.com with SMTP id w10so1327407pde.37
	for ; Thu, 26 Sep 2013 08:43:19 -0700 (PDT)
Message-ID: <52445606.7030108@gmail.com>
Date: Thu, 26 Sep 2013 23:43:02 +0800
From: Zhang Yanfei
MIME-Version: 1.0
Subject: Re: [PATCH v5 4/6] x86/mem-hotplug: Support initialize page tables in bottom-up
References: <5241D897.1090905@gmail.com> <5241DA5B.8000909@gmail.com>
 <20130926144851.GF3482@htj.dyndns.org>
In-Reply-To: <20130926144851.GF3482@htj.dyndns.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
Sender: owner-linux-mm@kvack.org
List-ID: 
To: Tejun Heo
Cc: "Rafael J . Wysocki" , lenb@kernel.org, Thomas Gleixner , mingo@elte.hu,
 "H. Peter Anvin" , Andrew Morton , Toshi Kani , Wanpeng Li ,
 Thomas Renninger , Yinghai Lu , Jiang Liu , Wen Congyang , Lai Jiangshan ,
 isimatu.yasuaki@jp.fujitsu.com, izumi.taku@jp.fujitsu.com, Mel Gorman ,
 Minchan Kim , mina86@mina86.com, gong.chen@linux.intel.com,
 vasilis.liaskovitis@profitbricks.com, lwoodman@redhat.com, Rik van Riel ,
 jweiner@redhat.com, prarit@redhat.com, "x86@kernel.org" ,
 linux-doc@vger.kernel.org, "linux-kernel@vger.kernel.org" , Linux MM ,
 linux-acpi@vger.kernel.org, imtangchen@gmail.com, Zhang Yanfei

Hello tejun,

On 09/26/2013 10:48 PM, Tejun Heo wrote:
> Hello,
> 
> On Wed, Sep 25, 2013 at 02:30:51AM +0800, Zhang Yanfei wrote:
>> +/**
>> + * memory_map_bottom_up - Map [map_start, map_end) bottom up
>> + * @map_start: start address of the target memory range
>> + * @map_end: end address of the target memory range
>> + *
>> + * This function will setup direct mapping for memory range
>> + * [map_start, map_end) in bottom-up.
>
> Ditto about the comment.

OK, will do.

>> + */
>> +static void __init memory_map_bottom_up(unsigned long map_start,
>> +					unsigned long map_end)
>> +{
>> +	unsigned long next, new_mapped_ram_size, start;
>> +	unsigned long mapped_ram_size = 0;
>> +	/* step_size need to be small so pgt_buf from BRK could cover it */
>> +	unsigned long step_size = PMD_SIZE;
>> +
>> +	start = map_start;
>> +	min_pfn_mapped = start >> PAGE_SHIFT;
>> +
>> +	/*
>> +	 * We start from the bottom (@map_start) and go to the top (@map_end).
>> +	 * The memblock_find_in_range() gets us a block of RAM from the
>> +	 * end of RAM in [min_pfn_mapped, max_pfn_mapped) used as new pages
>> +	 * for page table.
>> +	 */
>> +	while (start < map_end) {
>> +		if (map_end - start > step_size) {
>> +			next = round_up(start + 1, step_size);
>> +			if (next > map_end)
>> +				next = map_end;
>> +		} else
>> +			next = map_end;
>> +
>> +		new_mapped_ram_size = init_range_memory_mapping(start, next);
>> +		start = next;
>> +
>> +		if (new_mapped_ram_size > mapped_ram_size)
>> +			step_size <<= STEP_SIZE_SHIFT;
>> +		mapped_ram_size += new_mapped_ram_size;
>> +	}
>> +}
> 
> As Yinghai pointed out in another thread, do we need to worry about
> falling back to top-down?

I've explained to him. Nope, we don't need to worry about that: even
though min_pfn_mapped becomes ISA_END_ADDRESS in the second call below,
we won't allocate memory below the kernel, because we have limited the
allocation to above the kernel.

Thanks.

-- 
Thanks.
Zhang Yanfei
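[Editor's aside, not part of the thread: the step-size doubling in the quoted
loop can be simulated outside the kernel to see how quickly the mapped ranges
grow. The sketch below is an assumption-laden Python model, not kernel code:
PMD_SIZE and STEP_SIZE_SHIFT are taken as 2 MiB and 5 (the values commonly
seen in arch/x86/mm/init.c at that time), and init_range_memory_mapping() is
replaced by plain size accounting.]

```python
# Illustrative model of memory_map_bottom_up() from the quoted patch.
# Assumed constants (check arch/x86/mm/init.c for the real values):
PMD_SIZE = 2 << 20          # 2 MiB
STEP_SIZE_SHIFT = 5         # each productive round grows step_size 32x

def round_up(x, align):
    # align is always a power of two here (step_size starts as one
    # and is only ever left-shifted)
    return (x + align - 1) & ~(align - 1)

def memory_map_bottom_up(map_start, map_end):
    """Return the list of (start, next) ranges mapped, bottom-up."""
    ranges = []
    mapped_ram_size = 0
    step_size = PMD_SIZE    # small at first: pgt_buf from BRK must cover it
    start = map_start
    while start < map_end:
        if map_end - start > step_size:
            next_ = min(round_up(start + 1, step_size), map_end)
        else:
            next_ = map_end
        # stand-in for init_range_memory_mapping(start, next)
        new_mapped = next_ - start
        ranges.append((start, next_))
        start = next_
        if new_mapped > mapped_ram_size:
            step_size <<= STEP_SIZE_SHIFT
        mapped_ram_size += new_mapped
    return ranges

rs = memory_map_bottom_up(0x1000000, 1 << 32)   # 16 MiB .. 4 GiB
print(len(rs))  # → 4
```

Mapping 16 MiB..4 GiB takes only four rounds (2 MiB, then up to 64 MiB, 2 GiB,
and the remainder): the first tiny chunk is paid for out of the BRK buffer, and
each newly mapped chunk funds page tables for a much larger next step.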