linux-mm.kvack.org archive mirror
From: Santosh Shilimkar <santosh.shilimkar@ti.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Tejun Heo <tj@kernel.org>,
	Grygorii Strashko <grygorii.strashko@ti.com>,
	linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
	Yinghai Lu <yinghai@kernel.org>
Subject: Re: [PATCH v2 08/23] mm/memblock: Add memblock memory allocation apis
Date: Fri, 10 Jan 2014 19:59:37 -0500	[thread overview]
Message-ID: <52D09779.9010900@ti.com> (raw)
In-Reply-To: <20140110165314.80adf6b53c310693529c3c80@linux-foundation.org>

On Friday 10 January 2014 07:53 PM, Andrew Morton wrote:
> On Thu, 5 Dec 2013 12:13:16 -0500 Santosh Shilimkar <santosh.shilimkar@ti.com> wrote:
> 
>> On Thursday 05 December 2013 11:59 AM, Tejun Heo wrote:
>>> Hello,
>>>
>>> On Thu, Dec 05, 2013 at 03:12:30PM +0200, Grygorii Strashko wrote:
>>>> I'll try to provide more technical details here.
>>>> As Santosh mentioned in previous e-mails, it's not easy to simply
>>>> stop using MAX_NUMNODES:
>>>> 1) we introduce a new interface, memblock_allocX
>>>> 2) our interface uses the memblock APIs __next_free_mem_range_rev()
>>>>    and __next_free_mem_range()
>>>> 3) __next_free_mem_range_rev() and __next_free_mem_range() use MAX_NUMNODES
>>>> 4) __next_free_mem_range_rev() and __next_free_mem_range() are used standalone,
>>>>    outside of our interface, as part of *for_each_free_mem_range* or for_each_mem_pfn_range ..
>>>>
>>>> Point [4] makes it necessary to find and correct all the places where
>>>> memblock APIs are used and where MAX_NUMNODES is expected as an input
>>>> parameter. The major problem is that a simple "grep" will not work,
>>>> because the memblock API calls are hidden inside other MM modules and
>>>> it's not always clear what will be passed as input parameters to the
>>>> APIs of these modules (for example,
>>>> sparse_memory_present_with_active_regions() or sparse.c).
>>>
>>> Isn't that kinda trivial to work around?  Make those functions accept
>>> both MAX_NUMNODES and NUMA_NO_NODE but emit warning on MAX_NUMNODES
>>> (preferably throttled reasonably).  Given the history of API, we'd
>>> probably want to keep such warning for extended period of time but
>>> that's what we'd need to do no matter what.
>>>
>> Looks like a good idea.
>>
>>>> As a result, the WIP patch I did, which was posted by Santosh,
>>>> illustrates the probable size and complexity of the change.
>>>
>>> Again, I don't really mind the order things happen but I don't think
>>> it's a good idea to spread misuse via a new API.  You gotta deal
>>> with it one way or the other.
>>>
>>>> Sorry, but the question here is not "to do or not to do?", but rather
>>>> "how to do it?", taking into account the complexity and state of the
>>>> current MM code.
>>>> For example, would it be OK if I work around the issue as in the attached patch?
>>>
>>> Well, it's more a question of when.  It's not really a technically
>>> difficult task, and all I'm saying is that it had better be sooner
>>> rather than later.
>>>
>> Fair enough. Based on your suggestion, we will try to see if
>> we can proceed with 4) accepting both MAX_NUMNODES and NUMA_NO_NODE.
>>
>> Thanks for the suggestion.
> 
> So where do we now stand with this MAX_NUMNODES-vs-NUMA_NO_NODE mess? 
> Is the conversion to NUMA_NO_NODE in current linux-next completed and
> nicely tested?
> 
From all the reports so far, there were actually only three places in the
x86 code using MAX_NUMNODES, and the fix for those is already in your queue.
So I guess we are good on that aspect now.

Regards,
Santosh

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org


Thread overview: 48+ messages
2013-12-03  2:27 [PATCH v2 00/23] mm: Use memblock interface instead of bootmem Santosh Shilimkar
2013-12-03  2:27 ` [PATCH v2 01/23] mm/memblock: debug: correct displaying of upper memory boundary Santosh Shilimkar
2013-12-03  2:27 ` [PATCH v2 02/23] mm/memblock: debug: don't free reserved array if !ARCH_DISCARD_MEMBLOCK Santosh Shilimkar
2013-12-03 22:52   ` Tejun Heo
2013-12-04 14:58     ` Santosh Shilimkar
2013-12-03  2:27 ` [PATCH v2 03/23] mm/bootmem: remove duplicated declaration of __free_pages_bootmem() Santosh Shilimkar
2013-12-03 22:53   ` Tejun Heo
2013-12-03  2:27 ` [PATCH v2 04/23] mm/memory_hotplug: remove unnecessary inclusion of bootmem.h Santosh Shilimkar
2013-12-03 22:54   ` Tejun Heo
2013-12-03  2:27 ` [PATCH v2 05/23] mm/staging: " Santosh Shilimkar
2013-12-03  2:27 ` [PATCH v2 06/23] mm/char: " Santosh Shilimkar
2013-12-03 22:55   ` Tejun Heo
2013-12-04 14:57     ` Santosh Shilimkar
2013-12-03  2:27 ` [PATCH v2 07/23] mm/memblock: drop WARN and use SMP_CACHE_BYTES as a default alignment Santosh Shilimkar
2013-12-03 22:58   ` Tejun Heo
2013-12-03  2:27 ` [PATCH v2 08/23] mm/memblock: Add memblock memory allocation apis Santosh Shilimkar
2013-12-03 23:24   ` Tejun Heo
2013-12-04 15:54     ` Santosh Shilimkar
2013-12-04 16:07       ` Tejun Heo
2013-12-04 16:46         ` Santosh Shilimkar
2013-12-05 13:12           ` Grygorii Strashko
2013-12-05 16:59             ` Tejun Heo
2013-12-05 17:13               ` Santosh Shilimkar
2014-01-11  0:53                 ` Andrew Morton
2014-01-11  0:59                   ` Santosh Shilimkar [this message]
2013-12-05 16:35     ` Grygorii Strashko
2013-12-05 16:53       ` Tejun Heo
2013-12-05 18:48         ` Strashko, Grygorii
2013-12-05 18:51           ` Tejun Heo
2013-12-05 20:34           ` Santosh Shilimkar
2013-12-06 14:52             ` Grygorii Strashko
2013-12-03  2:27 ` [PATCH v2 09/23] mm/init: Use memblock apis for early memory allocations Santosh Shilimkar
2013-12-03  2:27 ` [PATCH v2 10/23] mm/printk: " Santosh Shilimkar
2013-12-03  2:27 ` [PATCH v2 11/23] mm/page_alloc: " Santosh Shilimkar
2013-12-03  2:27 ` [PATCH v2 12/23] mm/power: " Santosh Shilimkar
2013-12-03  2:27 ` [PATCH v2 13/23] mm/lib/swiotlb: " Santosh Shilimkar
2013-12-03  2:27 ` [PATCH v2 14/23] mm/lib/cpumask: " Santosh Shilimkar
2013-12-03  2:27 ` [PATCH v2 15/23] mm/sparse: " Santosh Shilimkar
2013-12-03  2:27 ` [PATCH v2 16/23] mm/hugetlb: " Santosh Shilimkar
2013-12-03  2:27 ` [PATCH v2 17/23] mm/page_cgroup: " Santosh Shilimkar
2013-12-03  2:27 ` [PATCH v2 18/23] mm/percpu: " Santosh Shilimkar
2013-12-03  2:27 ` [PATCH v2 19/23] mm/memory_hotplug: " Santosh Shilimkar
2013-12-03  2:27 ` [PATCH v2 20/23] mm/firmware: " Santosh Shilimkar
2013-12-03  2:27 ` [PATCH v2 21/23] mm/ARM: kernel: " Santosh Shilimkar
2013-12-03  2:27 ` [PATCH v2 22/23] mm/ARM: mm: " Santosh Shilimkar
2013-12-03  2:27 ` [PATCH v2 23/23] mm/ARM: OMAP: " Santosh Shilimkar
2013-12-03 22:48 ` [PATCH v2 00/23] mm: Use memblock interface instead of bootmem Tejun Heo
2013-12-04 14:56   ` Santosh Shilimkar
