public inbox for linux-kernel@vger.kernel.org
* regarding mem_map in NUMA
@ 2006-07-09  9:38 Om.Turyx
From: Om.Turyx @ 2006-07-09  9:38 UTC (permalink / raw)
  To: linux-kernel

Hi,
While going through Mel Gorman's book, I read that mem_map on NUMA
systems is treated as a virtual array starting at PAGE_OFFSET. I
understood the explanation as follows.

mem_map[pfn] should not be accessed directly. Instead, calculate the
node the pfn belongs to and access the page as
pglist_data[A]->node_zonelists[B]->zone_mem_map[C], where
A : the node id, calculated from the pfn,
B : the offset into the zonelist, calculated from the pfn, and
C : the offset into the node-local mem_map, calculated from the pfn.

A Google search turned up http://lwn.net/Articles/9188/. Under "Other
memory management work", it states that mem_map[x] is not recommended
on NUMA; pfn_to_page() must be used instead.
From include/asm-i386/mmzone.h:

#define pfn_to_page(pfn)					\
({								\
	unsigned long __pfn = pfn;				\
	int __node = pfn_to_nid(__pfn);				\
	&NODE_DATA(__node)->node_mem_map[node_localnr(__pfn, __node)]; \
})
That is, the pfn is used to look up the node id, and then to index
into that node's node_mem_map.
But I still have not found where mem_map is initialized to
PAGE_OFFSET, nor what the significance of that is.

Now, why should mem_map be initialized to PAGE_OFFSET, and where is it
done? In page_alloc.c I found mem_map = NODE_DATA(0)->node_mem_map;
when CONFIG_FLAT_NODE_MEM_MAP is defined (the non-NUMA case). If my
understanding is correct, this should hold for NUMA as well.


Regards,
Om.
