kernelnewbies.kernelnewbies.org archive mirror
* Basic HighMeM Question
@ 2011-06-27  9:12 piyush moghe
  2011-06-27  9:28 ` Prabhu nath
  0 siblings, 1 reply; 11+ messages in thread
From: piyush moghe @ 2011-06-27  9:12 UTC (permalink / raw)
  To: kernelnewbies

I have some very basic questions related to HighMem memory mapping:

1) Why can't we directly map memory in high memory?

2) As documented in many places, why is ZONE_NORMAL limited to 896 MB?


Regards,
Piyush


* Basic HighMeM Question
  2011-06-27  9:12 Basic HighMeM Question piyush moghe
@ 2011-06-27  9:28 ` Prabhu nath
       [not found]   ` <BANLkTinp6_n0z6OAzo1R6sq_nLynyo_3Xg@mail.gmail.com>
  0 siblings, 1 reply; 11+ messages in thread
From: Prabhu nath @ 2011-06-27  9:28 UTC (permalink / raw)
  To: kernelnewbies

Please see inline.

On Mon, Jun 27, 2011 at 2:42 PM, piyush moghe <pmkernel@gmail.com> wrote:

> I have some very basic questions related to HighMem memory mapping:
>
> 1) Why can't we directly map memory in high memory?
>
        This question is a bit misphrased. Typically, on the Intel (32-bit x86)
architecture, the physical address space is divided into LOWMEM and HIGHMEM
regions: the lower 896 MB is marked as LOWMEM and everything above 896 MB as
HIGHMEM. On Intel, memory is always decoded starting from 0x00000000. For
example, if you have 1 GB of memory, the first 896 MB is decoded as LOWMEM and
the rest as HIGHMEM.

>
> 2) As documented in many places, why is ZONE_NORMAL limited to 896 MB?
>
        This is because these 896 MB of physical address space are directly
mapped into the kernel's linear virtual address space, i.e. from 0xC0000000 to
0xF8000000. There is also a fixed constant-offset relation between VA and PA,
i.e. VA = PA + 0xC0000000. This has been done to avoid any page table walk for
translating a kernel virtual address to a physical address: when a kernel
virtual address is used, the PA can be calculated directly, making kernel code
execution faster.
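
As a rough userspace sketch of that constant-offset arithmetic (illustrative
only; inside the kernel the equivalent conversions are done by the
__pa()/__va() helpers, and the 0xC0000000 offset assumes the usual 3G/1G
split):

/* Illustrative userspace C, not kernel code: the arithmetic behind the
 * direct mapping, assuming PAGE_OFFSET = 0xC0000000 (3G/1G split). */
#include <stdio.h>

#define PAGE_OFFSET 0xC0000000UL

static unsigned long virt_to_phys(unsigned long va) { return va - PAGE_OFFSET; }
static unsigned long phys_to_virt(unsigned long pa) { return pa + PAGE_OFFSET; }

int main(void)
{
    unsigned long pa = 0x01000000UL;         /* 16 MB into physical RAM */
    unsigned long va = phys_to_virt(pa);     /* -> 0xC1000000           */

    printf("PA 0x%08lx <-> VA 0x%08lx <-> PA 0x%08lx\n",
           pa, va, virt_to_phys(va));
    return 0;
}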

All of the above explanation strictly applies to the Intel (x86) architecture
on desktop machines.

Regards,
Prabhunath

>
>
> Regards,
> Piyush
>


* Basic HighMeM Question
       [not found]   ` <BANLkTinp6_n0z6OAzo1R6sq_nLynyo_3Xg@mail.gmail.com>
@ 2011-06-27 11:15     ` Prabhu nath
  2011-06-28  5:51       ` piyush moghe
  0 siblings, 1 reply; 11+ messages in thread
From: Prabhu nath @ 2011-06-27 11:15 UTC (permalink / raw)
  To: kernelnewbies

On Mon, Jun 27, 2011 at 3:58 PM, Paraneetharan Chandrasekaran <
paraneetharanc@gmail.com> wrote:

>
>
> On 27 June 2011 14:58, Prabhu nath <gprabhunath@gmail.com> wrote:
>
>> [Prabhu's earlier reply quoted in full snipped]
>
> Does this mean the MMU doesn't look into the TLB or page tables when a
> kernel virtual address is referenced? How does the MMU know the range of the
> direct (i.e. offset) mapping?
>
       Ideally yes, the MMU should hold the range information. I do not know
about the Intel architecture, but I have learnt that on the PowerPC
architecture there are BAT registers which hold that mapping information.
      Any comments?



* Basic HighMeM Question
  2011-06-27 11:15     ` Prabhu nath
@ 2011-06-28  5:51       ` piyush moghe
  2011-06-28 10:16         ` Mulyadi Santosa
  0 siblings, 1 reply; 11+ messages in thread
From: piyush moghe @ 2011-06-28  5:51 UTC (permalink / raw)
  To: kernelnewbies

Thanks Prabhu.

So does this mean that the ZONE_NORMAL limit of 896 MB is because of the 3:1
memory split?

If so, does this mean that if this ratio is changed, the ZONE_NORMAL limit
will also change?


Regards,
Piyush

On Mon, Jun 27, 2011 at 4:45 PM, Prabhu nath <gprabhunath@gmail.com> wrote:

> [full quote of the earlier exchange snipped]


* Basic HighMeM Question
  2011-06-28  5:51       ` piyush moghe
@ 2011-06-28 10:16         ` Mulyadi Santosa
  2011-06-28 10:49           ` Prabhu nath
  0 siblings, 1 reply; 11+ messages in thread
From: Mulyadi Santosa @ 2011-06-28 10:16 UTC (permalink / raw)
  To: kernelnewbies

Is it okay to jump in, guys?

On Tue, Jun 28, 2011 at 12:51, piyush moghe <pmkernel@gmail.com> wrote:
> Thanks Prabhu.
> So does this mean that the ZONE_NORMAL limit of 896 MB is because of the 3:1
> memory split?

Partly yes; in addition, the 896 MB figure itself comes from reservations in
the kernel address space such as the vmalloc area, fixmap, etc.
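
A back-of-the-envelope sketch of that arithmetic (illustrative only; the real
boundary is set by the kernel's PAGE_OFFSET and vmalloc/fixmap layout, not
computed like this):

/* Illustrative only: 1 GB of kernel virtual space minus the ~128 MB reserved
 * for vmalloc, kmap, fixmap, etc. leaves 896 MB of directly mapped lowmem. */
#include <stdio.h>

int main(void)
{
    unsigned long kernel_space_mb = 1024;  /* 0xC0000000-0xFFFFFFFF (3G/1G split) */
    unsigned long reserved_mb     = 128;   /* vmalloc area, kmap, fixmap, ...     */

    printf("ZONE_NORMAL (lowmem) limit = %lu MB\n",
           kernel_space_mb - reserved_mb);  /* prints 896 */
    return 0;
}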

> If so, does this mean that if this ratio is changed, the ZONE_NORMAL limit
> will also change?

see above...
-- 
regards,

Mulyadi Santosa
Freelance Linux trainer and consultant

blog: the-hydra.blogspot.com
training: mulyaditraining.blogspot.com


* Basic HighMeM Question
  2011-06-28 10:16         ` Mulyadi Santosa
@ 2011-06-28 10:49           ` Prabhu nath
  2011-06-28 15:26             ` Mulyadi Santosa
  0 siblings, 1 reply; 11+ messages in thread
From: Prabhu nath @ 2011-06-28 10:49 UTC (permalink / raw)
  To: kernelnewbies

Dear Mulyadi,

You are always welcome to jump into any discussion; it will be our great
pleasure.

Please see inline for my views.

On Tue, Jun 28, 2011 at 3:46 PM, Mulyadi Santosa
<mulyadi.santosa@gmail.com> wrote:

> Is it okay to jump in, guys?
>
> On Tue, Jun 28, 2011 at 12:51, piyush moghe <pmkernel@gmail.com> wrote:
> > Thanks Prabhu.
> > So does this mean that the ZONE_NORMAL limit of 896 MB is because of the
> > 3:1 memory split?
>
> Partly yes; in addition, the 896 MB figure itself comes from reservations in
> the kernel address space such as the vmalloc area, fixmap, etc.
>

    In a 3G/1G split, the 1 GB of kernel virtual address space is divided into
896 MB and 128 MB regions. I name them:
  Fixed Constant Offset Mapped region (FCOM) - 0xC0000000 to 0xF8000000
  Dynamically Arbitrarily Mappable region (DAMR) - 0xF8000000 to 0xFFFFFFFF

   As the name suggests, the FCOM region is already mapped to the physical
address range 0x00000000 to 0x38000000, i.e. for any virtual address in the
FCOM region there is a readily available physical address.

   Base kernel code/data/stack and all kmalloc allocations are served from the
FCOM region; these require contiguity in the virtual address space as well as
in physical memory.

   Device driver code/data built as modules, and vmalloc allocations, which
need not be physically contiguous but only contiguous in virtual address
space, are served from the DAMR region.

   For example, check the addresses of the functions in a kernel module you
insmod into the kernel; they will be in the DAMR region.

    The kernel symbol *high_memory* gives the starting kernel virtual address
of the DAMR region.

If you have 512 MB of RAM, it is decoded from 0x00000000 to 0x20000000, and
your ZONE_NORMAL size is ~512 MB. Check the value of high_memory; it will give
you plenty to think about. Also refer to ULK, chapter 8.
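
If you want to try this out, here is a minimal module sketch (untested,
offered only as an illustration; on recent kernels plain %p output is hashed,
so you may need %px to see the raw addresses):

/* Minimal sketch: print where kmalloc()/vmalloc() allocations and this
 * module's own code land relative to high_memory. Assumes a 32-bit x86
 * kernel with the 3G/1G split discussed above; error handling omitted. */
#include <linux/module.h>
#include <linux/mm.h>        /* high_memory   */
#include <linux/slab.h>      /* kmalloc/kfree */
#include <linux/vmalloc.h>   /* vmalloc/vfree */

static void *kbuf, *vbuf;

static int __init zones_demo_init(void)
{
    kbuf = kmalloc(PAGE_SIZE, GFP_KERNEL);  /* lowmem, "FCOM", below high_memory */
    vbuf = vmalloc(PAGE_SIZE);              /* vmalloc area, "DAMR", above it    */

    pr_info("high_memory=%p kmalloc=%p vmalloc=%p module code=%p\n",
            high_memory, kbuf, vbuf, zones_demo_init);
    return 0;
}

static void __exit zones_demo_exit(void)
{
    kfree(kbuf);
    vfree(vbuf);
}

module_init(zones_demo_init);
module_exit(zones_demo_exit);
MODULE_LICENSE("GPL");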





* Basic HighMeM Question
  2011-06-28 10:49           ` Prabhu nath
@ 2011-06-28 15:26             ` Mulyadi Santosa
  2011-06-29  6:30               ` piyush moghe
  0 siblings, 1 reply; 11+ messages in thread
From: Mulyadi Santosa @ 2011-06-28 15:26 UTC (permalink / raw)
  To: kernelnewbies

Hi :)

On Tue, Jun 28, 2011 at 17:49, Prabhu nath <gprabhunath@gmail.com> wrote:
> Dear Mulyadi,
>
> You are always welcome to jump into any discussion; it will be our great
> pleasure.

Thanks :) Well, sometimes I just hesitate to break into someone else's
discussion.

> Please see inline for my views.
>
> On Tue, Jun 28, 2011 at 3:46 PM, Mulyadi Santosa <mulyadi.santosa@gmail.com>
> wrote:
> In a 3G/1G split, the 1 GB of kernel virtual address space is divided into
> 896 MB and 128 MB regions. I name them:
>   Fixed Constant Offset Mapped region (FCOM) - 0xC0000000 to 0xF8000000
>   Dynamically Arbitrarily Mappable region (DAMR) - 0xF8000000 to 0xFFFFFFFF

Great naming! You beat me on that aspect :)

PS: There was once a patch by Ingo Molnar to create a 4:4 VM split. It
maximizes the address space at the expense of a full TLB flush on every
context switch. AFAIK it was once included in Fedora Core 2 or 3, but was
dropped afterwards since, from the virtual memory management point of view,
its negative impact outweighed the benefit.

-- 
regards,

Mulyadi Santosa
Freelance Linux trainer and consultant

blog: the-hydra.blogspot.com
training: mulyaditraining.blogspot.com


* Basic HighMeM Question
  2011-06-28 15:26             ` Mulyadi Santosa
@ 2011-06-29  6:30               ` piyush moghe
  2011-06-29  6:38                 ` Mulyadi Santosa
  0 siblings, 1 reply; 11+ messages in thread
From: piyush moghe @ 2011-06-29  6:30 UTC (permalink / raw)
  To: kernelnewbies

Thanks Mulyadi and Prabhu for your enlightening descriptions.

What a plight! Memory has become so cheap nowadays that I don't have a system
with less than 1 GB, and it is difficult to find anyone I know with less than
1 GB of memory.

Also, does this mean that pages in FCOM will never page-fault? And if that is
true, is this the reason why we assign NULL to the memory descriptor
(mm_struct) of kernel threads?


Regards,
Piyush

On Tue, Jun 28, 2011 at 8:56 PM, Mulyadi Santosa
<mulyadi.santosa@gmail.com> wrote:

> [previous message quoted in full snipped]


* Basic HighMeM Question
  2011-06-29  6:30               ` piyush moghe
@ 2011-06-29  6:38                 ` Mulyadi Santosa
  2011-06-29  7:34                   ` Paraneetharan Chandrasekaran
  0 siblings, 1 reply; 11+ messages in thread
From: Mulyadi Santosa @ 2011-06-29  6:38 UTC (permalink / raw)
  To: kernelnewbies

Hi :)

On Wed, Jun 29, 2011 at 13:30, piyush moghe <pmkernel@gmail.com> wrote:
> Thanks Mulyadi and Prabhu for your enlightening descriptions.

You're welcome :)

> What a plight! Memory has become so cheap nowadays that I don't have a
> system with less than 1 GB, and it is difficult to find anyone I know with
> less than 1 GB of memory.

In the embedded world, it's still a common scenario... so it depends on which
side you see it from :) That's the flexibility the Linux kernel tries to
show: it does well on big-memory machines, but it can also run in a small
amount of memory... of course, with the right user-space applications :)
(hint: SliTaz, Puppy, Tiny Core...)


> Also, does this mean that pages in FCOM will never page-fault?

Everything mapped in kernel space (I stress the word "mapped") is designed to
stay in RAM at all times in the Linux kernel. So based on that, AFAIK we won't
get a page fault in kernel space. This is strictly a design choice, IMHO.

> And if that is true, is this the reason why we assign NULL to the memory
> descriptor (mm_struct) of kernel threads?

Because kernel threads don't need an address space of their own. They can
simply "borrow" the last scheduled process's address space. After all, they
operate only in kernel space, which is the same for all processes, be they
kernel threads or normal tasks.

-- 
regards,

Mulyadi Santosa
Freelance Linux trainer and consultant

blog: the-hydra.blogspot.com
training: mulyaditraining.blogspot.com


* Basic HighMeM Question
  2011-06-29  6:38                 ` Mulyadi Santosa
@ 2011-06-29  7:34                   ` Paraneetharan Chandrasekaran
  2011-06-29  9:03                     ` Mulyadi Santosa
  0 siblings, 1 reply; 11+ messages in thread
From: Paraneetharan Chandrasekaran @ 2011-06-29  7:34 UTC (permalink / raw)
  To: kernelnewbies

On 29 June 2011 12:08, Mulyadi Santosa <mulyadi.santosa@gmail.com> wrote:

> [earlier discussion snipped]
> > And if that is true, is this the reason why we assign NULL to the memory
> > descriptor (mm_struct) of kernel threads?
>
> Because kernel threads don't need an address space of their own. They can
> simply "borrow" the last scheduled process's address space. After all, they
> operate only in kernel space, which is the same for all processes, be they
> kernel threads or normal tasks.
>

Thanks Mulyadi for your clarifications!
I am not getting the idea of "borrowing" the last-run process's address space.
A kernel thread refers only to addresses in the kernel's address space (the
low-mem area), which is already mapped, isn't it? How does the address space
of the last-run task come into the picture?





-- 
Regards,
Paraneetharan C


* Basic HighMeM Question
  2011-06-29  7:34                   ` Paraneetharan Chandrasekaran
@ 2011-06-29  9:03                     ` Mulyadi Santosa
  0 siblings, 0 replies; 11+ messages in thread
From: Mulyadi Santosa @ 2011-06-29  9:03 UTC (permalink / raw)
  To: kernelnewbies

Hi :)

On Wed, Jun 29, 2011 at 14:34, Paraneetharan Chandrasekaran
<paraneetharanc@gmail.com> wrote:
> Thanks Mulyadi for your clarifications!
> I am not getting the idea of "borrowing" the last-run process's address
> space. A kernel thread refers only to addresses in the kernel's address
> space (the low-mem area), which is already mapped, isn't it? How does the
> address space of the last-run task come into the picture?

Think of it like this: a kernel thread is supposed to operate entirely in
kernel space, right?

Then, you also agree that the kernel address space is the same for all running
processes, correct? The only thing that differs is their user-space mapping,
right?

Based on these facts, a kernel thread can simply use the address space
descriptor of the last scheduled process. Remember: its descriptor (and thus,
logically, its mapping too). Is that fine? Sure... check the facts above if
you are confused. By using this trick, we save a few kilobytes by not
allocating memory for yet another virtual memory space descriptor (an
mm_struct and its VMAs).
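
Purely as an illustration of that borrowing (a toy, standalone C sketch; the
field names mirror the kernel's task_struct/mm_struct, but these are made-up
toy definitions, and the real logic lives in the scheduler's context_switch()
and is more involved):

/* Toy sketch: a kernel thread (mm == NULL) keeps running on whatever
 * address space was last loaded, so no page-table switch is needed. */
#include <stdio.h>
#include <stddef.h>

struct mm_struct { int users; };

struct task_struct {
    const char       *comm;
    struct mm_struct *mm;         /* NULL for kernel threads       */
    struct mm_struct *active_mm;  /* address space actually in use */
};

static void context_switch_sketch(struct task_struct *prev,
                                  struct task_struct *next)
{
    struct mm_struct *oldmm = prev->active_mm;

    if (!next->mm) {              /* kernel thread: borrow the old mm */
        next->active_mm = oldmm;
        oldmm->users++;           /* keep the borrowed mm alive       */
        /* no page-table switch: kernel mappings are identical anyway */
    } else {
        next->active_mm = next->mm;
        /* a real kernel would switch page tables (switch_mm) here    */
    }
}

int main(void)
{
    struct mm_struct   user_mm = { .users = 1 };
    struct task_struct bash    = { "bash",    &user_mm, &user_mm };
    struct task_struct kswapd  = { "kswapd0", NULL,     NULL     };

    context_switch_sketch(&bash, &kswapd);
    printf("%s: mm=%p active_mm=%p (borrowed from %s)\n",
           kswapd.comm, (void *)kswapd.mm, (void *)kswapd.active_mm, bash.comm);
    return 0;
}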

-- 
regards,

Mulyadi Santosa
Freelance Linux trainer and consultant

blog: the-hydra.blogspot.com
training: mulyaditraining.blogspot.com


Thread overview: 11+ messages
2011-06-27  9:12 Basic HighMeM Question piyush moghe
2011-06-27  9:28 ` Prabhu nath
     [not found]   ` <BANLkTinp6_n0z6OAzo1R6sq_nLynyo_3Xg@mail.gmail.com>
2011-06-27 11:15     ` Prabhu nath
2011-06-28  5:51       ` piyush moghe
2011-06-28 10:16         ` Mulyadi Santosa
2011-06-28 10:49           ` Prabhu nath
2011-06-28 15:26             ` Mulyadi Santosa
2011-06-29  6:30               ` piyush moghe
2011-06-29  6:38                 ` Mulyadi Santosa
2011-06-29  7:34                   ` Paraneetharan Chandrasekaran
2011-06-29  9:03                     ` Mulyadi Santosa
