linux-mm.kvack.org archive mirror
* TMPFS Maximum File Size
@ 2010-10-20 13:44 Tharindu Rukshan Bamunuarachchi
  2010-10-20 14:14 ` Christoph Lameter
  2010-10-20 16:57 ` Hugh Dickins
  0 siblings, 2 replies; 9+ messages in thread
From: Tharindu Rukshan Bamunuarachchi @ 2010-10-20 13:44 UTC (permalink / raw)
  To: hughd; +Cc: linux-mm

Hi Hugh/All,

Is there any kind of file size limitation in TMPFS?
Our application SEGFAULTs inside write() after filling 70% of the
TMPFS mount (reproducible, but it does not happen every time).

We are using a 98GB TMPFS without a swap device, i.e. swap is turned off.
Our applications take no more than approx. 20GB of memory.

We have 128GB of physical RAM on an Intel x86 box running SLES 11 64-bit.
We use InfiniBand, and export the TMPFS over NFS and IBM GPFS on the same
box (hopefully those don't affect this).

I am a bit confused about the "triple-indirect swap vector".

Extracted from shmem.c:

/*
 * The maximum size of a shmem/tmpfs file is limited by the maximum size of
 * its triple-indirect swap vector - see illustration at shmem_swp_entry().
 *
 * With 4kB page size, maximum file size is just over 2TB on a 32-bit kernel,
 * but one eighth of that on a 64-bit kernel.  With 8kB page size, maximum
 * file size is just over 4TB on a 64-bit kernel, but 16TB on a 32-bit kernel,
 * MAX_LFS_FILESIZE being then more restrictive than swap vector layout.
 *

Thanks a lot.
__
Tharindu R Bamunuarachchi.

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <dont@kvack.org>

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: TMPFS Maximum File Size
  2010-10-20 13:44 TMPFS Maximum File Size Tharindu Rukshan Bamunuarachchi
@ 2010-10-20 14:14 ` Christoph Lameter
  2010-10-20 16:57 ` Hugh Dickins
  1 sibling, 0 replies; 9+ messages in thread
From: Christoph Lameter @ 2010-10-20 14:14 UTC (permalink / raw)
  To: Tharindu Rukshan Bamunuarachchi; +Cc: hughd, linux-mm

On Wed, 20 Oct 2010, Tharindu Rukshan Bamunuarachchi wrote:

> Is there any kind of file size limitation in TMPFS?
> Our application SEGFAULTs inside write() after filling 70% of the
> TMPFS mount (reproducible, but it does not happen every time).

Please show us the console output and the backtrace for the segfault.

> We are using a 98GB TMPFS without a swap device, i.e. swap is turned off.
> Our applications take no more than approx. 20GB of memory.

Are you sure? If they were taking too much memory we should see an OOM,
though. Kernel logs would be very useful to figure out what is going on.

> We have 128GB of physical RAM on an Intel x86 box running SLES 11 64-bit.
> We use InfiniBand, and export the TMPFS over NFS and IBM GPFS on the same
> box (hopefully those don't affect this).

Interesting use case.



* Re: TMPFS Maximum File Size
  2010-10-20 13:44 TMPFS Maximum File Size Tharindu Rukshan Bamunuarachchi
  2010-10-20 14:14 ` Christoph Lameter
@ 2010-10-20 16:57 ` Hugh Dickins
  2010-10-26 13:55   ` Tharindu Rukshan Bamunuarachchi
  1 sibling, 1 reply; 9+ messages in thread
From: Hugh Dickins @ 2010-10-20 16:57 UTC (permalink / raw)
  To: Tharindu Rukshan Bamunuarachchi; +Cc: Christoph Lameter, linux-mm

On Wed, Oct 20, 2010 at 6:44 AM, Tharindu Rukshan Bamunuarachchi
<btharindu@gmail.com> wrote:
>
> Is there any kind of file size limitation in TMPFS?

There is, but it should not be affecting you.  In your x86_64 case,
the tmpfs filesize limit should be slightly over 256GB.

(There's no good reason for that limit when CONFIG_SWAP is not set,
and it's then just a waste of memory on those swap vectors: I've long
wanted to #ifdef CONFIG_SWAP them, but never put in the work to do so
cleanly.)

> Our application SEGFAULTs inside write() after filling 70% of the
> TMPFS mount (reproducible, but it does not happen every time).

I've no idea why that should be happening: I wonder if your case is
actually triggering some memory corruption, in application or in
kernel, that manifests in that way.

But I don't quite understand what you're seeing either: a segfault in
the write() library call of your libc?  an EFAULT from the kernel's
sys_write()?

Hugh

>
> We are using a 98GB TMPFS without a swap device, i.e. swap is turned off.
> Our applications take no more than approx. 20GB of memory.
>
> We have 128GB of physical RAM on an Intel x86 box running SLES 11 64-bit.
> We use InfiniBand, and export the TMPFS over NFS and IBM GPFS on the same
> box (hopefully those don't affect this).
>
> I am a bit confused about the "triple-indirect swap vector".
>
> Extracted from shmem.c:
>
> /*
>  * The maximum size of a shmem/tmpfs file is limited by the maximum size of
>  * its triple-indirect swap vector - see illustration at shmem_swp_entry().
>  *
>  * With 4kB page size, maximum file size is just over 2TB on a 32-bit kernel,
>  * but one eighth of that on a 64-bit kernel.  With 8kB page size, maximum
>  * file size is just over 4TB on a 64-bit kernel, but 16TB on a 32-bit kernel,
>  * MAX_LFS_FILESIZE being then more restrictive than swap vector layout.
>  *
>
> Thanks a lot.
> __
> Tharindu R Bamunuarachchi.
>



* Re: TMPFS Maximum File Size
  2010-10-20 16:57 ` Hugh Dickins
@ 2010-10-26 13:55   ` Tharindu Rukshan Bamunuarachchi
  2010-10-27  3:44     ` Hugh Dickins
  2010-10-27 20:08     ` Christoph Lameter
  0 siblings, 2 replies; 9+ messages in thread
From: Tharindu Rukshan Bamunuarachchi @ 2010-10-26 13:55 UTC (permalink / raw)
  To: Hugh Dickins; +Cc: Christoph Lameter, linux-mm

Dear Hugh/Christoph/All,

After investigating the issue further, I observed abnormal memory
allocation behavior.
I do not know whether this is expected behavior or due to misconfiguration.

I have a two-node NUMA system and a 100G TMPFS mount.

1. When "dd" ran freely (without CPU affinity), all memory pages
were allocated from NODE 0 first and then from NODE 1.

2. When "dd" ran bound (using taskset) to a CPU core on NODE 1:
    all memory pages were allocated from NODE 1,
    BUT the machine stopped responding after NODE 1 was exhausted;
    no memory pages were allocated from NODE 0.

Do you have any comments / suggestions to try out?
Why can "dd" not allocate memory from NODE 0 when it is bound
to a NODE 1 CPU core?




This is the backtrace of the core generated by our application process.

Core was generated by `DataWareHouseEngine Surv:1:1:DataWareHouseEngine:1'.
Program terminated with signal 11, Segmentation fault.
#0  0x00007fd924b0cf7c in write () from /lib64/libc.so.6
(gdb) bt
#0  0x00007fd924b0cf7c in write () from /lib64/libc.so.6
#1  0x000000000053ed02 in NBasicFile::Write (this=0x7fd9100030c8,
pBuf=0x7fd91c0ce050, iBufLen=29)
    at /home/surv_3/0/src/app/SURV/libs/SurvNoraLite/29/NBasicFile.cpp:420
#2  0x00000000005454d4 in NIndex::GenarateHeader (this=0x7fd9100030c0,
rErr=@0x7fd91c8ccdd0)
    at /home/surv_3/0/src/app/SURV/libs/SurvNoraLite/29/NIndex.cpp:2350
#3  0x0000000000545a13 in NIndex::Sync (this=0x7fd9100030c0,
oNHandle=26832031833, rErr=@0x7fd91c8ccdd0)
    at /home/surv_3/0/src/app/SURV/libs/SurvNoraLite/29/NIndex.cpp:2440
#4  0x0000000000486538 in MIndex::Sync (this=0x7fd9100027b0,
roNHandleTableEnd=@0x2697f470) at
/home/surv_3/0/src/app/SURV/libs/SSDWI/67/MIndex.C:1562
#5  0x0000000000483ebf in MDataStore::Fix (this=0x2697f0e8) at
/home/surv_3/0/src/app/SURV/libs/SSDWI/67/MDataStore.C:762
#6  0x000000000047971b in SSPage::Connect (this=0x7fd90c01c320,
iPage=0, bIsRecover=true) at
/home/surv_3/0/src/app/SURV/libs/SSDWI/67/SSPage.cpp:1548
#7  0x000000000046a911 in DWHEWriter::Init (this=0x9640f0) at
/home/surv_3/0/src/app/SURV/components/DataWareHouseEngine/62/DWHEWriter.C:170
#8  0x000000000046ae8a in DWHEWriter::Run (pPT=0x9640f0) at
/home/surv_3/0/src/app/SURV/components/DataWareHouseEngine/62/DWHEWriter.C:97
#9  0x00007fd924832070 in start_thread () from /lib64/libpthread.so.0
#10 0x00007fd924b1a10d in clone () from /lib64/libc.so.6
#11 0x0000000000000000 in ?? ()

__
Tharindu R Bamunuarachchi.




On Wed, Oct 20, 2010 at 10:27 PM, Hugh Dickins <hughd@google.com> wrote:
> On Wed, Oct 20, 2010 at 6:44 AM, Tharindu Rukshan Bamunuarachchi
> <btharindu@gmail.com> wrote:
>>
>> Is there any kind of file size limitation in TMPFS?
>
> There is, but it should not be affecting you.  In your x86_64 case,
> the tmpfs filesize limit should be slightly over 256GB.
>
> (There's no good reason for that limit when CONFIG_SWAP is not set,
> and it's then just a waste of memory on those swap vectors: I've long
> wanted to #ifdef CONFIG_SWAP them, but never put in the work to do so
> cleanly.)
>
>> Our application SEGFAULTs inside write() after filling 70% of the
>> TMPFS mount (reproducible, but it does not happen every time).
>
> I've no idea why that should be happening: I wonder if your case is
> actually triggering some memory corruption, in application or in
> kernel, that manifests in that way.
>
> But I don't quite understand what you're seeing either: a segfault in
> the write() library call of your libc?  an EFAULT from the kernel's
> sys_write()?
>
> Hugh
>
>>
>> We are using a 98GB TMPFS without a swap device, i.e. swap is turned off.
>> Our applications take no more than approx. 20GB of memory.
>>
>> We have 128GB of physical RAM on an Intel x86 box running SLES 11 64-bit.
>> We use InfiniBand, and export the TMPFS over NFS and IBM GPFS on the same
>> box (hopefully those don't affect this).
>>
>> I am a bit confused about the "triple-indirect swap vector".
>>
>> Extracted from shmem.c:
>>
>> /*
>>  * The maximum size of a shmem/tmpfs file is limited by the maximum size of
>>  * its triple-indirect swap vector - see illustration at shmem_swp_entry().
>>  *
>>  * With 4kB page size, maximum file size is just over 2TB on a 32-bit kernel,
>>  * but one eighth of that on a 64-bit kernel.  With 8kB page size, maximum
>>  * file size is just over 4TB on a 64-bit kernel, but 16TB on a 32-bit kernel,
>>  * MAX_LFS_FILESIZE being then more restrictive than swap vector layout.
>>  *
>>
>> Thanks a lot.
>> __
>> Tharindu R Bamunuarachchi.
>>
>



* Re: TMPFS Maximum File Size
  2010-10-26 13:55   ` Tharindu Rukshan Bamunuarachchi
@ 2010-10-27  3:44     ` Hugh Dickins
  2010-10-27 20:08     ` Christoph Lameter
  1 sibling, 0 replies; 9+ messages in thread
From: Hugh Dickins @ 2010-10-27  3:44 UTC (permalink / raw)
  To: Tharindu Rukshan Bamunuarachchi; +Cc: Christoph Lameter, linux-mm

On Tue, 26 Oct 2010, Tharindu Rukshan Bamunuarachchi wrote:

> Dear Hugh/Christoph/All,
> 
> After investigating the issue further, I observed abnormal memory
> allocation behavior.
> I do not know whether this is expected behavior or due to misconfiguration.
>
> I have a two-node NUMA system and a 100G TMPFS mount.
>
> 1. When "dd" ran freely (without CPU affinity), all memory pages
> were allocated from NODE 0 first and then from NODE 1.
>
> 2. When "dd" ran bound (using taskset) to a CPU core on NODE 1:
>     all memory pages were allocated from NODE 1,
>     BUT the machine stopped responding after NODE 1 was exhausted;
>     no memory pages were allocated from NODE 0.
>
> Do you have any comments / suggestions to try out?
> Why can "dd" not allocate memory from NODE 0 when it is bound
> to a NODE 1 CPU core?

Please take a look at Documentation/filesystems/tmpfs.txt in the
kernel source tree, the section "tmpfs has a mount option to set
the NUMA memory allocation policy" explaining mpol=

I hope that mounting the tmpfs with mpol=interleave, or mpol=local,
will give you the behaviour you want; but perhaps not, since I notice
it does say that the policy applied will be modified by the calling task's
cpuset constraints, and it sounds like your dd is constrained to use
memory only from its node.  Documentation/cgroups/cpusets.txt may be
needed too.
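For reference, the mount option Hugh mentions would look something like this; the mount point and size below are placeholders, not values from the thread:

```shell
# Interleave tmpfs pages across all allowed NUMA nodes
mount -t tmpfs -o size=100G,mpol=interleave tmpfs /mnt/dwh

# Or the equivalent /etc/fstab line:
# tmpfs  /mnt/dwh  tmpfs  size=100G,mpol=interleave  0 0
```

Per tmpfs.txt, mpol also accepts default, prefer:Node, and bind:NodeList, and the effective policy is still constrained by the calling task's cpuset.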


(I am not the right person to advise on managing NUMA and cpusets!)

> 
> This is the backtrace of the core generated by our application process.
> 
> Core was generated by `DataWareHouseEngine Surv:1:1:DataWareHouseEngine:1'.
> Program terminated with signal 11, Segmentation fault.
> #0  0x00007fd924b0cf7c in write () from /lib64/libc.so.6

Nor do I understand why you should be getting a SIGSEGV in libc's write(),
sorry.

> (gdb) bt
> #0  0x00007fd924b0cf7c in write () from /lib64/libc.so.6
> #1  0x000000000053ed02 in NBasicFile::Write (this=0x7fd9100030c8,
> pBuf=0x7fd91c0ce050, iBufLen=29)
>     at /home/surv_3/0/src/app/SURV/libs/SurvNoraLite/29/NBasicFile.cpp:420
> #2  0x00000000005454d4 in NIndex::GenarateHeader (this=0x7fd9100030c0,
> rErr=@0x7fd91c8ccdd0)
>     at /home/surv_3/0/src/app/SURV/libs/SurvNoraLite/29/NIndex.cpp:2350
> #3  0x0000000000545a13 in NIndex::Sync (this=0x7fd9100030c0,
> oNHandle=26832031833, rErr=@0x7fd91c8ccdd0)
>     at /home/surv_3/0/src/app/SURV/libs/SurvNoraLite/29/NIndex.cpp:2440
> #4  0x0000000000486538 in MIndex::Sync (this=0x7fd9100027b0,
> roNHandleTableEnd=@0x2697f470) at
> /home/surv_3/0/src/app/SURV/libs/SSDWI/67/MIndex.C:1562
> #5  0x0000000000483ebf in MDataStore::Fix (this=0x2697f0e8) at
> /home/surv_3/0/src/app/SURV/libs/SSDWI/67/MDataStore.C:762
> #6  0x000000000047971b in SSPage::Connect (this=0x7fd90c01c320,
> iPage=0, bIsRecover=true) at
> /home/surv_3/0/src/app/SURV/libs/SSDWI/67/SSPage.cpp:1548
> #7  0x000000000046a911 in DWHEWriter::Init (this=0x9640f0) at
> /home/surv_3/0/src/app/SURV/components/DataWareHouseEngine/62/DWHEWriter.C:170
> #8  0x000000000046ae8a in DWHEWriter::Run (pPT=0x9640f0) at
> /home/surv_3/0/src/app/SURV/components/DataWareHouseEngine/62/DWHEWriter.C:97
> #9  0x00007fd924832070 in start_thread () from /lib64/libpthread.so.0
> #10 0x00007fd924b1a10d in clone () from /lib64/libc.so.6
> #11 0x0000000000000000 in ?? ()



* Re: TMPFS Maximum File Size
  2010-10-26 13:55   ` Tharindu Rukshan Bamunuarachchi
  2010-10-27  3:44     ` Hugh Dickins
@ 2010-10-27 20:08     ` Christoph Lameter
  2010-10-28 13:35       ` Tharindu Rukshan Bamunuarachchi
  1 sibling, 1 reply; 9+ messages in thread
From: Christoph Lameter @ 2010-10-27 20:08 UTC (permalink / raw)
  To: Tharindu Rukshan Bamunuarachchi; +Cc: Hugh Dickins, linux-mm

On Tue, 26 Oct 2010, Tharindu Rukshan Bamunuarachchi wrote:

> I have a two-node NUMA system and a 100G TMPFS mount.
>
> 1. When "dd" ran freely (without CPU affinity), all memory pages
> were allocated from NODE 0 first and then from NODE 1.
>
> 2. When "dd" ran bound (using taskset) to a CPU core on NODE 1:
>     all memory pages were allocated from NODE 1,
>     BUT the machine stopped responding after NODE 1 was exhausted;
>     no memory pages were allocated from NODE 0.

Hmmm... strange; it should fall back as in #1. Can you tell us where
it hung?

> Do you have any comments / suggestions to try out?
> Why can "dd" not allocate memory from NODE 0 when it is bound
> to a NODE 1 CPU core?

Definitely looks like a bug somewhere. Are TMPFS policies not correctly
falling back to more distant zones?

> Core was generated by `DataWareHouseEngine Surv:1:1:DataWareHouseEngine:1'.
> Program terminated with signal 11, Segmentation fault.
> #0  0x00007fd924b0cf7c in write () from /lib64/libc.so.6

Hmmm... Kernel oops? Or a segfault because of an invalid reference by your
app?



* Re: TMPFS Maximum File Size
  2010-10-27 20:08     ` Christoph Lameter
@ 2010-10-28 13:35       ` Tharindu Rukshan Bamunuarachchi
  2010-10-28 13:49         ` Christoph Lameter
  0 siblings, 1 reply; 9+ messages in thread
From: Tharindu Rukshan Bamunuarachchi @ 2010-10-28 13:35 UTC (permalink / raw)
  To: Christoph Lameter; +Cc: Hugh Dickins, linux-mm

Dear Hugh/Christoph/All,

I have done further testing to isolate the issue and found the following.

1. At the moment, the issue only occurs on IBM hardware (x3550/x3650).
    It did not occur on HP Nehalem or Sun X4600 (I only have IBM/HP/Sun boxes).
2. The issue is not visible with vanilla kernels 2.6.32 or 2.6.36.

SLES 11 is running 2.6.27-45. I think I should turn to IBM/Novell
for further help.
I still wonder why this happens only with the IBM+SLES 11 kernel, when the
same hardware works with later kernels.

__
Tharindu R Bamunuarachchi.



On Thu, Oct 28, 2010 at 1:38 AM, Christoph Lameter <cl@linux.com> wrote:
>
> On Tue, 26 Oct 2010, Tharindu Rukshan Bamunuarachchi wrote:
>
> > I have a two-node NUMA system and a 100G TMPFS mount.
> >
> > 1. When "dd" ran freely (without CPU affinity), all memory pages
> > were allocated from NODE 0 first and then from NODE 1.
> >
> > 2. When "dd" ran bound (using taskset) to a CPU core on NODE 1:
> >     all memory pages were allocated from NODE 1,
> >     BUT the machine stopped responding after NODE 1 was exhausted;
> >     no memory pages were allocated from NODE 0.
>
> Hmmm... strange; it should fall back as in #1. Can you tell us where
> it hung?
>
> > Do you have any comments / suggestions to try out?
> > Why can "dd" not allocate memory from NODE 0 when it is bound
> > to a NODE 1 CPU core?
>
> Definitely looks like a bug somewhere. Are TMPFS policies not correctly
> falling back to more distant zones?
>
> > Core was generated by `DataWareHouseEngine Surv:1:1:DataWareHouseEngine:1'.
> > Program terminated with signal 11, Segmentation fault.
> > #0  0x00007fd924b0cf7c in write () from /lib64/libc.so.6
>
> Hmmm... Kernel oops? Or a segfault because of an invalid reference by your
> app?
>



* Re: TMPFS Maximum File Size
  2010-10-28 13:35       ` Tharindu Rukshan Bamunuarachchi
@ 2010-10-28 13:49         ` Christoph Lameter
  2010-10-29  2:01           ` Tharindu Rukshan Bamunuarachchi
  0 siblings, 1 reply; 9+ messages in thread
From: Christoph Lameter @ 2010-10-28 13:49 UTC (permalink / raw)
  To: Tharindu Rukshan Bamunuarachchi; +Cc: Hugh Dickins, linux-mm

On Thu, 28 Oct 2010, Tharindu Rukshan Bamunuarachchi wrote:

> SLES 11 is running 2.6.27-45. I think I should turn to IBM/Novell
> for further help.

Good idea.

> I still wonder why this happens only with the IBM+SLES 11 kernel, when the
> same hardware works with later kernels.

I have no idea how Novell hacks up their SLES11 kernels. Good to hear that
we do not have the issue upstream.




* Re: TMPFS Maximum File Size
  2010-10-28 13:49         ` Christoph Lameter
@ 2010-10-29  2:01           ` Tharindu Rukshan Bamunuarachchi
  0 siblings, 0 replies; 9+ messages in thread
From: Tharindu Rukshan Bamunuarachchi @ 2010-10-29  2:01 UTC (permalink / raw)
  To: Christoph Lameter; +Cc: Hugh Dickins, linux-mm

On Thu, Oct 28, 2010 at 7:19 PM, Christoph Lameter <cl@linux.com> wrote:
>> I still wonder why this happens only with the IBM+SLES 11 kernel, when the
>> same hardware works with later kernels.
>
> I have no idea how Novell hacks up their SLES11 kernels. Good to hear that
> we do not have the issue upstream.
>
>
Could this be a SLES 11 issue? But SLES 11 works well on different hardware,
so I thought this was an IBM hardware issue.

Thanks.



end of thread, other threads:[~2010-10-29  2:01 UTC | newest]

Thread overview: 9+ messages
2010-10-20 13:44 TMPFS Maximum File Size Tharindu Rukshan Bamunuarachchi
2010-10-20 14:14 ` Christoph Lameter
2010-10-20 16:57 ` Hugh Dickins
2010-10-26 13:55   ` Tharindu Rukshan Bamunuarachchi
2010-10-27  3:44     ` Hugh Dickins
2010-10-27 20:08     ` Christoph Lameter
2010-10-28 13:35       ` Tharindu Rukshan Bamunuarachchi
2010-10-28 13:49         ` Christoph Lameter
2010-10-29  2:01           ` Tharindu Rukshan Bamunuarachchi
