From: Tharindu Rukshan Bamunuarachchi <btharindu@gmail.com>
To: Hugh Dickins <hughd@google.com>
Cc: Christoph Lameter <cl@linux.com>, linux-mm@kvack.org
Subject: Re: TMPFS Maximum File Size
Date: Tue, 26 Oct 2010 19:25:14 +0530
Message-ID: <AANLkTim=6Oan-CSnGMD1CTsd5iGRr98X44TAcirQt7Q_@mail.gmail.com>
In-Reply-To: <AANLkTin32b4SaC0PTJpX8Pg4anQ3aSMUZFe0QFbt9y36@mail.gmail.com>
Dear Hugh/Christoph/All,
After investigating the issue further, I observed abnormal memory
allocation behavior.
I do not know whether this is expected behavior or due to a misconfiguration.
I have a two-node NUMA system and a 100G TMPFS mount.
1. When "dd" was running freely (without CPU affinity), all memory
pages were allocated from NODE 0 first and then from NODE 1.
2. When "dd" was bound (using taskset) to a CPU core on NODE 1,
all memory pages were allocated from NODE 1.
BUT the machine stopped responding after NODE 1 was exhausted.
No memory pages were allocated from NODE 0.
Do you have any comments / suggestions to try out?
Why can't "dd" allocate memory from NODE 0 when it is running bound
to a NODE 1 CPU core?
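In case it is useful to anyone reproducing this, a small script along
these lines (my own sketch; it reads the standard Linux sysfs per-node
meminfo files, which I assume are present) can watch per-node MemFree
while "dd" runs, to confirm which node the pages are coming from:

```python
import glob
import re

def node_free_kb():
    """Return {node_id: MemFree in kB} parsed from
    /sys/devices/system/node/node*/meminfo (standard Linux sysfs path)."""
    free = {}
    for path in glob.glob("/sys/devices/system/node/node*/meminfo"):
        node = int(re.search(r"node(\d+)", path).group(1))
        with open(path) as f:
            for line in f:
                # Lines look like: "Node 0 MemFree:   12345678 kB"
                if "MemFree" in line:
                    free[node] = int(line.split()[-2])
    return free

if __name__ == "__main__":
    print(node_free_kb())
```

Running it in a loop alongside "dd" should show which node's MemFree
drops; on a machine without the sysfs node directories it simply
returns an empty dict.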
This is the backtrace of the core generated by our application process.
Core was generated by `DataWareHouseEngine Surv:1:1:DataWareHouseEngine:1'.
Program terminated with signal 11, Segmentation fault.
#0 0x00007fd924b0cf7c in write () from /lib64/libc.so.6
(gdb) bt
#0 0x00007fd924b0cf7c in write () from /lib64/libc.so.6
#1 0x000000000053ed02 in NBasicFile::Write (this=0x7fd9100030c8,
pBuf=0x7fd91c0ce050, iBufLen=29)
at /home/surv_3/0/src/app/SURV/libs/SurvNoraLite/29/NBasicFile.cpp:420
#2 0x00000000005454d4 in NIndex::GenarateHeader (this=0x7fd9100030c0,
rErr=@0x7fd91c8ccdd0)
at /home/surv_3/0/src/app/SURV/libs/SurvNoraLite/29/NIndex.cpp:2350
#3 0x0000000000545a13 in NIndex::Sync (this=0x7fd9100030c0,
oNHandle=26832031833, rErr=@0x7fd91c8ccdd0)
at /home/surv_3/0/src/app/SURV/libs/SurvNoraLite/29/NIndex.cpp:2440
#4 0x0000000000486538 in MIndex::Sync (this=0x7fd9100027b0,
roNHandleTableEnd=@0x2697f470) at
/home/surv_3/0/src/app/SURV/libs/SSDWI/67/MIndex.C:1562
#5 0x0000000000483ebf in MDataStore::Fix (this=0x2697f0e8) at
/home/surv_3/0/src/app/SURV/libs/SSDWI/67/MDataStore.C:762
#6 0x000000000047971b in SSPage::Connect (this=0x7fd90c01c320,
iPage=0, bIsRecover=true) at
/home/surv_3/0/src/app/SURV/libs/SSDWI/67/SSPage.cpp:1548
#7 0x000000000046a911 in DWHEWriter::Init (this=0x9640f0) at
/home/surv_3/0/src/app/SURV/components/DataWareHouseEngine/62/DWHEWriter.C:170
#8 0x000000000046ae8a in DWHEWriter::Run (pPT=0x9640f0) at
/home/surv_3/0/src/app/SURV/components/DataWareHouseEngine/62/DWHEWriter.C:97
#9 0x00007fd924832070 in start_thread () from /lib64/libpthread.so.0
#10 0x00007fd924b1a10d in clone () from /lib64/libc.so.6
#11 0x0000000000000000 in ?? ()
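For what it is worth, I would expect a full tmpfs to show up as a short
write or an ENOSPC error from write(), not a SIGSEGV inside it. Below is
a rough sketch (my own, not our NBasicFile code) of a write loop that
handles short writes and surfaces ENOSPC; it uses /dev/full to simulate
a full filesystem:

```python
import errno
import os

def write_all(fd, buf):
    """Write buf fully, retrying on short writes.
    Raises OSError (e.g. errno.ENOSPC) if the kernel reports an error."""
    view = memoryview(buf)
    while view:
        n = os.write(fd, view)  # may write fewer bytes than requested
        view = view[n:]

if __name__ == "__main__":
    # /dev/full is a standard Linux device that always reports ENOSPC.
    fd = os.open("/dev/full", os.O_WRONLY)
    try:
        write_all(fd, b"x" * 4096)
    except OSError as e:
        assert e.errno == errno.ENOSPC
        print("got ENOSPC as expected")
    finally:
        os.close(fd)
```

If the crash is really inside libc's write(), that points at a corrupted
fd/buffer argument rather than at tmpfs filling up.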
__
Tharindu R Bamunuarachchi.
On Wed, Oct 20, 2010 at 10:27 PM, Hugh Dickins <hughd@google.com> wrote:
> On Wed, Oct 20, 2010 at 6:44 AM, Tharindu Rukshan Bamunuarachchi
> <btharindu@gmail.com> wrote:
>>
>> Is there any kind of file size limitation in TMPFS ?
>
> There is, but it should not be affecting you. In your x86_64 case,
> the tmpfs filesize limit should be slightly over 256GB.
>
> (There's no good reason for that limit when CONFIG_SWAP is not set,
> and it's then just a waste of memory on those swap vectors: I've long
> wanted to #ifdef CONFIG_SWAP them, but never put in the work to do so
> cleanly.)
>
>> Our application SEGFAULTs inside write() after filling 70% of the
>> TMPFS mount (re-creatable, but it does not happen every time).
>
> I've no idea why that should be happening: I wonder if your case is
> actually triggering some memory corruption, in application or in
> kernel, that manifests in that way.
>
> But I don't quite understand what you're seeing either: a segfault in
> the write() library call of your libc? an EFAULT from the kernel's
> sys_write()?
>
> Hugh
>
>>
>> We are using a 98GB TMPFS without a swap device, i.e. swap is turned off.
>> Our applications take no more than approx. 20GB of memory.
>>
>> We have 128GB of physical RAM in an Intel x86 box running SLES 11 64-bit.
>> We use Infiniband, and we export the TMPFS over NFS and IBM GPFS on the
>> same box (hopefully those do not affect this).
>>
>> I am a bit confused: what is the "triple-indirect swap vector"?
>>
>> Extracted from shmem.c ....
>>
>> /*
>> * The maximum size of a shmem/tmpfs file is limited by the maximum size of
>> * its triple-indirect swap vector - see illustration at shmem_swp_entry().
>> *
>> * With 4kB page size, maximum file size is just over 2TB on a 32-bit kernel,
>> * but one eighth of that on a 64-bit kernel. With 8kB page size, maximum
>> * file size is just over 4TB on a 64-bit kernel, but 16TB on a 32-bit kernel,
>> * MAX_LFS_FILESIZE being then more restrictive than swap vector layout.
>> *
>>
>> Thanks a lot.
>> __
>> Tharindu R Bamunuarachchi.
>>
>
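P.S. A back-of-the-envelope check of the limits Hugh quoted, from my
reading of the 2.6-era mm/shmem.c layout (SHMEM_MAX_INDEX =
SHMEM_NR_DIRECT + (ENTRIES_PER_PAGEPAGE/2) * (ENTRIES_PER_PAGE+1); I
ignore the few direct entries, so the numbers are approximate):

```python
PAGE_SIZE = 4096  # assuming 4kB pages, as in the shmem.c comment

def shmem_max_bytes(swp_entry_size):
    """Approximate tmpfs max file size implied by the triple-indirect
    swap vector layout (2.6-era mm/shmem.c; direct entries ignored)."""
    epp = PAGE_SIZE // swp_entry_size          # ENTRIES_PER_PAGE
    max_index = (epp * epp // 2) * (epp + 1)   # ~ SHMEM_MAX_INDEX, in pages
    return max_index * PAGE_SIZE

print(shmem_max_bytes(4) / 2**40)  # 4-byte entries (32-bit): just over 2 TB
print(shmem_max_bytes(8) / 2**30)  # 8-byte entries (64-bit): just over 256 GB
```

The 64-bit limit is roughly one eighth of the 32-bit one because each
swap entry doubles in size, halving the fan-out at each of the three
indirection levels.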