From: "Krzysztof Błaszkowski" <kb@sysmikro.com.pl>
To: xfs@oss.sgi.com, Stan Hoeppner <stan@hardwarefreak.com>
Subject: Re: posix_fallocate
Date: Fri, 7 May 2010 11:48:02 +0200 [thread overview]
Message-ID: <201005071148.03012.kb@sysmikro.com.pl> (raw)
In-Reply-To: <4BE3DC2D.3000607@hardwarefreak.com>
On Friday 07 May 2010 11:23, Stan Hoeppner wrote:
> Krzysztof Błaszkowski put forth on 5/7/2010 3:22 AM:
> > Hello,
> >
> > I use posix_fallocate() to preallocate large amounts of space, but I have
> > run into an issue. It works fine with sizes like 100G, 1T and even 10T on
> > some boxes (on others it can fail above a threshold of e.g. 7T), but if I
> > try e.g. 16T, the user-space process stays "R"unning forever and is not
> > interruptible. Furthermore, some unrelated processes such as sshd and bash
> > enter D state. There is nothing in the kernel log.
> >
> > So far I have captured ftrace logs for 1G, 100G, 1T and 10T sizes. I
> > noticed that for the first three sizes the log is about 1.5M long (2M
> > peak), while 10T generates a 94M log. I could not retrieve a log for the
> > 17T case because "cat /sys ... /trace" enters D state.
> >
> > I would appreciate any help, because I have given up on analysing the
> > ftrace logs. xfs_vn_fallocate is covered by about 11k lines in the 1.5M
> > log, while there are about 163k lines in the 94M log. All I could see is a
> > possible relationship between the time spent in the xfs_vn_fallocate
> > subfunctions and the requested space.
> >
> > Box details:
> > 16 Hitachi 2TB drives (backplane connected), dm, one 25T LVM LUN,
> > kernel 2.6.31.5; more recent kernels and XFS versions were not tested.
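A sketch of how such a function-graph trace is typically captured with ftrace (the exact commands used in the report are an assumption; the paths assume debugfs is mounted at /sys/kernel/debug and root privileges):

```shell
# Trace xfs_vn_fallocate and its callees with the function_graph tracer.
T=/sys/kernel/debug/tracing
echo function_graph > "$T/current_tracer"
echo xfs_vn_fallocate > "$T/set_graph_function"
echo 1 > "$T/tracing_on"
# ... run the posix_fallocate test case here ...
echo 0 > "$T/tracing_on"
cp "$T/trace" /tmp/fallocate.trace      # snapshot the trace ring buffer
wc -c /tmp/fallocate.trace              # compare log sizes across request sizes
```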
>
> 32 or 64 bit kernel?
Sorry, I meant 64-bit.
> What is the size of the XFS filesystem on the 25TB
> LVM LUN against which you're running posix_fallocate?
XFS occupies the whole LUN (i.e. 25 TB).
> The reason I ask is
> that XFS has a 16TB per filesystem limitation on 32 bit kernels. I can
> only assume that your XFS filesystem is larger than 16TB since you're
> attempting to posix_fallocate 16TB. But, it's best to ask for confirmation
> rather than assume, especially given that your problem is appearing near
> that magical 16TB boundary.
Sure, I see. I use a 64-bit kernel by default.
Regards,
Krzysztof
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
Thread overview: 15+ messages
2010-05-07 8:22 posix_fallocate Krzysztof Błaszkowski
2010-05-07 9:23 ` posix_fallocate Stan Hoeppner
2010-05-07 9:48 ` Krzysztof Błaszkowski [this message]
2010-05-07 10:07 ` posix_fallocate Krzysztof Błaszkowski
2010-05-07 10:42 ` posix_fallocate Stan Hoeppner
2010-05-07 10:56 ` posix_fallocate Krzysztof Błaszkowski
2010-05-07 16:26 ` posix_fallocate Eric Sandeen
2010-05-07 16:53 ` posix_fallocate Eric Sandeen
2010-05-07 22:16 ` posix_fallocate Dave Chinner
2010-05-10 7:11 ` posix_fallocate Krzysztof Błaszkowski
2010-05-10 14:39 ` posix_fallocate Eric Sandeen
2010-05-10 18:17 ` posix_fallocate Krzysztof Błaszkowski
2010-05-10 18:45 ` posix_fallocate Eric Sandeen
2010-05-11 14:20 ` posix_fallocate Krzysztof Błaszkowski
2010-05-11 14:54 ` posix_fallocate Eric Sandeen