From: Eric Sandeen <sandeen@redhat.com>
To: paul.chavent@fnac.net
Cc: linux-ext4@vger.kernel.org
Subject: Re: What represent 646345728 bytes
Date: Mon, 01 Feb 2010 09:06:12 -0600
Message-ID: <4B66EDE4.6000204@redhat.com>
In-Reply-To: <7618835.18201265033304935.JavaMail.www@wsfrf1112>
paul.chavent@fnac.net wrote:
> Hi
>
> I'm writing an application that writes a stream of fixed-size
> pictures to a disk.
>
> My app runs on a self-integrated GNU/Linux system (based on a
> 2.6.31.6-rt19 kernel).
>
> My media is formatted with
>
> # mke2fs -t ext4 -L DATA -O large_file,^has_journal,extent -v
> /dev/sda3 [...]
>
> And it is mounted with
>
> # mount -t ext4 /dev/sda3 /var/data/
> EXT4-fs (sda3): no journal
> EXT4-fs (sda3): delayed allocation enabled
> EXT4-fs: file extents enabled
> EXT4-fs: mballoc enabled
> EXT4-fs (sda3): mounted filesystem without journal
>
> My app opens the file with "O_WRONLY | O_CREAT | O_TRUNC | O_SYNC |
> O_DIRECT" flags.
>
> Each write takes ~4.2ms for 304K (which is very good, since that
> matches the write bandwidth of my hard drive). There is a write every
> 100ms.
So you are doing streaming writes in 304k chunks? Or each file
gets 304K? It sounds like you are writing multiple 304k pictures
to a single file, right?
> But exactly every 646345728 bytes, the write takes ~46ms.
Do you mean every 304K written to the fs, or to the file?
304k doesn't divide evenly into 646345728 bytes so I'm not sure...
> I had the same problem with ext2, but approximately every 620M (the
> amount wasn't as constant).
>
> Also, I tried posix_fallocate() with e.g. 2G, and the first write
> overhead comes at that limit. I would like to avoid preallocating.
Preallocation -should- be helpful in general, so if it's not ...
> I suppose it is a kind of block allocation issue, but I would like to
> have your opinion:
> - what exactly is this amount of bytes?
> - can I do something to get a "constant" write time from the
>   user-space point of view?
> - is it a "problem" only for me?
It might be interesting to know what the geometry of your filesystem is,
dumpe2fs -h would provide that. Also if you could mock up a testcase
that demonstrates the behavior, it would help in debugging.
Actually... as a first step I would redo the test with
-O ^uninit_bg at mkfs time, to see if the bitmap init is causing the
delay.
-Eric
> Thank you for reading.
>
> Paul.
Thread overview: 8+ messages
2010-02-01 14:08 What represent 646345728 bytes paul.chavent
2010-02-01 15:06 ` Eric Sandeen [this message]
2010-02-01 17:20 ` Aneesh Kumar K. V
2010-02-01 17:34 ` Eric Sandeen
-- strict thread matches above, loose matches on Subject: below --
2010-02-01 17:06 paul.chavent
2010-02-01 20:07 ` Eric Sandeen
2010-02-01 22:36 ` Eric Sandeen
2010-02-01 23:01 ` Andreas Dilger