public inbox for linux-xfs@vger.kernel.org
From: Roger Oberholtzer <roger@opq.se>
To: xfs@oss.sgi.com
Subject: Re: Questions about XFS
Date: Wed, 12 Jun 2013 15:48:47 +0200
Message-ID: <1371044927.16366.14.camel@acme.pacific>
In-Reply-To: <51B865B2.5030208@hardwarefreak.com>

On Wed, 2013-06-12 at 07:12 -0500, Stan Hoeppner wrote:
> On 6/12/2013 3:26 AM, Roger Oberholtzer wrote:
> ...
> > I have an application that is streaming data to an XFS disk at a
> > sustained 25 MB/sec. This is well below what the hardware supports. The
> > application does fopen/fwrite/fclose (no active flushing or syncing).
> 
> Buffered IO.
> 
> > I see that as my application writes data (the only process writing the
> > only open file on the disk), the system cache grows and grows. Here is
> > the unusual part: periodically, writes take some number of seconds to
> > complete, rather than the typical <50 msecs. The increased time seems
> > to correspond to the increasing size of the page cache.
> 
> Standard Linux buffered IO behavior.  Note this is not XFS specific.

That is correct. But users of XFS, like anyone else, may experience this
and may have a solution. And it seemed related to the question in the
original post.

> > If I do:
> > 
> > echo 1 > /proc/sys/vm/drop_caches
> 
> Dumps the page cache forcing your buffered writes to disk.

The interesting thing is that when this is done, and the 3 or 4 GB of
cache goes away, it seems rather quick. As if the pages do not contain
data that must be written. But if that is the case, why the increasingly
long periodic write delays as the cache gets bigger?

> > while the application is running, then the writes do not occasionally
> > take longer. Until the cache grows again, and I do the echo again.
> 
> Which seems a bit laborious.
> 
> > I am sure I must be misinterpreting what I see.
> 
> Nope.  The Linux virtual memory system has behaved this way for quite
> some time.  You can tweak how long IOs stay in cache.  See dirty_* at
> https://www.kernel.org/doc/Documentation/sysctl/vm.txt
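For the record, the byte-based knobs documented there do bound how much
dirty data can pile up before writeback kicks in. A sketch, with values
that are purely illustrative, not tuned for any particular box:

```shell
# Start background writeback at ~48 MB of dirty data and throttle
# writers at ~192 MB, instead of the ratio-based defaults (10%/20% of
# RAM).  Setting a *_bytes knob disables its *_ratio counterpart.
echo 50331648  > /proc/sys/vm/dirty_background_bytes
echo 201326592 > /proc/sys/vm/dirty_bytes
```

At 25 MB/sec that would start writeback after a couple of seconds' worth
of data instead of after gigabytes.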

> 
> Given the streaming nature you describe, have you looked at possibly
> using O_DIRECT?

I would really like to avoid this if possible. The data is not in
uniform chunks, so it would need to be buffered in the app to make it
so. The system can obviously keep up with the data rate - as long as it
does not get greedy with all that RAM just sitting there...
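To make the objection concrete, here is a minimal sketch of the staging
layer O_DIRECT would force on the app: variable-sized records packed
into fixed-size blocks, so the kernel only ever sees whole, uniform
writes. It is in Python for brevity (the real app would do the same in
C), and BLOCK, BlockPacker and the sizes are all hypothetical:

```python
BLOCK = 4096  # illustrative O_DIRECT block/alignment unit


class BlockPacker:
    """Pack variable-sized records into fixed BLOCK-sized writes."""

    def __init__(self, out):
        self.out = out              # a binary file object
        self.buf = bytearray()      # staging buffer for partial blocks
        self.blocks_flushed = 0

    def append(self, record):
        """Stage one record; write out every full block it completes."""
        self.buf += record
        while len(self.buf) >= BLOCK:
            self.out.write(self.buf[:BLOCK])
            del self.buf[:BLOCK]
            self.blocks_flushed += 1

    def close(self):
        """Zero-pad the tail to a full block and write it out."""
        if self.buf:
            self.buf += b"\0" * (BLOCK - len(self.buf))
            self.out.write(self.buf)
            self.buf.clear()
            self.blocks_flushed += 1
```

It is exactly this padding and double-copying of every record that I
would rather not add to the write path.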

I have been thinking that I may need to do an occasional
fflush/fdatasync to be sure the write cache stays reasonably small.
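Something like the following per-interval flush, sketched in Python
with illustrative sizes (fflush(3) plus fdatasync(2) would be the C
equivalents of f.flush() plus os.fdatasync()):

```python
import os


def stream_write(path, total, chunk=4096, flush_every=256 * 1024):
    """Write `total` bytes in `chunk`-sized buffered writes, pushing
    dirty data to disk every `flush_every` bytes so the page cache
    never accumulates gigabytes of unwritten pages."""
    written = since_flush = 0
    with open(path, "wb") as f:
        while written < total:
            n = min(chunk, total - written)
            f.write(b"\0" * n)
            written += n
            since_flush += n
            if since_flush >= flush_every:
                f.flush()                 # drain the user-space buffer
                os.fdatasync(f.fileno())  # force dirty pages to disk
                since_flush = 0
        f.flush()
        os.fdatasync(f.fileno())          # flush the final partial interval
    return written
```

The flush interval would need tuning so the sync cost stays small
relative to the 25 MB/sec stream.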


Yours sincerely,

Roger Oberholtzer

Ramböll RST / Systems

Office: Int +46 10-615 60 20
Mobile: Int +46 70-815 1696
roger.oberholtzer@ramboll.se
________________________________________

Ramböll Sverige AB
Krukmakargatan 21
P.O. Box 17009
SE-104 62 Stockholm, Sweden
www.rambollrst.se


_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
