From: Ben Gamari <bgamari.foss@gmail.com>
To: Nick Piggin <npiggin@suse.de>, tytso@mit.edu
Cc: linux-kernel@vger.kernel.org, Olly Betts <olly@survex.com>,
martin f krafft <madduck@madduck.net>
Subject: Re: Poor interactive performance with I/O loads with fsync()ing
Date: Thu, 25 Mar 2010 20:28:25 -0700 (PDT)
Message-ID: <4bac29d9.9d15f10a.42df.183e@mx.google.com>
In-Reply-To: <20100317045350.GA2869@laptop>
On Wed, 17 Mar 2010 15:53:50 +1100, Nick Piggin <npiggin@suse.de> wrote:
> Where are the unrelated processes waiting? Can you get a sample of
> several backtraces? (/proc/<pid>/stack should do it)
>
I wish. One of the most frustrating characteristics of this issue is how
difficult it is to measure. By the time processes begin blocking, it's already
far too late to open a terminal and cat /proc/<pid>/stack to a file; by the
time the terminal has opened, tens of seconds have passed and things have
started to return to normal.
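For what it's worth, a workaround I've been considering (an untested sketch;
the log path is arbitrary) is to leave a sampler running *before* provoking
the stall, so that backtraces of blocked tasks are already on disk by the
time anyone notices the hang:

```shell
#!/bin/sh
# One sampling pass: append the kernel stacks of all uninterruptible
# (D-state) tasks to a log. Run it beforehand in a loop as root, e.g.:
#   while :; do ./sample-stacks.sh; sleep 1; done
LOG=${LOG:-/tmp/stall-stacks.log}   # arbitrary log location
: >> "$LOG"                         # make sure the log exists

for pid in /proc/[0-9]*; do
    # State is the first letter of the "State:" line in /proc/<pid>/status
    state=$(sed -n 's/^State:[[:space:]]*\(.\).*/\1/p' "$pid/status" 2>/dev/null)
    if [ "$state" = "D" ]; then
        printf '=== %s pid %s (%s)\n' "$(date +%s)" "${pid#/proc/}" \
            "$(cat "$pid/comm" 2>/dev/null)" >> "$LOG"
        cat "$pid/stack" >> "$LOG" 2>/dev/null
    fi
done
```

On a healthy system each pass finds nothing; during a stall the log should
accumulate exactly the backtraces asked for above.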
>
> > Moreover, the hit on unrelated processes is so bad
> > that I would almost suspect that swap I/O is being serialized by fsync() as
> > well, despite being on a separate swap partition beyond the control of the
> > filesystem.
>
> It shouldn't be, until it reaches the bio layer. If it is on the same
> block device, it will still fight for access. It could also be blocking
> on dirty data thresholds, or page reclaim though -- writeback and
> reclaim could easily be getting slowed down by the fsync activity.
>
Hmm, this sounds interesting. Is there a way to monitor writeback throughput?
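One crude approach I can think of (a sketch, assuming /proc/vmstat is
available; counts are in pages, typically 4 KiB each) is simply to poll the
dirty and writeback counters and watch for a backlog building up:

```shell
#!/bin/sh
# Poll /proc/vmstat once per second and print the dirty/writeback page
# counts, to see whether fsync traffic is building a writeback backlog.
count=${1:-3}   # number of samples to take
i=0
while [ "$i" -lt "$count" ]; do
    dirty=$(awk '$1 == "nr_dirty" {print $2}' /proc/vmstat)
    wb=$(awk '$1 == "nr_writeback" {print $2}' /proc/vmstat)
    printf '%s nr_dirty=%s nr_writeback=%s\n' "$(date +%T)" "$dirty" "$wb"
    i=$((i + 1))
    if [ "$i" -lt "$count" ]; then sleep 1; fi
done
```

A steadily growing nr_dirty with nr_writeback pinned high would suggest the
flusher threads cannot keep up with the fsync-driven load.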
> Swapping tends to cause fairly nasty disk access patterns, combined with
> fsync it could be pretty unavoidable.
>
This is definitely a possibility. However, it seems to me that swap I/O should
be at least mildly favored over other I/O by the I/O scheduler. That being
said, I can certainly see how it would be difficult to implement such a
heuristic fairly, so that it doesn't starve ordinary filesystem access during
a thrashing spree.
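Relatedly, the swappiness knob I mention in the quoted text below is easy to
inspect and adjust at runtime (sketch; writing requires root, and 40 is
simply the value I tried):

```shell
#!/bin/sh
# Inspect the current vm.swappiness value; higher values make the VM more
# willing to swap anonymous pages out in favor of keeping page cache.
cur=$(cat /proc/sys/vm/swappiness)
echo "vm.swappiness = $cur"
# To lower it (as root):
#   sysctl -w vm.swappiness=40
# or persistently, in /etc/sysctl.conf:
#   vm.swappiness = 40
```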
> >
> > Xapian, however, is far from the first time I have seen this sort of
> > performance cliff. Rsync, which also uses fsync(), can trigger this sort
> > of thrashing during system backups, as can rdiff. slocate's updatedb
> > absolutely kills interactive performance as well.
> >
> > Issues similar to this have been widely reported[1-5] in the past, and despite
> > many attempts[5-8] within both the I/O and memory management subsystems to fix
> > it, the problem certainly remains. I have tried reducing swappiness from 60 to
> > 40, with some small improvement, and it has been reported[20] that these sorts
> > of symptoms can be mitigated through use of memory control groups to prevent
> > interactive process pages from being evicted.
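(As an aside, the memory-cgroup trick from [20] might look roughly like the
sketch below. The group name, cgroup-v1 mount path, and 512 MiB limit are all
illustrative; it only prints the commands unless APPLY=yes is set and it is
run as root.)

```shell
#!/bin/sh
# Sketch of the memcg approach from [20]: cap a batch job's memory so it
# cannot evict the interactive session's working set. cgroup-v1 layout
# assumed, as on kernels of this era; paths and limit are illustrative.
APPLY=${APPLY:-no}                  # dry run by default
CG=/sys/fs/cgroup/memory/batch      # hypothetical group for batch jobs
LIMIT=$((512 * 1024 * 1024))        # 512 MiB cap, tune to taste

do_cmd() {
    # Print the command in dry-run mode, execute it when APPLY=yes.
    if [ "$APPLY" = "yes" ]; then eval "$1"; else echo "would run: $1"; fi
}

do_cmd "mkdir -p $CG"
do_cmd "echo $LIMIT > $CG/memory.limit_in_bytes"
do_cmd "echo \$\$ > $CG/tasks"      # move the current shell into the group
# ...then start the batch job (rsync, updatedb, ...) from this shell.
```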
>
> So the workload is causing quite a lot of swapping as well? How much
> pagecache do you have? It could be that you have too much pagecache and
> it is pushing out anonymous memory too easily, or you might have too
> little pagecache causing suboptimal writeout patterns (possibly writeout
> from page reclaim rather than asynchronous dirty page cleaner threads,
> which can really hurt).
>
As far as I can tell, the workload should fit in memory without a problem.
This machine has 4 GB of memory, of which 2.8 GB is currently page cache,
which seems high. I've included meminfo below. I can completely see how an
overly aggressive page cache could produce this sort of behavior.
- Ben
MemTotal:        4048068 kB
MemFree:           47232 kB
Buffers:              48 kB
Cached:          2774648 kB
SwapCached:         1148 kB
Active:          2353572 kB
Inactive:        1355980 kB
Active(anon):    1343176 kB
Inactive(anon):   342644 kB
Active(file):    1010396 kB
Inactive(file):  1013336 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:       4883756 kB
SwapFree:        4882532 kB
Dirty:             24736 kB
Writeback:             0 kB
AnonPages:        933820 kB
Mapped:            88840 kB
Shmem:            750948 kB
Slab:             150752 kB
SReclaimable:     121404 kB
SUnreclaim:        29348 kB
KernelStack:        2672 kB
PageTables:        31312 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     6907788 kB
Committed_AS:    2773672 kB
VmallocTotal:   34359738367 kB
VmallocUsed:      364080 kB
VmallocChunk:   34359299100 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:        8552 kB
DirectMap2M:     4175872 kB
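A quick cross-check of these figures (my arithmetic, not from the thread):
the readily reclaimable file cache is roughly Cached minus Shmem, and should
match the file LRU lists. In other words, the 2.8 GB of "cache" is really
about 2 GB of file pages plus 0.75 GB of tmpfs/shmem that cannot simply be
dropped:

```shell
#!/bin/sh
# Cross-check the meminfo numbers quoted above (all values in kB).
cached=2774648; shmem=750948
active_file=1010396; inactive_file=1013336
echo "Cached - Shmem        = $((cached - shmem)) kB"
echo "Active+Inactive(file) = $((active_file + inactive_file)) kB"
# The two figures agree to within a few tens of kB, as expected.
```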