public inbox for linux-kernel@vger.kernel.org
From: Nick Piggin <npiggin@suse.de>
To: Ben Gamari <bgamari.foss@gmail.com>, tytso@mit.edu
Cc: linux-kernel@vger.kernel.org, Olly Betts <olly@survex.com>,
	martin f krafft <madduck@madduck.net>
Subject: Re: Poor interactive performance with I/O loads with fsync()ing
Date: Wed, 17 Mar 2010 15:53:50 +1100
Message-ID: <20100317045350.GA2869@laptop>
In-Reply-To: <4b9fa440.12135e0a.7fc8.ffffe745@mx.google.com>

Hi,

On Tue, Mar 16, 2010 at 08:31:12AM -0700, Ben Gamari wrote:
> Hey all,
> 
> Recently I started using the Xapian-based notmuch mail client for everyday
> use.  One of the things I was quite surprised by after the switch was the
> incredible hit in interactive performance that is observed during database
> updates. Things are particularly bad during runs of 'notmuch new,' which scans
> the file system looking for new messages and adds them to the database.
> Specifically, the worst of the performance hit appears to occur when the
> database is being updated.
> 
> During these periods, even small chunks of I/O can become minute-long ordeals.
> It is common for latencytop to show 30 second long latencies for page faults
> and writing pages.  Interactive performance is absolutely abysmal, with other
> unrelated processes feeling horrible latencies, causing media players,
> editors, and even terminals to grind to a halt.
> 
> Despite the system being clearly I/O bound, iostat shows pitiful disk
> throughput (700 kByte/second read, 300 kByte/second write). Certainly this poor
> performance can, at least to some degree, be attributed to the fact that
> Xapian uses fdatasync() to ensure data consistency. That being said, it seems
> like Xapian's page usage causes horrible thrashing, hence the performance hit
> on unrelated processes.

Where are the unrelated processes waiting? Can you get a sample of
several backtraces?  (/proc/<pid>/stack should do it)
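Something like this untested sketch would grab a few samples (the
sample_stacks helper name and 1-second interval are just for
illustration; reading /proc/<pid>/stack generally needs root):

```shell
# Untested sketch: sample the kernel stack of a process a few times.
# "sample_stacks" is a made-up helper name; reading /proc/<pid>/stack
# usually requires root, so unreadable stacks are just noted as such.
sample_stacks() {
    pid=$1
    n=${2:-5}
    i=1
    while [ "$i" -le "$n" ]; do
        echo "=== sample $i ==="
        cat "/proc/$pid/stack" 2>/dev/null || echo "(stack unreadable)"
        sleep 1
        i=$((i + 1))
    done
}

# Usage: sample the current shell twice, one second apart.
sample_stacks "$$" 2
```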


> Moreover, the hit on unrelated processes is so bad
> that I would almost suspect that swap I/O is being serialized by fsync() as
> well, despite being on a separate swap partition beyond the control of the
> filesystem.

It shouldn't be, until it reaches the bio layer. If it is on the same
block device, it will still fight for access. It could also be blocking
on dirty data thresholds or page reclaim, though -- writeback and
reclaim could easily be getting slowed down by the fsync activity.

Swapping tends to cause fairly nasty disk access patterns; combined with
fsync it could be pretty unavoidable.
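For reference, the relevant knobs and counters can be eyeballed from
procfs while the workload runs (values obviously vary per system):

```shell
# Dirty-data thresholds: percent of memory at which writers get
# throttled and at which background writeback kicks in.
grep -H . /proc/sys/vm/dirty_ratio /proc/sys/vm/dirty_background_ratio

# How many pages are currently dirty or under writeback.
grep -E '^(nr_dirty|nr_writeback) ' /proc/vmstat
```
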

> 
> Xapian, however, is far from the first time I have seen this sort of
> performance cliff. Rsync, which also uses fsync(), can also trigger this sort
> of thrashing during system backups, as can rdiff. slocate's updatedb
> absolutely kills interactive performance as well.
> 
> Issues similar to this have been widely reported[1-5] in the past, and despite
> many attempts[5-8] within both the I/O and memory management subsystems to fix
> it, the problem certainly remains. I have tried reducing swappiness from 60 to
> 40, with some small improvement, and it has been reported[20] that these sorts
> of symptoms can be negated through use of memory control groups to prevent
> interactive process pages from being evicted.

So the workload is causing quite a lot of swapping as well? How much
pagecache do you have? It could be that you have too much pagecache and
it is pushing out anonymous memory too easily, or you might have too
little pagecache causing suboptimal writeout patterns (possibly writeout
from page reclaim rather than asynchronous dirty page cleaner threads,
which can really hurt).
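The pagecache/swap picture and the current swappiness setting can be
pulled the usual way (untested one-liners):

```shell
# Pagecache, dirty, writeback and swap-cache figures.
grep -E '^(MemTotal|Cached|Dirty|Writeback|SwapCached):' /proc/meminfo

# Current swappiness; lowering it makes the VM less eager to push
# out anonymous pages in favour of pagecache.
cat /proc/sys/vm/swappiness
# echo 40 > /proc/sys/vm/swappiness   # as root, to lower it
```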

Thanks,
Nick



Thread overview: 32+ messages
2010-03-16 15:31 Poor interactive performance with I/O loads with fsync()ing Ben Gamari
2010-03-17  1:24 ` tytso
2010-03-17  3:18   ` Ben Gamari
2010-03-17  3:30     ` tytso
2010-03-17  4:31       ` Ben Gamari
2010-03-26  3:16         ` Ben Gamari
2010-03-17  4:53 ` Nick Piggin [this message]
2010-03-17  9:37   ` Ingo Molnar
2010-03-26  3:31     ` Ben Gamari
2010-04-09 15:21     ` Ben Gamari
2010-03-26  3:28   ` Ben Gamari
2010-03-23 19:51 ` Jesper Krogh
2010-03-26  3:13 ` Ben Gamari
2010-03-28  1:20 ` Ben Gamari
2010-03-28  1:29   ` Ben Gamari
2010-03-28  3:42   ` Arjan van de Ven
2010-03-28 14:06     ` Ben Gamari
2010-03-28 22:08       ` Andi Kleen
2010-04-09 14:56         ` Ben Gamari
2010-04-11 15:03           ` Avi Kivity
2010-04-11 16:35             ` Ben Gamari
2010-04-11 17:20               ` Andi Kleen
2010-04-11 18:16             ` Thomas Gleixner
2010-04-11 18:42               ` Andi Kleen
2010-04-11 21:54                 ` Thomas Gleixner
2010-04-11 23:43                   ` Hans-Peter Jansen
2010-04-12  0:22               ` Dave Chinner
2010-04-14 18:40                 ` Ric Wheeler
  -- strict thread matches above, loose matches on Subject: below --
2010-03-23 11:28 Pawel S
2010-03-23 13:27 ` Jens Axboe
2010-03-26  3:35   ` Ben Gamari
2010-03-30 10:46   ` Pawel S
