public inbox for linux-xfs@vger.kernel.org
From: Christoph Hellwig <hch@infradead.org>
To: Brian Candler <B.Candler@pobox.com>
Cc: xfs@oss.sgi.com
Subject: Re: Performance problem - reads slower than writes
Date: Tue, 31 Jan 2012 09:52:05 -0500
Message-ID: <20120131145205.GA6607@infradead.org>
In-Reply-To: <20120131103126.GA46170@nsrc.org>

On Tue, Jan 31, 2012 at 10:31:26AM +0000, Brian Candler wrote:
> - seek to inode (if the inode block isn't already in cache)
> - seek to extents table (if all extents don't fit in the inode)
> - seek(s) to the file contents, depending on how they're fragmented.
> 
> I am currently seeing somewhere between 7 and 8 seeks per file read, and
> this just doesn't seem right to me.

You don't just read a single file at a time but multiple ones, do
you?

Try playing with the following tweaks to get larger I/O to the disk:

 a) make sure you use the noop or deadline elevators
 b) increase /sys/block/sdX/queue/max_sectors_kb from its low default
 c) dramatically increase /sys/devices/virtual/bdi/<major>:<minor>/read_ahead_kb
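For example, assuming the data disk is /dev/sdb (block device 8:16; substitute
your own device names, and treat the values as illustrative starting points
rather than tuned settings):

  # a) switch this disk to the deadline elevator
  echo deadline > /sys/block/sdb/queue/scheduler

  # b) allow larger requests to be issued to the disk
  #    (KiB, bounded by max_hw_sectors_kb)
  echo 1024 > /sys/block/sdb/queue/max_sectors_kb

  # c) raise readahead well above the default (KiB)
  echo 16384 > /sys/devices/virtual/bdi/8:16/read_ahead_kb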

> OK. I saw "df -i" reporting a stupid number of available inodes, over 500
> million, so I decided to reduce it to 100 million.  But df -k didn't show
> any corresponding increase in disk space, so I'm guessing in xfs these are
> allocated on-demand, and the inode limit doesn't really matter?

Exactly. The number displayed is just an upper bound; inodes are
allocated on demand, not preallocated.
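For instance (hypothetical mount point, purely to illustrate: the cap comes
from the filesystem's imaxpct setting and no inode space is reserved up
front):

  # the inode total reported here is a cap, not preallocated space
  df -i /data

  # the cap can be changed later without consuming or freeing space
  xfs_growfs -m 25 /data    # allow at most 25% of the fs for inodes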


Thread overview: 30+ messages
2012-01-30 22:00 Performance problem - reads slower than writes Brian Candler
2012-01-31  2:05 ` Dave Chinner
2012-01-31 10:31   ` Brian Candler
2012-01-31 14:16     ` Brian Candler
2012-01-31 20:25       ` Dave Chinner
2012-02-01  7:29         ` Stan Hoeppner
2012-02-03 18:47         ` Brian Candler
2012-02-03 19:03           ` Christoph Hellwig
2012-02-03 21:01             ` Brian Candler
2012-02-03 21:17               ` Brian Candler
2012-02-05 22:50                 ` Dave Chinner
2012-02-05 22:43               ` Dave Chinner
2012-01-31 14:52     ` Christoph Hellwig [this message]
2012-01-31 21:52       ` Brian Candler
2012-02-01  0:50         ` Raghavendra D Prabhu
2012-02-01  3:59         ` Dave Chinner
2012-02-03 11:54       ` Brian Candler
2012-02-03 19:42         ` Stan Hoeppner
2012-02-03 22:10           ` Brian Candler
2012-02-04  9:59             ` Stan Hoeppner
2012-02-04 11:24               ` Brian Candler
2012-02-04 12:49                 ` Stan Hoeppner
2012-02-04 20:04                   ` Brian Candler
2012-02-04 20:44                     ` Joe Landman
2012-02-06 10:40                       ` Brian Candler
2012-02-07 17:30                       ` Brian Candler
2012-02-05  5:16                     ` Stan Hoeppner
2012-02-05  9:05                       ` Brian Candler
2012-01-31 20:06     ` Dave Chinner
2012-01-31 21:35       ` Brian Candler
