From: Dave Chinner <david@fromorbit.com>
To: Paul Anderson <pha@umich.edu>
Cc: Christoph Hellwig <hch@infradead.org>, xfs-oss <xfs@oss.sgi.com>
Subject: Re: I/O hang, possibly XFS, possibly general
Date: Sat, 4 Jun 2011 13:15:37 +1000 [thread overview]
Message-ID: <20110604031537.GF561@dastard> (raw)
In-Reply-To: <BANLkTi=FjSzSZJXGofVjtiUe2ZNvki2R-Q@mail.gmail.com>
On Fri, Jun 03, 2011 at 11:59:02AM -0400, Paul Anderson wrote:
> On Thu, Jun 2, 2011 at 9:39 PM, Dave Chinner <david@fromorbit.com> wrote:
> > On Thu, Jun 02, 2011 at 08:42:47PM -0400, Christoph Hellwig wrote:
> >> On Thu, Jun 02, 2011 at 10:42:46AM -0400, Paul Anderson wrote:
> >> > This morning, I had a symptom of a I/O throughput problem in which
> >> > dirty pages appeared to be taking a long time to write to disk.
> >> >
> >> > The system is a large x64 192GiB dell 810 server running 2.6.38.5 from
> >> > kernel.org - the basic workload was data intensive - concurrent large
> >> > NFS (with high metadata/low filesize), rsync/lftp (with low
> >> > metadata/high file size) all working in a 200TiB XFS volume on a
> >> > software MD raid0 on top of 7 software MD raid6, each w/18 drives. I
> >> > had mounted the filesystem with inode64,largeio,logbufs=8,noatime.
> >>
> >> A few comments on the setup before trying to analyze what's going on in
> >> detail. I'd absolutely recommend an external log device for this setup,
> >> that is, buy another two fast but small disks, or take two existing ones
> >> and use a RAID 1 for the external log device. This will speed up
> >> anything log intensive, which both NFS and rsync workloads very much are.
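[A sketch of what the external-log setup suggested above might look like. The device names (/dev/sdy, /dev/sdz for the two small disks, /dev/md10 for the log mirror, /dev/md0 for the data volume) and the log size are hypothetical, not from the thread:]

```shell
# Mirror the two small, fast disks to hold the external log
mdadm --create /dev/md10 --level=1 --raid-devices=2 /dev/sdy /dev/sdz

# The log device can only be chosen at mkfs time (xfs_growfs cannot
# move a log), so it has to be specified when the filesystem is made:
mkfs.xfs -l logdev=/dev/md10,size=128m /dev/md0

# ...and named again at every mount:
mount -o logdev=/dev/md10,inode64,largeio,noatime /dev/md0 /mnt/data
```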
> >>
> >> Second, since you have two such different workloads, split them into
> >> multiple volumes if you can, so that they don't interfere with each
> >> other.
> >>
> >> Third, a RAID0 on top of RAID6 volumes sounds like pretty much the
> >> worst case for almost any type of I/O: in the worst case, even
> >> relatively small I/Os end up hitting all of the disks. I think you'd
> >> be much better off with a simple linear concatenation of the RAID6
> >> devices, even if you can't split them into multiple filesystems.
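[The "small I/O hits all the disks" point can be made concrete with a little arithmetic on the geometry described in the thread: a RAID0 over 7 RAID6 arrays of 18 drives each. The chunk size is an assumed value; the thread does not give one:]

```python
# Stripe geometry for the setup in the thread: 7 x RAID6(18 drives),
# striped together by RAID0.  Chunk size is an assumption.
chunk_kib = 512
drives_per_raid6 = 18
data_disks = drives_per_raid6 - 2           # RAID6 reserves 2 disks' worth for parity
raid6_stripe_kib = chunk_kib * data_disks   # one full stripe of a single RAID6
n_raid6 = 7
raid0_stripe_kib = raid6_stripe_kib * n_raid6  # one full stripe of the RAID0

print(data_disks, raid6_stripe_kib, raid0_stripe_kib)
# With these numbers, a single full-stripe write on the RAID0 spans
# 7 * 18 = 126 drives, and anything smaller than a full RAID6 stripe
# forces read-modify-write of parity on that array.
```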
> >>
> >> > The specific symptom was that 'sync' hung, a dpkg command hung
> >> > (presumably trying to issue fsync), and experimenting with "killall
> >> > -STOP" or "kill -STOP" of the workload jobs didn't let the system
> >> > drain I/O enough to finish the sync. I probably did not wait long
> >> > enough, however.
> >>
> >> It really sounds like you're simply killing the MD setup with a
> >> lot of log I/O that goes to all the devices.
> >
> > And this is one of the reasons why I originally suggested that
> > storage at this scale really should be using hardware RAID with
> > large amounts of BBWC to isolate the backend from such problematic
> > IO patterns.
>
> > Dave Chinner
> > david@fromorbit.com
> >
>
> Good HW RAID cards are on order - seems to be backordered at least a
> few weeks now at CDW. Got the batteries immediately.
>
> That will give more options for test and deployment.
>
> Not sure what I can do about the log - the man page says xfs_growfs
> doesn't implement log moving. I can rebuild the filesystems, but for
> the one mentioned in this thread, that will take a long time.
Once you have BBWC, the log IO gets aggregated into stripe-width
writes to the back end (because it is always sequential IO), so it's
generally not a significant problem for HW RAID subsystems.
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
Thread overview: 25+ messages
2011-06-02 14:42 I/O hang, possibly XFS, possibly general Paul Anderson
2011-06-02 16:17 ` Stan Hoeppner
2011-06-02 18:56 ` Peter Grandi
2011-06-02 21:24 ` Paul Anderson
2011-06-02 23:59 ` Phil Karn
2011-06-03 0:39 ` Dave Chinner
2011-06-03 2:11 ` Phil Karn
2011-06-03 2:54 ` Dave Chinner
2011-06-03 22:28 ` Phil Karn
2011-06-04 3:12 ` Dave Chinner
2011-06-03 22:19 ` Peter Grandi
2011-06-06 7:29 ` Michael Monnerie
2011-06-07 14:09 ` Peter Grandi
2011-06-08 5:18 ` Dave Chinner
2011-06-08 8:32 ` Michael Monnerie
2011-06-03 0:06 ` Phil Karn
2011-06-03 0:42 ` Christoph Hellwig
2011-06-03 1:39 ` Dave Chinner
2011-06-03 15:59 ` Paul Anderson
2011-06-04 3:15 ` Dave Chinner [this message]
2011-06-04 8:14 ` Stan Hoeppner
2011-06-04 10:32 ` Dave Chinner
2011-06-04 12:11 ` Stan Hoeppner
2011-06-04 23:10 ` Dave Chinner
2011-06-05 1:31 ` Stan Hoeppner