From: Brian Candler <B.Candler@pobox.com>
To: Stan Hoeppner <stan@hardwarefreak.com>
Cc: Christoph Hellwig <hch@infradead.org>, xfs@oss.sgi.com
Subject: Re: Performance problem - reads slower than writes
Date: Sun, 5 Feb 2012 09:05:02 +0000
Message-ID: <20120205090502.GA3961@nsrc.org>
In-Reply-To: <4F2E10C1.3040200@hardwarefreak.com>
On Sat, Feb 04, 2012 at 11:16:49PM -0600, Stan Hoeppner wrote:
> When you lose a disk in this setup, how do you rebuild the replacement
> drive? Do you simply format it and then move 3TB of data across GbE
> from other Gluster nodes?
Basically, yes. Reading a file causes the mirror to synchronise that
particular file. To force the whole brick to come back into sync, you run a
find+stat pass across the whole filesystem:
http://download.gluster.com/pub/gluster/glusterfs/3.2/Documentation/AG/html/sect-Administration_Guide-Managing_Volumes-Self_heal.html
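In practice that amounts to running something like the following against a
client-side mount (the path below is a placeholder, not our actual mount
point; the exact incantation is in the page above):

  find /mnt/gluster -noleaf -print0 | xargs --null stat >/dev/null

i.e. stat() every file so the replicate translator notices and repairs any
copies that are stale on the rebuilt brick.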
> Even if the disk is only 1/3rd full, such a
> restore seems like an expensive and time consuming operation. I'm
> thinking RAID has a significant advantage here.
Well, if you lose a 3TB disk in a RAID-1 type setup, the whole disk has to
be copied block by block, whether it contains data or not. With Gluster only
the files actually present get re-replicated, but they have to cross the
network, so the consideration there is network bandwidth.
I am building with 10GbE, but even 1GbE would be just about sufficient to
carry the peak bandwidth of a single one of these disks (a streaming dd read
of the raw disk gives 120MB/s at the start of the disk and 60MB/s at the
end).
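For anyone who wants to reproduce that sort of figure, a plain streaming
read is enough; the device name and sizes here are only placeholders:

  # sequential read near the start of the disk
  dd if=/dev/sdb of=/dev/null bs=1M count=4096 iflag=direct
  # and again near the end, by skipping most of the way in
  dd if=/dev/sdb of=/dev/null bs=1M count=4096 skip=2800000 iflag=direct

The drop from ~120MB/s to ~60MB/s is just the usual outer-to-inner-track
falloff.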
The whole manageability aspect certainly needs to be considered very
seriously, though. With RAID1 or RAID10, dealing with a failed disk is
pretty much pull-and-plug; with Gluster we'd be looking at having to mkfs
the new filesystem, mount it at the right place, and then run the self-heal
(roughly the sequence sketched below).
This will have to be weighed against the availability advantages of being
able to take an entire storage node out of service.
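To make that concrete, the replacement procedure would be roughly the
following (device names, mount points and mkfs options are placeholders and
would need checking against the actual setup):

  # replacement disk shows up as /dev/sdX
  mkfs.xfs -f /dev/sdX
  mount /dev/sdX /export/brick3
  # then trigger the self-heal from a client mount, as above:
  find /mnt/gluster -noleaf -print0 | xargs --null stat >/dev/null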
Regards,
Brian.