From: Christopher Smith <csmith@nighthawkrad.net>
To: Paul Clements <paul.clements@steeleye.com>, linux-raid@vger.kernel.org
Subject: Re: Problems with software RAID + iSCSI or GNBD
Date: Wed, 29 Jun 2005 14:42:33 +1000
Message-ID: <42C226B9.4000504@nighthawkrad.net>
In-Reply-To: <42C1F54B.1060405@steeleye.com>

Paul Clements wrote:
> Christopher Smith wrote:
> 
>>  stitch it together into a RAID1.  So, it looks like this:
>>
>>              "Concentrator"
>>                 /dev/md0
>>                  /     \
>>              GigE       GigE
>>                /         \
>>     "Disk node 1"       "Disk node 2"
>>
>> So far I've tried using iSCSI and GNBD as the "back end" to make the 
>> disk space in the nodes visible to the concentrator.  I've had two 
>> problems, one unique to using iSCSI and the other common to both.
> 
> 
> Personally, I wouldn't mess with iSCSI or GNBD. You don't need GNBD in 
> this scenario anyway; simple nbd (which is in the mainline kernel...get 
> the userland tools at sourceforge.net/projects/nbd) will do just fine, 
> and I'd be willing to bet that it is more stable and faster...

I'd briefly tried nbd, but decided to look elsewhere since it needed too 
much manual configuration (no included rc script, and the /dev nodes 
apparently have to be created by hand - yes, I'm lazy).
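
(For anyone else who hits the same thing: creating the nodes by hand is 
only a couple of commands.  A minimal sketch, assuming nbd's block major 
is still 43 on your kernel - check Documentation/devices.txt to be sure:)

   # create /dev/nbd0 and /dev/nbd1 by hand (nbd block major is 43)
   for i in 0 1; do
       mknod /dev/nbd$i b 43 $i
   done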

I've just finished trying NBD now and it seems to solve both my problems 
- rebuild speed is a healthy 40MB/sec+, and failures are dealt with 
"properly" (i.e. the md device drops into degraded mode if a component 
nbd suddenly disappears).  This looks like it might be a goer for the 
disk node/RAID-over-network back end.
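
For the archives, the back end is roughly the following (hostnames, the 
port number and the exported devices below are just placeholders, and 
this is the old port-based syntax from the sourceforge nbd tools, so 
adjust to taste):

   # on each disk node: export the local volume over TCP
   nbd-server 2000 /dev/sdb1

   # on the concentrator: attach both exports, then mirror them
   nbd-client disknode1 2000 /dev/nbd0
   nbd-client disknode2 2000 /dev/nbd1
   mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nbd0 /dev/nbd1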

On the "front end", however, we have to use iSCSI because we're planning 
on doling the aggregate disk space out to a mix of platforms (some of 
them potentially clustered in the future) so they can re-share the space 
to "client" machines.  The IETD iSCSI target seems pretty solid, though, 
so I'm not really concerned about that side of things.
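
(In case it's useful to anyone, the ietd.conf stanza for exporting the md 
device is about as simple as it gets - the target name below is made up, 
and Type=fileio is just what I'm assuming here, not a recommendation:)

   # /etc/ietd.conf on the concentrator
   Target iqn.2005-06.com.example:storage.md0
       Lun 0 Path=/dev/md0,Type=fileio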

> Well, that's the fault of either iSCSI or GNBD. md/raid1 over nbd works 
> flawlessly in this scenario on 2.6 kernels (for 2.4, you'll need a 
> special patch -- ask me for it, if you want).

Yeah, I fixed that problem (at least with iSCSI - haven't tried with 
GNBD).  It was a PEBKAC issue :).
