public inbox for netdev@vger.kernel.org
From: Evgeniy Polyakov <johnpol@2ka.mipt.ru>
To: Jan Engelhardt <jengelh@computergmbh.de>
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Subject: Re: Distributed storage. Mirroring to any number of devices.
Date: Tue, 14 Aug 2007 21:40:03 +0400	[thread overview]
Message-ID: <20070814174003.GA31716@2ka.mipt.ru> (raw)
In-Reply-To: <Pine.LNX.4.64.0708141919230.10912@fbirervta.pbzchgretzou.qr>

On Tue, Aug 14, 2007 at 07:20:49PM +0200, Jan Engelhardt (jengelh@computergmbh.de) wrote:
> >I'm pleased to announce second release of the distributed storage
> >subsystem, which allows to form a storage on top of remote and local
> >nodes, which in turn can be exported to another storage as a node to
> >form tree-like storages.
> 
> I'll be quick: what is it good for, are there any users, and what could
> it have to do with DRBD and all the other distribution storage talk
> that has come up lately (namely NBD w/Raid1)?

It has a number of advantages, outlined in the first release announcement
and on the project homepage, namely:
* non-blocking processing without busy loops (compared to iSCSI and NBD)
* small, pluggable architecture
* failover recovery (reconnect to remote target)
* autoconfiguration
* no additional allocations (not counting the network part) - device
	mapper needs at least two on the fast path
* very simple - compare it with iSCSI
* a storage can be formed on top of remote nodes and be exported
	simultaneously (iSCSI is peer-to-peer only; NBD
	requires device mapper, is synchronous, and wants a special
	userspace thread)
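The first point - event-driven processing instead of busy-looping - can be
illustrated with a small userspace sketch. This is a conceptual analogy in
Python, not DST's actual code (DST is kernel C); the socketpair stands in
for a connection to a remote node:

```python
# Conceptual analogy of non-blocking, event-driven I/O: instead of a
# busy loop polling each connection, we sleep until the kernel reports
# a socket as ready. All names here are illustrative, not DST's.
import selectors
import socket

def wait_for_ready(data=b"block-request"):
    a, b = socket.socketpair()          # stands in for a remote-node link
    a.setblocking(False)
    b.setblocking(False)
    sel = selectors.DefaultSelector()
    sel.register(a, selectors.EVENT_READ)
    b.send(data)                        # the "remote node" responds
    events = sel.select(timeout=1)      # sleep until readiness, no spinning
    (key, _mask), = events              # exactly one socket became readable
    payload = key.fileobj.recv(4096)
    sel.close()
    a.close()
    b.close()
    return payload

if __name__ == "__main__":
    print(wait_for_ready())
```

With many registered nodes the same single loop serves all of them,
which is why no per-connection userspace thread is needed.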

Compared to DRBD, which mirrors local requests to a remote node, and to
RAID on top of NBD, DST supports multiple remote nodes: any of them can be
removed and later returned to the storage without breaking the dataflow,
since the DST core automatically reconnects to failed remote nodes. A
detached device can be used just like an ordinary filesystem (as long as
it was not part of a linear storage, since in that case metadata is
spread across the nodes). It does not require special processes per
network connection; everything is handled automatically by the DST core
workers. A new device created on top of a mirror or linear combination of
others can itself be exported, which in turn can be formed on top of
another, and so on...
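The detach/reattach behaviour described above can be sketched as follows.
This is an illustrative model in Python, not DST's real implementation:
the class and method names are invented for the example, and the resync
via a write log is one possible strategy, not necessarily DST's:

```python
# Hedged sketch of a mirror that keeps serving writes while a node is
# detached, then brings the node back up to date when it returns.
# Names and the log-based resync strategy are illustrative only.
class MirrorNode:
    def __init__(self, name):
        self.name = name
        self.online = True
        self.blocks = {}          # lba -> data, the node's "disk"

    def write(self, lba, data):
        self.blocks[lba] = data

class Mirror:
    def __init__(self, nodes):
        self.nodes = nodes
        self.log = {}             # all writes, so returning nodes can resync

    def write(self, lba, data):
        self.log[lba] = data
        for n in self.nodes:
            if n.online:          # detached nodes are skipped; dataflow continues
                n.write(lba, data)

    def detach(self, node):
        node.online = False       # node is now usable standalone as a backup

    def reattach(self, node):
        for lba, data in self.log.items():   # replay writes the node missed
            node.write(lba, data)
        node.online = True

if __name__ == "__main__":
    n1, n2 = MirrorNode("node1"), MirrorNode("node2")
    m = Mirror([n1, n2])
    m.write(0, b"a")
    m.detach(n2)                  # n2 becomes a standalone snapshot
    m.write(1, b"b")              # n1 keeps serving; n2 misses this write
    m.reattach(n2)                # n2 catches up without stopping the mirror
    print(n1.blocks == n2.blocks)
```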

This was designed to allow creating a distributed storage with completely
transparent failover recovery, with the ability to detach remote nodes
from a mirror array so that they become standalone realtime backups (or
snapshots), and then to return them to the storage without stopping the
main device node.
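The transparent reconnection that makes this failover possible can be
sketched as a retry loop with backoff. Again a Python sketch under stated
assumptions - the function name, the exponential backoff policy, and the
use of ConnectionError are all illustrative, not taken from DST:

```python
# Hedged sketch of automatic reconnection to a failed remote node.
# Policy (exponential backoff, attempt cap) is an assumption for the
# example; DST's actual reconnect logic lives in the kernel core.
import time

def reconnect(connect, max_attempts=5, base_delay=0.01, sleep=time.sleep):
    delay = base_delay
    for attempt in range(1, max_attempts + 1):
        try:
            return connect()          # success: hand the link back to the core
        except ConnectionError:
            if attempt == max_attempts:
                raise                 # give up only after the last attempt
            sleep(delay)              # back off before retrying
            delay *= 2                # exponential backoff

if __name__ == "__main__":
    state = {"tries": 0}
    def flaky():
        state["tries"] += 1
        if state["tries"] < 3:
            raise ConnectionError("node unreachable")
        return "connected"
    # injected no-op sleep keeps the example instant
    print(reconnect(flaky, sleep=lambda d: None))
```

The caller never sees the intermediate failures, which is the
"completely transparent" part: requests keep flowing to the surviving
nodes while the failed one is retried in the background.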

-- 
	Evgeniy Polyakov


Thread overview: 3 messages
2007-08-14 16:29 Distributed storage. Mirroring to any number of devices Evgeniy Polyakov
2007-08-14 17:20 ` Jan Engelhardt
2007-08-14 17:40   ` Evgeniy Polyakov [this message]

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20070814174003.GA31716@2ka.mipt.ru \
    --to=johnpol@2ka.mipt.ru \
    --cc=jengelh@computergmbh.de \
    --cc=linux-fsdevel@vger.kernel.org \
    --cc=linux-kernel@vger.kernel.org \
    --cc=netdev@vger.kernel.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

Be sure your reply has a Subject: header at the top and a blank line
before the message body.