From: David Chow <davidchow@shaolinmicro.com>
To: Andreas Dilger <adilger@clusterfs.com>
Cc: Peter Braam <braam@clusterfs.com>, linux-fsdevel@vger.kernel.org
Subject: Re: [ANNOUNCE] Lustre Lite 1.0 beta 1
Date: Tue, 18 Mar 2003 01:13:50 +0800
Message-ID: <3E76024E.9060607@shaolinmicro.com>
In-Reply-To: <20030316013858.C12806@schatzie.adilger.int>
>
>
>Well, each OST is primarily (in the Lustre sense) the network interface
>protocol, and the internal implementation is opaque to the outside world.
>Each OST is independent of the others, although the clients end up allocating
>files on all of them.
>
>For Linux OSTs we use ext3 on regular block devices (raw disk, MD RAID,
>LOV, whatever you want to use) for the actual data storage, and the
>filesystem is journaled/managed totally independently of all of the
>other OSTs. We have also used reiserfs for OST storage at times (and
>conceivably you could use XFS/JFS), and there are also 3rd party vendors
>who are building OST boxes with their own non-Linux internals.
>
>Since this is just regular disk attached to regular Linux boxes, it is
>also possible to do storage server failover (already being implemented)
>without clients even being aware of a problem. Single disk failure is
>expected to be handled by RAID of some kind.
>
>Cheers, Andreas
>
>
Andreas,
Thanks for your lengthy explanation. The design looks like Coda, with
the OSTs you describe serving as the actual data storage: in effect a
stacked file cache, where data is stored persistently in files on
existing file systems. A minimal sketch of my reading of that
allocation model follows below.
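
To be concrete, this is how I picture clients spreading file objects
over independent OSTs. Everything here is hypothetical Python of my
own, not Lustre code; the OST class, stripe_write, and the round-robin
policy are all my assumptions:

# Hypothetical sketch, not Lustre code: a client striping a file's data
# round-robin across several independent object storage targets (OSTs).

class OST:
    """Stands in for one storage server; its internals stay opaque."""
    def __init__(self, name):
        self.name = name
        self.objects = {}                  # (file id, stripe no) -> bytes

    def write(self, obj_id, data):
        self.objects[obj_id] = data        # each OST stores its own objects

def stripe_write(osts, file_id, data, stripe_size):
    """Split the file into stripes and place them round-robin on the OSTs."""
    placement = []
    for stripe_no, off in enumerate(range(0, len(data), stripe_size)):
        ost = osts[stripe_no % len(osts)]  # round-robin allocation
        ost.write((file_id, stripe_no), data[off:off + stripe_size])
        placement.append(ost.name)
    return placement

osts = [OST("ost%d" % n) for n in range(3)]
print(stripe_write(osts, "fileA", b"x" * 10000, stripe_size=4096))
# -> ['ost0', 'ost1', 'ost2']
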
However, how can it handle a disconnected storage server? This is the
most difficult problem for any cluster file system that supports
disconnection. Simply disallowing disconnection is not viable for a
system with thousands of nodes, where the chance of a node failure is
very high, and file allocation is still being done across multiple
storage servers. Resolving data conflicts transparently after a
disconnection strikes me as impossible! (A small sketch of the
detection side of this follows; on the failover point from your mail,
see my postscript.) I would really like to hear how Lustre handles
this, since it has already been exercised on 1000 nodes. When I sat
down to design a distributed file system myself, this problem nearly
blew my head off. Thanks for any comments, and perhaps you can give me
some direction, as I am very interested in this topic.
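
For what it is worth, the Coda-style mechanism I keep coming back to
is version vectors: they mechanically detect that two replicas diverged
during a partition, but when both sides were updated there is no
transparent winner, which is exactly my worry. A hypothetical sketch of
my own, not code from Coda or Lustre:

# Hypothetical sketch: version vectors detect divergent replicas after
# a partition. Detection is mechanical; resolving a true conflict is not.

def compare(vv_a, vv_b):
    """Compare two version vectors (dicts: replica id -> update count)."""
    keys = set(vv_a) | set(vv_b)
    a_ge = all(vv_a.get(k, 0) >= vv_b.get(k, 0) for k in keys)
    b_ge = all(vv_b.get(k, 0) >= vv_a.get(k, 0) for k in keys)
    if a_ge and b_ge:
        return "equal"        # identical histories
    if a_ge:
        return "a dominates"  # b is strictly stale, safe to bring forward
    if b_ge:
        return "b dominates"
    return "conflict"         # concurrent updates: no transparent winner

# Both replicas were modified while partitioned from each other:
print(compare({"srv1": 2, "srv2": 1}, {"srv1": 1, "srv2": 2}))  # conflict
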
regards,
David Chow
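
P.S. On your point that server failover can happen without clients even
noticing: the way I would sketch it (purely hypothetical Python of my
own, not how Lustre actually implements it) is a client that retries
each replica address in turn, so the caller never learns which server
answered:

# Hypothetical sketch of transparent failover: the client walks a list
# of replica servers, so the application never sees a single fault.

class UnreachableError(Exception):
    pass

class Server:
    def __init__(self, name, alive=True):
        self.name, self.alive = name, alive
    def read(self, obj_id):
        if not self.alive:
            raise UnreachableError(self.name)
        return "%s@%s" % (obj_id, self.name)

class FailoverClient:
    """Try each server in order; callers never see which one answered."""
    def __init__(self, servers):
        self.servers = servers
    def read(self, obj_id):
        for srv in self.servers:
            try:
                return srv.read(obj_id)
            except UnreachableError:
                continue               # fail over to the next replica
        raise UnreachableError("all replicas down")

client = FailoverClient([Server("ost0", alive=False), Server("ost0-backup")])
print(client.read("fileA.0"))          # served by the backup, transparently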
Thread overview: 6+ messages
2003-03-12 17:56 [ANNOUNCE] Lustre Lite 1.0 beta 1 Peter Braam
2003-03-16 5:38 ` David Chow
2003-03-16 8:38 ` Andreas Dilger
2003-03-17 17:13 ` David Chow [this message]
2003-03-17 17:46 ` Andreas Dilger
2003-03-17 17:47 ` Peter Braam