From: Jeff Garzik <jeff@garzik.org>
To: Evgeniy Polyakov <johnpol@2ka.mipt.ru>
Cc: linux-kernel@vger.kernel.org, netdev@vger.kernel.org,
linux-fsdevel@vger.kernel.org
Subject: Re: [0/3] POHMELFS high performance network filesystem. First steps in parallel processing.
Date: Sat, 14 Jun 2008 05:52:38 -0400 [thread overview]
Message-ID: <485394E6.9080809@garzik.org> (raw)
In-Reply-To: <20080613163700.GA25860@2ka.mipt.ru>
Evgeniy Polyakov wrote:
> Hi.
>
> I'm pleased to announce POHMELFS, a high performance parallel
> distributed network filesystem.
> POHMELFS stands for Parallel Optimized Host Message Exchange Layered File System.
>
> Development status can be tracked in filesystem section [1].
>
> This is a high performance network filesystem with local coherent cache of data
> and metadata. Its main goal is distributed parallel processing of data.
>
> This release brings following features:
> * Read requests (data read, directory listing, lookup requests) balancing
> between multiple servers.
> * Write requests are sent to multiple servers and completed only
> when all of them sent an ack.
> * Ability to add and/or remove servers from the working set at run-time
> from userspace (via netlink; the same command could also be carried
> over a real network connection, but since the server does not support
> that yet, the network part was dropped).
> * Documentation (overall view and protocol commands)!
> * Rename command (oops, forgot it in previous releases :)
> * Several new mount options to control client behaviour instead of
> hardcoded numbers.
> * Bug fixes.
Neat :) Thanks for the protocol documentation, too. Do you plan to add
write-pages in addition to write-page? Also, write-page does not appear
to be documented.
Is rename-across-directories race-free? That is a sticky area; see
Documentation/filesystems/directory-locking in particular.
With the exception of encryption, do you think the POHMELFS client is
mostly complete, at this point?
Jeff
Thread overview: 20+ messages
2008-06-13 16:37 [0/3] POHMELFS high performance network filesystem. First steps in parallel processing Evgeniy Polyakov
2008-06-13 16:40 ` [1/3] POHMELFS: VFS trivial change Evgeniy Polyakov
2008-06-13 16:41 ` [2/3] POHMELFS: Documentation Evgeniy Polyakov
2008-06-14 2:15 ` Jamie Lokier
2008-06-14 6:56 ` Evgeniy Polyakov
2008-06-14 9:49 ` Jeff Garzik
2008-06-14 18:45 ` Trond Myklebust
2008-06-14 19:25 ` Evgeniy Polyakov
2008-06-15 4:27 ` Sage Weil
2008-06-15 5:57 ` Evgeniy Polyakov
2008-06-15 16:41 ` Sage Weil
2008-06-15 17:50 ` Evgeniy Polyakov
2008-06-16 3:17 ` Sage Weil
2008-06-16 10:20 ` Evgeniy Polyakov
2008-06-13 16:42 ` [3/3] POHMELFS high performance network filesystem Evgeniy Polyakov
2008-06-15 7:47 ` Vegard Nossum
2008-06-15 9:14 ` Evgeniy Polyakov
2008-06-14 9:52 ` Jeff Garzik [this message]
2008-06-14 10:10 ` [0/3] POHMELFS high performance network filesystem. First steps in parallel processing Evgeniy Polyakov
-- strict thread matches above, loose matches on Subject: below --
2008-07-07 18:07 Evgeniy Polyakov