From: Jeff Garzik <jeff@garzik.org>
To: Evgeniy Polyakov <johnpol@2ka.mipt.ru>
Cc: Jamie Lokier <jamie@shareable.org>,
linux-kernel@vger.kernel.org, netdev@vger.kernel.org,
linux-fsdevel@vger.kernel.org
Subject: Re: [2/3] POHMELFS: Documentation.
Date: Sat, 14 Jun 2008 05:49:04 -0400
Message-ID: <48539410.30804@garzik.org>
In-Reply-To: <20080614065616.GA32585@2ka.mipt.ru>
Evgeniy Polyakov wrote:
> Oplocks and leases are essentially locks on a given file which allow one
> client to operate on it. POHMELFS does not have locks now; they will
> be created depending on how the distributed server requires them. In the
> simplest case it can just lock the file for writing and not allow
> updates from other clients. Lock acquisition can be done at write_begin
> time. Without a lock and with a writeback cache, in your case writeback
> for file Y can happen before writeback for file X, but if the client does
> not only write but also syncs after its write, then yes, the client will
> see the later updates after the earlier ones. POHMELFS does not broadcast
> its interest in the file content until real writing happens, i.e. at
> writeback time. Although I could add a mode where the same is done at
> write_begin() time. In that case your example would work without sync.
For /locking/, life is easy: you don't have to worry about disallowing
client updates, because locking is advisory. However, there are some
guarantees you need for locking with respect to write commit, and of course
leases are a totally different animal, where you do block client updates.
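To make that distinction concrete, plain fcntl(2) from userspace already
shows it; nothing here is POHMELFS-specific, and the file name is just an
example. An advisory F_SETLK lock only coordinates processes that also check
the lock, while an F_SETLEASE write lease makes the kernel hold off a
conflicting open until the lease is released or the lease-break timeout
expires.

#define _GNU_SOURCE		/* for F_SETLEASE */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int fd = open("testfile", O_RDWR | O_CREAT, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Advisory POSIX lock: only cooperating processes that also call
	 * fcntl() notice it; a plain write(2) elsewhere is not blocked. */
	struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET,
			    .l_start = 0, .l_len = 0 /* whole file */ };
	if (fcntl(fd, F_SETLK, &fl) == -1)
		perror("F_SETLK");

	/* Write lease: enforced by the kernel.  A conflicting open(2) from
	 * another process is held off until we get SIGIO and release the
	 * lease, or the lease-break timeout expires. */
	if (fcntl(fd, F_SETLEASE, F_WRLCK) == -1)
		perror("F_SETLEASE");

	pause();		/* wait here for a lease-break signal */
	close(fd);
	return 0;
}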
Jeff
Thread overview: 23+ messages
2008-06-13 16:37 [0/3] POHMELFS high performance network filesystem. First steps in parallel processing Evgeniy Polyakov
2008-06-13 16:40 ` [1/3] POHMELFS: VFS trivial change Evgeniy Polyakov
2008-06-13 16:41 ` [2/3] POHMELFS: Documentation Evgeniy Polyakov
2008-06-14 2:15 ` Jamie Lokier
2008-06-14 6:56 ` Evgeniy Polyakov
2008-06-14 9:49 ` Jeff Garzik [this message]
2008-06-14 18:45 ` Trond Myklebust
2008-06-14 19:25 ` Evgeniy Polyakov
2008-06-15 4:27 ` Sage Weil
2008-06-15 5:57 ` Evgeniy Polyakov
2008-06-15 16:41 ` Sage Weil
2008-06-15 17:50 ` Evgeniy Polyakov
2008-06-16 3:17 ` Sage Weil
2008-06-16 10:20 ` Evgeniy Polyakov
2008-06-13 16:42 ` [3/3] POHMELFS high performance network filesystem Evgeniy Polyakov
2008-06-15 7:47 ` Vegard Nossum
2008-06-15 9:14 ` Evgeniy Polyakov
2008-06-14 9:52 ` [0/3] POHMELFS high performance network filesystem. First steps in parallel processing Jeff Garzik
2008-06-14 10:10 ` Evgeniy Polyakov
-- strict thread matches above, loose matches on Subject: below --
2008-07-07 18:07 Evgeniy Polyakov
2008-07-07 18:10 ` [2/3] POHMELFS: Documentation Evgeniy Polyakov
2008-07-12 7:01 ` Pavel Machek
2008-07-12 7:26 ` Evgeniy Polyakov
2008-10-07 21:19 [0/3] The new POHMELFS release Evgeniy Polyakov
2008-10-07 21:21 ` [2/3] POHMELFS: documentation Evgeniy Polyakov