From: Lars Marowsky-Bree <lmb@suse.de>
To: Daniel Phillips <phillips@istop.com>, sdake@mvista.com
Cc: David Teigland <teigland@redhat.com>, linux-kernel@vger.kernel.org
Subject: Re: [ANNOUNCE] Minneapolis Cluster Summit, July 29-30
Date: Sun, 11 Jul 2004 23:06:24 +0200
Message-ID: <20040711210624.GC3933@marowsky-bree.de>
In-Reply-To: <200407111544.25590.phillips@istop.com>
On 2004-07-11T15:44:25,
Daniel Phillips <phillips@istop.com> said:
> Unless you can prove that your userspace approach never deadlocks, the other
> questions don't even move the needle. I am sure that one day somebody, maybe
> you, will demonstrate a userspace approach that is provably correct.
If you can _prove_ your kernel-space implementation to be correct, I'll
drop each and every complaint ;)
> Until then, if you want your cluster to stay up and fail over
> properly, there's only one game in town.
This, however, is not true; clusters have managed just fine running in
user-space (realtime priority, mlocked into pre-allocated memory,
etc.).
I agree that for a cluster filesystem, having the infrastructure in the
kernel means much lower latency. Going back and forth to user-land just
isn't as fast, and also not very neat.
However, the memory argument is pretty weak; the memory for
heartbeating and core functionality must be pre-allocated if you care
that much. And if you cannot allocate it, maybe you aren't healthy
enough to join the cluster in the first place.
Otherwise, I don't much care about whether it's in-kernel or not.
My main arguments against being in kernel space have always been
portability and ease of integration, which being in-kernel makes quite
annoying for ISVs, plus the support issues that arise. But if it
becomes a common component of the 'kernel proper', that argument no
longer holds.
If the infrastructure takes that jump, I'd be happy. Infrastructure is
boring and has been solved/reinvented so often that there's hardly
anything new and exciting about heartbeating or membership; the more
fun work is higher up the stack.
> > There is one more advantage to group messaging and distributed
> > locking implemented within the kernel, that I hadn't originally
> > considered; it sure is sexy.
> I don't think it's sexy, I think it's ugly, to tell the truth. I am
> actively researching how to move the slow-path cluster infrastructure
> out of kernel, and I would be pleased to work together with anyone
> else who is interested in this nasty problem.
Messaging (which hopefully includes strong authentication, if not
encryption, though I could see that being delegated to IPsec) and
locking are in the fast path, though.
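To make the fast-path authentication point concrete, here is a minimal
Python sketch of sealing and verifying a cluster message with an
HMAC-SHA256 tag over a sequence number plus payload. Key distribution,
replay windows, and the wire format are out of scope, and every name here
is made up for illustration:

```python
import hashlib
import hmac
import struct

TAG_LEN = 32  # HMAC-SHA256 digest size
HDR_LEN = 8   # 64-bit big-endian sequence number

def seal(key: bytes, seq: int, payload: bytes) -> bytes:
    """Append a sequence number and an HMAC-SHA256 tag to a message."""
    header = struct.pack("!Q", seq)
    tag = hmac.new(key, header + payload, hashlib.sha256).digest()
    return header + payload + tag

def open_msg(key: bytes, wire: bytes):
    """Verify the tag; return (seq, payload), or None to drop the
    message (forged or corrupted) without further processing."""
    if len(wire) < HDR_LEN + TAG_LEN:
        return None
    header, payload, tag = (wire[:HDR_LEN],
                            wire[HDR_LEN:-TAG_LEN],
                            wire[-TAG_LEN:])
    expect = hmac.new(key, header + payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expect, tag):
        return None
    return struct.unpack("!Q", header)[0], payload
```

The verify step runs on every received heartbeat and lock message, which
is exactly why it lives in the fast path: one extra hash per message, as
opposed to handing the whole channel to IPsec below the socket layer.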
Sincerely,
Lars Marowsky-Brée <lmb@suse.de>
--
High Availability & Clustering \ ever tried. ever failed. no matter.
SUSE Labs, Research and Development | try again. fail again. fail better.
SUSE LINUX AG - A Novell company \ -- Samuel Beckett