From: "Peter T. Breuer" <ptb@it.uc3m.es>
To: Paul.Clements@steeleye.com
Cc: Edward Muller <emuller@learningpatterns.com>,
linux-kernel@vger.kernel.org
Subject: Re: Current NBD 'stuff'
Date: Wed, 5 Dec 2001 22:02:56 +0100 (CET) [thread overview]
Message-ID: <200112052102.WAA29674@nbd.it.uc3m.es> (raw)
In-Reply-To: <Pine.LNX.4.10.10112051058140.17617-100000@clements.sc.steeleye.com> "from Paul Clements at Dec 5, 2001 11:14:43 am"
"A month of sundays ago Paul Clements wrote:"
> On 4 Dec 2001, Edward Muller wrote:
>
> A word of caution on this. I played around with ENBD (as well as some
> others) about 6 months ago. I also did some performance testing with
> the different drivers and user-level utilities. What I found was that
> ENBD achieved only about 1/3 ~ 1/4 the throughput of NBD (even with
> multiple replication paths and various block sizes). YMMV.
It strikes me that possibly you were using the 2.2 kernel. My reasoning is
that (1) nowadays kernels coalesce all requests into large lumps - limited
only by the driver's wishes - before the driver gets them, and (2) I don't
think I ever managed to get request merging working in kernel 2.2, but now
the kernel does it for free. When nbd sets the limit at 256KB, it gets
256KB-sized requests to handle. Did you see the request size distribution in
my previous reply? It was flat out at the size limit every time.
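A toy model (my own sketch, not anything from the driver code; the link rate and per-request cost are illustrative assumptions) of why merged requests matter: any fixed per-request cost gets amortized over the lump size, so throughput climbs steeply as the kernel hands the driver bigger requests.

```python
# Toy model: a fixed per-request cost (context switch, syscall, protocol
# header) is paid once per request, however big the request is, so larger
# merged requests amortize it better. All numbers are assumptions for
# illustration, not measurements of nbd or ENBD.

LINK_BYTES_PER_S = 12.5e6   # 100BT, idealized: 100 Mbit/s ~= 12.5 MB/s
PER_REQUEST_S = 0.010       # assumed fixed overhead per request (10 ms)

def throughput(req_bytes):
    """Effective bytes/s when each request pays a fixed overhead."""
    wire_s = req_bytes / LINK_BYTES_PER_S     # time on the wire
    return req_bytes / (wire_s + PER_REQUEST_S)

for size in (1 << 10, 1 << 15, 1 << 18):      # 1KB, 32KB, 256KB requests
    print(f"{size >> 10:4d} KB requests: {throughput(size) / 1e6:.2f} MB/s")
```

With these assumed numbers the 256KB lumps come within a factor of two of the raw link rate, while 1KB requests are throttled almost entirely by the fixed cost.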
So whatever time is spent in the kernel or in userspace per request (possibly
the context switch is still significant, but then make the lumps bigger
..) is dwarfed by the time spent in kernel networking, making the lumps
go out and come back from the net. On 100BT we are talking about 1/4s in
networking per request. If we waste 10ms more than Pavel's nbd in actual
coding and context switches (ridiculous!) we lose only 4% in speed, not 75%!
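The arithmetic in that claim can be checked directly. The 1/4s networking time and 10ms overhead figures are the ones stated above; the sketch just works out the resulting throughput loss.

```python
# Back-of-envelope check of the claim above: if each request spends
# ~250 ms (1/4 s) in networking, then 10 ms of extra per-request driver
# overhead costs only a few percent of throughput. Both figures come
# from the text; the model (simple serial per-request times) is mine.

net_time_s = 0.250      # time per request in kernel networking (from text)
overhead_s = 0.010      # extra per-request overhead vs kernel nbd (from text)

baseline = 1.0 / net_time_s                     # requests/s, no extra cost
with_overhead = 1.0 / (net_time_s + overhead_s) # requests/s, with it

loss = 1.0 - with_overhead / baseline
print(f"throughput loss: {loss:.1%}")           # roughly 4%, as claimed
```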
So I think you could code it in Visual Basic and get no variance in
speed at the client end, at least. That leaves the server's net-to-disk
time to contend with.
I don't know about that end. But Enbd does not do anything different in
principle from what kernel nbd does. It might well be slower because
the code is heavily layered. I suspect that at that end transfers are
done at the local disk blocksize, which may be small enough to make
code differences noticeable. But in general I find that Enbd goes
either at the speed of the net or at the speed of the remote disk,
whichever is slower.
It also uses a trick when writing that usually results in exceeding the
cable bandwidth by a factor of two during raid resyncs over nbd.
> I also looked at DRBD, which performed pretty well (comparable to NBD).
>
> > But that's mostly because Pavel doesn't have much time at the moment for
> > it AFAIK.
Peter
Thread overview: 11+ messages
2001-12-03 18:02 Current NBD 'stuff' Edward Muller
2001-12-04 22:26 ` Paul Clements
2001-12-04 23:12 ` Edward Muller
2001-12-05 16:14 ` Paul Clements
2001-12-05 16:44 ` Peter T. Breuer
2001-12-05 21:02 ` Peter T. Breuer [this message]
2001-12-05 22:30 ` Paul Clements
2001-12-06 13:02 ` Pavel Machek
2001-12-06 22:13 ` [PATCH] " Paul Clements
2001-12-22 19:48 ` David Chow
2001-12-06 12:54 ` Pavel Machek