From: Ming Zhang <mingz@ele.uri.edu>
To: Dan Smith <danms@us.ibm.com>
Cc: device-mapper development <dm-devel@redhat.com>,
linux-kernel@vger.kernel.org
Subject: Re: [dm-devel] [RFC] dm-userspace
Date: Wed, 26 Apr 2006 19:41:17 -0400 [thread overview]
Message-ID: <1146094877.14129.343.camel@localhost.localdomain> (raw)
In-Reply-To: <87psj45420.fsf@caffeine.beaverton.ibm.com>
On Wed, 2006-04-26 at 16:07 -0700, Dan Smith wrote:
> MZ> just curious, will the speed be a problem here?
>
> I'm glad you asked... :)
>
> MZ> considering each time it needs to contact user space for mapping a
> MZ> piece of data.
>
> Actually, that's not the case. The idea is for mappings to be cached
> in the kernel module so that the communication with userspace only
> needs to happen once per block. The thought is to ask once for a
> read, and then remember that mapping until a write happens, which
> might change the story. If so, we ask userspace again.
Sounds reasonable; I see the caching now.
>
> Right now, the kernel module expires mappings in a pretty brain-dead
> way to make sure the list doesn't get too long. An intelligent data
> structure and expiration method would probably improve performance
> quite a bit.
>
> I don't have any benchmark data to post right now. I did some quick
> analysis a while back and found it to be not too bad. When using loop
> devices as a backing store, I achieved performance as high as a little
> under 50% of native.
Oh. :P 50% is a considerable amount; anyway, a good start. ;)
>
> MZ> and the size unit is per sector in dm?
>
> Well, for qcow it is a sector, yes. The module itself, however, can
> use any block size (as long as it is a multiple of a sector). Before
> I started work on qcow support, I wrote a test application that used
> 2MiB blocks, which is where I got the approximately 50% performance
> value I described above.
Pure reads, or a mix of reads and writes?
>
> Our thought is that this would mostly be used for the OS images of
> virtual machines, which shouldn't change much, which would help to
> prevent constantly asking userspace to map blocks.
>
If that is the scenario, then maybe a more aggressive mapping policy could be used here.
You might be interested in this: some developers are working on a generic SCSI target layer that passes SCSI CDBs to userspace for processing while keeping the data transfer in kernel space. Both projects face the same overhead here, so the two might learn from each other on this.
PS: a trivial thing, but userspace_request is allocated frequently and could use a slab cache.
ming
Thread overview: 8+ messages (2006-04-26 23:41 UTC)
2006-04-26 22:45 [RFC] dm-userspace Dan Smith
2006-04-26 22:55 ` [dm-devel] " Ming Zhang
2006-04-26 23:07 ` Dan Smith
2006-04-26 23:41 ` Ming Zhang [this message]
2006-04-27 2:22 ` Dan Smith
2006-04-27 13:09 ` Ming Zhang
2006-05-09 23:02 ` Dan Smith
2006-05-10 13:27 ` Ming Zhang