From: Hannes Reinecke <hare@suse.de>
To: Ming Lei <ming.lei@redhat.com>
Cc: Gabriel Krisman Bertazi <krisman@collabora.com>,
	lsf-pc@lists.linux-foundation.org, linux-block@vger.kernel.org,
	Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>,
	linux-mm@kvack.org
Subject: Re: [LSF/MM/BPF TOPIC] block drivers in user space
Date: Mon, 28 Mar 2022 07:48:47 +0200	[thread overview]
Message-ID: <f328815c-a68d-0d00-a8dd-5ed6ace491ce@suse.de> (raw)
In-Reply-To: <YkCSVSk1SwvtABIW@T590>
On 3/27/22 18:35, Ming Lei wrote:
> On Tue, Feb 22, 2022 at 07:57:27AM +0100, Hannes Reinecke wrote:
>> On 2/21/22 20:59, Gabriel Krisman Bertazi wrote:
>>> I'd like to discuss an interface to implement user space block devices,
>>> while avoiding local network NBD solutions.  There has been reiterated
>>> interest in the topic, both from researchers [1] and from the community,
>>> including a proposed session in LSFMM2018 [2] (though I don't think it
>>> happened).
>>>
>>> I've been working on top of the Google iblock implementation to find
>>> something upstreamable and would like to present my design and gather
>>> feedback on some points, in particular zero-copy and overall user space
>>> interface.
>>>
>>> The design I'm pending towards uses special fds opened by the driver to
>>> transfer data to/from the block driver, preferably through direct
>>> splicing as much as possible, to keep data only in kernel space.  This
>>> is because, in my use case, the driver usually only manipulates
>>> metadata, while data is forwarded directly through the network, or
>>> similar. It would be neat if we can leverage the existing
>>> splice/copy_file_range syscalls such that we don't ever need to bring
>>> disk data to user space, if we can avoid it.  I've also experimented
>>> with regular pipes, But I found no way around keeping a lot of pipes
>>> opened, one for each possible command 'slot'.
>>>
>>> [1] https://dl.acm.org/doi/10.1145/3456727.3463768
>>> [2] https://www.spinics.net/lists/linux-fsdevel/msg120674.html
>>>
>> Actually, I'd rather have something like an 'inverse io_uring', where an
>> application creates a memory region split into several 'rings' for
>> submission and completion.
>> Then the kernel could write/map the incoming data onto those rings, and
>> the application could read from there.
>> Maybe it'll be worthwhile to look at virtio here.
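For illustration, the 'inverse io_uring' layout suggested above could look roughly like the sketch below: one shared region holding a submission ring the kernel produces into and a completion ring the application produces into. All struct and field names here are hypothetical, not any real kernel ABI; real code would mmap the region and use memory barriers between producer and consumer.

```c
/* Hypothetical layout for the 'inverse io_uring' idea: a single shared
 * memory region split into a submission ring (kernel writes incoming I/O
 * descriptors) and a completion ring (the application posts results).
 * All names are illustrative, not a real kernel interface. */
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define RING_ENTRIES 128        /* power of two so we can mask, not mod */

struct io_desc {                /* one incoming request from the kernel */
	uint64_t sector;
	uint32_t nr_sectors;
	uint32_t opcode;        /* e.g. read/write/flush */
};

struct io_result {              /* one completion posted by the application */
	uint64_t tag;
	int32_t  status;
};

struct shared_region {
	/* submission ring: kernel is producer, application is consumer */
	uint32_t sq_head, sq_tail;
	struct io_desc sq[RING_ENTRIES];
	/* completion ring: application is producer, kernel is consumer */
	uint32_t cq_head, cq_tail;
	struct io_result cq[RING_ENTRIES];
};

/* Kernel side (simulated here): publish an incoming request. */
static int sq_push(struct shared_region *r, const struct io_desc *d)
{
	if (r->sq_tail - r->sq_head == RING_ENTRIES)
		return -1;                      /* ring full */
	r->sq[r->sq_tail & (RING_ENTRIES - 1)] = *d;
	r->sq_tail++;                           /* real code needs a barrier */
	return 0;
}

/* Application side: consume one request descriptor. */
static int sq_pop(struct shared_region *r, struct io_desc *d)
{
	if (r->sq_head == r->sq_tail)
		return -1;                      /* ring empty */
	*d = r->sq[r->sq_head & (RING_ENTRIES - 1)];
	r->sq_head++;
	return 0;
}

/* Application side: post a completion for a handled request. */
static void cq_push(struct shared_region *r, uint64_t tag, int32_t status)
{
	struct io_result *res = &r->cq[r->cq_tail & (RING_ENTRIES - 1)];

	res->tag = tag;
	res->status = status;
	r->cq_tail++;
}
```

This is essentially the virtqueue shape (available/used rings), which is why virtio is a natural place to look.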
> 
> IMO there is no need for an 'inverse io_uring'; the normal io_uring
> SQE/CQE model already covers this case. The userspace part can submit
> SQEs beforehand to get a notification for each incoming io request from
> the kernel driver; then, once an io request is queued to the driver, the
> driver can post a CQE for the previously submitted SQE. The recently
> posted IORING_OP_URING_CMD patch [1] is perfect for this purpose.
> 
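The pre-submitted-SQE scheme described above can be sketched in miniature as a set of command slots: the server arms one slot per queue-depth entry up front, the driver completes a slot when an I/O request arrives (the CQE), and the server re-arms the slot after handling it. All names below are illustrative; the real mechanism is an io_uring SQE per slot using IORING_OP_URING_CMD, not this in-process simulation.

```c
/* Miniature simulation of the pre-submitted SQE/CQE handshake: the
 * userspace server arms QUEUE_DEPTH command slots in advance; the
 * (simulated) driver completes one armed slot per incoming request,
 * and the server re-arms the slot after handling the I/O. */
#include <assert.h>
#include <stdint.h>

#define QUEUE_DEPTH 4

enum slot_state { SLOT_FREE, SLOT_ARMED, SLOT_COMPLETED };

struct cmd_slot {
	enum slot_state state;
	uint64_t sector;        /* payload of the delivered request */
	uint32_t nr_sectors;
};

static struct cmd_slot slots[QUEUE_DEPTH];

/* Userspace: arm every slot up front (one SQE per slot in the real model). */
static void server_arm_all(void)
{
	for (int i = 0; i < QUEUE_DEPTH; i++)
		slots[i].state = SLOT_ARMED;
}

/* Driver: an I/O request arrived; complete one armed slot to notify
 * userspace (this stands in for posting a CQE). */
static int driver_queue_rq(uint64_t sector, uint32_t nr_sectors)
{
	for (int i = 0; i < QUEUE_DEPTH; i++) {
		if (slots[i].state == SLOT_ARMED) {
			slots[i].sector = sector;
			slots[i].nr_sectors = nr_sectors;
			slots[i].state = SLOT_COMPLETED;
			return i;       /* slot tag */
		}
	}
	return -1;                      /* queue depth exhausted */
}

/* Userspace: reap one completion, handle it, and re-arm the slot
 * (re-arming stands in for re-submitting the command SQE). */
static int server_reap_one(uint64_t *sector)
{
	for (int i = 0; i < QUEUE_DEPTH; i++) {
		if (slots[i].state == SLOT_COMPLETED) {
			*sector = slots[i].sector;
			slots[i].state = SLOT_ARMED;
			return i;
		}
	}
	return -1;                      /* nothing pending */
}
```

The key property is that no round trip is wasted: a notification is always one completion away because the SQEs already sit in the ring.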
Ah, cool idea.
> I have recently written one such userspace block driver: [2] is the
> kernel-side blk-mq driver (the ubd driver), and the userspace part is
> ubdsrv [3]. Both parts look quite simple, but they are still at a very
> early stage; so far only the ubd-loop and ubd-null targets are
> implemented in [3]. Not only is the io command communication channel
> done via IORING_OP_URING_CMD, the IO handling for ubd-loop is also
> implemented via plain io_uring.
> 
> It is basically working: for ubd-loop, I see no regression in
> 'xfstests -g auto' on the ubd block device compared with the same
> xfstests run on the underlying disk, and my simple performance test in a
> VM shows results no worse than the kernel loop driver with dio, and even
> much better in some test situations.
> 
Neat. I'll have a look.
Thanks for doing that!
Cheers,
Hannes
-- 
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                              +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer