public inbox for bpf@vger.kernel.org
From: Ming Lei <ming.lei@redhat.com>
To: Xiao Ni <xni@redhat.com>
Cc: lsf-pc@lists.linux-foundation.org,
	linux-block <linux-block@vger.kernel.org>,
	bpf@vger.kernel.org,
	Caleb Sander Mateos <csander@purestorage.com>
Subject: Re: [LSF/MM/BPF TOPIC] User space RAID5 with ublk and io_uring BPF
Date: Fri, 20 Feb 2026 22:13:53 +0800	[thread overview]
Message-ID: <aZhsIVa2TJ9-bMhj@fedora> (raw)
In-Reply-To: <CALTww28QMg=YXqKWpWLZrLO+xiqOe3LGyput8dx68-dnQsxg=g@mail.gmail.com>

Hi Xiao,

On Thu, Feb 19, 2026 at 01:38:46PM +0800, Xiao Ni wrote:
> Hi all
> 
> I've been doing some work on user-space RAID recently. I'd like to
> propose a topic for LSF/MM 2026 on implementing RAID5 in user space
> using ublk and io_uring BPF[1], focusing in particular on the
> challenges encountered and the kernel improvements that may be needed.
> 
> The ublk RAID5 target uses the ublk framework
> (tools/testing/selftests/ublk/), with the goal of leveraging
> io_uring's zero-copy capabilities and BPF[1] for performance
> optimization. The implementation includes:
> * RAID5 stripe handling with configurable chunk sizes
> * Multi-queue support via io_uring
> * Degraded mode for single disk failure tolerance
> * Integration with io_uring BPF struct_ops framework
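
For readers unfamiliar with RAID5 geometry, the stripe/chunk handling in
the first bullet can be sketched in plain C. The layout below (parity
rotating forward one disk per stripe) and all names are illustrative
assumptions on my part, not necessarily the layout the ublk target uses:

```c
#include <assert.h>
#include <stdint.h>

/* Map a logical byte offset to RAID5 geometry: which stripe, which
 * physical disk (skipping the rotating parity disk), and the offset
 * inside the chunk.  Hypothetical sketch, not the target's code. */
struct r5_map {
	uint64_t stripe;    /* stripe number */
	unsigned disk;      /* physical disk index holding the data */
	uint64_t chunk_off; /* byte offset inside the chunk */
};

static struct r5_map r5_map_sector(uint64_t off, uint64_t chunk_size,
				   unsigned nr_disks)
{
	struct r5_map m;
	uint64_t chunk = off / chunk_size;
	unsigned data_disks = nr_disks - 1;
	unsigned data_disk, parity;

	m.stripe = chunk / data_disks;
	data_disk = chunk % data_disks;
	/* assumed layout: parity advances one disk per stripe */
	parity = m.stripe % nr_disks;
	/* data disks fill the slots around the parity disk */
	m.disk = (data_disk >= parity) ? data_disk + 1 : data_disk;
	m.chunk_off = off % chunk_size;
	return m;
}
```

With a 64 KiB chunk and 3 disks, stripe 0 puts parity on disk 0, so the
first two data chunks land on disks 1 and 2, and stripe 1 rotates parity
to disk 1.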
> 
> During the implementation I encountered several technical challenges.
> The primary one is performing the XOR parity calculation in a truly
> zero-copy manner:
> 
> 1. BPF program support
> RAID5 needs to compute parity from the data. io_uring already
> supports zero copy, which is sufficient for targets such as raid0 and
> raid1. But a target such as raid5 still has to copy the data to user
> space in order to compute the parity. A BPF program is a nice way to
> resolve this[1]. Beyond the patch set[1], it also needs support for
> an additional BPF kfunc, uring_bpf_xor().
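
For context, the operation such a kfunc would have to perform is just
`dst ^= src` over a registered buffer region. The name uring_bpf_xor()
comes from the proposal; the signature below is my guess, not a kernel
API, and a real kernel implementation would use the SIMD-accelerated
xor_blocks() rather than a byte loop:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Sketch of the XOR a uring_bpf_xor() kfunc would perform over a
 * registered buffer: dst ^= src for len bytes.  Signature is assumed. */
static void uring_bpf_xor(void *dst, const void *src, size_t len)
{
	uint8_t *d = dst;
	const uint8_t *s = src;

	/* byte-at-a-time for clarity; the kernel would vectorize this */
	for (size_t i = 0; i < len; i++)
		d[i] ^= s[i];
}
```

Because XOR is self-inverse, applying the same source twice restores the
destination, which is also what makes degraded-mode reconstruction work.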
> 
> 2. Registered buffer per I/O
> Because RAID5 computes parity from the data, it still needs to
> pre-allocate a buffer and register it for each I/O. The total memory
> is bounded by the queue count and queue depth. With the pre-allocated
> memory and BPF program support, a target such as raid5 can operate in
> a truly zero-copy manner.
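
To make the buffer arithmetic concrete: if each in-flight I/O keeps one
parity-sized buffer, the footprint is roughly nr_queues * queue_depth *
buffer_size (my reading of the paragraph above, not a figure from the
patch set). A minimal sketch of the parity computation such a
pre-allocated buffer would hold, with hypothetical names:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Compute RAID5 parity over nr_data chunks of `chunk` bytes each,
 * writing the result into a pre-allocated, registered parity buffer.
 * Illustrative sketch; names are assumptions, not the target's API. */
static void r5_compute_parity(uint8_t *parity, uint8_t *data[],
			      unsigned nr_data, size_t chunk)
{
	memset(parity, 0, chunk);
	for (unsigned d = 0; d < nr_data; d++)
		for (size_t i = 0; i < chunk; i++)
			parity[i] ^= data[d][i];
}
```

The same routine doubles as degraded-mode reconstruction: XOR-ing the
parity with the surviving data chunks yields the missing chunk.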
> 
> Question for discussion:
> * Should the kernel provide XOR operation kfuncs for io_uring BPF?
> * What would be the appropriate API design?
> * Are there security or verification concerns with BPF performing XOR
> on registered buffers?
> 
> Current status:
> * Basic RAID5 functionality: implemented
> * BPF struct_ops framework: successfully integrated
> * Performance testing: in progress
> * Main blocker: the zero-copy XOR implementation is waiting for
> kernel support; I've written a patch for testing based on [1]
> 
> Why this matters
> User-space block devices (ublk) offer several advantages:
> * Faster development and iteration
> * Easier debugging
> * Ability to leverage user-space libraries
> * No kernel panic risk during development
> * Easy performance evaluation/comparison between kernel RAID and
> ublk-based user-space RAID
> 
> Desired outcomes
> * Determine whether the proposal in [1] would be acceptable
> * Get feedback on whether a `uring_bpf_xor()` kfunc would be acceptable
> * Discuss API design for BPF-based computation on io_uring buffers
> * Understand the roadmap for io_uring + BPF capabilities
> * Learn best practices from the community for similar implementations

I am interested in this topic, and it looks like Caleb Sander Mateos
has a similar requirement too.


Thanks,
Ming



Thread overview: 7+ messages
2026-02-19  5:38 [LSF/MM/BPF TOPIC] User space RAID5 with ublk and io_uring BPF Xiao Ni
2026-02-20 14:13 ` Ming Lei [this message]
2026-02-21  9:11   ` Xiao Ni
2026-02-23 15:54 ` Pavel Begunkov
2026-02-23 16:40   ` Hannes Reinecke
2026-02-24 16:24 ` Keith Busch
2026-03-01  2:32 ` Cong Wang
