public inbox for linux-fsdevel@vger.kernel.org
* [LSF/MM/BPF ATTEND][LSF/MM/BPF TOPIC] : blktests: status, expansion plan for the storage stack test framework
@ 2024-01-09  6:30 Chaitanya Kulkarni
  2024-01-09 21:31 ` Bart Van Assche
  2024-01-17  8:50 ` Daniel Wagner
  0 siblings, 2 replies; 25+ messages in thread
From: Chaitanya Kulkarni @ 2024-01-09  6:30 UTC (permalink / raw)
  To: lsf-pc@lists.linux-foundation.org,
	linux-fsdevel@vger.kernel.org >> linux-fsdevel,
	linux-scsi@vger.kernel.org, linux-nvme@lists.infradead.org,
	linux-block@vger.kernel.org
  Cc: Jens Axboe, Bart Van Assche, josef@toxicpanda.com, Amir Goldstein,
	Javier González, Dan Williams, Christoph Hellwig,
	Keith Busch, Hannes Reinecke, Damien Le Moal,
	shinichiro.kawasaki@wdc.com, Johannes Thumshirn, jack@suse.com,
	Ming Lei, Sagi Grimberg, Theodore Ts'o, daniel@iogearbox.net,
	Daniel Wagner

Hi,

Since the discussion of the storage stack and device drivers at
LSFMM 2017 (https://lwn.net/Articles/717699/), Omar Sandoval introduced
a new framework, "blktests", dedicated to Linux kernel block layer
testing, which is maintained by Shinichiro Kawasaki :-

https://lwn.net/Articles/722785/
https://github.com/osandov/blktests

As the Linux kernel block layer is central to the various file systems and
underlying low-level device drivers, it is important to have a centralized
testing framework and to make sure it grows with the latest block layer
changes, which are being added for features of different device types
(e.g. NVMe devices with Zoned Namespace support).

Since then, blktests has grown and become the go-to framework: we have
integrated different stand-alone test suites, such as SRP-tests, NVMFTESTS,
NVMe Multipath tests, and zoned block device tests, into one central
framework. This has made overall block layer testing and development
much easier than having to configure and execute different test cases
for each kernel release for different subsystems (such as FS, NVMe, zoned
block devices, etc.).

Here is the list of the existing test categories:-

block : 32
dm    : 1
loop  : 9
nbd   : 4
nvme  : 49
scsi  : 6
srp   : 15
ublk  : 6
zbd   : 10
----------------------------------------------------------------
9 Categories     : 147 Tests

This project has gathered much attention, and the storage stack community is
actively participating, adding new test cases in different categories to the
framework.

Since the addition of this framework, we have been consistently finding bugs
proactively with the help of blktests test cases.

For the storage track, I would like to propose a session dedicated to
blktests. It is a great opportunity for the storage developers to gather
and have a discussion about:-

1. Current status of the blktests framework.
2. Any new/missing features that we want to add in the blktests.
3. Any new kernel features that could be used to make testing easier?
4. DM/MD Testcases.
5. Potentially adding VM support in the blktests.

E.g. implementing new features in null_blk.c in order to have complete,
device-independent test coverage (e.g. adding discard command support to
null_blk, or any other specific REQ_OP), or discussing whether to add any
new tracepoint events in the block layer.

We can also discuss any new test cases/categories which are lacking in the
blktests framework.
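As a rough sketch of what such a new test case could look like, here is a
minimal blktests-style test exercising discard on null_blk. The helper names
(_have_module, _have_program, _init_null_blk, _exit_null_blk) follow existing
blktests conventions, but the description, parameters, and structure here are
illustrative, not an actual submitted test:

```shell
#!/bin/bash
# SPDX-License-Identifier: GPL-3.0+
#
# Illustrative blktests-style test: issue a discard against a null_blk
# device. This is a sketch and needs the blktests harness to run.

. tests/block/rc

DESCRIPTION="issue discard against null_blk"
QUICK=1

requires() {
	_have_module null_blk
	_have_program blkdiscard
}

test() {
	echo "Running ${TEST_NAME}"

	# create a null_blk device (assumed here to support discard)
	_init_null_blk || return 1

	# the actual exercise: discard the whole device
	blkdiscard /dev/nullb0 >> "$FULL" 2>&1

	_exit_null_blk

	echo "Test complete"
}
```

A test like this only runs inside the blktests harness (./check block/NNN) on a
kernel with null_blk available, so it is shown here purely as a shape for new
contributions.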

Required attendees :-

Shinichiro Kawasaki
Damien Le Moal
Daniel Wagner
Hannes Reinecke

-ck



^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [LSF/MM/BPF ATTEND][LSF/MM/BPF TOPIC] : blktests: status, expansion plan for the storage stack test framework
  2024-01-09  6:30 Chaitanya Kulkarni
@ 2024-01-09 21:31 ` Bart Van Assche
  2024-01-09 22:01   ` Chaitanya Kulkarni
  2024-01-17  8:50 ` Daniel Wagner
  1 sibling, 1 reply; 25+ messages in thread
From: Bart Van Assche @ 2024-01-09 21:31 UTC (permalink / raw)
  To: Chaitanya Kulkarni, lsf-pc@lists.linux-foundation.org,
	linux-fsdevel@vger.kernel.org >> linux-fsdevel,
	linux-scsi@vger.kernel.org, linux-nvme@lists.infradead.org,
	linux-block@vger.kernel.org
  Cc: Jens Axboe, josef@toxicpanda.com, Amir Goldstein,
	Javier González, Dan Williams, Christoph Hellwig,
	Keith Busch, Hannes Reinecke, Damien Le Moal,
	shinichiro.kawasaki@wdc.com, Johannes Thumshirn, jack@suse.com,
	Ming Lei, Sagi Grimberg, Theodore Ts'o, daniel@iogearbox.net,
	Daniel Wagner

On 1/8/24 22:30, Chaitanya Kulkarni wrote:
> 5. Potentially adding VM support in the blktests.

What is "VM support"? I'm running blktests in a VM all the time since
this test suite was introduced.

Bart.


* Re: [LSF/MM/BPF ATTEND][LSF/MM/BPF TOPIC] : blktests: status, expansion plan for the storage stack test framework
  2024-01-09 21:31 ` Bart Van Assche
@ 2024-01-09 22:01   ` Chaitanya Kulkarni
  2024-01-09 22:08     ` Bart Van Assche
  0 siblings, 1 reply; 25+ messages in thread
From: Chaitanya Kulkarni @ 2024-01-09 22:01 UTC (permalink / raw)
  To: Bart Van Assche, Shinichiro Kawasaki
  Cc: Jens Axboe, josef@toxicpanda.com, Amir Goldstein,
	Javier González, linux-block@vger.kernel.org,
	linux-nvme@lists.infradead.org, lsf-pc@lists.linux-foundation.org,
	Dan Williams, linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org >> linux-fsdevel,
	Christoph Hellwig, Keith Busch, Hannes Reinecke, Damien Le Moal,
	shinichiro.kawasaki@wdc.com, Johannes Thumshirn, jack@suse.com,
	Ming Lei, Sagi Grimberg, Theodore Ts'o, daniel@iogearbox.net,
	Daniel Wagner

On 1/9/2024 1:31 PM, Bart Van Assche wrote:
> On 1/8/24 22:30, Chaitanya Kulkarni wrote:
>> 5. Potentially adding VM support in the blktests.
> 
> What is "VM support"? I'm running blktests in a VM all the time since
> this test suite was introduced.
> 
> Bart.

The ability to start, stop, and suspend a VM, and to issue commands to it.
The recent async shutdown patchset on the linux-nvme list is one example of
why we may need this feature; see [1].

-ck

[1]

https://lists.infradead.org/pipermail/linux-nvme/2024-Januar/044135.html




* Re: [LSF/MM/BPF ATTEND][LSF/MM/BPF TOPIC] : blktests: status, expansion plan for the storage stack test framework
  2024-01-09 22:01   ` Chaitanya Kulkarni
@ 2024-01-09 22:08     ` Bart Van Assche
  0 siblings, 0 replies; 25+ messages in thread
From: Bart Van Assche @ 2024-01-09 22:08 UTC (permalink / raw)
  To: Chaitanya Kulkarni, Shinichiro Kawasaki
  Cc: Jens Axboe, josef@toxicpanda.com, Amir Goldstein,
	Javier González, linux-block@vger.kernel.org,
	linux-nvme@lists.infradead.org, lsf-pc@lists.linux-foundation.org,
	Dan Williams, linux-scsi@vger.kernel.org,
	linux-fsdevel@vger.kernel.org >> linux-fsdevel,
	Christoph Hellwig, Keith Busch, Hannes Reinecke, Damien Le Moal,
	Johannes Thumshirn, jack@suse.com, Ming Lei, Sagi Grimberg,
	Theodore Ts'o, daniel@iogearbox.net, Daniel Wagner

On 1/9/24 14:01, Chaitanya Kulkarni wrote:
> On 1/9/2024 1:31 PM, Bart Van Assche wrote:
>> On 1/8/24 22:30, Chaitanya Kulkarni wrote:
>>> 5. Potentially adding VM support in the blktests.
>>
>> What is "VM support"? I'm running blktests in a VM all the time since
>> this test suite was introduced.
>>
>> Bart.
> 
> The ability to start, stop, and suspend a VM, and to issue commands to it.
> The recent async shutdown patchset on the linux-nvme list is one example of
> why we may need this feature; see [1].
> 
> -ck
> 
> [1]
> 
> https://lists.infradead.org/pipermail/linux-nvme/2024-Januar/044135.html
If I try to open that link, the following appears: "Not Found - The requested
URL was not found on this server." Anyway, I think I know which patch series
you are referring to. There may be better ways to trigger shutdown calls than
by stopping a VM.

Thanks,

Bart.


* Re: [LSF/MM/BPF ATTEND][LSF/MM/BPF TOPIC] : blktests: status, expansion plan for the storage stack test framework
  2024-01-09  6:30 Chaitanya Kulkarni
  2024-01-09 21:31 ` Bart Van Assche
@ 2024-01-17  8:50 ` Daniel Wagner
  2024-01-23 15:07   ` Daniel Wagner
  1 sibling, 1 reply; 25+ messages in thread
From: Daniel Wagner @ 2024-01-17  8:50 UTC (permalink / raw)
  To: Chaitanya Kulkarni
  Cc: lsf-pc@lists.linux-foundation.org,
	linux-fsdevel@vger.kernel.org >> linux-fsdevel,
	linux-scsi@vger.kernel.org, linux-nvme@lists.infradead.org,
	linux-block@vger.kernel.org, Jens Axboe, Bart Van Assche,
	josef@toxicpanda.com, Amir Goldstein, Javier González,
	Dan Williams, Christoph Hellwig, Keith Busch, Hannes Reinecke,
	Damien Le Moal, shinichiro.kawasaki@wdc.com, Johannes Thumshirn,
	jack@suse.com, Ming Lei, Sagi Grimberg, Theodore Ts'o,
	daniel@iogearbox.net

On Tue, Jan 09, 2024 at 06:30:46AM +0000, Chaitanya Kulkarni wrote:
> For storage track, I would like to propose a session dedicated to
> blktests. It is a great opportunity for the storage developers to gather
> and have a discussion about:-
> 
> 1. Current status of the blktests framework.
> 2. Any new/missing features that we want to add in the blktests.
> 3. Any new kernel features that could be used to make testing easier?
> 4. DM/MD Testcases.
> 5. Potentially adding VM support in the blktests.

I am interested in such a session.

> Required attendees :-
> 
> Shinichiro Kawasaki
> Damien Le Moal
> Daniel Wagner
> Hannes Reinecke

If I get an invite I'll be there.

Thanks,
Daniel


* Re: [LSF/MM/BPF ATTEND][LSF/MM/BPF TOPIC] : blktests: status, expansion plan for the storage stack test framework
  2024-01-23 15:07   ` Daniel Wagner
@ 2024-02-14  7:32     ` Shinichiro Kawasaki
  2024-02-21 18:32     ` Luis Chamberlain
  1 sibling, 0 replies; 25+ messages in thread
From: Shinichiro Kawasaki @ 2024-02-14  7:32 UTC (permalink / raw)
  To: Daniel Wagner
  Cc: Chaitanya Kulkarni, lsf-pc@lists.linux-foundation.org,
	linux-fsdevel@vger.kernel.org >> linux-fsdevel,
	linux-scsi@vger.kernel.org, linux-nvme@lists.infradead.org,
	linux-block@vger.kernel.org, Jens Axboe, Bart Van Assche,
	josef@toxicpanda.com, Amir Goldstein, Javier González,
	Dan Williams, Christoph Hellwig, Keith Busch, Hannes Reinecke,
	Damien Le Moal, Johannes Thumshirn, jack@suse.com, Ming Lei,
	Sagi Grimberg, Theodore Ts'o, daniel@iogearbox.net

On Jan 23, 2024 / 16:07, Daniel Wagner wrote:
> On Wed, Jan 17, 2024 at 09:50:50AM +0100, Daniel Wagner wrote:
> > On Tue, Jan 09, 2024 at 06:30:46AM +0000, Chaitanya Kulkarni wrote:
> > > For storage track, I would like to propose a session dedicated to
> > > blktests. It is a great opportunity for the storage developers to gather
> > > and have a discussion about:-
> > > 
> > > 1. Current status of the blktests framework.
> > > 2. Any new/missing features that we want to add in the blktests.
> > > 3. Any new kernel features that could be used to make testing easier?
> > > 4. DM/MD Testcases.
> > > 5. Potentially adding VM support in the blktests.
> > 
> > I am interested in such a session.

Thanks Chaitanya, I'm interested in them too. I can share my view on the current
status of blktests.

> 
> One discussion point I'd like to add is
> 
>   - running blktests against real hardware/targets

Agreed. I guess this may be meant for real RDMA hardware, which was discussed
in a couple of GitHub pull requests [1][2].

  [1] https://github.com/osandov/blktests/pull/86
  [2] https://github.com/osandov/blktests/pull/127

Another topic I suggest is,

 - Automated blktests runs and reports

Recently I learned that the CKI project runs blktests regularly against the
Linus master branch and the linux-block for-next branch, then makes the run
results visible on the net [3][4]. Now I'm trying to understand the detailed
conditions of the test runs. If we can discuss and clarify what kinds of run
conditions will help storage kernel developers, it will hopefully be good
input to the CKI project.

  [3] https://datawarehouse.cki-project.org/kcidb/tests?tree_filter=mainline.kernel.org&kernel_version_filter=&arch_filter=x86_64&test_filter=blktests&host_filter=&testresult_filter=
  [4] https://datawarehouse.cki-project.org/kcidb/tests?tree_filter=block&kernel_version_filter=&arch_filter=x86_64&test_filter=blktests&host_filter=&testresult_filter=


* Re: [LSF/MM/BPF ATTEND][LSF/MM/BPF TOPIC] : blktests: status, expansion plan for the storage stack test framework
  2024-02-21 18:32     ` Luis Chamberlain
@ 2024-02-22  9:31       ` Daniel Wagner
  2024-02-22 15:54         ` Luis Chamberlain
  0 siblings, 1 reply; 25+ messages in thread
From: Daniel Wagner @ 2024-02-22  9:31 UTC (permalink / raw)
  To: Luis Chamberlain
  Cc: Chaitanya Kulkarni, lsf-pc@lists.linux-foundation.org,
	linux-fsdevel@vger.kernel.org >> linux-fsdevel,
	linux-scsi@vger.kernel.org, linux-nvme@lists.infradead.org,
	linux-block@vger.kernel.org, Jens Axboe, Bart Van Assche,
	josef@toxicpanda.com, Amir Goldstein, Javier González,
	Dan Williams, Christoph Hellwig, Keith Busch, Hannes Reinecke,
	Damien Le Moal, shinichiro.kawasaki@wdc.com, Johannes Thumshirn,
	jack@suse.com, Ming Lei, Sagi Grimberg, Theodore Ts'o,
	daniel@iogearbox.net

On Wed, Feb 21, 2024 at 10:32:05AM -0800, Luis Chamberlain wrote:
> > One discussion point I'd like to add is
> > 
> >   - running blktests against real hardware/targets
> 
> We've resolved this in fstests by canonicalizing device symlinks, and
> through kdevops it's possible to even use PCIe passthrough onto a guest
> using dynamic kconfig (i.e., specific to the host).
> 
> It should be possible to do that in blktests too. The dynamic kconfig
> thing is outside of scope, but this is a long-winded way of suggesting
> that if we extend blktests to add a similar device-canonicalization
> function, then, since kdevops supports blktests, you get that PCIe
> passthrough for free too.

I should have been more precise here: I was trying to say supporting
real fabrics targets. blktests already has some logic for PCI targets
with $TEST_DEV, but I haven't really looked into this part yet.


* Re: [LSF/MM/BPF ATTEND][LSF/MM/BPF TOPIC] : blktests: status, expansion plan for the storage stack test framework
  2024-02-22  9:31       ` Daniel Wagner
@ 2024-02-22 15:54         ` Luis Chamberlain
  2024-02-22 16:16           ` Daniel Wagner
  0 siblings, 1 reply; 25+ messages in thread
From: Luis Chamberlain @ 2024-02-22 15:54 UTC (permalink / raw)
  To: Daniel Wagner
  Cc: Chaitanya Kulkarni, lsf-pc@lists.linux-foundation.org,
	linux-fsdevel@vger.kernel.org >> linux-fsdevel,
	linux-scsi@vger.kernel.org, linux-nvme@lists.infradead.org,
	linux-block@vger.kernel.org, Jens Axboe, Bart Van Assche,
	josef@toxicpanda.com, Amir Goldstein, Javier González,
	Dan Williams, Christoph Hellwig, Keith Busch, Hannes Reinecke,
	Damien Le Moal, shinichiro.kawasaki@wdc.com, Johannes Thumshirn,
	jack@suse.com, Ming Lei, Sagi Grimberg, Theodore Ts'o,
	daniel@iogearbox.net

On Thu, Feb 22, 2024 at 10:31:53AM +0100, Daniel Wagner wrote:
> On Wed, Feb 21, 2024 at 10:32:05AM -0800, Luis Chamberlain wrote:
> > > One discussion point I'd like to add is
> > > 
> > >   - running blktests against real hardware/targets
> > 
> > We've resolved this in fstests by canonicalizing device symlinks, and
> > through kdevops it's possible to even use PCIe passthrough onto a guest
> > using dynamic kconfig (i.e., specific to the host).
> > 
> > It should be possible to do that in blktests too. The dynamic kconfig
> > thing is outside of scope, but this is a long-winded way of suggesting
> > that if we extend blktests to add a similar device-canonicalization
> > function, then, since kdevops supports blktests, you get that PCIe
> > passthrough for free too.
> 
> I should have been more precise here: I was trying to say supporting
> real fabrics targets. blktests already has some logic for PCI targets
> with $TEST_DEV, but I haven't really looked into this part yet.

Do fabric targets have a symlink which remains static?

  Luis


* Re: [LSF/MM/BPF ATTEND][LSF/MM/BPF TOPIC] : blktests: status, expansion plan for the storage stack test framework
  2024-02-22 15:54         ` Luis Chamberlain
@ 2024-02-22 16:16           ` Daniel Wagner
  0 siblings, 0 replies; 25+ messages in thread
From: Daniel Wagner @ 2024-02-22 16:16 UTC (permalink / raw)
  To: Luis Chamberlain
  Cc: Chaitanya Kulkarni, lsf-pc@lists.linux-foundation.org,
	linux-fsdevel@vger.kernel.org >> linux-fsdevel,
	linux-scsi@vger.kernel.org, linux-nvme@lists.infradead.org,
	linux-block@vger.kernel.org, Jens Axboe, Bart Van Assche,
	josef@toxicpanda.com, Amir Goldstein, Javier González,
	Dan Williams, Christoph Hellwig, Keith Busch, Hannes Reinecke,
	Damien Le Moal, shinichiro.kawasaki@wdc.com, Johannes Thumshirn,
	jack@suse.com, Ming Lei, Sagi Grimberg, Theodore Ts'o,
	daniel@iogearbox.net

On Thu, Feb 22, 2024 at 07:54:18AM -0800, Luis Chamberlain wrote:
> > I should have been more precise here: I was trying to say supporting
> > real fabrics targets. blktests already has some logic for PCI targets
> > with $TEST_DEV, but I haven't really looked into this part yet.
> 
> Do fabric targets have a symlink which remains static?

A pretty typical nvme fabric test is:

setup phase target side:
 - create backing device (file/block)
 - create loopback device
 - create nvme subsystem

setup phase host side:
 - discover
 - connect to the target

test phase
 do something like reading/writing from '/dev/nvmeX'
 or 'nvme id-ctrl /dev/nvmeX', etc.

cleanup phase host side:
 - disconnect from the target

cleanup phase target side:
 - remove nvme subsystem
 - remove loopback device
 - remove backing device
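Assuming the current soft target (nvmet) with the loop transport, the phases
above can be sketched roughly as follows with nvme-cli and nvmet's configfs
interface. The subsystem name, backing file, and device paths are
illustrative, everything requires root, and the discovery step is omitted
since the loop transport does not need it:

```shell
# Sketch of a typical nvme fabric test using nvmet (loop transport).
# Names and paths are examples, not blktests code.

# --- setup phase, target side ---
modprobe nvmet nvme-loop
truncate -s 1G /tmp/backing.img                 # backing device (file)
dev=$(losetup --find --show /tmp/backing.img)   # loopback block device

cfs=/sys/kernel/config/nvmet
mkdir -p "$cfs/subsystems/testnqn/namespaces/1"   # create nvme subsystem
echo 1 > "$cfs/subsystems/testnqn/attr_allow_any_host"
echo "$dev" > "$cfs/subsystems/testnqn/namespaces/1/device_path"
echo 1 > "$cfs/subsystems/testnqn/namespaces/1/enable"
mkdir -p "$cfs/ports/1"
echo loop > "$cfs/ports/1/addr_trtype"
ln -s "$cfs/subsystems/testnqn" "$cfs/ports/1/subsystems/testnqn"

# --- setup phase, host side ---
nvme connect --transport=loop --nqn=testnqn

# --- test phase ---
nvme id-ctrl /dev/nvme0          # or read/write the namespace

# --- cleanup phase, host side then target side ---
nvme disconnect --nqn=testnqn
rm "$cfs/ports/1/subsystems/testnqn"
echo 0 > "$cfs/subsystems/testnqn/namespaces/1/enable"
rmdir "$cfs/subsystems/testnqn/namespaces/1" \
      "$cfs/subsystems/testnqn" "$cfs/ports/1"
losetup -d "$dev"
rm /tmp/backing.img
```

The point below is that only the target-side half of this sequence would
change when nvmet is swapped for a real fabrics target.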

I'd like to make the target-side setup and cleanup more flexible. The
host side will not be affected at all by exchanging the current soft
target (aka nvmet) with something else. This means it's not about
any device links in /dev.

Hope this makes it a bit clearer.


* [LSF/MM/BPF ATTEND][LSF/MM/BPF TOPIC] : blktests: status, expansion plan for the storage stack test framework
@ 2026-02-11 20:35 Chaitanya Kulkarni
  2026-02-12  7:52 ` Daniel Wagner
  0 siblings, 1 reply; 25+ messages in thread
From: Chaitanya Kulkarni @ 2026-02-11 20:35 UTC (permalink / raw)
  To: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org,
	linux-nvme@lists.infradead.org, lsf-pc@lists.linux-foundation.org
  Cc: Bart Van Assche, Shinichiro Kawasaki, Daniel Wagner,
	Hannes Reinecke, Christoph Hellwig, Jens Axboe, sagi@grimberg.me,
	tytso@mit.edu, Johannes Thumshirn, Christian Brauner,
	Martin K. Petersen, linux-fsdevel@vger.kernel.org,
	Javier González, willy@infradead.org, Jan Kara,
	amir73il@gmail.com, vbabka@suse.cz, Damien Le Moal

Hi all,

   Since the discussion at LSFMM 2017 [1], Omar Sandoval introduced the new
   framework "blktests" dedicated to Linux kernel block layer testing.

   Blktests serves as the centralized testing framework. It has grown with
   the latest block layer changes and has successfully integrated various
   stand-alone test suites like SRP-tests, NVMFTESTS, NVMe Multipath tests,
   and zoned block device tests. This integration has significantly
   simplified the process of block layer testing and development,
   eliminating the need to configure and execute test cases for each
   kernel release.

   The storage stack community is actively engaged, contributing and adding
   new test cases across diverse categories to the framework. Since the
   beginning, we have consistently been finding bugs proactively with the
   help of blktests test cases.

   Below is a summary of the existing test categories and their test cases
   as of February 2026.

   block        :  41
   blktrace     :   2
   dm           :   3
   loop         :  11
   md           :   4
   meta         :  24
   nbd          :   4
   nvme         :  59
   rnbd         :   2
   scsi         :  10
   srp          :  15
   throtl       :   7
   ublk         :   6
   zbd          :  14
   ------------------

   14 Categories   : 202 Tests

   For the storage track at LSFMMBPF2026, I propose a session dedicated to
   blktests to discuss the expansion plan and CI integration progress.

   [1] https://lwn.net/Articles/717699/

-ck





* Re: [LSF/MM/BPF ATTEND][LSF/MM/BPF TOPIC] : blktests: status, expansion plan for the storage stack test framework
  2026-02-11 20:35 [LSF/MM/BPF ATTEND][LSF/MM/BPF TOPIC] : blktests: status, expansion plan for the storage stack test framework Chaitanya Kulkarni
@ 2026-02-12  7:52 ` Daniel Wagner
  2026-02-12  7:57   ` Johannes Thumshirn
  2026-02-13 11:23   ` Shinichiro Kawasaki
  0 siblings, 2 replies; 25+ messages in thread
From: Daniel Wagner @ 2026-02-12  7:52 UTC (permalink / raw)
  To: Chaitanya Kulkarni
  Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org,
	linux-nvme@lists.infradead.org, lsf-pc@lists.linux-foundation.org,
	Bart Van Assche, Shinichiro Kawasaki, Hannes Reinecke,
	Christoph Hellwig, Jens Axboe, sagi@grimberg.me, tytso@mit.edu,
	Johannes Thumshirn, Christian Brauner, Martin K. Petersen,
	linux-fsdevel@vger.kernel.org, Javier González,
	willy@infradead.org, Jan Kara, amir73il@gmail.com, vbabka@suse.cz,
	Damien Le Moal

On Wed, Feb 11, 2026 at 08:35:30PM +0000, Chaitanya Kulkarni wrote:
>    For the storage track at LSFMMBPF2026, I propose a session dedicated to
>    blktests to discuss expansion plan and CI integration progress.

Thanks for proposing this topic.

Just a few random topics which come to mind that we could discuss:

- blktests has gained a bit of traction and some folks run these tests on
  a regular basis. Can we gather feedback from them: what is working
  well, what is not? Are there feature wishes?
- Do we need some sort of configuration tool which allows setting up a
  config? I still have a TODO to provide a config example with all the
  knobs which influence blktests, but I wonder if we should go a step
  further here, e.g. something like kdevops has?
- In which areas do we lack tests? Should we just add an initial simple
  test for each missing area, so the basic infra is there, thus
  lowering the bar for adding new tests?
- The recent addition of kmemleak shows it's a great idea to enable more
  of the kernel test infrastructure when running the tests. Are there
  more such things we could/should enable?
- I would like to hear from Shin'ichiro whether he is happy with how things
  are going :)


* Re: [LSF/MM/BPF ATTEND][LSF/MM/BPF TOPIC] : blktests: status, expansion plan for the storage stack test framework
  2026-02-12  7:52 ` Daniel Wagner
@ 2026-02-12  7:57   ` Johannes Thumshirn
  2026-02-13 17:30     ` Bart Van Assche
  2026-02-13 11:23   ` Shinichiro Kawasaki
  1 sibling, 1 reply; 25+ messages in thread
From: Johannes Thumshirn @ 2026-02-12  7:57 UTC (permalink / raw)
  To: Daniel Wagner, Chaitanya Kulkarni
  Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org,
	linux-nvme@lists.infradead.org, lsf-pc@lists.linux-foundation.org,
	Bart Van Assche, Shinichiro Kawasaki, Hannes Reinecke, hch,
	Jens Axboe, sagi@grimberg.me, tytso@mit.edu, Christian Brauner,
	Martin K. Petersen, linux-fsdevel@vger.kernel.org,
	Javier González, willy@infradead.org, Jan Kara,
	amir73il@gmail.com, vbabka@suse.cz, Damien Le Moal

On 2/12/26 8:52 AM, Daniel Wagner wrote:
> On Wed, Feb 11, 2026 at 08:35:30PM +0000, Chaitanya Kulkarni wrote:
>>     For the storage track at LSFMMBPF2026, I propose a session dedicated to
>>     blktests to discuss expansion plan and CI integration progress.
> 
> - The recent addition of kmemleak shows it's a great idea to enable more
>   of the kernel test infrastructure when running the tests. Are there
>   more such things we could/should enable?

One thing that comes to my mind (and that I always wanted to do for
fstests but didn't for $REASONS) is adding per-test code coverage
information.

Something like the per-test kmemleak and dmesg output. This way one can
check that a test case is actually executing the code it intended to
test. It's also a good way to see which areas of the code lack proper testing.
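As a hedged sketch of that idea: with a kernel built with
CONFIG_GCOV_KERNEL=y (and run on the machine it was built on, per the
kernel's gcov documentation), per-test coverage could be captured by
resetting the counters around each case. The file names below are
illustrative:

```shell
# Per-test kernel code coverage sketch. Requires root, debugfs mounted,
# a CONFIG_GCOV_KERNEL=y kernel running on its build host, and lcov.
echo 1 > /sys/kernel/debug/gcov/reset     # zero all coverage counters
./check block/001                         # run a single blktests case
lcov --capture --output-file block-001.info   # snapshot coverage for it
genhtml block-001.info --output-directory coverage/block-001
```

Looping this over the test list would give the per-test coverage reports
described above, at the cost of a much slower run.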

Just my $.05.

Byte,

     Johannes



* Re: [LSF/MM/BPF ATTEND][LSF/MM/BPF TOPIC] : blktests: status, expansion plan for the storage stack test framework
  2026-02-12  7:52 ` Daniel Wagner
  2026-02-12  7:57   ` Johannes Thumshirn
@ 2026-02-13 11:23   ` Shinichiro Kawasaki
  2026-02-13 14:18     ` Haris Iqbal
                       ` (2 more replies)
  1 sibling, 3 replies; 25+ messages in thread
From: Shinichiro Kawasaki @ 2026-02-13 11:23 UTC (permalink / raw)
  To: Daniel Wagner
  Cc: Chaitanya Kulkarni, linux-block@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-nvme@lists.infradead.org,
	lsf-pc@lists.linux-foundation.org, Bart Van Assche,
	Hannes Reinecke, hch, Jens Axboe, sagi@grimberg.me, tytso@mit.edu,
	Johannes Thumshirn, Christian Brauner, Martin K. Petersen,
	linux-fsdevel@vger.kernel.org, Javier González,
	willy@infradead.org, Jan Kara, amir73il@gmail.com, vbabka@suse.cz,
	Damien Le Moal

On Feb 12, 2026 / 08:52, Daniel Wagner wrote:
> On Wed, Feb 11, 2026 at 08:35:30PM +0000, Chaitanya Kulkarni wrote:
> >    For the storage track at LSFMMBPF2026, I propose a session dedicated to
> >    blktests to discuss expansion plan and CI integration progress.
> 
> Thanks for proposing this topic.

Chaitanya, my thanks also go to you.

> Just a few random topics which come to mind we could discuss:
> 
> - blktests has gained a bit of traction and some folks run these tests on
>   a regular basis. Can we gather feedback from them: what is working
>   well, what is not? Are there feature wishes?

Good topic, I also would like to hear about it.

FYI, from the past LSFMM sessions and hallway talks, the major pieces of
feedback I have received are these two:

 1. blktests CI infra looks missing (other than CKI by Red Hat)
    -> Some activities are ongoing to start a blktests CI service.
       I hope the status is shared at the session.

 2. blktests is rather difficult to start using for some new users
    -> I think a config example is in demand, so that new users can
       just copy it to start their first run and understand the
       config options easily.

> - Do we need some sort of configuration tool which allows to setup a
>   config? I'd still have a TODO to provide a config example with all
>   knobs which influence blktests, but I wonder if we should go a step
>   further here, e.g. something like kdevops has?

Do you mean the "make menuconfig" style? Most blktests users are
familiar with menuconfig, so that would be an idea. If users really want
it, we can think of it. IMO, blktests still does not have so many options,
so a config.example would probably be simpler and more appropriate.
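A config.example along those lines might look like the sketch below.
TEST_DEVS, QUICK_RUN, TIMEOUT, and EXCLUDE are existing blktests knobs
(see the project README for the authoritative list); the values and
comments here are purely illustrative:

```shell
# Sketch of a possible blktests config.example (copied to "config" in
# the blktests top directory before running ./check).

# Block devices to run device-dependent tests against.
# WARNING: data on these devices is destroyed.
TEST_DEVS=(/dev/nvme0n1)

# Shorten test cases that support a quick mode.
QUICK_RUN=1

# Abort any single test case after this many seconds.
TIMEOUT=900

# Test cases to skip, e.g. known long-standing failures.
EXCLUDE=(srp/002)
```

Even without a menuconfig-style tool, shipping a commented file like this
would let new users copy it and edit a handful of values for their first run.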

> - Which area do we lack tests? Should we just add an initial simple
>   tests for the missing areas, so the basic infra is there and thus
>   lowering the bar for adding new tests?

To identify the uncovered areas, I think code coverage will be useful. A few
years ago, I measured it and shared the results at LSFMM, but that measurement
was done per source tree directory. The coverage ratio by source file would be
more helpful for identifying the missing areas. I don't have a time slot to
measure it, so if anyone can do it and share the result, it will be
appreciated. Once we know the missing areas, it sounds like a good idea to
add an initial sample test for each of those areas.

> - The recent addition of kmemleak shows it's a great idea to enable more
>   of the kernel test infrastructure when running the tests.

Completely agreed.

>   Are there more such things we could/should enable?

I'm also interested in this question :)

> - I would like to hear from Shin'ichiro if he is happy how things
>   are going? :)

More importantly, I would like to listen to the voices of storage sub-system
developers to see if they are happy or not, especially the maintainers.

From my view, blktests keeps on finding kernel bugs. I think this demonstrates
the value of this community effort, and I'm happy about it. That said, there
are of course things blktests can still improve. Here I share the list of
improvement opportunities from my viewpoint (I already mentioned the first
three items).

 1. We can have more CI infra to make the most of blktests
 2. We can add config examples to help new users
 3. We can measure code coverage to identify missing test areas
 4. Long-standing failures make test result reports dirty
    - I feel lockdep WARNs tend to be left unfixed for a rather long period.
      How can we gather the effort to fix them?
 5. We can refactor and clean up the blktests framework for ease of
    maintenance (e.g. trap handling)
 6. Some users run blktests with built-in kernel modules, which makes a number
    of test cases skipped. We can add more built-in kernel module support to
    expand test coverage for such use cases.
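On item 6, a hedged sketch of how a requires() helper could accept both
modular and built-in drivers (blktests has a helper in this spirit; the
name and details below are illustrative, not the actual implementation):

```shell
# Return success if a driver is available either as a loadable module or
# built into the running kernel. Name and logic are illustrative.
_have_driver_or_builtin() {
	local drv="${1//-/_}"

	# already loaded, or built-in with parameters: visible in /sys/module
	[[ -d "/sys/module/$drv" ]] && return 0

	# built-in without parameters: listed in modules.builtin
	grep -q "/${drv}.ko" \
		"/lib/modules/$(uname -r)/modules.builtin" 2>/dev/null && return 0

	# otherwise try to load it as a module
	modprobe --quiet "$drv"
}
```

Auditing existing tests for bare modprobe calls and routing them through a
helper like this would unskip many cases on built-in-module kernels.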


* Re: [LSF/MM/BPF ATTEND][LSF/MM/BPF TOPIC] : blktests: status, expansion plan for the storage stack test framework
  2026-02-13 11:23   ` Shinichiro Kawasaki
@ 2026-02-13 14:18     ` Haris Iqbal
  2026-02-15 18:38     ` Nilay Shroff
  2026-02-15 21:18     ` Haris Iqbal
  2 siblings, 0 replies; 25+ messages in thread
From: Haris Iqbal @ 2026-02-13 14:18 UTC (permalink / raw)
  To: Shinichiro Kawasaki
  Cc: Daniel Wagner, Chaitanya Kulkarni, linux-block@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-nvme@lists.infradead.org,
	lsf-pc@lists.linux-foundation.org, Bart Van Assche,
	Hannes Reinecke, hch, Jens Axboe, sagi@grimberg.me, tytso@mit.edu,
	Johannes Thumshirn, Christian Brauner, Martin K. Petersen,
	linux-fsdevel@vger.kernel.org, Javier González,
	willy@infradead.org, Jan Kara, amir73il@gmail.com, vbabka@suse.cz,
	Damien Le Moal

On Fri, Feb 13, 2026 at 12:25 PM Shinichiro Kawasaki
<shinichiro.kawasaki@wdc.com> wrote:
>
> On Feb 12, 2026 / 08:52, Daniel Wagner wrote:
> > On Wed, Feb 11, 2026 at 08:35:30PM +0000, Chaitanya Kulkarni wrote:
> > >    For the storage track at LSFMMBPF2026, I propose a session dedicated to
> > >    blktests to discuss expansion plan and CI integration progress.

I am interested in this topic.

> >
> > Thanks for proposing this topic.
>
> Chaitanya, my thank also goes to you.
>
> > Just a few random topics which come to mind we could discuss:
> >
> > - blktests has gained a bit of traction and some folks run these tests on
> >   a regular basis. Can we gather feedback from them: what is working
> >   well, what is not? Are there feature wishes?
>
> Good topic, I also would like to hear about it.
>
> FYI, from the past LSFMM sessions and hallway talks, major feedbacks I had
> received are these two:
>
>  1. blktests CI infra looks missing (other than CKI by Redhat)
>     -> Some activities are ongoing to start blktests CI service.
>        I hope the status are shared at the session.
>
>  2. blktests are rather difficult to start using for some new users
>     -> I think config example is demanded, so that new users can
>        just copy it to start the first run, and understand the
>        config options easily.

+1 to this.

>
> > - Do we need some sort of configuration tool which allows to setup a
> >   config? I'd still have a TODO to provide a config example with all
> >   knobs which influence blktests, but I wonder if we should go a step
> >   further here, e.g. something like kdevops has?
>
> Do you mean the "make menuconfig" style? Most of the blktests users are
> familiar with menuconfig, so that would be an idea. If users really want
> it, we can think of it. IMO, blktests still do not have so many options,
> then config.example would be simpler and more appropriate, probably.
>
> > - Which area do we lack tests? Should we just add an initial simple
> >   tests for the missing areas, so the basic infra is there and thus
> >   lowering the bar for adding new tests?
>
> To identify the uncovered area, I think code coverage will be useful. A few
> years ago, I measured it and shared in LSFMM, but that measurement was done for
> each source tree directory. The coverage ratio by source file will be more
> helpful to identify the missing area. I don't have time slot to measure it,
> so if anyone can do it and share the result, it will be appreciated. Once we
> know the missing areas, it sounds a good idea to add initial samples for each
> of the areas.
>
> > - The recent addition of kmemleak shows it's a great idea to enable more
> >   of the kernel test infrastructure when running the tests.
>
> Completely agreed.
>
> >   Are there more such things we could/should enable?
>
> I'm also interested in this question :)
>
> > - I would like to hear from Shin'ichiro if he is happy how things
> >   are going? :)
>
> More importantly, I would like to listen to voices from storage sub-system
> developers to see if they are happy or not, especially the maintainers.

I like the idea of blktests.
We have internal tests that we run for RNBD (and RTRS), and I plan
to port the RNBD ones to blktests.

>
> From my view, blktests keep on finding kernel bugs. I think it demonstrates the
> value of this community effort, and I'm happy about it. Said that, I find what
> blktests can improve more, of course. Here I share the list of improvement
> opportunities from my view point (I already mentioned the first three items).
>
>  1. We can have more CI infra to make the most of blktests
>  2. We can add config examples to help new users
>  3. We can measure code coverage to identify missing test areas
>  4. Long standing failures make test result reports dirty
>     - I feel lockdep WARNs are tend to be left unfixed rather long period.
>       How can we gather effort to fix them?
>  5. We can refactor and clean up blktests framework for ease of maintainance
>       (e.g. trap handling)
>  6. Some users run blktests with built-in kernel modules, which makes a number
>     of test cases skipped. We can add more built-in kernel modules support to
>     expand test coverage for such use case.

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [LSF/MM/BPF ATTEND][LSF/MM/BPF TOPIC] : blktests: status, expansion plan for the storage stack test framework
  2026-02-12  7:57   ` Johannes Thumshirn
@ 2026-02-13 17:30     ` Bart Van Assche
  2026-02-13 17:35       ` James Bottomley
  0 siblings, 1 reply; 25+ messages in thread
From: Bart Van Assche @ 2026-02-13 17:30 UTC (permalink / raw)
  To: Johannes Thumshirn, Daniel Wagner, Chaitanya Kulkarni
  Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org,
	linux-nvme@lists.infradead.org, lsf-pc@lists.linux-foundation.org,
	Shinichiro Kawasaki, Hannes Reinecke, hch, Jens Axboe,
	sagi@grimberg.me, tytso@mit.edu, Christian Brauner,
	Martin K. Petersen, linux-fsdevel@vger.kernel.org,
	Javier González, willy@infradead.org, Jan Kara,
	amir73il@gmail.com, vbabka@suse.cz, Damien Le Moal

On 2/11/26 11:57 PM, Johannes Thumshirn wrote:
> One thing that comes to my mind (and that I always wanted to do for
> fstests but didn't for $REASONS) is adding per-test code coverage
> information.

Code coverage information is useful but it's important to keep in mind
that 100% code coverage (which is very hard to achieve) does not
guarantee code correctness. There are many state machines in the block
layer and also in block drivers. Code coverage information does not
reveal what percentage of the states of state machines has been
triggered.

Bart.



* Re: [LSF/MM/BPF ATTEND][LSF/MM/BPF TOPIC] : blktests: status, expansion plan for the storage stack test framework
  2026-02-13 17:30     ` Bart Van Assche
@ 2026-02-13 17:35       ` James Bottomley
  0 siblings, 0 replies; 25+ messages in thread
From: James Bottomley @ 2026-02-13 17:35 UTC (permalink / raw)
  To: Bart Van Assche, Johannes Thumshirn, Daniel Wagner,
	Chaitanya Kulkarni
  Cc: linux-block@vger.kernel.org, linux-scsi@vger.kernel.org,
	linux-nvme@lists.infradead.org, lsf-pc@lists.linux-foundation.org,
	Shinichiro Kawasaki, Hannes Reinecke, hch, Jens Axboe,
	sagi@grimberg.me, tytso@mit.edu, Christian Brauner,
	Martin K. Petersen, linux-fsdevel@vger.kernel.org,
	Javier González, willy@infradead.org, Jan Kara,
	amir73il@gmail.com, vbabka@suse.cz, Damien Le Moal

On Fri, 2026-02-13 at 09:30 -0800, Bart Van Assche wrote:
> On 2/11/26 11:57 PM, Johannes Thumshirn wrote:
> > One thing that comes to my mind (and that I always wanted to do for
> > fstests but didn't for $REASONS) is adding per-test code coverage
> > information.
> 
> Code coverage information is useful but it's important to keep in
> mind that 100% code coverage (which is very hard to achieve) does not
> guarantee code correctness. There are many state machines in the
> block layer and also in block drivers. Code coverage information does
> not reveal what percentage of the states of state machines has been
> triggered.

This is not an either/or.  Usually our functional tests try to cover
the state machine (although often requiring error injection).  However,
a lot of our bugs hide in error legs and code coverage at least assures
us we've looked for them.

Regards,

James



* Re: [LSF/MM/BPF ATTEND][LSF/MM/BPF TOPIC] : blktests: status, expansion plan for the storage stack test framework
  2026-02-13 11:23   ` Shinichiro Kawasaki
  2026-02-13 14:18     ` Haris Iqbal
@ 2026-02-15 18:38     ` Nilay Shroff
  2026-02-15 21:18     ` Haris Iqbal
  2 siblings, 0 replies; 25+ messages in thread
From: Nilay Shroff @ 2026-02-15 18:38 UTC (permalink / raw)
  To: Shinichiro Kawasaki, Daniel Wagner
  Cc: Chaitanya Kulkarni, linux-block@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-nvme@lists.infradead.org,
	lsf-pc@lists.linux-foundation.org, Bart Van Assche,
	Hannes Reinecke, hch, Jens Axboe, sagi@grimberg.me, tytso@mit.edu,
	Johannes Thumshirn, Christian Brauner, Martin K. Petersen,
	linux-fsdevel@vger.kernel.org, Javier González,
	willy@infradead.org, Jan Kara, amir73il@gmail.com, vbabka@suse.cz,
	Damien Le Moal



On 2/13/26 4:53 PM, Shinichiro Kawasaki wrote:
> On Feb 12, 2026 / 08:52, Daniel Wagner wrote:
>> On Wed, Feb 11, 2026 at 08:35:30PM +0000, Chaitanya Kulkarni wrote:
>>>    For the storage track at LSFMMBPF2026, I propose a session dedicated to
>>>    blktests to discuss expansion plan and CI integration progress.
>>
>> Thanks for proposing this topic.
> 
> Chaitanya, my thank also goes to you.
> 
Yes, thanks for proposing this!

>> Just a few random topics which come to mind we could discuss:
>>
>> - blktests has gain a bit of traction and some folks run on regular
>>   basis these tests. Can we gather feedback from them, what is working
>>   good, what is not? Are there feature wishes?
> 
> Good topic, I also would like to hear about it.
> 
One improvement I'd like to highlight relates to how blktests are executed
today. So far we have been running blktests serially; could we run tests in
parallel to improve test turnaround time and make large-scale or CI-based
testing more efficient? For instance, we could add a parallel_safe tag marking
tests that don't modify global kernel state, so they can be safely offloaded
to parallel workers. Such tags would allow the runner to distinguish:

Safe tests: tests that only perform I/O on a specific, non-shared device or
check static kernel parameters.

Unsafe tests: tests that reload kernel modules, modify global /sys or /proc
entries, or require exclusive access to specific hardware addresses.

Yes, adding parallel execution support would require framework/design changes.
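To make the idea concrete, here is a rough sketch. The `parallel_safe`
variable and the runner logic below are hypothetical, not existing blktests
interfaces; a test file would declare the flag next to its existing metadata,
and the runner would bucket tests into parallel and serial queues:

```shell
# Hypothetical sketch: a blktests-style test file could declare a
# "parallel_safe" flag next to its existing metadata, e.g.:
#
#   DESCRIPTION="I/O on a private null_blk instance"
#   QUICK=1
#   parallel_safe=1   # hypothetical flag, not an existing blktests knob
#
# A runner could then split tests into parallel and serial buckets:

is_parallel_safe() {
	# Source the test file in a subshell and inspect the flag.
	( parallel_safe=0; . "./$1" >/dev/null 2>&1; [ "$parallel_safe" = 1 ] )
}

classify_tests() {
	parallel_queue=""
	serial_queue=""
	for t in "$@"; do
		if is_parallel_safe "$t"; then
			parallel_queue="$parallel_queue $t"
		else
			serial_queue="$serial_queue $t"
		fi
	done
	echo "parallel:$parallel_queue"
	echo "serial:$serial_queue"
}
```

The parallel bucket could then be handed to N workers while the serial
bucket keeps today's one-at-a-time behaviour.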

> FYI, from the past LSFMM sessions and hallway talks, major feedbacks I had
> received are these two:
> 
>  1. blktests CI infra looks missing (other than CKI by Redhat)
>     -> Some activities are ongoing to start blktests CI service.
>        I hope the status are shared at the session.
> 
>  2. blktests are rather difficult to start using for some new users
>     -> I think config example is demanded, so that new users can
>        just copy it to start the first run, and understand the
>        config options easily.
> 
>> - Do we need some sort of configuration tool which allows to setup a
>>   config? I'd still have a TODO to provide a config example with all
>>   knobs which influence blktests, but I wonder if we should go a step
>>   further here, e.g. something like kdevops has?
> 
> Do you mean the "make menuconfig" style? Most of the blktests users are
> familiar with menuconfig, so that would be an idea. If users really want
> it, we can think of it. IMO, blktests still do not have so many options,
> then config.example would be simpler and more appropriate, probably.
> 
>> - Which area do we lack tests? Should we just add an initial simple
>>   tests for the missing areas, so the basic infra is there and thus
>>   lowering the bar for adding new tests?
> 
> To identify the uncovered area, I think code coverage will be useful. A few
> years ago, I measured it and shared in LSFMM, but that measurement was done for
> each source tree directory. The coverage ratio by source file will be more
> helpful to identify the missing area. I don't have time slot to measure it,
> so if anyone can do it and share the result, it will be appreciated. Once we
> know the missing areas, it sounds a good idea to add initial samples for each
> of the areas.
> 
>> - The recent addition of kmemleak shows it's a great idea to enable more
>>   of the kernel test infrastructure when running the tests.
> 
> Completely agreed.
> 
>>   Are there more such things we could/should enable?
> 
> I'm also interested in this question :)
> 
>> - I would like to hear from Shin'ichiro if he is happy how things
>>   are going? :)
> 
> More importantly, I would like to listen to voices from storage sub-system
> developers to see if they are happy or not, especially the maintainers.
> 
> From my view, blktests keep on finding kernel bugs. I think it demonstrates the
> value of this community effort, and I'm happy about it. Said that, I find what
> blktests can improve more, of course. Here I share the list of improvement
> opportunities from my view point (I already mentioned the first three items).
> 
>  1. We can have more CI infra to make the most of blktests
>  2. We can add config examples to help new users
>  3. We can measure code coverage to identify missing test areas
>  4. Long standing failures make test result reports dirty
>     - I feel lockdep WARNs are tend to be left unfixed rather long period.
>       How can we gather effort to fix them?

I agree regarding lockdep; recently we did see quite a few lockdep splats.
That said, I believe the number has dropped significantly and only a small
set remains. From what I can tell, most of the outstanding lockdep issues
are related to fs-reclaim paths recursing into the block layer while the
queue is frozen. We should be able to resolve most of these soon, or at
least before the conference. If anything is still outstanding after that,
we can discuss it during the conference and work toward addressing it as
quickly as possible.

>  5. We can refactor and clean up blktests framework for ease of maintainance
>       (e.g. trap handling)
>  6. Some users run blktests with built-in kernel modules, which makes a number
>     of test cases skipped. We can add more built-in kernel modules support to
>     expand test coverage for such use case.

Thanks,
--Nilay


* Re: [LSF/MM/BPF ATTEND][LSF/MM/BPF TOPIC] : blktests: status, expansion plan for the storage stack test framework
  2026-02-13 11:23   ` Shinichiro Kawasaki
  2026-02-13 14:18     ` Haris Iqbal
  2026-02-15 18:38     ` Nilay Shroff
@ 2026-02-15 21:18     ` Haris Iqbal
  2026-02-16  0:33       ` Chaitanya Kulkarni
                         ` (2 more replies)
  2 siblings, 3 replies; 25+ messages in thread
From: Haris Iqbal @ 2026-02-15 21:18 UTC (permalink / raw)
  To: Shinichiro Kawasaki
  Cc: Daniel Wagner, Chaitanya Kulkarni, linux-block@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-nvme@lists.infradead.org,
	lsf-pc@lists.linux-foundation.org, Bart Van Assche,
	Hannes Reinecke, hch, Jens Axboe, sagi@grimberg.me, tytso@mit.edu,
	Johannes Thumshirn, Christian Brauner, Martin K. Petersen,
	linux-fsdevel@vger.kernel.org, Javier González,
	willy@infradead.org, Jan Kara, amir73il@gmail.com, vbabka@suse.cz,
	Damien Le Moal

On Fri, Feb 13, 2026 at 12:25 PM Shinichiro Kawasaki
<shinichiro.kawasaki@wdc.com> wrote:
>
> On Feb 12, 2026 / 08:52, Daniel Wagner wrote:
> > On Wed, Feb 11, 2026 at 08:35:30PM +0000, Chaitanya Kulkarni wrote:
> > >    For the storage track at LSFMMBPF2026, I propose a session dedicated to
> > >    blktests to discuss expansion plan and CI integration progress.
> >
> > Thanks for proposing this topic.
>
> Chaitanya, my thank also goes to you.
>
> > Just a few random topics which come to mind we could discuss:
> >
> > - blktests has gain a bit of traction and some folks run on regular
> >   basis these tests. Can we gather feedback from them, what is working
> >   good, what is not? Are there feature wishes?
>
> Good topic, I also would like to hear about it.
>
> FYI, from the past LSFMM sessions and hallway talks, major feedbacks I had
> received are these two:
>
>  1. blktests CI infra looks missing (other than CKI by Redhat)
>     -> Some activities are ongoing to start blktests CI service.
>        I hope the status are shared at the session.
>
>  2. blktests are rather difficult to start using for some new users
>     -> I think config example is demanded, so that new users can
>        just copy it to start the first run, and understand the
>        config options easily.
>
> > - Do we need some sort of configuration tool which allows to setup a
> >   config? I'd still have a TODO to provide a config example with all
> >   knobs which influence blktests, but I wonder if we should go a step
> >   further here, e.g. something like kdevops has?
>
> Do you mean the "make menuconfig" style? Most of the blktests users are
> familiar with menuconfig, so that would be an idea. If users really want
> it, we can think of it. IMO, blktests still do not have so many options,
> then config.example would be simpler and more appropriate, probably.
>
> > - Which area do we lack tests? Should we just add an initial simple
> >   tests for the missing areas, so the basic infra is there and thus
> >   lowering the bar for adding new tests?
>
> To identify the uncovered area, I think code coverage will be useful. A few
> years ago, I measured it and shared in LSFMM, but that measurement was done for
> each source tree directory. The coverage ratio by source file will be more
> helpful to identify the missing area. I don't have time slot to measure it,
> so if anyone can do it and share the result, it will be appreciated. Once we
> know the missing areas, it sounds a good idea to add initial samples for each
> of the areas.
>
> > - The recent addition of kmemleak shows it's a great idea to enable more
> >   of the kernel test infrastructure when running the tests.
>
> Completely agreed.
>
> >   Are there more such things we could/should enable?
>
> I'm also interested in this question :)
>
> > - I would like to hear from Shin'ichiro if he is happy how things
> >   are going? :)
>
> More importantly, I would like to listen to voices from storage sub-system
> developers to see if they are happy or not, especially the maintainers.
>
> From my view, blktests keep on finding kernel bugs. I think it demonstrates the
> value of this community effort, and I'm happy about it. Said that, I find what
> blktests can improve more, of course. Here I share the list of improvement
> opportunities from my view point (I already mentioned the first three items).

A possible feature for blktests could be integration with something
like virtme-ng. Running in a VM can be versatile and fast, and runs can
be parallelized by spawning multiple VMs simultaneously.
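As a rough sketch of what that could look like (the kernel path, blktests
location, and the per-group split are illustrative assumptions; `VNG` is
overridable so the loop can be dry-run):

```shell
# Hypothetical sketch: one virtme-ng (vng) VM per blktests group, all
# running in parallel, each writing to its own log file.
VNG=${VNG:-vng}                     # override with "echo" for a dry run
KDIR=${KDIR:-$HOME/src/linux}       # assumed kernel build directory
TEST_GROUPS=${TEST_GROUPS:-"block loop nvme"}   # illustrative split

run_groups() {
	for g in $TEST_GROUPS; do
		# Each group gets its own disposable VM and log.
		$VNG --run "$KDIR" --cpus 2 --memory 2G \
		     --exec "cd ~/src/blktests && ./check $g" \
		     > "result-$g.log" 2>&1 &
	done
	wait
}
```

The per-group logs could then be collected and merged into one report.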

>
>  1. We can have more CI infra to make the most of blktests
>  2. We can add config examples to help new users
>  3. We can measure code coverage to identify missing test areas
>  4. Long standing failures make test result reports dirty
>     - I feel lockdep WARNs are tend to be left unfixed rather long period.
>       How can we gather effort to fix them?
>  5. We can refactor and clean up blktests framework for ease of maintainance
>       (e.g. trap handling)
>  6. Some users run blktests with built-in kernel modules, which makes a number
>     of test cases skipped. We can add more built-in kernel modules support to
>     expand test coverage for such use case.


* Re: [LSF/MM/BPF ATTEND][LSF/MM/BPF TOPIC] : blktests: status, expansion plan for the storage stack test framework
  2026-02-15 21:18     ` Haris Iqbal
@ 2026-02-16  0:33       ` Chaitanya Kulkarni
  2026-02-23  7:44       ` Johannes Thumshirn
  2026-02-23 17:08       ` Bart Van Assche
  2 siblings, 0 replies; 25+ messages in thread
From: Chaitanya Kulkarni @ 2026-02-16  0:33 UTC (permalink / raw)
  To: Haris Iqbal, Shinichiro Kawasaki
  Cc: Daniel Wagner, linux-block@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-nvme@lists.infradead.org,
	lsf-pc@lists.linux-foundation.org, Bart Van Assche,
	Hannes Reinecke, hch, Jens Axboe, sagi@grimberg.me, tytso@mit.edu,
	Johannes Thumshirn, Christian Brauner, Martin K. Petersen,
	linux-fsdevel@vger.kernel.org, Javier González,
	willy@infradead.org, Jan Kara, amir73il@gmail.com, vbabka@suse.cz,
	Damien Le Moal

On 2/15/26 13:18, Haris Iqbal wrote:
> On Fri, Feb 13, 2026 at 12:25 PM Shinichiro Kawasaki
> <shinichiro.kawasaki@wdc.com> wrote:
>> On Feb 12, 2026 / 08:52, Daniel Wagner wrote:
>>> On Wed, Feb 11, 2026 at 08:35:30PM +0000, Chaitanya Kulkarni wrote:
>>>>     For the storage track at LSFMMBPF2026, I propose a session dedicated to
>>>>     blktests to discuss expansion plan and CI integration progress.
>>> Thanks for proposing this topic.
>> Chaitanya, my thank also goes to you.
>>
>>> Just a few random topics which come to mind we could discuss:
>>>
>>> - blktests has gain a bit of traction and some folks run on regular
>>>    basis these tests. Can we gather feedback from them, what is working
>>>    good, what is not? Are there feature wishes?
>> Good topic, I also would like to hear about it.
>>
>> FYI, from the past LSFMM sessions and hallway talks, major feedbacks I had
>> received are these two:
>>
>>   1. blktests CI infra looks missing (other than CKI by Redhat)
>>      -> Some activities are ongoing to start blktests CI service.
>>         I hope the status are shared at the session.
>>
>>   2. blktests are rather difficult to start using for some new users
>>      -> I think config example is demanded, so that new users can
>>         just copy it to start the first run, and understand the
>>         config options easily.
>>
>>> - Do we need some sort of configuration tool which allows to setup a
>>>    config? I'd still have a TODO to provide a config example with all
>>>    knobs which influence blktests, but I wonder if we should go a step
>>>    further here, e.g. something like kdevops has?
>> Do you mean the "make menuconfig" style? Most of the blktests users are
>> familiar with menuconfig, so that would be an idea. If users really want
>> it, we can think of it. IMO, blktests still do not have so many options,
>> then config.example would be simpler and more appropriate, probably.
>>
>>> - Which area do we lack tests? Should we just add an initial simple
>>>    tests for the missing areas, so the basic infra is there and thus
>>>    lowering the bar for adding new tests?
>> To identify the uncovered area, I think code coverage will be useful. A few
>> years ago, I measured it and shared in LSFMM, but that measurement was done for
>> each source tree directory. The coverage ratio by source file will be more
>> helpful to identify the missing area. I don't have time slot to measure it,
>> so if anyone can do it and share the result, it will be appreciated. Once we
>> know the missing areas, it sounds a good idea to add initial samples for each
>> of the areas.
>>
>>> - The recent addition of kmemleak shows it's a great idea to enable more
>>>    of the kernel test infrastructure when running the tests.
>> Completely agreed.
>>
>>>    Are there more such things we could/should enable?
>> I'm also interested in this question :)
>>
>>> - I would like to hear from Shin'ichiro if he is happy how things
>>>    are going? :)
>> More importantly, I would like to listen to voices from storage sub-system
>> developers to see if they are happy or not, especially the maintainers.
>>
>>  From my view, blktests keep on finding kernel bugs. I think it demonstrates the
>> value of this community effort, and I'm happy about it. Said that, I find what
>> blktests can improve more, of course. Here I share the list of improvement
>> opportunities from my view point (I already mentioned the first three items).
> A possible feature for blktest could be integration with something
> like virtme-ng.
> Running on VM can be versatile and fast. The run can be made parallel
> too, by spawning multiple VMs simultaneously.

This is my goal, and I had proposed this topic a few years back: to have
blktests integrated with VMs. I've spent some time on an initial setup
but never got to finish it.

If someone is working on it, I'll be happy to help, review, and also test
the implementation.

-ck




* Re: [LSF/MM/BPF ATTEND][LSF/MM/BPF TOPIC] : blktests: status, expansion plan for the storage stack test framework
  2026-02-15 21:18     ` Haris Iqbal
  2026-02-16  0:33       ` Chaitanya Kulkarni
@ 2026-02-23  7:44       ` Johannes Thumshirn
  2026-02-25 10:15         ` Haris Iqbal
  2026-02-23 17:08       ` Bart Van Assche
  2 siblings, 1 reply; 25+ messages in thread
From: Johannes Thumshirn @ 2026-02-23  7:44 UTC (permalink / raw)
  To: Haris Iqbal, Shinichiro Kawasaki
  Cc: Daniel Wagner, Chaitanya Kulkarni, linux-block@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-nvme@lists.infradead.org,
	lsf-pc@lists.linux-foundation.org, Bart Van Assche,
	Hannes Reinecke, hch, Jens Axboe, sagi@grimberg.me, tytso@mit.edu,
	Christian Brauner, Martin K. Petersen,
	linux-fsdevel@vger.kernel.org, Javier González,
	willy@infradead.org, Jan Kara, amir73il@gmail.com, vbabka@suse.cz,
	Damien Le Moal

On 2/15/26 10:18 PM, Haris Iqbal wrote:
>>  From my view, blktests keep on finding kernel bugs. I think it demonstrates the
>> value of this community effort, and I'm happy about it. Said that, I find what
>> blktests can improve more, of course. Here I share the list of improvement
>> opportunities from my view point (I already mentioned the first three items).
> A possible feature for blktest could be integration with something
> like virtme-ng.
> Running on VM can be versatile and fast. The run can be made parallel
> too, by spawning multiple VMs simultaneously.

This is actually rather trivial to solve. I have some pre-made bits for
fstests that can be adapted for blktests as well:

vng \
     --user=root -v --name vng-tcmu-runner \
     -a loglevel=3 \
     --run $KDIR \
     --cpus=8 --memory=8G \
     --exec "~johannes/src/ci/run-fstests.sh" \
     --qemu-opts="-device virtio-scsi,id=scsi0 -drive file=/dev/sda,format=raw,if=none,id=zbc0 -device scsi-block,bus=scsi0.0,drive=zbc0" \
     --qemu-opts="-device virtio-scsi,id=scsi1 -drive file=/dev/sdb,format=raw,if=none,id=zbc1 -device scsi-block,bus=scsi1.0,drive=zbc1"

and run-fstests.sh is:

#!/bin/sh
# SPDX-License-Identifier: GPL-2.0

DIR="/tmp/"
MKFS="mkfs.btrfs -f"
FSTESTS_DIR="/home/johannes/src/fstests"
HOSTCONF="$FSTESTS_DIR/configs/$(hostname -s)"
TESTDEV="$(grep TEST_DEV $HOSTCONF | cut -d '=' -f 2)"

mkdir -p $DIR/{test,scratch,results}
$MKFS $TESTDEV

cd $FSTESTS_DIR
./check -x raid

I'm not sure it makes sense to include this in blktests, other than
maybe providing an example in the README.


Byte,

     Johannes



* Re: [LSF/MM/BPF ATTEND][LSF/MM/BPF TOPIC] : blktests: status, expansion plan for the storage stack test framework
  2026-02-15 21:18     ` Haris Iqbal
  2026-02-16  0:33       ` Chaitanya Kulkarni
  2026-02-23  7:44       ` Johannes Thumshirn
@ 2026-02-23 17:08       ` Bart Van Assche
  2026-02-25  2:55         ` Chaitanya Kulkarni
  2026-02-25 10:07         ` Haris Iqbal
  2 siblings, 2 replies; 25+ messages in thread
From: Bart Van Assche @ 2026-02-23 17:08 UTC (permalink / raw)
  To: Haris Iqbal, Shinichiro Kawasaki
  Cc: Daniel Wagner, Chaitanya Kulkarni, linux-block@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-nvme@lists.infradead.org,
	lsf-pc@lists.linux-foundation.org, Hannes Reinecke, hch,
	Jens Axboe, sagi@grimberg.me, tytso@mit.edu, Johannes Thumshirn,
	Christian Brauner, Martin K. Petersen,
	linux-fsdevel@vger.kernel.org, Javier González,
	willy@infradead.org, Jan Kara, amir73il@gmail.com, vbabka@suse.cz,
	Damien Le Moal

On 2/15/26 1:18 PM, Haris Iqbal wrote:
> A possible feature for blktest could be integration with something
> like virtme-ng.
> Running on VM can be versatile and fast. The run can be made parallel
> too, by spawning multiple VMs simultaneously.
Hmm ... this probably would break tests that measure performance and
also tests that modify data or reservations of a physical storage
device.

Bart.


* Re: [LSF/MM/BPF ATTEND][LSF/MM/BPF TOPIC] : blktests: status, expansion plan for the storage stack test framework
  2026-02-23 17:08       ` Bart Van Assche
@ 2026-02-25  2:55         ` Chaitanya Kulkarni
  2026-02-25 10:07         ` Haris Iqbal
  1 sibling, 0 replies; 25+ messages in thread
From: Chaitanya Kulkarni @ 2026-02-25  2:55 UTC (permalink / raw)
  To: Bart Van Assche, Haris Iqbal, Shinichiro Kawasaki
  Cc: Daniel Wagner, linux-block@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-nvme@lists.infradead.org,
	lsf-pc@lists.linux-foundation.org, Hannes Reinecke, hch,
	Jens Axboe, sagi@grimberg.me, tytso@mit.edu, Johannes Thumshirn,
	Christian Brauner, Martin K. Petersen,
	linux-fsdevel@vger.kernel.org, Javier González,
	willy@infradead.org, Jan Kara, amir73il@gmail.com, vbabka@suse.cz,
	Damien Le Moal

On 2/23/26 09:08, Bart Van Assche wrote:
> On 2/15/26 1:18 PM, Haris Iqbal wrote:
>> A possible feature for blktest could be integration with something
>> like virtme-ng.
>> Running on VM can be versatile and fast. The run can be made parallel
>> too, by spawning multiple VMs simultaneously.
> Hmm ... this probably would break tests that measure performance and
> also tests that modify data or reservations of a physical storage
> device.
>
> Bart.

We could always add a flag and mark the tests that are parallel-compatible,
so we don't have to enable parallel execution by default.

WDYT?

-ck




* Re: [LSF/MM/BPF ATTEND][LSF/MM/BPF TOPIC] : blktests: status, expansion plan for the storage stack test framework
  2026-02-23 17:08       ` Bart Van Assche
  2026-02-25  2:55         ` Chaitanya Kulkarni
@ 2026-02-25 10:07         ` Haris Iqbal
  2026-02-25 16:29           ` Bart Van Assche
  1 sibling, 1 reply; 25+ messages in thread
From: Haris Iqbal @ 2026-02-25 10:07 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Shinichiro Kawasaki, Daniel Wagner, Chaitanya Kulkarni,
	linux-block@vger.kernel.org, linux-scsi@vger.kernel.org,
	linux-nvme@lists.infradead.org, lsf-pc@lists.linux-foundation.org,
	Hannes Reinecke, hch, Jens Axboe, sagi@grimberg.me, tytso@mit.edu,
	Johannes Thumshirn, Christian Brauner, Martin K. Petersen,
	linux-fsdevel@vger.kernel.org, Javier González,
	willy@infradead.org, Jan Kara, amir73il@gmail.com, vbabka@suse.cz,
	Damien Le Moal

On Mon, Feb 23, 2026 at 6:08 PM Bart Van Assche <bvanassche@acm.org> wrote:
>
> On 2/15/26 1:18 PM, Haris Iqbal wrote:
> > A possible feature for blktest could be integration with something
> > like virtme-ng.
> > Running on VM can be versatile and fast. The run can be made parallel
> > too, by spawning multiple VMs simultaneously.
> Hmm ... this probably would break tests that measure performance and
> also tests that modify data or reservations of a physical storage
> device.

Performance-related tests can be skipped when running in a virtual environment.
Regarding data modification: if the tests do not involve a crash or
reboot, the VMs can be started in "snapshot" mode. This gives a
number of advantages.
a) Data modifications do not persist once the VM is shut down, so
the disk is clean for the next test cycle.
b) Using just a single set of qcow files, one can bring up any number
of VMs in snapshot mode. Data written while the VM is running can
be safely read/modified, but it disappears after a reboot.
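To make the "snapshot" point concrete: QEMU's -snapshot flag redirects guest
writes into a temporary overlay, so several VMs can boot from one shared base
image without dirtying it. A hedged sketch (the image path and VM count are
made up; `QEMU` is overridable for a dry run):

```shell
# Hypothetical sketch: boot N throwaway VMs from one shared qcow2 image.
# With -snapshot, guest writes go to a temporary overlay and are
# discarded on shutdown, leaving the base image clean.
QEMU=${QEMU:-qemu-system-x86_64}   # override with "echo" for a dry run
BASE=${BASE:-base.qcow2}           # assumed shared base image

spawn_snapshot_vms() {
	n=$1
	i=1
	while [ "$i" -le "$n" ]; do
		$QEMU -m 2G -smp 2 -snapshot \
		      -drive "file=$BASE,format=qcow2,if=virtio" \
		      -name "blktests-vm$i" > "vm$i.log" 2>&1 &
		i=$((i + 1))
	done
	wait
}
```

Since all writes vanish on shutdown, the same base image can back every
worker in a parallel test run.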

>
> Bart.


* Re: [LSF/MM/BPF ATTEND][LSF/MM/BPF TOPIC] : blktests: status, expansion plan for the storage stack test framework
  2026-02-23  7:44       ` Johannes Thumshirn
@ 2026-02-25 10:15         ` Haris Iqbal
  0 siblings, 0 replies; 25+ messages in thread
From: Haris Iqbal @ 2026-02-25 10:15 UTC (permalink / raw)
  To: Johannes Thumshirn
  Cc: Shinichiro Kawasaki, Daniel Wagner, Chaitanya Kulkarni,
	linux-block@vger.kernel.org, linux-scsi@vger.kernel.org,
	linux-nvme@lists.infradead.org, lsf-pc@lists.linux-foundation.org,
	Bart Van Assche, Hannes Reinecke, hch, Jens Axboe,
	sagi@grimberg.me, tytso@mit.edu, Christian Brauner,
	Martin K. Petersen, linux-fsdevel@vger.kernel.org,
	Javier González, willy@infradead.org, Jan Kara,
	amir73il@gmail.com, vbabka@suse.cz, Damien Le Moal

On Mon, Feb 23, 2026 at 8:44 AM Johannes Thumshirn
<Johannes.Thumshirn@wdc.com> wrote:
>
> On 2/15/26 10:18 PM, Haris Iqbal wrote:
> >>  From my view, blktests keep on finding kernel bugs. I think it demonstrates the
> >> value of this community effort, and I'm happy about it. Said that, I find what
> >> blktests can improve more, of course. Here I share the list of improvement
> >> opportunities from my view point (I already mentioned the first three items).
> > A possible feature for blktest could be integration with something
> > like virtme-ng.
> > Running on VM can be versatile and fast. The run can be made parallel
> > too, by spawning multiple VMs simultaneously.
>
> This is actually rather trivial to solve I have some pre-made things for
> fstests and that can be adopted for blktests as well:
>
> vng \
>      --user=root -v --name vng-tcmu-runner \
>      -a loglevel=3 \
>      --run $KDIR \
>      --cpus=8 --memory=8G \
>      --exec "~johannes/src/ci/run-fstests.sh" \
>      --qemu-opts="-device virtio-scsi,id=scsi0 -drive file=/dev/sda,format=raw,if=none,id=zbc0 -device scsi-block,bus=scsi0.0,drive=zbc0" \
>      --qemu-opts="-device virtio-scsi,id=scsi1 -drive file=/dev/sdb,format=raw,if=none,id=zbc1 -device scsi-block,bus=scsi1.0,drive=zbc1"
>
> and run-fstests.sh is:
>
> #!/bin/sh
> # SPDX-License-Identifier: GPL-2.0
>
> DIR="/tmp/"
> MKFS="mkfs.btrfs -f"
> FSTESTS_DIR="/home/johannes/src/fstests"
> HOSTCONF="$FSTESTS_DIR/configs/$(hostname -s)"
> TESTDEV="$(grep TEST_DEV $HOSTCONF | cut -d '=' -f 2)"
>
> mkdir -p $DIR/{test,scratch,results}
> $MKFS $TESTDEV
>
> cd $FSTESTS_DIR
> ./check -x raid
>
> I'm not sure it'll make sense to include this into blktests other than
> maybe providing an example in the README.

You're right. It is pretty trivial to run on VMs, but only after
everything is set up. Adding that setup to blktests would let anyone
bring up the VMs and run the tests on any system with just a couple
of commands.
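To make the idea concrete, here is a minimal, hypothetical sketch of what
such a wrapper could look like: one virtme-ng VM per test group, spawned
in parallel. The group names and the vng invocation are assumptions for
illustration; by default the sketch substitutes a plain shell for the VM
runner so it can run anywhere.

```shell
#!/bin/sh
# Hypothetical sketch: run blktests test groups in parallel, one VM each.
# In real use RUNNER would be something like:
#   RUNNER='vng --run $KDIR --cpus=2 --memory=2G --exec'
# Here it defaults to "sh -c" so the sketch is self-contained.
RUNNER="${RUNNER:-sh -c}"

for group in block loop nvme; do
    # Each group gets its own VM (or, with the stand-in, its own shell)
    $RUNNER "echo running ./check $group" &
done
wait
echo "all groups finished"
```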

>
>
> Byte,
>
>      Johannes
>

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [LSF/MM/BPF ATTEND][LSF/MM/BPF TOPIC] : blktests: status, expansion plan for the storage stack test framework
  2026-02-25 10:07         ` Haris Iqbal
@ 2026-02-25 16:29           ` Bart Van Assche
  0 siblings, 0 replies; 25+ messages in thread
From: Bart Van Assche @ 2026-02-25 16:29 UTC (permalink / raw)
  To: Haris Iqbal
  Cc: Shinichiro Kawasaki, Daniel Wagner, Chaitanya Kulkarni,
	linux-block@vger.kernel.org, linux-scsi@vger.kernel.org,
	linux-nvme@lists.infradead.org, lsf-pc@lists.linux-foundation.org,
	Hannes Reinecke, hch, Jens Axboe, sagi@grimberg.me, tytso@mit.edu,
	Johannes Thumshirn, Christian Brauner, Martin K. Petersen,
	linux-fsdevel@vger.kernel.org, Javier González,
	willy@infradead.org, Jan Kara, amir73il@gmail.com, vbabka@suse.cz,
	Damien Le Moal

On 2/25/26 2:07 AM, Haris Iqbal wrote:
> Regarding data modification, if the tests do not involve any crash or
> reboot, then the VMs can be started in "snapshot" mode.
I'm not sure that proposal makes sense. If e.g. an NVMe device is
specified in the blktests config file, the intention of the person who
runs the test is probably to test the NVMe driver and/or the NVMe
device. Running blktests against any kind of "snapshot" of the device
changes both the kernel driver and the physical device that actually
get tested. Not modifying the kernel driver or the physical device
under test implies using PCIe passthrough, and as far as I know the
PCIe passthrough mechanism can only be used by one VM at a time.
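(For illustration: PCIe passthrough here means VFIO. A rough sketch of
handing an NVMe device to a single guest might look like the fragment
below; the PCI address 0000:01:00.0 is a placeholder, the commands need
root, and the exact sysfs steps vary by kernel. This is a hardware-setup
fragment, not a runnable script.)

```sh
# Detach the NVMe device from the host nvme driver (placeholder address)
echo 0000:01:00.0 > /sys/bus/pci/devices/0000:01:00.0/driver/unbind
# Let vfio-pci claim the device
echo vfio-pci > /sys/bus/pci/devices/0000:01:00.0/driver_override
echo 0000:01:00.0 > /sys/bus/pci/drivers_probe

# Hand the device to exactly one guest; a second VM cannot claim the
# same VFIO group while this one holds it.
qemu-system-x86_64 -enable-kvm -m 4G \
    -device vfio-pci,host=0000:01:00.0
```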

Bart.

^ permalink raw reply	[flat|nested] 25+ messages in thread

end of thread, other threads:[~2026-02-25 16:30 UTC | newest]

Thread overview: 25+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-02-11 20:35 [LSF/MM/BPF ATTEND][LSF/MM/BPF TOPIC] : blktests: status, expansion plan for the storage stack test framework Chaitanya Kulkarni
2026-02-12  7:52 ` Daniel Wagner
2026-02-12  7:57   ` Johannes Thumshirn
2026-02-13 17:30     ` Bart Van Assche
2026-02-13 17:35       ` James Bottomley
2026-02-13 11:23   ` Shinichiro Kawasaki
2026-02-13 14:18     ` Haris Iqbal
2026-02-15 18:38     ` Nilay Shroff
2026-02-15 21:18     ` Haris Iqbal
2026-02-16  0:33       ` Chaitanya Kulkarni
2026-02-23  7:44       ` Johannes Thumshirn
2026-02-25 10:15         ` Haris Iqbal
2026-02-23 17:08       ` Bart Van Assche
2026-02-25  2:55         ` Chaitanya Kulkarni
2026-02-25 10:07         ` Haris Iqbal
2026-02-25 16:29           ` Bart Van Assche
  -- strict thread matches above, loose matches on Subject: below --
2024-01-09  6:30 Chaitanya Kulkarni
2024-01-09 21:31 ` Bart Van Assche
2024-01-09 22:01   ` Chaitanya Kulkarni
2024-01-09 22:08     ` Bart Van Assche
2024-01-17  8:50 ` Daniel Wagner
2024-01-23 15:07   ` Daniel Wagner
2024-02-14  7:32     ` Shinichiro Kawasaki
2024-02-21 18:32     ` Luis Chamberlain
2024-02-22  9:31       ` Daniel Wagner
2024-02-22 15:54         ` Luis Chamberlain
2024-02-22 16:16           ` Daniel Wagner
