* Re: [LSF/MM/BPF ATTEND][LSF/MM/BPF TOPIC] : blktests: status, expansion plan for the storage stack test framework
2026-02-13 11:23 ` Shinichiro Kawasaki
@ 2026-02-13 14:18 ` Haris Iqbal
2026-02-15 18:38 ` Nilay Shroff
` (2 subsequent siblings)
3 siblings, 0 replies; 28+ messages in thread
From: Haris Iqbal @ 2026-02-13 14:18 UTC (permalink / raw)
To: Shinichiro Kawasaki
Cc: Daniel Wagner, Chaitanya Kulkarni, linux-block@vger.kernel.org,
linux-scsi@vger.kernel.org, linux-nvme@lists.infradead.org,
lsf-pc@lists.linux-foundation.org, Bart Van Assche,
Hannes Reinecke, hch, Jens Axboe, sagi@grimberg.me, tytso@mit.edu,
Johannes Thumshirn, Christian Brauner, Martin K. Petersen,
linux-fsdevel@vger.kernel.org, Javier González,
willy@infradead.org, Jan Kara, amir73il@gmail.com, vbabka@suse.cz,
Damien Le Moal
On Fri, Feb 13, 2026 at 12:25 PM Shinichiro Kawasaki
<shinichiro.kawasaki@wdc.com> wrote:
>
> On Feb 12, 2026 / 08:52, Daniel Wagner wrote:
> > On Wed, Feb 11, 2026 at 08:35:30PM +0000, Chaitanya Kulkarni wrote:
> > > For the storage track at LSFMMBPF2026, I propose a session dedicated to
> > > blktests to discuss expansion plan and CI integration progress.
I am interested in this topic.
> >
> > Thanks for proposing this topic.
>
> Chaitanya, my thank also goes to you.
>
> > Just a few random topics which come to mind we could discuss:
> >
> > - blktests has gain a bit of traction and some folks run on regular
> > basis these tests. Can we gather feedback from them, what is working
> > good, what is not? Are there feature wishes?
>
> Good topic, I also would like to hear about it.
>
> FYI, from the past LSFMM sessions and hallway talks, major feedbacks I had
> received are these two:
>
> 1. blktests CI infra looks missing (other than CKI by Redhat)
> -> Some activities are ongoing to start blktests CI service.
> I hope the status are shared at the session.
>
> 2. blktests are rather difficult to start using for some new users
> -> I think config example is demanded, so that new users can
> just copy it to start the first run, and understand the
> config options easily.
+1 to this.
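To make the "copy a config example" idea concrete, here is a hedged sketch of what such a file could contain. TEST_DEVS, QUICK_RUN, TIMEOUT, and EXCLUDE are config variables described in the blktests README; the concrete values below are purely illustrative, not a recommended setup.

```shell
# Hypothetical blktests ./config sketch (the file is sourced by bash).
# Variable names follow the blktests README; values are examples only.
TEST_DEVS=(/dev/nvme0n1)   # devices that (destructive) tests may use
QUICK_RUN=1                # shorten long-running tests
TIMEOUT=30                 # per-test time budget in seconds for quick runs
EXCLUDE=(srp)              # test groups to skip on this machine
echo "devices=${#TEST_DEVS[@]} quick=$QUICK_RUN timeout=$TIMEOUT"
```

A new user could copy such a file to the blktests top directory and run ./check, adjusting TEST_DEVS to a disposable device.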
>
> > - Do we need some sort of configuration tool which allows to setup a
> > config? I'd still have a TODO to provide a config example with all
> > knobs which influence blktests, but I wonder if we should go a step
> > further here, e.g. something like kdevops has?
>
> Do you mean the "make menuconfig" style? Most of the blktests users are
> familiar with menuconfig, so that would be an idea. If users really want
> it, we can think of it. IMO, blktests still do not have so many options,
> then config.example would be simpler and more appropriate, probably.
>
> > - Which area do we lack tests? Should we just add an initial simple
> > tests for the missing areas, so the basic infra is there and thus
> > lowering the bar for adding new tests?
>
> To identify the uncovered area, I think code coverage will be useful. A few
> years ago, I measured it and shared in LSFMM, but that measurement was done for
> each source tree directory. The coverage ratio by source file will be more
> helpful to identify the missing area. I don't have time slot to measure it,
> so if anyone can do it and share the result, it will be appreciated. Once we
> know the missing areas, it sounds a good idea to add initial samples for each
> of the areas.
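Once per-source-file numbers exist (e.g. from an lcov capture on a CONFIG_GCOV_KERNEL build), ranking them to surface the least-covered files is a small scripting step. The listing format and figures below are invented stand-ins for real `lcov --list` output, just to illustrate the idea:

```shell
# Rank per-file coverage to spot candidates for new tests.
# The "file|percent" listing below is invented sample data.
cat <<'EOF' > /tmp/blk-cov.txt
block/blk-core.c|91.2
block/blk-zoned.c|34.5
block/blk-throttle.c|12.8
EOF
# lowest-coverage files first; these are where initial tests could go
sort -t'|' -k2 -n /tmp/blk-cov.txt | head -n 2
```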
>
> > - The recent addition of kmemleak shows it's a great idea to enable more
> > of the kernel test infrastructure when running the tests.
>
> Completely agreed.
>
> > Are there more such things we could/should enable?
>
> I'm also interested in this question :)
>
> > - I would like to hear from Shin'ichiro if he is happy how things
> > are going? :)
>
> More importantly, I would like to listen to voices from storage sub-system
> developers to see if they are happy or not, especially the maintainers.
I like the idea of blktests.
We have internal tests which we run for RNBD (and RTRS), and I plan to
port the RNBD ones to blktests.
>
> From my view, blktests keep on finding kernel bugs. I think it demonstrates the
> value of this community effort, and I'm happy about it. Said that, I find what
> blktests can improve more, of course. Here I share the list of improvement
> opportunities from my view point (I already mentioned the first three items).
>
> 1. We can have more CI infra to make the most of blktests
> 2. We can add config examples to help new users
> 3. We can measure code coverage to identify missing test areas
> 4. Long standing failures make test result reports dirty
> - I feel lockdep WARNs are tend to be left unfixed rather long period.
> How can we gather effort to fix them?
> 5. We can refactor and clean up blktests framework for ease of maintainance
> (e.g. trap handling)
> 6. Some users run blktests with built-in kernel modules, which makes a number
> of test cases skipped. We can add more built-in kernel modules support to
> expand test coverage for such use case.
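On the built-in modules point: the kernel lists built-in drivers in modules.builtin, so a harness can distinguish them from loadable modules. The sample file below is invented for illustration; a real check would read /lib/modules/$(uname -r)/modules.builtin.

```shell
# Sketch: telling built-in drivers from loadable modules. The sample
# modules.builtin content is made up; real helpers would consult the
# installed kernel's copy.
list=/tmp/modules.builtin
printf 'kernel/drivers/block/null_blk/null_blk.ko\n' > "$list"
is_builtin() { grep -q "/$1\.ko\$" "$list"; }
is_builtin null_blk && echo "null_blk: built in"
is_builtin scsi_debug || echo "scsi_debug: not built in"
```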
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [LSF/MM/BPF ATTEND][LSF/MM/BPF TOPIC] : blktests: status, expansion plan for the storage stack test framework
2026-02-13 11:23 ` Shinichiro Kawasaki
2026-02-13 14:18 ` Haris Iqbal
@ 2026-02-15 18:38 ` Nilay Shroff
2026-04-21 6:19 ` Shin'ichiro Kawasaki
2026-02-15 21:18 ` Haris Iqbal
2026-04-21 6:37 ` Shin'ichiro Kawasaki
3 siblings, 1 reply; 28+ messages in thread
From: Nilay Shroff @ 2026-02-15 18:38 UTC (permalink / raw)
To: Shinichiro Kawasaki, Daniel Wagner
Cc: Chaitanya Kulkarni, linux-block@vger.kernel.org,
linux-scsi@vger.kernel.org, linux-nvme@lists.infradead.org,
lsf-pc@lists.linux-foundation.org, Bart Van Assche,
Hannes Reinecke, hch, Jens Axboe, sagi@grimberg.me, tytso@mit.edu,
Johannes Thumshirn, Christian Brauner, Martin K. Petersen,
linux-fsdevel@vger.kernel.org, Javier González,
willy@infradead.org, Jan Kara, amir73il@gmail.com, vbabka@suse.cz,
Damien Le Moal
On 2/13/26 4:53 PM, Shinichiro Kawasaki wrote:
> On Feb 12, 2026 / 08:52, Daniel Wagner wrote:
>> On Wed, Feb 11, 2026 at 08:35:30PM +0000, Chaitanya Kulkarni wrote:
>>> For the storage track at LSFMMBPF2026, I propose a session dedicated to
>>> blktests to discuss expansion plan and CI integration progress.
>>
>> Thanks for proposing this topic.
>
> Chaitanya, my thank also goes to you.
>
Yes, thanks for proposing this!
>> Just a few random topics which come to mind we could discuss:
>>
>> - blktests has gain a bit of traction and some folks run on regular
>> basis these tests. Can we gather feedback from them, what is working
>> good, what is not? Are there feature wishes?
>
> Good topic, I also would like to hear about it.
>
One improvement I'd like to highlight relates to how blktests are executed
today. So far we have been running blktests serially; would it be possible to
run tests in parallel, to improve test turnaround time and make large-scale or
CI-based testing more efficient? For instance, we could add a "parallel_safe"
tag marking tests that don't modify global kernel state, so they can be safely
offloaded to parallel workers. Such a tag would allow the runner to
distinguish:
Safe tests: tests that only perform I/O on a specific, non-shared device or
check static kernel parameters.
Unsafe tests: tests that reload kernel modules, modify global /sys or /proc
entries, or require exclusive access to specific hardware.
Yes, adding parallel execution support will require framework/design changes.
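As a thought experiment, the partitioning step such a runner would perform can be sketched in a few lines. The PARALLEL_SAFE marker name is hypothetical (blktests has no such flag today), and the test stubs below stand in for real test scripts:

```shell
# Partition test scripts by a hypothetical PARALLEL_SAFE=1 marker.
# Stub files stand in for real blktests test scripts.
d=$(mktemp -d)
cd "$d"
printf 'PARALLEL_SAFE=1\n' > 001
printf 'PARALLEL_SAFE=1\n' > 002
printf '# reloads modules\n' > 003
safe=$(grep -l '^PARALLEL_SAFE=1' 0* | tr '\n' ' ')
unsafe=$(grep -L '^PARALLEL_SAFE=1' 0* | tr '\n' ' ')
echo "safe: $safe"
echo "unsafe: $unsafe"
# safe tests could then be fanned out to workers, e.g. via xargs -P,
# while unsafe tests keep running serially afterwards.
```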
> FYI, from the past LSFMM sessions and hallway talks, major feedbacks I had
> received are these two:
>
> 1. blktests CI infra looks missing (other than CKI by Redhat)
> -> Some activities are ongoing to start blktests CI service.
> I hope the status are shared at the session.
>
> 2. blktests are rather difficult to start using for some new users
> -> I think config example is demanded, so that new users can
> just copy it to start the first run, and understand the
> config options easily.
>
>> - Do we need some sort of configuration tool which allows to setup a
>> config? I'd still have a TODO to provide a config example with all
>> knobs which influence blktests, but I wonder if we should go a step
>> further here, e.g. something like kdevops has?
>
> Do you mean the "make menuconfig" style? Most of the blktests users are
> familiar with menuconfig, so that would be an idea. If users really want
> it, we can think of it. IMO, blktests still do not have so many options,
> then config.example would be simpler and more appropriate, probably.
>
>> - Which area do we lack tests? Should we just add an initial simple
>> tests for the missing areas, so the basic infra is there and thus
>> lowering the bar for adding new tests?
>
> To identify the uncovered area, I think code coverage will be useful. A few
> years ago, I measured it and shared in LSFMM, but that measurement was done for
> each source tree directory. The coverage ratio by source file will be more
> helpful to identify the missing area. I don't have time slot to measure it,
> so if anyone can do it and share the result, it will be appreciated. Once we
> know the missing areas, it sounds a good idea to add initial samples for each
> of the areas.
>
>> - The recent addition of kmemleak shows it's a great idea to enable more
>> of the kernel test infrastructure when running the tests.
>
> Completely agreed.
>
>> Are there more such things we could/should enable?
>
> I'm also interested in this question :)
>
>> - I would like to hear from Shin'ichiro if he is happy how things
>> are going? :)
>
> More importantly, I would like to listen to voices from storage sub-system
> developers to see if they are happy or not, especially the maintainers.
>
> From my view, blktests keep on finding kernel bugs. I think it demonstrates the
> value of this community effort, and I'm happy about it. Said that, I find what
> blktests can improve more, of course. Here I share the list of improvement
> opportunities from my view point (I already mentioned the first three items).
>
> 1. We can have more CI infra to make the most of blktests
> 2. We can add config examples to help new users
> 3. We can measure code coverage to identify missing test areas
> 4. Long standing failures make test result reports dirty
> - I feel lockdep WARNs are tend to be left unfixed rather long period.
> How can we gather effort to fix them?
I agree regarding lockdep; recently we did see quite a few lockdep splats.
That said, I believe the number has dropped significantly and only a small
set remains. From what I can tell, most of the outstanding lockdep issues
are related to fs-reclaim paths recursing into the block layer while the
queue is frozen. We should be able to resolve most of these soon, or at
least before the conference. If anything is still outstanding after that,
we can discuss it during the conference and work toward addressing it as
quickly as possible.
> 5. We can refactor and clean up blktests framework for ease of maintainance
> (e.g. trap handling)
> 6. Some users run blktests with built-in kernel modules, which makes a number
> of test cases skipped. We can add more built-in kernel modules support to
> expand test coverage for such use case.
Thanks,
--Nilay
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [LSF/MM/BPF ATTEND][LSF/MM/BPF TOPIC] : blktests: status, expansion plan for the storage stack test framework
2026-02-15 18:38 ` Nilay Shroff
@ 2026-04-21 6:19 ` Shin'ichiro Kawasaki
0 siblings, 0 replies; 28+ messages in thread
From: Shin'ichiro Kawasaki @ 2026-04-21 6:19 UTC (permalink / raw)
To: Nilay Shroff
Cc: Daniel Wagner, Chaitanya Kulkarni, linux-block@vger.kernel.org,
linux-scsi@vger.kernel.org, linux-nvme@lists.infradead.org,
lsf-pc@lists.linux-foundation.org, Bart Van Assche,
Hannes Reinecke, hch, Jens Axboe, sagi@grimberg.me, tytso@mit.edu,
Johannes Thumshirn, Christian Brauner, Martin K. Petersen,
linux-fsdevel@vger.kernel.org, Javier González,
willy@infradead.org, Jan Kara, amir73il@gmail.com, vbabka@suse.cz,
Damien Le Moal
On Feb 16, 2026 / 00:08, Nilay Shroff wrote:
>
>
> On 2/13/26 4:53 PM, Shinichiro Kawasaki wrote:
> > On Feb 12, 2026 / 08:52, Daniel Wagner wrote:
> >> On Wed, Feb 11, 2026 at 08:35:30PM +0000, Chaitanya Kulkarni wrote:
> >>> For the storage track at LSFMMBPF2026, I propose a session dedicated to
> >>> blktests to discuss expansion plan and CI integration progress.
> >>
> >> Thanks for proposing this topic.
> >
> > Chaitanya, my thank also goes to you.
> >
> Yes thanks for proposing this!
>
> >> Just a few random topics which come to mind we could discuss:
> >>
> >> - blktests has gain a bit of traction and some folks run on regular
> >> basis these tests. Can we gather feedback from them, what is working
> >> good, what is not? Are there feature wishes?
> >
> > Good topic, I also would like to hear about it.
> >
> One improvement I’d like to highlight is related to how blktests are executed
> today. So far, we’ve been running blktests serially, but if it's possible to
> run tests in parallel to improve test turnaround time and make large-scale or
> CI-based testing more efficient? For instance, adding parallel_safe Tags: Marking tests
> that don't modify global kernel state so they can be safely offloaded to parallel
> workers. Marking parallel_safe tags would allow the runner to distinguish:
>
> Safe Tests: Tests that only perform I/O on a specific, non-shared device or
> check static kernel parameters.
>
> Unsafe Tests: Tests that reload kernel modules, modify global /sys or /proc entries,
> or require exclusive access to specific hardware addresses.
>
> Yes adding parallel execution support shall require framework/design changes.
Hi Nilay, thanks for the idea. I understand that shorter test times will make
CI cycles faster and improve development efficiency.
That said, the safe/unsafe tagging alone may not be enough. I think the
majority of test cases set up kernel modules such as null_blk, scsi_debug, or
the nvme target drivers, so I foresee that most test cases would be "unsafe"
and could not run in parallel.
Also, parallel runs on a single system will affect dmesg and kmemleak
checking: we cannot tell which run caused a given dmesg message or memory leak.
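For the dmesg half of that attribution problem, one common mitigation is writing a per-run marker into the kernel log (via /dev/kmsg, as root) and slicing the log by marker afterwards; kmemleak reports have no comparable per-run attribution. A minimal simulation, using a plain file in place of /dev/kmsg and invented test names:

```shell
# Attribute log lines to a run via start markers. A real harness would
# echo the markers into /dev/kmsg; here a plain file simulates the log.
log=/tmp/fake-kmsg
{
  echo 'blktests: start nvme/005'
  echo 'WARNING: possible recursive locking'
  echo 'blktests: start nbd/002'
  echo 'INFO: ok'
} > "$log"
# lines belonging to nvme/005 = from its marker up to the next marker
sed -n '/start nvme\/005/,/start nbd\/002/p' "$log" | grep -c WARNING
```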
For runtime reduction through parallel runs, I guess running blktests on VMs
might be a good approach, as Haris pointed out. Anyway, this topic will need
more discussion.
[...]
> > 4. Long standing failures make test result reports dirty
> > - I feel lockdep WARNs are tend to be left unfixed rather long period.
> > How can we gather effort to fix them?
>
> I agree regarding lockdep; recently we did see quite a few lockdep splats.
> That said, I believe the number has dropped significantly and only a small
> set remains. From what I can tell, most of the outstanding lockdep issues
> are related to fs-reclaim paths recursing into the block layer while the
> queue is frozen. We should be able to resolve most of these soon, or at
> least before the conference. If anything is still outstanding after that,
> we can discuss it during the conference and work toward addressing it as
> quickly as possible.
Taking this chance, I'd like to express my appreciation for the effort to
resolve the lockdep issues. It is great that a number of lockdep splats are
already fixed. That said, two lockdep issues are still observed with the v7.0
kernel, at nvme/005 and nbd/002 [1]. I would like to draw attention to these
failures.
[1] https://lore.kernel.org/linux-block/ynmi72x5wt5ooljjafebhcarit3pvu6axkslqenikb2p5txe57@ldytqa2t4i2x/
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [LSF/MM/BPF ATTEND][LSF/MM/BPF TOPIC] : blktests: status, expansion plan for the storage stack test framework
2026-02-13 11:23 ` Shinichiro Kawasaki
2026-02-13 14:18 ` Haris Iqbal
2026-02-15 18:38 ` Nilay Shroff
@ 2026-02-15 21:18 ` Haris Iqbal
2026-02-16 0:33 ` Chaitanya Kulkarni
` (2 more replies)
2026-04-21 6:37 ` Shin'ichiro Kawasaki
3 siblings, 3 replies; 28+ messages in thread
From: Haris Iqbal @ 2026-02-15 21:18 UTC (permalink / raw)
To: Shinichiro Kawasaki
Cc: Daniel Wagner, Chaitanya Kulkarni, linux-block@vger.kernel.org,
linux-scsi@vger.kernel.org, linux-nvme@lists.infradead.org,
lsf-pc@lists.linux-foundation.org, Bart Van Assche,
Hannes Reinecke, hch, Jens Axboe, sagi@grimberg.me, tytso@mit.edu,
Johannes Thumshirn, Christian Brauner, Martin K. Petersen,
linux-fsdevel@vger.kernel.org, Javier González,
willy@infradead.org, Jan Kara, amir73il@gmail.com, vbabka@suse.cz,
Damien Le Moal
On Fri, Feb 13, 2026 at 12:25 PM Shinichiro Kawasaki
<shinichiro.kawasaki@wdc.com> wrote:
>
> On Feb 12, 2026 / 08:52, Daniel Wagner wrote:
> > On Wed, Feb 11, 2026 at 08:35:30PM +0000, Chaitanya Kulkarni wrote:
> > > For the storage track at LSFMMBPF2026, I propose a session dedicated to
> > > blktests to discuss expansion plan and CI integration progress.
> >
> > Thanks for proposing this topic.
>
> Chaitanya, my thank also goes to you.
>
> > Just a few random topics which come to mind we could discuss:
> >
> > - blktests has gain a bit of traction and some folks run on regular
> > basis these tests. Can we gather feedback from them, what is working
> > good, what is not? Are there feature wishes?
>
> Good topic, I also would like to hear about it.
>
> FYI, from the past LSFMM sessions and hallway talks, major feedbacks I had
> received are these two:
>
> 1. blktests CI infra looks missing (other than CKI by Redhat)
> -> Some activities are ongoing to start blktests CI service.
> I hope the status are shared at the session.
>
> 2. blktests are rather difficult to start using for some new users
> -> I think config example is demanded, so that new users can
> just copy it to start the first run, and understand the
> config options easily.
>
> > - Do we need some sort of configuration tool which allows to setup a
> > config? I'd still have a TODO to provide a config example with all
> > knobs which influence blktests, but I wonder if we should go a step
> > further here, e.g. something like kdevops has?
>
> Do you mean the "make menuconfig" style? Most of the blktests users are
> familiar with menuconfig, so that would be an idea. If users really want
> it, we can think of it. IMO, blktests still do not have so many options,
> then config.example would be simpler and more appropriate, probably.
>
> > - Which area do we lack tests? Should we just add an initial simple
> > tests for the missing areas, so the basic infra is there and thus
> > lowering the bar for adding new tests?
>
> To identify the uncovered area, I think code coverage will be useful. A few
> years ago, I measured it and shared in LSFMM, but that measurement was done for
> each source tree directory. The coverage ratio by source file will be more
> helpful to identify the missing area. I don't have time slot to measure it,
> so if anyone can do it and share the result, it will be appreciated. Once we
> know the missing areas, it sounds a good idea to add initial samples for each
> of the areas.
>
> > - The recent addition of kmemleak shows it's a great idea to enable more
> > of the kernel test infrastructure when running the tests.
>
> Completely agreed.
>
> > Are there more such things we could/should enable?
>
> I'm also interested in this question :)
>
> > - I would like to hear from Shin'ichiro if he is happy how things
> > are going? :)
>
> More importantly, I would like to listen to voices from storage sub-system
> developers to see if they are happy or not, especially the maintainers.
>
> From my view, blktests keep on finding kernel bugs. I think it demonstrates the
> value of this community effort, and I'm happy about it. Said that, I find what
> blktests can improve more, of course. Here I share the list of improvement
> opportunities from my view point (I already mentioned the first three items).
A possible feature for blktests could be integration with something
like virtme-ng.
Running in a VM can be versatile and fast. The run can be made parallel
too, by spawning multiple VMs simultaneously.
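The fan-out pattern this implies is ordinary shell job control. In the toy sketch below each "runner" is just an echo; a real runner would invoke virtme-ng (vng) per test group, and the group names are illustrative:

```shell
# Toy stand-in for dispatching test groups to parallel VMs: run_group
# is a placeholder for something like: vng --exec "./check $1"
run_group() { echo "group $1 done"; }
for grp in block loop nvme; do
    run_group "$grp" &
done
wait  # collect all background workers before reporting results
```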
>
> 1. We can have more CI infra to make the most of blktests
> 2. We can add config examples to help new users
> 3. We can measure code coverage to identify missing test areas
> 4. Long standing failures make test result reports dirty
> - I feel lockdep WARNs are tend to be left unfixed rather long period.
> How can we gather effort to fix them?
> 5. We can refactor and clean up blktests framework for ease of maintainance
> (e.g. trap handling)
> 6. Some users run blktests with built-in kernel modules, which makes a number
> of test cases skipped. We can add more built-in kernel modules support to
> expand test coverage for such use case.
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [LSF/MM/BPF ATTEND][LSF/MM/BPF TOPIC] : blktests: status, expansion plan for the storage stack test framework
2026-02-15 21:18 ` Haris Iqbal
@ 2026-02-16 0:33 ` Chaitanya Kulkarni
2026-02-23 7:44 ` Johannes Thumshirn
2026-02-23 17:08 ` Bart Van Assche
2 siblings, 0 replies; 28+ messages in thread
From: Chaitanya Kulkarni @ 2026-02-16 0:33 UTC (permalink / raw)
To: Haris Iqbal, Shinichiro Kawasaki
Cc: Daniel Wagner, linux-block@vger.kernel.org,
linux-scsi@vger.kernel.org, linux-nvme@lists.infradead.org,
lsf-pc@lists.linux-foundation.org, Bart Van Assche,
Hannes Reinecke, hch, Jens Axboe, sagi@grimberg.me, tytso@mit.edu,
Johannes Thumshirn, Christian Brauner, Martin K. Petersen,
linux-fsdevel@vger.kernel.org, Javier González,
willy@infradead.org, Jan Kara, amir73il@gmail.com, vbabka@suse.cz,
Damien Le Moal
On 2/15/26 13:18, Haris Iqbal wrote:
> On Fri, Feb 13, 2026 at 12:25 PM Shinichiro Kawasaki
> <shinichiro.kawasaki@wdc.com> wrote:
>> On Feb 12, 2026 / 08:52, Daniel Wagner wrote:
>>> On Wed, Feb 11, 2026 at 08:35:30PM +0000, Chaitanya Kulkarni wrote:
>>>> For the storage track at LSFMMBPF2026, I propose a session dedicated to
>>>> blktests to discuss expansion plan and CI integration progress.
>>> Thanks for proposing this topic.
>> Chaitanya, my thank also goes to you.
>>
>>> Just a few random topics which come to mind we could discuss:
>>>
>>> - blktests has gain a bit of traction and some folks run on regular
>>> basis these tests. Can we gather feedback from them, what is working
>>> good, what is not? Are there feature wishes?
>> Good topic, I also would like to hear about it.
>>
>> FYI, from the past LSFMM sessions and hallway talks, major feedbacks I had
>> received are these two:
>>
>> 1. blktests CI infra looks missing (other than CKI by Redhat)
>> -> Some activities are ongoing to start blktests CI service.
>> I hope the status are shared at the session.
>>
>> 2. blktests are rather difficult to start using for some new users
>> -> I think config example is demanded, so that new users can
>> just copy it to start the first run, and understand the
>> config options easily.
>>
>>> - Do we need some sort of configuration tool which allows to setup a
>>> config? I'd still have a TODO to provide a config example with all
>>> knobs which influence blktests, but I wonder if we should go a step
>>> further here, e.g. something like kdevops has?
>> Do you mean the "make menuconfig" style? Most of the blktests users are
>> familiar with menuconfig, so that would be an idea. If users really want
>> it, we can think of it. IMO, blktests still do not have so many options,
>> then config.example would be simpler and more appropriate, probably.
>>
>>> - Which area do we lack tests? Should we just add an initial simple
>>> tests for the missing areas, so the basic infra is there and thus
>>> lowering the bar for adding new tests?
>> To identify the uncovered area, I think code coverage will be useful. A few
>> years ago, I measured it and shared in LSFMM, but that measurement was done for
>> each source tree directory. The coverage ratio by source file will be more
>> helpful to identify the missing area. I don't have time slot to measure it,
>> so if anyone can do it and share the result, it will be appreciated. Once we
>> know the missing areas, it sounds a good idea to add initial samples for each
>> of the areas.
>>
>>> - The recent addition of kmemleak shows it's a great idea to enable more
>>> of the kernel test infrastructure when running the tests.
>> Completely agreed.
>>
>>> Are there more such things we could/should enable?
>> I'm also interested in this question 🙂
>>
>>> - I would like to hear from Shin'ichiro if he is happy how things
>>> are going? 🙂
>> More importantly, I would like to listen to voices from storage sub-system
>> developers to see if they are happy or not, especially the maintainers.
>>
>> From my view, blktests keep on finding kernel bugs. I think it demonstrates the
>> value of this community effort, and I'm happy about it. Said that, I find what
>> blktests can improve more, of course. Here I share the list of improvement
>> opportunities from my view point (I already mentioned the first three items).
> A possible feature for blktest could be integration with something
> like virtme-ng.
> Running on VM can be versatile and fast. The run can be made parallel
> too, by spawning multiple VMs simultaneously.
This is my goal, and I had proposed this topic a few years back:
to have blktests integrated with VMs. I've spent some time on an initial
setup but never got to finish it.
If someone is working on it, I'll be happy to help, review, and also test
the implementation.
-ck
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [LSF/MM/BPF ATTEND][LSF/MM/BPF TOPIC] : blktests: status, expansion plan for the storage stack test framework
2026-02-15 21:18 ` Haris Iqbal
2026-02-16 0:33 ` Chaitanya Kulkarni
@ 2026-02-23 7:44 ` Johannes Thumshirn
2026-02-25 10:15 ` Haris Iqbal
2026-04-21 6:05 ` Shin'ichiro Kawasaki
2026-02-23 17:08 ` Bart Van Assche
2 siblings, 2 replies; 28+ messages in thread
From: Johannes Thumshirn @ 2026-02-23 7:44 UTC (permalink / raw)
To: Haris Iqbal, Shinichiro Kawasaki
Cc: Daniel Wagner, Chaitanya Kulkarni, linux-block@vger.kernel.org,
linux-scsi@vger.kernel.org, linux-nvme@lists.infradead.org,
lsf-pc@lists.linux-foundation.org, Bart Van Assche,
Hannes Reinecke, hch, Jens Axboe, sagi@grimberg.me, tytso@mit.edu,
Christian Brauner, Martin K. Petersen,
linux-fsdevel@vger.kernel.org, Javier González,
willy@infradead.org, Jan Kara, amir73il@gmail.com, vbabka@suse.cz,
Damien Le Moal
On 2/15/26 10:18 PM, Haris Iqbal wrote:
>> From my view, blktests keep on finding kernel bugs. I think it demonstrates the
>> value of this community effort, and I'm happy about it. Said that, I find what
>> blktests can improve more, of course. Here I share the list of improvement
>> opportunities from my view point (I already mentioned the first three items).
> A possible feature for blktest could be integration with something
> like virtme-ng.
> Running on VM can be versatile and fast. The run can be made parallel
> too, by spawning multiple VMs simultaneously.
This is actually rather trivial to solve; I have some pre-made things for
fstests that can be adapted for blktests as well:

vng \
    --user=root -v --name vng-tcmu-runner \
    -a loglevel=3 \
    --run $KDIR \
    --cpus=8 --memory=8G \
    --exec "~johannes/src/ci/run-fstests.sh" \
    --qemu-opts="-device virtio-scsi,id=scsi0 -drive file=/dev/sda,format=raw,if=none,id=zbc0 -device scsi-block,bus=scsi0.0,drive=zbc0" \
    --qemu-opts="-device virtio-scsi,id=scsi1 -drive file=/dev/sdb,format=raw,if=none,id=zbc1 -device scsi-block,bus=scsi1.0,drive=zbc1"
and run-fstests.sh is:
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0
DIR="/tmp/"
MKFS="mkfs.btrfs -f"
FSTESTS_DIR="/home/johannes/src/fstests"
HOSTCONF="$FSTESTS_DIR/configs/$(hostname -s)"
TESTDEV="$(grep TEST_DEV $HOSTCONF | cut -d '=' -f 2)"
mkdir -p $DIR/{test,scratch,results}
$MKFS $TESTDEV
cd $FSTESTS_DIR
./check -x raid
I'm not sure it'll make sense to include this into blktests other than
maybe providing an example in the README.
Byte,
Johannes
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [LSF/MM/BPF ATTEND][LSF/MM/BPF TOPIC] : blktests: status, expansion plan for the storage stack test framework
2026-02-23 7:44 ` Johannes Thumshirn
@ 2026-02-25 10:15 ` Haris Iqbal
2026-04-21 6:05 ` Shin'ichiro Kawasaki
1 sibling, 0 replies; 28+ messages in thread
From: Haris Iqbal @ 2026-02-25 10:15 UTC (permalink / raw)
To: Johannes Thumshirn
Cc: Shinichiro Kawasaki, Daniel Wagner, Chaitanya Kulkarni,
linux-block@vger.kernel.org, linux-scsi@vger.kernel.org,
linux-nvme@lists.infradead.org, lsf-pc@lists.linux-foundation.org,
Bart Van Assche, Hannes Reinecke, hch, Jens Axboe,
sagi@grimberg.me, tytso@mit.edu, Christian Brauner,
Martin K. Petersen, linux-fsdevel@vger.kernel.org,
Javier González, willy@infradead.org, Jan Kara,
amir73il@gmail.com, vbabka@suse.cz, Damien Le Moal
On Mon, Feb 23, 2026 at 8:44 AM Johannes Thumshirn
<Johannes.Thumshirn@wdc.com> wrote:
>
> On 2/15/26 10:18 PM, Haris Iqbal wrote:
> >> From my view, blktests keep on finding kernel bugs. I think it demonstrates the
> >> value of this community effort, and I'm happy about it. Said that, I find what
> >> blktests can improve more, of course. Here I share the list of improvement
> >> opportunities from my view point (I already mentioned the first three items).
> > A possible feature for blktest could be integration with something
> > like virtme-ng.
> > Running on VM can be versatile and fast. The run can be made parallel
> > too, by spawning multiple VMs simultaneously.
>
> This is actually rather trivial to solve I have some pre-made things for
> fstests and that can be adopted for blktests as well:
>
> vng \
> --user=root -v --name vng-tcmu-runner \
> -a loglevel=3 \
> --run $KDIR \
> --cpus=8 --memory=8G \
> --exec "~johannes/src/ci/run-fstests.sh" \
> --qemu-opts="-device virtio-scsi,id=scsi0 -drive
> file=/dev/sda,format=raw,if=none,id=zbc0 -device
> scsi-block,bus=scsi0.0,drive=zbc0" \
> --qemu-opts="-device virtio-scsi,id=scsi1 -drive
> file=/dev/sdb,format=raw,if=none,id=zbc1 -device
> scsi-block,bus=scsi1.0,drive=zbc1"
>
> and run-fstests.sh is:
>
> #!/bin/sh
> # SPDX-License-Identifier: GPL-2.0
>
> DIR="/tmp/"
> MKFS="mkfs.btrfs -f"
> FSTESTS_DIR="/home/johannes/src/fstests"
> HOSTCONF="$FSTESTS_DIR/configs/$(hostname -s)"
> TESTDEV="$(grep TEST_DEV $HOSTCONF | cut -d '=' -f 2)"
>
> mkdir -p $DIR/{test,scratch,results}
> $MKFS $TESTDEV
>
> cd $FSTESTS_DIR
> ./check -x raid
>
> I'm not sure it'll make sense to include this into blktests other than
> maybe providing an example in the README.
You're right. It is pretty trivial to run on VMs, but only after
everything is set up.
Adding it to blktests would allow that setup (and the subsequent test
run) to be done on any system with just a couple of commands.
>
>
> Byte,
>
> Johannes
>
^ permalink raw reply [flat|nested] 28+ messages in thread
* Re: [LSF/MM/BPF ATTEND][LSF/MM/BPF TOPIC] : blktests: status, expansion plan for the storage stack test framework
2026-02-23 7:44 ` Johannes Thumshirn
2026-02-25 10:15 ` Haris Iqbal
@ 2026-04-21 6:05 ` Shin'ichiro Kawasaki
1 sibling, 0 replies; 28+ messages in thread
From: Shin'ichiro Kawasaki @ 2026-04-21 6:05 UTC (permalink / raw)
To: Johannes Thumshirn
Cc: Haris Iqbal, Daniel Wagner, Chaitanya Kulkarni,
linux-block@vger.kernel.org, linux-scsi@vger.kernel.org,
linux-nvme@lists.infradead.org, lsf-pc@lists.linux-foundation.org,
Bart Van Assche, Hannes Reinecke, hch, Jens Axboe,
sagi@grimberg.me, tytso@mit.edu, Christian Brauner,
Martin K. Petersen, linux-fsdevel@vger.kernel.org,
Javier González, willy@infradead.org, Jan Kara,
amir73il@gmail.com, vbabka@suse.cz, Damien Le Moal
[-- Attachment #1: Type: text/plain, Size: 2250 bytes --]
On Feb 23, 2026 / 07:44, Johannes Thumshirn wrote:
> On 2/15/26 10:18 PM, Haris Iqbal wrote:
> >> From my view, blktests keeps on finding kernel bugs. I think this demonstrates the
> >> value of this community effort, and I'm happy about it. That said, I do see things
> >> blktests can improve, of course. Here I share the list of improvement
> >> opportunities from my viewpoint (I already mentioned the first three items).
> > A possible feature for blktest could be integration with something
> > like virtme-ng.
> > Running on VM can be versatile and fast. The run can be made parallel
> > too, by spawning multiple VMs simultaneously.
>
> This is actually rather trivial to solve I have some pre-made things for
> fstests and that can be adopted for blktests as well:
>
> vng \
> --user=root -v --name vng-tcmu-runner \
> -a loglevel=3 \
> --run $KDIR \
> --cpus=8 --memory=8G \
> --exec "~johannes/src/ci/run-fstests.sh" \
> --qemu-opts="-device virtio-scsi,id=scsi0 -drive
> file=/dev/sda,format=raw,if=none,id=zbc0 -device
> scsi-block,bus=scsi0.0,drive=zbc0" \
> --qemu-opts="-device virtio-scsi,id=scsi1 -drive
> file=/dev/sdb,format=raw,if=none,id=zbc1 -device
> scsi-block,bus=scsi1.0,drive=zbc1"
>
> and run-fstests.sh is:
>
> #!/bin/sh
> # SPDX-License-Identifier: GPL-2.0
>
> DIR="/tmp/"
> MKFS="mkfs.btrfs -f"
> FSTESTS_DIR="/home/johannes/src/fstests"
> HOSTCONF="$FSTESTS_DIR/configs/$(hostname -s)"
> TESTDEV="$(grep TEST_DEV $HOSTCONF | cut -d '=' -f 2)"
>
> mkdir -p $DIR/{test,scratch,results}
> $MKFS $TESTDEV
>
> cd $FSTESTS_DIR
> ./check -x raid
>
> I'm not sure it'll make sense to include this into blktests other than
> maybe providing an example in the README.
I guess the example can be added in contrib/. This idea interested me, so I did
some quick scripting and created the attached patch. I did some blktests runs
with virtme-ng and found that:
- virtme-ng allows skipping the kernel installation step. Fast and useful, as
Haris pointed out.
- systemd does not seem to work well with virtme-ng, even when I specify the
--systemd option. This will be a condition difference from normal
blktests runs.
I hope that the patch will help discussion at the conference.
[-- Attachment #2: 0001-contrib-add-scripts-to-run-blktests-with-virtme-ng.patch --]
[-- Type: text/plain, Size: 5517 bytes --]
From 4a310c98f674e65218fb7e33e3b176a85d897333 Mon Sep 17 00:00:00 2001
From: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
Date: Tue, 21 Apr 2026 14:26:01 +0900
Subject: [PATCH blktests RFC] contrib: add scripts to run blktests with
virtme-ng
It takes a rather long time to run blktests repeatedly against debug target
kernels because of the time needed to install the test target kernel on the
test target system. To reduce the turnaround time, virtme-ng [1] is
useful. Add helper scripts to run blktests using virtme-ng, per the
suggestion by Johannes [2].
After building a kernel in ~/linux, the command lines below will run
blktests on the built kernel. With this, the kernel installation step is no
longer required.
$ KDIR=~/linux contrib/run-vng loop/001
virtme: waiting for virtiofsd to start
virtme: use 'microvm' QEMU architecture
early console in setup code
Probing EDD (edd=off to disable)... ok
[ 0.000000][ T0] Linux version 7.0.0 (shin@shinmob) (gcc (GCC) 15.2.1 20260123 (Red Hat 15.2.1-7), GNU ld version 2.45.1-4.fc43) #22 SMP PREEMPT_DYNAMIC Mon Apr 13 15:44:07 JST 2026
[ 0.000000][ T0] Command line: virtme_hostname=vng-blktests-runner nr_open=2147483584 virtme_link_mods=/home/shin/linux/.virtme_mods/lib/modules/0.0.0 virtme_initmount0=tmp virtme_rw_overlay0=/etc virtme_rw_overlay1=/lib virtme_rw_overlay2=/home virtme_rw_overlay3=/opt virtme_rw_overlay4=/srv virtme_rw_overlay5=/usr virtme_rw_overlay6=/var virtme_rw_overlay7=/tmp virtme_console=ttyS0 console=ttyS0 earlyprintk=serial,ttyS0,115200 panic=-1 virtme.exec=`L2hvbWUvc2hpbi9CbGt0ZXN0cy9ibGt0ZXN0cy9jb250cmliL3J1bi1ibGt0ZXN0cy1xdWljayBsb29wLzAwMQ==` virtme.ssh virtme_ssh_channel=vsock virtme_ssh_cache=/home/shin/.cache/virtme-ng/.ssh virtme_chdir=home/shin/Blktests/blktests/contrib debug loglevel=3 init=/usr/lib/python3.14/site-packages/virtme/guest/bin/virtme-ng-init
[ 0.000000][ T0] x86/split lock detection: #DB: warning on user-space bus_locks
[ 0.000000][ T0] BIOS-provided physical RAM map:
[ 0.000000][ T0] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] System RAM
[ 0.000000][ T0] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] device reserved
[ 0.000000][ T0] BIOS-e820: [gap 0x00000000000a0000-0x00000000000effff]
[ 0.000000][ T0] BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] device reserved
[ 0.000000][ T0] BIOS-e820: [mem 0x0000000000100000-0x00000000bfffefff] System RAM
[ 0.000000][ T0] BIOS-e820: [mem 0x00000000bffff000-0x00000000bfffffff] device reserved
[ 0.000000][ T0] BIOS-e820: [gap 0x00000000c0000000-0x00000000feffbfff]
[ 0.000000][ T0] BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] device reserved
[ 0.000000][ T0] BIOS-e820: [gap 0x00000000ff000000-0x00000000fffbffff]
[ 0.000000][ T0] BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] device reserved
[ 0.000000][ T0] BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] System RAM
[ 0.000000][ T0] printk: legacy bootconsole [earlyser0] enabled
Poking KASLR using RDRAND RDTSC...
Clean up /tmp/results ...
loop/001 (scan loop device partitions)
loop/001 (scan loop device partitions) [passed]
runtime ... 14.610s
Run logs:
[ 90.927921][ T1] reboot: Power down
When I ran the script on my Fedora 43 test node for some test groups, I
observed hangs at nvme/010 and nvme/012, and a failure at loop/010. Some
fixes will be required in blktests and the kernel before the new script
runs stably.
Of note, the script does not yet support test target block device
pass-through.
[1] https://github.com/arighi/virtme-ng
[2] https://lore.kernel.org/linux-block/ae47ef06-3f66-4aab-b4ab-f3ae2b634f87@wdc.com/
Suggested-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
---
contrib/run-blktests-quick | 27 +++++++++++++++++++++++++++
contrib/run-vng | 18 ++++++++++++++++++
2 files changed, 45 insertions(+)
create mode 100755 contrib/run-blktests-quick
create mode 100755 contrib/run-vng
diff --git a/contrib/run-blktests-quick b/contrib/run-blktests-quick
new file mode 100755
index 0000000..9324c38
--- /dev/null
+++ b/contrib/run-blktests-quick
@@ -0,0 +1,27 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-3.0+
+# Copyright (C) 2026 Western Digital Corporation or its affiliates.
+
+declare DIR LOGDIR BLKTESTS_DIR
+
+DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" >/dev/null 2>&1 && pwd)"
+BLKTESTS_DIR="${DIR}/../"
+
+LOGDIR="/tmp/results"
+
+if [[ -e $LOGDIR ]]; then
+ echo "Clean up ${LOGDIR} ..."
+ rm -rf $LOGDIR
+fi
+mkdir -p "$LOGDIR"
+
+cd "$BLKTESTS_DIR" || exit
+
+cat > config << EOF
+QUICK_RUN=1
+TIMEOUT=10
+EOF
+
+./check -o "$LOGDIR" "$@"
+echo "Run logs: ${LOGDIR}"
+
diff --git a/contrib/run-vng b/contrib/run-vng
new file mode 100755
index 0000000..9aa4b91
--- /dev/null
+++ b/contrib/run-vng
@@ -0,0 +1,18 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-3.0+
+# Copyright (C) 2026 Western Digital Corporation or its affiliates.
+
+declare DIR
+declare -a BLKTESTS_OPTS
+DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" >/dev/null 2>&1 && pwd)"
+
+if (($#)); then
+ BLKTESTS_OPTS=("$@")
+else
+ BLKTESTS_OPTS=(block nvme loop scsi)
+fi
+
+vng \
+ --user=root --verbose --name vng-blktests-runner --cpus=4 --memory=4G \
+ --append loglevel=3 --run "$KDIR" --ssh \
+ --rwdir=/tmp --exec "$DIR/run-blktests-quick ${BLKTESTS_OPTS[*]}"
--
2.53.0
* Re: [LSF/MM/BPF ATTEND][LSF/MM/BPF TOPIC] : blktests: status, expansion plan for the storage stack test framework
2026-02-15 21:18 ` Haris Iqbal
2026-02-16 0:33 ` Chaitanya Kulkarni
2026-02-23 7:44 ` Johannes Thumshirn
@ 2026-02-23 17:08 ` Bart Van Assche
2026-02-25 2:55 ` Chaitanya Kulkarni
2026-02-25 10:07 ` Haris Iqbal
2 siblings, 2 replies; 28+ messages in thread
From: Bart Van Assche @ 2026-02-23 17:08 UTC (permalink / raw)
To: Haris Iqbal, Shinichiro Kawasaki
Cc: Daniel Wagner, Chaitanya Kulkarni, linux-block@vger.kernel.org,
linux-scsi@vger.kernel.org, linux-nvme@lists.infradead.org,
lsf-pc@lists.linux-foundation.org, Hannes Reinecke, hch,
Jens Axboe, sagi@grimberg.me, tytso@mit.edu, Johannes Thumshirn,
Christian Brauner, Martin K. Petersen,
linux-fsdevel@vger.kernel.org, Javier González,
willy@infradead.org, Jan Kara, amir73il@gmail.com, vbabka@suse.cz,
Damien Le Moal
On 2/15/26 1:18 PM, Haris Iqbal wrote:
> A possible feature for blktest could be integration with something
> like virtme-ng.
> Running on VM can be versatile and fast. The run can be made parallel
> too, by spawning multiple VMs simultaneously.
Hmm ... this probably would break tests that measure performance and
also tests that modify data or reservations of a physical storage
device.
Bart.
* Re: [LSF/MM/BPF ATTEND][LSF/MM/BPF TOPIC] : blktests: status, expansion plan for the storage stack test framework
2026-02-23 17:08 ` Bart Van Assche
@ 2026-02-25 2:55 ` Chaitanya Kulkarni
2026-02-25 10:07 ` Haris Iqbal
1 sibling, 0 replies; 28+ messages in thread
From: Chaitanya Kulkarni @ 2026-02-25 2:55 UTC (permalink / raw)
To: Bart Van Assche, Haris Iqbal, Shinichiro Kawasaki
Cc: Daniel Wagner, linux-block@vger.kernel.org,
linux-scsi@vger.kernel.org, linux-nvme@lists.infradead.org,
lsf-pc@lists.linux-foundation.org, Hannes Reinecke, hch,
Jens Axboe, sagi@grimberg.me, tytso@mit.edu, Johannes Thumshirn,
Christian Brauner, Martin K. Petersen,
linux-fsdevel@vger.kernel.org, Javier González,
willy@infradead.org, Jan Kara, amir73il@gmail.com, vbabka@suse.cz,
Damien Le Moal
On 2/23/26 09:08, Bart Van Assche wrote:
> On 2/15/26 1:18 PM, Haris Iqbal wrote:
>> A possible feature for blktest could be integration with something
>> like virtme-ng.
>> Running on VM can be versatile and fast. The run can be made parallel
>> too, by spawning multiple VMs simultaneously.
> Hmm ... this probably would break tests that measure performance and
> also tests that modify data or reservations of a physical storage
> device.
>
> Bart.
We can always add a flag and mark the tests that are parallel-compatible,
so we don't have to enable parallel execution by default.
WDYT?
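A sketch of how such an opt-in could look. Everything here is hypothetical: blktests has no parallel-safe marker today, and the test names and flags below only illustrate partitioning tests into parallel-safe and serial sets:

```shell
#!/bin/bash
# Hypothetical opt-in marker: tests that are safe to run concurrently
# carry a flag of 1; everything else stays serial as today.
declare -A parallel_ok=(
    [loop/001]=1
    [loop/010]=1
    [nvme/005]=0   # touches shared target state; keep serial
)

parallel_tests=()
serial_tests=()
for t in "${!parallel_ok[@]}"; do
    if [[ ${parallel_ok[$t]} == 1 ]]; then
        parallel_tests+=("$t")
    else
        serial_tests+=("$t")
    fi
done

# Parallel-safe tests could then be fanned out (e.g. one VM per job):
#   printf '%s\n' "${parallel_tests[@]}" | xargs -P4 -I{} ./check {}
# and the rest run serially as today:
#   ./check "${serial_tests[@]}"
printf 'parallel: %s\n' "${parallel_tests[@]}" | sort
printf 'serial:   %s\n' "${serial_tests[@]}"
```

The point is only that the default stays serial; parallelism would be something a test explicitly claims to support.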
-ck
* Re: [LSF/MM/BPF ATTEND][LSF/MM/BPF TOPIC] : blktests: status, expansion plan for the storage stack test framework
2026-02-23 17:08 ` Bart Van Assche
2026-02-25 2:55 ` Chaitanya Kulkarni
@ 2026-02-25 10:07 ` Haris Iqbal
2026-02-25 16:29 ` Bart Van Assche
1 sibling, 1 reply; 28+ messages in thread
From: Haris Iqbal @ 2026-02-25 10:07 UTC (permalink / raw)
To: Bart Van Assche
Cc: Shinichiro Kawasaki, Daniel Wagner, Chaitanya Kulkarni,
linux-block@vger.kernel.org, linux-scsi@vger.kernel.org,
linux-nvme@lists.infradead.org, lsf-pc@lists.linux-foundation.org,
Hannes Reinecke, hch, Jens Axboe, sagi@grimberg.me, tytso@mit.edu,
Johannes Thumshirn, Christian Brauner, Martin K. Petersen,
linux-fsdevel@vger.kernel.org, Javier González,
willy@infradead.org, Jan Kara, amir73il@gmail.com, vbabka@suse.cz,
Damien Le Moal
On Mon, Feb 23, 2026 at 6:08 PM Bart Van Assche <bvanassche@acm.org> wrote:
>
> On 2/15/26 1:18 PM, Haris Iqbal wrote:
> > A possible feature for blktest could be integration with something
> > like virtme-ng.
> > Running on VM can be versatile and fast. The run can be made parallel
> > too, by spawning multiple VMs simultaneously.
> Hmm ... this probably would break tests that measure performance and
> also tests that modify data or reservations of a physical storage
> device.
Performance-related tests can be skipped when running in a virtual environment.
Regarding data modification, if the tests do not involve any crash or
reboot, the VMs can be started in "snapshot" mode. This gives a
number of advantages:
a) Data modifications will not persist once the VM is shut down. This
means the disk will be clean for the next test cycle.
b) Using just a single set of qcow files, one can bring up any number
of VMs in snapshot mode. The data written while the VM is running can
be safely read/modified, but it disappears after a reboot.
>
> Bart.
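The snapshot-mode fan-out described above could look roughly like this with plain QEMU; a sketch only, where the image path, VM count, and everything except the `-snapshot` semantics are placeholders:

```shell
#!/bin/bash
# Sketch: fan out N throwaway VMs from one shared qcow2 image.
# With -snapshot, QEMU redirects guest writes to a temporary overlay,
# so every VM boots from the same clean image and no writes persist.
IMG="base.qcow2"   # placeholder: shared base image
NVMS=4             # placeholder: number of parallel VMs

cmds=()
for i in $(seq 1 "$NVMS"); do
    cmds+=("qemu-system-x86_64 -snapshot -m 2G -smp 2 \
-drive file=${IMG},format=qcow2,if=virtio \
-name blktests-vm${i} -nographic")
done

# A real runner would launch these in the background and wait:
#   for c in "${cmds[@]}"; do $c & done; wait
printf '%s\n' "${cmds[@]}"
```

Since each VM's writes go to its own temporary overlay, a single qcow2 file serves any number of concurrent runs, matching point b) above.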
* Re: [LSF/MM/BPF ATTEND][LSF/MM/BPF TOPIC] : blktests: status, expansion plan for the storage stack test framework
2026-02-25 10:07 ` Haris Iqbal
@ 2026-02-25 16:29 ` Bart Van Assche
0 siblings, 0 replies; 28+ messages in thread
From: Bart Van Assche @ 2026-02-25 16:29 UTC (permalink / raw)
To: Haris Iqbal
Cc: Shinichiro Kawasaki, Daniel Wagner, Chaitanya Kulkarni,
linux-block@vger.kernel.org, linux-scsi@vger.kernel.org,
linux-nvme@lists.infradead.org, lsf-pc@lists.linux-foundation.org,
Hannes Reinecke, hch, Jens Axboe, sagi@grimberg.me, tytso@mit.edu,
Johannes Thumshirn, Christian Brauner, Martin K. Petersen,
linux-fsdevel@vger.kernel.org, Javier González,
willy@infradead.org, Jan Kara, amir73il@gmail.com, vbabka@suse.cz,
Damien Le Moal
On 2/25/26 2:07 AM, Haris Iqbal wrote:
> Regarding data modification, if the tests do not involve any crash or
> reboot, then the VMs can be started in "snapshot" mode.
I'm not sure that proposal makes sense. If e.g. an NVMe device is
specified in the blktests config file, the person running the test
probably intends to test the NVMe driver and/or the NVMe device. Using
any method to create a "snapshot" of the device and running blktests
against that snapshot changes which kernel driver and physical device
are actually being tested. Not modifying the kernel driver or physical
device under test implies using PCIe passthrough, and as far as I know
the PCIe passthrough mechanism can only be used by one VM at a time.
Bart.
* Re: [LSF/MM/BPF ATTEND][LSF/MM/BPF TOPIC] : blktests: status, expansion plan for the storage stack test framework
2026-02-13 11:23 ` Shinichiro Kawasaki
` (2 preceding siblings ...)
2026-02-15 21:18 ` Haris Iqbal
@ 2026-04-21 6:37 ` Shin'ichiro Kawasaki
3 siblings, 0 replies; 28+ messages in thread
From: Shin'ichiro Kawasaki @ 2026-04-21 6:37 UTC (permalink / raw)
To: Daniel Wagner
Cc: Chaitanya Kulkarni, linux-block@vger.kernel.org,
linux-scsi@vger.kernel.org, linux-nvme@lists.infradead.org,
lsf-pc@lists.linux-foundation.org, Bart Van Assche,
Hannes Reinecke, hch, Jens Axboe, sagi@grimberg.me, tytso@mit.edu,
Johannes Thumshirn, Christian Brauner, Martin K. Petersen,
linux-fsdevel@vger.kernel.org, Javier González,
willy@infradead.org, Jan Kara, amir73il@gmail.com, vbabka@suse.cz,
Damien Le Moal
On Feb 13, 2026 / 11:23, Shinichiro Kawasaki wrote:
[...]
> Here I share the list of improvement
> opportunities from my viewpoint (I already mentioned the first three items).
>
> 1. We can have more CI infra to make the most of blktests
> 2. We can add config examples to help new users
> 3. We can measure code coverage to identify missing test areas
> 4. Long-standing failures make test result reports dirty
> - I feel lockdep WARNs tend to be left unfixed for a rather long period.
> How can we gather the effort to fix them?
> 5. We can refactor and clean up the blktests framework for ease of maintenance
> (e.g. trap handling)
> 6. Some users run blktests with built-in kernel modules, which causes a number
> of test cases to be skipped. We can add more built-in kernel module support to
> expand test coverage for such a use case.
I read through this e-mail thread again, and revised the list of improvement
opportunities. Hope this list helps the discussion at the session.
1. configuration example, documentation, or tool for ease of new users
2. blktests framework capability
- more bug detection capability (such as kmemleak)
- code coverage measurement per test case
- VM integration (virtme-ng? valid use case?)
- parallel execution for shorter runs
- built-in module testing
- multi-device testing
3. blktests CI
4. longstanding failures
- nvme/005 tcp transport, nbd/002
5. test area coverage expansion (which area do we lack tests?)
- code coverage information as the guide
- new category candidates (rnbd?)
6. blktests framework refactoring
- trap handling
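On the trap-handling item: one common source of cleanup bugs in shell test harnesses is that a second `trap ... EXIT` silently replaces the first. A minimal sketch of a stackable alternative; the helper names are made up for illustration and are not existing blktests functions:

```shell
#!/bin/bash
# Stackable EXIT cleanup: register commands instead of overwriting the trap.
CLEANUP_CMDS=()

add_cleanup() {
    CLEANUP_CMDS+=("$1")
}

run_cleanup() {
    local i
    # Run handlers in reverse registration order, like defer/atexit.
    for ((i = ${#CLEANUP_CMDS[@]} - 1; i >= 0; i--)); do
        eval "${CLEANUP_CMDS[i]}"
    done
}
trap run_cleanup EXIT

# Example registrations (echo stands in for real teardown commands):
add_cleanup 'echo "detach loop device"'
add_cleanup 'echo "unload test module"'
```

With this shape, a test helper that needs extra teardown just calls the register function and never has to know what traps were installed before it.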
One thing I added was "multi-device testing". Most of the test cases use a
single test device per test run, while md/003 and bcache/001 require
multiple devices for a single run. There was a discussion about how to
specify multiple devices in the config file [*].
[*] https://lore.kernel.org/linux-block/aXHbPWCf5SfycpBX@shinmob/
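For context, the blktests config already takes a device list; the open point in the discussion above is how a single test case should claim more than one entry from it. A minimal example of the existing config form, with placeholder device paths:

```shell
# blktests 'config' file (device paths are placeholders)
TEST_DEVS=(/dev/nvme0n1 /dev/sdb)
# md/003 and bcache/001 each need two devices per run; how a test
# declares that requirement against TEST_DEVS is the open question.
```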