From: Nilay Shroff <nilay@linux.ibm.com>
To: "Shin'ichiro Kawasaki" <shinichiro.kawasaki@wdc.com>
Cc: "Daniel Wagner" <dwagner@suse.de>,
"Chaitanya Kulkarni" <chaitanyak@nvidia.com>,
"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
"linux-scsi@vger.kernel.org" <linux-scsi@vger.kernel.org>,
"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>,
"lsf-pc@lists.linux-foundation.org"
<lsf-pc@lists.linux-foundation.org>,
"Bart Van Assche" <bvanassche@acm.org>,
"Hannes Reinecke" <hare@suse.de>, hch <hch@lst.de>,
"Jens Axboe" <axboe@kernel.dk>,
"sagi@grimberg.me" <sagi@grimberg.me>,
"tytso@mit.edu" <tytso@mit.edu>,
"Johannes Thumshirn" <Johannes.Thumshirn@wdc.com>,
"Christian Brauner" <brauner@kernel.org>,
"Martin K. Petersen" <martin.petersen@oracle.com>,
"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
"Javier González" <javier@javigon.com>,
"willy@infradead.org" <willy@infradead.org>,
"Jan Kara" <jack@suse.cz>,
"amir73il@gmail.com" <amir73il@gmail.com>,
"vbabka@suse.cz" <vbabka@suse.cz>,
"Damien Le Moal" <dlemoal@kernel.org>
Subject: Re: [LSF/MM/BPF ATTEND][LSF/MM/BPF TOPIC] : blktests: status, expansion plan for the storage stack test framework
Date: Thu, 23 Apr 2026 13:35:47 +0530 [thread overview]
Message-ID: <d6282aa7-4673-4bae-a0ff-fbd84f0a610f@linux.ibm.com> (raw)
In-Reply-To: <aecTw6IYs1fo26EX@shinmob>
On 4/21/26 11:49 AM, Shin'ichiro Kawasaki wrote:
> On Feb 16, 2026 / 00:08, Nilay Shroff wrote:
>>
>>
>> On 2/13/26 4:53 PM, Shinichiro Kawasaki wrote:
>>> On Feb 12, 2026 / 08:52, Daniel Wagner wrote:
>>>> On Wed, Feb 11, 2026 at 08:35:30PM +0000, Chaitanya Kulkarni wrote:
>>>>> For the storage track at LSFMMBPF2026, I propose a session dedicated to
>>>>> blktests to discuss expansion plan and CI integration progress.
>>>>
>>>> Thanks for proposing this topic.
>>>
>>> Chaitanya, my thanks also go to you.
>>>
>> Yes thanks for proposing this!
>>
>>>> Just a few random topics which come to mind we could discuss:
>>>>
>>>> - blktests has gain a bit of traction and some folks run on regular
>>>> basis these tests. Can we gather feedback from them, what is working
>>>> good, what is not? Are there feature wishes?
>>>
>>> Good topic, I also would like to hear about it.
>>>
>> One improvement I’d like to highlight relates to how blktests are executed
>> today. So far, we’ve been running blktests serially; would it be possible to
>> run tests in parallel, to improve test turnaround time and make large-scale
>> or CI-based testing more efficient? For instance, we could add a
>> parallel_safe tag, marking tests that don't modify global kernel state so
>> they can be safely offloaded to parallel workers. Such a tag would allow the
>> runner to distinguish:
>>
>> Safe Tests: Tests that only perform I/O on a specific, non-shared device or
>> check static kernel parameters.
>>
>> Unsafe Tests: Tests that reload kernel modules, modify global /sys or /proc entries,
>> or require exclusive access to specific hardware addresses.
>>
>> Yes, adding parallel execution support will require framework/design changes.
>
> Hi Nilay, thanks for the idea. I understand that shorter test time will make CI
> cycles faster and improve the development efficiency.
>
> That said, the safe/unsafe testing idea may not be enough. I think the
> majority of test cases do kernel module setup using null_blk, scsi_debug, or
> the nvme target drivers, so I foresee that the majority of the test cases
> will be "unsafe" and cannot be run in parallel.
>
> Also, parallel runs on a single system will affect dmesg and kmemleak
> checking: we cannot tell which run caused a given dmesg message or memory leak.
>
> For runtime reduction through parallel runs, I guess running blktests on VMs
> might be a good approach, as Haris pointed out. Anyway, this topic will need
> more discussion.
>
> [...]
Alright, let's see if we can discuss this during LSFMM.
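To make the idea a bit more concrete, a runner could look for a tag inside each
test file and dispatch tagged tests to background workers while keeping
everything else serial. The tag name (parallel_safe), file layout, and helper
names below are purely illustrative, not existing blktests syntax; this is only
a sketch of the dispatch logic:

```shell
#!/bin/bash
# Hypothetical sketch: split tests into parallel-safe and serial groups
# based on a "parallel_safe=1" marker inside each test file.

testdir=$(mktemp -d)

# Create two fake tests: one marked safe, one not.
cat > "$testdir/001" <<'EOF'
parallel_safe=1
echo "test 001 ran"
EOF
cat > "$testdir/002" <<'EOF'
echo "test 002 ran"
EOF

safe=() unsafe=()
for t in "$testdir"/*; do
    if grep -q '^parallel_safe=1' "$t"; then
        safe+=("$t")     # no global kernel state touched: run concurrently
    else
        unsafe+=("$t")   # module reloads, global /sys writes, etc.: serialize
    fi
done

# Safe tests run concurrently as background jobs...
for t in "${safe[@]}"; do bash "$t" & done
wait
# ...while unsafe tests run one at a time afterwards.
for t in "${unsafe[@]}"; do bash "$t"; done

rm -rf "$testdir"
```

A real implementation would of course also need per-test device isolation and
a way to attribute dmesg output to the job that produced it, as discussed above.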
>
>>> 4. Long standing failures make test result reports dirty
>>> - I feel lockdep WARNs tend to be left unfixed for a rather long period.
>>> How can we gather the effort to fix them?
>>
>> I agree regarding lockdep; recently we did see quite a few lockdep splats.
>> That said, I believe the number has dropped significantly and only a small
>> set remains. From what I can tell, most of the outstanding lockdep issues
>> are related to fs-reclaim paths recursing into the block layer while the
>> queue is frozen. We should be able to resolve most of these soon, or at
>> least before the conference. If anything is still outstanding after that,
>> we can discuss it during the conference and work toward addressing it as
>> quickly as possible.
>
> Taking this chance, I'd like to express my appreciation for the effort to
> resolve the lockdep issues. It is great that a number of lockdep issues are
> already fixed. That said, two lockdep issues are still observed with the v7.0
> kernel at nvme/005 and nbd/002 [1]. I would like to draw attention to these
> failures.
>
> [1] https://lore.kernel.org/linux-block/ynmi72x5wt5ooljjafebhcarit3pvu6axkslqenikb2p5txe57@ldytqa2t4i2x/
>
I think the nvme/005 and nbd/002 failures should be addressed by this
patch: https://lore.kernel.org/all/20260413171628.6204-1-kch@nvidia.com/
It's currently applied to nvme-7.1 but has not reached the mainline kernel yet.
Thanks,
--Nilay
Thread overview: 31+ messages
2026-02-11 20:35 [LSF/MM/BPF ATTEND][LSF/MM/BPF TOPIC] : blktests: status, expansion plan for the storage stack test framework Chaitanya Kulkarni
2026-02-12 7:52 ` Daniel Wagner
2026-02-12 7:57 ` Johannes Thumshirn
2026-02-13 17:30 ` Bart Van Assche
2026-02-13 17:35 ` James Bottomley
2026-02-13 11:23 ` Shinichiro Kawasaki
2026-02-13 14:18 ` Haris Iqbal
2026-02-15 18:38 ` Nilay Shroff
2026-04-21 6:19 ` Shin'ichiro Kawasaki
2026-04-23 8:05 ` Nilay Shroff [this message]
2026-04-23 9:36 ` Daniel Wagner
2026-04-27 11:50 ` Shin'ichiro Kawasaki
2026-02-15 21:18 ` Haris Iqbal
2026-02-16 0:33 ` Chaitanya Kulkarni
2026-02-23 7:44 ` Johannes Thumshirn
2026-02-25 10:15 ` Haris Iqbal
2026-04-21 6:05 ` Shin'ichiro Kawasaki
2026-02-23 17:08 ` Bart Van Assche
2026-02-25 2:55 ` Chaitanya Kulkarni
2026-02-25 10:07 ` Haris Iqbal
2026-02-25 16:29 ` Bart Van Assche
2026-04-21 6:37 ` Shin'ichiro Kawasaki