public inbox for linux-nvme@lists.infradead.org
From: Bart Van Assche <bvanassche@acm.org>
To: Shinichiro Kawasaki <shinichiro.kawasaki@wdc.com>,
	"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
	"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>,
	"linux-scsi@vger.kernel.org" <linux-scsi@vger.kernel.org>
Cc: Daniel Wagner <dwagner@suse.de>, Sagi Grimberg <sagi@grimberg.me>,
	Hannes Reinecke <hare@suse.de>
Subject: Re: blktests: running nvme and srp tests with real RDMA hardware
Date: Tue, 24 Oct 2023 06:57:51 -0700	[thread overview]
Message-ID: <aa259295-1ec9-41e1-9527-409f3799e8df@acm.org> (raw)
In-Reply-To: <vaijnbobhxyz4nkk2csv3nfhnpeupbudakcn3qgmo7o6vii4x5@rfnfdll6iloo>

On 10/23/23 19:59, Shinichiro Kawasaki wrote:
> Hello blktests users,
> 
> As of today, a software RDMA driver ("siw" or "rdma_rxe") is used to run the
> "nvme" group with nvme_trtype=rdma and the "srp" (SCSI RDMA protocol) group.
> It has now been suggested to run these test groups with real RDMA hardware,
> so that the tests run under more realistic conditions. A GitHub pull request
> to support this is under review [1]. If you are interested, please take a
> look and comment.
> 
> [1] https://github.com/osandov/blktests/pull/86
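(For readers unfamiliar with the setup being discussed above, here is a rough
sketch of running the "nvme" group over the software RDMA transport. The
`config` file location and the `nvme_trtype` variable follow the blktests
README; this requires root privileges and a kernel that provides the siw or
rdma_rxe module, and is illustrative rather than a supported recipe.)

```shell
# Sketch: run the blktests "nvme" group over a software RDMA transport.
# Assumes root privileges and a kernel with siw or rdma_rxe available.
git clone https://github.com/osandov/blktests
cd blktests

# blktests reads test parameters from a "config" file in the top directory.
cat > config <<'EOF'
nvme_trtype=rdma
EOF

# The nvme group then runs against a software RDMA link rather than
# real RDMA hardware, which is the behavior the proposal would extend.
sudo ./check nvme
```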

When I wrote the SRP tests, my goal was to test the SRP initiator
driver, the SRP target driver and the dm-multipath driver, and also to
allow users who do not have RDMA hardware to run these tests. Running
these tests against a real RDMA adapter exercises functionality other
than block layer code. I see this as a use case that falls outside the
original scope of the blktests test suite. Running NVMe tests against a
real storage array also falls outside the scope of testing block driver
functionality. I'm fine with adding this functionality, but I hope that
it does not become a burden for blktests contributors who are not
interested in maintaining functionality that falls outside the original
scope of blktests.

Bart.




Thread overview: 5+ messages
2023-10-24  2:59 blktests: running nvme and srp tests with real RDMA hardware Shinichiro Kawasaki
2023-10-24  5:43 ` Hannes Reinecke
2023-10-24  5:55   ` Chaitanya Kulkarni
2023-10-24  6:51     ` Hannes Reinecke
2023-10-24 13:57 ` Bart Van Assche [this message]
