From: John Meneghini <jmeneghi@redhat.com>
To: Xose Vazquez Perez <xose.vazquez@gmail.com>
Cc: Wayne Berthiaume <Wayne.Berthiaume@dell.com>,
	Vasuki Manikarnike <vasuki.manikarnike@hpe.com>,
	Matthias Rudolph <Matthias.Rudolph@hitachivantara.com>,
	Martin George <marting@netapp.com>,
	NetApp RDAC team <ng-eseries-upstream-maintainers@netapp.com>,
	Zou Ming <zouming.zouming@huawei.com>,
	Li Xiaokeng <lixiaokeng@huawei.com>,
	Randy Jennings <randyj@purestorage.com>,
	Jyoti Rani <jrani@purestorage.com>,
	Brian Bunker <brian@purestorage.com>,
	Uday Shankar <ushankar@purestorage.com>,
	Chaitanya Kulkarni <kch@nvidia.com>,
	Sagi Grimberg <sagi@grimberg.me>, Keith Busch <kbusch@kernel.org>,
	Christoph Hellwig <hch@lst.de>,
	Marco Patalano <mpatalan@redhat.com>,
	"Ewan D. Milne" <emilne@redhat.com>,
	Daniel Wagner <dwagner@suse.de>, Daniel Wagner <wagi@monom.org>,
	Hannes Reinecke <hare@suse.de>, Martin Wilck <mwilck@suse.com>,
	Benjamin Marzinski <bmarzins@redhat.com>,
	Christophe Varoqui <christophe.varoqui@opensvc.com>,
	BLOCK-ML <linux-block@vger.kernel.org>,
	NVME-ML <linux-nvme@lists.infradead.org>,
	SCSI-ML <linux-scsi@vger.kernel.org>,
	DM_DEVEL-ML <dm-devel@lists.linux.dev>
Subject: Re: [PATCH v3] nvme-cli: nvmf-autoconnect: udev-rule: add a file for new arrays
Date: Wed, 27 Aug 2025 17:51:48 -0400	[thread overview]
Message-ID: <ccb441d0-7603-4e96-b59a-d69ee6a95e6d@redhat.com> (raw)
In-Reply-To: <20250820213254.220715-1-xose.vazquez@gmail.com>

I'm sorry, but Red Hat will not approve any upstream change like this that modifies the policy for OTHER VENDORS' stuff.

You can't simply change the IO policy for all of these arrays.  Many vendors ship no autoconnect udev rules because they don't want any.  They want the default ctrl_loss_tmo and the default iopolicy (numa)... you can't just change that for them.
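For what it's worth, nothing stops an administrator from overriding the policy per subsystem at runtime. A minimal sketch, assuming a multipath-enabled kernel (nvme-subsys0 is just an example subsystem name):

  # show the current iopolicy for a subsystem (the default is "numa")
  cat /sys/class/nvme-subsystem/nvme-subsys0/iopolicy

  # override it for this subsystem only; this does not persist across reboots
  echo round-robin > /sys/class/nvme-subsystem/nvme-subsys0/iopolicy

That is exactly why a shipped udev rule matters: it silently makes this choice for everyone.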

If you want people to migrate their udev rules out of separate files and into a single autoconnect file like this, then you'll have to get them to agree.

When I look upstream, I see exactly 3 vendors that have a udev rule for their iopolicy:

nvme-cli(master) > ls -1 nvmf-autoconnect/udev-rules/71*
nvmf-autoconnect/udev-rules/71-nvmf-hpe.rules.in
nvmf-autoconnect/udev-rules/71-nvmf-netapp.rules.in
nvmf-autoconnect/udev-rules/71-nvmf-vastdata.rules.in

I suggest that you get these three vendors to agree to move their policy into a single 71-nvmf-multipath-policy.rules.in file, and then leave everyone else's stuff alone.

In the future, vendors who want to add a multipath-policy rule can then use the new file instead of adding their own.
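For illustration only, such a shared file could collect the same one-line entries the existing vendor rules already use. A rough sketch (the model string below is an example; the real entries would have to come from the vendors' own rules):

  # one entry per vendor/model, illustrative only
  ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{model}=="NetApp ONTAP Controller", ATTR{iopolicy}="round-robin"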

/John

On 8/20/25 5:32 PM, Xose Vazquez Perez wrote:
> One file per vendor, or device, is a bit excessive for two to four rules.
> 
> 
> If possible, select round-robin (>=5.1) or queue-depth (>=6.11).
> round-robin is a basic selector and only works well under ideal conditions.
> 
> An NVMe benchmark, round-robin vs queue-depth, shows how bad it is:
> https://marc.info/?l=linux-kernel&m=171931850925572
> https://marc.info/?l=linux-kernel&m=171931852025575
> https://github.com/johnmeneghini/iopolicy/?tab=readme-ov-file#sample-data
> https://people.redhat.com/jmeneghi/ALPSS_2023/NVMe_QD_Multipathing.pdf
> 
> 
> [ctrl_loss_tmo default value is 600 (ten minutes)]

You can't remove this because vendors have ctrl_loss_tmo set to -1 on purpose.
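For context, ctrl_loss_tmo=-1 tells the host to keep reconnecting forever instead of giving up after the 600-second default. An illustrative example of how that is typically requested at connect time (transport, address, and NQN below are placeholders):

  nvme connect -t tcp -a 192.0.2.10 -s 4420 -n nqn.example:subsys1 --ctrl-loss-tmo=-1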

> v3:
>   - add Fujitsu/ETERNUS AB/HB
>   - add Hitachi/VSP
> 
> v2:
>   - fix ctrl_loss_tmo comment
>   - add Infinidat/InfiniBox
> 
> 
> Cc: Wayne Berthiaume <Wayne.Berthiaume@dell.com>
> Cc: Vasuki Manikarnike <vasuki.manikarnike@hpe.com>
> Cc: Matthias Rudolph <Matthias.Rudolph@hitachivantara.com>
> Cc: Martin George <marting@netapp.com>
> Cc: NetApp RDAC team <ng-eseries-upstream-maintainers@netapp.com>
> Cc: Zou Ming <zouming.zouming@huawei.com>
> Cc: Li Xiaokeng <lixiaokeng@huawei.com>
> Cc: Randy Jennings <randyj@purestorage.com>
> Cc: Jyoti Rani <jrani@purestorage.com>
> Cc: Brian Bunker <brian@purestorage.com>
> Cc: Uday Shankar <ushankar@purestorage.com>
> Cc: Chaitanya Kulkarni <kch@nvidia.com>
> Cc: Sagi Grimberg <sagi@grimberg.me>
> Cc: Keith Busch <kbusch@kernel.org>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Marco Patalano <mpatalan@redhat.com>
> Cc: Ewan D. Milne <emilne@redhat.com>
> Cc: John Meneghini <jmeneghi@redhat.com>
> Cc: Daniel Wagner <dwagner@suse.de>
> Cc: Daniel Wagner <wagi@monom.org>
> Cc: Hannes Reinecke <hare@suse.de>
> Cc: Martin Wilck <mwilck@suse.com>
> Cc: Benjamin Marzinski <bmarzins@redhat.com>
> Cc: Christophe Varoqui <christophe.varoqui@opensvc.com>
> Cc: BLOCK-ML <linux-block@vger.kernel.org>
> Cc: NVME-ML <linux-nvme@lists.infradead.org>
> Cc: SCSI-ML <linux-scsi@vger.kernel.org>
> Cc: DM_DEVEL-ML <dm-devel@lists.linux.dev>
> Signed-off-by: Xose Vazquez Perez <xose.vazquez@gmail.com>
> ---
> 
> This will be the last iteration of this patch; there are no more NVMe storage
> array manufacturers left to add.
> 
> 
> Maybe these rules should be merged into this new file?
> 71-nvmf-hpe.rules.in
> 71-nvmf-netapp.rules.in
> 71-nvmf-vastdata.rules.in
> 
> ---
>   .../80-nvmf-storage_arrays.rules.in           | 48 +++++++++++++++++++
>   1 file changed, 48 insertions(+)
>   create mode 100644 nvmf-autoconnect/udev-rules/80-nvmf-storage_arrays.rules.in
> 
> diff --git a/nvmf-autoconnect/udev-rules/80-nvmf-storage_arrays.rules.in b/nvmf-autoconnect/udev-rules/80-nvmf-storage_arrays.rules.in
> new file mode 100644
> index 00000000..ac5df797
> --- /dev/null
> +++ b/nvmf-autoconnect/udev-rules/80-nvmf-storage_arrays.rules.in
> @@ -0,0 +1,48 @@
> +##### Storage arrays
> +
> +#### Set iopolicy for NVMe-oF
> +### iopolicy: numa (default), round-robin (>=5.1), or queue-depth (>=6.11)
> +
> +## Dell EMC
> +# PowerMax
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="round-robin", ATTR{model}=="EMC PowerMax"
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="queue-depth", ATTR{model}=="EMC PowerMax"
> +# PowerStore
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="round-robin", ATTR{model}=="dellemc-powerstore"
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="queue-depth", ATTR{model}=="dellemc-powerstore"
> +
> +## Fujitsu
> +# ETERNUS AB/HB
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="round-robin", ATTR{model}=="Fujitsu ETERNUS AB/HB Series"
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="queue-depth", ATTR{model}=="Fujitsu ETERNUS AB/HB Series"
> +
> +## Hitachi Vantara
> +# VSP
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="round-robin", ATTR{model}=="HITACHI SVOS-RF-System"
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="queue-depth", ATTR{model}=="HITACHI SVOS-RF-System"
> +
> +## Huawei
> +# OceanStor
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="round-robin", ATTR{model}=="Huawei-XSG1"
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="queue-depth", ATTR{model}=="Huawei-XSG1"
> +
> +## IBM
> +# FlashSystem (RamSan)
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="round-robin", ATTR{model}=="FlashSystem"
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="queue-depth", ATTR{model}=="FlashSystem"
> +# FlashSystem (Storwize/SVC)
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="round-robin", ATTR{model}=="IBM*214"
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="queue-depth", ATTR{model}=="IBM*214"
> +
> +## Infinidat
> +# InfiniBox
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="round-robin", ATTR{model}=="InfiniBox"
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="queue-depth", ATTR{model}=="InfiniBox"
> +
> +## Pure
> +# FlashArray
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="round-robin", ATTR{model}=="Pure Storage FlashArray"
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="queue-depth", ATTR{model}=="Pure Storage FlashArray"
> +
> +
> +##### EOF


Thread overview: 4+ messages
2025-08-20 21:32 [PATCH v3] nvme-cli: nvmf-autoconnect: udev-rule: add a file for new arrays Xose Vazquez Perez
2025-08-22 19:42 ` Manikarnike, Vasuki
2025-08-27 19:52 ` Martin Wilck
2025-08-27 21:51 ` John Meneghini [this message]
