dm-devel.redhat.com archive mirror
* [PATCH v3] nvme-cli: nvmf-autoconnect: udev-rule: add a file for new arrays
@ 2025-08-20 21:32 Xose Vazquez Perez
  2025-08-22 19:42 ` Manikarnike, Vasuki
                   ` (2 more replies)
  0 siblings, 3 replies; 4+ messages in thread
From: Xose Vazquez Perez @ 2025-08-20 21:32 UTC (permalink / raw)
  Cc: Xose Vazquez Perez, Wayne Berthiaume, Vasuki Manikarnike,
	Matthias Rudolph, Martin George, NetApp RDAC team, Zou Ming,
	Li Xiaokeng, Randy Jennings, Jyoti Rani, Brian Bunker,
	Uday Shankar, Chaitanya Kulkarni, Sagi Grimberg, Keith Busch,
	Christoph Hellwig, Marco Patalano, Ewan D. Milne, John Meneghini,
	Daniel Wagner, Daniel Wagner, Hannes Reinecke, Martin Wilck,
	Benjamin Marzinski, Christophe Varoqui, BLOCK-ML, NVME-ML,
	SCSI-ML, DM_DEVEL-ML

One file per vendor or device is a bit excessive for two to four rules.


If possible, select round-robin (kernel >=5.1) or queue-depth (kernel >=6.11).
round-robin is a basic selector, and it only works well under ideal conditions.

An NVMe benchmark, round-robin vs queue-depth, shows how badly round-robin performs:
https://marc.info/?l=linux-kernel&m=171931850925572
https://marc.info/?l=linux-kernel&m=171931852025575
https://github.com/johnmeneghini/iopolicy/?tab=readme-ov-file#sample-data
https://people.redhat.com/jmeneghi/ALPSS_2023/NVMe_QD_Multipathing.pdf
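
For reference, the iopolicy can also be inspected and changed by hand
through sysfs; a rough sketch (the subsystem instance number is an example):

  # list NVMe subsystems with their model and current policy
  for s in /sys/class/nvme-subsystem/nvme-subsys*; do
      printf '%s: %s (%s)\n' "$s" "$(cat "$s"/model)" "$(cat "$s"/iopolicy)"
  done

  # switch one subsystem to queue-depth (needs kernel >=6.11)
  echo queue-depth > /sys/class/nvme-subsystem/nvme-subsys0/iopolicy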


[The ctrl_loss_tmo default value is 600 seconds (ten minutes).]
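
For testing, the timeout can also be set per connection with nvme-cli;
a sketch (transport address and NQN are placeholders):

  nvme connect -t tcp -a 192.0.2.1 -s 4420 \
       -n nqn.2014-08.org.example:subsys1 --ctrl-loss-tmo=600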


v3:
 - add Fujitsu/ETERNUS AB/HB
 - add Hitachi/VSP

v2:
 - fix ctrl_loss_tmo comment
 - add Infinidat/InfiniBox


Cc: Wayne Berthiaume <Wayne.Berthiaume@dell.com>
Cc: Vasuki Manikarnike <vasuki.manikarnike@hpe.com>
Cc: Matthias Rudolph <Matthias.Rudolph@hitachivantara.com>
Cc: Martin George <marting@netapp.com>
Cc: NetApp RDAC team <ng-eseries-upstream-maintainers@netapp.com>
Cc: Zou Ming <zouming.zouming@huawei.com>
Cc: Li Xiaokeng <lixiaokeng@huawei.com>
Cc: Randy Jennings <randyj@purestorage.com>
Cc: Jyoti Rani <jrani@purestorage.com>
Cc: Brian Bunker <brian@purestorage.com>
Cc: Uday Shankar <ushankar@purestorage.com>
Cc: Chaitanya Kulkarni <kch@nvidia.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Cc: Keith Busch <kbusch@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Marco Patalano <mpatalan@redhat.com>
Cc: Ewan D. Milne <emilne@redhat.com>
Cc: John Meneghini <jmeneghi@redhat.com>
Cc: Daniel Wagner <dwagner@suse.de>
Cc: Daniel Wagner <wagi@monom.org>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Martin Wilck <mwilck@suse.com>
Cc: Benjamin Marzinski <bmarzins@redhat.com>
Cc: Christophe Varoqui <christophe.varoqui@opensvc.com>
Cc: BLOCK-ML <linux-block@vger.kernel.org>
Cc: NVME-ML <linux-nvme@lists.infradead.org>
Cc: SCSI-ML <linux-scsi@vger.kernel.org>
Cc: DM_DEVEL-ML <dm-devel@lists.linux.dev>
Signed-off-by: Xose Vazquez Perez <xose.vazquez@gmail.com>
---

This will be the last iteration of this patch; there are no more NVMe storage
array manufacturers.


Maybe these rules should be merged into this new file?
71-nvmf-hpe.rules.in
71-nvmf-netapp.rules.in
71-nvmf-vastdata.rules.in

---
 .../80-nvmf-storage_arrays.rules.in           | 48 +++++++++++++++++++
 1 file changed, 48 insertions(+)
 create mode 100644 nvmf-autoconnect/udev-rules/80-nvmf-storage_arrays.rules.in

diff --git a/nvmf-autoconnect/udev-rules/80-nvmf-storage_arrays.rules.in b/nvmf-autoconnect/udev-rules/80-nvmf-storage_arrays.rules.in
new file mode 100644
index 00000000..ac5df797
--- /dev/null
+++ b/nvmf-autoconnect/udev-rules/80-nvmf-storage_arrays.rules.in
@@ -0,0 +1,48 @@
+##### Storage arrays
+
+#### Set iopolicy for NVMe-oF
+### iopolicy: numa (default), round-robin (>=5.1), or queue-depth (>=6.11)
+
+## Dell EMC
+# PowerMax
+ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="round-robin", ATTR{model}=="EMC PowerMax"
+ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="queue-depth", ATTR{model}=="EMC PowerMax"
+# PowerStore
+ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="round-robin", ATTR{model}=="dellemc-powerstore"
+ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="queue-depth", ATTR{model}=="dellemc-powerstore"
+
+## Fujitsu
+# ETERNUS AB/HB
+ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="round-robin", ATTR{model}=="Fujitsu ETERNUS AB/HB Series"
+ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="queue-depth", ATTR{model}=="Fujitsu ETERNUS AB/HB Series"
+
+## Hitachi Vantara
+# VSP
+ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="round-robin", ATTR{model}=="HITACHI SVOS-RF-System"
+ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="queue-depth", ATTR{model}=="HITACHI SVOS-RF-System"
+
+## Huawei
+# OceanStor
+ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="round-robin", ATTR{model}=="Huawei-XSG1"
+ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="queue-depth", ATTR{model}=="Huawei-XSG1"
+
+## IBM
+# FlashSystem (RamSan)
+ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="round-robin", ATTR{model}=="FlashSystem"
+ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="queue-depth", ATTR{model}=="FlashSystem"
+# FlashSystem (Storwize/SVC)
+ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="round-robin", ATTR{model}=="IBM*214"
+ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="queue-depth", ATTR{model}=="IBM*214"
+
+## Infinidat
+# InfiniBox
+ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="round-robin", ATTR{model}=="InfiniBox"
+ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="queue-depth", ATTR{model}=="InfiniBox"
+
+## Pure
+# FlashArray
+ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="round-robin", ATTR{model}=="Pure Storage FlashArray"
+ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="queue-depth", ATTR{model}=="Pure Storage FlashArray"
+
+
+##### EOF
-- 
2.50.1



* Re: [PATCH v3] nvme-cli: nvmf-autoconnect: udev-rule: add a file for new arrays
  2025-08-20 21:32 [PATCH v3] nvme-cli: nvmf-autoconnect: udev-rule: add a file for new arrays Xose Vazquez Perez
@ 2025-08-22 19:42 ` Manikarnike, Vasuki
  2025-08-27 19:52 ` Martin Wilck
  2025-08-27 21:51 ` John Meneghini
  2 siblings, 0 replies; 4+ messages in thread
From: Manikarnike, Vasuki @ 2025-08-22 19:42 UTC (permalink / raw)
  To: Xose Vazquez Perez
  Cc: Xose Vazquez Perez, Wayne Berthiaume, Matthias Rudolph,
	Martin George, NetApp RDAC team, Zou Ming, Li Xiaokeng,
	Randy Jennings, Jyoti Rani, Brian Bunker, Uday Shankar,
	Chaitanya Kulkarni, Sagi Grimberg, Keith Busch, Christoph Hellwig,
	Marco Patalano, Ewan D. Milne, John Meneghini, Daniel Wagner,
	Daniel Wagner, Hannes Reinecke, Martin Wilck, Benjamin Marzinski,
	Christophe Varoqui, BLOCK-ML, NVME-ML, SCSI-ML, DM_DEVEL-ML

On Wed, Aug 20, 2025, Xose Vazquez Perez wrote:

> Maybe these rules should be merged into this new file?
> 71-nvmf-hpe.rules.in
> 71-nvmf-netapp.rules.in
> 71-nvmf-vastdata.rules.in

That's a good idea. Will you merge the HPE rules into the new file in this patch?

Regards,
Vasuki


* Re: [PATCH v3] nvme-cli: nvmf-autoconnect: udev-rule: add a file for new arrays
  2025-08-20 21:32 [PATCH v3] nvme-cli: nvmf-autoconnect: udev-rule: add a file for new arrays Xose Vazquez Perez
  2025-08-22 19:42 ` Manikarnike, Vasuki
@ 2025-08-27 19:52 ` Martin Wilck
  2025-08-27 21:51 ` John Meneghini
  2 siblings, 0 replies; 4+ messages in thread
From: Martin Wilck @ 2025-08-27 19:52 UTC (permalink / raw)
  To: Xose Vazquez Perez
  Cc: Wayne Berthiaume, Vasuki Manikarnike, Matthias Rudolph,
	Martin George, NetApp RDAC team, Zou Ming, Li Xiaokeng,
	Randy Jennings, Jyoti Rani, Brian Bunker, Uday Shankar,
	Chaitanya Kulkarni, Sagi Grimberg, Keith Busch, Christoph Hellwig,
	Marco Patalano, Ewan D. Milne, John Meneghini, Daniel Wagner,
	Daniel Wagner, Hannes Reinecke, Benjamin Marzinski,
	Christophe Varoqui, BLOCK-ML, NVME-ML, SCSI-ML, DM_DEVEL-ML

On Wed, 2025-08-20 at 23:32 +0200, Xose Vazquez Perez wrote:
> One file per vendor or device is a bit excessive for two to four
> rules.
> 
> 
> If possible, select round-robin (kernel >=5.1) or queue-depth (kernel >=6.11).
> round-robin is a basic selector, and it only works well under ideal
> conditions.
> 
> An NVMe benchmark, round-robin vs queue-depth, shows how badly round-robin performs:
> https://marc.info/?l=linux-kernel&m=171931850925572
> https://marc.info/?l=linux-kernel&m=171931852025575
> https://github.com/johnmeneghini/iopolicy/?tab=readme-ov-file#sample-data
> https://people.redhat.com/jmeneghi/ALPSS_2023/NVMe_QD_Multipathing.pdf
> 
> 
> [The ctrl_loss_tmo default value is 600 seconds (ten minutes).]
> 
> 
> v3:
>  - add Fujitsu/ETERNUS AB/HB
>  - add Hitachi/VSP
> 
> v2:
>  - fix ctrl_loss_tmo comment
>  - add Infinidat/InfiniBox
> 
> 
> Cc: Wayne Berthiaume <Wayne.Berthiaume@dell.com>
> Cc: Vasuki Manikarnike <vasuki.manikarnike@hpe.com>
> Cc: Matthias Rudolph <Matthias.Rudolph@hitachivantara.com>
> Cc: Martin George <marting@netapp.com>
> Cc: NetApp RDAC team <ng-eseries-upstream-maintainers@netapp.com>
> Cc: Zou Ming <zouming.zouming@huawei.com>
> Cc: Li Xiaokeng <lixiaokeng@huawei.com>
> Cc: Randy Jennings <randyj@purestorage.com>
> Cc: Jyoti Rani <jrani@purestorage.com>
> Cc: Brian Bunker <brian@purestorage.com>
> Cc: Uday Shankar <ushankar@purestorage.com>
> Cc: Chaitanya Kulkarni <kch@nvidia.com>
> Cc: Sagi Grimberg <sagi@grimberg.me>
> Cc: Keith Busch <kbusch@kernel.org>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Marco Patalano <mpatalan@redhat.com>
> Cc: Ewan D. Milne <emilne@redhat.com>
> Cc: John Meneghini <jmeneghi@redhat.com>
> Cc: Daniel Wagner <dwagner@suse.de>
> Cc: Daniel Wagner <wagi@monom.org>
> Cc: Hannes Reinecke <hare@suse.de>
> Cc: Martin Wilck <mwilck@suse.com>
> Cc: Benjamin Marzinski <bmarzins@redhat.com>
> Cc: Christophe Varoqui <christophe.varoqui@opensvc.com>
> Cc: BLOCK-ML <linux-block@vger.kernel.org>
> Cc: NVME-ML <linux-nvme@lists.infradead.org>
> Cc: SCSI-ML <linux-scsi@vger.kernel.org>
> Cc: DM_DEVEL-ML <dm-devel@lists.linux.dev>
> Signed-off-by: Xose Vazquez Perez <xose.vazquez@gmail.com>
> ---
> 
> This will be the last iteration of this patch; there are no more NVMe
> storage array manufacturers.
> 
> 
> Maybe these rules should be merged into this new file?
> 71-nvmf-hpe.rules.in
> 71-nvmf-netapp.rules.in
> 71-nvmf-vastdata.rules.in
> 
> ---
>  .../80-nvmf-storage_arrays.rules.in           | 48 +++++++++++++++++++
>  1 file changed, 48 insertions(+)
>  create mode 100644 nvmf-autoconnect/udev-rules/80-nvmf-storage_arrays.rules.in
> 
> diff --git a/nvmf-autoconnect/udev-rules/80-nvmf-storage_arrays.rules.in b/nvmf-autoconnect/udev-rules/80-nvmf-storage_arrays.rules.in
> new file mode 100644
> index 00000000..ac5df797
> --- /dev/null
> +++ b/nvmf-autoconnect/udev-rules/80-nvmf-storage_arrays.rules.in
> @@ -0,0 +1,48 @@
> +##### Storage arrays
> +
> +#### Set iopolicy for NVMe-oF
> +### iopolicy: numa (default), round-robin (>=5.1), or queue-depth (>=6.11)
> +
> +## Dell EMC
> +# PowerMax
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem",
> ATTR{subsystype}=="nvm", ATTR{iopolicy}="round-robin",
> ATTR{model}=="EMC PowerMax"

Do you have a specific reason to add the model match after the
assignment? It isn't wrong AFAIK, but highly unusual and confusing.

> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem",·
> ATTR{subsystype}=="nvm", ATTR{iopolicy}="queue-depth",
> ATTR{model}=="EMC PowerMax"

I am assuming the idea here is that if queue-depth is unsupported, the
second command will fail, and thus round-robin will be selected?
I am not sure if that's a good idea. 

The "best" iopolicy doesn't depend on the storage array in use.
It depends on what the kernel supports, the workload, and the user
preferences.

I suggest using something like this instead:

ENV{.NVME_IOPOLICY}!="?*", ENV{.NVME_IOPOLICY}="queue-depth"

This allows users to add an early rule file 00-nvme-policy.rules
to override the default:

ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ENV{.NVME_IOPOLICY}="round-robin"


Then you could simply do this:


ACTION!="add|change", GOTO="iopolicy_end"
SUBSYSTEM!="nvme-subsystem", GOTO="iopolicy_end"
ATTR{subsystype}!="nvm", GOTO="iopolicy_end"

ATTR{model}=="dellemc-powerstore", ATTR{iopolicy}="$env{NVME_IOPOLICY}"
# other models ...

LABEL="iopolicy_end"


Anyway, I dislike the idea of maintaining a potentially ever-growing
list of storage models with special policies in generic udev rules.

*If* we want to pursue model-specific settings, we should rather use
the systemd hwdb [1] for that purpose. For example,


ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", \
    ENV{.NVME_IOPOLICY}!="?*", \
    IMPORT{builtin}="hwdb 'nvme_subsys:$attr{model}'"

# .NVME_IOPOLICY would be set by hwdb if a match was found
ENV{.NVME_IOPOLICY}=="?*", ATTR{iopolicy}="$env{.NVME_IOPOLICY}"


Vendors could then just put their preference into the hwdb.
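
A fragment could look roughly like this (hypothetical file name; it keeps
the .NVME_IOPOLICY key and the nvme_subsys: prefix from the IMPORT rule
above, and hwdb property lines are indented by one space):

# /etc/udev/hwdb.d/71-nvme-iopolicy.hwdb
nvme_subsys:EMC PowerMax*
 .NVME_IOPOLICY=queue-depth

nvme_subsys:dellemc-powerstore*
 .NVME_IOPOLICY=round-robin

Then "systemd-hwdb update" rebuilds the binary database and
"udevadm trigger" re-applies the rules.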

But first of all I'd be curious why this setting would be 
model-specific in the first place.

Regards
Martin

[1] https://man7.org/linux/man-pages/man7/hwdb.7.html


* Re: [PATCH v3] nvme-cli: nvmf-autoconnect: udev-rule: add a file for new arrays
  2025-08-20 21:32 [PATCH v3] nvme-cli: nvmf-autoconnect: udev-rule: add a file for new arrays Xose Vazquez Perez
  2025-08-22 19:42 ` Manikarnike, Vasuki
  2025-08-27 19:52 ` Martin Wilck
@ 2025-08-27 21:51 ` John Meneghini
  2 siblings, 0 replies; 4+ messages in thread
From: John Meneghini @ 2025-08-27 21:51 UTC (permalink / raw)
  To: Xose Vazquez Perez
  Cc: Wayne Berthiaume, Vasuki Manikarnike, Matthias Rudolph,
	Martin George, NetApp RDAC team, Zou Ming, Li Xiaokeng,
	Randy Jennings, Jyoti Rani, Brian Bunker, Uday Shankar,
	Chaitanya Kulkarni, Sagi Grimberg, Keith Busch, Christoph Hellwig,
	Marco Patalano, Ewan D. Milne, Daniel Wagner, Daniel Wagner,
	Hannes Reinecke, Martin Wilck, Benjamin Marzinski,
	Christophe Varoqui, BLOCK-ML, NVME-ML, SCSI-ML, DM_DEVEL-ML

I'm sorry, but Red Hat will not approve any upstream change like this that modifies the policy for OTHER VENDORS' stuff.

You can't simply change the IO policy for all of these arrays.  Many vendors have no autoconnect/udev-rules because they don't want one.  They want to use the default ctrl_loss_tmo and the default iopolicy (numa)... you can't just change this for them.

If you want people to migrate their udev rules out of separate files and into a single autoconnect file like this, then you'll have to get them to agree.

When I look upstream I see exactly 3 vendors who have a udev-rule for their iopolicy.

nvme-cli(master) > ls -1 nvmf-autoconnect/udev-rules/71*
nvmf-autoconnect/udev-rules/71-nvmf-hpe.rules.in
nvmf-autoconnect/udev-rules/71-nvmf-netapp.rules.in
nvmf-autoconnect/udev-rules/71-nvmf-vastdata.rules.in

I suggest that you get these three vendors to agree to move their policy into a single 71-nvmf-multipath-policy.rules.in file, and then leave everyone else's stuff alone.

In the future, vendors who want to add a multipath-policy rule can then use the new file instead of adding their own.

/John

On 8/20/25 5:32 PM, Xose Vazquez Perez wrote:
> One file per vendor or device is a bit excessive for two to four rules.
> 
> 
> If possible, select round-robin (kernel >=5.1) or queue-depth (kernel >=6.11).
> round-robin is a basic selector, and it only works well under ideal conditions.
> 
> An NVMe benchmark, round-robin vs queue-depth, shows how badly round-robin performs:
> https://marc.info/?l=linux-kernel&m=171931850925572
> https://marc.info/?l=linux-kernel&m=171931852025575
> https://github.com/johnmeneghini/iopolicy/?tab=readme-ov-file#sample-data
> https://people.redhat.com/jmeneghi/ALPSS_2023/NVMe_QD_Multipathing.pdf
> 
> 
> [The ctrl_loss_tmo default value is 600 seconds (ten minutes).]

You can't remove this because vendors have ctrl_loss_tmo set to -1 on purpose.

> v3:
>   - add Fujitsu/ETERNUS AB/HB
>   - add Hitachi/VSP
> 
> v2:
>   - fix ctrl_loss_tmo comment
>   - add Infinidat/InfiniBox
> 
> 
> Cc: Wayne Berthiaume <Wayne.Berthiaume@dell.com>
> Cc: Vasuki Manikarnike <vasuki.manikarnike@hpe.com>
> Cc: Matthias Rudolph <Matthias.Rudolph@hitachivantara.com>
> Cc: Martin George <marting@netapp.com>
> Cc: NetApp RDAC team <ng-eseries-upstream-maintainers@netapp.com>
> Cc: Zou Ming <zouming.zouming@huawei.com>
> Cc: Li Xiaokeng <lixiaokeng@huawei.com>
> Cc: Randy Jennings <randyj@purestorage.com>
> Cc: Jyoti Rani <jrani@purestorage.com>
> Cc: Brian Bunker <brian@purestorage.com>
> Cc: Uday Shankar <ushankar@purestorage.com>
> Cc: Chaitanya Kulkarni <kch@nvidia.com>
> Cc: Sagi Grimberg <sagi@grimberg.me>
> Cc: Keith Busch <kbusch@kernel.org>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Marco Patalano <mpatalan@redhat.com>
> Cc: Ewan D. Milne <emilne@redhat.com>
> Cc: John Meneghini <jmeneghi@redhat.com>
> Cc: Daniel Wagner <dwagner@suse.de>
> Cc: Daniel Wagner <wagi@monom.org>
> Cc: Hannes Reinecke <hare@suse.de>
> Cc: Martin Wilck <mwilck@suse.com>
> Cc: Benjamin Marzinski <bmarzins@redhat.com>
> Cc: Christophe Varoqui <christophe.varoqui@opensvc.com>
> Cc: BLOCK-ML <linux-block@vger.kernel.org>
> Cc: NVME-ML <linux-nvme@lists.infradead.org>
> Cc: SCSI-ML <linux-scsi@vger.kernel.org>
> Cc: DM_DEVEL-ML <dm-devel@lists.linux.dev>
> Signed-off-by: Xose Vazquez Perez <xose.vazquez@gmail.com>
> ---
> 
> This will be the last iteration of this patch; there are no more NVMe storage
> array manufacturers.
> 
> 
> Maybe these rules should be merged into this new file?
> 71-nvmf-hpe.rules.in
> 71-nvmf-netapp.rules.in
> 71-nvmf-vastdata.rules.in
> 
> ---
>   .../80-nvmf-storage_arrays.rules.in           | 48 +++++++++++++++++++
>   1 file changed, 48 insertions(+)
>   create mode 100644 nvmf-autoconnect/udev-rules/80-nvmf-storage_arrays.rules.in
> 
> diff --git a/nvmf-autoconnect/udev-rules/80-nvmf-storage_arrays.rules.in b/nvmf-autoconnect/udev-rules/80-nvmf-storage_arrays.rules.in
> new file mode 100644
> index 00000000..ac5df797
> --- /dev/null
> +++ b/nvmf-autoconnect/udev-rules/80-nvmf-storage_arrays.rules.in
> @@ -0,0 +1,48 @@
> +##### Storage arrays
> +
> +#### Set iopolicy for NVMe-oF
> +### iopolicy: numa (default), round-robin (>=5.1), or queue-depth (>=6.11)
> +
> +## Dell EMC
> +# PowerMax
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="round-robin", ATTR{model}=="EMC PowerMax"
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="queue-depth", ATTR{model}=="EMC PowerMax"
> +# PowerStore
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="round-robin", ATTR{model}=="dellemc-powerstore"
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="queue-depth", ATTR{model}=="dellemc-powerstore"
> +
> +## Fujitsu
> +# ETERNUS AB/HB
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="round-robin", ATTR{model}=="Fujitsu ETERNUS AB/HB Series"
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="queue-depth", ATTR{model}=="Fujitsu ETERNUS AB/HB Series"
> +
> +## Hitachi Vantara
> +# VSP
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="round-robin", ATTR{model}=="HITACHI SVOS-RF-System"
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="queue-depth", ATTR{model}=="HITACHI SVOS-RF-System"
> +
> +## Huawei
> +# OceanStor
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="round-robin", ATTR{model}=="Huawei-XSG1"
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="queue-depth", ATTR{model}=="Huawei-XSG1"
> +
> +## IBM
> +# FlashSystem (RamSan)
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="round-robin", ATTR{model}=="FlashSystem"
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="queue-depth", ATTR{model}=="FlashSystem"
> +# FlashSystem (Storwize/SVC)
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="round-robin", ATTR{model}=="IBM*214"
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="queue-depth", ATTR{model}=="IBM*214"
> +
> +## Infinidat
> +# InfiniBox
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="round-robin", ATTR{model}=="InfiniBox"
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="queue-depth", ATTR{model}=="InfiniBox"
> +
> +## Pure
> +# FlashArray
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="round-robin", ATTR{model}=="Pure Storage FlashArray"
> +ACTION=="add|change", SUBSYSTEM=="nvme-subsystem", ATTR{subsystype}=="nvm", ATTR{iopolicy}="queue-depth", ATTR{model}=="Pure Storage FlashArray"
> +
> +
> +##### EOF


