* [PATCH 1/4] soc: marvell: Add a general purpose RVU PF driver
2024-09-20 11:23 [PATCH 0/4] soc: marvell: Add a general purpose RVU physical Anshumali Gaur
@ 2024-09-20 11:23 ` Anshumali Gaur
2024-09-20 22:30 ` Alexander Sverdlin
2024-09-21 21:38 ` Alexander Sverdlin
2024-09-20 11:23 ` [PATCH 2/4] soc: marvell: rvu-pf: Add PF to AF mailbox communication support Anshumali Gaur
` (2 subsequent siblings)
3 siblings, 2 replies; 14+ messages in thread
From: Anshumali Gaur @ 2024-09-20 11:23 UTC (permalink / raw)
To: conor.dooley, ulf.hansson, arnd, linus.walleij, nikita.shubin,
alexander.sverdlin, vkoul, cyy, krzysztof.kozlowski, linux-kernel,
sgoutham
Cc: Anshumali Gaur
Resource virtualization unit (RVU) on Marvell's Octeon series of
silicons maps HW resources from the network, crypto and other
functional blocks into PCI-compatible physical and virtual functions.
Each functional block in turn has multiple local functions (LFs)
available for provisioning to PCI devices.
RVU supports multiple PCIe SRIOV physical functions (PFs) and virtual
functions (VFs). The RVU admin function (AF) is the entity that
manages all the resources (local functions etc.) in the system.

The functionality of these PFs and VFs depends on which block LFs are
attached to them. Depending on the use case, some PFs might support IO
(i.e. have LFs attached) and some may not. For the use cases where a
PF doesn't (need to) support IO, the PF's driver is limited to the
following functionality:
1. Creating and destroying PCIe SRIOV VFs
2. Mailbox communication between VFs and the admin function (RVU AF)
3. PCIe function level reset (FLR) for VFs

For such PFs this patch series adds a general purpose driver which
supports the above functionality. This avoids duplicating the same
functionality across different RVU PFs.

This patch adds a basic stub PF driver with PCI device init logic and
SRIOV enable/disable support.
Signed-off-by: Anshumali Gaur <agaur@marvell.com>
---
drivers/soc/Kconfig | 1 +
drivers/soc/Makefile | 1 +
drivers/soc/marvell/Kconfig | 19 +++
drivers/soc/marvell/Makefile | 2 +
drivers/soc/marvell/rvu_gen_pf/Makefile | 5 +
drivers/soc/marvell/rvu_gen_pf/gen_pf.c | 159 ++++++++++++++++++++++++
drivers/soc/marvell/rvu_gen_pf/gen_pf.h | 19 +++
7 files changed, 206 insertions(+)
create mode 100644 drivers/soc/marvell/Kconfig
create mode 100644 drivers/soc/marvell/Makefile
create mode 100644 drivers/soc/marvell/rvu_gen_pf/Makefile
create mode 100644 drivers/soc/marvell/rvu_gen_pf/gen_pf.c
create mode 100644 drivers/soc/marvell/rvu_gen_pf/gen_pf.h
diff --git a/drivers/soc/Kconfig b/drivers/soc/Kconfig
index 6a8daeb8c4b9..a5d3770a6acf 100644
--- a/drivers/soc/Kconfig
+++ b/drivers/soc/Kconfig
@@ -15,6 +15,7 @@ source "drivers/soc/imx/Kconfig"
source "drivers/soc/ixp4xx/Kconfig"
source "drivers/soc/litex/Kconfig"
source "drivers/soc/loongson/Kconfig"
+source "drivers/soc/marvell/Kconfig"
source "drivers/soc/mediatek/Kconfig"
source "drivers/soc/microchip/Kconfig"
source "drivers/soc/nuvoton/Kconfig"
diff --git a/drivers/soc/Makefile b/drivers/soc/Makefile
index 2037a8695cb2..b20ec6071302 100644
--- a/drivers/soc/Makefile
+++ b/drivers/soc/Makefile
@@ -20,6 +20,7 @@ obj-y += ixp4xx/
obj-$(CONFIG_SOC_XWAY) += lantiq/
obj-$(CONFIG_LITEX_SOC_CONTROLLER) += litex/
obj-y += loongson/
+obj-y += marvell/
obj-y += mediatek/
obj-y += microchip/
obj-y += nuvoton/
diff --git a/drivers/soc/marvell/Kconfig b/drivers/soc/marvell/Kconfig
new file mode 100644
index 000000000000..b55d3bbfaf2a
--- /dev/null
+++ b/drivers/soc/marvell/Kconfig
@@ -0,0 +1,19 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# MARVELL SoC drivers
+#
+
+menu "Marvell SoC drivers"
+
+config MARVELL_OCTEON_RVU_GEN_PF
+ tristate "Marvell Octeon RVU Generic PF Driver"
+ depends on ARM64 && PCI && OCTEONTX2_AF
+ default n
+ help
+ This driver is used to create and destroy PCIe SRIOV VFs of
+ RVU PFs that don't need to support any I/O functionality. It also
+ enables VFs to communicate with the RVU admin function (AF) and
+ handles PCIe FLR for VFs.
+
+ Say Y here if you have such an RVU PF device.
+endmenu
diff --git a/drivers/soc/marvell/Makefile b/drivers/soc/marvell/Makefile
new file mode 100644
index 000000000000..9a6917393873
--- /dev/null
+++ b/drivers/soc/marvell/Makefile
@@ -0,0 +1,2 @@
+# SPDX-License-Identifier: GPL-2.0
+obj-$(CONFIG_MARVELL_OCTEON_RVU_GEN_PF) += rvu_gen_pf/
diff --git a/drivers/soc/marvell/rvu_gen_pf/Makefile b/drivers/soc/marvell/rvu_gen_pf/Makefile
new file mode 100644
index 000000000000..6c3d2568942b
--- /dev/null
+++ b/drivers/soc/marvell/rvu_gen_pf/Makefile
@@ -0,0 +1,5 @@
+#
+# Makefile for Marvell's Octeon RVU GENERIC PF driver
+#
+obj-$(CONFIG_MARVELL_OCTEON_RVU_GEN_PF) += gen_pf.o
+ccflags-y += -I$(srctree)/drivers/net/ethernet/marvell/octeontx2/af
diff --git a/drivers/soc/marvell/rvu_gen_pf/gen_pf.c b/drivers/soc/marvell/rvu_gen_pf/gen_pf.c
new file mode 100644
index 000000000000..b9ddf3746a44
--- /dev/null
+++ b/drivers/soc/marvell/rvu_gen_pf/gen_pf.c
@@ -0,0 +1,159 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Marvell Octeon RVU Generic Physical Function driver
+ *
+ * Copyright (C) 2024 Marvell.
+ *
+ */
+#include <linux/delay.h>
+#include <linux/interrupt.h>
+#include <linux/irq.h>
+#include <linux/list.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/sysfs.h>
+
+#include "gen_pf.h"
+#include <rvu_trace.h>
+#include <rvu.h>
+
+#define DRV_NAME "rvu_generic_pf"
+
+/* Supported devices */
+static const struct pci_device_id rvu_gen_pf_id_table[] = {
+ { PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, 0xA0F6) },
+ { } /* end of table */
+};
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Marvell Octeon RVU Generic PF Driver");
+MODULE_DEVICE_TABLE(pci, rvu_gen_pf_id_table);
+
+static int rvu_gen_pf_check_pf_usable(struct gen_pf_dev *pfdev)
+{
+ u64 rev;
+
+ rev = readq(pfdev->reg_base + RVU_PF_BLOCK_ADDRX_DISC(BLKADDR_RVUM));
+ rev = (rev >> 12) & 0xFF;
+ /* Check if AF has set up the revision for the RVUM block,
+ * otherwise this driver probe should be deferred
+ * until AF driver comes up.
+ */
+ if (!rev) {
+ dev_warn(pfdev->dev,
+ "AF is not initialized, deferring probe\n");
+ return -EPROBE_DEFER;
+ }
+ return 0;
+}
+
+static int rvu_gen_pf_sriov_enable(struct pci_dev *pdev, int numvfs)
+{
+ int ret;
+
+ ret = pci_enable_sriov(pdev, numvfs);
+ if (ret)
+ return ret;
+
+ return numvfs;
+}
+
+static int rvu_gen_pf_sriov_disable(struct pci_dev *pdev)
+{
+ int numvfs = pci_num_vf(pdev);
+
+ if (!numvfs)
+ return 0;
+
+ pci_disable_sriov(pdev);
+
+ return 0;
+}
+
+static int rvu_gen_pf_sriov_configure(struct pci_dev *pdev, int numvfs)
+{
+ if (numvfs == 0)
+ return rvu_gen_pf_sriov_disable(pdev);
+
+ return rvu_gen_pf_sriov_enable(pdev, numvfs);
+}
+
+static void rvu_gen_pf_remove(struct pci_dev *pdev)
+{
+ struct gen_pf_dev *pfdev = pci_get_drvdata(pdev);
+
+ rvu_gen_pf_sriov_disable(pfdev->pdev);
+ pci_set_drvdata(pdev, NULL);
+
+ pci_release_regions(pdev);
+}
+
+static int rvu_gen_pf_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+{
+ struct device *dev = &pdev->dev;
+ struct gen_pf_dev *pfdev;
+ int err;
+
+ err = pcim_enable_device(pdev);
+ if (err) {
+ dev_err(dev, "Failed to enable PCI device\n");
+ return err;
+ }
+
+ err = pci_request_regions(pdev, DRV_NAME);
+ if (err) {
+ dev_err(dev, "PCI request regions failed %d\n", err);
+ goto err_map_failed;
+ }
+
+ err = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(48));
+ if (err) {
+ dev_err(dev, "DMA mask config failed, abort\n");
+ goto err_release_regions;
+ }
+
+ pci_set_master(pdev);
+
+ pfdev = devm_kzalloc(dev, sizeof(struct gen_pf_dev), GFP_KERNEL);
+ if (!pfdev) {
+ err = -ENOMEM;
+ goto err_release_regions;
+ }
+
+ pci_set_drvdata(pdev, pfdev);
+ pfdev->pdev = pdev;
+ pfdev->dev = dev;
+ pfdev->total_vfs = pci_sriov_get_totalvfs(pdev);
+
+ err = rvu_gen_pf_check_pf_usable(pfdev);
+ if (err)
+ goto err_release_regions;
+
+ return 0;
+
+err_release_regions:
+ pci_release_regions(pdev);
+ pci_set_drvdata(pdev, NULL);
+err_map_failed:
+ pci_disable_device(pdev);
+ return err;
+}
+
+static struct pci_driver rvu_gen_driver = {
+ .name = DRV_NAME,
+ .id_table = rvu_gen_pf_id_table,
+ .probe = rvu_gen_pf_probe,
+ .remove = rvu_gen_pf_remove,
+ .sriov_configure = rvu_gen_pf_sriov_configure,
+};
+
+static int __init rvu_gen_pf_init_module(void)
+{
+ return pci_register_driver(&rvu_gen_driver);
+}
+
+static void __exit rvu_gen_pf_cleanup_module(void)
+{
+ pci_unregister_driver(&rvu_gen_driver);
+}
+
+module_init(rvu_gen_pf_init_module);
+module_exit(rvu_gen_pf_cleanup_module);
diff --git a/drivers/soc/marvell/rvu_gen_pf/gen_pf.h b/drivers/soc/marvell/rvu_gen_pf/gen_pf.h
new file mode 100644
index 000000000000..4cf12e65a526
--- /dev/null
+++ b/drivers/soc/marvell/rvu_gen_pf/gen_pf.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Marvell Octeon RVU Generic Physical Function driver
+ *
+ * Copyright (C) 2024 Marvell.
+ */
+#include <linux/device.h>
+#include <linux/pci.h>
+
+#define RVU_PFFUNC(pf, func) \
+ ((((pf) & RVU_PFVF_PF_MASK) << RVU_PFVF_PF_SHIFT) | \
+ (((func) & RVU_PFVF_FUNC_MASK) << RVU_PFVF_FUNC_SHIFT))
+
+struct gen_pf_dev {
+ struct pci_dev *pdev;
+ struct device *dev;
+ void __iomem *reg_base;
+ int pf;
+ u8 total_vfs;
+};
--
2.25.1
^ permalink raw reply related [flat|nested] 14+ messages in thread

* Re: [PATCH 1/4] soc: marvell: Add a general purpose RVU PF driver
2024-09-20 11:23 ` [PATCH 1/4] soc: marvell: Add a general purpose RVU PF driver Anshumali Gaur
@ 2024-09-20 22:30 ` Alexander Sverdlin
2024-09-21 21:38 ` Alexander Sverdlin
1 sibling, 0 replies; 14+ messages in thread
From: Alexander Sverdlin @ 2024-09-20 22:30 UTC (permalink / raw)
To: Anshumali Gaur, conor.dooley, ulf.hansson, arnd, linus.walleij,
nikita.shubin, vkoul, cyy, krzysztof.kozlowski, linux-kernel,
sgoutham
Hi!
On Fri, 2024-09-20 at 16:53 +0530, Anshumali Gaur wrote:
> Resource virtualization unit (RVU) on Marvell's Octeon series of
> silicons maps HW resources from the network, crypto and other
> functional blocks into PCI-compatible physical and virtual functions.
> Each functional block again has multiple local functions (LFs) for
> provisioning to PCI devices.
> RVU supports multiple PCIe SRIOV physical functions (PFs) and virtual
> functions (VFs). And RVU admin function (AF) is the one which manages
> all the resources (local functions etc) in the system.
>
> Functionality of these PFs and VFs depends on which block LFs are
> attached to them. Depending on usecase some PFs might support IO
> (ie LFs attached) and some may not. For the usecases where PF
> doesn't (need to) support IO, PF's driver will be limited to below
> functionality.
> 1. Creating and destroying of PCIe SRIOV VFs
> 2. Support mailbox communication between VFs and admin function
> (RVU AF)
> 3. PCIe Function level reset (FLR) for VFs
>
> For such PFs this patch series adds a general purpose driver which
> supports above functionality. This will avoid duplicating same
> functionality for different RVU PFs.
>
> This patch adds basic stub PF driver with PCI device init logic and
> SRIOV enable/disable support.
>
> Signed-off-by: Anshumali Gaur <agaur@marvell.com>
Reviewed-by: Alexander Sverdlin <alexander.sverdlin@gmail.com>
--
Alexander Sverdlin.
^ permalink raw reply [flat|nested] 14+ messages in thread

* Re: [PATCH 1/4] soc: marvell: Add a general purpose RVU PF driver
2024-09-20 11:23 ` [PATCH 1/4] soc: marvell: Add a general purpose RVU PF driver Anshumali Gaur
2024-09-20 22:30 ` Alexander Sverdlin
@ 2024-09-21 21:38 ` Alexander Sverdlin
1 sibling, 0 replies; 14+ messages in thread
From: Alexander Sverdlin @ 2024-09-21 21:38 UTC (permalink / raw)
To: Anshumali Gaur, conor.dooley, ulf.hansson, arnd, linus.walleij,
nikita.shubin, vkoul, cyy, krzysztof.kozlowski, linux-kernel,
sgoutham
Hi Anshumali!
On Fri, 2024-09-20 at 16:53 +0530, Anshumali Gaur wrote:
> Resource virtualization unit (RVU) on Marvell's Octeon series of
> silicons maps HW resources from the network, crypto and other
> functional blocks into PCI-compatible physical and virtual functions.
> Each functional block again has multiple local functions (LFs) for
> provisioning to PCI devices.
> RVU supports multiple PCIe SRIOV physical functions (PFs) and virtual
> functions (VFs). And RVU admin function (AF) is the one which manages
> all the resources (local functions etc) in the system.
>
> Functionality of these PFs and VFs depends on which block LFs are
> attached to them. Depending on usecase some PFs might support IO
> (ie LFs attached) and some may not. For the usecases where PF
> doesn't (need to) support IO, PF's driver will be limited to below
> functionality.
> 1. Creating and destroying of PCIe SRIOV VFs
> 2. Support mailbox communication between VFs and admin function
> (RVU AF)
> 3. PCIe Function level reset (FLR) for VFs
>
> For such PFs this patch series adds a general purpose driver which
> supports above functionality. This will avoid duplicating same
> functionality for different RVU PFs.
>
> This patch adds basic stub PF driver with PCI device init logic and
> SRIOV enable/disable support.
>
> Signed-off-by: Anshumali Gaur <agaur@marvell.com>
> ---
>
[]
> diff --git a/drivers/soc/marvell/rvu_gen_pf/gen_pf.h b/drivers/soc/marvell/rvu_gen_pf/gen_pf.h
> new file mode 100644
> index 000000000000..4cf12e65a526
> --- /dev/null
> +++ b/drivers/soc/marvell/rvu_gen_pf/gen_pf.h
> @@ -0,0 +1,19 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/* Marvell Octeon RVU Generic Physical Function driver
> + *
> + * Copyright (C) 2024 Marvell.
> + */
> +#include <linux/device.h>
> +#include <linux/pci.h>
> +
> +#define RVU_PFFUNC(pf, func) \
> + ((((pf) & RVU_PFVF_PF_MASK) << RVU_PFVF_PF_SHIFT) | \
> + (((func) & RVU_PFVF_FUNC_MASK) << RVU_PFVF_FUNC_SHIFT))
> +
> +struct gen_pf_dev {
> + struct pci_dev *pdev;
> + struct device *dev;
> + void __iomem *reg_base;
> + int pf;
> + u8 total_vfs;
> +};
The above struct has strange indentation with tabs and spaces.
--
Alexander Sverdlin.
^ permalink raw reply [flat|nested] 14+ messages in thread
* [PATCH 2/4] soc: marvell: rvu-pf: Add PF to AF mailbox communication support.
2024-09-20 11:23 [PATCH 0/4] soc: marvell: Add a general purpose RVU physical Anshumali Gaur
2024-09-20 11:23 ` [PATCH 1/4] soc: marvell: Add a general purpose RVU PF driver Anshumali Gaur
@ 2024-09-20 11:23 ` Anshumali Gaur
2024-09-21 21:43 ` Alexander Sverdlin
2024-09-24 23:09 ` Alexander Sverdlin
2024-09-20 11:23 ` [PATCH 3/4] soc: marvell: rvu-pf: Add mailbox communication btw RVU VFs and PF Anshumali Gaur
2024-09-20 11:23 ` [PATCH 4/4] soc: marvell: rvu-pf: Handle function level reset (FLR) IRQs for VFs Anshumali Gaur
3 siblings, 2 replies; 14+ messages in thread
From: Anshumali Gaur @ 2024-09-20 11:23 UTC (permalink / raw)
To: conor.dooley, ulf.hansson, arnd, linus.walleij, nikita.shubin,
alexander.sverdlin, vkoul, cyy, krzysztof.kozlowski, linux-kernel,
sgoutham
Cc: Anshumali Gaur
Resource provisioning for virtual functions (VFs) is done by the RVU
admin function (AF). The RVU PF and AF share a memory region which can
be used for communication. This patch adds support for mailbox
communication between the PF and AF; message notification is done via
IRQs.
Example mailbox message types and structures can be found in
drivers/net/ethernet/marvell/octeontx2/af/mbox.h
Signed-off-by: Anshumali Gaur <agaur@marvell.com>
---
drivers/soc/marvell/rvu_gen_pf/gen_pf.c | 263 +++++++++++++++++++++++-
drivers/soc/marvell/rvu_gen_pf/gen_pf.h | 124 +++++++++++
2 files changed, 386 insertions(+), 1 deletion(-)
diff --git a/drivers/soc/marvell/rvu_gen_pf/gen_pf.c b/drivers/soc/marvell/rvu_gen_pf/gen_pf.c
index b9ddf3746a44..c859be1b1651 100644
--- a/drivers/soc/marvell/rvu_gen_pf/gen_pf.c
+++ b/drivers/soc/marvell/rvu_gen_pf/gen_pf.c
@@ -16,6 +16,10 @@
#include <rvu_trace.h>
#include <rvu.h>
+/* PCI BAR numbers */
+#define PCI_CFG_REG_BAR_NUM 2
+#define PCI_MBOX_BAR_NUM 4
+
#define DRV_NAME "rvu_generic_pf"
/* Supported devices */
@@ -45,6 +49,228 @@ static int rvu_gen_pf_check_pf_usable(struct gen_pf_dev *pfdev)
return 0;
}
+static irqreturn_t rvu_gen_pf_pfaf_mbox_intr_handler(int irq, void *pf_irq)
+{
+ struct gen_pf_dev *pfdev = (struct gen_pf_dev *)pf_irq;
+ struct mbox *mw = &pfdev->mbox;
+ struct otx2_mbox_dev *mdev;
+ struct otx2_mbox *mbox;
+ struct mbox_hdr *hdr;
+ u64 mbox_data;
+
+ /* Clear the IRQ */
+ writeq(BIT_ULL(0), pfdev->reg_base + RVU_PF_INT);
+
+ mbox_data = readq(pfdev->reg_base + RVU_PF_PFAF_MBOX0);
+
+ if (mbox_data & MBOX_UP_MSG) {
+ mbox_data &= ~MBOX_UP_MSG;
+ writeq(mbox_data, pfdev->reg_base + RVU_PF_PFAF_MBOX0);
+
+ mbox = &mw->mbox_up;
+ mdev = &mbox->dev[0];
+ otx2_sync_mbox_bbuf(mbox, 0);
+
+ hdr = (struct mbox_hdr *)(mdev->mbase + mbox->rx_start);
+ if (hdr->num_msgs)
+ queue_work(pfdev->mbox_wq, &mw->mbox_up_wrk);
+
+ trace_otx2_msg_interrupt(pfdev->pdev, "UP message from AF to PF",
+ BIT_ULL(0));
+ }
+
+ if (mbox_data & MBOX_DOWN_MSG) {
+ mbox_data &= ~MBOX_DOWN_MSG;
+ writeq(mbox_data, pfdev->reg_base + RVU_PF_PFAF_MBOX0);
+
+ mbox = &mw->mbox;
+ mdev = &mbox->dev[0];
+ otx2_sync_mbox_bbuf(mbox, 0);
+
+ hdr = (struct mbox_hdr *)(mdev->mbase + mbox->rx_start);
+ if (hdr->num_msgs)
+ queue_work(pfdev->mbox_wq, &mw->mbox_wrk);
+
+ trace_otx2_msg_interrupt(pfdev->pdev, "DOWN reply from AF to PF",
+ BIT_ULL(0));
+ }
+ return IRQ_HANDLED;
+}
+
+static void rvu_gen_pf_disable_mbox_intr(struct gen_pf_dev *pfdev)
+{
+ int vector = pci_irq_vector(pfdev->pdev, RVU_PF_INT_VEC_AFPF_MBOX);
+
+ /* Disable AF => PF mailbox IRQ */
+ writeq(BIT_ULL(0), pfdev->reg_base + RVU_PF_INT_ENA_W1C);
+ free_irq(vector, pfdev);
+}
+
+static int rvu_gen_pf_register_mbox_intr(struct gen_pf_dev *pfdev)
+{
+ struct msg_req *req;
+ char *irq_name;
+ int err;
+
+ /* Register mailbox interrupt handler */
+ irq_name = &pfdev->irq_name[RVU_PF_INT_VEC_AFPF_MBOX * NAME_SIZE];
+ snprintf(irq_name, NAME_SIZE, "Generic RVUPFAF Mbox");
+ err = request_irq(pci_irq_vector(pfdev->pdev, RVU_PF_INT_VEC_AFPF_MBOX),
+ rvu_gen_pf_pfaf_mbox_intr_handler, 0, irq_name, pfdev);
+ if (err) {
+ dev_err(pfdev->dev,
+ "GenPF: IRQ registration failed for PFAF mbox irq\n");
+ return err;
+ }
+
+ /* Enable mailbox interrupt for msgs coming from AF.
+ * First clear to avoid spurious interrupts, if any.
+ */
+ writeq(BIT_ULL(0), pfdev->reg_base + RVU_PF_INT);
+ writeq(BIT_ULL(0), pfdev->reg_base + RVU_PF_INT_ENA_W1S);
+
+ /* Check mailbox communication with AF */
+ req = gen_pf_mbox_alloc_msg_ready(&pfdev->mbox);
+ if (!req) {
+ rvu_gen_pf_disable_mbox_intr(pfdev);
+ return -ENOMEM;
+ }
+ err = rvu_gen_pf_sync_mbox_msg(&pfdev->mbox);
+ if (err) {
+ dev_warn(pfdev->dev,
+ "AF not responding to mailbox, deferring probe\n");
+ rvu_gen_pf_disable_mbox_intr(pfdev);
+ return -EPROBE_DEFER;
+ }
+ return 0;
+}
+
+static void rvu_gen_pf_pfaf_mbox_destroy(struct gen_pf_dev *pfdev)
+{
+ struct mbox *mbox = &pfdev->mbox;
+
+ if (pfdev->mbox_wq) {
+ destroy_workqueue(pfdev->mbox_wq);
+ pfdev->mbox_wq = NULL;
+ }
+
+ if (mbox->mbox.hwbase)
+ iounmap((void __iomem *)mbox->mbox.hwbase);
+
+ otx2_mbox_destroy(&mbox->mbox);
+ otx2_mbox_destroy(&mbox->mbox_up);
+}
+
+static void rvu_gen_pf_process_pfaf_mbox_msg(struct gen_pf_dev *pfdev,
+ struct mbox_msghdr *msg)
+{
+ if (msg->id >= MBOX_MSG_MAX) {
+ dev_err(pfdev->dev,
+ "Mbox msg with unknown ID 0x%x\n", msg->id);
+ return;
+ }
+
+ if (msg->sig != OTX2_MBOX_RSP_SIG) {
+ dev_err(pfdev->dev,
+ "Mbox msg with wrong signature %x, ID 0x%x\n",
+ msg->sig, msg->id);
+ return;
+ }
+
+ switch (msg->id) {
+ case MBOX_MSG_READY:
+ pfdev->pcifunc = msg->pcifunc;
+ break;
+ default:
+ if (msg->rc)
+ dev_err(pfdev->dev,
+ "Mbox msg response has err %d, ID 0x%x\n",
+ msg->rc, msg->id);
+ break;
+ }
+}
+
+static void rvu_gen_pf_pfaf_mbox_handler(struct work_struct *work)
+{
+ struct otx2_mbox_dev *mdev;
+ struct gen_pf_dev *pfdev;
+ struct mbox_hdr *rsp_hdr;
+ struct mbox_msghdr *msg;
+ struct otx2_mbox *mbox;
+ struct mbox *af_mbox;
+ int offset, id;
+ u16 num_msgs;
+
+ af_mbox = container_of(work, struct mbox, mbox_wrk);
+ mbox = &af_mbox->mbox;
+ mdev = &mbox->dev[0];
+ rsp_hdr = (struct mbox_hdr *)(mdev->mbase + mbox->rx_start);
+ num_msgs = rsp_hdr->num_msgs;
+
+ offset = mbox->rx_start + ALIGN(sizeof(*rsp_hdr), MBOX_MSG_ALIGN);
+ pfdev = af_mbox->pfvf;
+
+ for (id = 0; id < num_msgs; id++) {
+ msg = (struct mbox_msghdr *)(mdev->mbase + offset);
+ rvu_gen_pf_process_pfaf_mbox_msg(pfdev, msg);
+ offset = mbox->rx_start + msg->next_msgoff;
+ if (mdev->msgs_acked == (num_msgs - 1))
+ __otx2_mbox_reset(mbox, 0);
+ mdev->msgs_acked++;
+ }
+}
+
+static int rvu_gen_pf_pfaf_mbox_init(struct gen_pf_dev *pfdev)
+{
+ struct mbox *mbox = &pfdev->mbox;
+ void __iomem *hwbase;
+ int err;
+
+ mbox->pfvf = pfdev;
+ pfdev->mbox_wq = alloc_ordered_workqueue("otx2_pfaf_mailbox",
+ WQ_HIGHPRI | WQ_MEM_RECLAIM);
+
+ if (!pfdev->mbox_wq)
+ return -ENOMEM;
+
+ /* Mailbox is a reserved memory (in RAM) region shared between
+ * admin function (i.e. AF) and this PF, shouldn't be mapped as
+ * device memory to allow unaligned accesses.
+ */
+
+ hwbase = ioremap_wc(pci_resource_start(pfdev->pdev, PCI_MBOX_BAR_NUM),
+ MBOX_SIZE);
+
+ if (!hwbase) {
+ dev_err(pfdev->dev, "Unable to map PFAF mailbox region\n");
+ err = -ENOMEM;
+ goto exit;
+ }
+
+ err = otx2_mbox_init(&mbox->mbox, hwbase, pfdev->pdev, pfdev->reg_base,
+ MBOX_DIR_PFAF, 1);
+ if (err)
+ goto exit;
+
+ err = otx2_mbox_init(&mbox->mbox_up, hwbase, pfdev->pdev, pfdev->reg_base,
+ MBOX_DIR_PFAF_UP, 1);
+
+ if (err)
+ goto exit;
+
+ err = otx2_mbox_bbuf_init(mbox, pfdev->pdev);
+ if (err)
+ goto exit;
+
+ INIT_WORK(&mbox->mbox_wrk, rvu_gen_pf_pfaf_mbox_handler);
+ mutex_init(&mbox->lock);
+
+ return 0;
+exit:
+ rvu_gen_pf_pfaf_mbox_destroy(pfdev);
+ return err;
+}
+
static int rvu_gen_pf_sriov_enable(struct pci_dev *pdev, int numvfs)
{
int ret;
@@ -90,6 +316,7 @@ static int rvu_gen_pf_probe(struct pci_dev *pdev, const struct pci_device_id *id
{
struct device *dev = &pdev->dev;
struct gen_pf_dev *pfdev;
+ int num_vec;
int err;
err = pcim_enable_device(pdev);
@@ -122,13 +349,47 @@ static int rvu_gen_pf_probe(struct pci_dev *pdev, const struct pci_device_id *id
pfdev->pdev = pdev;
pfdev->dev = dev;
pfdev->total_vfs = pci_sriov_get_totalvfs(pdev);
+ num_vec = pci_msix_vec_count(pdev);
+ pfdev->irq_name = devm_kmalloc_array(&pfdev->pdev->dev, num_vec, NAME_SIZE,
+ GFP_KERNEL);
+
+ /* Map CSRs */
+ pfdev->reg_base = pcim_iomap(pdev, PCI_CFG_REG_BAR_NUM, 0);
+ if (!pfdev->reg_base) {
+ dev_err(dev, "Unable to map physical function CSRs, aborting\n");
+ err = -ENOMEM;
+ goto err_release_regions;
+ }
err = rvu_gen_pf_check_pf_usable(pfdev);
if (err)
- goto err_release_regions;
+ goto err_pcim_iounmap;
+
+ err = pci_alloc_irq_vectors(pfdev->pdev, num_vec, num_vec, PCI_IRQ_MSIX);
+ if (err < 0) {
+ dev_err(dev, "%s: Failed to alloc %d IRQ vectors\n",
+ __func__, num_vec);
+ goto err_pcim_iounmap;
+ }
+
+ /* Init PF <=> AF mailbox stuff */
+ err = rvu_gen_pf_pfaf_mbox_init(pfdev);
+ if (err)
+ goto err_free_irq_vectors;
+
+ /* Register mailbox interrupt */
+ err = rvu_gen_pf_register_mbox_intr(pfdev);
+ if (err)
+ goto err_mbox_destroy;
return 0;
+err_mbox_destroy:
+ rvu_gen_pf_pfaf_mbox_destroy(pfdev);
+err_free_irq_vectors:
+ pci_free_irq_vectors(pfdev->pdev);
+err_pcim_iounmap:
+ pcim_iounmap(pdev, pfdev->reg_base);
err_release_regions:
pci_release_regions(pdev);
pci_set_drvdata(pdev, NULL);
diff --git a/drivers/soc/marvell/rvu_gen_pf/gen_pf.h b/drivers/soc/marvell/rvu_gen_pf/gen_pf.h
index 4cf12e65a526..40847e5bbedc 100644
--- a/drivers/soc/marvell/rvu_gen_pf/gen_pf.h
+++ b/drivers/soc/marvell/rvu_gen_pf/gen_pf.h
@@ -5,15 +5,139 @@
*/
#include <linux/device.h>
#include <linux/pci.h>
+#include <rvu_trace.h>
+#include "mbox.h"
#define RVU_PFFUNC(pf, func) \
((((pf) & RVU_PFVF_PF_MASK) << RVU_PFVF_PF_SHIFT) | \
(((func) & RVU_PFVF_FUNC_MASK) << RVU_PFVF_FUNC_SHIFT))
+#define NAME_SIZE 32
+
+struct gen_pf_dev;
+
+struct mbox {
+ struct otx2_mbox mbox;
+ struct work_struct mbox_wrk;
+ struct otx2_mbox mbox_up;
+ struct work_struct mbox_up_wrk;
+ struct gen_pf_dev *pfvf;
+ void *bbuf_base; /* Bounce buffer for mbox memory */
+ struct mutex lock; /* serialize mailbox access */
+ int num_msgs; /* mbox number of messages */
+ int up_num_msgs; /* mbox_up number of messages */
+};
+
struct gen_pf_dev {
struct pci_dev *pdev;
struct device *dev;
void __iomem *reg_base;
+ char *irq_name;
+ struct work_struct mbox_wrk;
+ struct work_struct mbox_wrk_up;
+
+ /* Mbox */
+ struct mbox mbox;
+ struct workqueue_struct *mbox_wq;
+
int pf;
+ u16 pcifunc; /* RVU PF_FUNC */
u8 total_vfs;
};
+
+/* Mbox APIs */
+static inline int rvu_gen_pf_sync_mbox_msg(struct mbox *mbox)
+{
+ int err;
+
+ if (!otx2_mbox_nonempty(&mbox->mbox, 0))
+ return 0;
+ otx2_mbox_msg_send(&mbox->mbox, 0);
+ err = otx2_mbox_wait_for_rsp(&mbox->mbox, 0);
+ if (err)
+ return err;
+
+ return otx2_mbox_check_rsp_msgs(&mbox->mbox, 0);
+}
+
+static inline int rvu_gen_pf_sync_mbox_up_msg(struct mbox *mbox, int devid)
+{
+ int err;
+
+ if (!otx2_mbox_nonempty(&mbox->mbox_up, devid))
+ return 0;
+ otx2_mbox_msg_send_up(&mbox->mbox_up, devid);
+ err = otx2_mbox_wait_for_rsp(&mbox->mbox_up, devid);
+ if (err)
+ return err;
+
+ return otx2_mbox_check_rsp_msgs(&mbox->mbox_up, devid);
+}
+
+#define M(_name, _id, _fn_name, _req_type, _rsp_type) \
+static struct _req_type __maybe_unused \
+*gen_pf_mbox_alloc_msg_ ## _fn_name(struct mbox *mbox) \
+{ \
+ struct _req_type *req; \
+ u16 id = _id; \
+ \
+ req = (struct _req_type *)otx2_mbox_alloc_msg_rsp( \
+ &mbox->mbox, 0, sizeof(struct _req_type), \
+ sizeof(struct _rsp_type)); \
+ if (!req) \
+ return NULL; \
+ req->hdr.sig = OTX2_MBOX_REQ_SIG; \
+ req->hdr.id = id; \
+ trace_otx2_msg_alloc(mbox->mbox.pdev, id, sizeof(*req)); \
+ return req; \
+}
+
+MBOX_MESSAGES
+#undef M
+
+/* Mbox bounce buffer APIs */
+static inline int otx2_mbox_bbuf_init(struct mbox *mbox, struct pci_dev *pdev)
+{
+ struct otx2_mbox *otx2_mbox;
+ struct otx2_mbox_dev *mdev;
+
+ mbox->bbuf_base = devm_kmalloc(&pdev->dev, MBOX_SIZE, GFP_KERNEL);
+
+ if (!mbox->bbuf_base)
+ return -ENOMEM;
+
+ /* Overwrite mbox mbase to point to bounce buffer, so that PF/VF
+ * prepare all mbox messages in bounce buffer instead of directly
+ * in hw mbox memory.
+ */
+ otx2_mbox = &mbox->mbox;
+ mdev = &otx2_mbox->dev[0];
+ mdev->mbase = mbox->bbuf_base;
+
+ otx2_mbox = &mbox->mbox_up;
+ mdev = &otx2_mbox->dev[0];
+ mdev->mbase = mbox->bbuf_base;
+ return 0;
+}
+
+static inline void otx2_sync_mbox_bbuf(struct otx2_mbox *mbox, int devid)
+{
+ u16 msgs_offset = ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
+ void *hw_mbase = mbox->hwbase + (devid * MBOX_SIZE);
+ struct otx2_mbox_dev *mdev = &mbox->dev[devid];
+ struct mbox_hdr *hdr;
+ u64 msg_size;
+
+ if (mdev->mbase == hw_mbase)
+ return;
+
+ hdr = hw_mbase + mbox->rx_start;
+ msg_size = hdr->msg_size;
+
+ if (msg_size > mbox->rx_size - msgs_offset)
+ msg_size = mbox->rx_size - msgs_offset;
+
+ /* Copy mbox messages from mbox memory to bounce buffer */
+ memcpy(mdev->mbase + mbox->rx_start,
+ hw_mbase + mbox->rx_start, msg_size + msgs_offset);
+}
--
2.25.1
^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [PATCH 2/4] soc: marvell: rvu-pf: Add PF to AF mailbox communication support
2024-09-20 11:23 ` [PATCH 2/4] soc: marvell: rvu-pf: Add PF to AF mailbox communication support Anshumali Gaur
@ 2024-09-21 21:43 ` Alexander Sverdlin
2024-09-24 23:09 ` Alexander Sverdlin
1 sibling, 0 replies; 14+ messages in thread
From: Alexander Sverdlin @ 2024-09-21 21:43 UTC (permalink / raw)
To: Anshumali Gaur, conor.dooley, ulf.hansson, arnd, linus.walleij,
nikita.shubin, vkoul, cyy, krzysztof.kozlowski, linux-kernel,
sgoutham
Hi Anshumali!
On Fri, 2024-09-20 at 16:53 +0530, Anshumali Gaur wrote:
> Resource provisioning for virtual functions (VFs) is done by RVU admin
> function (AF). RVU PF and AF shares a memory region which can be used
> for communication. This patch adds support for mailbox communication
> between PF and AF, notification of messages is via IRQs.
>
> Example mailbox messages types and structures can be found at
> drivers/net/ethernet/marvell/octeontx2/af/mbox.h
>
> Signed-off-by: Anshumali Gaur <agaur@marvell.com>
>
[]
> +#define M(_name, _id, _fn_name, _req_type, _rsp_type) \
> +static struct _req_type __maybe_unused \
> +*gen_pf_mbox_alloc_msg_ ## _fn_name(struct mbox *mbox) \
> +{ \
> + struct _req_type *req; \
> + u16 id = _id; \
> + \
> + req = (struct _req_type *)otx2_mbox_alloc_msg_rsp( \
> + &mbox->mbox, 0, sizeof(struct _req_type), \
> + sizeof(struct _rsp_type)); \
> + if (!req) \
> + return NULL; \
> + req->hdr.sig = OTX2_MBOX_REQ_SIG; \
> + req->hdr.id = id; \
> + trace_otx2_msg_alloc(mbox->mbox.pdev, id, sizeof(*req)); \
> + return req; \
> +}
> +
> +MBOX_MESSAGES
> +#undef M
While checkpatch is wondering about _name:
WARNING: Argument '_name' is not used in function-like macro
#399: FILE: drivers/soc/marvell/rvu_gen_pf/gen_pf.h:77:
+#define M(_name, _id, _fn_name, _req_type, _rsp_type) \
... I ask myself what actually happens here with "M" and "MBOX_MESSAGES"?
--
Alexander Sverdlin.
* Re: [PATCH 2/4] soc: marvell: rvu-pf: Add PF to AF mailbox communication support
2024-09-20 11:23 ` [PATCH 2/4] soc: marvell: rvu-pf: Add PF to AF mailbox communication support Anshumali Gaur
2024-09-21 21:43 ` Alexander Sverdlin
@ 2024-09-24 23:09 ` Alexander Sverdlin
1 sibling, 0 replies; 14+ messages in thread
From: Alexander Sverdlin @ 2024-09-24 23:09 UTC (permalink / raw)
To: Anshumali Gaur, conor.dooley, ulf.hansson, arnd, linus.walleij,
nikita.shubin, vkoul, cyy, krzysztof.kozlowski, linux-kernel,
sgoutham
Hi Anshumali,
thanks for explanation!
On Fri, 2024-09-20 at 16:53 +0530, Anshumali Gaur wrote:
> Resource provisioning for virtual functions (VFs) is done by RVU admin
> function (AF). RVU PF and AF shares a memory region which can be used
> for communication. This patch adds support for mailbox communication
> between PF and AF, notification of messages is via IRQs.
>
> Example mailbox messages types and structures can be found at
> drivers/net/ethernet/marvell/octeontx2/af/mbox.h
>
> Signed-off-by: Anshumali Gaur <agaur@marvell.com>
Reviewed-by: Alexander Sverdlin <alexander.sverdlin@gmail.com>
> ---
> drivers/soc/marvell/rvu_gen_pf/gen_pf.c | 263 +++++++++++++++++++++++-
> drivers/soc/marvell/rvu_gen_pf/gen_pf.h | 124 +++++++++++
> 2 files changed, 386 insertions(+), 1 deletion(-)
--
Alexander Sverdlin.
* [PATCH 3/4] soc: marvell: rvu-pf: Add mailbox communication btw RVU VFs and PF.
2024-09-20 11:23 [PATCH 0/4] soc: marvell: Add a general purpose RVU physical Anshumali Gaur
2024-09-20 11:23 ` [PATCH 1/4] soc: marvell: Add a general purpose RVU PF driver Anshumali Gaur
2024-09-20 11:23 ` [PATCH 2/4] soc: marvell: rvu-pf: Add PF to AF mailbox communication support Anshumali Gaur
@ 2024-09-20 11:23 ` Anshumali Gaur
2024-09-21 22:22 ` Alexander Sverdlin
2024-09-20 11:23 ` [PATCH 4/4] soc: marvell: rvu-pf: Handle function level reset (FLR) IRQs for VFs Anshumali Gaur
3 siblings, 1 reply; 14+ messages in thread
From: Anshumali Gaur @ 2024-09-20 11:23 UTC (permalink / raw)
To: conor.dooley, ulf.hansson, arnd, linus.walleij, nikita.shubin,
alexander.sverdlin, vkoul, cyy, krzysztof.kozlowski, linux-kernel,
sgoutham
Cc: Anshumali Gaur
RVU PF shares a dedicated memory region with each of its VFs.
This memory region is used to establish communication between them.
Since the Admin function (AF) handles resource management, the PF
doesn't process the messages sent by VFs; it acts as an intermediary
device, forwarding them to the AF.
Hardware doesn't support direct communication between AF and VFs.
Signed-off-by: Anshumali Gaur <agaur@marvell.com>
---
drivers/soc/marvell/rvu_gen_pf/gen_pf.c | 437 ++++++++++++++++++++++++
drivers/soc/marvell/rvu_gen_pf/gen_pf.h | 2 +
2 files changed, 439 insertions(+)
diff --git a/drivers/soc/marvell/rvu_gen_pf/gen_pf.c b/drivers/soc/marvell/rvu_gen_pf/gen_pf.c
index c859be1b1651..624c55123a19 100644
--- a/drivers/soc/marvell/rvu_gen_pf/gen_pf.c
+++ b/drivers/soc/marvell/rvu_gen_pf/gen_pf.c
@@ -31,6 +31,11 @@ MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Marvell Octeon RVU Generic PF Driver");
MODULE_DEVICE_TABLE(pci, rvu_gen_pf_id_table);
+inline int rvu_get_pf(u16 pcifunc)
+{
+ return (pcifunc >> RVU_PFVF_PF_SHIFT) & RVU_PFVF_PF_MASK;
+}
+
static int rvu_gen_pf_check_pf_usable(struct gen_pf_dev *pfdev)
{
u64 rev;
@@ -49,6 +54,117 @@ static int rvu_gen_pf_check_pf_usable(struct gen_pf_dev *pfdev)
return 0;
}
+static void rvu_gen_pf_forward_msg_pfvf(struct otx2_mbox_dev *mdev,
+ struct otx2_mbox *pfvf_mbox, void *bbuf_base,
+ int devid)
+{
+ struct otx2_mbox_dev *src_mdev = mdev;
+ int offset;
+
+ /* Msgs are already copied, trigger VF's mbox irq */
+ smp_wmb();
+
+ otx2_mbox_wait_for_zero(pfvf_mbox, devid);
+ offset = pfvf_mbox->trigger | (devid << pfvf_mbox->tr_shift);
+ writeq(MBOX_DOWN_MSG, (void __iomem *)pfvf_mbox->reg_base + offset);
+
+ /* Restore VF's mbox bounce buffer region address */
+ src_mdev->mbase = bbuf_base;
+}
+
+static int rvu_gen_pf_forward_vf_mbox_msgs(struct gen_pf_dev *pfdev,
+ struct otx2_mbox *src_mbox,
+ int dir, int vf, int num_msgs)
+{
+ struct otx2_mbox_dev *src_mdev, *dst_mdev;
+ struct mbox_hdr *mbox_hdr;
+ struct mbox_hdr *req_hdr;
+ struct mbox *dst_mbox;
+ int dst_size, err;
+
+ if (dir == MBOX_DIR_PFAF) {
+ /* Set VF's mailbox memory as PF's bounce buffer memory, so
+ * that explicit copying of VF's msgs to PF=>AF mbox region
+ * and AF=>PF responses to VF's mbox region can be avoided.
+ */
+ src_mdev = &src_mbox->dev[vf];
+ mbox_hdr = src_mbox->hwbase +
+ src_mbox->rx_start + (vf * MBOX_SIZE);
+
+ dst_mbox = &pfdev->mbox;
+ dst_size = dst_mbox->mbox.tx_size -
+ ALIGN(sizeof(*mbox_hdr), MBOX_MSG_ALIGN);
+ /* Check if msgs fit into destination area and has valid size */
+ if (mbox_hdr->msg_size > dst_size || !mbox_hdr->msg_size)
+ return -EINVAL;
+
+ dst_mdev = &dst_mbox->mbox.dev[0];
+
+ mutex_lock(&pfdev->mbox.lock);
+ dst_mdev->mbase = src_mdev->mbase;
+ dst_mdev->msg_size = mbox_hdr->msg_size;
+ dst_mdev->num_msgs = num_msgs;
+ err = rvu_gen_pf_sync_mbox_msg(dst_mbox);
+ /* Error code -EIO indicate there is a communication failure
+ * to the AF. Rest of the error codes indicate that AF processed
+ * VF messages and set the error codes in response messages
+ * (if any) so simply forward responses to VF.
+ */
+ if (err == -EIO) {
+ dev_warn(pfdev->dev,
+ "AF not responding to VF%d messages\n", vf);
+ /* restore PF mbase and exit */
+ dst_mdev->mbase = pfdev->mbox.bbuf_base;
+ mutex_unlock(&pfdev->mbox.lock);
+ return err;
+ }
+ /* At this point, all the VF messages sent to AF are acked
+ * with proper responses and responses are copied to VF
+ * mailbox hence raise interrupt to VF.
+ */
+ req_hdr = (struct mbox_hdr *)(dst_mdev->mbase +
+ dst_mbox->mbox.rx_start);
+ req_hdr->num_msgs = num_msgs;
+
+ rvu_gen_pf_forward_msg_pfvf(dst_mdev, &pfdev->mbox_pfvf[0].mbox,
+ pfdev->mbox.bbuf_base, vf);
+ mutex_unlock(&pfdev->mbox.lock);
+ } else if (dir == MBOX_DIR_PFVF_UP) {
+ src_mdev = &src_mbox->dev[0];
+ mbox_hdr = src_mbox->hwbase + src_mbox->rx_start;
+ req_hdr = (struct mbox_hdr *)(src_mdev->mbase +
+ src_mbox->rx_start);
+ req_hdr->num_msgs = num_msgs;
+
+ dst_mbox = &pfdev->mbox_pfvf[0];
+ dst_size = dst_mbox->mbox_up.tx_size -
+ ALIGN(sizeof(*mbox_hdr), MBOX_MSG_ALIGN);
+ /* Check if msgs fit into destination area */
+ if (mbox_hdr->msg_size > dst_size)
+ return -EINVAL;
+ dst_mdev = &dst_mbox->mbox_up.dev[vf];
+ dst_mdev->mbase = src_mdev->mbase;
+ dst_mdev->msg_size = mbox_hdr->msg_size;
+ dst_mdev->num_msgs = mbox_hdr->num_msgs;
+ err = rvu_gen_pf_sync_mbox_up_msg(dst_mbox, vf);
+ if (err) {
+ dev_warn(pfdev->dev,
+ "VF%d is not responding to mailbox\n", vf);
+ return err;
+ }
+ } else if (dir == MBOX_DIR_VFPF_UP) {
+ req_hdr = (struct mbox_hdr *)(src_mbox->dev[0].mbase +
+ src_mbox->rx_start);
+ req_hdr->num_msgs = num_msgs;
+ rvu_gen_pf_forward_msg_pfvf(&pfdev->mbox_pfvf->mbox_up.dev[vf],
+ &pfdev->mbox.mbox_up,
+ pfdev->mbox_pfvf[vf].bbuf_base,
+ 0);
+ }
+
+ return 0;
+}
+
static irqreturn_t rvu_gen_pf_pfaf_mbox_intr_handler(int irq, void *pf_irq)
{
struct gen_pf_dev *pfdev = (struct gen_pf_dev *)pf_irq;
@@ -190,6 +306,39 @@ static void rvu_gen_pf_process_pfaf_mbox_msg(struct gen_pf_dev *pfdev,
}
}
+static void rvu_gen_pf_pfaf_mbox_up_handler(struct work_struct *work)
+{
+ struct mbox *af_mbox = container_of(work, struct mbox, mbox_up_wrk);
+ struct otx2_mbox *mbox = &af_mbox->mbox_up;
+ struct otx2_mbox_dev *mdev = &mbox->dev[0];
+ struct gen_pf_dev *pfdev = af_mbox->pfvf;
+ int offset, id, devid = 0;
+ struct mbox_hdr *rsp_hdr;
+ struct mbox_msghdr *msg;
+ u16 num_msgs;
+
+ rsp_hdr = (struct mbox_hdr *)(mdev->mbase + mbox->rx_start);
+ num_msgs = rsp_hdr->num_msgs;
+
+ offset = mbox->rx_start + ALIGN(sizeof(*rsp_hdr), MBOX_MSG_ALIGN);
+
+ for (id = 0; id < num_msgs; id++) {
+ msg = (struct mbox_msghdr *)(mdev->mbase + offset);
+
+ devid = msg->pcifunc & RVU_PFVF_FUNC_MASK;
+ offset = mbox->rx_start + msg->next_msgoff;
+ }
+ /* Forward to VF iff VFs are really present */
+ if (devid && pci_num_vf(pfdev->pdev)) {
+ rvu_gen_pf_forward_vf_mbox_msgs(pfdev, &pfdev->mbox.mbox_up,
+ MBOX_DIR_PFVF_UP, devid - 1,
+ num_msgs);
+ return;
+ }
+
+ otx2_mbox_msg_send(mbox, 0);
+}
+
static void rvu_gen_pf_pfaf_mbox_handler(struct work_struct *work)
{
struct otx2_mbox_dev *mdev;
@@ -263,6 +412,7 @@ static int rvu_gen_pf_pfaf_mbox_init(struct gen_pf_dev *pfdev)
goto exit;
INIT_WORK(&mbox->mbox_wrk, rvu_gen_pf_pfaf_mbox_handler);
+ INIT_WORK(&mbox->mbox_up_wrk, rvu_gen_pf_pfaf_mbox_up_handler);
mutex_init(&mbox->lock);
return 0;
@@ -271,19 +421,303 @@ static int rvu_gen_pf_pfaf_mbox_init(struct gen_pf_dev *pfdev)
return err;
}
+static void rvu_gen_pf_pfvf_mbox_handler(struct work_struct *work)
+{
+ struct mbox_msghdr *msg = NULL;
+ int offset, vf_idx, id, err;
+ struct otx2_mbox_dev *mdev;
+ struct gen_pf_dev *pfdev;
+ struct mbox_hdr *req_hdr;
+ struct otx2_mbox *mbox;
+ struct mbox *vf_mbox;
+
+ vf_mbox = container_of(work, struct mbox, mbox_wrk);
+ pfdev = vf_mbox->pfvf;
+ vf_idx = vf_mbox - pfdev->mbox_pfvf;
+
+ mbox = &pfdev->mbox_pfvf[0].mbox;
+ mdev = &mbox->dev[vf_idx];
+ req_hdr = (struct mbox_hdr *)(mdev->mbase + mbox->rx_start);
+
+ offset = ALIGN(sizeof(*req_hdr), MBOX_MSG_ALIGN);
+
+ for (id = 0; id < vf_mbox->num_msgs; id++) {
+ msg = (struct mbox_msghdr *)(mdev->mbase + mbox->rx_start +
+ offset);
+
+ if (msg->sig != OTX2_MBOX_REQ_SIG)
+ goto inval_msg;
+
+ /* Set VF's number in each of the msg */
+ msg->pcifunc &= ~RVU_PFVF_FUNC_MASK;
+ msg->pcifunc |= (vf_idx + 1) & RVU_PFVF_FUNC_MASK;
+ offset = msg->next_msgoff;
+ }
+ err = rvu_gen_pf_forward_vf_mbox_msgs(pfdev, mbox, MBOX_DIR_PFAF, vf_idx,
+ vf_mbox->num_msgs);
+ if (err)
+ goto inval_msg;
+ return;
+
+inval_msg:
+ if (!msg)
+ return;
+
+ otx2_reply_invalid_msg(mbox, vf_idx, 0, msg->id);
+ otx2_mbox_msg_send(mbox, vf_idx);
+}
+
+static int rvu_gen_pf_pfvf_mbox_init(struct gen_pf_dev *pfdev, int numvfs)
+{
+ void __iomem *hwbase;
+ struct mbox *mbox;
+ int err, vf;
+ u64 base;
+
+ if (!numvfs)
+ return -EINVAL;
+
+ pfdev->mbox_pfvf = devm_kcalloc(&pfdev->pdev->dev, numvfs,
+ sizeof(struct mbox), GFP_KERNEL);
+
+ if (!pfdev->mbox_pfvf)
+ return -ENOMEM;
+
+ pfdev->mbox_pfvf_wq = alloc_workqueue("otx2_pfvf_mailbox",
+ WQ_UNBOUND | WQ_HIGHPRI |
+ WQ_MEM_RECLAIM, 0);
+ if (!pfdev->mbox_pfvf_wq)
+ return -ENOMEM;
+
+ /* PF <-> VF mailbox region follows after
+ * PF <-> AF mailbox region.
+ */
+ base = pci_resource_start(pfdev->pdev, PCI_MBOX_BAR_NUM) + MBOX_SIZE;
+
+ hwbase = ioremap_wc(base, MBOX_SIZE * pfdev->total_vfs);
+ if (!hwbase) {
+ err = -ENOMEM;
+ goto free_wq;
+ }
+
+ mbox = &pfdev->mbox_pfvf[0];
+ err = otx2_mbox_init(&mbox->mbox, hwbase, pfdev->pdev, pfdev->reg_base,
+ MBOX_DIR_PFVF, numvfs);
+ if (err)
+ goto free_iomem;
+
+ err = otx2_mbox_init(&mbox->mbox_up, hwbase, pfdev->pdev, pfdev->reg_base,
+ MBOX_DIR_PFVF_UP, numvfs);
+ if (err)
+ goto free_iomem;
+
+ for (vf = 0; vf < numvfs; vf++) {
+ mbox->pfvf = pfdev;
+ INIT_WORK(&mbox->mbox_wrk, rvu_gen_pf_pfvf_mbox_handler);
+ mbox++;
+ }
+
+ return 0;
+
+free_iomem:
+ if (hwbase)
+ iounmap(hwbase);
+free_wq:
+ destroy_workqueue(pfdev->mbox_pfvf_wq);
+ return err;
+}
+
+static void rvu_gen_pf_pfvf_mbox_destroy(struct gen_pf_dev *pfdev)
+{
+ struct mbox *mbox = &pfdev->mbox_pfvf[0];
+
+ if (!mbox)
+ return;
+
+ if (pfdev->mbox_pfvf_wq) {
+ destroy_workqueue(pfdev->mbox_pfvf_wq);
+ pfdev->mbox_pfvf_wq = NULL;
+ }
+
+ if (mbox->mbox.hwbase)
+ iounmap((void __iomem *)mbox->mbox.hwbase);
+
+ otx2_mbox_destroy(&mbox->mbox);
+}
+
+static void rvu_gen_pf_enable_pfvf_mbox_intr(struct gen_pf_dev *pfdev, int numvfs)
+{
+ /* Clear PF <=> VF mailbox IRQ */
+ writeq(~0ull, pfdev->reg_base + RVU_PF_VFPF_MBOX_INTX(0));
+ writeq(~0ull, pfdev->reg_base + RVU_PF_VFPF_MBOX_INTX(1));
+
+ /* Enable PF <=> VF mailbox IRQ */
+ writeq(INTR_MASK(numvfs), pfdev->reg_base + RVU_PF_VFPF_MBOX_INT_ENA_W1SX(0));
+ if (numvfs > 64) {
+ numvfs -= 64;
+ writeq(INTR_MASK(numvfs), pfdev->reg_base + RVU_PF_VFPF_MBOX_INT_ENA_W1SX(1));
+ }
+}
+
+static void rvu_gen_pf_disable_pfvf_mbox_intr(struct gen_pf_dev *pfdev, int numvfs)
+{
+ int vector;
+
+ /* Disable PF <=> VF mailbox IRQ */
+ writeq(~0ull, pfdev->reg_base + RVU_PF_VFPF_MBOX_INT_ENA_W1CX(0));
+ writeq(~0ull, pfdev->reg_base + RVU_PF_VFPF_MBOX_INT_ENA_W1CX(1));
+
+ writeq(~0ull, pfdev->reg_base + RVU_PF_VFPF_MBOX_INTX(0));
+ vector = pci_irq_vector(pfdev->pdev, RVU_PF_INT_VEC_VFPF_MBOX0);
+ free_irq(vector, pfdev);
+
+ if (numvfs > 64) {
+ writeq(~0ull, pfdev->reg_base + RVU_PF_VFPF_MBOX_INTX(1));
+ vector = pci_irq_vector(pfdev->pdev, RVU_PF_INT_VEC_VFPF_MBOX1);
+ free_irq(vector, pfdev);
+ }
+}
+
+static void rvu_gen_pf_queue_vf_work(struct mbox *mw, struct workqueue_struct *mbox_wq,
+ int first, int mdevs, u64 intr)
+{
+ struct otx2_mbox_dev *mdev;
+ struct otx2_mbox *mbox;
+ struct mbox_hdr *hdr;
+ int i;
+
+ for (i = first; i < mdevs; i++) {
+ /* start from 0 */
+ if (!(intr & BIT_ULL(i - first)))
+ continue;
+
+ mbox = &mw->mbox;
+ mdev = &mbox->dev[i];
+ hdr = mdev->mbase + mbox->rx_start;
+ /* The hdr->num_msgs is set to zero immediately in the interrupt
+ * handler to ensure that it holds a correct value next time
+ * when the interrupt handler is called. pf->mw[i].num_msgs
+ * holds the data for use in otx2_pfvf_mbox_handler and
+ * pf->mw[i].up_num_msgs holds the data for use in
+ * otx2_pfvf_mbox_up_handler.
+ */
+ if (hdr->num_msgs) {
+ mw[i].num_msgs = hdr->num_msgs;
+ hdr->num_msgs = 0;
+ queue_work(mbox_wq, &mw[i].mbox_wrk);
+ }
+
+ mbox = &mw->mbox_up;
+ mdev = &mbox->dev[i];
+ hdr = mdev->mbase + mbox->rx_start;
+ if (hdr->num_msgs) {
+ mw[i].up_num_msgs = hdr->num_msgs;
+ hdr->num_msgs = 0;
+ queue_work(mbox_wq, &mw[i].mbox_up_wrk);
+ }
+ }
+}
+
+static irqreturn_t rvu_gen_pf_pfvf_mbox_intr_handler(int irq, void *pf_irq)
+{
+ struct gen_pf_dev *pfdev = (struct gen_pf_dev *)(pf_irq);
+ int vfs = pfdev->total_vfs;
+ struct mbox *mbox;
+ u64 intr;
+
+ mbox = pfdev->mbox_pfvf;
+ /* Handle VF interrupts */
+ if (vfs > 64) {
+ intr = readq(pfdev->reg_base + RVU_PF_VFPF_MBOX_INTX(1));
+ writeq(intr, pfdev->reg_base + RVU_PF_VFPF_MBOX_INTX(1));
+ rvu_gen_pf_queue_vf_work(mbox, pfdev->mbox_pfvf_wq, 64, vfs, intr);
+ if (intr)
+ trace_otx2_msg_interrupt(mbox->mbox.pdev, "VF(s) to PF", intr);
+ vfs = 64;
+ }
+
+ intr = readq(pfdev->reg_base + RVU_PF_VFPF_MBOX_INTX(0));
+ writeq(intr, pfdev->reg_base + RVU_PF_VFPF_MBOX_INTX(0));
+
+ rvu_gen_pf_queue_vf_work(mbox, pfdev->mbox_pfvf_wq, 0, vfs, intr);
+
+ if (intr)
+ trace_otx2_msg_interrupt(mbox->mbox.pdev, "VF(s) to PF", intr);
+
+ return IRQ_HANDLED;
+}
+
+static int rvu_gen_pf_register_pfvf_mbox_intr(struct gen_pf_dev *pfdev, int numvfs)
+{
+ char *irq_name;
+ int err;
+
+ /* Register MBOX0 interrupt handler */
+ irq_name = &pfdev->irq_name[RVU_PF_INT_VEC_VFPF_MBOX0 * NAME_SIZE];
+ if (pfdev->pcifunc)
+ snprintf(irq_name, NAME_SIZE,
+ "Generic RVUPF%d_VF Mbox0", rvu_get_pf(pfdev->pcifunc));
+ else
+ snprintf(irq_name, NAME_SIZE, "Generic RVUPF_VF Mbox0");
+ err = request_irq(pci_irq_vector(pfdev->pdev, RVU_PF_INT_VEC_VFPF_MBOX0),
+ rvu_gen_pf_pfvf_mbox_intr_handler, 0, irq_name, pfdev);
+ if (err) {
+ dev_err(pfdev->dev,
+ "RVUPF: IRQ registration failed for PFVF mbox0 irq\n");
+ return err;
+ }
+
+ if (numvfs > 64) {
+ /* Register MBOX1 interrupt handler */
+ irq_name = &pfdev->irq_name[RVU_PF_INT_VEC_VFPF_MBOX1 * NAME_SIZE];
+ if (pfdev->pcifunc)
+ snprintf(irq_name, NAME_SIZE,
+ "Generic RVUPF%d_VF Mbox1", rvu_get_pf(pfdev->pcifunc));
+ else
+ snprintf(irq_name, NAME_SIZE, "Generic RVUPF_VF Mbox1");
+ err = request_irq(pci_irq_vector(pfdev->pdev,
+ RVU_PF_INT_VEC_VFPF_MBOX1),
+ rvu_gen_pf_pfvf_mbox_intr_handler,
+ 0, irq_name, pfdev);
+ if (err) {
+ dev_err(pfdev->dev,
+ "RVUPF: IRQ registration failed for PFVF mbox1 irq\n");
+ return err;
+ }
+ }
+
+ rvu_gen_pf_enable_pfvf_mbox_intr(pfdev, numvfs);
+
+ return 0;
+}
+
static int rvu_gen_pf_sriov_enable(struct pci_dev *pdev, int numvfs)
{
+ struct gen_pf_dev *pfdev = pci_get_drvdata(pdev);
int ret;
+ /* Init PF <=> VF mailbox stuff */
+ ret = rvu_gen_pf_pfvf_mbox_init(pfdev, numvfs);
+ if (ret)
+ return ret;
+
+ ret = rvu_gen_pf_register_pfvf_mbox_intr(pfdev, numvfs);
+ if (ret)
+ goto free_mbox;
+
ret = pci_enable_sriov(pdev, numvfs);
if (ret)
return ret;
return numvfs;
+free_mbox:
+ rvu_gen_pf_pfvf_mbox_destroy(pfdev);
+ return ret;
}
static int rvu_gen_pf_sriov_disable(struct pci_dev *pdev)
{
+ struct gen_pf_dev *pfdev = pci_get_drvdata(pdev);
int numvfs = pci_num_vf(pdev);
if (!numvfs)
@@ -291,6 +725,9 @@ static int rvu_gen_pf_sriov_disable(struct pci_dev *pdev)
pci_disable_sriov(pdev);
+ rvu_gen_pf_disable_pfvf_mbox_intr(pfdev, numvfs);
+ rvu_gen_pf_pfvf_mbox_destroy(pfdev);
+
return 0;
}
diff --git a/drivers/soc/marvell/rvu_gen_pf/gen_pf.h b/drivers/soc/marvell/rvu_gen_pf/gen_pf.h
index 40847e5bbedc..a37ed6803107 100644
--- a/drivers/soc/marvell/rvu_gen_pf/gen_pf.h
+++ b/drivers/soc/marvell/rvu_gen_pf/gen_pf.h
@@ -38,7 +38,9 @@ struct gen_pf_dev {
/* Mbox */
struct mbox mbox;
+ struct mbox *mbox_pfvf;
struct workqueue_struct *mbox_wq;
+ struct workqueue_struct *mbox_pfvf_wq;
int pf;
u16 pcifunc; /* RVU PF_FUNC */
--
2.25.1
* Re: [PATCH 3/4] soc: marvell: rvu-pf: Add mailbox communication btw RVU VFs and PF.
2024-09-20 11:23 ` [PATCH 3/4] soc: marvell: rvu-pf: Add mailbox communication btw RVU VFs and PF Anshumali Gaur
@ 2024-09-21 22:22 ` Alexander Sverdlin
0 siblings, 0 replies; 14+ messages in thread
From: Alexander Sverdlin @ 2024-09-21 22:22 UTC (permalink / raw)
To: Anshumali Gaur, conor.dooley, ulf.hansson, arnd, linus.walleij,
nikita.shubin, vkoul, cyy, krzysztof.kozlowski, linux-kernel,
sgoutham
Hi Anshumali!
On Fri, 2024-09-20 at 16:53 +0530, Anshumali Gaur wrote:
> RVU PF shares a dedicated memory region with each of it's VFs.
> This memory region is used to establish communication between them.
> Since Admin function (AF) handles resource management, PF doesn't
> process the messages sent by VFs. It acts as an intermediary device
> process the messages sent by VFs. It acts as an intermediary device.
> Hardware doesn't support direct communication between AF and VFs.
>
> Signed-off-by: Anshumali Gaur <agaur@marvell.com>
Reviewed-by: Alexander Sverdlin <alexander.sverdlin@gmail.com>
> ---
> drivers/soc/marvell/rvu_gen_pf/gen_pf.c | 437 ++++++++++++++++++++++++
> drivers/soc/marvell/rvu_gen_pf/gen_pf.h | 2 +
> 2 files changed, 439 insertions(+)
--
Alexander Sverdlin.
* [PATCH 4/4] soc: marvell: rvu-pf: Handle function level reset (FLR) IRQs for VFs
2024-09-20 11:23 [PATCH 0/4] soc: marvell: Add a general purpose RVU physical Anshumali Gaur
` (2 preceding siblings ...)
2024-09-20 11:23 ` [PATCH 3/4] soc: marvell: rvu-pf: Add mailbox communication btw RVU VFs and PF Anshumali Gaur
@ 2024-09-20 11:23 ` Anshumali Gaur
2024-09-21 22:49 ` Alexander Sverdlin
3 siblings, 1 reply; 14+ messages in thread
From: Anshumali Gaur @ 2024-09-20 11:23 UTC (permalink / raw)
To: conor.dooley, ulf.hansson, arnd, linus.walleij, nikita.shubin,
alexander.sverdlin, vkoul, cyy, krzysztof.kozlowski, linux-kernel,
sgoutham
Cc: Anshumali Gaur
Added PCIe FLR interrupt handler for VFs. When FLR is triggered for VFs,
parent PF gets an interrupt. PF creates a mbox message and sends it to
RVU Admin function (AF). AF cleans up all the resources attached to that
specific VF and acks the PF that FLR is handled.
Signed-off-by: Anshumali Gaur <agaur@marvell.com>
---
drivers/soc/marvell/rvu_gen_pf/gen_pf.c | 232 +++++++++++++++++++++++-
drivers/soc/marvell/rvu_gen_pf/gen_pf.h | 7 +
2 files changed, 238 insertions(+), 1 deletion(-)
diff --git a/drivers/soc/marvell/rvu_gen_pf/gen_pf.c b/drivers/soc/marvell/rvu_gen_pf/gen_pf.c
index 624c55123a19..e2e7c11dd85d 100644
--- a/drivers/soc/marvell/rvu_gen_pf/gen_pf.c
+++ b/drivers/soc/marvell/rvu_gen_pf/gen_pf.c
@@ -618,6 +618,15 @@ static void rvu_gen_pf_queue_vf_work(struct mbox *mw, struct workqueue_struct *m
}
}
+static void rvu_gen_pf_flr_wq_destroy(struct gen_pf_dev *pfdev)
+{
+ if (!pfdev->flr_wq)
+ return;
+ destroy_workqueue(pfdev->flr_wq);
+ pfdev->flr_wq = NULL;
+ devm_kfree(pfdev->dev, pfdev->flr_wrk);
+}
+
static irqreturn_t rvu_gen_pf_pfvf_mbox_intr_handler(int irq, void *pf_irq)
{
struct gen_pf_dev *pfdev = (struct gen_pf_dev *)(pf_irq);
@@ -691,6 +700,211 @@ static int rvu_gen_pf_register_pfvf_mbox_intr(struct gen_pf_dev *pfdev, int numv
return 0;
}
+static void rvu_gen_pf_flr_handler(struct work_struct *work)
+{
+ struct flr_work *flrwork = container_of(work, struct flr_work, work);
+ struct gen_pf_dev *pfdev = flrwork->pfdev;
+ struct mbox *mbox = &pfdev->mbox;
+ struct msg_req *req;
+ int vf, reg = 0;
+
+ vf = flrwork - pfdev->flr_wrk;
+
+ mutex_lock(&mbox->lock);
+ req = gen_pf_mbox_alloc_msg_vf_flr(mbox);
+ if (!req) {
+ mutex_unlock(&mbox->lock);
+ return;
+ }
+ req->hdr.pcifunc &= RVU_PFVF_FUNC_MASK;
+ req->hdr.pcifunc |= (vf + 1) & RVU_PFVF_FUNC_MASK;
+
+ if (!rvu_gen_pf_sync_mbox_msg(&pfdev->mbox)) {
+ if (vf >= 64) {
+ reg = 1;
+ vf = vf - 64;
+ }
+ /* clear transaction pending bit */
+ writeq(BIT_ULL(vf), pfdev->reg_base + RVU_PF_VFTRPENDX(reg));
+ writeq(BIT_ULL(vf), pfdev->reg_base + RVU_PF_VFFLR_INT_ENA_W1SX(reg));
+ }
+
+ mutex_unlock(&mbox->lock);
+}
+
+static irqreturn_t rvu_gen_pf_me_intr_handler(int irq, void *pf_irq)
+{
+ struct gen_pf_dev *pfdev = (struct gen_pf_dev *)pf_irq;
+ int vf, reg, num_reg = 1;
+ u64 intr;
+
+ if (pfdev->total_vfs > 64)
+ num_reg = 2;
+
+ for (reg = 0; reg < num_reg; reg++) {
+ intr = readq(pfdev->reg_base + RVU_PF_VFME_INTX(reg));
+ if (!intr)
+ continue;
+ for (vf = 0; vf < 64; vf++) {
+ if (!(intr & BIT_ULL(vf)))
+ continue;
+ /* clear trpend bit */
+ writeq(BIT_ULL(vf), pfdev->reg_base + RVU_PF_VFTRPENDX(reg));
+ /* clear interrupt */
+ writeq(BIT_ULL(vf), pfdev->reg_base + RVU_PF_VFME_INTX(reg));
+ }
+ }
+ return IRQ_HANDLED;
+}
+
+static irqreturn_t rvu_gen_pf_flr_intr_handler(int irq, void *pf_irq)
+{
+ struct gen_pf_dev *pfdev = (struct gen_pf_dev *)pf_irq;
+ int reg, dev, vf, start_vf, num_reg = 1;
+ u64 intr;
+
+ if (pfdev->total_vfs > 64)
+ num_reg = 2;
+
+ for (reg = 0; reg < num_reg; reg++) {
+ intr = readq(pfdev->reg_base + RVU_PF_VFFLR_INTX(reg));
+ if (!intr)
+ continue;
+ start_vf = 64 * reg;
+ for (vf = 0; vf < 64; vf++) {
+ if (!(intr & BIT_ULL(vf)))
+ continue;
+ dev = vf + start_vf;
+ queue_work(pfdev->flr_wq, &pfdev->flr_wrk[dev].work);
+ /* Clear interrupt */
+ writeq(BIT_ULL(vf), pfdev->reg_base + RVU_PF_VFFLR_INTX(reg));
+ /* Disable the interrupt */
+ writeq(BIT_ULL(vf), pfdev->reg_base + RVU_PF_VFFLR_INT_ENA_W1CX(reg));
+ }
+ }
+ return IRQ_HANDLED;
+}
+
+static int rvu_gen_pf_register_flr_me_intr(struct gen_pf_dev *pfdev, int numvfs)
+{
+ char *irq_name;
+ int ret;
+
+ /* Register ME interrupt handler */
+ irq_name = &pfdev->irq_name[RVU_PF_INT_VEC_VFME0 * NAME_SIZE];
+ snprintf(irq_name, NAME_SIZE, "Generic RVUPF%d_ME0", rvu_get_pf(pfdev->pcifunc));
+ ret = request_irq(pci_irq_vector(pfdev->pdev, RVU_PF_INT_VEC_VFME0),
+ rvu_gen_pf_me_intr_handler, 0, irq_name, pfdev);
+
+ if (ret) {
+ dev_err(pfdev->dev,
+ "Generic RVUPF: IRQ registration failed for ME0\n");
+ }
+
+ /* Register FLR interrupt handler */
+ irq_name = &pfdev->irq_name[RVU_PF_INT_VEC_VFFLR0 * NAME_SIZE];
+ snprintf(irq_name, NAME_SIZE, "Generic RVUPF%d_FLR0", rvu_get_pf(pfdev->pcifunc));
+ ret = request_irq(pci_irq_vector(pfdev->pdev, RVU_PF_INT_VEC_VFFLR0),
+ rvu_gen_pf_flr_intr_handler, 0, irq_name, pfdev);
+ if (ret) {
+ dev_err(pfdev->dev,
+ "Generic RVUPF: IRQ registration failed for FLR0\n");
+ return ret;
+ }
+
+ if (numvfs > 64) {
+ irq_name = &pfdev->irq_name[RVU_PF_INT_VEC_VFME1 * NAME_SIZE];
+ snprintf(irq_name, NAME_SIZE, "Generic RVUPF%d_ME1",
+ rvu_get_pf(pfdev->pcifunc));
+ ret = request_irq(pci_irq_vector
+ (pfdev->pdev, RVU_PF_INT_VEC_VFME1),
+ rvu_gen_pf_me_intr_handler, 0, irq_name, pfdev);
+ if (ret) {
+ dev_err(pfdev->dev,
+ "Generic RVUPF: IRQ registration failed for ME1\n");
+ }
+ irq_name = &pfdev->irq_name[RVU_PF_INT_VEC_VFFLR1 * NAME_SIZE];
+ snprintf(irq_name, NAME_SIZE, "Generic RVUPF%d_FLR1",
+ rvu_get_pf(pfdev->pcifunc));
+ ret = request_irq(pci_irq_vector
+ (pfdev->pdev, RVU_PF_INT_VEC_VFFLR1),
+ rvu_gen_pf_flr_intr_handler, 0, irq_name, pfdev);
+ if (ret) {
+ dev_err(pfdev->dev,
+ "Generic RVUPF: IRQ registration failed for FLR1\n");
+ return ret;
+ }
+ }
+
+ /* Enable ME interrupt for all VFs */
+ writeq(INTR_MASK(numvfs), pfdev->reg_base + RVU_PF_VFME_INTX(0));
+ writeq(INTR_MASK(numvfs), pfdev->reg_base + RVU_PF_VFME_INT_ENA_W1SX(0));
+
+ /* Enable FLR interrupt for all VFs */
+ writeq(INTR_MASK(numvfs), pfdev->reg_base + RVU_PF_VFFLR_INTX(0));
+ writeq(INTR_MASK(numvfs), pfdev->reg_base + RVU_PF_VFFLR_INT_ENA_W1SX(0));
+
+ if (numvfs > 64) {
+ numvfs -= 64;
+
+ writeq(INTR_MASK(numvfs), pfdev->reg_base + RVU_PF_VFME_INTX(1));
+ writeq(INTR_MASK(numvfs), pfdev->reg_base + RVU_PF_VFME_INT_ENA_W1SX(1));
+
+ writeq(INTR_MASK(numvfs), pfdev->reg_base + RVU_PF_VFFLR_INTX(1));
+ writeq(INTR_MASK(numvfs), pfdev->reg_base + RVU_PF_VFFLR_INT_ENA_W1SX(1));
+ }
+ return 0;
+}
+
+static void rvu_gen_pf_disable_flr_me_intr(struct gen_pf_dev *pfdev)
+{
+ int irq, vfs = pfdev->total_vfs;
+
+ /* Disable VFs ME interrupts */
+ writeq(INTR_MASK(vfs), pfdev->reg_base + RVU_PF_VFME_INT_ENA_W1CX(0));
+ irq = pci_irq_vector(pfdev->pdev, RVU_PF_INT_VEC_VFME0);
+ free_irq(irq, pfdev);
+
+ /* Disable VFs FLR interrupts */
+ writeq(INTR_MASK(vfs), pfdev->reg_base + RVU_PF_VFFLR_INT_ENA_W1CX(0));
+ irq = pci_irq_vector(pfdev->pdev, RVU_PF_INT_VEC_VFFLR0);
+ free_irq(irq, pfdev);
+
+ if (vfs <= 64)
+ return;
+
+ writeq(INTR_MASK(vfs - 64), pfdev->reg_base + RVU_PF_VFME_INT_ENA_W1CX(1));
+ irq = pci_irq_vector(pfdev->pdev, RVU_PF_INT_VEC_VFME1);
+ free_irq(irq, pfdev);
+
+ writeq(INTR_MASK(vfs - 64), pfdev->reg_base + RVU_PF_VFFLR_INT_ENA_W1CX(1));
+ irq = pci_irq_vector(pfdev->pdev, RVU_PF_INT_VEC_VFFLR1);
+ free_irq(irq, pfdev);
+}
+
+static int rvu_gen_pf_flr_init(struct gen_pf_dev *pfdev, int num_vfs)
+{
+ int vf;
+
+ pfdev->flr_wq = alloc_ordered_workqueue("otx2_pf_flr_wq", WQ_HIGHPRI);
+ if (!pfdev->flr_wq)
+ return -ENOMEM;
+
+ pfdev->flr_wrk = devm_kcalloc(pfdev->dev, num_vfs,
+ sizeof(struct flr_work), GFP_KERNEL);
+ if (!pfdev->flr_wrk) {
+ destroy_workqueue(pfdev->flr_wq);
+ return -ENOMEM;
+ }
+
+ for (vf = 0; vf < num_vfs; vf++) {
+ pfdev->flr_wrk[vf].pfdev = pfdev;
+ INIT_WORK(&pfdev->flr_wrk[vf].work, rvu_gen_pf_flr_handler);
+ }
+
+ return 0;
+}
+
static int rvu_gen_pf_sriov_enable(struct pci_dev *pdev, int numvfs)
{
struct gen_pf_dev *pfdev = pci_get_drvdata(pdev);
@@ -705,11 +919,25 @@ static int rvu_gen_pf_sriov_enable(struct pci_dev *pdev, int numvfs)
if (ret)
goto free_mbox;
+ ret = rvu_gen_pf_flr_init(pfdev, numvfs);
+ if (ret)
+ goto free_intr;
+
+ ret = rvu_gen_pf_register_flr_me_intr(pfdev, numvfs);
+ if (ret)
+ goto free_flr;
+
ret = pci_enable_sriov(pdev, numvfs);
if (ret)
- return ret;
+ goto free_flr_intr;
return numvfs;
+free_flr_intr:
+ rvu_gen_pf_disable_flr_me_intr(pfdev);
+free_flr:
+ rvu_gen_pf_flr_wq_destroy(pfdev);
+free_intr:
+ rvu_gen_pf_disable_pfvf_mbox_intr(pfdev, numvfs);
free_mbox:
rvu_gen_pf_pfvf_mbox_destroy(pfdev);
return ret;
@@ -725,6 +953,8 @@ static int rvu_gen_pf_sriov_disable(struct pci_dev *pdev)
pci_disable_sriov(pdev);
+ rvu_gen_pf_disable_flr_me_intr(pfdev);
+ rvu_gen_pf_flr_wq_destroy(pfdev);
rvu_gen_pf_disable_pfvf_mbox_intr(pfdev, numvfs);
rvu_gen_pf_pfvf_mbox_destroy(pfdev);
diff --git a/drivers/soc/marvell/rvu_gen_pf/gen_pf.h b/drivers/soc/marvell/rvu_gen_pf/gen_pf.h
index a37ed6803107..8cfe4e01e838 100644
--- a/drivers/soc/marvell/rvu_gen_pf/gen_pf.h
+++ b/drivers/soc/marvell/rvu_gen_pf/gen_pf.h
@@ -16,6 +16,11 @@
struct gen_pf_dev;
+struct flr_work {
+ struct work_struct work;
+ struct gen_pf_dev *pfdev;
+};
+
struct mbox {
struct otx2_mbox mbox;
struct work_struct mbox_wrk;
@@ -33,6 +38,8 @@ struct gen_pf_dev {
struct device *dev;
void __iomem *reg_base;
char *irq_name;
+ struct workqueue_struct *flr_wq;
+ struct flr_work *flr_wrk;
struct work_struct mbox_wrk;
struct work_struct mbox_wrk_up;
--
2.25.1
^ permalink raw reply related [flat|nested] 14+ messages in thread* Re: [PATCH 4/4] soc: marvell: rvu-pf: Handle function level reset (FLR) IRQs for VFs
2024-09-20 11:23 ` [PATCH 4/4] soc: marvell: rvu-pf: Handle function level reset (FLR) IRQs for VFs Anshumali Gaur
@ 2024-09-21 22:49 ` Alexander Sverdlin
2024-09-25 9:13 ` Anshumali Gaur
0 siblings, 1 reply; 14+ messages in thread
From: Alexander Sverdlin @ 2024-09-21 22:49 UTC (permalink / raw)
To: Anshumali Gaur, conor.dooley, ulf.hansson, arnd, linus.walleij,
nikita.shubin, vkoul, cyy, krzysztof.kozlowski, linux-kernel,
sgoutham
Hi Anshumali!
On Fri, 2024-09-20 at 16:53 +0530, Anshumali Gaur wrote:
> Added PCIe FLR interrupt handler for VFs. When FLR is triggered for VFs,
> parent PF gets an interrupt. PF creates a mbox message and sends it to
> RVU Admin function (AF). AF cleans up all the resources attached to that
> specific VF and acks the PF that FLR is handled.
>
> Signed-off-by: Anshumali Gaur <agaur@marvell.com>
> ---
[]
> diff --git a/drivers/soc/marvell/rvu_gen_pf/gen_pf.c b/drivers/soc/marvell/rvu_gen_pf/gen_pf.c
> index 624c55123a19..e2e7c11dd85d 100644
> --- a/drivers/soc/marvell/rvu_gen_pf/gen_pf.c
> +++ b/drivers/soc/marvell/rvu_gen_pf/gen_pf.c
> @@ -691,6 +700,211 @@ static int rvu_gen_pf_register_pfvf_mbox_intr(struct gen_pf_dev *pfdev, int numv
> return 0;
> }
>
> +static void rvu_gen_pf_flr_handler(struct work_struct *work)
> +{
> + struct flr_work *flrwork = container_of(work, struct flr_work, work);
> + struct gen_pf_dev *pfdev = flrwork->pfdev;
> + struct mbox *mbox = &pfdev->mbox;
> + struct msg_req *req;
> + int vf, reg = 0;
> +
> + vf = flrwork - pfdev->flr_wrk;
> +
> + mutex_lock(&mbox->lock);
> + req = gen_pf_mbox_alloc_msg_vf_flr(mbox);
So this function wants to be a product of the "M" macro from patch 2?
But does it really happen?
> + if (!req) {
> + mutex_unlock(&mbox->lock);
> + return;
> + }
> + req->hdr.pcifunc &= RVU_PFVF_FUNC_MASK;
Did you mean "req->hdr.pcifunc &= ~RVU_PFVF_FUNC_MASK;"?
> + req->hdr.pcifunc |= (vf + 1) & RVU_PFVF_FUNC_MASK;
> +
> + if (!rvu_gen_pf_sync_mbox_msg(&pfdev->mbox)) {
> + if (vf >= 64) {
> + reg = 1;
> + vf = vf - 64;
> + }
> + /* clear transaction pending bit */
> + writeq(BIT_ULL(vf), pfdev->reg_base + RVU_PF_VFTRPENDX(reg));
> + writeq(BIT_ULL(vf), pfdev->reg_base + RVU_PF_VFFLR_INT_ENA_W1SX(reg));
> + }
> +
> + mutex_unlock(&mbox->lock);
> +}
> +
> +static irqreturn_t rvu_gen_pf_me_intr_handler(int irq, void *pf_irq)
> +{
> + struct gen_pf_dev *pfdev = (struct gen_pf_dev *)pf_irq;
> + int vf, reg, num_reg = 1;
> + u64 intr;
> +
> + if (pfdev->total_vfs > 64)
> + num_reg = 2;
> +
> + for (reg = 0; reg < num_reg; reg++) {
> + intr = readq(pfdev->reg_base + RVU_PF_VFME_INTX(reg));
> + if (!intr)
> + continue;
> + for (vf = 0; vf < 64; vf++) {
> + if (!(intr & BIT_ULL(vf)))
> + continue;
> + /* clear trpend bit */
> + writeq(BIT_ULL(vf), pfdev->reg_base + RVU_PF_VFTRPENDX(reg));
> + /* clear interrupt */
> + writeq(BIT_ULL(vf), pfdev->reg_base + RVU_PF_VFME_INTX(reg));
> + }
> + }
Should anything else have been performed in the IRQ handler besides acknowledging the
IRQ request?
> + return IRQ_HANDLED;
> +}
> +
> +static irqreturn_t rvu_gen_pf_flr_intr_handler(int irq, void *pf_irq)
> +{
> + struct gen_pf_dev *pfdev = (struct gen_pf_dev *)pf_irq;
> + int reg, dev, vf, start_vf, num_reg = 1;
> + u64 intr;
> +
> + if (pfdev->total_vfs > 64)
> + num_reg = 2;
> +
> + for (reg = 0; reg < num_reg; reg++) {
> + intr = readq(pfdev->reg_base + RVU_PF_VFFLR_INTX(reg));
> + if (!intr)
> + continue;
> + start_vf = 64 * reg;
> + for (vf = 0; vf < 64; vf++) {
> + if (!(intr & BIT_ULL(vf)))
> + continue;
> + dev = vf + start_vf;
> + queue_work(pfdev->flr_wq, &pfdev->flr_wrk[dev].work);
> + /* Clear interrupt */
> + writeq(BIT_ULL(vf), pfdev->reg_base + RVU_PF_VFFLR_INTX(reg));
> + /* Disable the interrupt */
> + writeq(BIT_ULL(vf), pfdev->reg_base + RVU_PF_VFFLR_INT_ENA_W1CX(reg));
> + }
> + }
> + return IRQ_HANDLED;
> +}
[]
--
Alexander Sverdlin.
^ permalink raw reply [flat|nested] 14+ messages in thread* Re: [PATCH 4/4] soc: marvell: rvu-pf: Handle function level reset (FLR) IRQs for VFs
2024-09-21 22:49 ` Alexander Sverdlin
@ 2024-09-25 9:13 ` Anshumali Gaur
2024-09-25 9:19 ` Alexander Sverdlin
0 siblings, 1 reply; 14+ messages in thread
From: Anshumali Gaur @ 2024-09-25 9:13 UTC (permalink / raw)
To: Alexander Sverdlin, conor.dooley@microchip.com,
ulf.hansson@linaro.org, arnd@arndb.de, linus.walleij@linaro.org,
nikita.shubin@maquefel.me, vkoul@kernel.org, cyy@cyyself.name,
krzysztof.kozlowski@linaro.org, linux-kernel@vger.kernel.org,
Sunil Kovvuri Goutham
> -----Original Message-----
> From: Alexander Sverdlin <alexander.sverdlin@gmail.com>
> Sent: Sunday, September 22, 2024 4:20 AM
> To: Anshumali Gaur <agaur@marvell.com>; conor.dooley@microchip.com;
> ulf.hansson@linaro.org; arnd@arndb.de; linus.walleij@linaro.org;
> nikita.shubin@maquefel.me; vkoul@kernel.org; cyy@cyyself.name;
> krzysztof.kozlowski@linaro.org; linux-kernel@vger.kernel.org; Sunil Kovvuri
> Goutham <sgoutham@marvell.com>
> Subject: Re: [PATCH 4/4] soc: marvell: rvu-pf: Handle function level
> reset (FLR) IRQs for VFs
>
> Hi Anshumali!
>
> On Fri, 2024-09-20 at 16:53 +0530, Anshumali Gaur wrote:
> > Added PCIe FLR interrupt handler for VFs. When FLR is triggered for
> > VFs, parent PF gets an interrupt. PF creates a mbox message and sends
> > it to RVU Admin function (AF). AF cleans up all the resources attached
> > to that specific VF and acks the PF that FLR is handled.
> >
> > Signed-off-by: Anshumali Gaur <agaur@marvell.com>
> > ---
>
> []
>
> > diff --git a/drivers/soc/marvell/rvu_gen_pf/gen_pf.c
> > b/drivers/soc/marvell/rvu_gen_pf/gen_pf.c
> > index 624c55123a19..e2e7c11dd85d 100644
> > --- a/drivers/soc/marvell/rvu_gen_pf/gen_pf.c
> > +++ b/drivers/soc/marvell/rvu_gen_pf/gen_pf.c
> > @@ -691,6 +700,211 @@ static int rvu_gen_pf_register_pfvf_mbox_intr(struct
> gen_pf_dev *pfdev, int numv
> > return 0;
> > }
> >
> > +static void rvu_gen_pf_flr_handler(struct work_struct *work) {
> > + struct flr_work *flrwork = container_of(work, struct flr_work, work);
> > + struct gen_pf_dev *pfdev = flrwork->pfdev;
> > + struct mbox *mbox = &pfdev->mbox;
> > + struct msg_req *req;
> > + int vf, reg = 0;
> > +
> > + vf = flrwork - pfdev->flr_wrk;
> > +
> > + mutex_lock(&mbox->lock);
> > + req = gen_pf_mbox_alloc_msg_vf_flr(mbox);
>
> So this function want's to be a product of "M" macro from patch 2?
> But does it really happen?
>
Yes, it uses the M macro:
M(VF_FLR, 0x006, vf_flr, msg_req, msg_rsp)
You can refer to drivers/net/ethernet/marvell/octeontx2/af/mbox.h for more details.
> > + if (!req) {
> > + mutex_unlock(&mbox->lock);
> > + return;
> > + }
> > + req->hdr.pcifunc &= RVU_PFVF_FUNC_MASK;
>
> Did you mean "req->hdr.pcifunc &= ~RVU_PFVF_FUNC_MASK;"?
>
Yes, thank you for pointing this out; I will make the change.
> > + req->hdr.pcifunc |= (vf + 1) & RVU_PFVF_FUNC_MASK;
> > +
> > + if (!rvu_gen_pf_sync_mbox_msg(&pfdev->mbox)) {
> > + if (vf >= 64) {
> > + reg = 1;
> > + vf = vf - 64;
> > + }
> > + /* clear transaction pending bit */
> > + writeq(BIT_ULL(vf), pfdev->reg_base +
> RVU_PF_VFTRPENDX(reg));
> > + writeq(BIT_ULL(vf), pfdev->reg_base +
> RVU_PF_VFFLR_INT_ENA_W1SX(reg));
> > + }
> > +
> > + mutex_unlock(&mbox->lock);
> > +}
> > +
> > +static irqreturn_t rvu_gen_pf_me_intr_handler(int irq, void *pf_irq)
> > +{
> > + struct gen_pf_dev *pfdev = (struct gen_pf_dev *)pf_irq;
> > + int vf, reg, num_reg = 1;
> > + u64 intr;
> > +
> > + if (pfdev->total_vfs > 64)
> > + num_reg = 2;
> > +
> > + for (reg = 0; reg < num_reg; reg++) {
> > + intr = readq(pfdev->reg_base + RVU_PF_VFME_INTX(reg));
> > + if (!intr)
> > + continue;
> > + for (vf = 0; vf < 64; vf++) {
> > + if (!(intr & BIT_ULL(vf)))
> > + continue;
> > + /* clear trpend bit */
> > + writeq(BIT_ULL(vf), pfdev->reg_base +
> RVU_PF_VFTRPENDX(reg));
> > + /* clear interrupt */
> > + writeq(BIT_ULL(vf), pfdev->reg_base +
> RVU_PF_VFME_INTX(reg));
> > + }
> > + }
>
> Should anything else have been performed in the IRQ handler besides
> acknowledging the IRQ request?
>
We are just acknowledging the IRQ request here.
> > + return IRQ_HANDLED;
> > +}
> > +
> > +static irqreturn_t rvu_gen_pf_flr_intr_handler(int irq, void *pf_irq)
> > +{
> > + struct gen_pf_dev *pfdev = (struct gen_pf_dev *)pf_irq;
> > + int reg, dev, vf, start_vf, num_reg = 1;
> > + u64 intr;
> > +
> > + if (pfdev->total_vfs > 64)
> > + num_reg = 2;
> > +
> > + for (reg = 0; reg < num_reg; reg++) {
> > + intr = readq(pfdev->reg_base + RVU_PF_VFFLR_INTX(reg));
> > + if (!intr)
> > + continue;
> > + start_vf = 64 * reg;
> > + for (vf = 0; vf < 64; vf++) {
> > + if (!(intr & BIT_ULL(vf)))
> > + continue;
> > + dev = vf + start_vf;
> > + queue_work(pfdev->flr_wq, &pfdev-
> >flr_wrk[dev].work);
> > + /* Clear interrupt */
> > + writeq(BIT_ULL(vf), pfdev->reg_base +
> RVU_PF_VFFLR_INTX(reg));
> > + /* Disable the interrupt */
> > + writeq(BIT_ULL(vf), pfdev->reg_base +
> RVU_PF_VFFLR_INT_ENA_W1CX(reg));
> > + }
> > + }
> > + return IRQ_HANDLED;
> > +}
>
> []
>
> --
> Alexander Sverdlin.
Thanks and Regards,
Anshumali Gaur
^ permalink raw reply [flat|nested] 14+ messages in thread* Re: [PATCH 4/4] soc: marvell: rvu-pf: Handle function level reset (FLR) IRQs for VFs
2024-09-25 9:13 ` Anshumali Gaur
@ 2024-09-25 9:19 ` Alexander Sverdlin
2024-09-25 12:46 ` Anshumali Gaur
0 siblings, 1 reply; 14+ messages in thread
From: Alexander Sverdlin @ 2024-09-25 9:19 UTC (permalink / raw)
To: Anshumali Gaur, conor.dooley@microchip.com,
ulf.hansson@linaro.org, arnd@arndb.de, linus.walleij@linaro.org,
nikita.shubin@maquefel.me, vkoul@kernel.org, cyy@cyyself.name,
krzysztof.kozlowski@linaro.org, linux-kernel@vger.kernel.org,
Sunil Kovvuri Goutham
Hi Anshumali!
On Wed, 2024-09-25 at 09:13 +0000, Anshumali Gaur wrote:
> > > +static irqreturn_t rvu_gen_pf_me_intr_handler(int irq, void *pf_irq)
> > > +{
> > > + struct gen_pf_dev *pfdev = (struct gen_pf_dev *)pf_irq;
> > > + int vf, reg, num_reg = 1;
> > > + u64 intr;
> > > +
> > > + if (pfdev->total_vfs > 64)
> > > + num_reg = 2;
> > > +
> > > + for (reg = 0; reg < num_reg; reg++) {
> > > + intr = readq(pfdev->reg_base + RVU_PF_VFME_INTX(reg));
> > > + if (!intr)
> > > + continue;
> > > + for (vf = 0; vf < 64; vf++) {
> > > + if (!(intr & BIT_ULL(vf)))
> > > + continue;
> > > + /* clear trpend bit */
> > > + writeq(BIT_ULL(vf), pfdev->reg_base +
> > RVU_PF_VFTRPENDX(reg));
> > > + /* clear interrupt */
> > > + writeq(BIT_ULL(vf), pfdev->reg_base +
> > RVU_PF_VFME_INTX(reg));
> > > + }
> > > + }
> >
> > Should anything else have been performed in the IRQ handler besides
> > acknowledging the IRQ request?
> >
> We are just acknowledging the IRQ request here.
But what's the goal of requesting the IRQ in the first place then?
--
Alexander Sverdlin.
^ permalink raw reply [flat|nested] 14+ messages in thread* Re: [PATCH 4/4] soc: marvell: rvu-pf: Handle function level reset (FLR) IRQs for VFs
2024-09-25 9:19 ` Alexander Sverdlin
@ 2024-09-25 12:46 ` Anshumali Gaur
0 siblings, 0 replies; 14+ messages in thread
From: Anshumali Gaur @ 2024-09-25 12:46 UTC (permalink / raw)
To: Alexander Sverdlin, conor.dooley@microchip.com,
ulf.hansson@linaro.org, arnd@arndb.de, linus.walleij@linaro.org,
nikita.shubin@maquefel.me, vkoul@kernel.org, cyy@cyyself.name,
krzysztof.kozlowski@linaro.org, linux-kernel@vger.kernel.org,
Sunil Kovvuri Goutham
> -----Original Message-----
> From: Alexander Sverdlin <alexander.sverdlin@gmail.com>
> Sent: Wednesday, September 25, 2024 2:49 PM
> To: Anshumali Gaur <agaur@marvell.com>; conor.dooley@microchip.com;
> ulf.hansson@linaro.org; arnd@arndb.de; linus.walleij@linaro.org;
> nikita.shubin@maquefel.me; vkoul@kernel.org; cyy@cyyself.name;
> krzysztof.kozlowski@linaro.org; linux-kernel@vger.kernel.org; Sunil Kovvuri
> Goutham <sgoutham@marvell.com>
> Subject: Re: [PATCH 4/4] soc: marvell: rvu-pf: Handle function level
> reset (FLR) IRQs for VFs
>
> Hi Anshumali!
>
> On Wed, 2024-09-25 at 09:13 +0000, Anshumali Gaur wrote:
> > > > +static irqreturn_t rvu_gen_pf_me_intr_handler(int irq, void
> > > > +*pf_irq) {
> > > > + struct gen_pf_dev *pfdev = (struct gen_pf_dev *)pf_irq;
> > > > + int vf, reg, num_reg = 1;
> > > > + u64 intr;
> > > > +
> > > > + if (pfdev->total_vfs > 64)
> > > > + num_reg = 2;
> > > > +
> > > > + for (reg = 0; reg < num_reg; reg++) {
> > > > + intr = readq(pfdev->reg_base + RVU_PF_VFME_INTX(reg));
> > > > + if (!intr)
> > > > + continue;
> > > > + for (vf = 0; vf < 64; vf++) {
> > > > + if (!(intr & BIT_ULL(vf)))
> > > > + continue;
> > > > + /* clear trpend bit */
> > > > + writeq(BIT_ULL(vf), pfdev->reg_base +
> > > RVU_PF_VFTRPENDX(reg));
> > > > + /* clear interrupt */
> > > > + writeq(BIT_ULL(vf), pfdev->reg_base +
> > > RVU_PF_VFME_INTX(reg));
> > > > + }
> > > > + }
> > >
> > > Should anything else have been performed in the IRQ handler besides
> > > acknowledging the IRQ request?
> > >
> > We are just acknowledging the IRQ request here.
>
> But what's the goal of requesting the IRQ in the first place then?
>
In this PCIe Master Enable (ME) interrupt handler we clear the PCI transaction pending bits of the PF/VF devices.
This handler is invoked whenever the device gets reset.
Thanks and Regards,
Anshumali Gaur
> --
> Alexander Sverdlin.
^ permalink raw reply [flat|nested] 14+ messages in thread