* Re: [PATCH V2 3/4] vhost_vdpa: don't setup irq offloading when irq_num < 0
[not found] ` <20220125091744.115996-4-lingshan.zhu@intel.com>
@ 2022-01-25 19:17 ` kernel test robot
2022-01-25 19:30 ` Michael S. Tsirkin
1 sibling, 0 replies; 5+ messages in thread
From: kernel test robot @ 2022-01-25 19:17 UTC (permalink / raw)
To: Zhu Lingshan, mst, jasowang
Cc: netdev, Zhu Lingshan, llvm, kbuild-all, virtualization
Hi Zhu,
Thank you for the patch! Perhaps something to improve:
[auto build test WARNING on mst-vhost/linux-next]
[also build test WARNING on horms-ipvs/master linus/master v5.17-rc1 next-20220125]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting a patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch]
url: https://github.com/0day-ci/linux/commits/Zhu-Lingshan/vDPA-ifcvf-implement-shared-IRQ-feature/20220125-174020
base: https://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git linux-next
config: arm-randconfig-c002-20220124 (https://download.01.org/0day-ci/archive/20220126/202201260245.1yTB6YwE-lkp@intel.com/config)
compiler: clang version 14.0.0 (https://github.com/llvm/llvm-project 997e128e2a78f5a5434fc75997441ae1ee76f8a4)
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# install arm cross compiling tool for clang build
# apt-get install binutils-arm-linux-gnueabi
# https://github.com/0day-ci/linux/commit/9242eae873643db8562d24857da7d05a2950ecfe
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Zhu-Lingshan/vDPA-ifcvf-implement-shared-IRQ-feature/20220125-174020
git checkout 9242eae873643db8562d24857da7d05a2950ecfe
# save the config file to linux build tree
mkdir build_dir
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=arm SHELL=/bin/bash drivers/vhost/
If you fix the issue, kindly add the following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>
All warnings (new ones prefixed by >>):
>> drivers/vhost/vdpa.c:99:6: warning: variable 'irq' is uninitialized when used here [-Wuninitialized]
if (irq < 0)
^~~
drivers/vhost/vdpa.c:94:14: note: initialize the variable 'irq' to silence this warning
int ret, irq;
^
= 0
1 warning generated.
vim +/irq +99 drivers/vhost/vdpa.c
88
89 static void vhost_vdpa_setup_vq_irq(struct vhost_vdpa *v, u16 qid)
90 {
91 struct vhost_virtqueue *vq = &v->vqs[qid];
92 const struct vdpa_config_ops *ops = v->vdpa->config;
93 struct vdpa_device *vdpa = v->vdpa;
94 int ret, irq;
95
96 if (!ops->get_vq_irq)
97 return;
98
> 99 if (irq < 0)
100 return;
101
102 irq = ops->get_vq_irq(vdpa, qid);
103 irq_bypass_unregister_producer(&vq->call_ctx.producer);
104 if (!vq->call_ctx.ctx || irq < 0)
105 return;
106
107 vq->call_ctx.producer.token = vq->call_ctx.ctx;
108 vq->call_ctx.producer.irq = irq;
109 ret = irq_bypass_register_producer(&vq->call_ctx.producer);
110 if (unlikely(ret))
111 dev_info(&v->dev, "vq %u, irq bypass producer (token %p) registration fails, ret = %d\n",
112 qid, vq->call_ctx.producer.token, ret);
113 }
114
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
* Re: [PATCH V2 3/4] vhost_vdpa: don't setup irq offloading when irq_num < 0
[not found] ` <20220125091744.115996-4-lingshan.zhu@intel.com>
2022-01-25 19:17 ` [PATCH V2 3/4] vhost_vdpa: don't setup irq offloading when irq_num < 0 kernel test robot
@ 2022-01-25 19:30 ` Michael S. Tsirkin
1 sibling, 0 replies; 5+ messages in thread
From: Michael S. Tsirkin @ 2022-01-25 19:30 UTC (permalink / raw)
To: Zhu Lingshan; +Cc: netdev, virtualization
On Tue, Jan 25, 2022 at 05:17:43PM +0800, Zhu Lingshan wrote:
> When the irq number is negative (e.g., -EINVAL), the virtqueue
> may be disabled or the virtqueues may be sharing a device irq.
> In such a case, we should not set up irq offloading for the virtqueue.
>
> Signed-off-by: Zhu Lingshan <lingshan.zhu@intel.com>
> ---
> drivers/vhost/vdpa.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
> index 851539807bc9..909891d518e8 100644
> --- a/drivers/vhost/vdpa.c
> +++ b/drivers/vhost/vdpa.c
> @@ -96,6 +96,9 @@ static void vhost_vdpa_setup_vq_irq(struct vhost_vdpa *v, u16 qid)
> if (!ops->get_vq_irq)
> return;
>
> + if (irq < 0)
> + return;
> +
> irq = ops->get_vq_irq(vdpa, qid);
So it's used before it's initialized. Ugh.
How was this patchset tested?
> irq_bypass_unregister_producer(&vq->call_ctx.producer);
> if (!vq->call_ctx.ctx || irq < 0)
> --
> 2.27.0
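For reference, a minimal sketch of the ordering the warning points at: the
IRQ has to be queried before it can be tested, and the existing irq < 0
check after get_vq_irq() already covers the shared/disabled case. This is
an assumption about the intended fix, not code posted in this thread.

static void vhost_vdpa_setup_vq_irq(struct vhost_vdpa *v, u16 qid)
{
	struct vhost_virtqueue *vq = &v->vqs[qid];
	const struct vdpa_config_ops *ops = v->vdpa->config;
	struct vdpa_device *vdpa = v->vdpa;
	int ret, irq;

	if (!ops->get_vq_irq)
		return;

	/* query the device first, then test the result */
	irq = ops->get_vq_irq(vdpa, qid);
	irq_bypass_unregister_producer(&vq->call_ctx.producer);
	/* a negative irq (e.g. -EINVAL for a shared/disabled vq) skips offloading */
	if (!vq->call_ctx.ctx || irq < 0)
		return;

	vq->call_ctx.producer.token = vq->call_ctx.ctx;
	vq->call_ctx.producer.irq = irq;
	ret = irq_bypass_register_producer(&vq->call_ctx.producer);
	if (unlikely(ret))
		dev_info(&v->dev, "vq %u, irq bypass producer (token %p) registration fails, ret = %d\n",
			 qid, vq->call_ctx.producer.token, ret);
}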
* Re: [PATCH V2 0/4] vDPA/ifcvf: implement shared IRQ feature
[not found] <20220125091744.115996-1-lingshan.zhu@intel.com>
[not found] ` <20220125091744.115996-4-lingshan.zhu@intel.com>
@ 2022-01-25 19:32 ` Michael S. Tsirkin
[not found] ` <20220125091744.115996-3-lingshan.zhu@intel.com>
[not found] ` <20220125091744.115996-5-lingshan.zhu@intel.com>
3 siblings, 0 replies; 5+ messages in thread
From: Michael S. Tsirkin @ 2022-01-25 19:32 UTC (permalink / raw)
To: Zhu Lingshan; +Cc: netdev, virtualization
On Tue, Jan 25, 2022 at 05:17:40PM +0800, Zhu Lingshan wrote:
> It has been observed that on some platforms/devices, there may
> not be enough MSI vectors for virtqueues and the config change.
> Under such circumstances, the interrupt sources of a device
> have to share vectors/IRQs.
>
> This series implements a shared IRQ feature for ifcvf.
Which configurations did you test with this, and what were
the results? Given patch 3 is broken ...
> Please help review.
>
> Changes from V1:
> (1) Enable the config interrupt when only one vector is allocated (Michael)
> (2) Clean up vectors/IRQs if requesting the config interrupt fails,
> since the config interrupt is a must (Michael)
> (3) Keep local vdpa_ops, disable irq_bypass by setting IRQ = -EINVAL
> for the shared IRQ case (Michael)
> (4) Improve error messages (Michael)
> (5) Squash function implementation patches into their callers (Michael)
>
> Zhu Lingshan (4):
> vDPA/ifcvf: implement IO read/write helpers in the header file
> vDPA/ifcvf: implement device MSIX vector allocator
> vhost_vdpa: don't setup irq offloading when irq_num < 0
> vDPA/ifcvf: implement shared IRQ feature
>
> drivers/vdpa/ifcvf/ifcvf_base.c | 67 +++------
> drivers/vdpa/ifcvf/ifcvf_base.h | 60 +++++++-
> drivers/vdpa/ifcvf/ifcvf_main.c | 254 ++++++++++++++++++++++++++++----
> drivers/vhost/vdpa.c | 3 +
> 4 files changed, 305 insertions(+), 79 deletions(-)
>
> --
> 2.27.0
* Re: [PATCH V2 2/4] vDPA/ifcvf: implement device MSIX vector allocator
[not found] ` <20220125091744.115996-3-lingshan.zhu@intel.com>
@ 2022-01-25 19:36 ` Michael S. Tsirkin
0 siblings, 0 replies; 5+ messages in thread
From: Michael S. Tsirkin @ 2022-01-25 19:36 UTC (permalink / raw)
To: Zhu Lingshan; +Cc: netdev, virtualization
On Tue, Jan 25, 2022 at 05:17:42PM +0800, Zhu Lingshan wrote:
> This commit implements an MSIX vector allocation helper
> for the vqs and the config interrupt.
>
> Signed-off-by: Zhu Lingshan <lingshan.zhu@intel.com>
> ---
> drivers/vdpa/ifcvf/ifcvf_main.c | 30 ++++++++++++++++++++++++++++--
> 1 file changed, 28 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/vdpa/ifcvf/ifcvf_main.c b/drivers/vdpa/ifcvf/ifcvf_main.c
> index d1a6b5ab543c..7e2af2d2aaf5 100644
> --- a/drivers/vdpa/ifcvf/ifcvf_main.c
> +++ b/drivers/vdpa/ifcvf/ifcvf_main.c
> @@ -58,14 +58,40 @@ static void ifcvf_free_irq(struct ifcvf_adapter *adapter, int queues)
> ifcvf_free_irq_vectors(pdev);
> }
>
> +static int ifcvf_alloc_vectors(struct ifcvf_adapter *adapter)
So the helper returns an int...
> +{
> + struct pci_dev *pdev = adapter->pdev;
> + struct ifcvf_hw *vf = &adapter->vf;
> + u16 max_intr, ret;
> +
> + /* all queues and config interrupt */
> + max_intr = vf->nr_vring + 1;
> + ret = pci_alloc_irq_vectors(pdev, 1, max_intr, PCI_IRQ_MSIX | PCI_IRQ_AFFINITY);
> +
> + if (ret < 0) {
> + IFCVF_ERR(pdev, "Failed to alloc IRQ vectors\n");
> + return ret;
which is negative on error...
> + }
> +
> + if (ret < max_intr)
> + IFCVF_INFO(pdev,
> + "Requested %u vectors, however only %u allocated, lower performance\n",
> + max_intr, ret);
> +
> + return ret;
> +}
> +
> static int ifcvf_request_irq(struct ifcvf_adapter *adapter)
> {
> struct pci_dev *pdev = adapter->pdev;
> struct ifcvf_hw *vf = &adapter->vf;
> int vector, i, ret, irq;
> - u16 max_intr;
> + u16 nvectors, max_intr;
> +
> + nvectors = ifcvf_alloc_vectors(adapter);
which you proceed to stash into an unsigned int ...
> + if (!(nvectors > 0))
and then compare to zero ...
> + return nvectors;
>
correct error handling is unlikely as a result.
> - /* all queues and config interrupt */
> max_intr = vf->nr_vring + 1;
>
> ret = pci_alloc_irq_vectors(pdev, max_intr,
As long as you are introducing a helper, document its return
type and behaviour and then use it correctly.
> --
> 2.27.0
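For illustration, a minimal sketch of a caller that keeps the allocator's
return value signed, so the negative-errno path is handled before the count
is used; pci_alloc_irq_vectors() with min_vecs == 1 returns at least 1 on
success. This is not the posted code, and the body after the error check is
reduced to a comment.

static int ifcvf_request_irq(struct ifcvf_adapter *adapter)
{
	int nvectors;

	/* signed local: a negative errno from the helper stays negative */
	nvectors = ifcvf_alloc_vectors(adapter);
	if (nvectors < 0)
		return nvectors;

	/*
	 * nvectors >= 1 here; decide between per-vq and shared IRQ setup
	 * based on how many vectors were actually granted.
	 */
	return 0;
}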
* Re: [PATCH V2 4/4] vDPA/ifcvf: implement shared IRQ feature
[not found] ` <20220125091744.115996-5-lingshan.zhu@intel.com>
@ 2022-01-25 19:42 ` Michael S. Tsirkin
0 siblings, 0 replies; 5+ messages in thread
From: Michael S. Tsirkin @ 2022-01-25 19:42 UTC (permalink / raw)
To: Zhu Lingshan; +Cc: netdev, virtualization
On Tue, Jan 25, 2022 at 05:17:44PM +0800, Zhu Lingshan wrote:
> On some platforms/devices, there may not be enough MSI vector
> slots allocated for the virtqueues and config changes. In such a case,
> the interrupt sources (virtqueues, config changes) must share
> an IRQ/vector to avoid initialization failures and keep
> the device functional.
>
> This commit handles three cases:
> (1) the number of allocated vectors == the number of virtqueues + 1
> (config changes): every virtqueue and the config interrupt has
> a separate vector/IRQ; the best and most likely case.
> (2) the number of allocated vectors is less than the best case but
> greater than 1: all virtqueues share a vector/IRQ, and
> the config interrupt has a separate vector/IRQ.
> (3) only one vector is allocated: the virtqueues and
> the config interrupt share a vector/IRQ; the worst and most
> unlikely case.
>
> Otherwise, initialization has to fail.
>
> This commit introduces some helper functions:
> ifcvf_set_vq_vector() and ifcvf_set_config_vector() set the virtqueue
> vectors and the config vector in the device config space, so that
> the device can send interrupt DMA.
>
> This commit also adds some fields to struct ifcvf_hw and rearranges
> the existing fields for cacheline alignment.
>
> Signed-off-by: Zhu Lingshan <lingshan.zhu@intel.com>
> ---
> drivers/vdpa/ifcvf/ifcvf_base.c | 47 +++++--
> drivers/vdpa/ifcvf/ifcvf_base.h | 23 ++-
> drivers/vdpa/ifcvf/ifcvf_main.c | 240 +++++++++++++++++++++++++++-----
> 3 files changed, 253 insertions(+), 57 deletions(-)
>
> diff --git a/drivers/vdpa/ifcvf/ifcvf_base.c b/drivers/vdpa/ifcvf/ifcvf_base.c
> index 397692ae671c..18dcb63ab1e3 100644
> --- a/drivers/vdpa/ifcvf/ifcvf_base.c
> +++ b/drivers/vdpa/ifcvf/ifcvf_base.c
> @@ -15,6 +15,36 @@ struct ifcvf_adapter *vf_to_adapter(struct ifcvf_hw *hw)
> return container_of(hw, struct ifcvf_adapter, vf);
> }
>
> +int ifcvf_set_vq_vector(struct ifcvf_hw *hw, u16 qid, int vector)
> +{
> + struct virtio_pci_common_cfg __iomem *cfg = hw->common_cfg;
> + struct ifcvf_adapter *ifcvf = vf_to_adapter(hw);
> +
> + ifc_iowrite16(qid, &cfg->queue_select);
> + ifc_iowrite16(vector, &cfg->queue_msix_vector);
> + if (ifc_ioread16(&cfg->queue_msix_vector) == VIRTIO_MSI_NO_VECTOR) {
> + IFCVF_ERR(ifcvf->pdev, "No msix vector for queue %u\n", qid);
> + return -EINVAL;
> + }
> +
> + return 0;
> +}
> +
> +int ifcvf_set_config_vector(struct ifcvf_hw *hw, int vector)
> +{
> + struct virtio_pci_common_cfg __iomem *cfg = hw->common_cfg;
> + struct ifcvf_adapter *ifcvf = vf_to_adapter(hw);
> +
> + cfg = hw->common_cfg;
> + ifc_iowrite16(vector, &cfg->msix_config);
> + if (ifc_ioread16(&cfg->msix_config) == VIRTIO_MSI_NO_VECTOR) {
> + IFCVF_ERR(ifcvf->pdev, "No msix vector for device config\n");
> + return -EINVAL;
> + }
> +
> + return 0;
> +}
> +
> static void __iomem *get_cap_addr(struct ifcvf_hw *hw,
> struct virtio_pci_cap *cap)
> {
> @@ -140,6 +170,8 @@ int ifcvf_init_hw(struct ifcvf_hw *hw, struct pci_dev *pdev)
> hw->common_cfg, hw->notify_base, hw->isr,
> hw->dev_cfg, hw->notify_off_multiplier);
>
> + hw->vqs_shared_irq = -EINVAL;
> +
This stuffing of errno values into u32 is not a good idea.
> return 0;
> }
>
> @@ -321,12 +353,6 @@ static int ifcvf_hw_enable(struct ifcvf_hw *hw)
>
> ifcvf = vf_to_adapter(hw);
> cfg = hw->common_cfg;
> - ifc_iowrite16(IFCVF_MSI_CONFIG_OFF, &cfg->msix_config);
> -
> - if (ifc_ioread16(&cfg->msix_config) == VIRTIO_MSI_NO_VECTOR) {
> - IFCVF_ERR(ifcvf->pdev, "No msix vector for device config\n");
> - return -EINVAL;
> - }
>
> for (i = 0; i < hw->nr_vring; i++) {
> if (!hw->vring[i].ready)
> @@ -340,15 +366,6 @@ static int ifcvf_hw_enable(struct ifcvf_hw *hw)
> ifc_iowrite64_twopart(hw->vring[i].used, &cfg->queue_used_lo,
> &cfg->queue_used_hi);
> ifc_iowrite16(hw->vring[i].size, &cfg->queue_size);
> - ifc_iowrite16(i + IFCVF_MSI_QUEUE_OFF, &cfg->queue_msix_vector);
> -
> - if (ifc_ioread16(&cfg->queue_msix_vector) ==
> - VIRTIO_MSI_NO_VECTOR) {
> - IFCVF_ERR(ifcvf->pdev,
> - "No msix vector for queue %u\n", i);
> - return -EINVAL;
> - }
> -
> ifcvf_set_vq_state(hw, i, hw->vring[i].last_avail_idx);
> ifc_iowrite16(1, &cfg->queue_enable);
> }
> diff --git a/drivers/vdpa/ifcvf/ifcvf_base.h b/drivers/vdpa/ifcvf/ifcvf_base.h
> index 949b4fb9d554..d2a2a526f0fc 100644
> --- a/drivers/vdpa/ifcvf/ifcvf_base.h
> +++ b/drivers/vdpa/ifcvf/ifcvf_base.h
> @@ -27,8 +27,6 @@
>
> #define IFCVF_QUEUE_ALIGNMENT PAGE_SIZE
> #define IFCVF_QUEUE_MAX 32768
> -#define IFCVF_MSI_CONFIG_OFF 0
> -#define IFCVF_MSI_QUEUE_OFF 1
> #define IFCVF_PCI_MAX_RESOURCE 6
>
> #define IFCVF_LM_CFG_SIZE 0x40
> @@ -42,6 +40,13 @@
> #define ifcvf_private_to_vf(adapter) \
> (&((struct ifcvf_adapter *)adapter)->vf)
>
> +/* all vqs and config interrupt has its own vector */
> +#define MSIX_VECTOR_PER_VQ_AND_CONFIG 1
> +/* all vqs share a vector, and config interrupt has a separate vector */
> +#define MSIX_VECTOR_SHARED_VQ_AND_CONFIG 2
> +/* all vqs and config interrupt share a vector */
> +#define MSIX_VECTOR_DEV_SHARED 3
> +
> static inline u8 ifc_ioread8(u8 __iomem *addr)
> {
> return ioread8(addr);
> @@ -97,25 +102,27 @@ struct ifcvf_hw {
> u8 __iomem *isr;
> /* Live migration */
> u8 __iomem *lm_cfg;
> - u16 nr_vring;
> /* Notification bar number */
> u8 notify_bar;
> + u8 msix_vector_status;
> + /* virtio-net or virtio-blk device config size */
> + u32 config_size;
> /* Notificaiton bar address */
> void __iomem *notify_base;
> phys_addr_t notify_base_pa;
> u32 notify_off_multiplier;
> + u32 dev_type;
> u64 req_features;
> u64 hw_features;
> - u32 dev_type;
> struct virtio_pci_common_cfg __iomem *common_cfg;
> void __iomem *dev_cfg;
> struct vring_info vring[IFCVF_MAX_QUEUES];
> void __iomem * const *base;
> char config_msix_name[256];
> struct vdpa_callback config_cb;
> - unsigned int config_irq;
> - /* virtio-net or virtio-blk device config size */
> - u32 config_size;
> + u32 config_irq;
> + u32 vqs_shared_irq;
> + u16 nr_vring;
> };
>
> struct ifcvf_adapter {
> @@ -160,4 +167,6 @@ int ifcvf_set_vq_state(struct ifcvf_hw *hw, u16 qid, u16 num);
> struct ifcvf_adapter *vf_to_adapter(struct ifcvf_hw *hw);
> int ifcvf_probed_virtio_net(struct ifcvf_hw *hw);
> u32 ifcvf_get_config_size(struct ifcvf_hw *hw);
> +int ifcvf_set_vq_vector(struct ifcvf_hw *hw, u16 qid, int vector);
> +int ifcvf_set_config_vector(struct ifcvf_hw *hw, int vector);
> #endif /* _IFCVF_H_ */
> diff --git a/drivers/vdpa/ifcvf/ifcvf_main.c b/drivers/vdpa/ifcvf/ifcvf_main.c
> index 7e2af2d2aaf5..c7070ff01776 100644
> --- a/drivers/vdpa/ifcvf/ifcvf_main.c
> +++ b/drivers/vdpa/ifcvf/ifcvf_main.c
> @@ -17,6 +17,7 @@
> #define DRIVER_AUTHOR "Intel Corporation"
> #define IFCVF_DRIVER_NAME "ifcvf"
>
> +/* handles config interrupt */
> static irqreturn_t ifcvf_config_changed(int irq, void *arg)
> {
> struct ifcvf_hw *vf = arg;
> @@ -27,6 +28,7 @@ static irqreturn_t ifcvf_config_changed(int irq, void *arg)
> return IRQ_HANDLED;
> }
>
> +/* handles vqs interrupt */
> static irqreturn_t ifcvf_intr_handler(int irq, void *arg)
> {
> struct vring_info *vring = arg;
> @@ -37,24 +39,77 @@ static irqreturn_t ifcvf_intr_handler(int irq, void *arg)
> return IRQ_HANDLED;
> }
>
> +/* handles vqs shared interrupt */
> +static irqreturn_t ifcvf_vq_shared_intr_handler(int irq, void *arg)
> +{
> + struct ifcvf_hw *vf = arg;
> + struct vring_info *vring;
> + int i;
> +
> + for (i = 0; i < vf->nr_vring; i++) {
> + vring = &vf->vring[i];
> + if (vring->cb.callback)
> + vf->vring->cb.callback(vring->cb.private);
> + }
> +
> + return IRQ_HANDLED;
> +}
> +
> +/* handles a shared interrupt for vqs and config */
> +static irqreturn_t ifcvf_dev_shared_intr_handler(int irq, void *arg)
> +{
> + struct ifcvf_hw *vf = arg;
> + u8 isr;
> +
> + isr = ifc_ioread8(vf->isr);
> + if (isr & VIRTIO_PCI_ISR_CONFIG)
> + ifcvf_config_changed(irq, arg);
> +
> + return ifcvf_vq_shared_intr_handler(irq, arg);
> +}
> +
> static void ifcvf_free_irq_vectors(void *data)
> {
> pci_free_irq_vectors(data);
> }
>
> -static void ifcvf_free_irq(struct ifcvf_adapter *adapter, int queues)
> +static void ifcvf_free_vq_irq(struct ifcvf_adapter *adapter, int queues)
> {
> struct pci_dev *pdev = adapter->pdev;
> struct ifcvf_hw *vf = &adapter->vf;
> int i;
>
> + if (vf->msix_vector_status == MSIX_VECTOR_PER_VQ_AND_CONFIG)
> + for (i = 0; i < queues; i++) {
> + devm_free_irq(&pdev->dev, vf->vring[i].irq, &vf->vring[i]);
> + vf->vring[i].irq = -EINVAL;
> + }
> + else {
> + devm_free_irq(&pdev->dev, vf->vqs_shared_irq, vf);
> + } vf->vqs_shared_irq = -EINVAL;
If you use {} in the else branch, use it on the then branch, too.
> +}
>
> - for (i = 0; i < queues; i++) {
> - devm_free_irq(&pdev->dev, vf->vring[i].irq, &vf->vring[i]);
> - vf->vring[i].irq = -EINVAL;
> +static void ifcvf_free_config_irq(struct ifcvf_adapter *adapter)
> +{
> + struct pci_dev *pdev = adapter->pdev;
> + struct ifcvf_hw *vf = &adapter->vf;
> +
> + /* If the irq is shared by all vqs and the config interrupt,
> + * it is already freed in ifcvf_free_vq_irq, so here only
> + * need to free config irq when msix_vector_status != MSIX_VECTOR_DEV_SHARED
> + */
> + if (vf->msix_vector_status != MSIX_VECTOR_DEV_SHARED) {
> + devm_free_irq(&pdev->dev, vf->config_irq, vf);
> + vf->config_irq = -EINVAL;
> }
> +}
> +
> +static void ifcvf_free_irq(struct ifcvf_adapter *adapter, int queues)
> +{
> + struct pci_dev *pdev = adapter->pdev;
>
> - devm_free_irq(&pdev->dev, vf->config_irq, vf);
> + ifcvf_free_vq_irq(adapter, queues);
> + ifcvf_free_config_irq(adapter);
> ifcvf_free_irq_vectors(pdev);
> }
>
> @@ -81,58 +136,170 @@ static int ifcvf_alloc_vectors(struct ifcvf_adapter *adapter)
> return ret;
> }
>
> -static int ifcvf_request_irq(struct ifcvf_adapter *adapter)
> +static int ifcvf_request_per_vq_irq(struct ifcvf_adapter *adapter)
> {
> struct pci_dev *pdev = adapter->pdev;
> struct ifcvf_hw *vf = &adapter->vf;
> - int vector, i, ret, irq;
> - u16 nvectors, max_intr;
> + int i, vector, ret, irq;
>
> - nvectors = ifcvf_alloc_vectors(adapter);
> - if (!(nvectors > 0))
> - return nvectors;
> + for (i = 0; i < vf->nr_vring; i++) {
> + snprintf(vf->vring[i].msix_name, 256, "ifcvf[%s]-%d\n", pci_name(pdev), i);
> + vector = i;
> + irq = pci_irq_vector(pdev, vector);
> + ret = devm_request_irq(&pdev->dev, irq,
> + ifcvf_intr_handler, 0,
> + vf->vring[i].msix_name,
> + &vf->vring[i]);
> + if (ret) {
> + IFCVF_ERR(pdev, "Failed to request irq for vq %d\n", i);
> + ifcvf_free_vq_irq(adapter, i);
> + } else {
> + vf->vring[i].irq = irq;
> + ifcvf_set_vq_vector(vf, i, vector);
> + }
> + }
>
> - max_intr = vf->nr_vring + 1;
> + vf->vqs_shared_irq = -EINVAL;
> +
> + return 0;
> +}
> +
> +static int ifcvf_request_shared_vq_irq(struct ifcvf_adapter *adapter)
> +{
> + struct pci_dev *pdev = adapter->pdev;
> + struct ifcvf_hw *vf = &adapter->vf;
> + int i, vector, ret, irq;
> +
> + vector = 0;
> + irq = pci_irq_vector(pdev, vector);
> + ret = devm_request_irq(&pdev->dev, irq,
> + ifcvf_vq_shared_intr_handler, 0,
> + "ifcvf_vq_shared_irq",
> + vf);
> + if (ret) {
> + IFCVF_ERR(pdev, "Failed to request shared irq for vf\n");
> +
> + return ret;
> + }
> +
> + vf->vqs_shared_irq = irq;
> + for (i = 0; i < vf->nr_vring; i++) {
> + vf->vring[i].irq = -EINVAL;
> + ifcvf_set_vq_vector(vf, i, vector);
> + }
> +
> + return 0;
> +
> +}
> +
> +static int ifcvf_request_dev_shared_irq(struct ifcvf_adapter *adapter)
> +{
> + struct pci_dev *pdev = adapter->pdev;
> + struct ifcvf_hw *vf = &adapter->vf;
> + int i, vector, ret, irq;
> +
> + vector = 0;
> + irq = pci_irq_vector(pdev, vector);
> + ret = devm_request_irq(&pdev->dev, irq,
> + ifcvf_dev_shared_intr_handler, 0,
> + "ifcvf_dev_shared_irq",
> + vf);
> + if (ret) {
> + IFCVF_ERR(pdev, "Failed to request shared irq for vf\n");
>
> - ret = pci_alloc_irq_vectors(pdev, max_intr,
> - max_intr, PCI_IRQ_MSIX);
> - if (ret < 0) {
> - IFCVF_ERR(pdev, "Failed to alloc IRQ vectors\n");
> return ret;
> }
>
> + vf->vqs_shared_irq = irq;
> + for (i = 0; i < vf->nr_vring; i++) {
> + vf->vring[i].irq = -EINVAL;
> + ifcvf_set_vq_vector(vf, i, vector);
> + }
> +
> + vf->config_irq = irq;
> + ifcvf_set_config_vector(vf, vector);
> +
> + return 0;
> +
> +}
> +
> +static int ifcvf_request_vq_irq(struct ifcvf_adapter *adapter)
> +{
> + struct ifcvf_hw *vf = &adapter->vf;
> + int ret;
> +
> + if (vf->msix_vector_status == MSIX_VECTOR_PER_VQ_AND_CONFIG)
> + ret = ifcvf_request_per_vq_irq(adapter);
> + else
> + ret = ifcvf_request_shared_vq_irq(adapter);
> +
> + return ret;
> +}
> +
> +static int ifcvf_request_config_irq(struct ifcvf_adapter *adapter)
> +{
> + struct pci_dev *pdev = adapter->pdev;
> + struct ifcvf_hw *vf = &adapter->vf;
> + int config_vector, ret;
> +
> + if (vf->msix_vector_status == MSIX_VECTOR_DEV_SHARED)
> + return 0;
> +
> + if (vf->msix_vector_status == MSIX_VECTOR_PER_VQ_AND_CONFIG)
> + /* vector 0 ~ vf->nr_vring for vqs, num vf->nr_vring vector for config interrupt */
> + config_vector = vf->nr_vring;
> +
> + if (vf->msix_vector_status == MSIX_VECTOR_SHARED_VQ_AND_CONFIG)
> + /* vector 0 for vqs and 1 for config interrupt */
> + config_vector = 1;
> +
> snprintf(vf->config_msix_name, 256, "ifcvf[%s]-config\n",
> pci_name(pdev));
> - vector = 0;
> - vf->config_irq = pci_irq_vector(pdev, vector);
> + vf->config_irq = pci_irq_vector(pdev, config_vector);
> ret = devm_request_irq(&pdev->dev, vf->config_irq,
> ifcvf_config_changed, 0,
> vf->config_msix_name, vf);
> if (ret) {
> IFCVF_ERR(pdev, "Failed to request config irq\n");
> + ifcvf_free_vq_irq(adapter, vf->nr_vring);
> return ret;
> }
> + ifcvf_set_config_vector(vf, config_vector);
>
> - for (i = 0; i < vf->nr_vring; i++) {
> - snprintf(vf->vring[i].msix_name, 256, "ifcvf[%s]-%d\n",
> - pci_name(pdev), i);
> - vector = i + IFCVF_MSI_QUEUE_OFF;
> - irq = pci_irq_vector(pdev, vector);
> - ret = devm_request_irq(&pdev->dev, irq,
> - ifcvf_intr_handler, 0,
> - vf->vring[i].msix_name,
> - &vf->vring[i]);
> - if (ret) {
> - IFCVF_ERR(pdev,
> - "Failed to request irq for vq %d\n", i);
> - ifcvf_free_irq(adapter, i);
> + return 0;
> +}
>
> - return ret;
> - }
> +static int ifcvf_request_irq(struct ifcvf_adapter *adapter)
This function was not moved, was it?
Try --minimal or --histogram maybe. Or just
--anchored=ifcvf_request_irq
> +{
> + struct ifcvf_hw *vf = &adapter->vf;
> + u16 nvectors, max_intr;
> + int ret;
>
> - vf->vring[i].irq = irq;
> + nvectors = ifcvf_alloc_vectors(adapter);
> + if (!(nvectors > 0))
> + return nvectors;
> +
> + vf->msix_vector_status = MSIX_VECTOR_PER_VQ_AND_CONFIG;
> + max_intr = vf->nr_vring + 1;
> + if (nvectors < max_intr)
> + vf->msix_vector_status = MSIX_VECTOR_SHARED_VQ_AND_CONFIG;
> +
> + if (nvectors == 1) {
> + vf->msix_vector_status = MSIX_VECTOR_DEV_SHARED;
> + ret = ifcvf_request_dev_shared_irq(adapter);
> +
> + return ret;
> }
>
> + ret = ifcvf_request_vq_irq(adapter);
> + if (ret)
> + return ret;
> +
> + ret = ifcvf_request_config_irq(adapter);
> +
> + if (ret)
> + return ret;
> +
> return 0;
> }
>
> @@ -436,7 +603,10 @@ static int ifcvf_vdpa_get_vq_irq(struct vdpa_device *vdpa_dev,
> {
> struct ifcvf_hw *vf = vdpa_to_vf(vdpa_dev);
>
> - return vf->vring[qid].irq;
> + if (vf->vqs_shared_irq < 0)
> + return vf->vring[qid].irq;
> + else
> + return -EINVAL;
> }
>
> static struct vdpa_notification_area ifcvf_get_vq_notification(struct vdpa_device *vdpa_dev,
> --
> 2.27.0
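To illustrate the u32 point above, a sketch of the vq-irq getter under the
assumption that vqs_shared_irq (and config_irq) are declared as signed int
in struct ifcvf_hw, so storing -EINVAL and testing for "< 0" both behave as
intended. This is one possible way to address the comment, not code from
this thread.

static int ifcvf_vdpa_get_vq_irq(struct vdpa_device *vdpa_dev, u16 qid)
{
	struct ifcvf_hw *vf = vdpa_to_vf(vdpa_dev);

	/* with a signed field this comparison really fires for -EINVAL */
	if (vf->vqs_shared_irq < 0)
		return vf->vring[qid].irq;	/* per-vq IRQ: offloading is fine */

	/* the vqs share one IRQ/vector: tell vhost not to set up offloading */
	return -EINVAL;
}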