From: "Zhu, Lingshan" <lingshan.zhu@intel.com>
To: Jason Wang <jasowang@redhat.com>
Cc: mst@redhat.com, virtualization@lists.linux-foundation.org
Subject: Re: [PATCH V2 4/5] vDPA/ifcvf: synchronize irqs in the reset routine
Date: Fri, 26 May 2023 14:35:44 +0800 [thread overview]
Message-ID: <875ea277-784a-0230-826f-4e46154ebbbe@intel.com> (raw)
In-Reply-To: <CACGkMEu37S6FXku3HYw5rRpmDn4mkYOq+bHNqmpD=gxie7VBUw@mail.gmail.com>
On 5/26/2023 2:09 PM, Jason Wang wrote:
> On Fri, May 26, 2023 at 1:30 PM Zhu, Lingshan <lingshan.zhu@intel.com> wrote:
>>
>>
>> On 5/26/2023 11:36 AM, Zhu, Lingshan wrote:
>>>
>>> On 5/26/2023 9:34 AM, Jason Wang wrote:
>>>> On Thu, May 25, 2023 at 5:38 PM Zhu, Lingshan
>>>> <lingshan.zhu@intel.com> wrote:
>>>>>
>>>>> On 5/24/2023 4:03 PM, Jason Wang wrote:
>>>>>> On Mon, May 8, 2023 at 6:05 PM Zhu Lingshan
>>>>>> <lingshan.zhu@intel.com> wrote:
>>>>>>> This commit synchronizes irqs of the virtqueues
>>>>>>> and config space in the reset routine.
>>>>>>> Thus ifcvf_stop_hw() and reset() are refactored as well.
>>>>>>>
>>>>>>> Signed-off-by: Zhu Lingshan <lingshan.zhu@intel.com>
>>>>>>> ---
>>>>>>> drivers/vdpa/ifcvf/ifcvf_base.c | 41 +++++++++++++++++++++--------
>>>>>>> drivers/vdpa/ifcvf/ifcvf_base.h | 1 +
>>>>>>> drivers/vdpa/ifcvf/ifcvf_main.c | 46 +++++----------------------------
>>>>>>> 3 files changed, 38 insertions(+), 50 deletions(-)
>>>>>>>
>>>>>>> diff --git a/drivers/vdpa/ifcvf/ifcvf_base.c b/drivers/vdpa/ifcvf/ifcvf_base.c
>>>>>>> index 79e313c5e10e..1f39290baa38 100644
>>>>>>> --- a/drivers/vdpa/ifcvf/ifcvf_base.c
>>>>>>> +++ b/drivers/vdpa/ifcvf/ifcvf_base.c
>>>>>>> @@ -170,12 +170,9 @@ void ifcvf_set_status(struct ifcvf_hw *hw, u8 status)
>>>>>>>
>>>>>>> void ifcvf_reset(struct ifcvf_hw *hw)
>>>>>>> {
>>>>>>> - hw->config_cb.callback = NULL;
>>>>>>> - hw->config_cb.private = NULL;
>>>>>>> -
>>>>>>> ifcvf_set_status(hw, 0);
>>>>>>> - /* flush set_status, make sure VF is stopped, reset */
>>>>>>> - ifcvf_get_status(hw);
>>>>>>> + while (ifcvf_get_status(hw))
>>>>>>> + msleep(1);
>>>>>>> }
>>>>>>>
>>>>>>> u64 ifcvf_get_hw_features(struct ifcvf_hw *hw)
>>>>>>> @@ -368,20 +365,42 @@ void ifcvf_set_vq_ready(struct ifcvf_hw *hw, u16 qid, bool ready)
>>>>>>> vp_iowrite16(ready, &cfg->queue_enable);
>>>>>>> }
>>>>>>>
>>>>>>> -static void ifcvf_hw_disable(struct ifcvf_hw *hw)
>>>>>>> +static void ifcvf_reset_vring(struct ifcvf_hw *hw)
>>>>>>> {
>>>>>>> - u32 i;
>>>>>>> + u16 qid;
>>>>>>> +
>>>>>>> + for (qid = 0; qid < hw->nr_vring; qid++) {
>>>>>>> + hw->vring[qid].cb.callback = NULL;
>>>>>>> + hw->vring[qid].cb.private = NULL;
>>>>>>> + ifcvf_set_vq_vector(hw, qid, VIRTIO_MSI_NO_VECTOR);
>>>>>>> + }
>>>>>>> +}
>>>>>>>
>>>>>>> +static void ifcvf_reset_config_handler(struct ifcvf_hw *hw)
>>>>>>> +{
>>>>>>> + hw->config_cb.callback = NULL;
>>>>>>> + hw->config_cb.private = NULL;
>>>>>>> ifcvf_set_config_vector(hw, VIRTIO_MSI_NO_VECTOR);
>>>>>>> - for (i = 0; i < hw->nr_vring; i++) {
>>>>>>> - ifcvf_set_vq_vector(hw, i, VIRTIO_MSI_NO_VECTOR);
>>>>>>> +}
>>>>>>> +
>>>>>>> +static void ifcvf_synchronize_irq(struct ifcvf_hw *hw)
>>>>>>> +{
>>>>>>> + u32 nvectors = hw->num_msix_vectors;
>>>>>>> + struct pci_dev *pdev = hw->pdev;
>>>>>>> + int i, irq;
>>>>>>> +
>>>>>>> + for (i = 0; i < nvectors; i++) {
>>>>>>> + irq = pci_irq_vector(pdev, i);
>>>>>>> + if (irq >= 0)
>>>>>>> + synchronize_irq(irq);
>>>>>>> }
>>>>>>> }
>>>>>>>
>>>>>>> void ifcvf_stop_hw(struct ifcvf_hw *hw)
>>>>>>> {
>>>>>>> - ifcvf_hw_disable(hw);
>>>>>>> - ifcvf_reset(hw);
>>>>>>> + ifcvf_synchronize_irq(hw);
>>>>>>> + ifcvf_reset_vring(hw);
>>>>>>> + ifcvf_reset_config_handler(hw);
>>>>>> Nit:
>>>>>>
>>>>>> So the name of this function is kind of misleading, since irq
>>>>>> synchronization and the virtqueue/config handlers do not belong to
>>>>>> the hardware?
>>>>>>
>>>>>> Maybe it would be better to call it ifcvf_stop().
>>>>> Sure, I will send a V3 with this renaming,
>>>>> do you ack patch 1/5?
>>>> Yes, I think I've acked to that patch.
>>> I will send a V3 with this renaming and your ack for patch 1/5
>> By the way, do you ack this one after this function renaming?
>> If so, I will also include your ack in V3, so we don't need another
>> review process, I will ping Michael for a merge.
> Have you seen the ack here?
>
> https://lists.linuxfoundation.org/pipermail/virtualization/2023-May/066890.html
Sorry, I missed this. A patch that only renames a function should be
trivial, so I will rebase this 4/5 with your ack. That means all patches
are acked, thanks!
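
For reference, the rename in V3 should be purely cosmetic; roughly the
sketch below, reusing the helpers introduced in this patch unchanged
(only the function name differs from the diff above):

void ifcvf_stop(struct ifcvf_hw *hw)
{
	/* wait for in-flight irq handlers, then tear down the
	 * vq and config callbacks and vectors
	 */
	ifcvf_synchronize_irq(hw);
	ifcvf_reset_vring(hw);
	ifcvf_reset_config_handler(hw);
}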
>
> Thanks
>
>>>> Thanks
>>>>
>>>>> Thanks
>>>>> Zhu Lingshan
>>>>>> Thanks
>>>>>>
>>>>>>> }
>>>>>>>
>>>>>>> void ifcvf_notify_queue(struct ifcvf_hw *hw, u16 qid)
>>>>>>> diff --git a/drivers/vdpa/ifcvf/ifcvf_base.h b/drivers/vdpa/ifcvf/ifcvf_base.h
>>>>>>> index d34d3bc0dbf4..7430f80779be 100644
>>>>>>> --- a/drivers/vdpa/ifcvf/ifcvf_base.h
>>>>>>> +++ b/drivers/vdpa/ifcvf/ifcvf_base.h
>>>>>>> @@ -82,6 +82,7 @@ struct ifcvf_hw {
>>>>>>> int vqs_reused_irq;
>>>>>>> u16 nr_vring;
>>>>>>> /* VIRTIO_PCI_CAP_DEVICE_CFG size */
>>>>>>> + u32 num_msix_vectors;
>>>>>>> u32 cap_dev_config_size;
>>>>>>> struct pci_dev *pdev;
>>>>>>> };
>>>>>>> diff --git a/drivers/vdpa/ifcvf/ifcvf_main.c b/drivers/vdpa/ifcvf/ifcvf_main.c
>>>>>>> index 968687159e44..3401b9901dd2 100644
>>>>>>> --- a/drivers/vdpa/ifcvf/ifcvf_main.c
>>>>>>> +++ b/drivers/vdpa/ifcvf/ifcvf_main.c
>>>>>>> @@ -125,6 +125,7 @@ static void ifcvf_free_irq(struct ifcvf_hw *vf)
>>>>>>> ifcvf_free_vq_irq(vf);
>>>>>>> ifcvf_free_config_irq(vf);
>>>>>>> ifcvf_free_irq_vectors(pdev);
>>>>>>> + vf->num_msix_vectors = 0;
>>>>>>> }
>>>>>>>
>>>>>>> /* ifcvf MSIX vectors allocator, this helper tries to allocate
>>>>>>> @@ -343,36 +344,11 @@ static int ifcvf_request_irq(struct ifcvf_hw *vf)
>>>>>>> if (ret)
>>>>>>> return ret;
>>>>>>>
>>>>>>> - return 0;
>>>>>>> -}
>>>>>>> -
>>>>>>> -static int ifcvf_stop_datapath(struct ifcvf_adapter *adapter)
>>>>>>> -{
>>>>>>> - struct ifcvf_hw *vf = adapter->vf;
>>>>>>> - int i;
>>>>>>> -
>>>>>>> - for (i = 0; i < vf->nr_vring; i++)
>>>>>>> - vf->vring[i].cb.callback = NULL;
>>>>>>> -
>>>>>>> - ifcvf_stop_hw(vf);
>>>>>>> + vf->num_msix_vectors = nvectors;
>>>>>>>
>>>>>>> return 0;
>>>>>>> }
>>>>>>>
>>>>>>> -static void ifcvf_reset_vring(struct ifcvf_adapter *adapter)
>>>>>>> -{
>>>>>>> - struct ifcvf_hw *vf = adapter->vf;
>>>>>>> - int i;
>>>>>>> -
>>>>>>> - for (i = 0; i < vf->nr_vring; i++) {
>>>>>>> - vf->vring[i].last_avail_idx = 0;
>>>>>>> - vf->vring[i].cb.callback = NULL;
>>>>>>> - vf->vring[i].cb.private = NULL;
>>>>>>> - }
>>>>>>> -
>>>>>>> - ifcvf_reset(vf);
>>>>>>> -}
>>>>>>> -
>>>>>>> static struct ifcvf_adapter *vdpa_to_adapter(struct vdpa_device *vdpa_dev)
>>>>>>> {
>>>>>>> return container_of(vdpa_dev, struct ifcvf_adapter, vdpa);
>>>>>>> @@ -462,23 +438,15 @@ static void ifcvf_vdpa_set_status(struct vdpa_device *vdpa_dev, u8 status)
>>>>>>>
>>>>>>> static int ifcvf_vdpa_reset(struct vdpa_device *vdpa_dev)
>>>>>>> {
>>>>>>> - struct ifcvf_adapter *adapter;
>>>>>>> - struct ifcvf_hw *vf;
>>>>>>> - u8 status_old;
>>>>>>> -
>>>>>>> - vf = vdpa_to_vf(vdpa_dev);
>>>>>>> - adapter = vdpa_to_adapter(vdpa_dev);
>>>>>>> - status_old = ifcvf_get_status(vf);
>>>>>>> + struct ifcvf_hw *vf = vdpa_to_vf(vdpa_dev);
>>>>>>> + u8 status = ifcvf_get_status(vf);
>>>>>>>
>>>>>>> - if (status_old == 0)
>>>>>>> - return 0;
>>>>>>> + ifcvf_stop_hw(vf);
>>>>>>>
>>>>>>> - if (status_old & VIRTIO_CONFIG_S_DRIVER_OK) {
>>>>>>> - ifcvf_stop_datapath(adapter);
>>>>>>> + if (status & VIRTIO_CONFIG_S_DRIVER_OK)
>>>>>>> ifcvf_free_irq(vf);
>>>>>>> - }
>>>>>>>
>>>>>>> - ifcvf_reset_vring(adapter);
>>>>>>> + ifcvf_reset(vf);
>>>>>>>
>>>>>>> return 0;
>>>>>>> }
>>>>>>> --
>>>>>>> 2.39.1
>>>>>>>