From: Kunkun Jiang <jiangkunkun@huawei.com>
To: <eric.auger@redhat.com>,
Alex Williamson <alex.williamson@redhat.com>,
Kirti Wankhede <kwankhede@nvidia.com>,
"open list:All patches CC here" <qemu-devel@nongnu.org>
Cc: wanghaibin.wang@huawei.com, tangnianyao@huawei.com, ganqixin@huawei.com
Subject: Re: [PATCH v2 1/2] vfio/pci: Fix vfio-pci sub-page MMIO BAR mmaping in live migration
Date: Fri, 22 Oct 2021 18:01:12 +0800 [thread overview]
Message-ID: <248f1b98-07a7-619c-b5ef-0c1e8508fea9@huawei.com> (raw)
In-Reply-To: <571dc3ee-8b2a-fcef-b617-1ba85a3d442a@redhat.com>
Hi Eric,
On 2021/10/22 0:15, Eric Auger wrote:
> Hi Kunkun,
>
> On 9/14/21 3:53 AM, Kunkun Jiang wrote:
>> We expand MemoryRegions of vfio-pci sub-page MMIO BARs to
>> vfio_pci_write_config to improve IO performance.
> s/to vfio_pci_write_config/ in vfio_pci_write_config()
Thank you for your review. I will correct it in v3.
>> The MemoryRegions of destination VM will not be expanded
>> successful in live migration, because their addresses have
> s/will not be expanded successful in live migration/are not expanded
> anymore after live migration (?) Is that the correct symptom?
You are right. They are not expanded anymore after live migration;
"will not be expanded successful" was the wrong wording on my part.
>> been updated in vmstate_load_state (vfio_pci_load_config).
>>
>> So iterate BARs in vfio_pci_write_config and try to update
>> sub-page BARs.
>>
>> Fixes: c5e2fb3ce4d (vfio: Add save and load functions for VFIO PCI devices)
> is it an actual fix or an optimization?
I recently realized that this is actually an optimization.
The VF driver in the VM uses assembly instructions that operate on
two registers at once, such as stp and ldp. When such an instruction
hits a trapped MMIO region, it raises a non-ISV data abort, which
cannot be handled well at the moment (see the sketch below).
If the driver doesn't use such instructions, not expanding the BARs
only affects performance.
I will add this to the commit message in the next version.
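For illustration only (this is not from the patch, and the helper
name is made up), an AArch64 sketch of the kind of access I mean.
A single stp stores two 64-bit registers with one instruction; if it
faults on trapped (non-mmapped) MMIO, the stage-2 data abort carries
no instruction syndrome (ISV=0), so KVM/QEMU cannot decode and
emulate the access:

#include <stdint.h>

/* One stp writes two 64-bit registers at once. A fault on trapped
 * MMIO yields a data abort with ISV=0, which cannot be decoded and
 * emulated; once the BAR is mmapped, the access does not trap at all. */
static inline void mmio_write_pair(volatile void *addr,
                                   uint64_t lo, uint64_t hi)
{
    asm volatile("stp %x1, %x2, [%x0]"
                 : /* no outputs */
                 : "r"(addr), "r"(lo), "r"(hi)
                 : "memory");
}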
>> Reported-by: Nianyao Tang <tangnianyao@huawei.com>
>> Reported-by: Qixin Gan <ganqixin@huawei.com>
>> Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
>> ---
>> hw/vfio/pci.c | 15 ++++++++++++++-
>> 1 file changed, 14 insertions(+), 1 deletion(-)
>>
>> diff --git a/hw/vfio/pci.c b/hw/vfio/pci.c
>> index e1ea1d8a23..43c7e93153 100644
>> --- a/hw/vfio/pci.c
>> +++ b/hw/vfio/pci.c
>> @@ -2453,7 +2453,12 @@ static int vfio_pci_load_config(VFIODevice *vbasedev, QEMUFile *f)
>> {
>> VFIOPCIDevice *vdev = container_of(vbasedev, VFIOPCIDevice, vbasedev);
>> PCIDevice *pdev = &vdev->pdev;
>> - int ret;
>> + pcibus_t old_addr[PCI_NUM_REGIONS - 1];
>> + int bar, ret;
>> +
>> + for (bar = 0; bar < PCI_ROM_SLOT; bar++) {
>> + old_addr[bar] = pdev->io_regions[bar].addr;
>> + }
> what are those values before the vmstate_load_state, i.e. can't you do the
Are you referring to pdev->io_regions[bar].addr? All of the BAR
addresses are PCI_BAR_UNMAPPED (~(pcibus_t)0) before vmstate_load_state.
> vfio_sub_page_bar_update_mapping() unconditionally on old_addr[bar] !=
> pdev->io_regions[bar].addr
The size of a BAR is a power of 2, so a BAR at least as large as the
host page size doesn't need to be expanded; a minimal sketch of the
check I mean is below. Can you explain more? Maybe I misunderstood you.
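To make it concrete, here is that check as a minimal sketch (not the
actual QEMU code, and the helper name is made up; the example values
assume a 4 KiB host page size):

#include <stdbool.h>
#include <stdint.h>

/* BAR sizes are powers of two, so a BAR at least as large as the
 * host page is naturally page-aligned and can be mmapped as-is;
 * only sub-page BARs need vfio_sub_page_bar_update_mapping(). */
static bool bar_needs_sub_page_update(uint64_t size,
                                      uint64_t host_page_size)
{
    return size > 0 && size < host_page_size;
}

/* With a 4 KiB host page: a 64 KiB BAR -> false, a 4 KiB BAR ->
 * false, a 256-byte BAR -> true. */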
Thanks,
Kunkun Jiang
>>
>> ret = vmstate_load_state(f, &vmstate_vfio_pci_config, vdev, 1);
>> if (ret) {
>> @@ -2463,6 +2468,14 @@ static int vfio_pci_load_config(VFIODevice *vbasedev, QEMUFile *f)
>> vfio_pci_write_config(pdev, PCI_COMMAND,
>> pci_get_word(pdev->config + PCI_COMMAND), 2);
>>
>> + for (bar = 0; bar < PCI_ROM_SLOT; bar++) {
>> + if (old_addr[bar] != pdev->io_regions[bar].addr &&
>> + vdev->bars[bar].region.size > 0 &&
>> + vdev->bars[bar].region.size < qemu_real_host_page_size) {
>> + vfio_sub_page_bar_update_mapping(pdev, bar);
>> + }
>> + }
>> +
>> if (msi_enabled(pdev)) {
>> vfio_msi_enable(vdev);
>> } else if (msix_enabled(pdev)) {
> Thanks
>
> Eric
>