From: Yui Washizu <yui.washidu@gmail.com>
To: Jason Wang <jasowang@redhat.com>
Cc: qemu-devel@nongnu.org, mst@redhat.com, akihiko.odaki@daynix.com,
	yvugenfi@redhat.com, ybendito@redhat.com, mapfelba@redhat.com,
	marcel@redhat.com, ghammer@redhat.com, mdean@redhat.com
Subject: Re: [RFC 0/1] virtio-net: add support for SR-IOV emulation
Date: Mon, 24 Jul 2023 11:32:44 +0900	[thread overview]
Message-ID: <ef14eb09-e739-3a3a-ebda-13b385a85d8e@gmail.com> (raw)
In-Reply-To: <CACGkMEv9yVCherC89W5ihyP-iZZHDhn1xZy-8aOd4ZSs+1Dk_Q@mail.gmail.com>


On 2023/07/20 11:20, Jason Wang wrote:
> On Wed, Jul 19, 2023 at 9:59 AM Yui Washizu <yui.washidu@gmail.com> wrote:
>> This patch series is the first step towards enabling
>> hardware offloading of the virtio-net device's L2 packet switching feature to the host machine.
>> We expect this hardware offloading to enable
>> the use of high-performance networks in virtual infrastructures,
>> such as container infrastructures on VMs.
>>
>> To enable L2 packet switching by SR-IOV VFs, we are considering the following:
>> - making the guest recognize virtio-net devices as SR-IOV PF devices
>>    (achieved with this patch series)
>> - allowing virtio-net devices to connect SR-IOV VFs to the backend networks,
>>    leaving the L2 packet switching feature to a management layer such as libvirt
> Could you please show the qemu command line you want to propose here?


I am considering how to specify the VF properties needed to connect the
SR-IOV VFs to the backend networks.


For example:


qemu-system-x86_64 -device pcie-root-port,port=8,chassis=8,id=pci.8,bus=pcie.0,multifunction=on
                   -netdev tap,id=hostnet0,vhost=on
                   -netdev tap,id=vfnet1,vhost=on    # backend network for SR-IOV VF 1
                   -netdev tap,id=vfnet2,vhost=on    # backend network for SR-IOV VF 2
                   -device virtio-net-pci,netdev=hostnet0,sriov_max_vfs=2,sriov_netdev=vfnet1:vfnet2,...


In this example, the new "sriov_netdev" property assigns a backend network
to each VF, with multiple netdev ids separated by ":".
Additionally, to pass properties such as "rx_queue_size" to the VFs,
new properties such as "sriov_rx_queue_size_per_vfs" could be used,
so that the same value is applied to all VFs.

I'm still considering how to specify this, so please share any comments
you may have.


>>    - This makes hardware offloading of L2 packet switching possible.
>>      For example, when using vDPA devices, it allows the guest
>>      to utilize the embedded switch of the host's SR-IOV NIC.
> This would be interesting.
>
> Thanks
>
>> This patch series aims to enable SR-IOV emulation on virtio-net devices.
>> With this series, the guest can identify the virtio-net device as an SR-IOV PF device.
>> The newly added property 'sriov_max_vfs' allows us to enable the SR-IOV feature
>> on the virtio-net device.
>> Currently, we are unable to specify the properties of a VF created from the guest.
>> The properties are set to their default values.
>> In the future, we plan to allow users to set the properties.
>>
>> qemu-system-x86_64 --device virtio-net,sriov_max_vfs=<num>
>> # when 'sriov_max_vfs' is present, the SR-IOV feature will be automatically enabled
>> # <num> is the maximum number of VFs the guest can create
>>
>> Example commands to create VFs in virtio-net device from the guest:
>>
>> guest% readlink -f /sys/class/net/eth1/device
>>   /sys/devices/pci0000:00/0000:00:02.0/0000:01:00.0/virtio1
>> guest% echo "2" > /sys/devices/pci0000:00/0000:00:02.0/0000:01:00.0/sriov_numvfs
>> guest% ip link show
>>   eth0: ....
>>   eth1: ....
>>   eth2: .... #virtual VF created
>>   eth3: .... #virtual VF created
>>
>> Please note that communication between a VF and the PF, or between VFs, is not possible with this patch series alone.
>>
>> Yui Washizu (1):
>>    virtio-pci: add SR-IOV capability
>>
>>   hw/pci/msix.c                  |  8 +++--
>>   hw/pci/pci.c                   |  4 +++
>>   hw/virtio/virtio-pci.c         | 62 ++++++++++++++++++++++++++++++----
>>   include/hw/virtio/virtio-pci.h |  1 +
>>   4 files changed, 66 insertions(+), 9 deletions(-)
>>
>> --
>> 2.39.3
>>



Thread overview: 11+ messages
2023-07-19  1:56 [RFC 0/1] virtio-net: add support for SR-IOV emulation Yui Washizu
2023-07-19  1:56 ` [RFC 1/1] virtio-pci: add SR-IOV capability Yui Washizu
2023-07-20  0:45   ` Akihiko Odaki
2023-07-20  2:19   ` Jason Wang
2023-07-20  2:20 ` [RFC 0/1] virtio-net: add support for SR-IOV emulation Jason Wang
2023-07-24  2:32   ` Yui Washizu [this message]
2023-07-24  6:58     ` Jason Wang
2023-07-28  7:35       ` Yui Washizu
2023-08-30  5:28       ` Yui Washizu
2023-09-06  5:09         ` Yui Washizu
2023-09-15  7:01           ` Jason Wang
