From: Sahil Siddiq <icegambit91@gmail.com>
To: Eugenio Perez Martin <eperezma@redhat.com>
Cc: sgarzare@redhat.com, mst@redhat.com, qemu-devel@nongnu.org,
	Sahil Siddiq <sahilcdq@proton.me>
Subject: Re: [RFC v4 0/5] Add packed virtqueue to shadow virtqueue
Date: Sun, 15 Dec 2024 22:57:43 +0530
Message-ID: <f95a9e51-6aa1-4aeb-959e-99e9b31109be@gmail.com>
In-Reply-To: <CAJaqyWerdWk5S0Sxt4oMUCc8FQJTxopyvhtyOV6ocbXmJ_p7Dw@mail.gmail.com>

Hi,

On 12/10/24 2:57 PM, Eugenio Perez Martin wrote:
> On Thu, Dec 5, 2024 at 9:34 PM Sahil Siddiq <icegambit91@gmail.com> wrote:
>>
>> Hi,
>>
>> There are two issues that I found while trying to test
>> my changes. I thought I would send the patch series
>> as well in case that helps in troubleshooting. I haven't
>> been able to find an issue in the implementation yet.
>> Maybe I am missing something.
>>
>> I have been following the "Hands on vDPA: what do you do
>> when you ain't got the hardware v2 (Part 2)" [1] blog to
>> test my changes. To boot the L1 VM, I ran:
>>
>> sudo ./qemu/build/qemu-system-x86_64 \
>> -enable-kvm \
>> -drive file=//home/valdaarhun/valdaarhun/qcow2_img/L1.qcow2,media=disk,if=virtio \
>> -net nic,model=virtio \
>> -net user,hostfwd=tcp::2222-:22 \
>> -device intel-iommu,snoop-control=on \
>> -device virtio-net-pci,netdev=net0,disable-legacy=on,disable-modern=off,iommu_platform=on,guest_uso4=off,guest_uso6=off,host_uso=off,guest_announce=off,ctrl_vq=on,ctrl_rx=on,packed=on,event_idx=off,bus=pcie.0,addr=0x4 \
>> -netdev tap,id=net0,script=no,downscript=no \
>> -nographic \
>> -m 8G \
>> -smp 4 \
>> -M q35 \
>> -cpu host 2>&1 | tee vm.log
>>
>> Without "guest_uso4=off,guest_uso6=off,host_uso=off,
>> guest_announce=off" in "-device virtio-net-pci", QEMU
>> throws "vdpa svq does not work with features" [2] when
>> trying to boot L2.
>>
>> The enums added in commit #2 of this series are new and
>> weren't in the earlier versions of the series. Without
>> this change, x-svq=true throws "SVQ invalid device feature
>> flags" [3] and x-svq is consequently disabled.
>>
>> The first issue is related to running traffic in L2
>> with vhost-vdpa.
>>
>> In L0:
>>
>> $ ip addr add 111.1.1.1/24 dev tap0
>> $ ip link set tap0 up
>> $ ip addr show tap0
>> 4: tap0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000
>>      link/ether d2:6d:b9:61:e1:9a brd ff:ff:ff:ff:ff:ff
>>      inet 111.1.1.1/24 scope global tap0
>>         valid_lft forever preferred_lft forever
>>      inet6 fe80::d06d:b9ff:fe61:e19a/64 scope link proto kernel_ll
>>         valid_lft forever preferred_lft forever
>>
>> I am able to run traffic in L2 when booting without
>> x-svq.
>>
>> In L1:
>>
>> $ ./qemu/build/qemu-system-x86_64 \
>> -nographic \
>> -m 4G \
>> -enable-kvm \
>> -M q35 \
>> -drive file=//root/L2.qcow2,media=disk,if=virtio \
>> -netdev type=vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,id=vhost-vdpa0 \
>> -device virtio-net-pci,netdev=vhost-vdpa0,disable-legacy=on,disable-modern=off,ctrl_vq=on,ctrl_rx=on,event_idx=off,bus=pcie.0,addr=0x7 \
>> -smp 4 \
>> -cpu host \
>> 2>&1 | tee vm.log
>>
>> In L2:
>>
>> # ip addr add 111.1.1.2/24 dev eth0
>> # ip addr show eth0
>> 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
>>      link/ether 52:54:00:12:34:57 brd ff:ff:ff:ff:ff:ff
>>      altname enp0s7
>>      inet 111.1.1.2/24 scope global eth0
>>         valid_lft forever preferred_lft forever
>>      inet6 fe80::9877:de30:5f17:35f9/64 scope link noprefixroute
>>         valid_lft forever preferred_lft forever
>>
>> # ip route
>> 111.1.1.0/24 dev eth0 proto kernel scope link src 111.1.1.2
>>
>> # ping 111.1.1.1 -w3
>> PING 111.1.1.1 (111.1.1.1) 56(84) bytes of data.
>> 64 bytes from 111.1.1.1: icmp_seq=1 ttl=64 time=0.407 ms
>> 64 bytes from 111.1.1.1: icmp_seq=2 ttl=64 time=0.671 ms
>> 64 bytes from 111.1.1.1: icmp_seq=3 ttl=64 time=0.291 ms
>>
>> --- 111.1.1.1 ping statistics ---
>> 3 packets transmitted, 3 received, 0% packet loss, time 2034ms
>> rtt min/avg/max/mdev = 0.291/0.456/0.671/0.159 ms
>>
>>
>> But if I boot L2 with x-svq=true as shown below, I am unable
>> to ping the host machine.
>>
>> $ ./qemu/build/qemu-system-x86_64 \
>> -nographic \
>> -m 4G \
>> -enable-kvm \
>> -M q35 \
>> -drive file=//root/L2.qcow2,media=disk,if=virtio \
>> -netdev type=vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,x-svq=true,id=vhost-vdpa0 \
>> -device virtio-net-pci,netdev=vhost-vdpa0,disable-legacy=on,disable-modern=off,ctrl_vq=on,ctrl_rx=on,event_idx=off,bus=pcie.0,addr=0x7 \
>> -smp 4 \
>> -cpu host \
>> 2>&1 | tee vm.log
>>
>> In L2:
>>
>> # ip addr add 111.1.1.2/24 dev eth0
>> # ip addr show eth0
>> 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
>>      link/ether 52:54:00:12:34:57 brd ff:ff:ff:ff:ff:ff
>>      altname enp0s7
>>      inet 111.1.1.2/24 scope global eth0
>>         valid_lft forever preferred_lft forever
>>      inet6 fe80::9877:de30:5f17:35f9/64 scope link noprefixroute
>>         valid_lft forever preferred_lft forever
>>
>> # ip route
>> 111.1.1.0/24 dev eth0 proto kernel scope link src 111.1.1.2
>>
>> # ping 111.1.1.1 -w10
>> PING 111.1.1.1 (111.1.1.1) 56(84) bytes of data.
>> From 111.1.1.2 icmp_seq=1 Destination Host Unreachable
>> ping: sendmsg: No route to host
>> From 111.1.1.2 icmp_seq=2 Destination Host Unreachable
>> From 111.1.1.2 icmp_seq=3 Destination Host Unreachable
>>
>> --- 111.1.1.1 ping statistics ---
>> 3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2076ms
>> pipe 3
>>
>> The other issue is related to booting L2 with "x-svq=true"
>> and "packed=on".
>>
>> In L1:
>>
>> $ ./qemu/build/qemu-system-x86_64 \
>> -nographic \
>> -m 4G \
>> -enable-kvm \
>> -M q35 \
>> -drive file=//root/L2.qcow2,media=disk,if=virtio \
>> -netdev type=vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,id=vhost-vdpa0,x-svq=true \
>> -device virtio-net-pci,netdev=vhost-vdpa0,disable-legacy=on,disable-modern=off,guest_uso4=off,guest_uso6=off,host_uso=off,guest_announce=off,ctrl_vq=on,ctrl_rx=on,event_idx=off,packed=on,bus=pcie.0,addr=0x7 \
>> -smp 4 \
>> -cpu host \
>> 2>&1 | tee vm.log
>>
>> The kernel throws "virtio_net virtio1: output.0:id 0 is not
>> a head!" [4].
>>
> 
> So this series implements the descriptor forwarding from the guest to
> the device in the packed vq. We also need to forward the descriptors
> from the device to the guest; the device writes them to the SVQ ring.
> 
> The functions responsible for that in QEMU are
> hw/virtio/vhost-shadow-virtqueue.c:vhost_svq_flush, which is called
> when the device writes used descriptors to the SVQ, and
> hw/virtio/vhost-shadow-virtqueue.c:vhost_svq_get_buf, which
> vhost_svq_flush calls. We need modifications similar to the ones in
> vhost_svq_add: make them conditional on whether we're in a split or
> packed vq, and "copy" the code from Linux's
> drivers/virtio/virtio_ring.c:virtqueue_get_buf.
> 
> After these modifications you should be able to ping and forward
> traffic. As always, it is totally OK if it needs more than one
> iteration, and feel free to ask any questions you have :).
> 

I misunderstood this part. While working on extending
hw/virtio/vhost-shadow-virtqueue.c:vhost_svq_get_buf() [1]
for packed vqs, I realized that this function and
vhost_svq_flush() already support split vqs. However, I am
unable to ping L0 when booting L2 with "x-svq=true" and
"packed=off" or when the "packed" option is not specified
in QEMU's command line.
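
For reference, the direction I have been taking for the packed variant
is roughly the sketch below. It is only loosely modeled on Linux's
drivers/virtio/virtio_ring.c:virtqueue_get_buf_ctx_packed(); the helper
vhost_svq_more_used_packed() and the svq->vring_packed, last_used_idx
and used_wrap_counter fields are placeholders of mine, not the actual
QEMU structures.

/* Sketch of a packed-ring counterpart to vhost_svq_get_buf(). */
static VirtQueueElement *vhost_svq_get_buf_packed(VhostShadowVirtqueue *svq,
                                                  uint32_t *len)
{
    const struct vring_packed_desc *descs = svq->vring_packed.vring.desc;
    uint16_t last_used = svq->last_used_idx;
    uint16_t id;

    /* Placeholder: checks the AVAIL/USED flag bits of descs[last_used]
     * against svq->used_wrap_counter. */
    if (!vhost_svq_more_used_packed(svq)) {
        return NULL;
    }

    /* The device writes the used id and length back into the descriptor
     * ring itself. */
    id = le16_to_cpu(descs[last_used].id);
    *len = le32_to_cpu(descs[last_used].len);

    if (unlikely(!svq->desc_state[id].elem)) {
        /* The device reported an id that was never made available. */
        return NULL;
    }

    /* Step over the whole chain and flip the wrap counter on wrap-around. */
    svq->last_used_idx += svq->desc_state[id].ndescs;
    if (svq->last_used_idx >= svq->vring_packed.vring.num) {
        svq->last_used_idx -= svq->vring_packed.vring.num;
        svq->used_wrap_counter ^= 1;
    }

    return g_steal_pointer(&svq->desc_state[id].elem);
}

That said, the failing case below only involves the existing split
path, so the sketch above is not part of the picture yet.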

I tried debugging these functions for split vqs after running
the following QEMU commands while following the blog [2].

Booting L1:

$ sudo ./qemu/build/qemu-system-x86_64 \
-enable-kvm \
-drive file=//home/valdaarhun/valdaarhun/qcow2_img/L1.qcow2,media=disk,if=virtio \
-net nic,model=virtio \
-net user,hostfwd=tcp::2222-:22 \
-device intel-iommu,snoop-control=on \
-device virtio-net-pci,netdev=net0,disable-legacy=on,disable-modern=off,iommu_platform=on,guest_uso4=off,guest_uso6=off,host_uso=off,guest_announce=off,ctrl_vq=on,ctrl_rx=on,packed=off,event_idx=off,bus=pcie.0,addr=0x4 \
-netdev tap,id=net0,script=no,downscript=no \
-nographic \
-m 8G \
-smp 4 \
-M q35 \
-cpu host 2>&1 | tee vm.log

Booting L2:

# ./qemu/build/qemu-system-x86_64 \
-nographic \
-m 4G \
-enable-kvm \
-M q35 \
-drive file=//root/L2.qcow2,media=disk,if=virtio \
-netdev type=vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,x-svq=true,id=vhost-vdpa0 \
-device virtio-net-pci,netdev=vhost-vdpa0,disable-legacy=on,disable-modern=off,ctrl_vq=on,ctrl_rx=on,event_idx=off,bus=pcie.0,addr=0x7 \
-smp 4 \
-cpu host \
2>&1 | tee vm.log

I printed out the contents of the VirtQueueElement returned
by vhost_svq_get_buf() in vhost_svq_flush() [3].
I noticed that the "len" value set by vhost_svq_get_buf()
is always 0, while VirtQueueElement.len is non-zero.
I haven't understood the difference between these two lengths.

The "len" that is 0 is the value passed to "virtqueue_fill()"
(defined in virtio.c) [4]. Could this be why I am unable to
ping L0 from L2?
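
To make the question concrete, this is roughly how I understand the two
values relate in the split path. It is a simplified sketch, not the
exact QEMU code; svq_get_buf_sketch() and svq_flush_sketch() are just
names for illustration.

/* Simplified sketch of the split completion path. */
static VirtQueueElement *svq_get_buf_sketch(VhostShadowVirtqueue *svq,
                                            uint32_t *len)
{
    /* (The check that the device has actually published a new used
     * entry is omitted here.) */
    uint16_t last_used = svq->last_used_idx & (svq->vring.num - 1);
    struct vring_used_elem used = svq->vring.used->ring[last_used];

    /* *len is the byte count the device reports having written into
     * the in-buffers, taken from the shadow used ring. */
    *len = le32_to_cpu(used.len);
    svq->last_used_idx++;
    return g_steal_pointer(&svq->desc_state[le32_to_cpu(used.id)].elem);
}

static void svq_flush_sketch(VhostShadowVirtqueue *svq, VirtQueue *vq)
{
    uint32_t len;
    VirtQueueElement *elem = svq_get_buf_sketch(svq, &len);

    if (elem) {
        /* That same len (not elem->len) is what gets reported back to
         * the guest's used ring. */
        virtqueue_fill(vq, elem, len, 0);
        virtqueue_flush(vq, 1);
    }
}

If that reading is correct, a len of 0 on the receive queue would make
the guest see zero-length used buffers, which seems consistent with
the missing traffic.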

Thanks,
Sahil

[1] https://gitlab.com/qemu-project/qemu/-/blob/master/hw/virtio/vhost-shadow-virtqueue.c#L418
[2] https://www.redhat.com/en/blog/hands-vdpa-what-do-you-do-when-you-aint-got-hardware-part-2
[3] https://gitlab.com/qemu-project/qemu/-/blob/master/hw/virtio/vhost-shadow-virtqueue.c#L488
[4] https://gitlab.com/qemu-project/qemu/-/blob/master/hw/virtio/vhost-shadow-virtqueue.c#L501



