From: Eugenio Perez Martin <eperezma@redhat.com>
To: Sahil <icegambit91@gmail.com>
Cc: sgarzare@redhat.com, mst@redhat.com, qemu-devel@nongnu.org,
Sahil Siddiq <sahilcdq@proton.me>
Subject: Re: [RFC v3 3/3] vhost: Allocate memory for packed vring
Date: Tue, 27 Aug 2024 17:30:36 +0200 [thread overview]
Message-ID: <CAJaqyWeDxL039GV=QzreenSNGm7S1XWWp=FH2KeB6PLGf=11-w@mail.gmail.com> (raw)
In-Reply-To: <1901750.tdWV9SEqCh@valdaarhun>
On Wed, Aug 21, 2024 at 2:20 PM Sahil <icegambit91@gmail.com> wrote:
>
> Hi,
>
> Sorry for the late reply.
>
> On Tuesday, August 13, 2024 12:23:55 PM GMT+5:30 Eugenio Perez Martin wrote:
> > [...]
> > > I think I have understood what's going on in "vhost_vdpa_svq_map_rings",
> > > "vhost_vdpa_svq_map_ring" and "vhost_vdpa_dma_map". But based on
> > > what I have understood it looks like the driver area is getting mapped to
> > > an iova which is read-only for vhost_vdpa. Please let me know where I am
> > > going wrong.
> >
> > You're not going wrong there. The device does not need to write into
> > this area, so we map it read only.
> >
> > > Consider the following implementation in hw/virtio/vhost-vdpa.c:
> > > > size_t device_size = vhost_svq_device_area_size(svq);
> > > > size_t driver_size = vhost_svq_driver_area_size(svq);
> > >
> > > The driver size includes the descriptor area and the driver area. For
> > > packed vq, the driver area is the "driver event suppression" structure
> > > which should be read-only for the device according to the virtio spec
> > > (section 2.8.10) [1].
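> > > For reference, the Linux UAPI header (include/uapi/linux/virtio_ring.h)
> > > defines both event suppression structures with the same layout:
> > >
> > >     struct vring_packed_desc_event {
> > >             /* Descriptor Ring Change Event Offset/Wrap Counter. */
> > >             __le16 off_wrap;
> > >             /* Descriptor Ring Change Event Flags. */
> > >             __le16 flags;
> > >     };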
> > >
> > > > size_t avail_offset;
> > > > bool ok;
> > > >
> > > > vhost_svq_get_vring_addr(svq, &svq_addr);
> > >
> > > Over here "svq_addr.desc_user_addr" will point to the descriptor area
> > > while "svq_addr.avail_user_addr" will point to the driver area/driver
> > > event suppression structure.
> > >
> > > > driver_region = (DMAMap) {
> > > >     .translated_addr = svq_addr.desc_user_addr,
> > > >     .size = driver_size - 1,
> > > >     .perm = IOMMU_RO,
> > > > };
> > >
> > > This region points to the descriptor area and its size encompasses the
> > > driver area as well with RO permission.
> > >
> > > > ok = vhost_vdpa_svq_map_ring(v, &driver_region, errp);
> > >
> > > The above function checks the value of needle->perm and sees that it is
> > > RO.
> > >
> > > It then calls "vhost_vdpa_dma_map" with the following arguments:
> > > > r = vhost_vdpa_dma_map(v->shared, v->address_space_id, needle->iova,
> > > >                        needle->size + 1,
> > > >                        (void *)(uintptr_t)needle->translated_addr,
> > > >                        needle->perm == IOMMU_RO);
> > >
> > > Since needle->size includes the driver area as well, the driver area will
> > > be mapped to a RO page in the device's address space, right?
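> > > (For reference, the prototype at the time of this series is roughly
> > > "int vhost_vdpa_dma_map(VhostVDPAShared *s, uint32_t asid, hwaddr iova,
> > > hwaddr size, void *vaddr, bool readonly)", so the final argument
> > > selects a read-only mapping.)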
> >
> > Yes, the device does not need to write into the descriptor area in the
> > supported split virtqueue case. So the descriptor area is also mapped
> > RO at this moment.
> >
> > This changes in the packed virtqueue case: the device writes the used
> > descriptors back into the descriptor ring itself, so we need to map it RW.
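> > Something along these lines could work (an untested sketch; desc_size
> > and driver_es_size are hypothetical helpers for the sizes of the two
> > sub-areas):
> >
> >     /* Packed: the device writes used elements back into the
> >      * descriptor ring, so it must be mapped RW. */
> >     desc_region = (DMAMap) {
> >         .translated_addr = svq_addr.desc_user_addr,
> >         .size = desc_size - 1,
> >         .perm = IOMMU_RW,
> >     };
> >
> >     /* The driver event suppression area stays RO for the device,
> >      * as the spec requires. */
> >     driver_region = (DMAMap) {
> >         .translated_addr = svq_addr.avail_user_addr,
> >         .size = driver_es_size - 1,
> >         .perm = IOMMU_RO,
> >     };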
>
> I understand this now. I'll see how the implementation can be modified to
> take this into account. I'll check whether mapping the descriptor ring and
> the driver area separately helps.
>
> > > > if (unlikely(!ok)) {
> > > >     error_prepend(errp, "Cannot create vq driver region: ");
> > > >     return false;
> > > > }
> > > > addr->desc_user_addr = driver_region.iova;
> > > > avail_offset = svq_addr.avail_user_addr - svq_addr.desc_user_addr;
> > > > addr->avail_user_addr = driver_region.iova + avail_offset;
> > >
> > > I think "addr->desc_user_addr" and "addr->avail_user_addr" will both be
> > > mapped to a RO page in the device's address space.
> > >
> > > > device_region = (DMAMap) {
> > > >     .translated_addr = svq_addr.used_user_addr,
> > > >     .size = device_size - 1,
> > > >     .perm = IOMMU_RW,
> > > > };
> > >
> > > The device area/device event suppression structure on the other hand will
> > > be mapped to a RW page.
> > >
> > > I also think there are other issues with the current state of the patch.
> > > According to the virtio spec (section 2.8.10) [1], the "device event
> > > suppression" structure needs to be write-only for the device but is
> > > mapped to a RW page.
> >
> > Yes, I'm not sure all IOMMUs support write-only maps, to be honest.
>
> Got it. I think it should be alright to defer this issue until later.
>
> > > Another concern I have is regarding the driver area size for packed vq.
> > > In "hw/virtio/vhost-shadow-virtqueue.c" of the current patch:
> > > > size_t vhost_svq_driver_area_size(const VhostShadowVirtqueue *svq)
> > > > {
> > > >     size_t desc_size = sizeof(vring_desc_t) * svq->vring.num;
> > > >     size_t avail_size = offsetof(vring_avail_t, ring[svq->vring.num]) +
> > > >                         sizeof(uint16_t);
> > > >
> > > >     return ROUND_UP(desc_size + avail_size, qemu_real_host_page_size());
> > > > }
> > > >
> > > > [...]
> > > >
> > > > size_t vhost_svq_memory_packed(const VhostShadowVirtqueue *svq)
> > > > {
> > > >     size_t desc_size = sizeof(struct vring_packed_desc) * svq->num_free;
> > > >     size_t driver_event_suppression = sizeof(struct vring_packed_desc_event);
> > > >     size_t device_event_suppression = sizeof(struct vring_packed_desc_event);
> > > >
> > > >     return ROUND_UP(desc_size + driver_event_suppression +
> > > >                     device_event_suppression,
> > > >                     qemu_real_host_page_size());
> > > > }
> > >
> > > The size returned by "vhost_svq_driver_area_size" might not be the
> > > actual driver size, which is given by desc_size + driver_event_suppression,
> > > right? Will this have to be changed too?
> >
> > Yes, you're right this needs to be changed too.
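> > For the packed layout that could look something like this (a sketch;
> > the helper name is hypothetical):
> >
> >     size_t vhost_svq_driver_area_size_packed(const VhostShadowVirtqueue *svq)
> >     {
> >         size_t desc_size = sizeof(struct vring_packed_desc) * svq->num_free;
> >         size_t driver_es_size = sizeof(struct vring_packed_desc_event);
> >
> >         return ROUND_UP(desc_size + driver_es_size,
> >                         qemu_real_host_page_size());
> >     }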
>
> Understood. I'll modify this too.
>
> I have also been trying to test my changes, but I am not clear on
> a few things.
>
> Q1.
> I built QEMU from source with my changes and followed the vdpa_sim +
> vhost_vdpa tutorial [1]. The VM seems to be running fine. How do I check
> if the packed format is being used instead of the split vq format for shadow
> virtqueues? I know the packed format is used when the virtio device has the
> VIRTIO_F_RING_PACKED feature bit enabled. Is there a way of checking that
> this is the case?
>
You can see the features that the driver acked from the guest by
checking sysfs. Once you know the PCI BDF from lspci:
# lspci -nn|grep '\[1af4:1041\]'
01:00.0 Ethernet controller [0200]: Red Hat, Inc. Virtio 1.0 network
device [1af4:1041] (rev 01)
# cut -c 35 /sys/devices/pci0000:00/0000:00:02.0/0000:01:00.0/virtio0/features
0
The features file prints one character per feature bit, starting with bit 0,
so character 35 is bit 34 (VIRTIO_F_RING_PACKED). A "1" there means the
driver acked the packed layout; the "0" above means it did not.
Also, you can check from QEMU by simply tracing if your functions are
being called.
> Q2.
> What's the recommended way to see what's going on under the hood? I tried
> using the -D option so that QEMU's logs are written to a file, but the file
> was empty. Would using QEMU with -monitor stdio or attaching gdb to the
> QEMU VM be worthwhile?
>
You need to add --trace options with the pattern you want in order to
enable any output. For example, --trace 'vhost_vdpa_*' prints all the
trace_vhost_vdpa_* tracepoints.
If you want to speed things up, you can just replace the interesting
trace_... functions with fprintf(stderr, ...). We can add the trace
ones afterwards.
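For example, something like this (a hypothetical debug line; the
variables depend on where you place it, here vhost_vdpa_svq_map_ring):

    /* Temporary debug output in place of a tracepoint; drop it
     * before submitting. */
    fprintf(stderr, "%s: iova 0x%" PRIx64 " size %" PRIu64 " %s\n",
            __func__, needle->iova, needle->size + 1,
            needle->perm == IOMMU_RO ? "ro" : "rw");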