From: Sahil <icegambit91@gmail.com>
To: Eugenio Perez Martin <eperezma@redhat.com>
Cc: sgarzare@redhat.com, mst@redhat.com, qemu-devel@nongnu.org,
Sahil Siddiq <sahilcdq@proton.me>
Subject: Re: [RFC v3 3/3] vhost: Allocate memory for packed vring
Date: Sun, 11 Aug 2024 22:50:27 +0530 [thread overview]
Message-ID: <23656540.6Emhk5qWAg@valdaarhun> (raw)
In-Reply-To: <CAJaqyWcrcEJimGqF3_K7YWCobPw00Yx+rcYQH1JXGcKesb5M2w@mail.gmail.com>

Hi,

On Wednesday, August 7, 2024 9:52:10 PM GMT+5:30 Eugenio Perez Martin wrote:
> On Fri, Aug 2, 2024 at 1:22 PM Sahil Siddiq <icegambit91@gmail.com> wrote:
> > [...]
> > @@ -726,17 +738,30 @@ void vhost_svq_start(VhostShadowVirtqueue *svq, VirtIODevice *vdev,
> > svq->vring.num = virtio_queue_get_num(vdev,
> > virtio_get_queue_index(vq));
> > svq->num_free = svq->vring.num;
> >
> > - svq->vring.desc = mmap(NULL, vhost_svq_driver_area_size(svq),
> > - PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS,
> > - -1, 0);
> > - desc_size = sizeof(vring_desc_t) * svq->vring.num;
> > - svq->vring.avail = (void *)((char *)svq->vring.desc + desc_size);
> > - svq->vring.used = mmap(NULL, vhost_svq_device_area_size(svq),
> > - PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS,
> > - -1, 0);
> > - svq->desc_state = g_new0(SVQDescState, svq->vring.num);
> > - svq->desc_next = g_new0(uint16_t, svq->vring.num);
> > - for (unsigned i = 0; i < svq->vring.num - 1; i++) {
> > + svq->is_packed = virtio_vdev_has_feature(svq->vdev, VIRTIO_F_RING_PACKED);
> > +
> > + if (virtio_vdev_has_feature(svq->vdev, VIRTIO_F_RING_PACKED)) {
> > + svq->vring_packed.vring.desc = mmap(NULL, vhost_svq_memory_packed(svq),
> > + PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS,
> > + -1, 0);
> > + desc_size = sizeof(struct vring_packed_desc) * svq->vring.num;
> > + svq->vring_packed.vring.driver = (void *)((char *)svq->vring_packed.vring.desc + desc_size);
> > + svq->vring_packed.vring.device = (void *)((char *)svq->vring_packed.vring.driver +
> > + sizeof(struct vring_packed_desc_event));
>
> This is a great start but it will be problematic when you start
> mapping the areas to the vdpa device. The driver area should be read
> only for the device, but it is placed in the same page as a RW one.
>
> More on this later.
>
> > + } else {
> > + svq->vring.desc = mmap(NULL, vhost_svq_driver_area_size(svq),
> > + PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS,
> > + -1, 0);
> > + desc_size = sizeof(vring_desc_t) * svq->vring.num;
> > + svq->vring.avail = (void *)((char *)svq->vring.desc + desc_size);
> > + svq->vring.used = mmap(NULL, vhost_svq_device_area_size(svq),
> > + PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS,
> > + -1, 0);
> > + }
>
> I think it will be beneficial to avoid "if (packed)" conditionals on
> the exposed functions that give information about the memory maps.
> These need to be replicated at
> hw/virtio/vhost-vdpa.c:vhost_vdpa_svq_map_rings.
>
> However, the current one depends on the driver area to live in the
> same page as the descriptor area, so it is not suitable for this.

I haven't fully understood this.

In split vqs, the descriptor, driver and device areas are all mapped to RW
pages in "vhost_svq_start". In hw/virtio/vhost-vdpa.c:vhost_vdpa_svq_map_rings,
each region is then mapped with the appropriate "perm" field, which sets the
read/write permissions of the DMAMap object. Is this problematic for the split
vq format as well, given that the avail ring is mapped to a RW page in
"vhost_svq_start" even though it should be read-only for the device?
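To check my reading of "vhost_vdpa_svq_map_rings", this is roughly how I
understand the split vq case. It is a simplified, self-contained sketch rather
than the actual QEMU code; the DMAMap and IOMMUAccessFlags definitions below
are reduced stand-ins for QEMU's types, and the addresses and sizes are made
up:

/*
 * Simplified stand-ins for QEMU's DMAMap and IOMMUAccessFlags types,
 * reduced so this sketch compiles on its own. Only the "perm" handling
 * matters here.
 */
#include <stdint.h>
#include <stdio.h>

typedef enum { IOMMU_NONE, IOMMU_RO, IOMMU_WO, IOMMU_RW } IOMMUAccessFlags;

typedef struct DMAMap {
    uint64_t iova;
    uint64_t translated_addr;
    uint64_t size;                       /* inclusive, hence the "- 1" below */
    IOMMUAccessFlags perm;
} DMAMap;

int main(void)
{
    /* Hypothetical host addresses/sizes of the mmap'ed split vq areas. */
    uint64_t desc_user_addr = 0x7f0000000000ULL;  /* desc + avail rings */
    uint64_t used_user_addr = 0x7f0000002000ULL;  /* used ring */
    uint64_t driver_size = 0x2000, device_size = 0x1000;

    /* Driver area (desc + avail rings): the device may only read it. */
    DMAMap driver_region = {
        .translated_addr = desc_user_addr,
        .size = driver_size - 1,
        .perm = IOMMU_RO,
    };

    /* Device area (used ring): the device writes it. */
    DMAMap device_region = {
        .translated_addr = used_user_addr,
        .size = device_size - 1,
        .perm = IOMMU_RW,
    };

    printf("driver area perm=%d (RO), device area perm=%d (RW)\n",
           driver_region.perm, device_region.perm);
    return 0;
}

So even though both areas are RW in QEMU's own address space, the permissions
the device sees come from "perm", which is why I assumed the host-side page
layout would not matter.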

For packed vqs, the "Driver Event Suppression" data structure should be
read-only for the device. As with split vqs, it is mapped to a RW page in
"vhost_svq_start", but it is then registered with read-only perms in the
DMAMap object in "vhost_vdpa_svq_map_rings".
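Concretely, this is the layout I believe the single mmap in the patch
produces. It is a standalone sketch with an assumed queue size, just to show
where the three areas land relative to page boundaries:

/*
 * Standalone sketch of the single-mapping layout from the patch above.
 * The queue size is an assumption; the struct layouts follow the VIRTIO
 * 1.1 spec. With one mmap, the driver event suppression struct (RO for
 * the device) and the device event suppression struct (RW for the
 * device) end up inside the same host page.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

struct vring_packed_desc {
    uint64_t addr;
    uint32_t len;
    uint16_t id;
    uint16_t flags;
};

struct vring_packed_desc_event {
    uint16_t off_wrap;
    uint16_t flags;
};

int main(void)
{
    const unsigned num = 256;   /* hypothetical queue size */
    size_t page = (size_t)sysconf(_SC_PAGESIZE);

    size_t desc_size = num * sizeof(struct vring_packed_desc);
    size_t total = desc_size + 2 * sizeof(struct vring_packed_desc_event);

    char *desc = mmap(NULL, total, PROT_READ | PROT_WRITE,
                      MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (desc == MAP_FAILED) {
        perror("mmap");
        return EXIT_FAILURE;
    }

    char *driver = desc + desc_size;
    char *device = driver + sizeof(struct vring_packed_desc_event);

    /* Both event suppression structs fall in the same host page. */
    printf("driver area in page %lu, device area in page %lu\n",
           (unsigned long)((uintptr_t)driver / page),
           (unsigned long)((uintptr_t)device / page));

    munmap(desc, total);
    return 0;
}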
I am a little confused about where the issue lies.
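If the issue is only that these two structures share a host page, would giving
the device-writable area its own page-aligned mapping, along the following
lines, be the right direction? This is just a sketch with an illustrative
rounding helper (QEMU would presumably use ROUND_UP() and
qemu_real_host_page_size()), not tested code:

/*
 * Sketch of a possible fix: allocate the device-writable area in its own
 * page-aligned mapping so it can be exposed RW to the device while the
 * descriptor + driver event suppression mapping stays RO. The helper and
 * sizes are illustrative, not the QEMU code.
 */
#include <stddef.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

static size_t page_round_up(size_t n)
{
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    return (n + page - 1) & ~(page - 1);
}

int main(void)
{
    size_t desc_size = 256 * 16;  /* hypothetical: 256 16-byte descriptors */
    size_t event_size = 4;        /* sizeof(struct vring_packed_desc_event) */

    /* RO-for-device mapping: descriptors + driver event suppression. */
    size_t driver_size = page_round_up(desc_size + event_size);
    void *driver_area = mmap(NULL, driver_size, PROT_READ | PROT_WRITE,
                             MAP_SHARED | MAP_ANONYMOUS, -1, 0);

    /* RW-for-device mapping: device event suppression, own page(s). */
    size_t device_size = page_round_up(event_size);
    void *device_area = mmap(NULL, device_size, PROT_READ | PROT_WRITE,
                             MAP_SHARED | MAP_ANONYMOUS, -1, 0);

    if (driver_area == MAP_FAILED || device_area == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* The RO and RW regions can now get different device-side perms. */
    printf("driver area %p (%zu B), device area %p (%zu B)\n",
           driver_area, driver_size, device_area, device_size);

    munmap(driver_area, driver_size);
    munmap(device_area, device_size);
    return 0;
}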
Thanks,
Sahil