From: Wei Xu <wexu@redhat.com>
To: Jason Wang <jasowang@redhat.com>
Cc: mst@redhat.com, tiwei.bie@intel.com, jfreimann@redhat.com,
qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [PATCH 7/8] virtio: get avail bytes check for packed ring
Date: Mon, 4 Jun 2018 14:07:23 +0800
Message-ID: <20180604060723.GA13354@wei-ubt>
In-Reply-To: <3302231d-bb95-1cba-9d6b-87e6c8b76c00@redhat.com>
On Wed, Apr 11, 2018 at 11:03:24AM +0800, Jason Wang wrote:
>
>
> On 2018-04-04 20:54, wexu@redhat.com wrote:
> >From: Wei Xu <wexu@redhat.com>
> >
> >Mostly the same as the 1.0 code; copied separately for the
> >prototype, still needs refactoring.
> >
> >Signed-off-by: Wei Xu <wexu@redhat.com>
> >---
> > hw/virtio/virtio.c | 142 +++++++++++++++++++++++++++++++++++++++++++++++++++--
> > 1 file changed, 139 insertions(+), 3 deletions(-)
> >
> >diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
> >index def07c6..cf726f3 100644
> >--- a/hw/virtio/virtio.c
> >+++ b/hw/virtio/virtio.c
> >@@ -836,9 +836,9 @@ static int virtqueue_read_next_desc(VirtIODevice *vdev, VRingDesc *desc,
> > return VIRTQUEUE_READ_DESC_MORE;
> > }
> >-void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
> >- unsigned int *out_bytes,
> >- unsigned max_in_bytes, unsigned max_out_bytes)
> >+static void virtqueue_get_avail_bytes_split(VirtQueue *vq,
> >+ unsigned int *in_bytes, unsigned int *out_bytes,
> >+ unsigned max_in_bytes, unsigned max_out_bytes)
> > {
> > VirtIODevice *vdev = vq->vdev;
> > unsigned int max, idx;
> >@@ -961,6 +961,142 @@ err:
> > goto done;
> > }
> >+static void virtqueue_get_avail_bytes_packed(VirtQueue *vq,
> >+ unsigned int *in_bytes, unsigned int *out_bytes,
> >+ unsigned max_in_bytes, unsigned max_out_bytes)
> >+{
> >+ VirtIODevice *vdev = vq->vdev;
> >+ unsigned int max, idx;
> >+ unsigned int total_bufs, in_total, out_total;
> >+ MemoryRegionCache *desc_cache;
> >+ VRingMemoryRegionCaches *caches;
> >+ MemoryRegionCache indirect_desc_cache = MEMORY_REGION_CACHE_INVALID;
> >+ int64_t len = 0;
> >+ VRingDescPacked desc;
> >+
> >+ if (unlikely(!vq->packed.desc)) {
> >+ if (in_bytes) {
> >+ *in_bytes = 0;
> >+ }
> >+ if (out_bytes) {
> >+ *out_bytes = 0;
> >+ }
> >+ return;
> >+ }
> >+
> >+ rcu_read_lock();
> >+ idx = vq->last_avail_idx;
> >+ total_bufs = in_total = out_total = 0;
> >+
> >+ max = vq->packed.num;
> >+ caches = vring_get_region_caches(vq);
> >+ if (caches->desc.len < max * sizeof(VRingDescPacked)) {
> >+ virtio_error(vdev, "Cannot map descriptor ring");
> >+ goto err;
> >+ }
> >+
> >+ desc_cache = &caches->desc;
> >+ vring_desc_read_packed(vdev, &desc, desc_cache, idx);
> >+ while (is_desc_avail(&desc)) {
> >+ unsigned int num_bufs;
> >+ unsigned int i;
> >+
> >+ num_bufs = total_bufs;
> >+
> >+ if (desc.flags & VRING_DESC_F_INDIRECT) {
> >+ if (desc.len % sizeof(VRingDescPacked)) {
> >+ virtio_error(vdev, "Invalid size for indirect buffer table");
> >+ goto err;
> >+ }
> >+
> >+ /* If we've got too many, that implies a descriptor loop. */
> >+ if (num_bufs >= max) {
> >+ virtio_error(vdev, "Looped descriptor");
> >+ goto err;
> >+ }
> >+
> >+ /* loop over the indirect descriptor table */
> >+ len = address_space_cache_init(&indirect_desc_cache,
> >+ vdev->dma_as,
> >+ desc.addr, desc.len, false);
> >+ desc_cache = &indirect_desc_cache;
> >+ if (len < desc.len) {
> >+ virtio_error(vdev, "Cannot map indirect buffer");
> >+ goto err;
> >+ }
> >+
> >+ max = desc.len / sizeof(VRingDescPacked);
> >+ num_bufs = i = 0;
> >+ vring_desc_read_packed(vdev, &desc, desc_cache, i);
> >+ }
> >+
> >+ do {
> >+ /* If we've got too many, that implies a descriptor loop. */
> >+ if (++num_bufs > max) {
> >+ virtio_error(vdev, "Looped descriptor");
> >+ goto err;
> >+ }
> >+
> >+ if (desc.flags & VRING_DESC_F_WRITE) {
> >+ in_total += desc.len;
> >+ } else {
> >+ out_total += desc.len;
> >+ }
> >+ if (in_total >= max_in_bytes && out_total >= max_out_bytes) {
> >+ goto done;
> >+ }
> >+
> >+ if (desc_cache == &indirect_desc_cache) {
> >+ vring_desc_read_packed(vdev, &desc, desc_cache,
> >+ ++i % vq->packed.num);
> >+ } else {
> >+ vring_desc_read_packed(vdev, &desc, desc_cache,
> >+ ++idx % vq->packed.num);
> >+ }
>
> Need to make sure desc.flags is read before the other fields, otherwise we
> may get a stale address.
OK, need a barrier here, thanks.
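Something along these lines should do it (untested sketch against this RFC;
it assumes the VRingDescPacked layout from patch 1/8, and
vring_desc_read_packed_ordered is only a hypothetical name):

/* Hypothetical helper, not part of this series: read flags first,
 * then fence, then read the remaining descriptor fields. */
static void vring_desc_read_packed_ordered(VirtIODevice *vdev,
                                           VRingDescPacked *desc,
                                           MemoryRegionCache *cache, int i)
{
    hwaddr off = i * sizeof(VRingDescPacked);

    /* Read flags first; availability is decided on this value alone. */
    address_space_read_cached(cache, off + offsetof(VRingDescPacked, flags),
                              &desc->flags, sizeof(desc->flags));
    virtio_tswap16s(vdev, &desc->flags);

    /*
     * Pairs with the driver's write barrier: addr/len/id must not be
     * read before flags shows the descriptor as available.
     */
    smp_rmb();

    address_space_read_cached(cache, off + offsetof(VRingDescPacked, addr),
                              &desc->addr, sizeof(desc->addr));
    address_space_read_cached(cache, off + offsetof(VRingDescPacked, len),
                              &desc->len, sizeof(desc->len));
    address_space_read_cached(cache, off + offsetof(VRingDescPacked, id),
                              &desc->id, sizeof(desc->id));
    virtio_tswap64s(vdev, &desc->addr);
    virtio_tswap32s(vdev, &desc->len);
    virtio_tswap16s(vdev, &desc->id);
}

The split ring only needed this kind of barrier on avail->idx, but in the
packed ring the availability flag lives in the descriptor itself, so the
ordered read has to happen on every descriptor fetch.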
Wei
>
> Thanks
>
> >+ } while (desc.flags & VRING_DESC_F_NEXT);
> >+
> >+ if (desc_cache == &indirect_desc_cache) {
> >+ address_space_cache_destroy(&indirect_desc_cache);
> >+ total_bufs++;
> >+ /* We missed one idx step for the indirect desc */
> >+ idx++;
> >+ } else {
> >+ total_bufs = num_bufs;
> >+ }
> >+
> >+ desc_cache = &caches->desc;
> >+ vring_desc_read_packed(vdev, &desc, desc_cache, idx % vq->packed.num);
> >+ }
> >+
> >+done:
> >+ address_space_cache_destroy(&indirect_desc_cache);
> >+ if (in_bytes) {
> >+ *in_bytes = in_total;
> >+ }
> >+ if (out_bytes) {
> >+ *out_bytes = out_total;
> >+ }
> >+ rcu_read_unlock();
> >+ return;
> >+
> >+err:
> >+ in_total = out_total = 0;
> >+ goto done;
> >+}
> >+
> >+void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
> >+ unsigned int *out_bytes,
> >+ unsigned max_in_bytes, unsigned max_out_bytes)
> >+{
> >+ if (virtio_vdev_has_feature(vq->vdev, VIRTIO_F_RING_PACKED)) {
> >+ virtqueue_get_avail_bytes_packed(vq, in_bytes, out_bytes,
> >+ max_in_bytes, max_out_bytes);
> >+ } else {
> >+ virtqueue_get_avail_bytes_split(vq, in_bytes, out_bytes,
> >+ max_in_bytes, max_out_bytes);
> >+ }
> >+}
> >+
> > int virtqueue_avail_bytes(VirtQueue *vq, unsigned int in_bytes,
> > unsigned int out_bytes)
> > {
>
>