From: "Michael S. Tsirkin" <mst@redhat.com>
To: Jason Wang <jasowang@redhat.com>
Cc: pbonzini@redhat.com, crosthwaite.peter@gmail.com,
rth@twiddle.net, qemu-devel@nongnu.org,
Cornelia Huck <cornelia.huck@de.ibm.com>
Subject: Re: [Qemu-devel] [PATCH] exec: don't return int64_t in address_space_cache_init()
Date: Thu, 30 Mar 2017 00:26:52 +0300
Message-ID: <20170330002612-mutt-send-email-mst@kernel.org>
In-Reply-To: <1490767970-23689-1-git-send-email-jasowang@redhat.com>

On Wed, Mar 29, 2017 at 02:12:50PM +0800, Jason Wang wrote:
> We return int64_t as the length of the region cache but accept hwaddr
> as the required length. This is wrong and may confuse the caller: it
> leads to a comparison between signed and unsigned types, so the
> caller cannot catch the failure. Fix this by returning hwaddr and
> returning zero on failure.
>
> Fixes: 5eba0404b9829 ("virtio: use MemoryRegionCache to access descriptors")
> Fixes: e45da65322386 ("virtio: validate address space cache during init")
> Cc: Cornelia Huck <cornelia.huck@de.ibm.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Signed-off-by: Jason Wang <jasowang@redhat.com>
Can you be more specific about the symptoms this fixes in the
commit log?
E.g. "This actually triggers on XYZ when using ABC".
> ---
> exec.c | 12 ++++++------
> hw/virtio/virtio.c | 7 +++----
> include/exec/memory.h | 13 ++++++-------
> 3 files changed, 15 insertions(+), 17 deletions(-)
>
> diff --git a/exec.c b/exec.c
> index e57a8a2..9b71174 100644
> --- a/exec.c
> +++ b/exec.c
> @@ -3230,11 +3230,11 @@ void cpu_physical_memory_unmap(void *buffer, hwaddr len,
> #define RCU_READ_UNLOCK(...) rcu_read_unlock()
> #include "memory_ldst.inc.c"
>
> -int64_t address_space_cache_init(MemoryRegionCache *cache,
> - AddressSpace *as,
> - hwaddr addr,
> - hwaddr len,
> - bool is_write)
> +hwaddr address_space_cache_init(MemoryRegionCache *cache,
> + AddressSpace *as,
> + hwaddr addr,
> + hwaddr len,
> + bool is_write)
> {
> hwaddr l, xlat;
> MemoryRegion *mr;
> @@ -3245,7 +3245,7 @@ int64_t address_space_cache_init(MemoryRegionCache *cache,
> l = len;
> mr = address_space_translate(as, addr, &xlat, &l, is_write);
> if (!memory_access_is_direct(mr, is_write)) {
> - return -EINVAL;
> + return 0;
> }
>
> l = address_space_extend_translation(as, addr, len, mr, xlat, l, is_write);
> diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
> index 03592c5..3482be2 100644
> --- a/hw/virtio/virtio.c
> +++ b/hw/virtio/virtio.c
> @@ -129,9 +129,8 @@ static void virtio_init_region_cache(VirtIODevice *vdev, int n)
> VirtQueue *vq = &vdev->vq[n];
> VRingMemoryRegionCaches *old = vq->vring.caches;
> VRingMemoryRegionCaches *new;
> - hwaddr addr, size;
> + hwaddr addr, size, len;
> int event_size;
> - int64_t len;
>
> event_size = virtio_vdev_has_feature(vq->vdev, VIRTIO_RING_F_EVENT_IDX) ? 2 : 0;
>
> @@ -586,7 +585,7 @@ void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
> unsigned int total_bufs, in_total, out_total;
> VRingMemoryRegionCaches *caches;
> MemoryRegionCache indirect_desc_cache = MEMORY_REGION_CACHE_INVALID;
> - int64_t len = 0;
> + hwaddr len = 0;
> int rc;
>
> if (unlikely(!vq->vring.desc)) {
> @@ -831,7 +830,7 @@ void *virtqueue_pop(VirtQueue *vq, size_t sz)
> VRingMemoryRegionCaches *caches;
> MemoryRegionCache indirect_desc_cache = MEMORY_REGION_CACHE_INVALID;
> MemoryRegionCache *desc_cache;
> - int64_t len;
> + hwaddr len;
> VirtIODevice *vdev = vq->vdev;
> VirtQueueElement *elem = NULL;
> unsigned out_num, in_num;
> diff --git a/include/exec/memory.h b/include/exec/memory.h
> index e39256a..932dd00 100644
> --- a/include/exec/memory.h
> +++ b/include/exec/memory.h
> @@ -1444,8 +1444,7 @@ struct MemoryRegionCache {
> * @is_write: indicates the transfer direction
> *
> * Will only work with RAM, and may map a subset of the requested range by
> - * returning a value that is less than @len. On failure, return a negative
> - * errno value.
> + * returning a value that is less than @len. On failure, return zero.
> *
> * Because it only works with RAM, this function can be used for
> * read-modify-write operations. In this case, is_write should be %true.
> @@ -1453,11 +1452,11 @@ struct MemoryRegionCache {
> * Note that addresses passed to the address_space_*_cached functions
> * are relative to @addr.
> */
> -int64_t address_space_cache_init(MemoryRegionCache *cache,
> - AddressSpace *as,
> - hwaddr addr,
> - hwaddr len,
> - bool is_write);
> +hwaddr address_space_cache_init(MemoryRegionCache *cache,
> + AddressSpace *as,
> + hwaddr addr,
> + hwaddr len,
> + bool is_write);
>
> /**
> * address_space_cache_invalidate: complete a write to a #MemoryRegionCache
> --
> 2.7.4
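
With the patch applied, a caller catches both failure and a partial
mapping with a single unsigned comparison (again a sketch with
hypothetical names, not code from the patch):

    hwaddr len;

    len = address_space_cache_init(&cache, as, addr, size, true);
    /* len == 0 on failure, len < size on a partial mapping; both are
     * rejected by the same unsigned comparison. */
    if (len < size) {
        goto err;
    }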