From: Cornelia Huck <cornelia.huck@de.ibm.com>
To: Jason Wang <jasowang@redhat.com>
Cc: mst@redhat.com, qemu-devel@nongnu.org, pbonzini@redhat.com,
peterx@redhat.com
Subject: Re: [Qemu-devel] [PATCH] virtio: destroy region cache during reset
Date: Wed, 8 Mar 2017 10:19:22 +0100
Message-ID: <20170308101922.730b1579.cornelia.huck@de.ibm.com>
In-Reply-To: <cb84b888-4917-162d-31a5-f708f4783444@redhat.com>
On Wed, 8 Mar 2017 11:18:27 +0800
Jason Wang <jasowang@redhat.com> wrote:
> On 2017-03-07 18:16, Cornelia Huck wrote:
> > On Tue, 7 Mar 2017 16:47:58 +0800
> > Jason Wang <jasowang@redhat.com> wrote:
> >
> >> We don't destroy the region cache during reset, which can leak the
> >> mappings of the previous driver to a buggy or malicious driver that
> >> doesn't set the vring address before starting to use the device. Fix
> >> this by destroying the region cache during reset and validating it
> >> before trying to use it. While at it, also check the return value of
> >> address_space_cache_init() in virtio_init_region_cache() to make sure
> >> we have a correct region cache.
> >>
> >> Signed-off-by: Jason Wang <jasowang@redhat.com>
> >> ---
> >> hw/virtio/virtio.c | 88 ++++++++++++++++++++++++++++++++++++++++++++++--------
> >> 1 file changed, 76 insertions(+), 12 deletions(-)
> >> @@ -190,6 +211,10 @@ static inline uint16_t vring_avail_flags(VirtQueue *vq)
> >> {
> >> VRingMemoryRegionCaches *caches = atomic_rcu_read(&vq->vring.caches);
> >> hwaddr pa = offsetof(VRingAvail, flags);
> >> + if (!caches) {
> >> + virtio_error(vq->vdev, "Cannot map avail flags");
> > I'm not sure that virtio_error is the right thing here; ending up in
> > this function with !caches indicates an error in our logic.
>
> Probably not, this can be triggered by a buggy guest.
I would think that even a buggy guest should not be able to trigger
accesses to vring structures that have not yet been set up. What am I
missing?
>
> > An assert
> > might be better (and I hope we can sort out all of those errors exposed
> > by the introduction of region caches for 2.9...)
>
> I thought we should avoid asserts as much as possible in this case. But
> if you and the maintainer want an assert, that's also fine.
My personal rule-of-thumb:
- If it is something that can be triggered by the guest, or it is
something that is easily recovered, set the device to broken.
- If it is something that indicates that we messed up our internal
logic, use an assert.
I think arriving here with !caches indicates the second case; but if
there is a way for a guest to trigger this, setting the device to
broken would certainly be better.
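To illustrate what I mean, a rough, untested sketch of the two variants
for the accessor:

    /* (a) guest-triggerable -> recoverable: mark the device broken */
    if (!caches) {
        virtio_error(vq->vdev, "Cannot map avail flags");
        return 0;
    }

    /* (b) only reachable through a qemu bug -> internal logic error */
    assert(caches);

i.e. (a) for anything the guest can provoke, (b) when only our own code
can get us there.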
>
> >
> >> + return 0;
> >> + }
> >> return virtio_lduw_phys_cached(vq->vdev, &caches->avail, pa);
> >> }
> >>
> >> @@ -1117,6 +1174,15 @@ static enum virtio_device_endian virtio_current_cpu_endian(void)
> >> }
> >> }
> >>
> >> +static void virtio_virtqueue_reset_region_cache(struct VirtQueue *vq)
> >> +{
> >> + VRingMemoryRegionCaches *caches;
> >> +
> >> + caches = atomic_read(&vq->vring.caches);
> >> + atomic_set(&vq->vring.caches, NULL);
> >> + virtio_free_region_cache(caches);
> > Shouldn't this use rcu to free it? Unconditionally setting caches to
> > NULL feels wrong...
>
> Right, will switch to use rcu.
>
> >> +}
> >> +
> >> void virtio_reset(void *opaque)
> >> {
> >> VirtIODevice *vdev = opaque;
> >> @@ -1157,6 +1223,7 @@ void virtio_reset(void *opaque)
> >> vdev->vq[i].notification = true;
> >> vdev->vq[i].vring.num = vdev->vq[i].vring.num_default;
> >> vdev->vq[i].inuse = 0;
> >> + virtio_virtqueue_reset_region_cache(&vdev->vq[i]);
> > ...especially as you call it in a reset context here.
> >
> >> }
> >> }
> >>
> >> @@ -2451,13 +2518,10 @@ static void virtio_device_free_virtqueues(VirtIODevice *vdev)
> >> }
> >>
> >> for (i = 0; i < VIRTIO_QUEUE_MAX; i++) {
> >> - VRingMemoryRegionCaches *caches;
> >> if (vdev->vq[i].vring.num == 0) {
> >> break;
> >> }
> >> - caches = atomic_read(&vdev->vq[i].vring.caches);
> >> - atomic_set(&vdev->vq[i].vring.caches, NULL);
> >> - virtio_free_region_cache(caches);
> >> + virtio_virtqueue_reset_region_cache(&vdev->vq[i]);
> > OTOH, immediate destruction may still be called for during device
> > finalization.
> >
>
> Right, but to avoid code duplication, using rcu unconditionally should
> do no harm here.
Yes, this should be fine.
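Just so we are talking about the same thing, a rough sketch of the rcu
variant (untested; assumes VRingMemoryRegionCaches gains a
struct rcu_head rcu member):

    static void virtio_virtqueue_reset_region_cache(struct VirtQueue *vq)
    {
        VRingMemoryRegionCaches *caches;

        caches = atomic_read(&vq->vring.caches);
        atomic_set(&vq->vring.caches, NULL);
        if (caches) {
            /* defer the unmap until all rcu readers are done with the
             * old caches */
            call_rcu(caches, virtio_free_region_cache, rcu);
        }
    }

That way both the reset path and device finalization can share the same
helper.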