From: Eugenio Perez Martin <eperezma@redhat.com>
To: Jason Wang <jasowang@redhat.com>
Cc: Parav Pandit <parav@mellanox.com>,
"Michael S. Tsirkin" <mst@redhat.com>,
qemu-devel <qemu-devel@nongnu.org>,
virtualization <virtualization@lists.linux-foundation.org>,
Stefan Hajnoczi <stefanha@redhat.com>,
Eli Cohen <eli@mellanox.com>,
Stefano Garzarella <sgarzare@redhat.com>
Subject: Re: [PATCH v2 3/3] vdpa: Check for iova range at mappings changes
Date: Tue, 12 Oct 2021 08:27:52 +0200 [thread overview]
Message-ID: <CAJaqyWec_h5nHPuWxZHLzOTVde57ghOXVat7HNPMryccb91mXQ@mail.gmail.com> (raw)
In-Reply-To: <CACGkMEsGAKroQysXsNnQHVVnCZcEBsL34-qnDry9JEMcefVRAw@mail.gmail.com>
On Tue, Oct 12, 2021 at 5:50 AM Jason Wang <jasowang@redhat.com> wrote:
>
> On Tue, Oct 5, 2021 at 9:49 PM Eugenio Pérez <eperezma@redhat.com> wrote:
> >
> > Check the vdpa device's iova range before updating memory regions, so
> > we don't add any mapping outside of it, and report any invalid change.
> >
> > Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> > ---
> > include/hw/virtio/vhost-vdpa.h | 2 +
> > hw/virtio/vhost-vdpa.c | 68 ++++++++++++++++++++++++++--------
> > hw/virtio/trace-events | 1 +
> > 3 files changed, 55 insertions(+), 16 deletions(-)
> >
> > diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
> > index a8963da2d9..c288cf7ecb 100644
> > --- a/include/hw/virtio/vhost-vdpa.h
> > +++ b/include/hw/virtio/vhost-vdpa.h
> > @@ -13,6 +13,7 @@
> > #define HW_VIRTIO_VHOST_VDPA_H
> >
> > #include "hw/virtio/virtio.h"
> > +#include "standard-headers/linux/vhost_types.h"
> >
> > typedef struct VhostVDPAHostNotifier {
> > MemoryRegion mr;
> > @@ -24,6 +25,7 @@ typedef struct vhost_vdpa {
> > uint32_t msg_type;
> > bool iotlb_batch_begin_sent;
> > MemoryListener listener;
> > + struct vhost_vdpa_iova_range iova_range;
> > struct vhost_dev *dev;
> > VhostVDPAHostNotifier notifier[VIRTIO_QUEUE_MAX];
> > } VhostVDPA;
> > diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
> > index be7c63b4ba..6654287050 100644
> > --- a/hw/virtio/vhost-vdpa.c
> > +++ b/hw/virtio/vhost-vdpa.c
> > @@ -37,20 +37,34 @@ static Int128 vhost_vdpa_section_end(const MemoryRegionSection *section)
> > return llend;
> > }
> >
> > -static bool vhost_vdpa_listener_skipped_section(MemoryRegionSection *section)
> > -{
> > - return (!memory_region_is_ram(section->mr) &&
> > - !memory_region_is_iommu(section->mr)) ||
> > - memory_region_is_protected(section->mr) ||
> > - /* vhost-vDPA doesn't allow MMIO to be mapped */
> > - memory_region_is_ram_device(section->mr) ||
> > - /*
> > - * Sizing an enabled 64-bit BAR can cause spurious mappings to
> > - * addresses in the upper part of the 64-bit address space. These
> > - * are never accessed by the CPU and beyond the address width of
> > - * some IOMMU hardware. TODO: VDPA should tell us the IOMMU width.
> > - */
> > - section->offset_within_address_space & (1ULL << 63);
> > +static bool vhost_vdpa_listener_skipped_section(MemoryRegionSection *section,
> > + uint64_t iova_min,
> > + uint64_t iova_max)
> > +{
> > + Int128 llend;
> > +
> > + if ((!memory_region_is_ram(section->mr) &&
> > + !memory_region_is_iommu(section->mr)) ||
> > + memory_region_is_protected(section->mr) ||
> > + /* vhost-vDPA doesn't allow MMIO to be mapped */
> > + memory_region_is_ram_device(section->mr)) {
> > + return true;
> > + }
> > +
> > + if (section->offset_within_address_space < iova_min) {
> > + error_report("RAM section out of device range (min=0x%" PRIx64
> > + ", addr=0x%" HWADDR_PRIx ")",
> > + iova_min, section->offset_within_address_space);
> > + return true;
> > + }
> > +
> > + llend = vhost_vdpa_section_end(section);
> > + if (int128_gt(llend, int128_make64(iova_max))) {
> > + error_report("RAM section out of device range (max=0x%" PRIx64
> > + ", end addr=0x%" PRIx64 ")",
> > + iova_max, int128_get64(llend));
> > + return true;
> > + }
> > +
> > + return false;
> > }
> >
> > static int vhost_vdpa_dma_map(struct vhost_vdpa *v, hwaddr iova, hwaddr size,
> > @@ -162,7 +176,8 @@ static void vhost_vdpa_listener_region_add(MemoryListener *listener,
> > void *vaddr;
> > int ret;
> >
> > - if (vhost_vdpa_listener_skipped_section(section)) {
> > + if (vhost_vdpa_listener_skipped_section(section, v->iova_range.first,
> > + v->iova_range.last)) {
> > return;
> > }
> >
> > @@ -220,7 +235,8 @@ static void vhost_vdpa_listener_region_del(MemoryListener *listener,
> > Int128 llend, llsize;
> > int ret;
> >
> > - if (vhost_vdpa_listener_skipped_section(section)) {
> > + if (vhost_vdpa_listener_skipped_section(section, v->iova_range.first,
> > + v->iova_range.last)) {
> > return;
> > }
> >
> > @@ -288,9 +304,24 @@ static void vhost_vdpa_add_status(struct vhost_dev *dev, uint8_t status)
> > vhost_vdpa_call(dev, VHOST_VDPA_SET_STATUS, &s);
> > }
> >
> > +static int vhost_vdpa_get_iova_range(struct vhost_vdpa *v)
> > +{
> > + int ret;
> > +
> > + ret = vhost_vdpa_call(v->dev, VHOST_VDPA_GET_IOVA_RANGE, &v->iova_range);
> > + if (ret != 0) {
> > + return ret;
> > + }
>
> I think we need a fallback for the kernel that does not support
> VHOST_VDPA_GET_IOVA_RANGE?
>
I'm fine with giving a default, but only "old" kernels will lack the
ioctl: future kernels will return success with the [0, ULLONG_MAX]
range, won't they?

Taking that into account, what is a good default value? Before this
commit the valid range was [0, 2^63). Although that seems too wide for
any IOMMU, I would go with that one. The kernel itself accepts [0,
ULLONG_MAX] when the device and the IOMMU domain do not report a
narrower range...
Thanks!
> Thanks
>
> > +
> > + trace_vhost_vdpa_get_iova_range(v->dev, v->iova_range.first,
> > + v->iova_range.last);
> > + return ret;
> > +}
> > +
> > static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque, Error **errp)
> > {
> > struct vhost_vdpa *v;
> > + int r;
> > assert(dev->vhost_ops->backend_type == VHOST_BACKEND_TYPE_VDPA);
> > trace_vhost_vdpa_init(dev, opaque);
> >
> > @@ -300,6 +331,11 @@ static int vhost_vdpa_init(struct vhost_dev *dev, void *opaque, Error **errp)
> > v->listener = vhost_vdpa_memory_listener;
> > v->msg_type = VHOST_IOTLB_MSG_V2;
> >
> > + r = vhost_vdpa_get_iova_range(v);
> > + if (unlikely(r != 0)) {
> > + return r;
> > + }
> > +
> > vhost_vdpa_add_status(dev, VIRTIO_CONFIG_S_ACKNOWLEDGE |
> > VIRTIO_CONFIG_S_DRIVER);
> >
> > diff --git a/hw/virtio/trace-events b/hw/virtio/trace-events
> > index 8ed19e9d0c..650e521e35 100644
> > --- a/hw/virtio/trace-events
> > +++ b/hw/virtio/trace-events
> > @@ -52,6 +52,7 @@ vhost_vdpa_set_vring_call(void *dev, unsigned int index, int fd) "dev: %p index:
> > vhost_vdpa_get_features(void *dev, uint64_t features) "dev: %p features: 0x%"PRIx64
> > vhost_vdpa_set_owner(void *dev) "dev: %p"
> > vhost_vdpa_vq_get_addr(void *dev, void *vq, uint64_t desc_user_addr, uint64_t avail_user_addr, uint64_t used_user_addr) "dev: %p vq: %p desc_user_addr: 0x%"PRIx64" avail_user_addr: 0x%"PRIx64" used_user_addr: 0x%"PRIx64
> > +vhost_vdpa_get_iova_range(void *dev, uint64_t first, uint64_t last) "dev: %p first: 0x%"PRIx64" last: 0x%"PRIx64
> >
> > # virtio.c
> > virtqueue_alloc_element(void *elem, size_t sz, unsigned in_num, unsigned out_num) "elem %p size %zd in_num %u out_num %u"
> > --
> > 2.27.0
> >
>