From: Bharat Bhushan <bharatb.linux@gmail.com>
To: Jean-Philippe Brucker <jean-philippe@linaro.org>
Cc: Peter Maydell <peter.maydell@linaro.org>,
	kevin.tian@intel.com, tnowicki@marvell.com, mst@redhat.com,
	drjones@redhat.com, peterx@redhat.com, qemu-devel@nongnu.org,
	Auger Eric <eric.auger@redhat.com>,
	alex.williamson@redhat.com, qemu-arm@nongnu.org,
	Bharat Bhushan <bbhushan2@marvell.com>,
	linuc.decode@gmail.com, eric.auger.pro@gmail.com
Subject: Re: [PATCH v7 3/5] virtio-iommu: Call iommu notifier for attach/detach
Date: Wed, 18 Mar 2020 15:47:44 +0530	[thread overview]
Message-ID: <CAAeCc_k_7Ny1Kf2ZiAAJe2xms2bdK-DqB1S2ro+omxp0EWi28g@mail.gmail.com> (raw)
In-Reply-To: <20200317155929.GB1057687@myrica>

Hi Jean,

On Tue, Mar 17, 2020 at 9:29 PM Jean-Philippe Brucker
<jean-philippe@linaro.org> wrote:
>
> On Tue, Mar 17, 2020 at 02:46:55PM +0530, Bharat Bhushan wrote:
> > Hi Jean,
> >
> > On Tue, Mar 17, 2020 at 2:23 PM Jean-Philippe Brucker
> > <jean-philippe@linaro.org> wrote:
> > >
> > > On Tue, Mar 17, 2020 at 12:40:39PM +0530, Bharat Bhushan wrote:
> > > > Hi Jean,
> > > >
> > > > On Mon, Mar 16, 2020 at 3:41 PM Jean-Philippe Brucker
> > > > <jean-philippe@linaro.org> wrote:
> > > > >
> > > > > Hi Bharat,
> > > > >
> > > > > Could you Cc me on your next posting?  Unfortunately I don't have much
> > > > > hardware for testing this at the moment, but I might be able to help a
> > > > > little on the review.
> > > > >
> > > > > On Mon, Mar 16, 2020 at 02:40:00PM +0530, Bharat Bhushan wrote:
> > > > > > > >>> The first issue is: your guest can use 4K pages while your host uses 64KB
> > > > > > > >>> pages. In that case VFIO_DMA_MAP will fail with -EINVAL. We must devise
> > > > > > > >>> a way to pass the host settings to the VIRTIO-IOMMU device.
> > > > > > > >>>
> > > > > > > >>> Even with 64KB pages, it did not work for me. I obviously do not get the
> > > > > > > >>> storm of VFIO_DMA_MAP failures, but I still get some, most probably due to
> > > > > > > >>> wrong notifications somewhere. I will try to investigate on my side.
> > > > > > > >>>
> > > > > > > >>> Did you test with VFIO on your side?
> > > > > > > >>
> > > > > > > >> I did not try different page sizes; I only tested with a 4K page size.
> > > > > > > >>
> > > > > > > >> Yes, it works. I tested with two network devices assigned to the VM, and
> > > > > > > >> both interfaces work.
> > > > > > > >>
> > > > > > > >> First I will try with 64k page size.
> > > > > > > >
> > > > > > > > 64K page size does not work for me either.
> > > > > > > >
> > > > > > > > I think we are not passing the correct page_size_mask here
> > > > > > > > (config.page_size_mask is set to TARGET_PAGE_MASK, which is
> > > > > > > > 0xfffffffffffff000).
> > > > > > > I guess you mean with guest using 4K and host using 64K.
> > > > > > > >
> > > > > > > > We need to set this correctly according to the host page size, correct?
> > > > > > > Yes, that's correct. We need to put in place a control path that retrieves
> > > > > > > the host page settings through VFIO and informs the virtio-iommu device.
> > > > > > >
> > > > > > > Besides this issue, did you try with 64kB on host and guest?
> > > > > >
> > > > > > I tried the following:
> > > > > >   - 4k host and 4k guest: works with the v7 version
> > > > > >   - 64k host and 64k guest: does not work with v7;
> > > > > >     hard-coding config.page_size_mask to 0xffffffffffff0000 makes it work
> > > > >
> > > > > You might get this from the iova_pgsize bitmap returned by
> > > > > VFIO_IOMMU_GET_INFO. The virtio config.page_size_mask is global so there
> > > > > is the usual problem of aggregating consistent properties, but I'm
> > > > > guessing using the host page size as a granule here is safe enough.
> > > > >
> > > > > If it is a problem, we can add a PROBE property for the page size mask,
> > > > > allowing per-endpoint page masks to be defined. I have kernel patches
> > > > > somewhere that do just that.
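
[A minimal userspace sketch of what Jean describes here: reading the
iova_pgsizes bitmap from an already-opened VFIO container fd. The helper
name and error handling are illustrative only, not code from this series.]

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/vfio.h>

    /* Return the bitmap of IOVA page sizes supported by the host IOMMU,
     * or 0 on failure.  Bit N set means pages of size (1ULL << N) are
     * supported -- the same encoding virtio-iommu uses for
     * config.page_size_mask (e.g. 0xfffffffffffff000 = 4k and up). */
    static uint64_t host_iova_pgsizes(int container_fd)
    {
        struct vfio_iommu_type1_info info = { .argsz = sizeof(info) };

        if (ioctl(container_fd, VFIO_IOMMU_GET_INFO, &info) < 0)
            return 0;
        if (!(info.flags & VFIO_IOMMU_INFO_PGSIZES))
            return 0;
        return info.iova_pgsizes;
    }
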
> > > >
> > > > I do not see that we need a page size mask per endpoint.
> > > >
> > > > Meanwhile, I am trying to understand which "page-size-mask" the guest
> > > > will work with (see the illustration below this list):
> > > >
> > > > - 4K page size host and 4K page size guest:
> > > >   config.page_size_mask = 0xfffffffffffff000 will work
> > > >
> > > > - 64K page size host and 64K page size guest:
> > > >   config.page_size_mask = 0xffffffffffff0000 will work
> > > >
> > > > - 64K page size host and 4K page size guest:
> > > >    1) config.page_size_mask = 0xfffffffffffff000 will not work, as
> > > > VFIO in the host expects iova and size to be aligned to 64k (PAGE_SIZE
> > > > in the host)
> > > >    2) config.page_size_mask = 0xffffffffffff0000 will not work either;
> > > > iova initialization (in the guest) expects the minimum page size
> > > > supported by the h/w to be equal to 4k (PAGE_SIZE in the guest)
> > > >        Should we look at relaxing this in the iova allocation code?
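
[To make the mask arithmetic above concrete: a page_size_mask with all bits
set from bit N upward advertises a minimum granule of (1 << N), and a mapping
request fails when its IOVA is not aligned to that granule. A standalone
sketch with assumed values, not code from the patches:]

    #include <stdint.h>
    #include <stdio.h>

    #define MASK_4K   0xfffffffffffff000ULL  /* bits 12..63: 4k granule  */
    #define MASK_64K  0xffffffffffff0000ULL  /* bits 16..63: 64k granule */

    /* Smallest advertised page size: the lowest set bit of the mask. */
    static uint64_t min_granule(uint64_t mask)
    {
        return mask & ~(mask - 1);
    }

    int main(void)
    {
        uint64_t iova = 0x1000;  /* 4k-aligned, as a 4k guest would produce */

        /* A 64k host rejects it: 0x1000 % 0x10000 != 0. */
        printf("64k host accepts iova 0x%llx: %s\n",
               (unsigned long long)iova,
               (iova % min_granule(MASK_64K)) == 0 ? "yes" : "no");
        return 0;
    }
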
> > >
> > > Oh right, that's not great. Maybe the BUG_ON() can be removed, I'll ask on
> > > the list.
> >
> > Yes, the BUG_ON() in iova_init.
> > I tried removing it and it worked, but I have not analyzed the side effects.
>
> It might break the assumption from device drivers that mapping a page is
> safe. For example they call alloc_page() followed by dma_map_page(). In
> our situation dma-iommu.c will oblige and create one 64k mapping to one 4k
> physical page. As a result the endpoint can access the neighbouring 60k of
> memory.
>
> This isn't too terrible. After all, even when the page sizes match, device
> drivers can call dma_map_single() on sub-page buffers, which will also let
> the endpoint access a whole page. The solution, if you don't trust the
> endpoint, is to use bounce buffers.
>
> But I suspect it's not as simple as removing the BUG_ON(), we'll need to
> go over dma-iommu.c first. And it seems like assigning endpoints to guest
> userspace won't work either in this config. In vfio_dma_do_map():
>
>         mask = ((uint64_t)1 << __ffs(vfio_pgsize_bitmap(iommu))) - 1;
>
>         WARN_ON(mask & PAGE_MASK);

Yes, agreed. With a 64k granule exposed to a 4k guest, that WARN_ON fires,
as the sketch below spells out.
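
[A standalone illustration of the WARN_ON condition Jean quotes, with assumed
values and __builtin_ctzll standing in for the kernel's __ffs():]

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* virtio-iommu exposes a 64k granule to a guest using 4k pages. */
        uint64_t pgsize_bitmap   = 0xffffffffffff0000ULL; /* min page 64k */
        uint64_t guest_page_mask = ~0xfffULL;             /* 4k PAGE_MASK */

        uint64_t mask = ((uint64_t)1 << __builtin_ctzll(pgsize_bitmap)) - 1;

        /* mask = 0xffff; 0xffff & ~0xfff = 0xf000, nonzero: WARN_ON fires. */
        printf("mask & PAGE_MASK = 0x%llx -> WARN_ON %s\n",
               (unsigned long long)(mask & guest_page_mask),
               (mask & guest_page_mask) ? "fires" : "is silent");
        return 0;
    }
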

>
> If I read this correctly the WARN will trigger in a 4k guest under 64k
> host, right?  So maybe we can just say that this config isn't supported,
> unless it's an important use-case for virtio-iommu?

I sent the v8 version of the patch; with it, a guest and host using the same
page size should work.
I have not yet analyzed how to mark the 4k-guest/64k-host combination as an
unsupported configuration; I will analyze it and send a patch.

Thanks
-Bharat

>
> Thanks,
> Jean
>


