From: Alex Williamson <alex.williamson@redhat.com>
To: Oza Pawandeep <oza.oza@broadcom.com>
Cc: Joerg Roedel <joro@8bytes.org>,
Robin Murphy <robin.murphy@arm.com>,
iommu@lists.linux-foundation.org, linux-pci@vger.kernel.org,
linux-kernel@vger.kernel.org,
linux-arm-kernel@lists.infradead.org, devicetree@vger.kernel.org,
bcm-kernel-feedback-list@broadcom.com,
Oza Pawandeep <oza.pawandeep@gmail.com>
Subject: Re: [PATCH v7 0/3] PCI/IOMMU: Reserve IOVAs for PCI inbound memory
Date: Mon, 22 May 2017 13:18:38 -0600
Message-ID: <20170522131838.71258483@w520.home>
In-Reply-To: <1495471182-12490-1-git-send-email-oza.oza@broadcom.com>

On Mon, 22 May 2017 22:09:39 +0530
Oza Pawandeep <oza.oza@broadcom.com> wrote:
> The iproc-based PCI RC on the Stingray SoC has a limitation of addressing
> only 512GB of memory at once.
>
> IOVA allocation honors the device's coherent_dma_mask/dma_mask.
> In the PCI case, the current code honors the DMA mask set by the EP;
> there is no concept of a PCI host bridge dma-mask. There should be,
> since it would truly reflect the limitation of the PCI host bridge.
>
> However, even assuming Linux takes care of the largest possible dma_mask,
> the limitation could still exist because of the way the memory banks are
> implemented.
>
> For example, the memory banks:
> <0x00000000 0x80000000 0x0 0x80000000>, /* 2G @ 2G */
> <0x00000008 0x80000000 0x3 0x80000000>, /* 14G @ 34G */
> <0x00000090 0x00000000 0x4 0x00000000>, /* 16G @ 576G */
> <0x000000a0 0x00000000 0x4 0x00000000>; /* 16G @ 640G */
>
> Consider running a user-space application (SPDK) which internally uses
> vfio in order to access the PCI endpoint directly.
>
> Vfio uses huge pages, which could come from the 640G/0x000000a0 bank.
> Vfio maps the hugepage using its physical address as the IOVA:
> VFIO_IOMMU_MAP_DMA ends up calling iommu_map, which in turn calls
> arm_lpae_map, mapping IOVAs out of the addressable range.
>
> So the way the kernel allocates IOVAs (where it honors the device's
> dma_mask) and the way user space gets IOVAs are different.
>
> dma-ranges = <0x43000000 0x00 0x00 0x00 0x00 0x80 0x00>; will not work.
>
> Instead we have to use scattered dma-ranges, leaving holes.
> Hence, we have to reserve those holes against IOVA allocation for
> inbound memory. This patch set addresses only the IOVA allocation
> problem.
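[For illustration, scattered dma-ranges describing the four banks above might look like the following, in the same 7-cell format as the single entry quoted earlier. This is a sketch, not a validated binding for this SoC.]

```dts
/* One inbound window per memory bank (pci addr == cpu addr), leaving
 * holes between banks that the IOMMU layer can then reserve from IOVA
 * allocation. Cell values are illustrative. */
dma-ranges = <0x43000000 0x00000000 0x80000000 0x00000000 0x80000000 0x00000000 0x80000000>, /*  2G @   2G */
	     <0x43000000 0x00000008 0x80000000 0x00000008 0x80000000 0x00000003 0x80000000>, /* 14G @  34G */
	     <0x43000000 0x00000090 0x00000000 0x00000090 0x00000000 0x00000004 0x00000000>, /* 16G @ 576G */
	     <0x43000000 0x000000a0 0x00000000 0x000000a0 0x00000000 0x00000004 0x00000000>; /* 16G @ 640G */
```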
The description here confuses me; with vfio, the user owns the IOVA
allocation problem. Mappings are only identity mappings if the user
chooses to make them so. The dma_mask of the device is set by the
driver and is only relevant to the DMA API. Vfio is a meta-driver and
doesn't know the dma_mask of any particular device; that's the user's
job. Is the net result of what's happening here, for the vfio case,
simply to expose extra reserved regions in sysfs, which the user can
then consume to craft a compatible iova? Thanks,
Alex
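[For what it's worth, if the answer to the question above is yes, the user-space flow would presumably be to read the group's reserved_regions file and steer hugepage IOVAs around those holes. A sketch follows; the sysfs file and its "start end type" line format exist today, but the sample values below are made up.]

```shell
# Lines in /sys/kernel/iommu_groups/<N>/reserved_regions have the form
# "<start> <end> <type>". Parse a hypothetical sample of that format.
sample='0x0000001000000000 0x0000008fffffffff reserved
0x00000000fee00000 0x00000000feefffff msi'

echo "$sample" | while read start end type; do
	printf 'avoid %s..%s (%s)\n' "$start" "$end" "$type"
done
```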
>
> Changes since v7:
> - Addressed Robin's comment about removing the dependency between the
>   IOMMU and OF layers.
> - Addressed Bjorn Helgaas's comments.
>
> Changes since v6:
> - Robin's comments addressed.
>
> Changes since v5:
> Changes since v4:
> Changes since v3:
> Changes since v2:
> - minor changes, redundant checks removed
> - removed internal review
>
> Changes since v1:
> - Addressed Rob's comments.
> - Added a get_dma_ranges() function to the of_bus struct.
> - Converted the existing contents of of_dma_get_range to
>   of_bus_default_dma_get_ranges, adding that to the
>   default of_bus struct.
> - Made of_dma_get_range call of_bus_match() and then bus->get_dma_ranges.
>
>
> Oza Pawandeep (3):
> OF/PCI: expose inbound memory interface to PCI RC drivers.
> IOMMU/PCI: reserve IOVA for inbound memory for PCI masters
> PCI: add support for inbound windows resources
>
> drivers/iommu/dma-iommu.c | 44 ++++++++++++++++++++--
> drivers/of/of_pci.c | 96 +++++++++++++++++++++++++++++++++++++++++++++++
> drivers/pci/probe.c | 30 +++++++++++++--
> include/linux/of_pci.h | 7 ++++
> include/linux/pci.h | 1 +
> 5 files changed, 170 insertions(+), 8 deletions(-)
>