From: Niklas Schnelle <schnelle@linux.ibm.com>
To: Alexandra Winter <wintera@linux.ibm.com>,
Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
Robin Murphy <robin.murphy@arm.com>,
Jason Gunthorpe <jgg@nvidia.com>,
Wenjia Zhang <wenjia@linux.ibm.com>
Cc: Matthew Rosato <mjrosato@linux.ibm.com>,
Gerd Bayer <gbayer@linux.ibm.com>,
Pierre Morel <pmorel@linux.ibm.com>,
iommu@lists.linux.dev, linux-s390@vger.kernel.org,
borntraeger@linux.ibm.com, hca@linux.ibm.com, gor@linux.ibm.com,
gerald.schaefer@linux.ibm.com, agordeev@linux.ibm.com,
svens@linux.ibm.com, linux-kernel@vger.kernel.org,
Julian Ruess <julianr@linux.ibm.com>
Subject: Re: [PATCH v3 2/7] iommu: Allow .iotlb_sync_map to fail and handle s390's -ENOMEM return
Date: Tue, 03 Jan 2023 09:16:22 +0100 [thread overview]
Message-ID: <3e363da787126a4e8f779988ced92ae4624e3ec3.camel@linux.ibm.com> (raw)
In-Reply-To: <2f1beb15-e9e4-d8ab-1b68-c83f1a53c5c5@linux.ibm.com>
On Mon, 2023-01-02 at 19:25 +0100, Alexandra Winter wrote:
>
> On 02.01.23 12:56, Niklas Schnelle wrote:
> > On s390 .iotlb_sync_map is used to sync mappings to an underlying
> > hypervisor by letting the hypervisor inspect the synced IOVA range and
> > updating its shadow table. This however means that it can fail as the
> > hypervisor may run out of resources. This can be due to the hypervisor
> > being unable to pin guest pages, due to a limit on concurrently mapped
> > addresses such as vfio_iommu_type1.dma_entry_limit or other resources.
> > Either way such a failure to sync a mapping should result in
> > a DMA_MAPPING_EROR.
> >
> > Now especially when running with batched IOTLB flushes for unmap it may
> > be that some IOVAs have already been invalidated but not yet synced via
> > .iotlb_sync_map. Thus if the hypervisor indicates running out of
> > resources, first do a global flush allowing the hypervisor to free
> > resources associated with these mappings and only if that also fails
> > report this error to callers.
> >
> > Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
> > ---
> Just a small typo, I noticed
> [...]
You mean the misspelled DMA_MAPPING_ERROR, right? Either way I did edit
the commit message for a bit more clarity on some of the details:

On s390 when using a paging hypervisor, .iotlb_sync_map is used to sync
mappings by letting the hypervisor inspect the synced IOVA range and
updating a shadow table. This however means that .iotlb_sync_map can
fail as the hypervisor may run out of resources while doing the sync.
This can be due to the hypervisor being unable to pin guest pages, due
to a limit on mapped addresses such as vfio_iommu_type1.dma_entry_limit
or lack of other resources. Either way such a failure to sync a mapping
should result in a DMA_MAPPING_ERROR.
Now especially when running with batched IOTLB flushes for unmap it may
be that some IOVAs have already been invalidated but not yet synced via
.iotlb_sync_map. Thus if the hypervisor indicates running out of
resources, first do a global flush allowing the hypervisor to free
resources associated with these mappings as well as retry creating the
new mappings and only if that also fails report this error to callers.
> > diff --git a/drivers/iommu/s390-iommu.c b/drivers/iommu/s390-iommu.c
> > index ed33c6cce083..6ba38b4f5b37 100644
> > --- a/drivers/iommu/s390-iommu.c
> > +++ b/drivers/iommu/s390-iommu.c
> > @@ -210,6 +210,14 @@ static void s390_iommu_release_device(struct device *dev)
> > __s390_iommu_detach_device(zdev);
> > }
> >
> > +
> > +static int zpci_refresh_all(struct zpci_dev *zdev)
> > +{
> > + return zpci_refresh_trans((u64)zdev->fh << 32, zdev->start_dma,
> > + zdev->end_dma - zdev->start_dma + 1);
> > +
> > +}
> > +
> > static void s390_iommu_flush_iotlb_all(struct iommu_domain *domain)
> > {
> > struct s390_domain *s390_domain = to_s390_domain(domain);
> > @@ -217,8 +225,7 @@ static void s390_iommu_flush_iotlb_all(struct iommu_domain *domain)
> >
> > rcu_read_lock();
> > list_for_each_entry_rcu(zdev, &s390_domain->devices, iommu_list) {
> > - zpci_refresh_trans((u64)zdev->fh << 32, zdev->start_dma,
> > - zdev->end_dma - zdev->start_dma + 1);
> > + zpci_refresh_all(zdev);
> > }
> > rcu_read_unlock();
> > }
> > @@ -242,20 +249,32 @@ static void s390_iommu_iotlb_sync(struct iommu_domain *domain,
> > rcu_read_unlock();
> > }
> >
> > -static void s390_iommu_iotlb_sync_map(struct iommu_domain *domain,
> > +static int s390_iommu_iotlb_sync_map(struct iommu_domain *domain,
> > unsigned long iova, size_t size)
> > {
> > struct s390_domain *s390_domain = to_s390_domain(domain);
> > struct zpci_dev *zdev;
> > + int ret = 0;
> >
> > rcu_read_lock();
> > list_for_each_entry_rcu(zdev, &s390_domain->devices, iommu_list) {
> > if (!zdev->tlb_refresh)
> > continue;
> > - zpci_refresh_trans((u64)zdev->fh << 32,
> > - iova, size);
> > + ret = zpci_refresh_trans((u64)zdev->fh << 32,
> > + iova, size);
> > + /*
> > + * let the hypervisor disover invalidated entries
> typo: s/disover/discover/g
> > + * allowing it to free IOVAs and unpin pages
> > + */
> > + if (ret == -ENOMEM) {
> > + ret = zpci_refresh_all(zdev);
> > + if (ret)
> > + break;
> > + }
> > }
> > rcu_read_unlock();
> > +
> > + return ret;
> > }
> >
> > static int s390_iommu_validate_trans(struct s390_domain *s390_domain,
> [...]