From: John Garry <john.garry@huawei.com>
To: <joro@8bytes.org>, <will@kernel.org>, <robin.murphy@arm.com>
Cc: <linux-doc@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
<linux-ide@vger.kernel.org>, <iommu@lists.linux-foundation.org>,
<linux-scsi@vger.kernel.org>, <liyihang6@hisilicon.com>,
<chenxiang66@hisilicon.com>, <thunder.leizhen@huawei.com>,
<damien.lemoal@opensource.wdc.com>, <m.szyprowski@samsung.com>,
<martin.petersen@oracle.com>, <jejb@linux.ibm.com>, <hch@lst.de>
Subject: Re: [PATCH v3 2/4] dma-iommu: Add iommu_dma_opt_mapping_size()
Date: Thu, 23 Jun 2022 09:38:05 +0100 [thread overview]
Message-ID: <ebe0ce98-4a02-1e94-d21b-ccb010abfd2d@huawei.com> (raw)
In-Reply-To: <4a3ab043-f609-22cb-895f-e67c8dd8f6ab@huawei.com>
On 14/06/2022 14:12, John Garry wrote:
> On 06/06/2022 10:30, John Garry wrote:
>> Add the IOMMU callback for DMA mapping API dma_opt_mapping_size(), which
>> allows the drivers to know the optimal mapping limit and thus limit the
>> requested IOVA lengths.
>>
>> This value is based on the IOVA rcache range limit, as IOVAs allocated
>> above this limit must always be newly allocated, which may be quite slow.
>>
>
> Can I please get some sort of ack from the IOMMU people on this one?
>
Another request for an ack, please.
Thanks,
john
>
>> Signed-off-by: John Garry <john.garry@huawei.com>
>> Reviewed-by: Damien Le Moal <damien.lemoal@opensource.wdc.com>
>> ---
>> drivers/iommu/dma-iommu.c | 6 ++++++
>> drivers/iommu/iova.c | 5 +++++
>> include/linux/iova.h | 2 ++
>> 3 files changed, 13 insertions(+)
>>
>> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
>> index f90251572a5d..9e1586447ee8 100644
>> --- a/drivers/iommu/dma-iommu.c
>> +++ b/drivers/iommu/dma-iommu.c
>> @@ -1459,6 +1459,11 @@ static unsigned long iommu_dma_get_merge_boundary(struct device *dev)
>> return (1UL << __ffs(domain->pgsize_bitmap)) - 1;
>> }
>> +static size_t iommu_dma_opt_mapping_size(void)
>> +{
>> + return iova_rcache_range();
>> +}
>> +
>> static const struct dma_map_ops iommu_dma_ops = {
>> .alloc = iommu_dma_alloc,
>> .free = iommu_dma_free,
>> @@ -1479,6 +1484,7 @@ static const struct dma_map_ops iommu_dma_ops = {
>> .map_resource = iommu_dma_map_resource,
>> .unmap_resource = iommu_dma_unmap_resource,
>> .get_merge_boundary = iommu_dma_get_merge_boundary,
>> + .opt_mapping_size = iommu_dma_opt_mapping_size,
>> };
>> /*
>> diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
>> index db77aa675145..9f00b58d546e 100644
>> --- a/drivers/iommu/iova.c
>> +++ b/drivers/iommu/iova.c
>> @@ -26,6 +26,11 @@ static unsigned long iova_rcache_get(struct iova_domain *iovad,
>> static void free_cpu_cached_iovas(unsigned int cpu, struct iova_domain *iovad);
>> static void free_iova_rcaches(struct iova_domain *iovad);
>> +unsigned long iova_rcache_range(void)
>> +{
>> + return PAGE_SIZE << (IOVA_RANGE_CACHE_MAX_SIZE - 1);
>> +}
>> +
>> static int iova_cpuhp_dead(unsigned int cpu, struct hlist_node *node)
>> {
>> struct iova_domain *iovad;
>> diff --git a/include/linux/iova.h b/include/linux/iova.h
>> index 320a70e40233..c6ba6d95d79c 100644
>> --- a/include/linux/iova.h
>> +++ b/include/linux/iova.h
>> @@ -79,6 +79,8 @@ static inline unsigned long iova_pfn(struct iova_domain *iovad, dma_addr_t iova)
>> int iova_cache_get(void);
>> void iova_cache_put(void);
>> +unsigned long iova_rcache_range(void);
>> +
>> void free_iova(struct iova_domain *iovad, unsigned long pfn);
>> void __free_iova(struct iova_domain *iovad, struct iova *iova);
>> struct iova *alloc_iova(struct iova_domain *iovad, unsigned long size,
>
Thread overview: 21+ messages
2022-06-06 9:30 [PATCH v3 0/4] DMA mapping changes for SCSI core John Garry
2022-06-06 9:30 ` [PATCH v3 1/4] dma-mapping: Add dma_opt_mapping_size() John Garry
2022-06-08 17:27 ` Bart Van Assche
2022-06-06 9:30 ` [PATCH v3 2/4] dma-iommu: Add iommu_dma_opt_mapping_size() John Garry
2022-06-08 17:26 ` Bart Van Assche
2022-06-08 17:39 ` John Garry
2022-06-14 13:12 ` John Garry
2022-06-23 8:38 ` John Garry [this message]
2022-06-06 9:30 ` [PATCH v3 3/4] scsi: core: Cap shost max_sectors according to DMA optimum mapping limits John Garry
2022-06-08 17:33 ` Bart Van Assche
2022-06-08 17:50 ` John Garry
2022-06-08 21:07 ` Bart Van Assche
2022-06-09 8:00 ` John Garry
2022-06-09 17:18 ` Bart Van Assche
2022-06-09 17:54 ` John Garry
2022-06-09 20:34 ` Bart Van Assche
2022-06-10 15:37 ` John Garry
2022-06-23 8:36 ` John Garry
2022-06-06 9:30 ` [PATCH v3 4/4] libata-scsi: Cap ata_device->max_sectors according to shost->max_sectors John Garry
2022-06-07 22:43 ` [PATCH v3 0/4] DMA mapping changes for SCSI core Bart Van Assche
2022-06-08 10:14 ` John Garry