virtualization.lists.linux-foundation.org archive mirror
* Re: [PATCH v2] iova: Move fast alloc size roundup into alloc_iova_fast()
       [not found] <1638875846-23993-1-git-send-email-john.garry@huawei.com>
@ 2021-12-07 11:44 ` Robin Murphy
  2021-12-17  8:11 ` Joerg Roedel
  1 sibling, 0 replies; 2+ messages in thread
From: Robin Murphy @ 2021-12-07 11:44 UTC (permalink / raw)
  To: John Garry, joro, will; +Cc: xieyongji, virtualization, iommu, mst

On 2021-12-07 11:17, John Garry wrote:
> It really is a property of the IOVA rcache code that we need to allocate a
> power-of-2 size, so relocate the resizing into alloc_iova_fast() itself,
> rather than duplicating it at each callsite.

I'd still much prefer to resolve the issue that there shouldn't *be* 
more than one caller in the first place, but hey.

Acked-by: Robin Murphy <robin.murphy@arm.com>

> Signed-off-by: John Garry <john.garry@huawei.com>
> Acked-by: Will Deacon <will@kernel.org>
> Reviewed-by: Xie Yongji <xieyongji@bytedance.com>
> Acked-by: Jason Wang <jasowang@redhat.com>
> Acked-by: Michael S. Tsirkin <mst@redhat.com>
> ---
> Differences to v1:
> - Separate out from original series which conflicts with Robin's IOVA FQ work:
>    https://lore.kernel.org/linux-iommu/1632477717-5254-1-git-send-email-john.garry@huawei.com/
> - Add tags - thanks!
> 
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index b42e38a0dbe2..84dee53fe892 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -442,14 +442,6 @@ static dma_addr_t iommu_dma_alloc_iova(struct iommu_domain *domain,
>   
>   	shift = iova_shift(iovad);
>   	iova_len = size >> shift;
> -	/*
> -	 * Freeing non-power-of-two-sized allocations back into the IOVA caches
> -	 * will come back to bite us badly, so we have to waste a bit of space
> -	 * rounding up anything cacheable to make sure that can't happen. The
> -	 * order of the unadjusted size will still match upon freeing.
> -	 */
> -	if (iova_len < (1 << (IOVA_RANGE_CACHE_MAX_SIZE - 1)))
> -		iova_len = roundup_pow_of_two(iova_len);
>   
>   	dma_limit = min_not_zero(dma_limit, dev->bus_dma_limit);
>   
> diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
> index 9e8bc802ac05..ff567cbc42f7 100644
> --- a/drivers/iommu/iova.c
> +++ b/drivers/iommu/iova.c
> @@ -497,6 +497,15 @@ alloc_iova_fast(struct iova_domain *iovad, unsigned long size,
>   	unsigned long iova_pfn;
>   	struct iova *new_iova;
>   
> +	/*
> +	 * Freeing non-power-of-two-sized allocations back into the IOVA caches
> +	 * will come back to bite us badly, so we have to waste a bit of space
> +	 * rounding up anything cacheable to make sure that can't happen. The
> +	 * order of the unadjusted size will still match upon freeing.
> +	 */
> +	if (size < (1 << (IOVA_RANGE_CACHE_MAX_SIZE - 1)))
> +		size = roundup_pow_of_two(size);
> +
>   	iova_pfn = iova_rcache_get(iovad, size, limit_pfn + 1);
>   	if (iova_pfn)
>   		return iova_pfn;
> diff --git a/drivers/vdpa/vdpa_user/iova_domain.c b/drivers/vdpa/vdpa_user/iova_domain.c
> index 1daae2608860..2b1143f11d8f 100644
> --- a/drivers/vdpa/vdpa_user/iova_domain.c
> +++ b/drivers/vdpa/vdpa_user/iova_domain.c
> @@ -292,14 +292,6 @@ vduse_domain_alloc_iova(struct iova_domain *iovad,
>   	unsigned long iova_len = iova_align(iovad, size) >> shift;
>   	unsigned long iova_pfn;
>   
> -	/*
> -	 * Freeing non-power-of-two-sized allocations back into the IOVA caches
> -	 * will come back to bite us badly, so we have to waste a bit of space
> -	 * rounding up anything cacheable to make sure that can't happen. The
> -	 * order of the unadjusted size will still match upon freeing.
> -	 */
> -	if (iova_len < (1 << (IOVA_RANGE_CACHE_MAX_SIZE - 1)))
> -		iova_len = roundup_pow_of_two(iova_len);
>   	iova_pfn = alloc_iova_fast(iovad, iova_len, limit >> shift, true);
>   
>   	return iova_pfn << shift;
> 
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization


* Re: [PATCH v2] iova: Move fast alloc size roundup into alloc_iova_fast()
       [not found] <1638875846-23993-1-git-send-email-john.garry@huawei.com>
  2021-12-07 11:44 ` [PATCH v2] iova: Move fast alloc size roundup into alloc_iova_fast() Robin Murphy
@ 2021-12-17  8:11 ` Joerg Roedel
  1 sibling, 0 replies; 2+ messages in thread
From: Joerg Roedel @ 2021-12-17  8:11 UTC (permalink / raw)
  To: John Garry; +Cc: mst, robin.murphy, virtualization, xieyongji, iommu, will

On Tue, Dec 07, 2021 at 07:17:26PM +0800, John Garry wrote:
> It really is a property of the IOVA rcache code that we need to allocate a
> power-of-2 size, so relocate the resizing into alloc_iova_fast() itself,
> rather than duplicating it at each callsite.
> 
> Signed-off-by: John Garry <john.garry@huawei.com>
> Acked-by: Will Deacon <will@kernel.org>
> Reviewed-by: Xie Yongji <xieyongji@bytedance.com>
> Acked-by: Jason Wang <jasowang@redhat.com>
> Acked-by: Michael S. Tsirkin <mst@redhat.com>
> ---
> Differences to v1:
> - Separate out from original series which conflicts with Robin's IOVA FQ work:
>   https://lore.kernel.org/linux-iommu/1632477717-5254-1-git-send-email-john.garry@huawei.com/
> - Add tags - thanks!

Applied, thanks.
