virtualization.lists.linux-foundation.org archive mirror
* Re: [PATCH 1/5] iova: Move fast alloc size roundup into alloc_iova_fast()
       [not found] ` <1632477717-5254-2-git-send-email-john.garry@huawei.com>
@ 2021-10-04 11:31   ` Will Deacon
  2021-10-11  2:06   ` Jason Wang
  2021-10-18 15:42   ` Michael S. Tsirkin
  2 siblings, 0 replies; 8+ messages in thread
From: Will Deacon @ 2021-10-04 11:31 UTC (permalink / raw)
  To: John Garry
  Cc: mst, joro, linuxarm, linux-kernel, virtualization, xieyongji,
	iommu, thunder.leizhen, robin.murphy, baolu.lu

On Fri, Sep 24, 2021 at 06:01:53PM +0800, John Garry wrote:
> Needing to allocate a power-of-2 size is really a property of the IOVA
> rcache code, so relocate the roundup functionality into
> alloc_iova_fast() itself, rather than duplicating it at the callsites.
> 
> Signed-off-by: John Garry <john.garry@huawei.com>
> ---
>  drivers/iommu/dma-iommu.c            | 8 --------
>  drivers/iommu/iova.c                 | 9 +++++++++
>  drivers/vdpa/vdpa_user/iova_domain.c | 8 --------
>  3 files changed, 9 insertions(+), 16 deletions(-)

Acked-by: Will Deacon <will@kernel.org>

Will
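
[Editorial note: to make the centralised check concrete, here is a minimal userspace sketch of the roundup that patch 1 moves into alloc_iova_fast(). The helper is a stand-in reimplementation, and the constant mirrors the kernel's IOVA_RANGE_CACHE_MAX_SIZE, which was 6 at the time of this series; this is an illustration, not the kernel code itself.]

```c
#include <assert.h>

/* Stand-in for the kernel's IOVA_RANGE_CACHE_MAX_SIZE (6 in this era):
 * the log of the largest rcache-eligible range size, in pages. */
#define IOVA_RANGE_CACHE_MAX_SIZE 6

/* Userspace stand-in for the kernel's roundup_pow_of_two(). */
static unsigned long roundup_pow_of_two(unsigned long n)
{
	unsigned long p = 1;

	while (p < n)
		p <<= 1;
	return p;
}

/* The check being centralised: only rcache-eligible sizes are rounded
 * up, so a 5-page request becomes 8 pages, while a 100-page request
 * (too big for the rcache) is left untouched. */
static unsigned long iova_fast_size(unsigned long size)
{
	if (size < (1UL << (IOVA_RANGE_CACHE_MAX_SIZE - 1)))
		size = roundup_pow_of_two(size);
	return size;
}
```

The point of the patch is that both dma-iommu.c and vduse had to carry this identical snippet; moving it into alloc_iova_fast() keeps the invariant next to the rcache code that depends on it.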
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization


* Re: [PATCH 5/5] iommu/iova: Avoid double-negatives in magazine helpers
       [not found] ` <1632477717-5254-6-git-send-email-john.garry@huawei.com>
@ 2021-10-04 11:38   ` Will Deacon
  0 siblings, 0 replies; 8+ messages in thread
From: Will Deacon @ 2021-10-04 11:38 UTC (permalink / raw)
  To: John Garry
  Cc: mst, joro, linuxarm, linux-kernel, virtualization, xieyongji,
	iommu, thunder.leizhen, robin.murphy, baolu.lu

On Fri, Sep 24, 2021 at 06:01:57PM +0800, John Garry wrote:
> A similar crash to the following could be observed if initial CPU rcache
> magazine allocations fail in init_iova_rcaches():
> 
> Unable to handle kernel NULL pointer dereference at virtual address 0000000000000000
> Mem abort info:
> 
>   free_iova_fast+0xfc/0x280
>   iommu_dma_free_iova+0x64/0x70
>   __iommu_dma_unmap+0x9c/0xf8
>   iommu_dma_unmap_sg+0xa8/0xc8
>   dma_unmap_sg_attrs+0x28/0x50
>   cq_thread_v3_hw+0x2dc/0x528
>   irq_thread_fn+0x2c/0xa0
>   irq_thread+0x130/0x1e0
>   kthread+0x154/0x158
>   ret_from_fork+0x10/0x34
> 
> The issue is that the expression !iova_magazine_full(NULL) evaluates to true;
> this falls over in __iova_rcache_insert() when we attempt to cache a mag and
> cpu_rcache->loaded == NULL:
> 
> if (!iova_magazine_full(cpu_rcache->loaded)) {
> 	can_insert = true;
> ...
> 
> if (can_insert)
> 	iova_magazine_push(cpu_rcache->loaded, iova_pfn);
> 
> As above, can_insert evaluates to true when it shouldn't, and we try to
> insert pfns into a NULL mag, which is not safe.
> 
> To avoid this, stop using double-negatives, like !iova_magazine_full() and
> !iova_magazine_empty(), and use positive tests, like
> iova_magazine_has_space() and iova_magazine_has_pfns(), respectively; these
> can safely deal with cpu_rcache->{loaded, prev} = NULL.

I don't understand why you're saying that things like !iova_magazine_empty()
are double-negatives. What about e.g. !list_empty() elsewhere in the kernel?

The crux of the fix seems to be:

> @@ -783,8 +787,9 @@ static bool __iova_rcache_insert(struct iova_caching_domain *rcached,
>  		if (new_mag) {
>  			spin_lock(&rcache->lock);
>  			if (rcache->depot_size < MAX_GLOBAL_MAGS) {
> -				rcache->depot[rcache->depot_size++] =
> -						cpu_rcache->loaded;
> +				if (cpu_rcache->loaded)
> +					rcache->depot[rcache->depot_size++] =
> +							cpu_rcache->loaded;

Which could be independent of the renaming?

Will
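
[Editorial note: the NULL-magazine hazard John describes can be sketched with simplified userspace stand-ins for the magazine helpers; the struct layout and IOVA_MAG_SIZE value here are illustrative, not the exact kernel definitions.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define IOVA_MAG_SIZE 128	/* illustrative; matches the era's kernel value */

struct iova_magazine {
	unsigned long size;
	unsigned long pfns[IOVA_MAG_SIZE];
};

/* Simplified pre-fix helper: a NULL magazine is reported as "not
 * full", so the caller's !iova_magazine_full(mag) test passes and it
 * goes on to dereference NULL in iova_magazine_push(). */
static bool iova_magazine_full(struct iova_magazine *mag)
{
	return mag && mag->size == IOVA_MAG_SIZE;
}

/* The positive-sense replacement proposed in patch 5: the NULL case
 * now yields "no space", so the caller correctly refuses to insert. */
static bool iova_magazine_has_space(struct iova_magazine *mag)
{
	return mag && mag->size < IOVA_MAG_SIZE;
}
```

With the old helper, `!iova_magazine_full(NULL)` is true and the push proceeds on a NULL magazine; with the new one, `iova_magazine_has_space(NULL)` is false and the insert path bails out safely.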

* Re: [PATCH 0/5] iommu: Some IOVA code reorganisation
       [not found] <1632477717-5254-1-git-send-email-john.garry@huawei.com>
       [not found] ` <1632477717-5254-2-git-send-email-john.garry@huawei.com>
       [not found] ` <1632477717-5254-6-git-send-email-john.garry@huawei.com>
@ 2021-10-04 11:44 ` Will Deacon
  2021-10-04 14:48   ` Robin Murphy
       [not found]   ` <cdb502c5-4896-385b-8872-f4f20e9c7e34@huawei.com>
       [not found] ` <1632477717-5254-5-git-send-email-john.garry@huawei.com>
  3 siblings, 2 replies; 8+ messages in thread
From: Will Deacon @ 2021-10-04 11:44 UTC (permalink / raw)
  To: John Garry
  Cc: mst, joro, linuxarm, linux-kernel, virtualization, xieyongji,
	iommu, thunder.leizhen, robin.murphy, baolu.lu

On Fri, Sep 24, 2021 at 06:01:52PM +0800, John Garry wrote:
> The IOVA domain structure is a bit overloaded, holding:
> - IOVA tree management
> - FQ control
> - IOVA rcache memories
> 
> Indeed only a couple of IOVA users use the rcache, and only dma-iommu.c
> uses the FQ feature.
> 
> This series separates out that structure. In addition, it moves the FQ
> code into dma-iommu.c. This is not strictly necessary, but it does make
> it easier for the FQ domain to look up the rcache domain.
> 
> The rcache code stays where it is, as it may be reworked in future, so
> there is not much point in relocating and then discarding.
> 
> This topic was initially discussed and suggested (I think) by Robin here:
> https://lore.kernel.org/linux-iommu/1d06eda1-9961-d023-f5e7-fe87e768f067@arm.com/

It would be useful to have Robin's Ack on patches 2-4. The implementation
looks straightforward to me, but the thread above isn't very clear about
what is being suggested.

To play devil's advocate: there aren't many direct users of the iovad code:
either they'll die out entirely (and everybody will use the dma-iommu code)
and it's fine having the flush queue code where it is, or we'll get more
users and the likelihood of somebody else wanting flush queues increases.

Will

* Re: [PATCH 0/5] iommu: Some IOVA code reorganisation
  2021-10-04 11:44 ` [PATCH 0/5] iommu: Some IOVA code reorganisation Will Deacon
@ 2021-10-04 14:48   ` Robin Murphy
       [not found]   ` <cdb502c5-4896-385b-8872-f4f20e9c7e34@huawei.com>
  1 sibling, 0 replies; 8+ messages in thread
From: Robin Murphy @ 2021-10-04 14:48 UTC (permalink / raw)
  To: Will Deacon, John Garry
  Cc: mst, joro, linuxarm, linux-kernel, virtualization, xieyongji,
	iommu, thunder.leizhen, baolu.lu

On 2021-10-04 12:44, Will Deacon wrote:
> On Fri, Sep 24, 2021 at 06:01:52PM +0800, John Garry wrote:
>> The IOVA domain structure is a bit overloaded, holding:
>> - IOVA tree management
>> - FQ control
>> - IOVA rcache memories
>>
>> Indeed only a couple of IOVA users use the rcache, and only dma-iommu.c
>> uses the FQ feature.
>>
>> This series separates out that structure. In addition, it moves the FQ
>> code into dma-iommu.c. This is not strictly necessary, but it does make
>> it easier for the FQ domain to look up the rcache domain.
>>
>> The rcache code stays where it is, as it may be reworked in future, so
>> there is not much point in relocating and then discarding.
>>
>> This topic was initially discussed and suggested (I think) by Robin here:
>> https://lore.kernel.org/linux-iommu/1d06eda1-9961-d023-f5e7-fe87e768f067@arm.com/
> 
> It would be useful to have Robin's Ack on patches 2-4. The implementation
> looks straightforward to me, but the thread above isn't very clear about
> what is being suggested.

FWIW I actually got about half-way through writing my own equivalent of 
patches 2-3, except tackling it from the other direction - simplifying 
the FQ code *before* moving whatever was left to iommu-dma, then I got 
side-tracked trying to make io-pgtable use that freelist properly, and 
then I've been on holiday the last 2 weeks. I've got other things to 
catch up on first but I'll try to get to this later this week.

> To play devil's advocate: there aren't many direct users of the iovad code:
> either they'll die out entirely (and everybody will use the dma-iommu code)
> and it's fine having the flush queue code where it is, or we'll get more
> users and the likelihood of somebody else wanting flush queues increases.

I think the FQ code is mostly just here as a historical artefact, since 
the IOVA allocator was the only thing common to the Intel and AMD DMA 
ops when the common FQ implementation was factored out of those, so 
although it's essentially orthogonal it was still related enough that it 
was an easy place to stick it.

Cheers,
Robin.

* Re: [PATCH 1/5] iova: Move fast alloc size roundup into alloc_iova_fast()
       [not found] ` <1632477717-5254-2-git-send-email-john.garry@huawei.com>
  2021-10-04 11:31   ` [PATCH 1/5] iova: Move fast alloc size roundup into alloc_iova_fast() Will Deacon
@ 2021-10-11  2:06   ` Jason Wang
  2021-10-18 15:42   ` Michael S. Tsirkin
  2 siblings, 0 replies; 8+ messages in thread
From: Jason Wang @ 2021-10-11  2:06 UTC (permalink / raw)
  To: John Garry
  Cc: mst, Will Deacon, Joerg Roedel, linuxarm, linux-kernel,
	virtualization, Yongji Xie, iommu, thunder.leizhen, Robin Murphy,
	Lu Baolu

On Fri, Sep 24, 2021 at 6:07 PM John Garry <john.garry@huawei.com> wrote:
>
> Needing to allocate a power-of-2 size is really a property of the IOVA
> rcache code, so relocate the roundup functionality into
> alloc_iova_fast() itself, rather than duplicating it at the callsites.
>
> Signed-off-by: John Garry <john.garry@huawei.com>

Acked-by: Jason Wang <jasowang@redhat.com>

> ---
>  drivers/iommu/dma-iommu.c            | 8 --------
>  drivers/iommu/iova.c                 | 9 +++++++++
>  drivers/vdpa/vdpa_user/iova_domain.c | 8 --------
>  3 files changed, 9 insertions(+), 16 deletions(-)
>
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index 896bea04c347..a99b3445fef8 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -444,14 +444,6 @@ static dma_addr_t iommu_dma_alloc_iova(struct iommu_domain *domain,
>
>         shift = iova_shift(iovad);
>         iova_len = size >> shift;
> -       /*
> -        * Freeing non-power-of-two-sized allocations back into the IOVA caches
> -        * will come back to bite us badly, so we have to waste a bit of space
> -        * rounding up anything cacheable to make sure that can't happen. The
> -        * order of the unadjusted size will still match upon freeing.
> -        */
> -       if (iova_len < (1 << (IOVA_RANGE_CACHE_MAX_SIZE - 1)))
> -               iova_len = roundup_pow_of_two(iova_len);
>
>         dma_limit = min_not_zero(dma_limit, dev->bus_dma_limit);
>
> diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
> index 9e8bc802ac05..ff567cbc42f7 100644
> --- a/drivers/iommu/iova.c
> +++ b/drivers/iommu/iova.c
> @@ -497,6 +497,15 @@ alloc_iova_fast(struct iova_domain *iovad, unsigned long size,
>         unsigned long iova_pfn;
>         struct iova *new_iova;
>
> +       /*
> +        * Freeing non-power-of-two-sized allocations back into the IOVA caches
> +        * will come back to bite us badly, so we have to waste a bit of space
> +        * rounding up anything cacheable to make sure that can't happen. The
> +        * order of the unadjusted size will still match upon freeing.
> +        */
> +       if (size < (1 << (IOVA_RANGE_CACHE_MAX_SIZE - 1)))
> +               size = roundup_pow_of_two(size);
> +
>         iova_pfn = iova_rcache_get(iovad, size, limit_pfn + 1);
>         if (iova_pfn)
>                 return iova_pfn;
> diff --git a/drivers/vdpa/vdpa_user/iova_domain.c b/drivers/vdpa/vdpa_user/iova_domain.c
> index 1daae2608860..2b1143f11d8f 100644
> --- a/drivers/vdpa/vdpa_user/iova_domain.c
> +++ b/drivers/vdpa/vdpa_user/iova_domain.c
> @@ -292,14 +292,6 @@ vduse_domain_alloc_iova(struct iova_domain *iovad,
>         unsigned long iova_len = iova_align(iovad, size) >> shift;
>         unsigned long iova_pfn;
>
> -       /*
> -        * Freeing non-power-of-two-sized allocations back into the IOVA caches
> -        * will come back to bite us badly, so we have to waste a bit of space
> -        * rounding up anything cacheable to make sure that can't happen. The
> -        * order of the unadjusted size will still match upon freeing.
> -        */
> -       if (iova_len < (1 << (IOVA_RANGE_CACHE_MAX_SIZE - 1)))
> -               iova_len = roundup_pow_of_two(iova_len);
>         iova_pfn = alloc_iova_fast(iovad, iova_len, limit >> shift, true);
>
>         return iova_pfn << shift;
> --
> 2.26.2
>


* Re: [PATCH 1/5] iova: Move fast alloc size roundup into alloc_iova_fast()
       [not found] ` <1632477717-5254-2-git-send-email-john.garry@huawei.com>
  2021-10-04 11:31   ` [PATCH 1/5] iova: Move fast alloc size roundup into alloc_iova_fast() Will Deacon
  2021-10-11  2:06   ` Jason Wang
@ 2021-10-18 15:42   ` Michael S. Tsirkin
  2 siblings, 0 replies; 8+ messages in thread
From: Michael S. Tsirkin @ 2021-10-18 15:42 UTC (permalink / raw)
  To: John Garry
  Cc: will, joro, linuxarm, linux-kernel, virtualization, xieyongji,
	iommu, thunder.leizhen, robin.murphy, baolu.lu

On Fri, Sep 24, 2021 at 06:01:53PM +0800, John Garry wrote:
> Needing to allocate a power-of-2 size is really a property of the IOVA
> rcache code, so relocate the roundup functionality into
> alloc_iova_fast() itself, rather than duplicating it at the callsites.
> 
> Signed-off-by: John Garry <john.garry@huawei.com>

for vdpa code:

Acked-by: Michael S. Tsirkin <mst@redhat.com>

> ---
>  drivers/iommu/dma-iommu.c            | 8 --------
>  drivers/iommu/iova.c                 | 9 +++++++++
>  drivers/vdpa/vdpa_user/iova_domain.c | 8 --------
>  3 files changed, 9 insertions(+), 16 deletions(-)
> 
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index 896bea04c347..a99b3445fef8 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -444,14 +444,6 @@ static dma_addr_t iommu_dma_alloc_iova(struct iommu_domain *domain,
>  
>  	shift = iova_shift(iovad);
>  	iova_len = size >> shift;
> -	/*
> -	 * Freeing non-power-of-two-sized allocations back into the IOVA caches
> -	 * will come back to bite us badly, so we have to waste a bit of space
> -	 * rounding up anything cacheable to make sure that can't happen. The
> -	 * order of the unadjusted size will still match upon freeing.
> -	 */
> -	if (iova_len < (1 << (IOVA_RANGE_CACHE_MAX_SIZE - 1)))
> -		iova_len = roundup_pow_of_two(iova_len);
>  
>  	dma_limit = min_not_zero(dma_limit, dev->bus_dma_limit);
>  
> diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
> index 9e8bc802ac05..ff567cbc42f7 100644
> --- a/drivers/iommu/iova.c
> +++ b/drivers/iommu/iova.c
> @@ -497,6 +497,15 @@ alloc_iova_fast(struct iova_domain *iovad, unsigned long size,
>  	unsigned long iova_pfn;
>  	struct iova *new_iova;
>  
> +	/*
> +	 * Freeing non-power-of-two-sized allocations back into the IOVA caches
> +	 * will come back to bite us badly, so we have to waste a bit of space
> +	 * rounding up anything cacheable to make sure that can't happen. The
> +	 * order of the unadjusted size will still match upon freeing.
> +	 */
> +	if (size < (1 << (IOVA_RANGE_CACHE_MAX_SIZE - 1)))
> +		size = roundup_pow_of_two(size);
> +
>  	iova_pfn = iova_rcache_get(iovad, size, limit_pfn + 1);
>  	if (iova_pfn)
>  		return iova_pfn;
> diff --git a/drivers/vdpa/vdpa_user/iova_domain.c b/drivers/vdpa/vdpa_user/iova_domain.c
> index 1daae2608860..2b1143f11d8f 100644
> --- a/drivers/vdpa/vdpa_user/iova_domain.c
> +++ b/drivers/vdpa/vdpa_user/iova_domain.c
> @@ -292,14 +292,6 @@ vduse_domain_alloc_iova(struct iova_domain *iovad,
>  	unsigned long iova_len = iova_align(iovad, size) >> shift;
>  	unsigned long iova_pfn;
>  
> -	/*
> -	 * Freeing non-power-of-two-sized allocations back into the IOVA caches
> -	 * will come back to bite us badly, so we have to waste a bit of space
> -	 * rounding up anything cacheable to make sure that can't happen. The
> -	 * order of the unadjusted size will still match upon freeing.
> -	 */
> -	if (iova_len < (1 << (IOVA_RANGE_CACHE_MAX_SIZE - 1)))
> -		iova_len = roundup_pow_of_two(iova_len);
>  	iova_pfn = alloc_iova_fast(iovad, iova_len, limit >> shift, true);
>  
>  	return iova_pfn << shift;
> -- 
> 2.26.2


* Re: [PATCH 0/5] iommu: Some IOVA code reorganisation
       [not found]   ` <cdb502c5-4896-385b-8872-f4f20e9c7e34@huawei.com>
@ 2021-11-16 14:25     ` Robin Murphy
  0 siblings, 0 replies; 8+ messages in thread
From: Robin Murphy @ 2021-11-16 14:25 UTC (permalink / raw)
  To: John Garry, Will Deacon
  Cc: mst, joro, linuxarm, linux-kernel, virtualization, xieyongji,
	iommu, thunder.leizhen, baolu.lu

On 2021-11-16 14:21, John Garry wrote:
> On 04/10/2021 12:44, Will Deacon wrote:
>> On Fri, Sep 24, 2021 at 06:01:52PM +0800, John Garry wrote:
>>> The IOVA domain structure is a bit overloaded, holding:
>>> - IOVA tree management
>>> - FQ control
>>> - IOVA rcache memories
>>>
>>> Indeed only a couple of IOVA users use the rcache, and only dma-iommu.c
>>> uses the FQ feature.
>>>
>>> This series separates out that structure. In addition, it moves the FQ
>>> code into dma-iommu.c. This is not strictly necessary, but it does make
>>> it easier for the FQ domain to look up the rcache domain.
>>>
>>> The rcache code stays where it is, as it may be reworked in future, so
>>> there is not much point in relocating and then discarding.
>>>
>>> This topic was initially discussed and suggested (I think) by Robin 
>>> here:
>>> https://lore.kernel.org/linux-iommu/1d06eda1-9961-d023-f5e7-fe87e768f067@arm.com/ 
>>>
>> It would be useful to have Robin's Ack on patches 2-4. The implementation
>> looks straightforward to me, but the thread above isn't very clear about
>> what is being suggested.
> 
> Hi Robin,
> 
> Just wondering if you had made any progress on your FQ code rework or 
> your own re-org?

Hey John - as it happens I started hacking on that in earnest about half 
an hour ago, aiming to get something out later this week.

Cheers,
Robin.

> I wasn't planning on progressing 
> https://lore.kernel.org/linux-iommu/1626259003-201303-1-git-send-email-john.garry@huawei.com/ 
> until this is done first (and that is still a big issue), even though 
> not strictly necessary.
> 
> Thanks,
> John

* Re: [PATCH 4/5] iommu: Separate IOVA rcache memories from iova_domain structure
       [not found]   ` <2c58036f-d9aa-61f9-ae4b-f6938a135de5@huawei.com>
@ 2021-12-20 13:57     ` Robin Murphy
  0 siblings, 0 replies; 8+ messages in thread
From: Robin Murphy @ 2021-12-20 13:57 UTC (permalink / raw)
  To: John Garry, joro, will, mst, jasowang
  Cc: linuxarm, linux-kernel, xieyongji, iommu, thunder.leizhen,
	virtualization, baolu.lu

Hi John,

On 2021-12-20 08:49, John Garry wrote:
> On 24/09/2021 11:01, John Garry wrote:
>> Only dma-iommu.c and vdpa actually use the "fast" mode of IOVA alloc and
>> free. As such, it's wasteful that all other IOVA domains hold the rcache
>> memories.
>>
>> In addition, the current IOVA domain init implementation is poor
>> (init_iova_domain()), in that errors are ignored and not passed to the
>> caller. The only errors can come from the IOVA rcache init, and fixing up
>> all the IOVA domain init callsites to handle the errors would take some
>> work.
>>
>> Separate the IOVA rcache out of the IOVA domain, and create a new IOVA
>> domain structure, iova_caching_domain.
>>
>> Signed-off-by: John Garry <john.garry@huawei.com>
> 
> Hi Robin,
> 
> Do you have any thoughts on this patch? The decision is whether we stick 
> with a single iova domain structure or support this super structure for 
> iova domains which support the rcache. I did not try the former - it 
> would be do-able but I am not sure on how it would look.

TBH I feel inclined to take the simpler approach of just splitting the 
rcache array to a separate allocation, making init_iova_rcaches() public 
(with a proper return value), and tweaking put_iova_domain() to make 
rcache cleanup conditional. A residual overhead of 3 extra pointers in 
iova_domain doesn't seem like *too* much for non-DMA-API users to bear. 
Unless you want to try generalising the rcache mechanism completely away 
from IOVA API specifics, it doesn't seem like there's really enough to 
justify the bother of having its own distinct abstraction layer.

Cheers,
Robin.
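
[Editorial note: a rough userspace sketch of the layout Robin is suggesting. All names here — iova_domain_init_rcaches(), the rcaches member, NR_RCACHES — are hypothetical illustrations of the shape of the idea, not the eventual kernel API.]

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

#define NR_RCACHES 6	/* illustrative: one rcache per cacheable order */

struct iova_rcache { int dummy; };	/* placeholder for the real contents */

/* The rcache array becomes a separate allocation: iova_domain keeps
 * only a pointer, which stays NULL for non-DMA-API users. */
struct iova_domain {
	/* ...tree and FQ members elided... */
	struct iova_rcache *rcaches;
};

/* A public init that can fail, in place of the void rcache setup
 * buried inside init_iova_domain(). */
static int iova_domain_init_rcaches(struct iova_domain *iovad)
{
	iovad->rcaches = calloc(NR_RCACHES, sizeof(*iovad->rcaches));
	return iovad->rcaches ? 0 : -ENOMEM;
}

/* Teardown makes rcache cleanup conditional, so domains that never
 * called the init above can still be put safely. */
static void put_iova_domain(struct iova_domain *iovad)
{
	if (iovad->rcaches) {
		free(iovad->rcaches);
		iovad->rcaches = NULL;
	}
}
```

The cost Robin cites — a few extra pointers in iova_domain for non-DMA-API users — buys the error propagation John wants without a second domain abstraction.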