* swiotlb regression fixes
@ 2022-05-11 12:58 Christoph Hellwig
2022-05-11 12:58 ` [PATCH 1/3] swiotlb: don't panic when the swiotlb buffer can't be allocated Christoph Hellwig
` (2 more replies)
0 siblings, 3 replies; 8+ messages in thread
From: Christoph Hellwig @ 2022-05-11 12:58 UTC (permalink / raw)
To: iommu; +Cc: xen-devel, Boris Ostrovsky, Stefano Stabellini, Conor.Dooley
Hi all,
attached are a bunch of fixes for regressions in the recent swiotlb
refactoring. The first one was reported by Conor, and the other two
are things I found by code inspection while trying to fix what he
reported.
_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu
* [PATCH 1/3] swiotlb: don't panic when the swiotlb buffer can't be allocated
2022-05-11 12:58 swiotlb regression fixes Christoph Hellwig
@ 2022-05-11 12:58 ` Christoph Hellwig
2022-05-13 1:31 ` Stefano Stabellini
2022-05-13 7:47 ` Conor.Dooley--- via iommu
2022-05-11 12:58 ` [PATCH 2/3] swiotlb: use the right nslabs value in swiotlb_init_remap Christoph Hellwig
2022-05-11 12:58 ` [PATCH 3/3] swiotlb: use the right nslabs-derived sizes in swiotlb_init_late Christoph Hellwig
2 siblings, 2 replies; 8+ messages in thread
From: Christoph Hellwig @ 2022-05-11 12:58 UTC (permalink / raw)
To: iommu; +Cc: xen-devel, Boris Ostrovsky, Stefano Stabellini, Conor.Dooley
For historical reasons the swiotlb code panicked when the metadata could
not be allocated, but just printed a warning when the actual main
swiotlb buffer could not be allocated. Restore this somewhat unexpected
behavior as changing it caused a boot failure on the Microchip RISC-V
PolarFire SoC Icicle kit.
Fixes: 6424e31b1c05 ("swiotlb: remove swiotlb_init_with_tbl and swiotlb_init_late_with_tbl")
Reported-by: Conor Dooley <Conor.Dooley@microchip.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Conor Dooley <Conor.Dooley@microchip.com>
---
kernel/dma/swiotlb.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index e2ef0864eb1e5..3e992a308c8a1 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -254,8 +254,10 @@ void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags,
tlb = memblock_alloc(bytes, PAGE_SIZE);
else
tlb = memblock_alloc_low(bytes, PAGE_SIZE);
- if (!tlb)
- panic("%s: failed to allocate tlb structure\n", __func__);
+ if (!tlb) {
+ pr_warn("%s: failed to allocate tlb structure\n", __func__);
+ return;
+ }
if (remap && remap(tlb, nslabs) < 0) {
memblock_free(tlb, PAGE_ALIGN(bytes));
--
2.30.2
* [PATCH 2/3] swiotlb: use the right nslabs value in swiotlb_init_remap
2022-05-11 12:58 swiotlb regression fixes Christoph Hellwig
2022-05-11 12:58 ` [PATCH 1/3] swiotlb: don't panic when the swiotlb buffer can't be allocated Christoph Hellwig
@ 2022-05-11 12:58 ` Christoph Hellwig
2022-05-13 1:39 ` Stefano Stabellini
2022-05-11 12:58 ` [PATCH 3/3] swiotlb: use the right nslabs-derived sizes in swiotlb_init_late Christoph Hellwig
2 siblings, 1 reply; 8+ messages in thread
From: Christoph Hellwig @ 2022-05-11 12:58 UTC (permalink / raw)
To: iommu; +Cc: xen-devel, Boris Ostrovsky, Stefano Stabellini, Conor.Dooley
default_nslabs should only be used to initialize nslabs; after that we
need to use the local variable, which can shrink when allocations or the
remap don't succeed.
Fixes: 6424e31b1c05 ("swiotlb: remove swiotlb_init_with_tbl and swiotlb_init_late_with_tbl")
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
kernel/dma/swiotlb.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 3e992a308c8a1..113e1e8aaca37 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -234,7 +234,7 @@ void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags,
{
struct io_tlb_mem *mem = &io_tlb_default_mem;
unsigned long nslabs = default_nslabs;
- size_t alloc_size = PAGE_ALIGN(array_size(sizeof(*mem->slots), nslabs));
+ size_t alloc_size;
size_t bytes;
void *tlb;
@@ -249,7 +249,7 @@ void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags,
* memory encryption.
*/
retry:
- bytes = PAGE_ALIGN(default_nslabs << IO_TLB_SHIFT);
+ bytes = PAGE_ALIGN(nslabs << IO_TLB_SHIFT);
if (flags & SWIOTLB_ANY)
tlb = memblock_alloc(bytes, PAGE_SIZE);
else
@@ -269,12 +269,13 @@ void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags,
goto retry;
}
+ alloc_size = PAGE_ALIGN(array_size(sizeof(*mem->slots), nslabs));
mem->slots = memblock_alloc(alloc_size, PAGE_SIZE);
if (!mem->slots)
panic("%s: Failed to allocate %zu bytes align=0x%lx\n",
__func__, alloc_size, PAGE_SIZE);
- swiotlb_init_io_tlb_mem(mem, __pa(tlb), default_nslabs, false);
+ swiotlb_init_io_tlb_mem(mem, __pa(tlb), nslabs, false);
mem->force_bounce = flags & SWIOTLB_FORCE;
if (flags & SWIOTLB_VERBOSE)
--
2.30.2
* [PATCH 3/3] swiotlb: use the right nslabs-derived sizes in swiotlb_init_late
2022-05-11 12:58 swiotlb regression fixes Christoph Hellwig
2022-05-11 12:58 ` [PATCH 1/3] swiotlb: don't panic when the swiotlb buffer can't be allocated Christoph Hellwig
2022-05-11 12:58 ` [PATCH 2/3] swiotlb: use the right nslabs value in swiotlb_init_remap Christoph Hellwig
@ 2022-05-11 12:58 ` Christoph Hellwig
2022-05-13 1:44 ` Stefano Stabellini
2 siblings, 1 reply; 8+ messages in thread
From: Christoph Hellwig @ 2022-05-11 12:58 UTC (permalink / raw)
To: iommu; +Cc: xen-devel, Boris Ostrovsky, Stefano Stabellini, Conor.Dooley
nslabs can shrink when allocations or the remap don't succeed, so make
sure to use it for all sizing. To that end, remove the bytes value,
which can go stale, and replace it with local calculations and a boolean
that indicates whether the originally requested size could be allocated.
Fixes: 6424e31b1c05 ("swiotlb: remove swiotlb_init_with_tbl and swiotlb_init_late_with_tbl")
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
kernel/dma/swiotlb.c | 19 +++++++++++--------
1 file changed, 11 insertions(+), 8 deletions(-)
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 113e1e8aaca37..d6e62a6a42ceb 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -297,9 +297,9 @@ int swiotlb_init_late(size_t size, gfp_t gfp_mask,
{
struct io_tlb_mem *mem = &io_tlb_default_mem;
unsigned long nslabs = ALIGN(size >> IO_TLB_SHIFT, IO_TLB_SEGSIZE);
- unsigned long bytes;
unsigned char *vstart = NULL;
unsigned int order;
+ bool retried = false;
int rc = 0;
if (swiotlb_force_disable)
@@ -308,7 +308,6 @@ int swiotlb_init_late(size_t size, gfp_t gfp_mask,
retry:
order = get_order(nslabs << IO_TLB_SHIFT);
nslabs = SLABS_PER_PAGE << order;
- bytes = nslabs << IO_TLB_SHIFT;
while ((SLABS_PER_PAGE << order) > IO_TLB_MIN_SLABS) {
vstart = (void *)__get_free_pages(gfp_mask | __GFP_NOWARN,
@@ -316,16 +315,13 @@ int swiotlb_init_late(size_t size, gfp_t gfp_mask,
if (vstart)
break;
order--;
+ nslabs = SLABS_PER_PAGE << order;
+ retried = true;
}
if (!vstart)
return -ENOMEM;
- if (order != get_order(bytes)) {
- pr_warn("only able to allocate %ld MB\n",
- (PAGE_SIZE << order) >> 20);
- nslabs = SLABS_PER_PAGE << order;
- }
if (remap)
rc = remap(vstart, nslabs);
if (rc) {
@@ -334,9 +330,15 @@ int swiotlb_init_late(size_t size, gfp_t gfp_mask,
nslabs = ALIGN(nslabs >> 1, IO_TLB_SEGSIZE);
if (nslabs < IO_TLB_MIN_SLABS)
return rc;
+ retried = true;
goto retry;
}
+ if (retried) {
+ pr_warn("only able to allocate %ld MB\n",
+ (PAGE_SIZE << order) >> 20);
+ }
+
mem->slots = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
get_order(array_size(sizeof(*mem->slots), nslabs)));
if (!mem->slots) {
@@ -344,7 +346,8 @@ int swiotlb_init_late(size_t size, gfp_t gfp_mask,
return -ENOMEM;
}
- set_memory_decrypted((unsigned long)vstart, bytes >> PAGE_SHIFT);
+ set_memory_decrypted((unsigned long)vstart,
+ (nslabs << IO_TLB_SHIFT) >> PAGE_SHIFT);
swiotlb_init_io_tlb_mem(mem, virt_to_phys(vstart), nslabs, true);
swiotlb_print_info();
--
2.30.2
* Re: [PATCH 1/3] swiotlb: don't panic when the swiotlb buffer can't be allocated
2022-05-11 12:58 ` [PATCH 1/3] swiotlb: don't panic when the swiotlb buffer can't be allocated Christoph Hellwig
@ 2022-05-13 1:31 ` Stefano Stabellini
2022-05-13 7:47 ` Conor.Dooley--- via iommu
1 sibling, 0 replies; 8+ messages in thread
From: Stefano Stabellini @ 2022-05-13 1:31 UTC (permalink / raw)
To: Christoph Hellwig
Cc: xen-devel, iommu, Stefano Stabellini, Boris Ostrovsky,
Conor.Dooley
On Wed, 11 May 2022, Christoph Hellwig wrote:
> For historical reasons the swiotlb code panicked when the metadata could
> not be allocated, but just printed a warning when the actual main
> swiotlb buffer could not be allocated. Restore this somewhat unexpected
> behavior as changing it caused a boot failure on the Microchip RISC-V
> PolarFire SoC Icicle kit.
>
> Fixes: 6424e31b1c05 ("swiotlb: remove swiotlb_init_with_tbl and swiotlb_init_late_with_tbl")
> Reported-by: Conor Dooley <Conor.Dooley@microchip.com>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Tested-by: Conor Dooley <Conor.Dooley@microchip.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> ---
> kernel/dma/swiotlb.c | 6 ++++--
> 1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index e2ef0864eb1e5..3e992a308c8a1 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -254,8 +254,10 @@ void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags,
> tlb = memblock_alloc(bytes, PAGE_SIZE);
> else
> tlb = memblock_alloc_low(bytes, PAGE_SIZE);
> - if (!tlb)
> - panic("%s: failed to allocate tlb structure\n", __func__);
> + if (!tlb) {
> + pr_warn("%s: failed to allocate tlb structure\n", __func__);
> + return;
> + }
>
> if (remap && remap(tlb, nslabs) < 0) {
> memblock_free(tlb, PAGE_ALIGN(bytes));
> --
> 2.30.2
>
* Re: [PATCH 2/3] swiotlb: use the right nslabs value in swiotlb_init_remap
2022-05-11 12:58 ` [PATCH 2/3] swiotlb: use the right nslabs value in swiotlb_init_remap Christoph Hellwig
@ 2022-05-13 1:39 ` Stefano Stabellini
0 siblings, 0 replies; 8+ messages in thread
From: Stefano Stabellini @ 2022-05-13 1:39 UTC (permalink / raw)
To: Christoph Hellwig
Cc: xen-devel, iommu, Stefano Stabellini, Boris Ostrovsky,
Conor.Dooley
On Wed, 11 May 2022, Christoph Hellwig wrote:
> default_nslabs should only be used to initialize nslabs, after that we
> need to use the local variable that can shrink when allocations or the
> remap don't succeed.
>
> Fixes: 6424e31b1c05 ("swiotlb: remove swiotlb_init_with_tbl and swiotlb_init_late_with_tbl")
> Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> ---
> kernel/dma/swiotlb.c | 7 ++++---
> 1 file changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index 3e992a308c8a1..113e1e8aaca37 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -234,7 +234,7 @@ void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags,
> {
> struct io_tlb_mem *mem = &io_tlb_default_mem;
> unsigned long nslabs = default_nslabs;
> - size_t alloc_size = PAGE_ALIGN(array_size(sizeof(*mem->slots), nslabs));
> + size_t alloc_size;
> size_t bytes;
> void *tlb;
>
> @@ -249,7 +249,7 @@ void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags,
> * memory encryption.
> */
> retry:
> - bytes = PAGE_ALIGN(default_nslabs << IO_TLB_SHIFT);
> + bytes = PAGE_ALIGN(nslabs << IO_TLB_SHIFT);
> if (flags & SWIOTLB_ANY)
> tlb = memblock_alloc(bytes, PAGE_SIZE);
> else
> @@ -269,12 +269,13 @@ void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags,
> goto retry;
> }
>
> + alloc_size = PAGE_ALIGN(array_size(sizeof(*mem->slots), nslabs));
> mem->slots = memblock_alloc(alloc_size, PAGE_SIZE);
> if (!mem->slots)
> panic("%s: Failed to allocate %zu bytes align=0x%lx\n",
> __func__, alloc_size, PAGE_SIZE);
>
> - swiotlb_init_io_tlb_mem(mem, __pa(tlb), default_nslabs, false);
> + swiotlb_init_io_tlb_mem(mem, __pa(tlb), nslabs, false);
> mem->force_bounce = flags & SWIOTLB_FORCE;
>
> if (flags & SWIOTLB_VERBOSE)
> --
> 2.30.2
>
* Re: [PATCH 3/3] swiotlb: use the right nslabs-derived sizes in swiotlb_init_late
2022-05-11 12:58 ` [PATCH 3/3] swiotlb: use the right nslabs-derived sizes in swiotlb_init_late Christoph Hellwig
@ 2022-05-13 1:44 ` Stefano Stabellini
0 siblings, 0 replies; 8+ messages in thread
From: Stefano Stabellini @ 2022-05-13 1:44 UTC (permalink / raw)
To: Christoph Hellwig
Cc: xen-devel, iommu, Stefano Stabellini, Boris Ostrovsky,
Conor.Dooley
On Wed, 11 May 2022, Christoph Hellwig wrote:
> nslabs can shrink when allocations or the remap don't succeed, so make
> sure to use it for all sizing. For that remove the bytes value that
> can get stale and replace it with local calculations and a boolean to
> indicate if the originally requested size could not be allocated.
>
> Fixes: 6424e31b1c05 ("swiotlb: remove swiotlb_init_with_tbl and swiotlb_init_late_with_tbl")
> Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> ---
> kernel/dma/swiotlb.c | 19 +++++++++++--------
> 1 file changed, 11 insertions(+), 8 deletions(-)
>
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index 113e1e8aaca37..d6e62a6a42ceb 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -297,9 +297,9 @@ int swiotlb_init_late(size_t size, gfp_t gfp_mask,
> {
> struct io_tlb_mem *mem = &io_tlb_default_mem;
> unsigned long nslabs = ALIGN(size >> IO_TLB_SHIFT, IO_TLB_SEGSIZE);
> - unsigned long bytes;
> unsigned char *vstart = NULL;
> unsigned int order;
> + bool retried = false;
> int rc = 0;
>
> if (swiotlb_force_disable)
> @@ -308,7 +308,6 @@ int swiotlb_init_late(size_t size, gfp_t gfp_mask,
> retry:
> order = get_order(nslabs << IO_TLB_SHIFT);
> nslabs = SLABS_PER_PAGE << order;
> - bytes = nslabs << IO_TLB_SHIFT;
>
> while ((SLABS_PER_PAGE << order) > IO_TLB_MIN_SLABS) {
> vstart = (void *)__get_free_pages(gfp_mask | __GFP_NOWARN,
> @@ -316,16 +315,13 @@ int swiotlb_init_late(size_t size, gfp_t gfp_mask,
> if (vstart)
> break;
> order--;
> + nslabs = SLABS_PER_PAGE << order;
> + retried = true;
> }
>
> if (!vstart)
> return -ENOMEM;
>
> - if (order != get_order(bytes)) {
> - pr_warn("only able to allocate %ld MB\n",
> - (PAGE_SIZE << order) >> 20);
> - nslabs = SLABS_PER_PAGE << order;
> - }
> if (remap)
> rc = remap(vstart, nslabs);
> if (rc) {
> @@ -334,9 +330,15 @@ int swiotlb_init_late(size_t size, gfp_t gfp_mask,
> nslabs = ALIGN(nslabs >> 1, IO_TLB_SEGSIZE);
> if (nslabs < IO_TLB_MIN_SLABS)
> return rc;
> + retried = true;
> goto retry;
> }
>
> + if (retried) {
> + pr_warn("only able to allocate %ld MB\n",
> + (PAGE_SIZE << order) >> 20);
> + }
> +
> mem->slots = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
> get_order(array_size(sizeof(*mem->slots), nslabs)));
> if (!mem->slots) {
> @@ -344,7 +346,8 @@ int swiotlb_init_late(size_t size, gfp_t gfp_mask,
> return -ENOMEM;
> }
>
> - set_memory_decrypted((unsigned long)vstart, bytes >> PAGE_SHIFT);
> + set_memory_decrypted((unsigned long)vstart,
> + (nslabs << IO_TLB_SHIFT) >> PAGE_SHIFT);
> swiotlb_init_io_tlb_mem(mem, virt_to_phys(vstart), nslabs, true);
>
> swiotlb_print_info();
> --
> 2.30.2
>
* Re: [PATCH 1/3] swiotlb: don't panic when the swiotlb buffer can't be allocated
2022-05-11 12:58 ` [PATCH 1/3] swiotlb: don't panic when the swiotlb buffer can't be allocated Christoph Hellwig
2022-05-13 1:31 ` Stefano Stabellini
@ 2022-05-13 7:47 ` Conor.Dooley--- via iommu
1 sibling, 0 replies; 8+ messages in thread
From: Conor.Dooley--- via iommu @ 2022-05-13 7:47 UTC (permalink / raw)
To: hch, iommu; +Cc: xen-devel, boris.ostrovsky, sstabellini
On 11/05/2022 13:58, Christoph Hellwig wrote:
> For historical reasons the swiotlb code panicked when the metadata could
> not be allocated, but just printed a warning when the actual main
> swiotlb buffer could not be allocated. Restore this somewhat unexpected
> behavior as changing it caused a boot failure on the Microchip RISC-V
> PolarFire SoC Icicle kit.
>
> Fixes: 6424e31b1c05 ("swiotlb: remove swiotlb_init_with_tbl and swiotlb_init_late_with_tbl")
> Reported-by: Conor Dooley <Conor.Dooley@microchip.com>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Tested-by: Conor Dooley <Conor.Dooley@microchip.com>
FWIW:
Acked-by: Conor Dooley <conor.dooley@microchip.com>
> ---
> kernel/dma/swiotlb.c | 6 ++++--
> 1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index e2ef0864eb1e5..3e992a308c8a1 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -254,8 +254,10 @@ void __init swiotlb_init_remap(bool addressing_limit, unsigned int flags,
> tlb = memblock_alloc(bytes, PAGE_SIZE);
> else
> tlb = memblock_alloc_low(bytes, PAGE_SIZE);
> - if (!tlb)
> - panic("%s: failed to allocate tlb structure\n", __func__);
> + if (!tlb) {
> + pr_warn("%s: failed to allocate tlb structure\n", __func__);
> + return;
> + }
>
> if (remap && remap(tlb, nslabs) < 0) {
> memblock_free(tlb, PAGE_ALIGN(bytes));
> --
> 2.30.2
>