* large DMA segments vs SWIOTLB
From: Lucas Stach @ 2019-07-31 14:40 UTC (permalink / raw)
To: Christoph Hellwig, Konrad Rzeszutek Wilk; +Cc: iommu
Hi all,
I'm currently looking at an issue with an NVMe device, which isn't
working properly under some specific conditions.
The issue comes down to my platform having DMA addressing restrictions,
with only 3 of the total 4GiB of RAM being device addressable, which
means a bunch of DMA mappings are going through the SWIOTLB.
Now with this NVMe device I'm getting a request with a ~520KiB data
payload. System memory isn't heavily fragmented at that point, so the
payload gets mapped as a single DMA segment in nvme_map_data(). Due to
the addressing restrictions the mapping goes through SWIOTLB, which is
unable to satisfy it: plenty of TLB space is available, but the request
exceeds the maximum segment size SWIOTLB imposes. A SWIOTLB slab is
currently 2KiB (IO_TLB_SHIFT) and the maximum segment size is
IO_TLB_SEGSIZE = 128 slabs, i.e. 256KiB. The DMA mapping therefore
fails, and the block layer retries the request indefinitely.
Now I can work around the issue at hand simply by bumping
IO_TLB_SEGSIZE to 512, but this doesn't seem like a very robust
solution.
Do we need a SWIOTLB allocator that doesn't exhibit linear complexity
with the maximum segment size? Some buddy scheme maybe? Splitting the
dma segment doesn't seem to be an option, as the documentation states
that dma_map_sg may return fewer segments as a result of the mapping
operation, not more. I'm not sure how far this assumption is ingrained
into the users of the API.
Regards,
Lucas
_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu
* Re: large DMA segments vs SWIOTLB
From: Christoph Hellwig @ 2019-08-01 7:29 UTC (permalink / raw)
To: Lucas Stach; +Cc: iommu, Christoph Hellwig, Konrad Rzeszutek Wilk
Hi Lucas,
have you tried the latest 5.3-rc kernel, where we limited the NVMe
I/O size based on the swiotlb buffer size?
* Re: large DMA segments vs SWIOTLB
From: Lucas Stach @ 2019-08-01 8:35 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: iommu, Konrad Rzeszutek Wilk
Hi Christoph,
Am Donnerstag, den 01.08.2019, 09:29 +0200 schrieb Christoph Hellwig:
> Hi Lucas,
>
> have you tried the latest 5.3-rc kernel, where we limited the NVMe
> I/O size based on the swiotlb buffer size?
Yes, the issue was reproduced on 5.3-rc2. I now see your commit
limiting the request size, so I guess I need to dig in to see why I'm
still getting requests larger than the SWIOTLB max segment size. Thanks
for the pointer!
Regards,
Lucas
* Re: large DMA segments vs SWIOTLB
From: Christoph Hellwig @ 2019-08-01 14:00 UTC (permalink / raw)
To: Lucas Stach; +Cc: iommu, Christoph Hellwig, Konrad Rzeszutek Wilk
On Thu, Aug 01, 2019 at 10:35:02AM +0200, Lucas Stach wrote:
> Hi Christoph,
>
> Am Donnerstag, den 01.08.2019, 09:29 +0200 schrieb Christoph Hellwig:
> > Hi Lucas,
> >
> > have you tried the latest 5.3-rc kernel, where we limited the NVMe
> > I/O size based on the swiotlb buffer size?
>
> Yes, the issue was reproduced on 5.3-rc2. I now see your commit
> limiting the request size, so I guess I need to dig in to see why I'm
> still getting requests larger than the SWIOTLB max segment size. Thanks
> for the pointer!
In a similar setup to yours dma_addressing_limited() doesn't trigger,
but if we change it to a <= it does. The result is counter to what I'd
expect, but as I'm on vacation I haven't had time to look into why it
works. This is his patch; let me know if it works for you:
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index f7d1eea32c78..89ac1cf754cc 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -689,7 +689,7 @@ static inline int dma_coerce_mask_and_coherent(struct device *dev, u64 mask)
*/
static inline bool dma_addressing_limited(struct device *dev)
{
- return min_not_zero(dma_get_mask(dev), dev->bus_dma_mask) <
+ return min_not_zero(dma_get_mask(dev), dev->bus_dma_mask) <=
dma_get_required_mask(dev);
}
* Re: large DMA segments vs SWIOTLB
From: Lucas Stach @ 2019-08-05 15:56 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: iommu, Konrad Rzeszutek Wilk
Hi Christoph,
Am Donnerstag, den 01.08.2019, 16:00 +0200 schrieb Christoph Hellwig:
> On Thu, Aug 01, 2019 at 10:35:02AM +0200, Lucas Stach wrote:
> > Hi Christoph,
> >
> > Am Donnerstag, den 01.08.2019, 09:29 +0200 schrieb Christoph Hellwig:
> > > Hi Lucas,
> > >
> > > have you tried the latest 5.3-rc kernel, where we limited the NVMe
> > > I/O size based on the swiotlb buffer size?
> >
> > Yes, the issue was reproduced on 5.3-rc2. I now see your commit
> > limiting the request size, so I guess I need to dig in to see why I'm
> > still getting requests larger than the SWIOTLB max segment size. Thanks
> > for the pointer!
>
> In a similar setup to yours dma_addressing_limited() doesn't trigger,
> but if we change it to a <= it does. The result is counter to what I'd
> expect, but as I'm on vacation I haven't had time to look into why it
> works. This is his patch; let me know if it works for you:
>
>
> diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
> index f7d1eea32c78..89ac1cf754cc 100644
> --- a/include/linux/dma-mapping.h
> +++ b/include/linux/dma-mapping.h
> @@ -689,7 +689,7 @@ static inline int dma_coerce_mask_and_coherent(struct device *dev, u64 mask)
> */
> static inline bool dma_addressing_limited(struct device *dev)
> {
> - return min_not_zero(dma_get_mask(dev), dev->bus_dma_mask) <
> + return min_not_zero(dma_get_mask(dev), dev->bus_dma_mask) <=
> dma_get_required_mask(dev);
> }
From the patch I just sent it should be clear why the above works. With
my patch applied I can't reproduce any issues with this NVMe device
anymore.
Thanks for pointing me in the right direction!
Regards,
Lucas