Linux IOMMU Development
From: Lucas Stach <l.stach@pengutronix.de>
To: Christoph Hellwig <hch@lst.de>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: iommu@lists.linux-foundation.org
Subject: large DMA segments vs SWIOTLB
Date: Wed, 31 Jul 2019 16:40:29 +0200	[thread overview]
Message-ID: <1564584029.7267.15.camel@pengutronix.de> (raw)

Hi all,

I'm currently looking at an issue with an NVMe device, which isn't
working properly under some specific conditions.

The issue comes down to my platform having DMA addressing restrictions,
with only 3 of the total 4GiB of RAM being device addressable, which
means a bunch of DMA mappings are going through the SWIOTLB.

Now with this NVMe device I'm getting a request with a ~520 KiB data
payload. The system memory isn't heavily fragmented at that point yet,
so the payload gets mapped as a single DMA segment in nvme_map_data().

Due to the addressing restrictions the request is passed to SWIOTLB,
which is unable to satisfy the mapping request, even though plenty of
TLB space is available, because of the maximum segment size SWIOTLB
imposes: a SWIOTLB slab is currently 2 KiB (IO_TLB_SHIFT), and the
maximum segment size is IO_TLB_SEGSIZE = 128 slabs, i.e. 256 KiB. The
DMA mapping therefore fails, and the block layer retries the request
indefinitely.

Now I can work around the issue at hand simply by bumping
IO_TLB_SEGSIZE to 512, but this doesn't seem like a very robust
solution.

Do we need a SWIOTLB allocator that doesn't exhibit linear complexity
in the maximum segment size? Some buddy scheme, maybe? Splitting the
DMA segment doesn't seem to be an option, as the documentation states
that dma_map_sg() may return fewer segments as a result of the mapping
operation, not more. I'm not sure how deeply this assumption is
ingrained into the users of the API.

Regards,
Lucas




Thread overview: 5+ messages
2019-07-31 14:40 Lucas Stach [this message]
2019-08-01  7:29 ` large DMA segments vs SWIOTLB Christoph Hellwig
2019-08-01  8:35   ` Lucas Stach
2019-08-01 14:00     ` Christoph Hellwig
2019-08-05 15:56       ` Lucas Stach
