From: Halil Pasic <pasic@linux.ibm.com>
To: "Petr Tesařík" <petr@tesarici.cz>
Cc: Niklas Schnelle <schnelle@linux.ibm.com>,
Christoph Hellwig <hch@lst.de>,
Bjorn Helgaas <bhelgaas@google.com>,
Marek Szyprowski <m.szyprowski@samsung.com>,
Robin Murphy <robin.murphy@arm.com>,
Petr Tesarik <petr.tesarik1@huawei-partners.com>,
Ross Lagerwall <ross.lagerwall@citrix.com>,
linux-pci <linux-pci@vger.kernel.org>,
linux-kernel@vger.kernel.org, iommu@lists.linux.dev,
Matthew Rosato <mjrosato@linux.ibm.com>,
Halil Pasic <pasic@linux.ibm.com>
Subject: Re: Memory corruption with CONFIG_SWIOTLB_DYNAMIC=y
Date: Wed, 8 Nov 2023 11:52:07 +0100
Message-ID: <20231108115207.791a30d8.pasic@linux.ibm.com>
In-Reply-To: <20231103195949.0af884d0@meshulam.tesarici.cz>
On Fri, 3 Nov 2023 19:59:49 +0100
Petr Tesařík <petr@tesarici.cz> wrote:
> > Not sure how to properly fix this as the different alignment
> > requirements get pretty complex quickly. So would appreciate your
> > input.
>
> I don't think it's possible to improve the allocation logic without
> modifying the page allocator and/or the DMA atomic pool allocator to
> take additional constraints into account.
I don't understand. What speaks against calculating the amount of space
needed so that, even with the alignment waste, the bounce buffer still
fits in the pool?
I believe alloc_size + combined_mask is a trivial upper bound, but we can
do slightly better since we know that we allocate pages.
For the sake of simplicity, let us assume we only have the min_align_mask
requirement. Then I believe the worst case is that we need
(orig_addr & min_align_mask & PAGE_MASK) + (min_align_mask & ~PAGE_MASK)
extra space to fit.
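
To make the arithmetic concrete, here is a minimal sketch of that bound.
The helper name and its exact types are made up for illustration; only
PAGE_MASK, min() and the mask semantics discussed above are assumed, so
treat it as a sketch rather than existing swiotlb code:

#include <linux/mm.h>		/* PAGE_MASK, basic types */
#include <linux/minmax.h>	/* min() */

/*
 * Sketch only: upper bound on the space needed so that a bounce buffer
 * of alloc_size bytes can start at an offset matching orig_addr under
 * min_align_mask.  The function name is hypothetical.
 */
static size_t padded_alloc_size(size_t alloc_size, phys_addr_t orig_addr,
				unsigned long min_align_mask)
{
	/* Trivial upper bound: pad by the whole mask. */
	size_t trivial = alloc_size + min_align_mask;

	/*
	 * A page-backed allocation starts page aligned, so only the
	 * sub-page bits of the mask plus the page-granular bits of
	 * orig_addr selected by the mask can force extra padding.
	 */
	size_t extra = (orig_addr & min_align_mask & PAGE_MASK) +
		       (min_align_mask & ~PAGE_MASK);

	return min(trivial, alloc_size + extra);
}
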
Depending on how the semantics pan out, one may be able to replace
min_align_mask with combined_mask.
Is your point that, for large combined_mask values,
__get_free_pages(GFP_NOWAIT | __GFP_NOWARN, required_order) is unlikely
to succeed?
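
(For reference, the allocation I have in mind would look roughly like the
sketch below; the wrapper and required_order are illustrative names, while
get_order() and __get_free_pages() are the real kernel interfaces.)

#include <linux/gfp.h>	/* __get_free_pages(), GFP_NOWAIT, __GFP_NOWARN */
#include <linux/mm.h>	/* get_order() */

/*
 * Illustrative only: with a large combined_mask the padded size, and
 * hence the page order, grows quickly, and high-order GFP_NOWAIT
 * allocations tend to fail once memory is fragmented.
 */
static unsigned long alloc_padded_buffer(size_t alloc_size,
					 unsigned long combined_mask)
{
	unsigned int required_order = get_order(alloc_size + combined_mask);

	return __get_free_pages(GFP_NOWAIT | __GFP_NOWARN, required_order);
}
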
Regards,
Halil
Thread overview: 20+ messages
2023-11-03 15:13 Memory corruption with CONFIG_SWIOTLB_DYNAMIC=y Niklas Schnelle
2023-11-03 16:14 ` Halil Pasic
2023-11-03 20:50 ` Petr Tesařík
2023-11-06 7:42 ` Christoph Hellwig
2023-11-07 17:24 ` Halil Pasic
2023-11-08 7:30 ` Christoph Hellwig
2023-11-06 10:08 ` Halil Pasic
2023-11-07 17:24 ` Halil Pasic
2023-11-08 9:13 ` Petr Tesařík
2023-11-23 10:16 ` Petr Tesařík
2023-11-27 15:59 ` Christoph Hellwig
2023-11-28 7:16 ` Petr Tesařík
2023-11-03 18:59 ` Petr Tesařík
2023-11-06 7:44 ` Christoph Hellwig
2023-11-06 12:46 ` Petr Tesarik
2023-11-08 10:52 ` Halil Pasic [this message]
2023-11-08 11:04 ` Petr Tesarik
2023-11-08 14:32 ` Halil Pasic
2023-11-08 14:45 ` Petr Tesarik
2023-11-10 9:22 ` Halil Pasic