* [PATCH] swiotlb: check dynamically allocated TLB address before decrypting
@ 2023-10-26 9:51 Petr Tesarik
From: Petr Tesarik @ 2023-10-26 9:51 UTC (permalink / raw)
To: Christoph Hellwig, Marek Szyprowski, Robin Murphy,
Greg Kroah-Hartman, Petr Tesarik, open list:DMA MAPPING HELPERS,
open list
Cc: Wangkefeng, Roberto Sassu, petr
From: Petr Tesarik <petr.tesarik1@huawei-partners.com>
Do not decrypt a dynamically allocated TLB area until its physical address
is known to be below the requested limit. Currently, pages are allocated
and decrypted, but then they may be freed while still decrypted if
swiotlb_alloc_tlb() determines that the physical address is too high.
Let the caller differentiate between unsuitable physical address (=> retry
from a lower zone) and allocation failures (=> no point in retrying).
Fixes: 79636caad361 ("swiotlb: if swiotlb is full, fall back to a transient memory pool")
Signed-off-by: Petr Tesarik <petr.tesarik1@huawei-partners.com>
---
kernel/dma/swiotlb.c | 20 +++++++++++---------
1 file changed, 11 insertions(+), 9 deletions(-)
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index dff067bd56b1..d1118f6f61b8 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -558,30 +558,36 @@ void __init swiotlb_exit(void)
  * alloc_dma_pages() - allocate pages to be used for DMA
  * @gfp:	GFP flags for the allocation.
  * @bytes:	Size of the buffer.
+ * @phys_limit:	Maximum allowed physical address of the buffer.
  *
  * Allocate pages from the buddy allocator. If successful, make the allocated
  * pages decrypted that they can be used for DMA.
  *
- * Return: Decrypted pages, or %NULL on failure.
+ * Return: Decrypted pages, %NULL on allocation failure, or ERR_PTR(-EAGAIN)
+ * if the allocated physical address was above @phys_limit.
  */
-static struct page *alloc_dma_pages(gfp_t gfp, size_t bytes)
+static struct page *alloc_dma_pages(gfp_t gfp, size_t bytes, u64 phys_limit)
 {
 	unsigned int order = get_order(bytes);
 	struct page *page;
+	phys_addr_t paddr;
 	void *vaddr;
 
 	page = alloc_pages(gfp, order);
 	if (!page)
 		return NULL;
 
-	vaddr = page_address(page);
+	paddr = page_to_phys(page);
+	if (paddr + bytes - 1 > phys_limit)
+		goto error;
+	vaddr = phys_to_virt(paddr);
 	if (set_memory_decrypted((unsigned long)vaddr, PFN_UP(bytes)))
 		goto error;
 	return page;
 
 error:
 	__free_pages(page, order);
-	return NULL;
+	return ERR_PTR(-EAGAIN);
 }
 
 /**
@@ -618,11 +624,7 @@ static struct page *swiotlb_alloc_tlb(struct device *dev, size_t bytes,
 	else if (phys_limit <= DMA_BIT_MASK(32))
 		gfp |= __GFP_DMA32;
 
-	while ((page = alloc_dma_pages(gfp, bytes)) &&
-	       page_to_phys(page) + bytes - 1 > phys_limit) {
-		/* allocated, but too high */
-		__free_pages(page, get_order(bytes));
-
+	while (IS_ERR(page = alloc_dma_pages(gfp, bytes, phys_limit))) {
 		if (IS_ENABLED(CONFIG_ZONE_DMA32) &&
 		    phys_limit < DMA_BIT_MASK(64) &&
 		    !(gfp & (__GFP_DMA32 | __GFP_DMA)))
--
2.42.0
* Re: [PATCH] swiotlb: check dynamically allocated TLB address before decrypting
2023-10-26 9:51 [PATCH] swiotlb: check dynamically allocated TLB address before decrypting Petr Tesarik
@ 2023-10-28 17:24 ` Petr Tesařík
From: Petr Tesařík @ 2023-10-28 17:24 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Petr Tesarik, Marek Szyprowski, Robin Murphy, Greg Kroah-Hartman,
Petr Tesarik, open list:DMA MAPPING HELPERS, open list,
Wangkefeng, Roberto Sassu
Hi Christoph,
On Thu, 26 Oct 2023 11:51:23 +0200
Petr Tesarik <petrtesarik@huaweicloud.com> wrote:
> From: Petr Tesarik <petr.tesarik1@huawei-partners.com>
>
> Do not decrypt a dynamically allocated TLB area until its physical address
> is known to be below the requested limit. Currently, pages are allocated
> and decrypted, but then they may be freed while still decrypted if
> swiotlb_alloc_tlb() determines that the physical address is too high.
>
> Let the caller differentiate between unsuitable physical address (=> retry
> from a lower zone) and allocation failures (=> no point in retrying).
>
> Fixes: 79636caad361 ("swiotlb: if swiotlb is full, fall back to a transient memory pool")
> Signed-off-by: Petr Tesarik <petr.tesarik1@huawei-partners.com>
This patch fixes a potential information leak from a CoCo VM, so I
would like to see it reviewed. Is this on your radar, or did it fall
through the cracks?
Petr T
> ---
>  kernel/dma/swiotlb.c | 20 +++++++++++---------
>  1 file changed, 11 insertions(+), 9 deletions(-)
> 
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index dff067bd56b1..d1118f6f61b8 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -558,30 +558,36 @@ void __init swiotlb_exit(void)
>   * alloc_dma_pages() - allocate pages to be used for DMA
>   * @gfp:	GFP flags for the allocation.
>   * @bytes:	Size of the buffer.
> + * @phys_limit:	Maximum allowed physical address of the buffer.
>   *
>   * Allocate pages from the buddy allocator. If successful, make the allocated
>   * pages decrypted that they can be used for DMA.
>   *
> - * Return: Decrypted pages, or %NULL on failure.
> + * Return: Decrypted pages, %NULL on allocation failure, or ERR_PTR(-EAGAIN)
> + * if the allocated physical address was above @phys_limit.
>   */
> -static struct page *alloc_dma_pages(gfp_t gfp, size_t bytes)
> +static struct page *alloc_dma_pages(gfp_t gfp, size_t bytes, u64 phys_limit)
>  {
>  	unsigned int order = get_order(bytes);
>  	struct page *page;
> +	phys_addr_t paddr;
>  	void *vaddr;
> 
>  	page = alloc_pages(gfp, order);
>  	if (!page)
>  		return NULL;
> 
> -	vaddr = page_address(page);
> +	paddr = page_to_phys(page);
> +	if (paddr + bytes - 1 > phys_limit)
> +		goto error;
> +	vaddr = phys_to_virt(paddr);
>  	if (set_memory_decrypted((unsigned long)vaddr, PFN_UP(bytes)))
>  		goto error;
>  	return page;
> 
>  error:
>  	__free_pages(page, order);
> -	return NULL;
> +	return ERR_PTR(-EAGAIN);
>  }
> 
>  /**
> @@ -618,11 +624,7 @@ static struct page *swiotlb_alloc_tlb(struct device *dev, size_t bytes,
>  	else if (phys_limit <= DMA_BIT_MASK(32))
>  		gfp |= __GFP_DMA32;
> 
> -	while ((page = alloc_dma_pages(gfp, bytes)) &&
> -	       page_to_phys(page) + bytes - 1 > phys_limit) {
> -		/* allocated, but too high */
> -		__free_pages(page, get_order(bytes));
> -
> +	while (IS_ERR(page = alloc_dma_pages(gfp, bytes, phys_limit))) {
>  		if (IS_ENABLED(CONFIG_ZONE_DMA32) &&
>  		    phys_limit < DMA_BIT_MASK(64) &&
>  		    !(gfp & (__GFP_DMA32 | __GFP_DMA)))
* Re: [PATCH] swiotlb: check dynamically allocated TLB address before decrypting
2023-10-28 17:24 ` Petr Tesařík
@ 2023-10-30 13:31 ` Christoph Hellwig
From: Christoph Hellwig @ 2023-10-30 13:31 UTC (permalink / raw)
To: Petr Tesařík
Cc: Christoph Hellwig, Petr Tesarik, Marek Szyprowski, Robin Murphy,
Greg Kroah-Hartman, Petr Tesarik, open list:DMA MAPPING HELPERS,
open list, Wangkefeng, Roberto Sassu
I'm trying to review it properly this week. It was definitely too big
to just rush into 6.6 in the last few days.
* Re: [PATCH] swiotlb: check dynamically allocated TLB address before decrypting
2023-10-30 13:31 ` Christoph Hellwig
@ 2023-10-30 14:09 ` Petr Tesařík
From: Petr Tesařík @ 2023-10-30 14:09 UTC (permalink / raw)
To: Christoph Hellwig
Cc: Petr Tesarik, Marek Szyprowski, Robin Murphy, Greg Kroah-Hartman,
Petr Tesarik, open list:DMA MAPPING HELPERS, open list,
Wangkefeng, Roberto Sassu, Michael Kelley
Hi Christoph,
On Mon, 30 Oct 2023 14:31:12 +0100
Christoph Hellwig <hch@lst.de> wrote:
> I'm trying to review it properly this week. It was definitely too big
> to just rush into 6.6 in the last few days.
Thank you for the answer. This is OK. Let me give a bit of background.
The bug was reported to me by Michael Kelley while I had temporarily
lost access to my @huaweicloud.com mailbox. I was then unable to credit
him with a Reported-by: tag, because the report came in a private email,
which cannot be referenced by a Closes: tag.
Anyway, Michael explained in that private email that the threat is more
or less theoretical, because environments where set_memory_decrypted()
actually does something are unlikely to have physical address
constraints for the bounce buffer.
But maybe we should add a CC: stable@vger.kernel.org nevertheless.
Petr T