From: Jason Gunthorpe <jgg@nvidia.com>
To: Tomasz Jeznach <tjeznach@rivosinc.com>
Cc: Alim Akhtar <alim.akhtar@samsung.com>,
	Alyssa Rosenzweig <alyssa@rosenzweig.io>,
	Albert Ou <aou@eecs.berkeley.edu>,
	asahi@lists.linux.dev, Lu Baolu <baolu.lu@linux.intel.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Heiko Stuebner <heiko@sntech.de>,
	iommu@lists.linux.dev, Jernej Skrabec <jernej.skrabec@gmail.com>,
	Jonathan Hunter <jonathanh@nvidia.com>,
	Joerg Roedel <joro@8bytes.org>,
	Krzysztof Kozlowski <krzk@kernel.org>,
	linux-arm-kernel@lists.infradead.org,
	linux-riscv@lists.infradead.org,
	linux-rockchip@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org, linux-sunxi@lists.linux.dev,
	linux-tegra@vger.kernel.org,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	Hector Martin <marcan@marcan.st>,
	Palmer Dabbelt <palmer@dabbelt.com>,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Robin Murphy <robin.murphy@arm.com>,
	Samuel Holland <samuel@sholland.org>,
	Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
	Sven Peter <sven@svenpeter.dev>,
	Thierry Reding <thierry.reding@gmail.com>,
	Krishna Reddy <vdumpa@nvidia.com>, Chen-Yu Tsai <wens@csie.org>,
	Will Deacon <will@kernel.org>,
	Bagas Sanjaya <bagasdotme@gmail.com>,
	Joerg Roedel <jroedel@suse.de>,
	Pasha Tatashin <pasha.tatashin@soleen.com>,
	patches@lists.linux.dev, David Rientjes <rientjes@google.com>,
	Matthew Wilcox <willy@infradead.org>
Subject: Re: [PATCH 17/19] iommu/riscv: Update to use iommu_alloc_pages_node_lg2()
Date: Thu, 6 Feb 2025 09:17:21 -0400
Message-ID: <20250206131721.GF2960738@nvidia.com>
In-Reply-To: <Z6RI3ftJTrm3UxoO@tjeznach.ba.rivosinc.com>

On Wed, Feb 05, 2025 at 09:30:05PM -0800, Tomasz Jeznach wrote:
> > @@ -161,9 +163,8 @@ static int riscv_iommu_queue_alloc(struct riscv_iommu_device *iommu,
> >  	} else {
> >  		do {
> >  			const size_t queue_size = entry_size << (logsz + 1);
> > -			const int order = get_order(queue_size);
> >  
> > -			queue->base = riscv_iommu_get_pages(iommu, order);
> > +			queue->base = riscv_iommu_get_pages(iommu, queue_size);
> >  			queue->phys = __pa(queue->base);
> 
> All allocations must be 4k aligned, including sub-page allocs.

Oh weird, so it requires 4k alignment but the HW can refuse to support
a 4k queue length?

I changed it to this:

+                       queue->base = riscv_iommu_get_pages(
+                               iommu, max(queue_size, SZ_4K));

> >  		} while (!queue->base && logsz-- > 0);
> >  	}
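
Spelled out against the quoted hunk, the retry loop then ends up roughly
like this (just a sketch combining the quoted context with the clamp, not
the exact v2 diff):

	do {
		const size_t queue_size = entry_size << (logsz + 1);

		/*
		 * The HW wants 4k alignment even for sub-page queues, so
		 * never ask the allocator for less than 4k.
		 */
		queue->base = riscv_iommu_get_pages(iommu,
						    max(queue_size, SZ_4K));
		queue->phys = __pa(queue->base);
	} while (!queue->base && logsz-- > 0);
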
> > @@ -618,7 +619,7 @@ static struct riscv_iommu_dc *riscv_iommu_get_dc(struct riscv_iommu_device *iomm
> >  				break;
> >  			}
> >  
> > -			ptr = riscv_iommu_get_pages(iommu, 0);
> > +			ptr = riscv_iommu_get_pages(iommu, PAGE_SIZE);
> >  			if (!ptr)
> >  				return NULL;
> >  
> > @@ -698,7 +699,7 @@ static int riscv_iommu_iodir_alloc(struct riscv_iommu_device *iommu)
> >  	}
> >  
> >  	if (!iommu->ddt_root) {
> > -		iommu->ddt_root = riscv_iommu_get_pages(iommu, 0);
> > +		iommu->ddt_root = riscv_iommu_get_pages(iommu, PAGE_SIZE);
> >  		iommu->ddt_phys = __pa(iommu->ddt_root);
> >  	}

Should these be SZ_4K as well, or PAGE_SIZE?
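
I.e. the two variants in question, as a sketch against the call sites in
the hunks above (which one is right is exactly what I'm asking):

	/* Variant A: always ask for 4k, if the HW table size is fixed at 4k */
	iommu->ddt_root = riscv_iommu_get_pages(iommu, SZ_4K);

	/* Variant B: follow the kernel page size, as this patch does now */
	iommu->ddt_root = riscv_iommu_get_pages(iommu, PAGE_SIZE);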

Thanks,
Jason


Thread overview: 34+ messages
2025-02-04 18:34 [PATCH 00/19] iommu: Further abstract iommu-pages Jason Gunthorpe
2025-02-04 18:34 ` [PATCH 01/19] iommu/terga: Do not use struct page as the handle for as->pd memory Jason Gunthorpe
2025-02-04 18:34 ` [PATCH 02/19] iommu/tegra: Do not use struct page as the handle for pts Jason Gunthorpe
2025-02-05 19:28   ` Robin Murphy
2025-02-06 17:48     ` Jason Gunthorpe
2025-02-04 18:34 ` [PATCH 03/19] iommu/pages: Remove __iommu_alloc_pages()/__iommu_free_pages() Jason Gunthorpe
2025-02-04 18:34 ` [PATCH 04/19] iommu/pages: Make iommu_put_pages_list() work with high order allocations Jason Gunthorpe
2025-02-04 18:34 ` [PATCH 05/19] iommu/pages: Replace iommu_free_pages() with iommu_free_page() Jason Gunthorpe
2025-02-05 15:55   ` Robin Murphy
2025-02-05 17:41     ` Jason Gunthorpe
2025-02-04 18:34 ` [PATCH 06/19] iommu/pages: De-inline the substantial functions Jason Gunthorpe
2025-02-04 18:34 ` [PATCH 07/19] iommu/vtd: Use virt_to_phys() Jason Gunthorpe
2025-02-04 18:34 ` [PATCH 08/19] iommu/pages: Formalize the freelist API Jason Gunthorpe
2025-02-04 18:34 ` [PATCH 09/19] iommu/riscv: Convert to use struct iommu_pages_list Jason Gunthorpe
2025-02-06  5:53   ` Tomasz Jeznach
2025-02-04 18:34 ` [PATCH 10/19] iommu/amd: " Jason Gunthorpe
2025-02-04 18:34 ` [PATCH 11/19] iommu: Change iommu_iotlb_gather to use iommu_page_list Jason Gunthorpe
2025-02-04 18:34 ` [PATCH 12/19] iommu/pages: Remove iommu_put_pages_list_old and the _Generic Jason Gunthorpe
2025-02-04 18:34 ` [PATCH 13/19] iommu/pages: Move from struct page to struct ioptdesc and folio Jason Gunthorpe
2025-02-04 18:34 ` [PATCH 14/19] iommu/pages: Move the __GFP_HIGHMEM checks into the common code Jason Gunthorpe
2025-02-04 18:34 ` [PATCH 15/19] iommu/pages: Allow sub page sizes to be passed into the allocator Jason Gunthorpe
2025-02-04 18:34 ` [PATCH 16/19] iommu/amd: Use roundup_pow_two() instead of get_order() Jason Gunthorpe
2025-02-05 16:11   ` Robin Murphy
2025-02-05 17:59     ` Jason Gunthorpe
2025-02-04 18:34 ` [PATCH 17/19] iommu/riscv: Update to use iommu_alloc_pages_node_lg2() Jason Gunthorpe
2025-02-06  5:30   ` Tomasz Jeznach
2025-02-06 13:17     ` Jason Gunthorpe [this message]
2025-02-06 17:54       ` Tomasz Jeznach
2025-02-04 18:34 ` [PATCH 18/19] iommu: Update various drivers to pass in lg2sz instead of order to iommu pages Jason Gunthorpe
2025-02-05 15:47   ` Robin Murphy
2025-02-05 16:10     ` Jason Gunthorpe
2025-02-05 18:03       ` Robin Murphy
2025-02-05 18:39         ` Jason Gunthorpe
2025-02-04 18:35 ` [PATCH 19/19] iommu/pages: Remove iommu_alloc_page/pages() Jason Gunthorpe
