From: John Garry <john.g.garry@oracle.com>
To: Robin Murphy <robin.murphy@arm.com>, joro@8bytes.org
Cc: will@kernel.org, iommu@lists.linux.dev,
	linux-kernel@vger.kernel.org, zhangzekun11@huawei.com
Subject: Re: [PATCH 0/2] iommu/iova: Make the rcache depot properly flexible
Date: Tue, 15 Aug 2023 11:24:53 +0100
Message-ID: <80fb865a-eb45-e783-277d-0d2e044c28f5@oracle.com>
In-Reply-To: <cover.1692033783.git.robin.murphy@arm.com>

On 14/08/2023 18:53, Robin Murphy wrote:
> Hi all,
> 

Hi Robin,

> Prompted by [1], which reminded me I started this a while ago, I've now
> finished off my own attempt at sorting out the horrid lack of rcache
> scalability. It's become quite clear that given the vast range of system
> sizes and workloads there is no right size for a fixed depot array, so I
> reckon we're better off not having one at all.
> 
> Note that the reclaim threshold and rate are chosen fairly arbitrarily -

This threshold is the number of online CPUs, right?

> it's enough of a challenge to get my 4-core dev board with spinning disk
> and gigabit ethernet to push anything into a depot at all :)
> 

I have to admit that I was hoping to see a more aggressive reclaim 
strategy, one which also trims the per-CPU rcaches when they are not in 
use. Leizhen proposed something like this a long time ago.

Thanks,
John

> Thanks,
> Robin.
> 
> [1] https://lore.kernel.org/linux-iommu/20230811130246.42719-1-zhangzekun11@huawei.com
> 
> 
> Robin Murphy (2):
>    iommu/iova: Make the rcache depot scale better
>    iommu/iova: Manage the depot list size
> 
>   drivers/iommu/iova.c | 94 ++++++++++++++++++++++++++++++--------------
>   1 file changed, 65 insertions(+), 29 deletions(-)
> 



Thread overview: 17+ messages
2023-08-14 17:53 [PATCH 0/2] iommu/iova: Make the rcache depot properly flexible Robin Murphy
2023-08-14 17:53 ` [PATCH 1/2] iommu/iova: Make the rcache depot scale better Robin Murphy
2023-08-21  8:11   ` Srivastava, Dheeraj Kumar
2023-08-21  8:55     ` Robin Murphy
2023-08-21  9:03       ` Srivastava, Dheeraj Kumar
2023-08-21 12:02   ` John Garry
2023-08-21 12:28     ` Robin Murphy
2023-08-14 17:53 ` [PATCH 2/2] iommu/iova: Manage the depot list size Robin Murphy
2023-08-15 14:11   ` zhangzekun (A)
2023-08-16  4:25     ` Jerry Snitselaar
2023-08-16 16:52     ` Robin Murphy
2023-08-15 10:24 ` John Garry [this message]
2023-08-15 11:11   ` [PATCH 0/2] iommu/iova: Make the rcache depot properly flexible Robin Murphy
2023-08-15 13:35     ` John Garry
2023-08-16 15:10       ` Robin Murphy
2023-08-21 11:35         ` John Garry
2023-08-17 16:39 ` Jerry Snitselaar
