Linux IOMMU Development
From: John Garry <john.g.garry@oracle.com>
To: Robin Murphy <robin.murphy@arm.com>,
	Jakub Kicinski <kuba@kernel.org>, Joerg Roedel <joro@8bytes.org>
Cc: will@kernel.org, iommu@lists.linux.dev,
	linux-kernel@vger.kernel.org,
	Linus Torvalds <torvalds@linux-foundation.org>
Subject: Re: [PATCH v4] iommu: Optimise PCI SAC address trick
Date: Thu, 15 Jun 2023 13:15:07 +0100	[thread overview]
Message-ID: <f17ab40c-d5c6-e31b-7877-7452b612505c@oracle.com> (raw)
In-Reply-To: <99c1e8ab-a064-c770-072f-23ef9e9abb82@arm.com>

On 15/06/2023 12:41, Robin Murphy wrote:
>>
>> Sure, not the same problem.
>>
>> However, when we switched storage drivers to use dma_opt_mapping_size(), 
>> performance was similar to iommu.forcedac=1 - that's what I found, 
>> anyway.
>>
>> This tells me that even though IOVA allocator performance is poor 
>> when the 32b space fills, it was the large IOVAs which don't fit in 
>> the rcache that were the major contributor to hogging the CPU in the 
>> allocator.
> 
> The root cause is that every time the last usable 32-bit IOVA is 
> allocated, the *next* PCI caller to hit the rbtree for a SAC allocation 
> is burdened with walking the whole 32-bit subtree to determine that it's 
> full again and re-set max32_alloc_size. That's the overhead that 
> forcedac avoids.
> 

Sure
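For anyone following along, the pattern Robin describes - cache the failure, then pay for a full walk again once the hint is invalidated by a free - can be modelled in miniature in user space. This is a toy sketch, not the kernel's code: the bitmap stands in for the 32-bit rbtree subtree, and all the toy_* names and sizes are invented for illustration.

```c
#include <stdbool.h>
#include <string.h>

#define SLOTS 1024  /* stand-in for the 32-bit IOVA subtree */

struct toy_iova_domain {
	bool used[SLOTS];
	unsigned long max32_alloc_size;	/* largest size known to still fit */
	unsigned long walks;		/* full scans performed (the cost) */
};

static void toy_init(struct toy_iova_domain *d)
{
	memset(d, 0, sizeof(*d));
	d->max32_alloc_size = SLOTS;
}

static long toy_alloc32(struct toy_iova_domain *d, unsigned long size)
{
	/* Fast path: a previous caller already proved this can't fit. */
	if (size > d->max32_alloc_size)
		return -1;

	d->walks++;
	for (unsigned long i = 0; i + size <= SLOTS; i++) {
		unsigned long j;

		for (j = 0; j < size; j++)
			if (d->used[i + j])
				break;
		if (j == size) {
			memset(&d->used[i], true, size);
			return (long)i;
		}
		i += j;	/* skip past the conflicting slot */
	}
	/* Whole space walked and nothing fits: cache that fact. */
	d->max32_alloc_size = size - 1;
	return -1;
}

static void toy_free(struct toy_iova_domain *d, long base, unsigned long size)
{
	memset(&d->used[base], false, size);
	/* Freeing invalidates the hint, so the next failing caller
	 * walks the whole space again - the overhead forcedac avoids. */
	d->max32_alloc_size = SLOTS;
}
```

Once the space is exhausted, repeated failing allocations return immediately off the cached hint; but every free resets it, so a steady free/alloc churn at the boundary keeps re-triggering the full walk.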

> In the storage case with larger buffers, dma_opt_mapping_size() also 
> means you spend less time in the rbtree, simply because you're 
> inherently hitting it less often, since most allocations can now 
> hopefully be fulfilled by the caches.

Sure
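Why capping mappings at dma_opt_mapping_size() keeps them cacheable can be sketched with toy constants. These are assumed to mirror the kernel's (an IOVA_RANGE_CACHE_MAX_SIZE of 6 size classes with 4K pages, giving a 128K ceiling); treat the exact numbers as assumptions, and the toy_* helpers as invented stand-ins:

```c
#include <stdbool.h>

#define TOY_PAGE_SIZE        4096UL
#define TOY_CACHE_MAX_ORDER  6	/* orders 0..5 are cached */

/* Largest mapping the rcache can serve, i.e. roughly what a
 * dma_opt_mapping_size()-style helper would report. */
static unsigned long toy_opt_mapping_size(void)
{
	return TOY_PAGE_SIZE << (TOY_CACHE_MAX_ORDER - 1);
}

/* Whether a mapping of @size bytes can be satisfied from the rcache
 * or must fall through to the rbtree slow path. */
static bool toy_fits_rcache(unsigned long size)
{
	unsigned long pages = (size + TOY_PAGE_SIZE - 1) / TOY_PAGE_SIZE;
	unsigned long order = 0;

	while ((1UL << order) < pages)
		order++;
	return order < TOY_CACHE_MAX_ORDER;
}
```

Anything at or below the reported size stays on the cached fast path; one byte more rounds up to the next order and misses the cache entirely.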

> That's obviously moot when the mappings are 
> already small enough to be cached and the only reason for hitting the 
> rbtree is overflow/underflow in the depot because the working set is 
> sufficiently large and the allocation pattern sufficiently "bursty".

After a bit of checking, this is the same issue as 
https://lore.kernel.org/linux-iommu/20230329181407.3eed7378@kernel.org/, 
and indeed we would always be using rcache'able-sized mappings.

So you think that we are hitting the depot-full case, where we start to 
free depot magazines in __iova_rcache_insert(), right? From my 
experience in storage testing, it takes a long time to get to this state.
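That depot-full path can also be sketched. This toy insert routine is loosely modelled on __iova_rcache_insert() but heavily simplified, with made-up magazine and depot sizes: frees only touch the slow path once both the loaded magazine and the bounded depot are full.

```c
#include <stdbool.h>
#include <string.h>

#define MAG_SIZE   4	/* entries per magazine (toy value) */
#define DEPOT_MAGS 2	/* global depot capacity (toy value) */

struct toy_mag {
	int nr;
	unsigned long pfns[MAG_SIZE];
};

struct toy_rcache {
	struct toy_mag loaded;			/* magazine being filled */
	struct toy_mag depot[DEPOT_MAGS];	/* global overflow depot */
	int depot_size;
	unsigned long rbtree_frees;	/* entries pushed back to the rbtree */
};

/* Cache a freed IOVA; returns false when a full magazine had to be
 * dumped back to the slow path because the depot was full. */
static bool toy_rcache_insert(struct toy_rcache *rc, unsigned long pfn)
{
	bool cached = true;

	if (rc->loaded.nr == MAG_SIZE) {
		if (rc->depot_size < DEPOT_MAGS) {
			/* Park the full magazine in the depot. */
			rc->depot[rc->depot_size++] = rc->loaded;
		} else {
			/* Depot full: dump entries to the slow path. */
			rc->rbtree_frees += rc->loaded.nr;
			cached = false;
		}
		rc->loaded.nr = 0;
	}
	rc->loaded.pfns[rc->loaded.nr++] = pfn;
	return cached;
}
```

With these toy sizes, the first spill only happens once the working set of outstanding frees exceeds the loaded magazine plus the whole depot, which matches the observation that a bursty, sufficiently large working set is needed to reach this state.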

Thanks,
John


Thread overview: 20+ messages
2023-04-13 13:40 [PATCH v4] iommu: Optimise PCI SAC address trick Robin Murphy
2023-04-13 14:02 ` Jakub Kicinski
2023-04-14 11:45 ` Joerg Roedel
2023-04-14 17:45   ` Robin Murphy
2023-05-23 16:06     ` Joerg Roedel
2023-05-24 14:56       ` Robin Murphy
2023-06-13 17:58   ` Jakub Kicinski
2023-06-15  7:49     ` John Garry
2023-06-15  9:04       ` Robin Murphy
2023-06-15 10:11         ` John Garry
2023-06-15 11:41           ` Robin Murphy
2023-06-15 12:15             ` John Garry [this message]
2023-04-18  9:23 ` Vasant Hegde
2023-04-18 10:19   ` John Garry
2023-04-18 17:36     ` Linus Torvalds
2023-04-18 18:50       ` John Garry
2023-04-18 10:57   ` Robin Murphy
2023-04-18 13:05     ` Vasant Hegde
2023-07-14 14:09 ` Joerg Roedel
2023-07-17  9:24   ` John Garry
