From: Baolu Lu <baolu.lu@linux.intel.com>
To: "Tian, Kevin" <kevin.tian@intel.com>,
	Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
	Robin Murphy <robin.murphy@arm.com>,
	Ioanna Alifieraki <ioanna-maria.alifieraki@canonical.com>
Cc: "iommu@lists.linux.dev" <iommu@lists.linux.dev>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"stable@vger.kernel.org" <stable@vger.kernel.org>
Subject: Re: [PATCH 1/1] iommu/vt-d: Optimize iotlb_sync_map for non-caching/non-RWBF modes
Date: Fri, 4 Jul 2025 09:25:51 +0800
Message-ID: <b95b6a35-e6b5-46df-9a5f-4cc7f4e823bb@linux.intel.com>
In-Reply-To: <BN9PR11MB5276F7B7E0F6091334A7A3128C43A@BN9PR11MB5276.namprd11.prod.outlook.com>

On 7/3/25 15:16, Tian, Kevin wrote:
>> From: Lu Baolu <baolu.lu@linux.intel.com>
>> Sent: Thursday, July 3, 2025 11:16 AM
>>
>> The iotlb_sync_map iommu op allows drivers to perform necessary cache
>> flushes when new mappings are established. For the Intel IOMMU driver,
>> this callback specifically serves two purposes:
>>
>> - To flush caches when a second-stage page table is attached to a device
>>   whose IOMMU is operating in caching mode (CAP_REG.CM==1).
>> - To explicitly flush internal write buffers to ensure that updates to
>>   memory-resident remapping structures are visible to hardware
>>   (CAP_REG.RWBF==1).
>>
>> However, in scenarios where neither caching mode nor the RWBF flag is
>> active, the cache_tag_flush_range_np() helper, which is called in the
>> iotlb_sync_map path, effectively becomes a no-op.
>>
>> Despite being a no-op, cache_tag_flush_range_np() still iterates through
>> all cache tags of the IOMMUs attached to the domain, protected by a
>> spinlock. This unnecessary execution path introduces overhead, leading to
>> a measurable I/O performance regression. On systems with NVMe devices
>> under the same bridge, throughput was observed to drop from ~6150 MiB/s
>> to ~4985 MiB/s.
> So for the same-bridge case, the two NVMe disks are likely in the same
> iommu group and share a domain, so there is contention on the spinlock
> between the two parallel threads on the two disks. When the disks come
> from different bridges, they are attached to different domains and hence
> there is no contention.
> 
> Is that a correct description of the difference between the same-bridge
> and different-bridge cases?

Yes. I have the same understanding.

Thanks,
baolu
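
For context, the gating described in the patch text above could look roughly
like the sketch below. This is a minimal illustration, not the actual patch:
it assumes the existing cap_caching_mode()/cap_rwbf() capability helpers and
a per-domain boolean (named iotlb_sync_map here purely for illustration) that
is computed once when a device is attached, so the hot map path can return
early without taking the cache-tag spinlock.

/*
 * Illustrative sketch only -- helper and field names below are assumptions
 * for the purpose of the example, not the exact upstream change.
 */
static bool domain_need_iotlb_sync_map(struct dmar_domain *domain,
				       struct intel_iommu *iommu)
{
	/*
	 * Second-stage page tables need a flush on map when the hardware
	 * caches not-present entries (CAP_REG.CM == 1).
	 */
	if (!domain->use_first_level && cap_caching_mode(iommu->cap))
		return true;

	/* Write-buffer flushing is required (CAP_REG.RWBF == 1). */
	if (cap_rwbf(iommu->cap))
		return true;

	return false;
}

static int intel_iommu_iotlb_sync_map(struct iommu_domain *domain,
				      unsigned long iova, size_t size)
{
	struct dmar_domain *dmar_domain = to_dmar_domain(domain);

	/*
	 * When no attached IOMMU operates in caching mode and none requires
	 * write-buffer flushing, skip the spinlock-protected cache-tag walk
	 * entirely.
	 */
	if (!dmar_domain->iotlb_sync_map)
		return 0;

	cache_tag_flush_range_np(dmar_domain, iova, iova + size - 1);
	return 0;
}

Evaluating domain_need_iotlb_sync_map() per IOMMU at attach time and OR-ing
the results into the per-domain flag keeps the map fast path lock-free in the
non-caching/non-RWBF case, which is exactly the scenario the measurements
above describe.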

Thread overview: 4+ messages
2025-07-03  3:15 [PATCH 1/1] iommu/vt-d: Optimize iotlb_sync_map for non-caching/non-RWBF modes Lu Baolu
2025-07-03  7:16 ` Tian, Kevin
2025-07-04  1:25   ` Baolu Lu [this message]
2025-07-14  5:01 ` Baolu Lu
