Date: Sun, 14 Dec 2025 22:32:35 +0000
From: Mostafa Saleh
To: Nicolin Chen
Cc: jgg@nvidia.com, will@kernel.org, robin.murphy@arm.com, joro@8bytes.org,
	linux-arm-kernel@lists.infradead.org, iommu@lists.linux.dev,
	linux-kernel@vger.kernel.org, skolothumtho@nvidia.com,
	praan@google.com, xueshuai@linux.alibaba.com
Subject: Re: [PATCH rc v3 1/4] iommu/arm-smmu-v3: Add ignored bits to fix STE update sequence

Hi Nicolin,

On Tue, Dec 09, 2025 at 06:45:16PM -0800, Nicolin Chen wrote:
> From: Jason Gunthorpe
>
> C_BAD_STE was observed when updating nested STE from an S1-bypass mode to
> an S1DSS-bypass mode. As both modes enabled S2, the used bit is slightly
> different than the normal S1-bypass and S1DSS-bypass modes. As a result,
> fields like MEV and EATS in S2's used list marked the word1 as a critical
> word that requested a STE.V=0. This breaks a hitless update.
>
> However, both MEV and EATS aren't critical in terms of STE update. One
> controls the merge of the events and the other controls the ATS that is
> managed by the driver at the same time via pci_enable_ats().
>
> Add an arm_smmu_get_ste_ignored() to allow STE update algorithm to ignore
> those fields, avoiding the STE update breakages.
>
> Note that this change is required by both MEV and EATS fields, which were
> introduced in different kernel versions. So add this get_ignored() first.
> The MEV and EATS will be added in arm_smmu_get_ste_ignored() separately.
>
> Fixes: 1e8be08d1c91 ("iommu/arm-smmu-v3: Support IOMMU_DOMAIN_NESTED")
> Cc: stable@vger.kernel.org
> Signed-off-by: Jason Gunthorpe
> Reviewed-by: Shuai Xue
> Signed-off-by: Nicolin Chen
> ---
>  drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h    |  2 ++
>  .../iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c   | 19 ++++++++++++---
>  drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c    | 24 +++++++++++++++----
>  3 files changed, 37 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
> index ae23aacc3840..d5f0e5407b9f 100644
> --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
> +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h
> @@ -900,6 +900,7 @@ struct arm_smmu_entry_writer {
>
>  struct arm_smmu_entry_writer_ops {
>  	void (*get_used)(const __le64 *entry, __le64 *used);
> +	void (*get_ignored)(__le64 *ignored_bits);
>  	void (*sync)(struct arm_smmu_entry_writer *writer);
>  };
>
> @@ -911,6 +912,7 @@ void arm_smmu_make_s2_domain_ste(struct arm_smmu_ste *target,
>
>  #if IS_ENABLED(CONFIG_KUNIT)
>  void arm_smmu_get_ste_used(const __le64 *ent, __le64 *used_bits);
> +void arm_smmu_get_ste_ignored(__le64 *ignored_bits);
>  void arm_smmu_write_entry(struct arm_smmu_entry_writer *writer, __le64 *cur,
>  			  const __le64 *target);
>  void arm_smmu_get_cd_used(const __le64 *ent, __le64 *used_bits);
> diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c
> index d2671bfd3798..3556e65cf9ac 100644
> --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c
> +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.c
> @@ -38,13 +38,16 @@ enum arm_smmu_test_master_feat {
>  static bool arm_smmu_entry_differs_in_used_bits(const __le64 *entry,
>  						const __le64 *used_bits,
>  						const __le64 *target,
> +						const __le64 *ignored,
>  						unsigned int length)
>  {
>  	bool differs = false;
>  	unsigned int i;
>
>  	for (i = 0; i < length; i++) {
> -		if ((entry[i] & used_bits[i]) != target[i])
> +		__le64 used = used_bits[i] & ~ignored[i];
> +
> +		if ((entry[i] & used) != (target[i] & used))
>  			differs = true;
>  	}
>  	return differs;
> @@ -56,12 +59,18 @@ arm_smmu_test_writer_record_syncs(struct arm_smmu_entry_writer *writer)
>  	struct arm_smmu_test_writer *test_writer =
>  		container_of(writer, struct arm_smmu_test_writer, writer);
>  	__le64 *entry_used_bits;
> +	__le64 *ignored;
>
>  	entry_used_bits = kunit_kzalloc(
>  		test_writer->test, sizeof(*entry_used_bits) * NUM_ENTRY_QWORDS,
>  		GFP_KERNEL);
>  	KUNIT_ASSERT_NOT_NULL(test_writer->test, entry_used_bits);
>
> +	ignored = kunit_kzalloc(test_writer->test,
> +				sizeof(*ignored) * NUM_ENTRY_QWORDS,
> +				GFP_KERNEL);
> +	KUNIT_ASSERT_NOT_NULL(test_writer->test, ignored);
> +
>  	pr_debug("STE value is now set to: ");
>  	print_hex_dump_debug("  ", DUMP_PREFIX_NONE, 16, 8,
>  			     test_writer->entry,
> @@ -79,14 +88,17 @@ arm_smmu_test_writer_record_syncs(struct arm_smmu_entry_writer *writer)
>  		 * configuration.
>  		 */
>  		writer->ops->get_used(test_writer->entry, entry_used_bits);
> +		if (writer->ops->get_ignored)
> +			writer->ops->get_ignored(ignored);
>  		KUNIT_EXPECT_FALSE(
>  			test_writer->test,
>  			arm_smmu_entry_differs_in_used_bits(
>  				test_writer->entry, entry_used_bits,
> -				test_writer->init_entry, NUM_ENTRY_QWORDS) &&
> +				test_writer->init_entry, ignored,
> +				NUM_ENTRY_QWORDS) &&
>  			arm_smmu_entry_differs_in_used_bits(
>  				test_writer->entry, entry_used_bits,
> -				test_writer->target_entry,
> +				test_writer->target_entry, ignored,
>  				NUM_ENTRY_QWORDS));
>  	}
>  }
> @@ -106,6 +118,7 @@ arm_smmu_v3_test_debug_print_used_bits(struct arm_smmu_entry_writer *writer,
>  static const struct arm_smmu_entry_writer_ops test_ste_ops = {
>  	.sync = arm_smmu_test_writer_record_syncs,
>  	.get_used = arm_smmu_get_ste_used,
> +	.get_ignored = arm_smmu_get_ste_ignored,
>  };
>
>  static const struct arm_smmu_entry_writer_ops test_cd_ops = {
> diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> index d16d35c78c06..e22c0890041b 100644
> --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> @@ -1082,6 +1082,12 @@ void arm_smmu_get_ste_used(const __le64 *ent, __le64 *used_bits)
>  }
>  EXPORT_SYMBOL_IF_KUNIT(arm_smmu_get_ste_used);
>
> +VISIBLE_IF_KUNIT
> +void arm_smmu_get_ste_ignored(__le64 *ignored_bits)
> +{
> +}
> +EXPORT_SYMBOL_IF_KUNIT(arm_smmu_get_ste_ignored);
> +
>  /*
>   * Figure out if we can do a hitless update of entry to become target. Returns a
>   * bit mask where 1 indicates that qword needs to be set disruptively.
> @@ -1094,13 +1100,22 @@ static u8 arm_smmu_entry_qword_diff(struct arm_smmu_entry_writer *writer,
>  {
>  	__le64 target_used[NUM_ENTRY_QWORDS] = {};
>  	__le64 cur_used[NUM_ENTRY_QWORDS] = {};
> +	__le64 ignored[NUM_ENTRY_QWORDS] = {};

I think we can avoid the extra stack allocation for another STE if we make
the function update cur_used directly, but I have no strong opinion.
>  	u8 used_qword_diff = 0;
>  	unsigned int i;
>
>  	writer->ops->get_used(entry, cur_used);
>  	writer->ops->get_used(target, target_used);
> +	if (writer->ops->get_ignored)
> +		writer->ops->get_ignored(ignored);
>
>  	for (i = 0; i != NUM_ENTRY_QWORDS; i++) {
> +		/*
> +		 * Ignored is only used for bits that are used by both entries,
> +		 * otherwise it is sequenced according to the unused entry.
> +		 */
> +		ignored[i] &= target_used[i] & cur_used[i];
> +
>  		/*
>  		 * Check that masks are up to date, the make functions are not
>  		 * allowed to set a bit to 1 if the used function doesn't say it
> @@ -1109,6 +1124,7 @@ static u8 arm_smmu_entry_qword_diff(struct arm_smmu_entry_writer *writer,
>  		WARN_ON_ONCE(target[i] & ~target_used[i]);
>
>  		/* Bits can change because they are not currently being used */
> +		cur_used[i] &= ~ignored[i];
>  		unused_update[i] = (entry[i] & cur_used[i]) |
>  				   (target[i] & ~cur_used[i]);
>  		/*
> @@ -1207,12 +1223,9 @@ void arm_smmu_write_entry(struct arm_smmu_entry_writer *writer, __le64 *entry,
>  		entry_set(writer, entry, target, 0, 1);
>  	} else {
>  		/*
> -		 * No inuse bit changed. Sanity check that all unused bits are 0
> -		 * in the entry. The target was already sanity checked by
> -		 * compute_qword_diff().
> +		 * No inuse bit changed, though ignored bits may have changed.
>  		 */
> -		WARN_ON_ONCE(
> -			entry_set(writer, entry, target, 0, NUM_ENTRY_QWORDS));
> +		entry_set(writer, entry, target, 0, NUM_ENTRY_QWORDS);

After this change, no other caller uses the entry_set() return value, so it
can be changed to return void.
>  	}
>  }
>  EXPORT_SYMBOL_IF_KUNIT(arm_smmu_write_entry);
> @@ -1543,6 +1556,7 @@ static void arm_smmu_ste_writer_sync_entry(struct arm_smmu_entry_writer *writer)
>  static const struct arm_smmu_entry_writer_ops arm_smmu_ste_writer_ops = {
>  	.sync = arm_smmu_ste_writer_sync_entry,
>  	.get_used = arm_smmu_get_ste_used,
> +	.get_ignored = arm_smmu_get_ste_ignored,
>  };
>

I have some mixed feelings about this: having get_used() and then
get_ignored() with the same bits set seems confusing to me, especially since
get_ignored() loops back to update cur_used, which is set from get_used().

My initial thought was that just removing this bit from get_used(), plus
some changes to the checks against setting bits that are not used, would be
enough, and the semantics of get_used() could then be something like:
"Return the bits used by the updated translation regime that MUST be
observed atomically". In that case we can ignore things like MEV, as it
doesn't impact the translation.

However, this approach does make it more explicit which bits are ignored.
If we keep this logic, I think changing the name of get_ignored() might
help, to something like "get_allowed_break()" or "get_update_safe()"?

Thanks,
Mostafa

>  static void arm_smmu_write_ste(struct arm_smmu_master *master, u32 sid,
> --
> 2.43.0
>