public inbox for linux-kernel@vger.kernel.org
* [PATCH 1/1] iommu/vt-d: Fix aligned pages in calculate_psi_aligned_address()
@ 2024-07-08 12:14 Lu Baolu
  2024-07-09  2:53 ` Baolu Lu
  0 siblings, 1 reply; 3+ messages in thread
From: Lu Baolu @ 2024-07-08 12:14 UTC (permalink / raw)
  To: Joerg Roedel, Will Deacon, Robin Murphy, Kevin Tian,
	Louis Maliyam
  Cc: iommu, linux-kernel, Lu Baolu

The helper calculate_psi_aligned_address() is used to convert an arbitrary
range into a size-aligned one.

The aligned_pages variable is calculated from input start and end, but is
not adjusted when the start pfn is not aligned and the mask is adjusted,
which results in an incorrect number of pages returned.

The number of pages is used by qi_flush_piotlb() to flush caches for the
first-stage translation. With the wrong number of pages, the cache is not
synchronized, leading to inconsistencies in some cases.

Fixes: c4d27ffaa8eb ("iommu/vt-d: Add cache tag invalidation helpers")
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
---
 drivers/iommu/intel/cache.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/iommu/intel/cache.c b/drivers/iommu/intel/cache.c
index e8418cdd8331..113834742107 100644
--- a/drivers/iommu/intel/cache.c
+++ b/drivers/iommu/intel/cache.c
@@ -246,6 +246,7 @@ static unsigned long calculate_psi_aligned_address(unsigned long start,
 		 */
 		shared_bits = ~(pfn ^ end_pfn) & ~bitmask;
 		mask = shared_bits ? __ffs(shared_bits) : BITS_PER_LONG;
+		aligned_pages = 1UL << mask;
 	}
 
 	*_pages = aligned_pages;
-- 
2.34.1



* Re: [PATCH 1/1] iommu/vt-d: Fix aligned pages in calculate_psi_aligned_address()
  2024-07-08 12:14 [PATCH 1/1] iommu/vt-d: Fix aligned pages in calculate_psi_aligned_address() Lu Baolu
@ 2024-07-09  2:53 ` Baolu Lu
  2024-07-09  6:58   ` Tian, Kevin
  0 siblings, 1 reply; 3+ messages in thread
From: Baolu Lu @ 2024-07-09  2:53 UTC (permalink / raw)
  To: Joerg Roedel, Will Deacon, Robin Murphy, Kevin Tian,
	Louis Maliyam
  Cc: baolu.lu, iommu, linux-kernel

On 7/8/24 8:14 PM, Lu Baolu wrote:
> The helper calculate_psi_aligned_address() is used to convert an arbitrary
> range into a size-aligned one.
> 
> The aligned_pages variable is calculated from input start and end, but is
> not adjusted when the start pfn is not aligned and the mask is adjusted,
> which results in an incorrect number of pages returned.
> 
> The number of pages is used by qi_flush_piotlb() to flush caches for the
> first-stage translation. With the wrong number of pages, the cache is not
> synchronized, leading to inconsistencies in some cases.
> 
> Fixes: c4d27ffaa8eb ("iommu/vt-d: Add cache tag invalidation helpers")
> Signed-off-by: Lu Baolu<baolu.lu@linux.intel.com>
> ---
>   drivers/iommu/intel/cache.c | 1 +
>   1 file changed, 1 insertion(+)
> 
> diff --git a/drivers/iommu/intel/cache.c b/drivers/iommu/intel/cache.c
> index e8418cdd8331..113834742107 100644
> --- a/drivers/iommu/intel/cache.c
> +++ b/drivers/iommu/intel/cache.c
> @@ -246,6 +246,7 @@ static unsigned long calculate_psi_aligned_address(unsigned long start,
>   		 */
>   		shared_bits = ~(pfn ^ end_pfn) & ~bitmask;
>   		mask = shared_bits ? __ffs(shared_bits) : BITS_PER_LONG;
> +		aligned_pages = 1UL << mask;

Hmm, it appears that if mask is equal to BITS_PER_LONG (which is
typically 64), the left shift operation will overflow.

So perhaps we need another line of change:

diff --git a/drivers/iommu/intel/cache.c b/drivers/iommu/intel/cache.c
index 113834742107..44e92638c0cd 100644
--- a/drivers/iommu/intel/cache.c
+++ b/drivers/iommu/intel/cache.c
@@ -245,7 +245,7 @@ static unsigned long calculate_psi_aligned_address(unsigned long start,
                  * shared_bits are all equal in both pfn and end_pfn.
                  */
                 shared_bits = ~(pfn ^ end_pfn) & ~bitmask;
-               mask = shared_bits ? __ffs(shared_bits) : BITS_PER_LONG;
+               mask = shared_bits ? __ffs(shared_bits) : MAX_AGAW_PFN_WIDTH;
                 aligned_pages = 1UL << mask;
         }

I will send the above as a separate fix, as the shift already overflows
in another path.

Kevin, sound good to you?

Thanks,
baolu


* RE: [PATCH 1/1] iommu/vt-d: Fix aligned pages in calculate_psi_aligned_address()
  2024-07-09  2:53 ` Baolu Lu
@ 2024-07-09  6:58   ` Tian, Kevin
  0 siblings, 0 replies; 3+ messages in thread
From: Tian, Kevin @ 2024-07-09  6:58 UTC (permalink / raw)
  To: Baolu Lu, Joerg Roedel, Will Deacon, Robin Murphy, Louis Maliyam
  Cc: iommu@lists.linux.dev, linux-kernel@vger.kernel.org

> From: Baolu Lu <baolu.lu@linux.intel.com>
> Sent: Tuesday, July 9, 2024 10:54 AM
> 
> On 7/8/24 8:14 PM, Lu Baolu wrote:
> > The helper calculate_psi_aligned_address() is used to convert an arbitrary
> > range into a size-aligned one.
> >
> > The aligned_pages variable is calculated from input start and end, but is
> > not adjusted when the start pfn is not aligned and the mask is adjusted,
> > which results in an incorrect number of pages returned.
> >
> > The number of pages is used by qi_flush_piotlb() to flush caches for the
> > first-stage translation. With the wrong number of pages, the cache is not
> > synchronized, leading to inconsistencies in some cases.
> >
> > Fixes: c4d27ffaa8eb ("iommu/vt-d: Add cache tag invalidation helpers")
> > Signed-off-by: Lu Baolu<baolu.lu@linux.intel.com>
> > ---
> >   drivers/iommu/intel/cache.c | 1 +
> >   1 file changed, 1 insertion(+)
> >
> > diff --git a/drivers/iommu/intel/cache.c b/drivers/iommu/intel/cache.c
> > index e8418cdd8331..113834742107 100644
> > --- a/drivers/iommu/intel/cache.c
> > +++ b/drivers/iommu/intel/cache.c
> > @@ -246,6 +246,7 @@ static unsigned long calculate_psi_aligned_address(unsigned long start,
> >   		 */
> >   		shared_bits = ~(pfn ^ end_pfn) & ~bitmask;
> >   		mask = shared_bits ? __ffs(shared_bits) : BITS_PER_LONG;
> > +		aligned_pages = 1UL << mask;
> 
> Hmm, it appears that if mask is equal to BITS_PER_LONG (which is
> typically 64), the left shift operation will overflow.
> 
> So perhaps we need another line of change:
> 
> diff --git a/drivers/iommu/intel/cache.c b/drivers/iommu/intel/cache.c
> index 113834742107..44e92638c0cd 100644
> --- a/drivers/iommu/intel/cache.c
> +++ b/drivers/iommu/intel/cache.c
> @@ -245,7 +245,7 @@ static unsigned long calculate_psi_aligned_address(unsigned long start,
>                   * shared_bits are all equal in both pfn and end_pfn.
>                   */
>                  shared_bits = ~(pfn ^ end_pfn) & ~bitmask;
> -               mask = shared_bits ? __ffs(shared_bits) : BITS_PER_LONG;
> +               mask = shared_bits ? __ffs(shared_bits) : MAX_AGAW_PFN_WIDTH;
>                  aligned_pages = 1UL << mask;
>          }
> 
> I will send the above as a separate fix, as the shift already
> overflows in another path.
> 
> Kevin, sound good to you?
> 

yes. for both:

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

