iommu.lists.linux-foundation.org archive mirror
* [PATCH] iommu/vt-d: Avoid write-tearing on PTE clear
@ 2016-05-21  9:51 Nadav Amit
       [not found] ` <1463824283-1683-1-git-send-email-namit-pghWNbHTmq7QT0dZR+AlfA@public.gmane.org>
  0 siblings, 1 reply; 3+ messages in thread
From: Nadav Amit @ 2016-05-21  9:51 UTC (permalink / raw)
  To: dwmw2-wEGCiKHe2LqWVfeAwA7xHQ
  Cc: Nadav Amit, iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA

When a PTE is cleared, the compiler may tear the write or perform it as
multiple writes. In addition, on 32-bit kernels the clear is currently
expressed as a single 64-bit store, which the compiler splits into two
32-bit stores in an unspecified order, so no ordering is guaranteed.

The generated code does not appear to cause a problem right now, but in
theory it could.

Avoid this scenario by using WRITE_ONCE, and order the two halves of the
write on 32-bit kernels.

Signed-off-by: Nadav Amit <namit-pghWNbHTmq7QT0dZR+AlfA@public.gmane.org>
---
 drivers/iommu/intel-iommu.c | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index e1852e8..4f488a5 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -326,9 +326,26 @@ struct dma_pte {
 	u64 val;
 };
 
+#ifndef CONFIG_64BIT
+union split_dma_pte {
+	struct {
+		u32 val_low;
+		u32 val_high;
+	};
+	u64 val;
+};
+#endif
+
 static inline void dma_clear_pte(struct dma_pte *pte)
 {
-	pte->val = 0;
+#ifdef CONFIG_64BIT
+	WRITE_ONCE(pte->val, 0);
+#else
+	union split_dma_pte *sdma_pte = (union split_dma_pte *)pte;
+
+	WRITE_ONCE(sdma_pte->val_low, 0);
+	sdma_pte->val_high = 0;
+#endif
 }
 
 static inline u64 dma_pte_addr(struct dma_pte *pte)
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 3+ messages in thread

* Re: [PATCH] iommu/vt-d: Avoid write-tearing on PTE clear
       [not found] ` <1463824283-1683-1-git-send-email-namit-pghWNbHTmq7QT0dZR+AlfA@public.gmane.org>
@ 2016-06-03 17:55   ` Nadav Amit
  2016-06-15 11:48   ` Joerg Roedel
  1 sibling, 0 replies; 3+ messages in thread
From: Nadav Amit @ 2016-06-03 17:55 UTC (permalink / raw)
  To: dwmw2-wEGCiKHe2LqWVfeAwA7xHQ, joro-zLv9SwRftAIdnm+yROfE0A,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA

Ping?

Nadav Amit <namit-pghWNbHTmq7QT0dZR+AlfA@public.gmane.org> wrote:

> When a PTE is cleared, the compiler may tear the write or perform it as
> multiple writes. In addition, on 32-bit kernels the clear is currently
> expressed as a single 64-bit store, which the compiler splits into two
> 32-bit stores in an unspecified order, so no ordering is guaranteed.
> 
> The generated code does not appear to cause a problem right now, but in
> theory it could.
> 
> Avoid this scenario by using WRITE_ONCE, and order the two halves of the
> write on 32-bit kernels.
> 
> Signed-off-by: Nadav Amit <namit-pghWNbHTmq7QT0dZR+AlfA@public.gmane.org>
> ---
> drivers/iommu/intel-iommu.c | 19 ++++++++++++++++++-
> 1 file changed, 18 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
> index e1852e8..4f488a5 100644
> --- a/drivers/iommu/intel-iommu.c
> +++ b/drivers/iommu/intel-iommu.c
> @@ -326,9 +326,26 @@ struct dma_pte {
> 	u64 val;
> };
> 
> +#ifndef CONFIG_64BIT
> +union split_dma_pte {
> +	struct {
> +		u32 val_low;
> +		u32 val_high;
> +	};
> +	u64 val;
> +};
> +#endif
> +
> static inline void dma_clear_pte(struct dma_pte *pte)
> {
> -	pte->val = 0;
> +#ifdef CONFIG_64BIT
> +	WRITE_ONCE(pte->val, 0);
> +#else
> +	union split_dma_pte *sdma_pte = (union split_dma_pte *)pte;
> +
> +	WRITE_ONCE(sdma_pte->val_low, 0);
> +	sdma_pte->val_high = 0;
> +#endif
> }
> 
> static inline u64 dma_pte_addr(struct dma_pte *pte)
> -- 
> 2.7.4

^ permalink raw reply	[flat|nested] 3+ messages in thread

* Re: [PATCH] iommu/vt-d: Avoid write-tearing on PTE clear
       [not found] ` <1463824283-1683-1-git-send-email-namit-pghWNbHTmq7QT0dZR+AlfA@public.gmane.org>
  2016-06-03 17:55   ` Nadav Amit
@ 2016-06-15 11:48   ` Joerg Roedel
  1 sibling, 0 replies; 3+ messages in thread
From: Joerg Roedel @ 2016-06-15 11:48 UTC (permalink / raw)
  To: Nadav Amit
  Cc: iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	dwmw2-wEGCiKHe2LqWVfeAwA7xHQ

On Sat, May 21, 2016 at 02:51:23AM -0700, Nadav Amit wrote:
> When a PTE is cleared, the compiler may tear the write or perform it as
> multiple writes. In addition, on 32-bit kernels the clear is currently
> expressed as a single 64-bit store, which the compiler splits into two
> 32-bit stores in an unspecified order, so no ordering is guaranteed.
> 
> The generated code does not appear to cause a problem right now, but in
> theory it could.
> 
> Avoid this scenario by using WRITE_ONCE, and order the two halves of the
> write on 32-bit kernels.
> 
> Signed-off-by: Nadav Amit <namit-pghWNbHTmq7QT0dZR+AlfA@public.gmane.org>
> ---
>  drivers/iommu/intel-iommu.c | 19 ++++++++++++++++++-
>  1 file changed, 18 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
> index e1852e8..4f488a5 100644
> --- a/drivers/iommu/intel-iommu.c
> +++ b/drivers/iommu/intel-iommu.c
> @@ -326,9 +326,26 @@ struct dma_pte {
>  	u64 val;
>  };
>  
> +#ifndef CONFIG_64BIT
> +union split_dma_pte {
> +	struct {
> +		u32 val_low;
> +		u32 val_high;
> +	};

Please move this struct definition to dma_clear_pte().

> +	u64 val;
> +};
> +#endif
> +
>  static inline void dma_clear_pte(struct dma_pte *pte)
>  {
> -	pte->val = 0;
> +#ifdef CONFIG_64BIT
> +	WRITE_ONCE(pte->val, 0);
> +#else
> +	union split_dma_pte *sdma_pte = (union split_dma_pte *)pte;
> +
> +	WRITE_ONCE(sdma_pte->val_low, 0);
> +	sdma_pte->val_high = 0;
> +#endif

And this needs a comment explaining what is going on and why it is
necessary.


Thanks,

	Joerg

^ permalink raw reply	[flat|nested] 3+ messages in thread


Thread overview: 3+ messages
2016-05-21  9:51 [PATCH] iommu/vt-d: Avoid write-tearing on PTE clear Nadav Amit
     [not found] ` <1463824283-1683-1-git-send-email-namit-pghWNbHTmq7QT0dZR+AlfA@public.gmane.org>
2016-06-03 17:55   ` Nadav Amit
2016-06-15 11:48   ` Joerg Roedel
