From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Message-Id: <20171201155029.951684562@goodmis.org>
Date: Fri, 01 Dec 2017 10:50:11 -0500
From: Steven Rostedt
To: linux-kernel@vger.kernel.org, linux-rt-users
Cc: Thomas Gleixner, Carsten Emde, Sebastian Andrzej Siewior, John Kacur,
 Paul Gortmaker, Julia Cartwright, Daniel Wagner, tom.zanussi@linux.intel.com,
 Alex Shi, rt-stable@vger.kernel.org, Joerg Roedel,
 iommu@lists.linux-foundation.org, Vinod Adhikary
Subject: [PATCH RT 12/15] iommu/amd: Use raw_cpu_ptr() instead of
 get_cpu_ptr() for ->flush_queue
References: <20171201154959.375846909@goodmis.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-15
Content-Disposition: inline;
 filename=0012-iommu-amd-Use-raw_cpu_ptr-instead-of-get_cpu_ptr-for.patch
Sender: linux-kernel-owner@vger.kernel.org
List-ID:

4.9.65-rt57-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior

get_cpu_ptr() disables preemption and returns the ->flush_queue object of
the current CPU. raw_cpu_ptr() does the same except that it does not
disable preemption, which means the scheduler can move the task to another
CPU after it obtained the per-CPU object. That is not a problem here
because the data structure itself is protected by a spin_lock. This change
shouldn't matter in general, but on RT it does, because the sleeping lock
can't be acquired with preemption disabled.

Cc: rt-stable@vger.kernel.org
Cc: Joerg Roedel
Cc: iommu@lists.linux-foundation.org
Reported-by: Vinod Adhikary
Signed-off-by: Sebastian Andrzej Siewior
---
 drivers/iommu/amd_iommu.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index a88595b21111..ff5c2424eb9e 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -2283,7 +2283,7 @@ static void queue_add(struct dma_ops_domain *dma_dom,
 	pages     = __roundup_pow_of_two(pages);
 	address >>= PAGE_SHIFT;
 
-	queue = get_cpu_ptr(&flush_queue);
+	queue = raw_cpu_ptr(&flush_queue);
 	spin_lock_irqsave(&queue->lock, flags);
 
 	if (queue->next == FLUSH_QUEUE_SIZE)
@@ -2300,8 +2300,6 @@ static void queue_add(struct dma_ops_domain *dma_dom,
 
 	if (atomic_cmpxchg(&queue_timer_on, 0, 1) == 0)
 		mod_timer(&queue_timer, jiffies + msecs_to_jiffies(10));
-
-	put_cpu_ptr(&flush_queue);
 }
-- 
2.13.2
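
A minimal sketch of the pattern the patch changes, for readers less familiar
with the two per-CPU accessors. It is illustrative only and not part of the
patch: struct flush_queue_sketch, sketch_queue and queue_add_sketch() are
made-up stand-ins for the real flush_queue machinery in
drivers/iommu/amd_iommu.c.

/* Sketch only: how raw_cpu_ptr() differs from get_cpu_ptr() here. */
#include <linux/percpu.h>
#include <linux/spinlock.h>

/* Stand-in for the per-CPU flush queue in amd_iommu.c. */
struct flush_queue_sketch {
	spinlock_t lock;
	/* queue entries omitted; lock init also omitted for brevity */
};
static DEFINE_PER_CPU(struct flush_queue_sketch, sketch_queue);

static void queue_add_sketch(void)
{
	struct flush_queue_sketch *queue;
	unsigned long flags;

	/*
	 * get_cpu_ptr() would disable preemption here and require a
	 * matching put_cpu_ptr().  On PREEMPT_RT the spinlock below is
	 * a sleeping lock and must not be taken with preemption off.
	 *
	 * raw_cpu_ptr() only resolves the address of this CPU's object;
	 * preemption stays enabled.  Being migrated to another CPU
	 * afterwards is harmless because queue->lock serializes access.
	 */
	queue = raw_cpu_ptr(&sketch_queue);

	spin_lock_irqsave(&queue->lock, flags);
	/* ... manipulate the queue under the lock ... */
	spin_unlock_irqrestore(&queue->lock, flags);

	/* No put_cpu_ptr() needed: preemption was never disabled. */
}

On mainline the two accessors differ here only by the implied
preempt_disable()/preempt_enable() pair; the change matters on RT because
spin_lock_irqsave() then takes a sleeping lock.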