From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: <linux-kernel-owner@vger.kernel.org>
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1751151AbdE1Gwn (ORCPT );
	Sun, 28 May 2017 02:52:43 -0400
Received: from mail-il-dmz.mellanox.com ([193.47.165.129]:37752 "EHLO
	mellanox.co.il" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP
	id S1750854AbdE1GwP (ORCPT );
	Sun, 28 May 2017 02:52:15 -0400
From: Noam Camus <noamca@mellanox.com>
To: linux-snps-arc@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, Noam Camus <noamca@mellanox.com>
Subject: [PATCH v2 08/11] ARC: [plat-eznps] spinlock aware for MTM
Date: Sun, 28 May 2017 09:52:05 +0300
Message-Id: <1495954328-28736-9-git-send-email-noamca@mellanox.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1495954328-28736-1-git-send-email-noamca@mellanox.com>
References: <1495954328-28736-1-git-send-email-noamca@mellanox.com>
Sender: linux-kernel-owner@vger.kernel.org
List-ID: <linux-kernel.vger.kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

From: Noam Camus <noamca@mellanox.com>

This way, when we execute "ex" while trying to take the lock, we can
switch to another HW thread and utilize the core instead of just
spinning on the lock.

We saw about a 10% improvement in execution time with the hackbench
test.

Signed-off-by: Noam Camus <noamca@mellanox.com>
---
 arch/arc/include/asm/spinlock.h |    6 ++++++
 1 files changed, 6 insertions(+), 0 deletions(-)

diff --git a/arch/arc/include/asm/spinlock.h b/arch/arc/include/asm/spinlock.h
index 233d5ff..0a54ce7 100644
--- a/arch/arc/include/asm/spinlock.h
+++ b/arch/arc/include/asm/spinlock.h
@@ -252,9 +252,15 @@ static inline void arch_spin_lock(arch_spinlock_t *lock)
 
 	__asm__ __volatile__(
 	"1:	ex  %0, [%1]		\n"
+#ifdef CONFIG_EZNPS_MTM_EXT
+	"	.word %3		\n"
+#endif
 	"	breq  %0, %2, 1b	\n"
 	: "+&r" (val)
 	: "r"(&(lock->slock)), "ir"(__ARCH_SPIN_LOCK_LOCKED__)
+#ifdef CONFIG_EZNPS_MTM_EXT
+	, "i"(CTOP_INST_SCHD_RW)
+#endif
 	: "memory");
 
 	/*
-- 
1.7.1
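
For context, a minimal sketch of what arch_spin_lock() effectively
becomes once CONFIG_EZNPS_MTM_EXT is enabled. This assumes the
!CONFIG_ARC_HAS_LLSC variant of arch/arc/include/asm/spinlock.h and the
smp_mb() calls already surrounding this asm block in that file; it is an
illustration of the patched loop, not part of the patch itself:

	static inline void arch_spin_lock(arch_spinlock_t *lock)
	{
		unsigned int val = __ARCH_SPIN_LOCK_LOCKED__;

		smp_mb();

		__asm__ __volatile__(
		"1:	ex  %0, [%1]		\n"	/* atomically swap val with lock->slock */
		"	.word %3		\n"	/* CTOP_INST_SCHD_RW: let another HW thread run */
		"	breq  %0, %2, 1b	\n"	/* read back LOCKED -> lock was held, retry */
		: "+&r" (val)
		: "r"(&(lock->slock)), "ir"(__ARCH_SPIN_LOCK_LOCKED__),
		  "i"(CTOP_INST_SCHD_RW)
		: "memory");

		smp_mb();	/* ACQUIRE barrier */
	}

The scheduling hint sits between the "ex" and the branch, so each failed
attempt to take the lock yields the core to another HW thread instead of
burning the slot on the retry loop.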