Subject: Re: [PATCH 4/5] ARCv2: Elide sending new cross core intr if receiver didn't ack prev
From: Vineet Gupta
Date: Fri, 26 Feb 2016 16:32:42 +0530
Message-ID: <56D030D2.6070104@synopsys.com>
In-Reply-To: <1456218702-11911-5-git-send-email-vgupta@synopsys.com>
References: <1456218702-11911-1-git-send-email-vgupta@synopsys.com> <1456218702-11911-5-git-send-email-vgupta@synopsys.com>
CC: Peter Zijlstra, Chuck Jordan
Newsgroups: gmane.linux.kernel,gmane.linux.kernel.arc

On Tuesday 23 February 2016 02:41 PM, Vineet Gupta wrote:
> ARConnect/MCIP IPI sending has a retry-wait loop in case caller had
> not seen a previous such interrupt. Turns out that it is not needed at
> all. Linux cross core calling allows coalescing multiple IPIs to same
> receiver - it is fine as long as there is one.
>
> This logic is built into upper layer already, at a higher level of
> abstraction. ipi_send_msg_one() sets the actual msg payload, but it only
> calls MCIP IPI sending if msg holder was empty (using
> atomic-set-new-and-get-old construct). Thus it is unlikely that the
> retry-wait looping was ever getting exercised at all.

Turns out that this patch was needed for more serious reasons.
For experiment's sake I reverted the IPI eliding optimization and immediately ran into a deadlock, with LTP:trace_sched!

@@ -241,7 +241,7 @@ static void ipi_send_msg_one(int cpu, enum ipi_msg_type msg)
	 * IPI handler, because !@old means it has not yet dequeued the msg(s)
	 * so @new msg can be a free-loader
	 */
-	if (plat_smp_ops.ipi_send && !old)
+	if (plat_smp_ops.ipi_send)

-Vineet

>
> Cc: Chuck Jordan
> Cc: Peter Zijlstra
> Signed-off-by: Vineet Gupta
> ---
>  arch/arc/kernel/mcip.c | 27 ++++++++++-----------------
>  1 file changed, 10 insertions(+), 17 deletions(-)
>
> diff --git a/arch/arc/kernel/mcip.c b/arch/arc/kernel/mcip.c
> index e30d5d428330..7afc3c703ed1 100644
> --- a/arch/arc/kernel/mcip.c
> +++ b/arch/arc/kernel/mcip.c
> @@ -40,26 +40,19 @@ static void mcip_ipi_send(int cpu)
>  		return;
>  	}
>
> +	raw_spin_lock_irqsave(&mcip_lock, flags);
> +
>  	/*
> -	 * NOTE: We must spin here if the other cpu hasn't yet
> -	 * serviced a previous message. This can burn lots
> -	 * of time, but we MUST follows this protocol or
> -	 * ipi messages can be lost!!!
> -	 * Also, we must release the lock in this loop because
> -	 * the other side may get to this same loop and not
> -	 * be able to ack -- thus causing deadlock.
> +	 * If receiver already has a pending interrupt, elide sending this one.
> +	 * Linux cross core calling works well with concurrent IPIs
> +	 * coalesced into one
> +	 * see arch/arc/kernel/smp.c: ipi_send_msg_one()
>  	 */
> +	__mcip_cmd(CMD_INTRPT_READ_STATUS, cpu);
> +	ipi_was_pending = read_aux_reg(ARC_REG_MCIP_READBACK);
> +	if (!ipi_was_pending)
> +		__mcip_cmd(CMD_INTRPT_GENERATE_IRQ, cpu);
>
> -	do {
> -		raw_spin_lock_irqsave(&mcip_lock, flags);
> -		__mcip_cmd(CMD_INTRPT_READ_STATUS, cpu);
> -		ipi_was_pending = read_aux_reg(ARC_REG_MCIP_READBACK);
> -		if (ipi_was_pending == 0)
> -			break; /* break out but keep lock */
> -		raw_spin_unlock_irqrestore(&mcip_lock, flags);
> -	} while (1);
> -
> -	__mcip_cmd(CMD_INTRPT_GENERATE_IRQ, cpu);
>  	raw_spin_unlock_irqrestore(&mcip_lock, flags);
>
>  #ifdef CONFIG_ARC_IPI_DBG
>