Date: Fri, 07 May 2021 09:56:24 +0100
Message-ID: <87lf8qq5vr.wl-maz@kernel.org>
From: Marc Zyngier <maz@kernel.org>
To: He Ying <heying24@huawei.com>
Cc: vincent.guittot@linaro.org, Valentin.Schneider@arm.com,
    andrew@lunn.ch, catalin.marinas@arm.com, f.fainelli@gmail.com,
    gregory.clement@bootlin.com, kernel-team@android.com,
    linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    linux@arm.linux.org.uk, saravanak@google.com, sumit.garg@linaro.org,
    tglx@linutronix.de, will@kernel.org
Subject: Re: [PATCH v3 03/16] arm64: Allow IPIs to be handled as normal interrupts
References: <87pmy4qe7e.wl-maz@kernel.org>

On Fri, 07 May 2021 08:30:06 +0100,
He Ying wrote:
> 
> On 2021/5/6 19:44, Marc Zyngier wrote:
> > On Thu, 06 May 2021 08:50:42 +0100,
> > He Ying wrote:
> >> Hello Marc,
> >>
> >> We have faced a performance regression in IPI handling since this
> >> commit. I think it's the same issue reported by Vincent.
> >
> > Can you share more details on the regression you have observed?
> > What's the workload, the system, the performance drop?
> 
> OK. We have just counted the PMU cycles from the entry of
> gic_handle_irq to the entry of do_handle_ipi. Here is some more
> information about our test:
> 
> CPU: Hisilicon hip05-d02
> 
> With the patch series applied: 1115 cycles
> With the patch series reverted: 599 cycles

And? How is that meaningful? Interrupts are pretty rare compared to
everything else that happens in the system. How does this affect the
behaviour of the system as a whole?

> >> I found you pointed out two possible causes:
> >>
> >> (1) irq_enter/exit on the rescheduling IPI means we reschedule much
> >> more often.
> >
> > It turned out to be a red herring. We don't reschedule more often, but
> > we instead suffer from the overhead of irq_enter()/irq_exit().
> > However, this only matters for silly benchmarks, and no real-life
> > workload has shown any significant regression. Have you identified
> > such a realistic workload?
> 
> I'm afraid not. We just ran some benchmarks and read the PMU cycle
> counters. But we have observed that the time from the entry of
> gic_handle_irq to the entry of do_handle_ipi almost doubles. Doesn't
> that affect realistic workloads?

Then I'm not that interested. Show me an actual regression in a real
workload that affects people, and I'll be a bit more sympathetic to
your complaint. But quoting raw numbers does not help.

There are a number of advantages to handling IPIs as IRQs: it allows
us to deal with proper allocation (other subsystems want to use IPIs),
and eventually with NMIs. There is a trade-off, and if that means
wasting a few cycles, so be it.

> >> (2) irq_domain lookups add some overhead.
> >
> > While this is also a potential source of overhead, it turned out not
> > to be the case.
> 
> OK.
> 
> >> But I don't see any follow-up patches in mainline. So, are you still
> >> working on this issue? Looking forward to your reply.
> >
> > See [1]. However, there are probably better things to do than this
> > low-level specialisation of IPIs, and Thomas outlined what needs to
> > be done (see v1 of the patch series).
> 
> OK, I see the patch series. Will it be applied to mainline someday? I
> notice that more than 5 months have passed since you sent it.

I have no plan to merge these patches any time soon, given that nobody
has shown a measurable regression using anything other than a trivial
benchmark. If you come up with such an example, I will of course
reconsider this position.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.
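The argument about raw cycle counts can be made concrete with a back-of-the-envelope calculation. The cycle numbers below are the ones quoted in the thread; the clock frequency and IPI rate are illustrative assumptions (not measurements from the thread), chosen deliberately on the high side:

```python
# Rough estimate of the system-wide cost of the extra per-IPI path
# length reported in the thread (1115 vs 599 cycles per IPI).
# ASSUMED values (not from the thread): a 2.0 GHz clock and an
# aggressively high rate of 10,000 IPIs per second per CPU.

cycles_with_series = 1115   # measured with the patch series applied
cycles_reverted = 599       # measured with the series reverted
extra_cycles = cycles_with_series - cycles_reverted  # 516 cycles/IPI

clock_hz = 2.0e9            # assumed CPU clock frequency
ipis_per_second = 10_000    # assumed per-CPU IPI rate

# Extra CPU time consumed per second of wall-clock time, per CPU.
extra_seconds_per_second = extra_cycles * ipis_per_second / clock_hz

print(f"extra time per IPI:  {extra_cycles / clock_hz * 1e9:.0f} ns")
print(f"fraction of one CPU: {extra_seconds_per_second:.4%}")
```

Under these assumptions the ~516 extra cycles cost about a quarter of a microsecond per IPI, roughly 0.26% of one CPU even at 10k IPIs/s, which is consistent with the point made above: a microbenchmark can show the IPI entry path nearly doubling in length without any visible effect on a real workload.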