Subject: Re: [RFC/RFT PATCH 1/2] sched/core: Check and schedule ksoftirq
From: srinivas pandruvada
To: Peter Zijlstra
Cc: Frederic Weisbecker, rafael@kernel.org, daniel.lezcano@linaro.org,
 linux-pm@vger.kernel.org, linux-kernel@vger.kernel.org, len.brown@intel.com,
 Thomas Gleixner, Sebastian Andrzej Siewior
Date: Mon, 09 Jan 2023 18:33:37 -0800
References: <20221215184300.1592872-1-srinivas.pandruvada@linux.intel.com>
 <20221215184300.1592872-2-srinivas.pandruvada@linux.intel.com>
 <20221216220748.GA1967978@lothringen>
 <5ae0d53990c29aa25648cbf32ef3b16e9bec911c.camel@linux.intel.com>
X-Mailing-List: linux-pm@vger.kernel.org

On Tue, 2022-12-20 at 22:18 +0100, Peter Zijlstra wrote:
> On Tue, Dec 20, 2022 at 09:51:09PM +0100, Peter Zijlstra wrote:
> > On Mon, Dec 19, 2022 at 02:54:55PM -0800, srinivas pandruvada wrote:
> > 
> > > But after ksoftirqd_run_end(), which will re-enable local irqs, this
> > > may further cause more soft irqs to become pending. Here the RCU soft
> > > irq entry continues
> > 
> > Right you are.. what about if we spell the idle.c thing like so
> > instead?
> > 
> > Then we repeat the softirq thing every time we drop out of the idle
> > loop for a reschedule.
> 
> Uff, that obviously can't work because we already have preemption
> disabled here, this needs a bit more work. I think it's possible to
> re-arrange things a bit.

Didn't work. Also, when __do_softirq() returns, softirqs can be pending
again. I think we can break out of the do_idle() loop when
local_softirq_pending() is true.

Thanks,
Srinivas

> I'll try and have a look tomorrow, but the kids have their xmas play at
> school so who knows what I'll get done.
> > 
> > diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
> > index f26ab2675f7d..6dce49813bcc 100644
> > --- a/kernel/sched/idle.c
> > +++ b/kernel/sched/idle.c
> > @@ -381,8 +381,13 @@ void play_idle_precise(u64 duration_ns, u64 latency_ns)
> >         hrtimer_start(&it.timer, ns_to_ktime(duration_ns),
> >                       HRTIMER_MODE_REL_PINNED_HARD);
> > 
> > -       while (!READ_ONCE(it.done))
> > +       while (!READ_ONCE(it.done)) {
> > +               rt_mutex_lock(&per_cpu(ksoftirq_lock, cpu));
> > +               __run_ksoftirqd(smp_processor_id());
> > +               rt_mutex_unlock(&per_cpu(ksoftirq_lock, cpu));
> > +
> >                 do_idle();
> > +       }
> > 
> >         cpuidle_use_deepest_state(0);
> >         current->flags &= ~PF_IDLE;