Date: Wed, 5 Feb 2025 10:07:36 +0100
From: Peter Zijlstra
To: Steven Rostedt
Cc: linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org,
	Thomas Gleixner, Ankur Arora, Linus Torvalds, linux-mm@kvack.org,
	x86@kernel.org, akpm@linux-foundation.org, luto@kernel.org,
	bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com,
	juri.lelli@redhat.com, vincent.guittot@linaro.org,
	willy@infradead.org, mgorman@suse.de, jon.grimm@amd.com,
	bharata@amd.com, raghavendra.kt@amd.com, boris.ostrovsky@oracle.com,
	konrad.wilk@oracle.com, jgross@suse.com, andrew.cooper3@citrix.com,
	Joel Fernandes, Vineeth Pillai, Suleiman Souhlal, Ingo Molnar,
	Mathieu Desnoyers, Clark Williams, bigeasy@linutronix.de,
	daniel.wagner@suse.com, joseph.salisbury@oracle.com,
	broonie@gmail.com
Subject: Re: [RFC][PATCH 1/2] sched: Extended scheduler time slice
Message-ID: <20250205090736.GY7145@noisy.programming.kicks-ass.net>
References: <20250201115906.GB8256@noisy.programming.kicks-ass.net>
	<20250201181129.GA34937@noisy.programming.kicks-ass.net>
	<20250201180617.491ce087@batman.local.home>
	<20250203084306.GC505@noisy.programming.kicks-ass.net>
	<20250203114537.6a30c7c0@gandalf.local.home>
	<20250204091613.GQ7145@noisy.programming.kicks-ass.net>
	<20250204075100.3fcbfda8@gandalf.local.home>
	<20250204153053.GX7145@noisy.programming.kicks-ass.net>
	<20250204111119.10ee37c8@gandalf.local.home>
In-Reply-To: <20250204111119.10ee37c8@gandalf.local.home>

On Tue, Feb 04, 2025 at 11:11:19AM -0500, Steven Rostedt wrote:
> On Tue, 4 Feb 2025 16:30:53 +0100
> Peter Zijlstra wrote:
>
> > If you go back and reread that initial thread, you'll find the 50us is
> > below the scheduling latency that random test box already had.
> >
> > I'm sure more modern systems will have a lower number, and slower
> > systems will have a larger number, but we got to pick a number :/
> >
> > I'm fine with making it 20us. Or whatever. It's just a stupid number.
> >
> > But yes. If we're going to be doing this, there is absolutely no reason
> > not to allow DEADLINE/FIFO threads the same. Misbehaving FIFO is already
> > a problem, and we can make DL-CBS enforcement punch through it if we
> > have to.
> >
> > And fewer retries on the RSEQ for FIFO can equally improve performance.
> >
> > There is no difference between a 'malicious/broken' userspace consuming
> > the entire window in userspace (50us, 20us, whatever it will be) and
> > doing a system call which we know will cause similar delays because it
> > does in-kernel locking.
>
> This is where we will disagree, for the reasons I explained in my second
> email. This feature affects other tasks. And no, making it 20us doesn't
> make it better. Because from what I get from you, if we implement this, it
> will be available for all preemption methods (including PREEMPT_RT), where
> we do have less than 50us latency, and even a 20us will break those
> applications.
Then pick another number; RT too has a max scheduling latency number (on
some random hardware). If you stay below that, all is fine.

> This was supposed to be only a hint to the kernel, not a complete feature

That's a contradiction in terms -- even a hint is a feature.

> that is hard coded and will override how other tasks behave.

Everything has some effect. My point is that if you limit this effect to
be less than the effect it can already have, you're not making things worse.

> As system
> calls themselves can make how things are scheduled depending on the
> preemption method,

What?

> I didn't want to add something that will change how
> things are scheduled that ignores the preemption method that was chosen.

Userspace is totally oblivious to the preemption method chosen, and it
damn well should be.
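[Editor's note: to make the mechanism under discussion concrete, here is a
minimal sketch of the userspace side of a slice-extension hint. All names
(struct thread_hint, ext_slice, EXT_SLICE_REQUEST, EXT_SLICE_YIELD) are
hypothetical stand-ins, not the RFC's actual ABI: the idea is that a thread
sets a flag in a kernel-visible per-thread word before a short critical
section, and on exit yields if the kernel indicated it granted extra time,
so a well-behaved task never holds the extension past the debated
50us/20us window.]

```c
#include <stdatomic.h>
#include <sched.h>

/* Hypothetical per-thread word shared with the kernel, rseq-style.
 * In the real RFC this lives in the registered rseq area. */
struct thread_hint {
	_Atomic unsigned int ext_slice;
};

#define EXT_SLICE_REQUEST 0x1	/* userspace: "in a short lock hold" */
#define EXT_SLICE_YIELD   0x2	/* kernel: "extension granted, yield ASAP" */

static __thread struct thread_hint hint;

static inline void crit_enter(void)
{
	/* Ask the scheduler to defer preemption briefly. */
	atomic_store_explicit(&hint.ext_slice, EXT_SLICE_REQUEST,
			      memory_order_relaxed);
}

static inline void crit_exit(void)
{
	/* Clear the request and observe whether the kernel used it. */
	unsigned int old = atomic_exchange_explicit(&hint.ext_slice, 0,
						    memory_order_relaxed);
	/* If we were granted extra time, give the CPU back promptly;
	 * otherwise a misbehaving task would consume the whole window. */
	if (old & EXT_SLICE_YIELD)
		sched_yield();
}
```

The point of contention above maps onto this sketch directly: the kernel
must bound how long EXT_SLICE_REQUEST is honoured, and that bound is the
"stupid number" being argued over relative to each preemption model's
existing worst-case scheduling latency.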