From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 10 Jan 2025 09:31:02 +0100
From: Peter Zijlstra
To: Changwoo Min
Cc: tj@kernel.org, void@manifault.com, arighi@nvidia.com, mingo@redhat.com,
	kernel-dev@igalia.com, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v8 2/6] sched_ext: Implement scx_bpf_now()
Message-ID: <20250110083102.GA4213@noisy.programming.kicks-ass.net>
References: <20250109131456.7055-1-changwoo@igalia.com>
	<20250109131456.7055-3-changwoo@igalia.com>
In-Reply-To: <20250109131456.7055-3-changwoo@igalia.com>

On Thu, Jan 09, 2025 at 10:14:52PM +0900, Changwoo Min wrote:
> Returns a high-performance monotonically non-decreasing clock for the
> current CPU. The clock returned is in nanoseconds.
>
> It provides the following properties:
>
> 1) High performance: Many BPF schedulers call bpf_ktime_get_ns()
>    frequently to account for execution time and track tasks' runtime
>    properties. Unfortunately, on some hardware platforms,
>    bpf_ktime_get_ns() -- which eventually reads a hardware timestamp
>    counter -- is neither performant nor scalable.
>    scx_bpf_now() aims to provide a high-performance clock by using the
>    rq clock in the scheduler core whenever possible.
>
> 2) High enough resolution for the BPF scheduler use cases: In most BPF
>    scheduler use cases, the required clock resolution is lower than
>    that of the most accurate hardware clock (e.g., rdtsc on x86).
>    scx_bpf_now() basically uses the rq clock in the scheduler core
>    whenever it is valid. It considers the rq clock valid from the time
>    it is updated (update_rq_clock) until the rq is unlocked
>    (rq_unpin_lock).
>
> 3) Monotonically non-decreasing clock for the same CPU: scx_bpf_now()
>    guarantees the clock never goes backward when comparing values read
>    on the same CPU. On the other hand, when comparing clocks across
>    different CPUs, there is no such guarantee -- the clock can go
>    backward. Because the clock is monotonically *non-decreasing*, two
>    scx_bpf_now() calls on the same CPU within the same period of rq
>    clock validity return the same clock value.
>
> An rq clock becomes valid when it is updated using update_rq_clock()
> and invalidated when the rq is unlocked using rq_unpin_lock().
>
> Let's suppose the following timeline in the scheduler core:
>
>	T1. rq_lock(rq)
>	T2. update_rq_clock(rq)
>	T3. a sched_ext BPF operation
>	T4. rq_unlock(rq)
>	T5. a sched_ext BPF operation
>	T6. rq_lock(rq)
>	T7. update_rq_clock(rq)
>
> For [T2, T4), the rq clock is considered valid (SCX_RQ_CLK_VALID is
> set), so scx_bpf_now() calls during [T2, T4) (including T3) will return
> the rq clock updated at T2. For the duration [T4, T7), when a BPF
> scheduler can still call scx_bpf_now() (T5), the rq clock is considered
> invalid (SCX_RQ_CLK_VALID is cleared at T4), so a scx_bpf_now() call at
> T5 returns a fresh clock value by calling sched_clock_cpu() internally.
> Also, to prevent a BPF scheduler from getting outdated rq clocks left
> over from a previous scx scheduler, all rq clocks are invalidated when
> a BPF scheduler is unloaded.
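For readers following along, the validity window above can be modeled roughly
as below. This is a simplified userspace sketch, not the kernel
implementation; all names ending in _model, and fake_sched_clock, are
hypothetical stand-ins for update_rq_clock(), rq_unpin_lock()/rq_unlock(),
sched_clock_cpu(), and scx_bpf_now():

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical model of one CPU's runqueue clock state. */
struct model_rq {
	uint64_t clock;     /* cached value from the last update_rq_clock() */
	bool     clk_valid; /* models the SCX_RQ_CLK_VALID flag             */
	uint64_t prev_now;  /* last value handed out on this CPU            */
};

static uint64_t fake_sched_clock;  /* stand-in for sched_clock_cpu() */

static uint64_t sched_clock_cpu_model(void)
{
	return fake_sched_clock;
}

/* T2/T7: update_rq_clock() refreshes the cached clock and marks it valid. */
static void update_rq_clock_model(struct model_rq *rq)
{
	rq->clock = sched_clock_cpu_model();
	rq->clk_valid = true;
}

/* T4: unlocking the rq invalidates the cached clock. */
static void rq_unlock_model(struct model_rq *rq)
{
	rq->clk_valid = false;
}

/*
 * scx_bpf_now() model: use the cached rq clock while it is valid,
 * otherwise fall back to a fresh sched_clock_cpu() read; clamp so the
 * returned value never goes backward on the same CPU.
 */
static uint64_t scx_bpf_now_model(struct model_rq *rq)
{
	uint64_t now = rq->clk_valid ? rq->clock : sched_clock_cpu_model();

	if (now < rq->prev_now)
		now = rq->prev_now;
	rq->prev_now = now;
	return now;
}
```

Under this model, calls inside [T2, T4) repeatedly return the T2 value, a call
at T5 takes the fresh-read path, and the clamp gives the per-CPU non-decreasing
guarantee from property 3.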
>
> One example of calling scx_bpf_now() when the rq clock is invalid
> (like T5) is in scx_central [1]. The scx_central scheduler uses a BPF
> timer for preemptive scheduling: every msec, the timer callback checks
> whether the currently running tasks have exceeded their timeslice. At
> the beginning of the BPF timer callback (central_timerfn in
> scx_central.bpf.c), scx_central reads the current time. When the BPF
> timer callback runs, the rq clock could be invalid, just as at T5. In
> this case, scx_bpf_now() returns a fresh clock value rather than the
> old one (T2).
>
> [1] https://github.com/sched-ext/scx/blob/main/scheds/c/scx_central.bpf.c
>
> Signed-off-by: Changwoo Min

This one looks good, thanks!

Acked-by: Peter Zijlstra (Intel)
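As an aside, the timeslice check that a central_timerfn-style callback
performs with the value it reads at the top can be sketched as below. This is
a plain-C illustration under assumed names (struct task_model, slice_expired,
and the 20ms SLICE_NS are hypothetical, not scx_central's actual code); the
point is only that the comparison works with whatever clock value
scx_bpf_now() hands back, cached or fresh:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define SLICE_NS 20000000ULL /* hypothetical 20ms timeslice */

/* Hypothetical per-task state tracked by the scheduler. */
struct task_model {
	uint64_t started_at; /* clock value when the task went on-CPU */
};

/*
 * Models the per-tick check: given a clock value read at the top of the
 * timer callback (where the rq clock may be invalid, as at T5), decide
 * whether the running task has exceeded its timeslice.
 */
static bool slice_expired(const struct task_model *t, uint64_t now)
{
	return now - t->started_at >= SLICE_NS;
}
```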