From: Leonardo Temperanza
Date: Tue, 21 Jan 2025 16:00:12 +0100
Subject: Re: [PATCH 1/7] sched_ext: Implement event counter infrastructure and add an event
To: Changwoo Min, tj@kernel.org, void@manifault.com, arighi@nvidia.com
Cc: kernel-dev@igalia.com, linux-kernel@vger.kernel.org
Message-ID: <5c1f6f0b-2b9d-4b16-8ec0-e3be876eb260@gmail.com>
In-Reply-To: <20250116151543.80163-2-changwoo@igalia.com>
References: <20250116151543.80163-1-changwoo@igalia.com> <20250116151543.80163-2-changwoo@igalia.com>

Hello,

On 1/16/2025 4:15 PM, Changwoo Min wrote:
> Collect the statistics of specific types of behavior in the sched_ext core,
> which are not easily visible but still interesting to an scx scheduler.
>
> Also, add a core event, SCX_EVENT_INVAL_SELECT_CPU, which represents how
> many times ops.select_cpu() returns a CPU that the task can't use.
>
> Signed-off-by: Changwoo Min
> ---
>  kernel/sched/ext.c | 120 ++++++++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 118 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
> index 0bcdd1a31676..7e12d5b8322e 100644
> --- a/kernel/sched/ext.c
> +++ b/kernel/sched/ext.c
> @@ -1458,6 +1458,66 @@ static struct task_struct *scx_task_iter_next_locked(struct scx_task_iter *iter)
>  	return p;
>  }
>
> +/*
> + * Collection of event counters.
> + */
> +struct scx_event_stat {
> +	/*
> +	 * If ops.select_cpu() returns a CPU which can't be used by the task,
> +	 * the core scheduler code silently picks a fallback CPU.
> +	 */
> +	u64	INVAL_SELECT_CPU;
> +};
> +
> +#define SCX_EVENT_IDX(e)	(offsetof(struct scx_event_stat, e)/sizeof(u64))
> +#define SCX_EVENT_END_IDX()	(sizeof(struct scx_event_stat)/sizeof(u64))
> +#define SCX_EVENT_DEFINE(e)	SCX_EVENT_##e = SCX_EVENT_IDX(e)
> +

The scx_event_stat fields are required to be u64; otherwise the index macros don't work correctly. Perhaps a comment could highlight this "hidden" constraint, along with a BUILD_BUG_ON() that enforces it at build time.

> +/*
> + * Types of event counters.
> + */
> +enum scx_event_kind {
> +	SCX_EVENT_BEGIN = 0,
> +	SCX_EVENT_DEFINE(INVAL_SELECT_CPU),
> +	SCX_EVENT_END = SCX_EVENT_END_IDX(),
> +};
> +
> +static const char *scx_event_stat_str[] = {
> +	[SCX_EVENT_INVAL_SELECT_CPU] = "invalid_select_cpu",
> +};
> +

If an event is added to scx_event_kind but not to scx_event_stat_str, the array can be accessed out of bounds (or yield a NULL string, depending on which index is missing). The GNU C range-initializer extension could be used to initialize all elements to the empty string "", like this:

static const char *scx_event_stat_str[SCX_EVENT_END] = {
	[0 ... SCX_EVENT_END - 1] = "",
	[SCX_EVENT_INVAL_SELECT_CPU] = "invalid_select_cpu",
};

Alternatively, a helper could be used:

static const char *scx_event_name(enum scx_event_kind kind)
{
	return scx_event_stat_str[kind] ? : "";
}

> +/*
> + * The event counter is organized by a per-CPU variable to minimize the
> + * accounting overhead without synchronization. A system-wide view on the
> + * event counter is constructed when requested by scx_bpf_get_event_stat().
> + */
> +static DEFINE_PER_CPU(struct scx_event_stat, event_stats);
> +
> +/**
> + * scx_add_event - Increase an event counter for 'name' by 'cnt'
> + * @name: an event name defined in struct scx_event_stat
> + * @cnt: the number of the event occured
> + */
> +#define scx_add_event(name, cnt) ({ \
> +	struct scx_event_stat *__e; \
> +	__e = get_cpu_ptr(&event_stats); \
> +	WRITE_ONCE(__e->name, __e->name+ (cnt)); \
> +	put_cpu_ptr(&event_stats); \
> +})
> +
> +
> +/**
> + * scx_read_event_kind - Read an event from 'e' with 'kind'
> + * @e: a pointer to an event collected by scx_bpf_event_stat()
> + * @kine: an event type defined in scx_event_kind
> + */
> +#define scx_read_event_kind(e, kind) ({ \
> +	u64 *__e64 = (u64 *)(e); \
> +	__e64[kind]; \
> +})
> +

nit: typo, "@kine" instead of "@kind"

> +static void scx_bpf_event_stat(struct scx_event_stat *event, size_t event__sz);
> +
>  static enum scx_ops_enable_state scx_ops_enable_state(void)
>  {
>  	return atomic_read(&scx_ops_enable_state_var);
> @@ -3607,8 +3667,10 @@ static int select_task_rq_scx(struct task_struct *p, int prev_cpu, int wake_flag
>  		*ddsp_taskp = NULL;
>  		if (ops_cpu_valid(cpu, "from ops.select_cpu()"))
>  			return cpu;
> -		else
> +		else {
> +			scx_add_event(INVAL_SELECT_CPU, 1);
>  			return prev_cpu;
> +		}
>  	} else {
>  		bool found;
>  		s32 cpu;
> @@ -5053,6 +5115,15 @@ static void scx_ops_disable_workfn(struct kthread_work *work)
>  		scx_rq_clock_invalidate(rq);
>  	}
>
> +	/*
> +	 * Clear event counters so the next scx scheduler always gets
> +	 * fresh event counter values.
> +	 */
> +	for_each_possible_cpu(cpu) {
> +		struct scx_event_stat *e = per_cpu_ptr(&event_stats, cpu);
> +		memset(e, 0, sizeof(*e));
> +	}
> +
>  	/* no task is on scx, turn off all the switches and flush in-progress calls */
>  	static_branch_disable(&__scx_ops_enabled);
>  	for (i = SCX_OPI_BEGIN; i < SCX_OPI_END; i++)
> @@ -5309,9 +5380,10 @@ static void scx_dump_state(struct scx_exit_info *ei, size_t dump_len)
>  		.at_jiffies = jiffies,
>  	};
>  	struct seq_buf s;
> +	struct scx_event_stat event;
>  	unsigned long flags;
>  	char *buf;
> -	int cpu;
> +	int cpu, kind;
>
>  	spin_lock_irqsave(&dump_lock, flags);
>
> @@ -5417,6 +5489,16 @@ static void scx_dump_state(struct scx_exit_info *ei, size_t dump_len)
>  		rq_unlock(rq, &rf);
>  	}
>
> +	dump_newline(&s);
> +	dump_line(&s, "Event counters");
> +	dump_line(&s, "--------------");
> +
> +	scx_bpf_event_stat(&event, sizeof(event));
> +	for (kind = SCX_EVENT_BEGIN; kind < SCX_EVENT_END; kind++) {
> +		dump_line(&s, "%25s : %llu", scx_event_stat_str[kind],
> +			  scx_read_event_kind(&event, kind));
> +	}
> +
>  	if (seq_buf_has_overflowed(&s) && dump_len >= sizeof(trunc_marker))
>  		memcpy(ei->dump + dump_len - sizeof(trunc_marker),
>  		       trunc_marker, sizeof(trunc_marker));
> @@ -7720,6 +7802,39 @@ __bpf_kfunc u64 scx_bpf_now(void)
>  	return clock;
>  }
>
> +/*
> + * scx_bpf_event_stat - Get a system-wide event counter to
> + * @event: output buffer from a BPF program
> + * @event__sz: @event len, must end in '__sz'' for the verifier
> + */
> +__bpf_kfunc void scx_bpf_event_stat(struct scx_event_stat *event,
> +				    size_t event__sz)
> +{
> +	struct scx_event_stat *e;
> +	u64 *event64, *e64;
> +	int cpu, kind, event_end;
> +
> +	/*
> +	 * We cannot entirely trust a BPF-provided size since a BPF program
> +	 * might be compiled against a different vmlinux.h, of which
> +	 * scx_event_stat would be larger (a newer vmlinux.h) or smaller
> +	 * (an older vmlinux.h). Hence, we use the smaller size to avoid
> +	 * memory corruption.
> +	 */
> +	event__sz = min(event__sz, sizeof(*event));
> +	event_end = event__sz / sizeof(u64);
> +
> +	event64 = (u64 *)event;
> +	memset(event, 0, event__sz);
> +	for_each_possible_cpu(cpu) {
> +		e = per_cpu_ptr(&event_stats, cpu);
> +		e64 = (u64 *)e;
> +		for (kind = 0; kind < event_end; kind++) {
> +			event64[kind] += READ_ONCE(e64[kind]);
> +		}
> +	}
> +}
> +
>  __bpf_kfunc_end_defs();
>
>  BTF_KFUNCS_START(scx_kfunc_ids_any)
> @@ -7752,6 +7867,7 @@ BTF_ID_FLAGS(func, scx_bpf_cpu_rq)
>  BTF_ID_FLAGS(func, scx_bpf_task_cgroup, KF_RCU | KF_ACQUIRE)
>  #endif
>  BTF_ID_FLAGS(func, scx_bpf_now)
> +BTF_ID_FLAGS(func, scx_bpf_event_stat, KF_TRUSTED_ARGS)
>  BTF_KFUNCS_END(scx_kfunc_ids_any)
>
>  static const struct btf_kfunc_id_set scx_kfunc_set_any = {