public inbox for netdev@vger.kernel.org
From: Waiman Long <longman@redhat.com>
To: Frederic Weisbecker <frederic@kernel.org>
Cc: Simon Horman <horms@verge.net.au>, Julian Anastasov <ja@ssi.bg>,
	"David S. Miller" <davem@davemloft.net>,
	David Ahern <dsahern@kernel.org>,
	Eric Dumazet <edumazet@google.com>,
	Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
	Pablo Neira Ayuso <pablo@netfilter.org>,
	Florian Westphal <fw@strlen.de>, Phil Sutter <phil@nwl.cc>,
	Chen Ridong <chenridong@huawei.com>, Phil Auld <pauld@redhat.com>,
	linux-kernel@vger.kernel.org, netdev@vger.kernel.org,
	lvs-devel@vger.kernel.org, netfilter-devel@vger.kernel.org,
	coreteam@netfilter.org, sheviks <sheviks@gmail.com>
Subject: Re: [PATCH-next v2 2/2] ipvs: Guard access of HK_TYPE_KTHREAD cpumask with RCU
Date: Wed, 1 Apr 2026 11:13:11 -0400	[thread overview]
Message-ID: <81fb8d07-01c2-4fde-930a-03ea72b30978@redhat.com> (raw)
In-Reply-To: <ac0VfO3XiD_F1gv-@localhost.localdomain>

On 4/1/26 8:54 AM, Frederic Weisbecker wrote:
> On Tue, Mar 31, 2026 at 12:50:15PM -0400, Waiman Long wrote:
>> The ip_vs_ctl.c file and the associated ip_vs.h file are the only
>> places in the kernel where the HK_TYPE_KTHREAD cpumask is retrieved
>> and used. Now that the HK_TYPE_KTHREAD/HK_TYPE_DOMAIN cpumask can be
>> changed at run time, we need to use RCU to guard access to this
>> cpumask to avoid a potential UAF problem, as the returned cpumask may
>> be freed before it is used.
>>
>> We could replace HK_TYPE_KTHREAD with HK_TYPE_DOMAIN, as they are
>> aliases of each other, but keeping the HK_TYPE_KTHREAD name highlights
>> the fact that it is the kthreads initiated by ipvs that are being
>> controlled.
>>
>> Signed-off-by: Waiman Long <longman@redhat.com>
> Oh I see you're handling a few concerns here. But it's too late, the previous
> patch broke bisection.
Good point. So I will have to either reverse the patch order, or just 
change HK_TYPE_KTHREAD to HK_TYPE_DOMAIN here and drop the first patch.
>
>> ---
>>   include/net/ip_vs.h            | 20 ++++++++++++++++----
>>   net/netfilter/ipvs/ip_vs_ctl.c | 13 ++++++++-----
>>   2 files changed, 24 insertions(+), 9 deletions(-)
>>
>> diff --git a/include/net/ip_vs.h b/include/net/ip_vs.h
>> index 72d325c81313..7bda92fd3fe6 100644
>> --- a/include/net/ip_vs.h
>> +++ b/include/net/ip_vs.h
>> @@ -1411,7 +1411,7 @@ static inline int sysctl_run_estimation(struct netns_ipvs *ipvs)
>>   	return ipvs->sysctl_run_estimation;
>>   }
>>   
>> -static inline const struct cpumask *sysctl_est_cpulist(struct netns_ipvs *ipvs)
>> +static inline const struct cpumask *__sysctl_est_cpulist(struct netns_ipvs *ipvs)
>>   {
>>   	if (ipvs->est_cpulist_valid)
>>   		return ipvs->sysctl_est_cpulist;
>> @@ -1529,7 +1529,7 @@ static inline int sysctl_run_estimation(struct netns_ipvs *ipvs)
>>   	return 1;
>>   }
>>   
>> -static inline const struct cpumask *sysctl_est_cpulist(struct netns_ipvs *ipvs)
>> +static inline const struct cpumask *__sysctl_est_cpulist(struct netns_ipvs *ipvs)
>>   {
>>   	return housekeeping_cpumask(HK_TYPE_KTHREAD);
>>   }
>> @@ -1564,6 +1564,18 @@ static inline int sysctl_svc_lfactor(struct netns_ipvs *ipvs)
>>   	return READ_ONCE(ipvs->sysctl_svc_lfactor);
>>   }
>>   
>> +static inline bool sysctl_est_cpulist_empty(struct netns_ipvs *ipvs)
>> +{
>> +	guard(rcu)();
>> +	return cpumask_empty(__sysctl_est_cpulist(ipvs));
>> +}
>> +
>> +static inline unsigned int sysctl_est_cpulist_weight(struct netns_ipvs *ipvs)
>> +{
>> +	guard(rcu)();
>> +	return cpumask_weight(__sysctl_est_cpulist(ipvs));
>> +}
>> +
>>   /* IPVS core functions
>>    * (from ip_vs_core.c)
>>    */
>> @@ -1895,7 +1907,7 @@ static inline void ip_vs_est_stopped_recalc(struct netns_ipvs *ipvs)
>>   	/* Stop tasks while cpulist is empty or if disabled with flag */
>>   	ipvs->est_stopped = !sysctl_run_estimation(ipvs) ||
>>   			    (ipvs->est_cpulist_valid &&
>> -			     cpumask_empty(sysctl_est_cpulist(ipvs)));
>> +			     sysctl_est_cpulist_empty(ipvs));
> It's not needed, if ipvs->est_cpulist_valid, sysctl_est_cpulist() doesn't
> refer to housekeeping.
Right.
>>   #endif
>>   }
>>   
>> @@ -1911,7 +1923,7 @@ static inline bool ip_vs_est_stopped(struct netns_ipvs *ipvs)
>>   static inline int ip_vs_est_max_threads(struct netns_ipvs *ipvs)
>>   {
>>   	unsigned int limit = IPVS_EST_CPU_KTHREADS *
>> -			     cpumask_weight(sysctl_est_cpulist(ipvs));
>> +			     sysctl_est_cpulist_weight(ipvs);
> That probably works for callers ip_vs_start_estimator().
>
> But this is not handling the core issue that related kthreads should be updated,
> as is done in ipvs_proc_est_cpumask_set(), when HK_TYPE_DOMAIN mask changes.

If ipvs_proc_est_cpumask_set() has been called, the real affinity 
should be the intersection of the given cpumask and the HK_TYPE_DOMAIN 
cpumask. Is that what you are referring to?

Cheers,
Longman

>
>>   
>>   	return max(1U, limit);
>>   }
>> diff --git a/net/netfilter/ipvs/ip_vs_ctl.c b/net/netfilter/ipvs/ip_vs_ctl.c
>> index 032425025d88..e253a1ceef48 100644
>> --- a/net/netfilter/ipvs/ip_vs_ctl.c
>> +++ b/net/netfilter/ipvs/ip_vs_ctl.c
>> @@ -2338,11 +2338,14 @@ static int ipvs_proc_est_cpumask_get(const struct ctl_table *table,
>>   
>>   	mutex_lock(&ipvs->est_mutex);
>>   
>> -	if (ipvs->est_cpulist_valid)
>> -		mask = *valp;
>> -	else
>> -		mask = (struct cpumask *)housekeeping_cpumask(HK_TYPE_KTHREAD);
>> -	ret = scnprintf(buffer, size, "%*pbl\n", cpumask_pr_args(mask));
>> +	/* HK_TYPE_KTHREAD cpumask needs RCU protection */
>> +	scoped_guard(rcu) {
>> +		if (ipvs->est_cpulist_valid)
>> +			mask = *valp;
>> +		else
>> +			mask = (struct cpumask *)housekeeping_cpumask(HK_TYPE_KTHREAD);
>> +		ret = scnprintf(buffer, size, "%*pbl\n", cpumask_pr_args(mask));
>> +	}
> And that works.
>
> Thanks.
>
>>   
>>   	mutex_unlock(&ipvs->est_mutex);
>>   
>> -- 
>> 2.53.0
>>


Thread overview: 9+ messages
2026-03-31 16:50 [PATCH-next v2 0/2] ipvs: Fix incorrect use of HK_TYPE_KTHREAD housekeeping cpumask Waiman Long
2026-03-31 16:50 ` [PATCH-next v2 1/2] sched/isolation: Make HK_TYPE_KTHREAD an alias of HK_TYPE_DOMAIN Waiman Long
2026-04-01 12:43   ` Frederic Weisbecker
2026-03-31 16:50 ` [PATCH-next v2 2/2] ipvs: Guard access of HK_TYPE_KTHREAD cpumask with RCU Waiman Long
2026-04-01 12:54   ` Frederic Weisbecker
2026-04-01 15:13     ` Waiman Long [this message]
2026-04-03 14:15 ` [PATCH-next v2 0/2] ipvs: Fix incorrect use of HK_TYPE_KTHREAD housekeeping cpumask Julian Anastasov
2026-04-03 14:29   ` Pablo Neira Ayuso
2026-04-03 15:00     ` Julian Anastasov
