From: Jiri Wiesner <jwiesner@suse.de>
To: Julian Anastasov <ja@ssi.bg>
Cc: Simon Horman <horms@verge.net.au>,
	lvs-devel@vger.kernel.org,
	yunhong-cgl jiang <xintian1976@gmail.com>,
	dust.li@linux.alibaba.com
Subject: Re: [RFC PATCHv6 4/7] ipvs: use kthreads for stats estimation
Date: Fri, 11 Nov 2022 18:21:36 +0100	[thread overview]
Message-ID: <20221111172136.GE3484@incl> (raw)
In-Reply-To: <ff8bf15e-ddf3-76d1-b23b-814133ae5b@ssi.bg>

On Thu, Nov 10, 2022 at 10:16:24PM +0200, Julian Anastasov wrote:
> > AMD EPYC 7601 32-Core Processor
> > 128 CPUs, 8 NUMA nodes
> > Zen 1 machines such as this one have a large number of NUMA nodes because each socket is built from four separate dies, each with its own memory controller. First, tests with different governors:
> > > cpupower frequency-set -g ondemand
> > > [  653.441325] IPVS: starting estimator thread 0...
> > > [  653.514918] IPVS: calc: chain_max=8, single est=11171ns, diff=11301, loops=1, ntest=12
> > > [  653.523580] IPVS: dequeue: 892ns
> > > [  653.527528] IPVS: using max 384 ests per chain, 19200 per kthread
> > > [  655.349916] IPVS: tick time: 3059313ns for 128 CPUs, 384 ests, 1 chains, chain_max=384
> > > [  685.230016] IPVS: starting estimator thread 1...
> > > [  717.110852] IPVS: starting estimator thread 2...
> > > [  719.349755] IPVS: tick time: 2896668ns for 128 CPUs, 384 ests, 1 chains, chain_max=384
> > > [  750.349974] IPVS: starting estimator thread 3...
> > > [  783.349841] IPVS: tick time: 2942604ns for 128 CPUs, 384 ests, 1 chains, chain_max=384
> > > [  847.349811] IPVS: tick time: 2930872ns for 128 CPUs, 384 ests, 1 chains, chain_max=384
> 
> 	Looks like a cache_factor of 4 is good both for
> ondemand, which prefers cache_factor 3 (2.9->4ms), and for
> performance, which prefers cache_factor 5 (5.6->4.3ms):
> 
> gov/cache_factor	chain_max	tick time (goal 4.8ms)
> ondemand/4		8		2.9ms
> ondemand/3		11		4ms
> performance/4		22		5.6ms
> performance/5		17		4.3ms

Yes, a cache factor of 4 happens to be a good compromise on this particular Zen 1 machine.
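
For the record, my reading of the calc phase from the debug output: it
times one estimator under simulated cache pressure and derives chain_max
from a fixed per-chain time budget. Below is a minimal userspace sketch
of that idea - the estimate_one() stub, the now_ns() helper and the
100us budget are my assumptions for illustration, not the patch code,
which runs in a kthread and, as far as I can tell, also waits out
cpufreq frequency transitions between test batches (which would explain
why slowly ramping governors now give stable results):

/* calc_sketch.c - illustrative only, not the kernel implementation.
 * Build: gcc -O2 calc_sketch.c -o calc_sketch
 */
#include <stdio.h>
#include <time.h>

#define NTEST		12		/* matches "ntest=12" in the log */
#define CACHE_FACTOR	4
#define CHAIN_BUDGET_NS	100000LL	/* assumed per-chain time budget */

/* Stand-in for one stats estimation pass over the per-CPU counters */
static void estimate_one(void)
{
	static volatile long sink;
	int i;

	for (i = 0; i < 1000; i++)
		sink += i;
}

static long long now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

int main(void)
{
	long long t1, diff, best = 0;
	int i, ntest, loops = 1;

	for (ntest = 0; ntest < NTEST; ntest++) {
		estimate_one();			/* warm the cache first */
		t1 = now_ns();
		/* The CACHE_FACTOR extra passes simulate the cache
		 * pressure of walking a long chain of estimators, so
		 * the measured per-pass cost is deliberately inflated.
		 */
		for (i = loops * CACHE_FACTOR; i > 0; i--)
			estimate_one();
		diff = (now_ns() - t1) / loops;
		if (diff < 1000) {		/* clock too coarse: retry */
			loops *= 2;
			continue;
		}
		if (!best || diff < best)	/* keep the best of NTEST */
			best = diff;
	}
	if (!best)
		return 1;
	/* e.g. best=11171ns on this machine would give chain_max=8 */
	printf("single est=%lldns, chain_max=%lld\n",
	       best, CHAIN_BUDGET_NS / best);
	return 0;
}

The logged numbers fit this reading: 100000/11171 gives chain_max=8, the
per-tick totals look like chain_max times 48 chains (384 = 8 * 48 here,
1056 = 22 * 48 in the performance runs), and 19200 per kthread suggests
50 ticks per 2s interval - i.e. the 4.8ms goal would be 12% of a 40ms
tick.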

> > > [ 1578.032593] IPVS: tick time: 5691875ns for 128 CPUs, 1056 ests, 1 chains, chain_max=1056
> > >    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
> > >  42514 root      20   0       0      0      0 I 14.24 0.000   0:14.96 ipvs-e:0:0
> > >  95356 root      20   0       0      0      0 I 1.987 0.000   0:01.34 ipvs-e:0:1
> > With the services still loaded, I switched to the ondemand governor:
> > > [ 1706.032577] IPVS: tick time: 5666868ns for 128 CPUs, 1056 ests, 1 chains, chain_max=1056
> > > [ 1770.032534] IPVS: tick time: 5638505ns for 128 CPUs, 1056 ests, 1 chains, chain_max=1056
> 
> 	Hm, the ondemand governor takes 5.6ms, just like
> the performance result above? This is probably still
> performance mode?

I am not sure if I copied the right messages from the log. Probably not.
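
The per-estimation cost implied by the tick time lines points the same
way - it is essentially unchanged across the switch, while genuine
ondemand at boot was noticeably slower per estimation:

  5691875ns / 1056 ests = ~5390ns/est  (performance)
  5666868ns / 1056 ests = ~5366ns/est  (after the switch to ondemand)
  3059313ns /  384 ests = ~7967ns/est  (ondemand at boot, for comparison)

So the frequency most likely never dropped for those two lines.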

> > Basically, the chain_max calculation under governors that ramp up CPU frequency more slowly (ondemand on AMD, or powersave with intel_pstate) is more stable than before on both AMD and Intel. We know from previous results that even ARM with multiple NUMA nodes is not a complete disaster. Switching CPU frequency governors, including the unfavourable switches from performance to ondemand, does not saturate the CPUs. When it comes to CPU frequency governors, people tend to use either ondemand (or powersave for intel_pstate) or performance consistently - switches between governors can be expected to be rare in production.
> > I will need to find time to read through the latest version of the patch set.
> 
> 	OK. Thank you for testing the different cases!
> Let me know if any changes are needed before releasing
> the patchset. We can even include some testing results
> in the commit messages.

Absolutely.

-- 
Jiri Wiesner
SUSE Labs
