* [PATCH net-next v6 1/1] net:openvswitch:reduce cpu_used_mask memory
From: Eddy Tao @ 2023-02-03  9:51 UTC (permalink / raw)
To: netdev
Cc: Eddy Tao, Pravin B Shelar, David S. Miller, Eric Dumazet,
    Jakub Kicinski, Paolo Abeni, dev, linux-kernel

Use the actual CPU number instead of a hardcoded value to decide the size
of 'cpu_used_mask' in 'struct sw_flow'. Below is the reason.

'struct cpumask cpu_used_mask' is embedded in struct sw_flow.
Its size is hardcoded to CONFIG_NR_CPUS bits, which can be
8192 by default; it costs memory and slows down ovs_flow_alloc.

To address this, redefine cpu_used_mask as a pointer and
append cpumask_size() bytes after 'stats' to hold the cpumask.

cpumask APIs like cpumask_next and cpumask_set_cpu never access
bits beyond the cpu count, so cpumask_size() bytes of memory is enough.

Signed-off-by: Eddy Tao <taoyuan_eddy@hotmail.com>
---
 net/openvswitch/flow.c       | 9 ++++++---
 net/openvswitch/flow.h       | 2 +-
 net/openvswitch/flow_table.c | 8 +++++---
 3 files changed, 12 insertions(+), 7 deletions(-)

diff --git a/net/openvswitch/flow.c b/net/openvswitch/flow.c
index e20d1a973417..416976f70322 100644
--- a/net/openvswitch/flow.c
+++ b/net/openvswitch/flow.c
@@ -107,7 +107,8 @@ void ovs_flow_stats_update(struct sw_flow *flow, __be16 tcp_flags,
 
 			rcu_assign_pointer(flow->stats[cpu],
 					   new_stats);
-			cpumask_set_cpu(cpu, &flow->cpu_used_mask);
+			cpumask_set_cpu(cpu,
+					flow->cpu_used_mask);
 			goto unlock;
 		}
 	}
@@ -135,7 +136,8 @@ void ovs_flow_stats_get(const struct sw_flow *flow,
 	memset(ovs_stats, 0, sizeof(*ovs_stats));
 
 	/* We open code this to make sure cpu 0 is always considered */
-	for (cpu = 0; cpu < nr_cpu_ids; cpu = cpumask_next(cpu, &flow->cpu_used_mask)) {
+	for (cpu = 0; cpu < nr_cpu_ids;
+	     cpu = cpumask_next(cpu, flow->cpu_used_mask)) {
 		struct sw_flow_stats *stats = rcu_dereference_ovsl(flow->stats[cpu]);
 
 		if (stats) {
@@ -159,7 +161,8 @@ void ovs_flow_stats_clear(struct sw_flow *flow)
 	int cpu;
 
 	/* We open code this to make sure cpu 0 is always considered */
-	for (cpu = 0; cpu < nr_cpu_ids; cpu = cpumask_next(cpu, &flow->cpu_used_mask)) {
+	for (cpu = 0; cpu < nr_cpu_ids;
+	     cpu = cpumask_next(cpu, flow->cpu_used_mask)) {
 		struct sw_flow_stats *stats = ovsl_dereference(flow->stats[cpu]);
 
 		if (stats) {
diff --git a/net/openvswitch/flow.h b/net/openvswitch/flow.h
index 073ab73ffeaa..b5711aff6e76 100644
--- a/net/openvswitch/flow.h
+++ b/net/openvswitch/flow.h
@@ -229,7 +229,7 @@ struct sw_flow {
 	 */
 	struct sw_flow_key key;
 	struct sw_flow_id id;
-	struct cpumask cpu_used_mask;
+	struct cpumask *cpu_used_mask;
 	struct sw_flow_mask *mask;
 	struct sw_flow_actions __rcu *sf_acts;
 	struct sw_flow_stats __rcu *stats[]; /* One for each CPU.  First one
diff --git a/net/openvswitch/flow_table.c b/net/openvswitch/flow_table.c
index 0a0e4c283f02..dc6a174c3194 100644
--- a/net/openvswitch/flow_table.c
+++ b/net/openvswitch/flow_table.c
@@ -87,11 +87,12 @@ struct sw_flow *ovs_flow_alloc(void)
 	if (!stats)
 		goto err;
 
+	flow->cpu_used_mask = (struct cpumask *)&flow->stats[nr_cpu_ids];
 	spin_lock_init(&stats->lock);
 
 	RCU_INIT_POINTER(flow->stats[0], stats);
 
-	cpumask_set_cpu(0, &flow->cpu_used_mask);
+	cpumask_set_cpu(0, flow->cpu_used_mask);
 
 	return flow;
 err:
@@ -115,7 +116,7 @@ static void flow_free(struct sw_flow *flow)
 					  flow->sf_acts);
 	/* We open code this to make sure cpu 0 is always considered */
 	for (cpu = 0; cpu < nr_cpu_ids;
-	     cpu = cpumask_next(cpu, &flow->cpu_used_mask)) {
+	     cpu = cpumask_next(cpu, flow->cpu_used_mask)) {
 		if (flow->stats[cpu])
 			kmem_cache_free(flow_stats_cache,
 					(struct sw_flow_stats __force *)flow->stats[cpu]);
@@ -1196,7 +1197,8 @@ int ovs_flow_init(void)
 
 	flow_cache = kmem_cache_create("sw_flow", sizeof(struct sw_flow)
 				       + (nr_cpu_ids
					  * sizeof(struct sw_flow_stats *)),
+					  * sizeof(struct sw_flow_stats *))
+				       + cpumask_size(),
 				       0, 0, NULL);
 	if (flow_cache == NULL)
 		return -ENOMEM;
-- 
2.27.0
* Re: [PATCH net-next v6 1/1] net:openvswitch:reduce cpu_used_mask memory
From: Eelco Chaudron @ 2023-02-03 11:23 UTC (permalink / raw)
To: Eddy Tao
Cc: netdev, Pravin B Shelar, David S. Miller, Eric Dumazet,
    Jakub Kicinski, Paolo Abeni, dev, linux-kernel

On 3 Feb 2023, at 10:51, Eddy Tao wrote:

> Use actual CPU number instead of hardcoded value to decide the size
> of 'cpu_used_mask' in 'struct sw_flow'. Below is the reason.
>
> 'struct cpumask cpu_used_mask' is embedded in struct sw_flow.
> Its size is hardcoded to CONFIG_NR_CPUS bits, which can be
> 8192 by default, it costs memory and slows down ovs_flow_alloc
>
> To address this, redefine cpu_used_mask to pointer
> append cpumask_size() bytes after 'stat' to hold cpumask
>
> cpumask APIs like cpumask_next and cpumask_set_cpu never access
> bits beyond cpu count, cpumask_size() bytes of memory is enough
>
> Signed-off-by: Eddy Tao <taoyuan_eddy@hotmail.com>

Hi Eddy,

Thanks for this patch, I have one small nit, but the rest looks good.

Acked-by: Eelco Chaudron <echaudro@redhat.com>

[...]

> diff --git a/net/openvswitch/flow_table.c b/net/openvswitch/flow_table.c
> index 0a0e4c283f02..dc6a174c3194 100644
> --- a/net/openvswitch/flow_table.c
> +++ b/net/openvswitch/flow_table.c
> @@ -87,11 +87,12 @@ struct sw_flow *ovs_flow_alloc(void)
> 	if (!stats)
> 		goto err;
>
> +	flow->cpu_used_mask = (struct cpumask *)&flow->stats[nr_cpu_ids];

nit: I would move this up with the other flow structure initialisation.

diff --git a/net/openvswitch/flow_table.c b/net/openvswitch/flow_table.c
index dc6a174c3194..791504b7f42b 100644
--- a/net/openvswitch/flow_table.c
+++ b/net/openvswitch/flow_table.c
@@ -79,6 +79,7 @@ struct sw_flow *ovs_flow_alloc(void)
 		return ERR_PTR(-ENOMEM);
 
 	flow->stats_last_writer = -1;
+	flow->cpu_used_mask = (struct cpumask *)&flow->stats[nr_cpu_ids];
 
 	/* Initialize the default stat node. */
 	stats = kmem_cache_alloc_node(flow_stats_cache,
@@ -87,7 +88,6 @@ struct sw_flow *ovs_flow_alloc(void)
 	if (!stats)
 		goto err;
 
-	flow->cpu_used_mask = (struct cpumask *)&flow->stats[nr_cpu_ids];
 	spin_lock_init(&stats->lock);

> 	spin_lock_init(&stats->lock);
>
> 	RCU_INIT_POINTER(flow->stats[0], stats);
>
> -	cpumask_set_cpu(0, &flow->cpu_used_mask);
> +	cpumask_set_cpu(0, flow->cpu_used_mask);
>
> 	return flow;
> err:

[...]
* Re: [PATCH net-next v7 1/1] net:openvswitch:reduce cpu_used_mask memory
From: 陶 缘 @ 2023-02-03 15:50 UTC (permalink / raw)
To: Eelco Chaudron
Cc: netdev@vger.kernel.org, Pravin B Shelar, David S. Miller, Eric Dumazet,
    Jakub Kicinski, Paolo Abeni, dev@openvswitch.org, linux-kernel@vger.kernel.org

Change between v7 and v6:
move initialization of cpu_used_mask up to follow stats_last_writer

thanks
eddy
* Re: [PATCH net-next v7 1/1] net:openvswitch:reduce cpu_used_mask memory
From: Jiri Pirko @ 2023-02-03 16:07 UTC (permalink / raw)
To: 陶 缘
Cc: Eelco Chaudron, netdev@vger.kernel.org, Pravin B Shelar, David S. Miller,
    Eric Dumazet, Jakub Kicinski, Paolo Abeni, dev@openvswitch.org,
    linux-kernel@vger.kernel.org

Fri, Feb 03, 2023 at 04:50:31PM CET, taoyuan_eddy@hotmail.com wrote:
>Change between V7 and V6:
>move initialization of cpu_used_mask up to follow stats_last_writer

Okay, please stop sending stuff and begin to read.

>
>thanks
>eddy
* Re: [PATCH net-next v6 1/1] net:openvswitch:reduce cpu_used_mask memory
From: Jiri Pirko @ 2023-02-03 12:00 UTC (permalink / raw)
To: Eddy Tao
Cc: netdev, Pravin B Shelar, David S. Miller, Eric Dumazet,
    Jakub Kicinski, Paolo Abeni, dev, linux-kernel

Fri, Feb 03, 2023 at 10:51:18AM CET, taoyuan_eddy@hotmail.com wrote:
>Use actual CPU number instead of hardcoded value to decide the size
>of 'cpu_used_mask' in 'struct sw_flow'. Below is the reason.
>
>'struct cpumask cpu_used_mask' is embedded in struct sw_flow.
>Its size is hardcoded to CONFIG_NR_CPUS bits, which can be
>8192 by default, it costs memory and slows down ovs_flow_alloc
>
>To address this, redefine cpu_used_mask to pointer
>append cpumask_size() bytes after 'stat' to hold cpumask
>
>cpumask APIs like cpumask_next and cpumask_set_cpu never access
>bits beyond cpu count, cpumask_size() bytes of memory is enough
>
>Signed-off-by: Eddy Tao <taoyuan_eddy@hotmail.com>

Reviewed-by: Jiri Pirko <jiri@nvidia.com>

Sigh, I hope this is the last V, at least until I send this email...
* [PATCH net-next v6 1/1] net:openvswitch:reduce cpu_used_mask memory
From: Eddy Tao @ 2023-02-03  9:51 UTC (permalink / raw)
To: netdev; +Cc: Eddy Tao

Changes based on the initial submission (v1):
1. cleanup comments
2. resolve max-line-length warning in checkpatch
3. revise commit log to use imperative description

Eddy Tao (1):
  net:openvswitch:reduce cpu_used_mask memory

 net/openvswitch/flow.c       | 9 ++++++---
 net/openvswitch/flow.h       | 2 +-
 net/openvswitch/flow_table.c | 8 +++++---
 3 files changed, 12 insertions(+), 7 deletions(-)

-- 
2.27.0