From: Greg KH <gregkh@linuxfoundation.org>
To: "Toke Høiland-Jørgensen" <toke@redhat.com>
Cc: stable@vger.kernel.org, Eric Dumazet <edumazet@google.com>,
	"David S . Miller" <davem@davemloft.net>
Subject: Re: [PATCH 4.4] net: Revert "pkt_sched: fq: use proper locking in fq_dump_stats()"
Date: Tue, 23 Jun 2020 19:44:46 +0200
Message-ID: <20200623174446.GA17865@kroah.com>
In-Reply-To: <20200623150053.272985-1-toke@redhat.com>

On Tue, Jun 23, 2020 at 05:00:53PM +0200, Toke Høiland-Jørgensen wrote:
> This reverts commit 191cf872190de28a92e1bd2b56d8860e37e07443.
> 
> That commit should never have been backported, since it relies on a change
> in locking semantics that was introduced in v4.8 and never backported. On
> v4.4 the caller of fq_dump_stats() already holds the qdisc lock, so the
> backported commit double-locks in sch_fq and causes lockups.
> 
> Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
> ---
>  net/sched/sch_fq.c | 32 ++++++++++++++------------------
>  1 file changed, 14 insertions(+), 18 deletions(-)
> 
> diff --git a/net/sched/sch_fq.c b/net/sched/sch_fq.c
> index f4aa2ab4713a..eb814ffc0902 100644
> --- a/net/sched/sch_fq.c
> +++ b/net/sched/sch_fq.c
> @@ -830,24 +830,20 @@ nla_put_failure:
>  static int fq_dump_stats(struct Qdisc *sch, struct gnet_dump *d)
>  {
>  	struct fq_sched_data *q = qdisc_priv(sch);
> -	struct tc_fq_qd_stats st;
> -
> -	sch_tree_lock(sch);
> -
> -	st.gc_flows		  = q->stat_gc_flows;
> -	st.highprio_packets	  = q->stat_internal_packets;
> -	st.tcp_retrans		  = q->stat_tcp_retrans;
> -	st.throttled		  = q->stat_throttled;
> -	st.flows_plimit		  = q->stat_flows_plimit;
> -	st.pkts_too_long	  = q->stat_pkts_too_long;
> -	st.allocation_errors	  = q->stat_allocation_errors;
> -	st.time_next_delayed_flow = q->time_next_delayed_flow - ktime_get_ns();
> -	st.flows		  = q->flows;
> -	st.inactive_flows	  = q->inactive_flows;
> -	st.throttled_flows	  = q->throttled_flows;
> -	st.pad			  = 0;
> -
> -	sch_tree_unlock(sch);
> +	u64 now = ktime_get_ns();
> +	struct tc_fq_qd_stats st = {
> +		.gc_flows		= q->stat_gc_flows,
> +		.highprio_packets	= q->stat_internal_packets,
> +		.tcp_retrans		= q->stat_tcp_retrans,
> +		.throttled		= q->stat_throttled,
> +		.flows_plimit		= q->stat_flows_plimit,
> +		.pkts_too_long		= q->stat_pkts_too_long,
> +		.allocation_errors	= q->stat_allocation_errors,
> +		.flows			= q->flows,
> +		.inactive_flows		= q->inactive_flows,
> +		.throttled_flows	= q->throttled_flows,
> +		.time_next_delayed_flow	= q->time_next_delayed_flow - now,
> +	};
>  
>  	return gnet_stats_copy_app(d, &st, sizeof(st));
>  }
> -- 
> 2.27.0
> 

Thanks, now applied.

greg k-h
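
For readers less familiar with the pre-4.8 qdisc locking, here is a minimal
userspace sketch (not from the thread, and not kernel code) of the failure
mode the revert fixes: on 4.4 the stats-dump path takes the qdisc root lock
before calling the qdisc's dump_stats hook, so the sch_tree_lock() that the
backported commit added inside fq_dump_stats() spins on a lock the same
context already owns. All names in the sketch (root_lock,
fq_dump_stats_backported, qdisc_dump_path_v4_4) are illustrative stand-ins,
modeled with a plain pthread spinlock rather than the kernel primitives.

#include <pthread.h>
#include <stdio.h>

static pthread_spinlock_t root_lock;	/* stands in for the qdisc root lock */

static void fq_dump_stats_backported(void)
{
	/* The backported commit added a sch_tree_lock() here; on 4.4 the
	 * caller already holds the same lock, so this acquisition spins
	 * forever. */
	pthread_spin_lock(&root_lock);
	/* ... fill in struct tc_fq_qd_stats ... */
	pthread_spin_unlock(&root_lock);
}

static void qdisc_dump_path_v4_4(void)
{
	/* On 4.4 the generic stats-dump code takes the root lock before
	 * invoking the qdisc's ->dump_stats() hook. */
	pthread_spin_lock(&root_lock);
	fq_dump_stats_backported();	/* deadlocks here */
	pthread_spin_unlock(&root_lock);
}

int main(void)
{
	pthread_spin_init(&root_lock, PTHREAD_PROCESS_PRIVATE);
	puts("entering dump path ...");
	qdisc_dump_path_v4_4();		/* never completes */
	puts("not reached");
	return 0;
}

Built with cc -pthread, the program hangs at the inner lock by design, which
mirrors the lockup seen on 4.4. Restoring the lock-free v4.4 version of
fq_dump_stats(), as the revert above does, removes the inner acquisition and
with it the deadlock.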
