From: Jakub Kicinski <kuba@kernel.org>
To: Hangyu Hua <hbh25y@gmail.com>
Cc: davem@davemloft.net, edumazet@google.com, pabeni@redhat.com,
kuniyu@amazon.co.jp, richard_siegfried@systemli.org,
joannelkoong@gmail.com, socketcan@hartkopp.net,
gerrit@erg.abdn.ac.uk, tomasz@grobelny.oswiecenia.net,
dccp@vger.kernel.org, netdev@vger.kernel.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH] dccp: put dccp_qpolicy_full() and dccp_qpolicy_push() in the same lock
Date: Thu, 28 Jul 2022 20:01:39 -0700 [thread overview]
Message-ID: <20220728200139.1e7d9bc6@kernel.org> (raw)
In-Reply-To: <20220727080609.26532-1-hbh25y@gmail.com>
On Wed, 27 Jul 2022 16:06:09 +0800 Hangyu Hua wrote:
> In the case of sk->dccps_qpolicy == DCCPQ_POLICY_PRIO, dccp_qpolicy_full
> will drop a skb when qpolicy is full. And the lock in dccp_sendmsg is
> released before sock_alloc_send_skb and then relocked after
> sock_alloc_send_skb. The following conditions may lead dccp_qpolicy_push
> to add skb to an already full sk_write_queue:
>
> thread1--->lock
> thread1--->dccp_qpolicy_full: queue is full. drop a skb
This line should say "not full"?
> thread1--->unlock
> thread2--->lock
> thread2--->dccp_qpolicy_full: queue is not full. no need to drop.
> thread2--->unlock
> thread1--->lock
> thread1--->dccp_qpolicy_push: add a skb. queue is full.
> thread1--->unlock
> thread2--->lock
> thread2--->dccp_qpolicy_push: add a skb!
> thread2--->unlock
>
> Fix this by moving dccp_qpolicy_full.
>
> Fixes: 871a2c16c21b ("dccp: Policy-based packet dequeueing infrastructure")
This code was added in b1308dc015eb0, AFAICT. Please double check.
> Signed-off-by: Hangyu Hua <hbh25y@gmail.com>
> ---
> net/dccp/proto.c | 10 +++++-----
> 1 file changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/net/dccp/proto.c b/net/dccp/proto.c
> index eb8e128e43e8..1a0193823c82 100644
> --- a/net/dccp/proto.c
> +++ b/net/dccp/proto.c
> @@ -736,11 +736,6 @@ int dccp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
>
> lock_sock(sk);
>
> - if (dccp_qpolicy_full(sk)) {
> - rc = -EAGAIN;
> - goto out_release;
> - }
> -
> timeo = sock_sndtimeo(sk, noblock);
>
> /*
> @@ -773,6 +768,11 @@ int dccp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
> if (rc != 0)
> goto out_discard;
>
> + if (dccp_qpolicy_full(sk)) {
> + rc = -EAGAIN;
> + goto out_discard;
> + }
Shouldn't this be earlier, right after relocking? Why copy the data etc.
if we know the queue is full?
> dccp_qpolicy_push(sk, skb);
> /*
> * The xmit_timer is set if the TX CCID is rate-based and will expire
Thread overview: 4+ messages
2022-07-27 8:06 [PATCH] dccp: put dccp_qpolicy_full() and dccp_qpolicy_push() in the same lock Hangyu Hua
2022-07-29 3:01 ` Jakub Kicinski [this message]
2022-07-29 10:34 ` Hangyu Hua
2022-07-29 15:44 ` Jakub Kicinski