From: Jakub Kicinski <kuba@kernel.org>
To: Allison Henderson <allison.henderson@oracle.com>
Cc: "netdev@vger.kernel.org" <netdev@vger.kernel.org>
Subject: Re: [PATCH 1/6] net/rds: Avoid queuing superfluous send and recv work
Date: Wed, 26 Mar 2025 09:42:45 -0700	[thread overview]
Message-ID: <20250326094245.094cef0d@kernel.org> (raw)
In-Reply-To: <3b02c34d2a15b4529b384ab91b27e5be0f941130.camel@oracle.com>

On Wed, 12 Mar 2025 07:50:11 +0000 Allison Henderson wrote:
> Thread A:                                    Thread B:
> -----------------------------------          -----------------------------------
>                                               Calls rds_sendmsg()
>                                                  Calls rds_send_xmit()
>                                                     Calls rds_cond_queue_send_work()
> Calls rds_send_worker()
>   calls rds_clear_queued_send_work_bit()
>     clears RDS_SEND_WORK_QUEUED in cp->cp_flags
>                                                        checks RDS_SEND_WORK_QUEUED in cp->cp_flags

But if the two threads run in parallel, what prevents this check from
happening entirely before the previous line on the "Thread A" side?

Please take a look at netif_txq_try_stop() for an example of 
a memory-barrier based algo.
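
The rough shape of that pattern is sketched below. This is a from-memory
paraphrase, not the exact macro body from include/net/netdev_queues.h;
descs_available(), txq_update_next_to_clean(), start_thrs and wake_thrs
are made-up names used only for illustration:

  /* producer / xmit path: looks like we are about to run out of room */
  netif_tx_stop_queue(txq);

  /* Make the stop visible before re-reading the consumer's progress;
   * pairs with the barrier on the completion side below.
   */
  smp_mb();

  /* Re-check: the completion path may have freed descriptors between
   * our first check and the stop above.
   */
  if (unlikely(descs_available(txq) >= start_thrs))
          netif_tx_start_queue(txq);

  /* consumer / completion path */
  txq_update_next_to_clean(txq);

  /* Publish the new clean index before looking at the stop bit;
   * pairs with the barrier on the xmit side above.
   */
  smp_mb();

  if (netif_tx_queue_stopped(txq) && descs_available(txq) >= wake_thrs)
          netif_tx_wake_queue(txq);

Each side writes first, then issues a full barrier, then re-reads what
the other side may have written, so at least one of the two re-checks is
guaranteed to see the other side's update.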

>                                                        Queues work on cp->cp_send_w
>     Calls rds_send_xmit()
>        Calls rds_cond_queue_send_work()
>           skips queueing work on cp->cp_send_w
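
Mapped onto the names in the diagram above, the same shape would look
roughly like this. This is illustrative only, not the code in the patch;
RDS_SEND_WORK_QUEUED and the helper names are taken from the diagram:

  /* sender side (rds_cond_queue_send_work()-like): the new message is
   * already on the send queue before we get here; test_and_set_bit()
   * is a fully ordered RMW, so anyone who sees the bit set also sees
   * the message
   */
  if (!test_and_set_bit(RDS_SEND_WORK_QUEUED, &cp->cp_flags))
          queue_delayed_work(rds_wq, &cp->cp_send_w, 0);

  /* worker side (rds_send_worker()-like) */
  clear_bit(RDS_SEND_WORK_QUEUED, &cp->cp_flags);
  /* clear_bit() is unordered on its own; the clear must be visible
   * before we go looking for pending work, otherwise a message that
   * arrives right now is neither seen here nor re-queued by the
   * sender
   */
  smp_mb__after_atomic();
  rds_send_xmit(cp);

The important part is that the worker clears the bit before it looks for
work, with a barrier in between, so anything queued after the clear
re-arms the work item.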


Thread overview: 27+ messages
2025-02-27  4:26 [PATCH 0/6] RDS bug fix collection allison.henderson
2025-02-27  4:26 ` [PATCH 1/6] net/rds: Avoid queuing superfluous send and recv work allison.henderson
2025-03-01  0:19   ` Jakub Kicinski
2025-03-05  0:38     ` Allison Henderson
2025-03-05  0:44       ` Jakub Kicinski
2025-03-06 16:41         ` Allison Henderson
2025-03-06 18:18           ` Jakub Kicinski
2025-03-07 20:28             ` Allison Henderson
2025-03-08  2:53               ` Jakub Kicinski
2025-03-12  7:50                 ` Allison Henderson
2025-03-26 16:42                   ` Jakub Kicinski [this message]
2025-04-02  1:34                     ` Allison Henderson
2025-04-02 16:18                       ` Jakub Kicinski
2025-04-03  1:27                         ` Allison Henderson
2025-02-27  4:26 ` [PATCH 2/6] net/rds: Re-factor and avoid superfluous queuing of reconnect work allison.henderson
2025-02-27  4:26 ` [PATCH 3/6] net/rds: RDS/TCP does not initiate a connection allison.henderson
2025-02-27  4:26 ` [PATCH 4/6] net/rds: No shortcut out of RDS_CONN_ERROR allison.henderson
2025-03-01  0:19   ` Jakub Kicinski
2025-03-05  0:38     ` Allison Henderson
2025-02-27  4:26 ` [PATCH 5/6] net/rds: rds_tcp_accept_one ought to not discard messages allison.henderson
2025-03-01  0:21   ` Jakub Kicinski
2025-03-06 16:41     ` Allison Henderson
2025-03-01 23:22   ` kernel test robot
2025-03-05  0:43     ` Allison Henderson
2025-03-04 10:28   ` Dan Carpenter
2025-03-05  0:39     ` Allison Henderson
2025-02-27  4:26 ` [PATCH 6/6] net/rds: Encode cp_index in TCP source port allison.henderson

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save this message as an mbox file, import it into your mail client,
  and reply-to-all from there

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20250326094245.094cef0d@kernel.org \
    --to=kuba@kernel.org \
    --cc=allison.henderson@oracle.com \
    --cc=netdev@vger.kernel.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

  Be sure your reply has a Subject: header at the top and a blank
  line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).