From: Jakub Kicinski <kuba@kernel.org>
To: Tariq Toukan <tariqt@nvidia.com>
Cc: Boris Pismenny <borisp@nvidia.com>,
	John Fastabend <john.fastabend@gmail.com>,
	"David S . Miller" <davem@davemloft.net>,
	Eric Dumazet <edumazet@google.com>,
	Paolo Abeni <pabeni@redhat.com>, <netdev@vger.kernel.org>,
	Saeed Mahameed <saeedm@nvidia.com>,
	Gal Pressman <galp@nvidia.com>,
	Maxim Mikityanskiy <maximmi@nvidia.com>
Subject: Re: [PATCH net-next V2 2/6] net/tls: Multi-threaded calls to TX tls_dev_del
Date: Wed, 13 Jul 2022 20:10:50 -0700	[thread overview]
Message-ID: <20220713201050.3aab0cb8@kernel.org> (raw)
In-Reply-To: <20220713051603.14014-3-tariqt@nvidia.com>

On Wed, 13 Jul 2022 08:15:59 +0300 Tariq Toukan wrote:
> @@ -99,21 +85,17 @@ static void tls_device_queue_ctx_destruction(struct tls_context *ctx)
>  	bool async_cleanup;
>  
>  	spin_lock_irqsave(&tls_device_lock, flags);
> +	list_del(&ctx->list); /* Remove from tls_device_list / tls_device_down_list */
> +	spin_unlock_irqrestore(&tls_device_lock, flags);
> +
>  	async_cleanup = ctx->netdev && ctx->tx_conf == TLS_HW;
>  	if (async_cleanup) {
> -		list_move_tail(&ctx->list, &tls_device_gc_list);
> +		struct tls_offload_context_tx *offload_ctx = tls_offload_ctx_tx(ctx);
>  
> -		/* schedule_work inside the spinlock
> -		 * to make sure tls_device_down waits for that work.
> -		 */
> -		schedule_work(&tls_device_gc_work);
> +		queue_work(destruct_wq, &offload_ctx->destruct_work);

Doesn't queue_work() need to be under the tls_device_lock?
Otherwise I think there's a race between removing the context from 
the list and the netdev down notifier searching that list and flushing
the wq.
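Something like the following, purely as an untested sketch of what I
mean, keeping the list_del() and the queue_work() in one critical
section so the flush in tls_device_down() can't miss the work item:

```c
	spin_lock_irqsave(&tls_device_lock, flags);
	list_del(&ctx->list);
	async_cleanup = ctx->netdev && ctx->tx_conf == TLS_HW;
	if (async_cleanup) {
		struct tls_offload_context_tx *offload_ctx = tls_offload_ctx_tx(ctx);

		/* queue while still holding tls_device_lock, so a concurrent
		 * tls_device_down() either sees the ctx on the list or sees
		 * the work already queued before it flushes destruct_wq
		 */
		queue_work(destruct_wq, &offload_ctx->destruct_work);
	}
	spin_unlock_irqrestore(&tls_device_lock, flags);

	if (!async_cleanup)
		tls_device_free_ctx(ctx);
```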

>  	} else {
> -		list_del(&ctx->list);
> -	}
> -	spin_unlock_irqrestore(&tls_device_lock, flags);
> -
> -	if (!async_cleanup)
>  		tls_device_free_ctx(ctx);
> +	}
>  }
>  
>  /* We assume that the socket is already connected */
> @@ -1150,6 +1132,9 @@ int tls_set_device_offload(struct sock *sk, struct tls_context *ctx)
>  	start_marker_record->len = 0;
>  	start_marker_record->num_frags = 0;
>  
> +	INIT_WORK(&offload_ctx->destruct_work, tls_device_tx_del_task);
> +	offload_ctx->ctx = ctx;
> +
>  	INIT_LIST_HEAD(&offload_ctx->records_list);
>  	list_add_tail(&start_marker_record->list, &offload_ctx->records_list);
>  	spin_lock_init(&offload_ctx->lock);
> @@ -1389,7 +1374,7 @@ static int tls_device_down(struct net_device *netdev)
>  
>  	up_write(&device_offload_lock);
>  
> -	flush_work(&tls_device_gc_work);
> +	flush_workqueue(destruct_wq);
>  
>  	return NOTIFY_DONE;
>  }
> @@ -1428,14 +1413,20 @@ static struct notifier_block tls_dev_notifier = {
>  	.notifier_call	= tls_dev_event,
>  };
>  
> -void __init tls_device_init(void)
> +int __init tls_device_init(void)
>  {
> +	destruct_wq = alloc_workqueue("ktls_device_destruct", 0, 0);
> +	if (!destruct_wq)
> +		return -ENOMEM;
> +
>  	register_netdevice_notifier(&tls_dev_notifier);

For a future cleanup - we should probably check for errors here.
Or perhaps we should take the fix via net? If you spin a quick
patch it can still make tomorrow's net -> net-next merge.
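Untested sketch of what the error handling could look like, with the
workqueue torn down again if the notifier fails to register:

```c
int __init tls_device_init(void)
{
	int err;

	destruct_wq = alloc_workqueue("ktls_device_destruct", 0, 0);
	if (!destruct_wq)
		return -ENOMEM;

	err = register_netdevice_notifier(&tls_dev_notifier);
	if (err)
		destroy_workqueue(destruct_wq);

	return err;
}
```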

> +	return 0;
>  }


Thread overview: 8+ messages
2022-07-13  5:15 [PATCH net-next V2 0/6] mlx5e use TLS TX pool to improve connection rate Tariq Toukan
2022-07-13  5:15 ` [PATCH net-next V2 1/6] net/tls: Perform immediate device ctx cleanup when possible Tariq Toukan
2022-07-13  5:15 ` [PATCH net-next V2 2/6] net/tls: Multi-threaded calls to TX tls_dev_del Tariq Toukan
2022-07-14  3:10   ` Jakub Kicinski [this message]
2022-07-13  5:16 ` [PATCH net-next V2 3/6] net/mlx5e: kTLS, Introduce TLS-specific create TIS Tariq Toukan
2022-07-13  5:16 ` [PATCH net-next V2 4/6] net/mlx5e: kTLS, Take stats out of OOO handler Tariq Toukan
2022-07-13  5:16 ` [PATCH net-next V2 5/6] net/mlx5e: kTLS, Recycle objects of device-offloaded TLS TX connections Tariq Toukan
2022-07-13  5:16 ` [PATCH net-next V2 6/6] net/mlx5e: kTLS, Dynamically re-size TX recycling pool Tariq Toukan