From: Simon Horman <horms@kernel.org>
To: Alan Brady <alan.brady@intel.com>
Cc: netdev@vger.kernel.org, Joshua Hay <joshua.a.hay@intel.com>,
intel-wired-lan@lists.osuosl.org,
Igor Bagnucki <igor.bagnucki@intel.com>,
Przemek Kitszel <przemyslaw.kitszel@intel.com>
Subject: Re: [Intel-wired-lan] [PATCH 1/6 iwl-next] idpf: implement virtchnl transaction manager
Date: Tue, 23 Jan 2024 16:25:47 +0000 [thread overview]
Message-ID: <20240123162547.GA254773@kernel.org> (raw)
In-Reply-To: <20240122211125.840833-2-alan.brady@intel.com>
On Mon, Jan 22, 2024 at 01:11:20PM -0800, Alan Brady wrote:
> This starts refactoring how virtchnl messages are handled by adding a
> transaction manager (idpf_vc_xn_manager).
>
> There are two primary motivations here which are to enable handling of
> multiple messages at once and to make it more robust in general. As it
> is right now, the driver may only have one pending message at a time and
> there's no guarantee that the response we receive was actually intended
> for the message we sent prior.
>
> This works by utilizing a "cookie" field of the message descriptor. The
> data we put in the cookie is arbitrary; the response is required to
> carry the same cookie the original message was sent with. A
> "transaction" abstraction then uses the completion API to pair each
> response with the message it belongs to.
>
> The cookie works such that the first half is the index of the
> transaction in our array, and the second half is a "salt" that gets
> incremented with every message. This enables quick lookups into the
> array and also ensures we have the correct message. The salt is
> necessary because after, for example, a message times out and we deem
> the response lost, we could theoretically reuse the same index; a
> different salt ensures that when a response does finally arrive, it is
> not mistaken for the old message that previously timed out. Since the
> number of transactions allocated is U8_MAX and the salt is 8 bits, we
> can never have a conflict, because we cannot roll the salt over without
> using more transactions than we have available.
>
> This starts by only converting the VIRTCHNL2_OP_VERSION message to use
> this new transaction API. Follow up patches will convert all virtchnl
> messages to use the API.
>
> Reviewed-by: Igor Bagnucki <igor.bagnucki@intel.com>
> Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
> Signed-off-by: Alan Brady <alan.brady@intel.com>
> Co-developed-by: Joshua Hay <joshua.a.hay@intel.com>
> Signed-off-by: Joshua Hay <joshua.a.hay@intel.com>
...
> +/**
> + * idpf_vc_xn_init - Initialize virtchnl transaction object
> + * @vcxn_mngr: pointer to vc transaction manager struct
> + */
> +static void idpf_vc_xn_init(struct idpf_vc_xn_manager *vcxn_mngr)
> +{
> +	int i;
> +
> +	spin_lock_init(&vcxn_mngr->xn_bm_lock);
> +
> +	for (i = 0; i < ARRAY_SIZE(vcxn_mngr->ring); i++) {
> +		struct idpf_vc_xn *xn = &vcxn_mngr->ring[i];
> +
> +		xn->state = IDPF_VC_XN_IDLE;
> +		xn->idx = i;
> +		idpf_vc_xn_release_bufs(xn);
> +		init_completion(&xn->completed);
> +	}
Hi Alan and Joshua,

I'm slightly surprised to see that it is safe to initialise xn_bm_lock
above, but that it needs to be taken below.
> +
> +	spin_lock_bh(&vcxn_mngr->xn_bm_lock);
> +	bitmap_set(vcxn_mngr->free_xn_bm, 0, IDPF_VC_XN_RING_LEN);
> +	spin_unlock_bh(&vcxn_mngr->xn_bm_lock);
> +}
Thread overview: 12+ messages
2024-01-22 21:11 [Intel-wired-lan] [PATCH 0/6 iwl-next] idpf: refactor virtchnl messages Alan Brady
2024-01-22 21:11 ` [Intel-wired-lan] [PATCH 1/6 iwl-next] idpf: implement virtchnl transaction manager Alan Brady
2024-01-23 16:25 ` Simon Horman [this message]
2024-01-23 16:52 ` Brady, Alan
2024-01-22 21:11 ` [Intel-wired-lan] [PATCH 2/6 iwl-next] idpf: refactor vport virtchnl messages Alan Brady
2024-01-22 21:11 ` [Intel-wired-lan] [PATCH 3/6 iwl-next] idpf: refactor queue related " Alan Brady
2024-01-22 21:11 ` [Intel-wired-lan] [PATCH 4/6 iwl-next] idpf: refactor remaining " Alan Brady
2024-01-25 3:17 ` Willem de Bruijn
2024-01-25 17:01 ` Brady, Alan
2024-01-22 21:11 ` [Intel-wired-lan] [PATCH 5/6 iwl-next] idpf: refactor idpf_recv_mb_msg Alan Brady
2024-01-22 21:11 ` [Intel-wired-lan] [PATCH 6/6 iwl-next] idpf: cleanup virtchnl cruft Alan Brady
2024-01-25 3:14 ` [Intel-wired-lan] [PATCH 0/6 iwl-next] idpf: refactor virtchnl messages Willem de Bruijn