From: Edward Cree <ecree.xilinx@gmail.com>
To: Przemek Kitszel <przemyslaw.kitszel@intel.com>, edward.cree@amd.com
Cc: linux-net-drivers@amd.com, netdev@vger.kernel.org,
	habetsm.xilinx@gmail.com, sudheer.mogilappagari@intel.com,
	jdamato@fastly.com, mw@semihalf.com, linux@armlinux.org.uk,
	sgoutham@marvell.com, gakula@marvell.com, sbhatta@marvell.com,
	hkelam@marvell.com, saeedm@nvidia.com, leon@kernel.org,
	jacob.e.keller@intel.com, andrew@lunn.ch, ahmed.zaki@intel.com,
	davem@davemloft.net, kuba@kernel.org, edumazet@google.com,
	pabeni@redhat.com
Subject: Re: [PATCH v6 net-next 3/9] net: ethtool: record custom RSS contexts in the XArray
Date: Tue, 25 Jun 2024 14:39:23 +0100
Message-ID: <5ac63907-1982-0511-0121-194f09d9f30a@gmail.com>
In-Reply-To: <ca867437-1533-49d6-a25b-6058e2ee0635@intel.com>

On 20/06/2024 07:32, Przemek Kitszel wrote:
> On 6/20/24 07:47, edward.cree@amd.com wrote:
>> +    return struct_size((struct ethtool_rxfh_context *)0, data, flex_len);
> 
> struct_size_t

Yup, will do, thanks for the suggestion.
Don't think that existed yet when I wrote v1 :-D
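
For v7 that line would become something like this (a minimal sketch,
 assuming struct_size_t() from <linux/overflow.h>, which takes the
 struct type rather than a cast null pointer):

	return struct_size_t(struct ethtool_rxfh_context, data, flex_len);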

>> +    /* Update rss_ctx tracking */
>> +    if (create) {
>> +        /* Ideally this should happen before calling the driver,
>> +         * so that we can fail more cleanly; but we don't have the
>> +         * context ID until the driver picks it, so we have to
>> +         * wait until after.
>> +         */
>> +        if (WARN_ON(xa_load(&dev->ethtool->rss_ctx, rxfh.rss_context))) {
>> +            /* context ID reused, our tracking is screwed */
> 
> why no error code set?

Because at this point the driver *has* created the context; it's
 in the hardware.  If we wanted to return failure we'd have to
 call the driver again to delete it, and that would still leave
 an ugly case where that call fails.

> 
>> +            kfree(ctx);
>> +            goto out;
>> +        }
>> +        /* Allocate the exact ID the driver gave us */
>> +        if (xa_is_err(xa_store(&dev->ethtool->rss_ctx, rxfh.rss_context,
>> +                       ctx, GFP_KERNEL))) {
> 
> this is racy - assuming it is possible that the context was set by other
> means (otherwise you would not xa_load() a few lines above) -
> a concurrent writer could have done this just after your xa_load() call.

I don't expect a concurrent writer - this is all under RTNL.
The xa_load() is there in case we create two contexts
 consecutively and the driver gives us the same ID both times.

> so, instead of xa_load() + xa_store() just use xa_insert()

The reason for splitting it up is for the WARN_ON on the
 xa_load().  I guess with xa_insert() it would have to be
 WARN_ON(xa_insert() == -EBUSY)?
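
If we went that way, a minimal sketch (untested; 'ret' is a new local,
 and the handling of other xa_insert() errors such as -ENOMEM is
 illustrative only):

	ret = xa_insert(&dev->ethtool->rss_ctx, rxfh.rss_context,
			ctx, GFP_KERNEL);
	/* -EBUSY means the driver handed back a context ID we are
	 * already tracking, which should be impossible under RTNL.
	 */
	if (WARN_ON(ret == -EBUSY)) {
		kfree(ctx);
		goto out;
	}
	if (ret) {
		kfree(ctx);
		goto out;
	}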

> anyway I feel the pain of trying to support both driver-selected IDs
> and your own

