From: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
To: Pierre Riteau <pierre@stackhpc.com>
Cc: Paolo Abeni <pabeni@redhat.com>,
Michal Swiatkowski <michal.swiatkowski@linux.intel.com>,
Andrew Lunn <andrew@lunn.ch>,
netdev@vger.kernel.org, jiri@resnulli.us, davem@davemloft.net,
edumazet@google.com, kuba@kernel.org, horms@kernel.org,
Dan Carpenter <error27@gmail.com>
Subject: Re: [net v1] devlink: fix xa_alloc_cyclic error handling
Date: Tue, 11 Mar 2025 10:16:55 +0100
Message-ID: <Z8//h7IT3cf01bxB@mev-dev.igk.intel.com>
In-Reply-To: <CA+ny2sxC2Y7bxhkO7HqX+6E_Myf24_trmCUrroKFkyoce7QC9A@mail.gmail.com>
On Mon, Mar 10, 2025 at 12:42:13PM +0100, Pierre Riteau wrote:
> On Tue, 18 Feb 2025 at 12:56, Paolo Abeni <pabeni@redhat.com> wrote:
> >
> >
> >
> > On 2/14/25 2:58 PM, Michal Swiatkowski wrote:
> > > On Fri, Feb 14, 2025 at 02:44:49PM +0100, Andrew Lunn wrote:
> > >> On Fri, Feb 14, 2025 at 02:24:53PM +0100, Michal Swiatkowski wrote:
> > >>> Pierre Riteau <pierre@stackhpc.com> found suspicious handling of an
> > >>> error returned by xa_alloc_cyclic() in scheduler code [1]. The same
> > >>> is done in devlink_rel_alloc().
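
For reference, the pattern in question looks roughly like this (a sketch
of devlink_rel_alloc() from memory, with names and surrounding code
simplified). xa_alloc_cyclic() returns 1 when the allocation succeeds
but the cyclic counter wraps around, so only negative return values are
real errors:

	err = xa_alloc_cyclic(&devlink_rels, &rel->index, rel,
			      xa_limit_32b, &next, GFP_KERNEL);
	if (err) {		/* also taken when err == 1 (wrapped) */
		kfree(rel);
		return ERR_PTR(err);
	}

The check should only treat negative values as failures, e.g.:

	if (err < 0) {
		kfree(rel);
		return ERR_PTR(err);
	}

If I read it right, with the current check a successful wrap-around
allocation frees rel even though it was already inserted into the
xarray, and returns a bogus pointer to the caller.
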
> > >>
> > >> If the same bug exists twice, it might exist in more places. Did you
> > >> find this instance by searching the whole tree, or just networking?
> > >>
> > >> This is also something which would be good to have the static
> > >> analysers check for. I wonder if smatch can check this?
> > >>
> > >> Andrew
> > >>
> > >
> > > You are right, I checked only the net folder and there are two usages
> > > like that in drivers. I will send a v2 with a wider fix, thanks.
> >
> > While at it, please add the suitable Fixes tag(s).
> >
> > Thanks,
> >
> > Paolo
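
(For completeness: a Fixes tag carries the first 12 characters of the
offending commit's SHA-1 plus its subject in quotes, e.g.

	Fixes: 123456789abc ("example subject of the offending commit")

where the hash and subject above are just placeholders; the real tag
should point at the commit that introduced the bad check.)
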
>
> Hello,
>
> I haven't seen a v2 patch from Michal Swiatkowski. Would it be okay to
> at least merge this net/devlink/core.c fix for inclusion in 6.14? I
> can send a revised patch adding the Fixes tag. Driver fixes could be
> addressed separately.
>
Sorry that I didn't send a v2, but I saw that Dan wrote to Jiri about
this code and also found more places to fix. I assumed that he would
send a fix for all the cases he found.
Dan, do you plan to send it, or should I send a v2?
Thanks,
Michal
> Thanks,
> Pierre
Thread overview: 13+ messages
2025-02-14 13:24 [net v1] devlink: fix xa_alloc_cyclic error handling Michal Swiatkowski
2025-02-14 13:44 ` Andrew Lunn
2025-02-14 13:58 ` Michal Swiatkowski
2025-02-14 14:14 ` Andrew Lunn
2025-02-18 11:56 ` Paolo Abeni
2025-03-10 11:42 ` Pierre Riteau
2025-03-11 9:16 ` Michal Swiatkowski [this message]
2025-03-11 11:49 ` Dan Carpenter
2025-03-11 12:09 ` Michal Swiatkowski
2025-02-16 15:06 ` Dan Carpenter
2025-02-16 16:08 ` Andrew Lunn
2025-02-17 7:46 ` Dan Carpenter
2025-02-17 6:57 ` Dan Carpenter