From: Tobias Waldekranz <tobias@waldekranz.com>
To: Maxime Chevallier <maxime.chevallier@bootlin.com>
Cc: davem@davemloft.net, kuba@kernel.org, marcin.s.wojtas@gmail.com,
linux@armlinux.org.uk, andrew@lunn.ch, edumazet@google.com,
pabeni@redhat.com, netdev@vger.kernel.org
Subject: Re: [PATCH v2 net] net: mvpp2: Prevent parser TCAM memory corruption
Date: Fri, 21 Mar 2025 11:27:03 +0100
Message-ID: <87sen6adc8.fsf@waldekranz.com>
In-Reply-To: <20250321111028.709e6b0f@fedora.home>
On Fri, Mar 21, 2025 at 11:10, Maxime Chevallier <maxime.chevallier@bootlin.com> wrote:
> Hi Tobias,
>
> On Fri, 21 Mar 2025 10:03:23 +0100
> Tobias Waldekranz <tobias@waldekranz.com> wrote:
>
>> Protect the parser TCAM/SRAM memory, and the cached (shadow) SRAM
>> information, from concurrent modifications.
>>
>> Both the TCAM and SRAM tables are indirectly accessed by configuring
>> an index register that selects the row to read or write to. This means
>> that operations must be atomic in order to, e.g., avoid spreading
>> writes across multiple rows. Since the shadow SRAM array is used to
>> find free rows in the hardware table, it must also be protected in
>> order to avoid TOCTOU errors where multiple cores allocate the same
>> row.
>>
>> This issue was detected in a situation where `mvpp2_set_rx_mode()` ran
>> concurrently on two CPUs. In this particular case, the
>> MVPP2_PE_MAC_UC_PROMISCUOUS entry was corrupted, causing the
>> classifier unit to drop all incoming unicast traffic, as indicated
>> by the `rx_classifier_drops` counter.
>>
>> Fixes: 3f518509dedc ("ethernet: Add new driver for Marvell Armada 375 network unit")
>> Signed-off-by: Tobias Waldekranz <tobias@waldekranz.com>
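
To spell out the invariant for anyone skimming the thread: a single lock
has to cover both the shadow-table scan that picks a free row and the
indirect index-register + data-register writes that fill it. Here is a
standalone userspace model of that pattern (all names are invented for
the sketch; this is not the driver code):

	/* Standalone userspace model of the locking pattern described
	 * above. All names are invented; this is not the driver code.
	 */
	#include <pthread.h>
	#include <stdbool.h>
	#include <stdint.h>
	#include <string.h>

	#define PRS_ROWS	256
	#define PRS_WORDS	6

	static pthread_mutex_t prs_lock = PTHREAD_MUTEX_INITIALIZER;
	static bool shadow_used[PRS_ROWS];	/* cached row allocation */
	static uint32_t idx_reg;		/* row-select "register" */
	static uint32_t data_reg[PRS_WORDS];	/* data "registers" */

	/* Write one row through the indirect index/data registers.
	 * Must be called with prs_lock held: a concurrent writer
	 * changing idx_reg between the index write and the data
	 * writes would spread our words across multiple rows.
	 */
	static void prs_hw_write_row(uint32_t row, const uint32_t *words)
	{
		idx_reg = row;
		memcpy(data_reg, words, PRS_WORDS * sizeof(*words));
	}

	/* Pick a free row and write it as one critical section. This
	 * is what closes the TOCTOU window in which two threads could
	 * both observe the same row as free and claim it.
	 */
	static int prs_alloc_and_write(const uint32_t *words)
	{
		int row = -1;

		pthread_mutex_lock(&prs_lock);
		for (int i = 0; i < PRS_ROWS; i++) {
			if (!shadow_used[i]) {
				shadow_used[i] = true;
				row = i;
				break;
			}
		}
		if (row >= 0)
			prs_hw_write_row(row, words);
		pthread_mutex_unlock(&prs_lock);

		return row;	/* -1 if the table is full */
	}

Locking the scan and the write separately would still leave the TOCTOU
window open; it is the combined critical section that matters.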
>
> I gave it a quick test with simple tcam-based vlan filtering and uc/mc
> filtering, it looks and behaves fine but I probably didn't stress it
> enough to hit the races you encountered. Still, the features that used
> to work still work :)
Good to hear! :)
I have tried to stress it by concurrently hammering on the promisc
setting of multiple ports while adding and removing MDB entries, without
hitting any issues. The promisc half of that test looks roughly like the
sketch below.
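
A minimal userspace sketch of that promisc hammering, for reference
(the port names, thread count, and iteration count are placeholders,
and the concurrent MDB add/remove half is not shown):

	#include <net/if.h>
	#include <pthread.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <sys/socket.h>
	#include <unistd.h>

	#define NPORTS 3

	/* Toggle IFF_PROMISC on one port in a tight loop, so that
	 * mvpp2_set_rx_mode() keeps running for this port while the
	 * other threads do the same for theirs.
	 */
	static void *hammer_promisc(void *arg)
	{
		const char *port = arg;
		struct ifreq ifr;
		int fd = socket(AF_INET, SOCK_DGRAM, 0);

		for (int i = 0; i < 100000; i++) {
			memset(&ifr, 0, sizeof(ifr));
			strncpy(ifr.ifr_name, port, IFNAMSIZ - 1);
			if (ioctl(fd, SIOCGIFFLAGS, &ifr) < 0)
				break;
			ifr.ifr_flags ^= IFF_PROMISC;
			if (ioctl(fd, SIOCSIFFLAGS, &ifr) < 0)
				break;
		}
		close(fd);
		return NULL;
	}

	int main(void)
	{
		char *ports[NPORTS] = { "eth0", "eth1", "eth2" };
		pthread_t t[NPORTS];

		for (int i = 0; i < NPORTS; i++)
			pthread_create(&t[i], NULL, hammer_promisc,
				       ports[i]);
		for (int i = 0; i < NPORTS; i++)
			pthread_join(t[i], NULL);
		return 0;
	}

Build with -pthread and run as root, with the MDB add/remove loop going
on in parallel.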
I've also run the original reproducer for about 10-20x the number of
iterations it usually took to trigger the issue.
> Reviewed-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
> Tested-by: Maxime Chevallier <maxime.chevallier@bootlin.com>
>
> Thanks a lot,
Thanks for reviewing and testing!