From: David Laight <David.Laight@ACULAB.COM>
To: 'Joel Fernandes' <joel@joelfernandes.org>,
Alan Huang <mmpgouride@gmail.com>
Cc: Eric Dumazet <edumazet@google.com>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
"rcu@vger.kernel.org" <rcu@vger.kernel.org>,
"Paul E. McKenney" <paulmck@kernel.org>,
"roman.gushchin@linux.dev" <roman.gushchin@linux.dev>
Subject: RE: Question about the barrier() in hlist_nulls_for_each_entry_rcu()
Date: Fri, 21 Jul 2023 15:59:38 +0000 [thread overview]
Message-ID: <962bb2b940e64e7da7b71d11b307defc@AcuMS.aculab.com> (raw)
In-Reply-To: <cc9b292c-99b1-bec9-ba8e-9c202b5835cd@joelfernandes.org>
....
> Right, it shouldn't need to cache. To Eric's point it might be risky to remove
> the barrier() and someone needs to explain that issue first (or IMO there needs
> to be another tangible reason like performance etc). Anyway, FWIW I wrote a
> simple program and I am not seeing the head->first cached with the pattern you
> shared above:
>
> #include <stdlib.h>
>
> #define READ_ONCE(x) (*(volatile typeof(x) *)&(x))
> #define barrier() __asm__ __volatile__("": : :"memory")
>
> typedef struct list_head {
> int first;
> struct list_head *next;
> } list_head;
>
> int main() {
> list_head *head = (list_head *)malloc(sizeof(list_head));
> head->first = 1;
> head->next = 0;
>
> READ_ONCE(head->first);
> barrier();
> READ_ONCE(head->first);
>
> free(head);
> return 0;
> }
You probably need to try harder to generate the error.
It probably has something to do with the code surrounding the
sk_nulls_for_each_rcu() in the ca065d0c^ version of udp.c.
That patch removes the retry loop - and probably breaks udp receive.
The issue is that sockets can be moved between the 'hash2' chains
(e.g. by connect()) without being freed.
David
Thread overview: 26+ messages
2023-07-20 18:53 Question about the barrier() in hlist_nulls_for_each_entry_rcu() Alan Huang
2023-07-20 19:22 ` Eric Dumazet
2023-07-20 19:59 ` Alan Huang
2023-07-20 21:11 ` Eric Dumazet
2023-07-21 14:31 ` Alan Huang
2023-07-21 14:47 ` Eric Dumazet
2023-07-21 15:21 ` Alan Huang
2023-07-21 12:54 ` Joel Fernandes
2023-07-21 14:27 ` Alan Huang
2023-07-21 15:21 ` Joel Fernandes
2023-07-21 15:54 ` Alan Huang
2023-07-21 16:00 ` Joel Fernandes
2023-07-21 15:59 ` David Laight [this message]
2023-07-21 17:14 ` Joel Fernandes
2023-07-21 20:08 ` Alan Huang
2023-07-21 20:40 ` Alan Huang
2023-07-21 21:25 ` Alan Huang
2023-07-22 13:32 ` Alan Huang
2023-07-22 14:06 ` David Laight
2023-07-22 15:00 ` Alan Huang
2023-07-31 20:09 ` Paul E. McKenney
2023-08-03 13:40 ` Alan Huang
2023-08-03 13:53 ` Paul E. McKenney
2023-08-03 14:39 ` David Laight
2023-07-21 11:51 ` David Laight
2023-07-21 15:55 ` Alan Huang