From: Oleg Nesterov <oleg@redhat.com>
To: Linus Torvalds <torvalds@linux-foundation.org>
Cc: axboe@kernel.dk, brauner@kernel.org, mst@redhat.com,
	linux@leemhuis.info, linux-kernel@vger.kernel.org,
	ebiederm@xmission.com, stefanha@redhat.com,
	nicolas.dichtel@6wind.com,
	virtualization@lists.linux-foundation.org
Subject: Re: [PATCH 3/3] fork, vhost: Use CLONE_THREAD to fix freezer/ps regression
Date: Mon, 5 Jun 2023 16:20:35 +0200
Message-ID: <20230605142034.GD32275@redhat.com>
In-Reply-To: <CAHk-=whKyWvzg=7_m1o_KLC3zb9FjTBHftc36-5M9X78AxwRXg@mail.gmail.com>

On 06/02, Linus Torvalds wrote:
>
> On Fri, Jun 2, 2023 at 1:59 PM Oleg Nesterov <oleg@redhat.com> wrote:
> >
> > As I said from the very beginning, this code is fine on x86 because
> > atomic ops are fully serialised on x86.
>
> Yes. Other architectures require __smp_mb__{before,after}_atomic for
> the bit setting ops to actually be memory barriers.
>
> We *should* probably have acquire/release versions of the bit test/set
> helpers, but we don't, so they end up being full memory barriers with
> those things. Which isn't optimal, but I doubt it matters on most
> architectures.
>
> So maybe we'll some day have a "test_bit_acquire()" and a
> "set_bit_release()" etc.

In this particular case we need clear_bit_release(), and if I
understand correctly it already exists; it is just named
clear_bit_unlock().

So do you agree that vhost_worker() needs either smp_mb__before_atomic()
before clear_bit(), or simply clear_bit_unlock(), to avoid the race
with vhost_work_queue()?

Let me provide a simplified example:

	struct item {
		struct llist_node	llist;
		unsigned long		flags;
	};

	struct llist_head HEAD = {};	// global

	void queue(struct item *item)
	{
		// do nothing if the item is already queued
		if (!test_and_set_bit(0, &item->flags))
			llist_add(&item->llist, &HEAD);
	}

	void flush(void)
	{
		struct llist_node *head = llist_del_all(&HEAD);
		struct item *item, *next;

		llist_for_each_entry_safe(item, next, head, llist)
			clear_bit(0, &item->flags);
	}

I think this code is buggy in that flush() can race with queue(), in
the same way that vhost_worker() races with vhost_work_queue().

Once flush() clears bit 0, queue() can run on another CPU, re-queue
this item, and change item->llist.next. We need a barrier before
clear_bit() to ensure that the next = llist_entry(item->llist.next)
load in llist_for_each_entry_safe() completes before the result of
clear_bit() is visible to queue().
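
For completeness, an untested sketch of flush() with the fix applied:

	void flush(void)
	{
		struct llist_node *head = llist_del_all(&HEAD);
		struct item *item, *next;

		llist_for_each_entry_safe(item, next, head, llist) {
			// order the load of item->llist.next above
			// against the store below
			smp_mb__before_atomic();
			clear_bit(0, &item->flags);
		}
	}

or, the variant I would prefer, replace the last two lines with a single

			clear_bit_unlock(0, &item->flags);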

And I do not think we can rely on a control dependency here, because I
fail to see a load-store control dependency in this code:
llist_for_each_entry_safe() loads item->llist.next but doesn't check
the result until the next iteration.
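
To spell that out: simplified, each iteration of the loop above does

	if (!item)		// "item" was loaded on the previous iteration
		break;
	next = llist_entry(item->llist.next, struct item, llist);	// the load
	clear_bit(0, &item->flags);					// the store
	item = next;

and nothing branches on the value just loaded into "next" before the
store, so a weakly ordered CPU is free to make the clear_bit() visible
before the load of item->llist.next completes.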

No?

Oleg.
