From: Juan Quintela <quintela@redhat.com>
To: Fabiano Rosas <farosas@suse.de>
Cc: qemu-devel@nongnu.org, Peter Xu <peterx@redhat.com>,
Leonardo Bras <leobras@redhat.com>,
Elena Ufimtseva <elena.ufimtseva@oracle.com>
Subject: Re: [RFC PATCH v2 1/6] migration/multifd: Remove channels_ready semaphore
Date: Thu, 19 Oct 2023 17:18:27 +0200
Message-ID: <87wmvi3akc.fsf@secure.mitica>
In-Reply-To: <87r0lqy83p.fsf@suse.de> (Fabiano Rosas's message of "Thu, 19 Oct 2023 11:55:54 -0300")
Fabiano Rosas <farosas@suse.de> wrote:
> Juan Quintela <quintela@redhat.com> writes:
>
>> Fabiano Rosas <farosas@suse.de> wrote:
>>> The channels_ready semaphore is a global variable not linked to any
>>> single multifd channel. Waiting on it only means that "some" channel
>>> has become ready to send data. Since we need to address the channels
>>> by index (multifd_send_state->params[i]), that information adds
>>> nothing of value.
>>
>> NAK.
>>
>> I disagree here O:-)
>>
>> The reason why that semaphore exists is multifd_send_pages().
>>
>> Simplifying, what that function does is:
>>
>> sem_wait(channels_ready);
>>
>> for_each_channel()
>>     look if it is empty()
>>
>> But with the semaphore, we guarantee that when we enter the loop there
>> is a channel ready, so we know we don't busy-wait searching for a
>> channel that is free.
>>
>
> Ok, so that clarifies the channels_ready usage.
>
> Now, thinking out loud... can't we simply (famous last words) remove the
> "if (!p->pending_job)" line and let multifd_send_pages() prepare another
> payload for the channel? That way multifd_send_pages() could already
> return and the channel would see one more pending_job and proceed to
> send it.
No.
See the while loop in multifd_send_thread()
    while (true) {
        qemu_mutex_lock(&p->mutex);
        if (p->pending_job) {
            ......
            Do things with parts of the struct that are shared with the
            migration thread
            ....
            qemu_mutex_unlock(&p->mutex);
            // Drop the lock.
            // Do more work on the channel; pending_job means that
            // it is still working.
            // With the mutex dropped, the migration thread can use the
            // shared variables, but not the channel.
            // Now here we decrease pending_job, so the main thread can
            // change things as it wants.
            // But we need to take the lock first.
            qemu_mutex_lock(&p->mutex);
            p->pending_job--;
            qemu_mutex_unlock(&p->mutex);
            ......
        }
    }
This is a common concurrency pattern. To avoid holding the mutex for too
long, you use a flag (which can only be tested/changed while holding the
lock) to indicate that the "channel" is busy even while the struct the
lock protects is not (see how we make sure that the channel doesn't use
any field of the struct without holding the lock).
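
To make the shape of that pattern concrete, here is a minimal,
self-contained sketch rendered with plain pthreads. It is NOT the actual
multifd code: the Channel struct, the helper names and the single
hard-coded channel are invented for illustration; only the overall
mutex/pending_job/semaphore choreography is the point.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    typedef struct {
        pthread_mutex_t mutex;  /* protects the shared fields below */
        sem_t sem;              /* producer -> channel: "you have work" */
        bool pending_job;       /* set under mutex: channel owns the data */
        int shared_data;        /* stand-in for the shared page list */
    } Channel;

    static sem_t channels_ready; /* channel -> producer: "someone is free" */

    /* producer side, multifd_send_pages()-like */
    static void queue_job(Channel *ch, int data)
    {
        sem_wait(&channels_ready);       /* at least one channel is free */
        pthread_mutex_lock(&ch->mutex);  /* (the real code scans for it) */
        ch->pending_job = true;
        ch->shared_data = data;          /* hand over the payload */
        pthread_mutex_unlock(&ch->mutex);
        sem_post(&ch->sem);              /* wake the channel thread */
    }

    /* channel side, multifd_send_thread()-like */
    static void *channel_thread(void *opaque)
    {
        Channel *ch = opaque;
        for (;;) {
            sem_wait(&ch->sem);
            pthread_mutex_lock(&ch->mutex);
            if (ch->pending_job) {
                int data = ch->shared_data; /* shared fields, under lock */
                pthread_mutex_unlock(&ch->mutex);

                usleep(1000);               /* stand-in for the slow send,
                                               done with the lock dropped;
                                               pending_job keeps the
                                               producer off this channel */
                printf("sent %d\n", data);

                pthread_mutex_lock(&ch->mutex);
                ch->pending_job = false;    /* channel is free again */
                pthread_mutex_unlock(&ch->mutex);
                sem_post(&channels_ready);  /* tell the producer */
            } else {
                pthread_mutex_unlock(&ch->mutex);
            }
        }
        return NULL;
    }

    int main(void)
    {
        Channel ch = { .mutex = PTHREAD_MUTEX_INITIALIZER };
        pthread_t tid;

        sem_init(&ch.sem, 0, 0);
        sem_init(&channels_ready, 0, 1);    /* one channel, initially free */
        pthread_create(&tid, NULL, channel_thread, &ch);

        for (int i = 0; i < 4; i++) {
            queue_job(&ch, i);
        }
        sleep(1);                           /* let the last job finish */
        return 0;
    }

The important window is between the two lock/unlock pairs in the channel
thread: the slow I/O happens with the mutex dropped, and pending_job is
what keeps the producer from reusing the channel in the meantime.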
> Or, since there's no resending anyway, we could dec pending_jobs earlier
> before unlocking the channel. It seems the channel could be made ready
> for another job as soon as the packet is built and the lock is released.
pending_job could be turned into a bool. We just need to make sure we
don't break it in _sync().
> That way we could remove the semaphore and let the mutex do the job of
> waiting for the channel to become ready.
As said, we don't want that, because channels can go at different speeds
due to factors outside of our control.

If the semaphore bothers you, you can change it to a condition variable,
but you just move the complexity from one side to the other.
(The initial implementation had a condition variable, but Paolo said that
the semaphore is more efficient, so he won.)
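
For what it's worth, a rough sketch of what that condition-variable
variant could look like (again, not what the code does today;
channels_free, mark_channel_free() and wait_for_free_channel() are
made-up names). It re-implements by hand the counting that the semaphore
already gives us, which is the sense in which the complexity only moves
from one side to the other:

    #include <pthread.h>

    static pthread_mutex_t ready_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  ready_cond = PTHREAD_COND_INITIALIZER;
    static int channels_free;           /* how many channels are idle */

    /* channel thread, after finishing a job (instead of sem_post()) */
    static void mark_channel_free(void)
    {
        pthread_mutex_lock(&ready_lock);
        channels_free++;
        pthread_cond_signal(&ready_cond);
        pthread_mutex_unlock(&ready_lock);
    }

    /* producer, before scanning for a free channel (instead of sem_wait()) */
    static void wait_for_free_channel(void)
    {
        pthread_mutex_lock(&ready_lock);
        while (channels_free == 0) {    /* guard against spurious wakeups */
            pthread_cond_wait(&ready_cond, &ready_lock);
        }
        channels_free--;
        pthread_mutex_unlock(&ready_lock);
    }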
>> Notice that I fully agree that the sem is not needed for locking.
>> Locking is done with the mutex. It is just used to make sure that we
>> don't busy-loop there.
>>
>> And we use a sem because it is the easiest way to know how many
>> channels are ready (even though we only care whether there is at least
>> one when we reach that code).
>
> Yep, that's fine, no objection here.
>
>>
>> We had let that counter get out of sync, and we fixed that here:
>
> Kind of, because we still don't wait on it during cleanup. If we did,
> then we could have an assert at the end to make sure this doesn't
> regress again.
>
> And maybe use channels_ready.count for other types of introspection.
We could.
>> commit d2026ee117147893f8d80f060cede6d872ecbd7f
>> Author: Juan Quintela <quintela@redhat.com>
>> Date: Wed Apr 26 12:20:36 2023 +0200
>>
>> multifd: Fix the number of channels ready
>>
>> We don't wait in the sem when we are doing a sync_main. Make it
>>
>> And we were addressing the problem that some users were finding that we
>> were busy-waiting in that loop.
>>
>>> The channel being addressed is not necessarily the
>>> one that just released the semaphore.
>>
>> We only care that there is at least one free. We are going to search
>> for the next free one.
>>
>> Does this explanation make sense?
>
> It does, thanks for taking the time to educate us =)
>
> I made some suggestions above, but I might be missing something still.
I think that the current code is already quite efficient, but I will have
to think about whether that improves anything major.

Later, Juan.