Subject: [Qemu-devel] Rationalising exit_request, cpu->exit_request and tcg_exit_req?
From: Alex Bennée @ 2015-12-16 17:14 UTC
  To: QEMU Devel, mttcg, mark.burton, fred.konrad, Paolo Bonzini,
	Richard Henderson, Peter Crosthwaite, Peter Maydell

Hi,

While looking at Fred's current MTTCG WIP branch I ran into a problem
where:

  - async_safe_work_pending() was true
  - this triggered setting cpu->exit_request
  - however we never left tcg_exec_all()
  - because the global exit_request wasn't set
  - hence qemu_tcg_wait_io_event() never drained the async work queue

While trying to understand why we have both a per-cpu and a global
exit_request I then discovered there is also cpu->tcg_exit_req, which is
the variable the generated TCG code actually examines. This leads to
sequences like:

void cpu_exit(CPUState *cpu)
{
    cpu->exit_request = 1;
    /* Ensure cpu_exec will see the exit request after TCG has exited.  */
    smp_wmb();
    cpu->tcg_exit_req = 1;
}

which itself is amusingly called from:

static void qemu_cpu_kick_no_halt(void)
{
    CPUState *cpu;
    /* Ensure whatever caused the exit has reached the CPU threads before
     * writing exit_request.
     */
    atomic_mb_set(&exit_request, 1);
    cpu = atomic_mb_read(&tcg_current_cpu);
    if (cpu) {
        cpu_exit(cpu);
    }
}
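
For completeness, cpu->tcg_exit_req is what the generated code itself
looks at: the TB prologue emitted by gen_tb_start() (quoting
include/exec/gen-icount.h roughly from memory, so treat it as a sketch)
is along the lines of:

static inline void gen_tb_start(TranslationBlock *tb)
{
    TCGv_i32 flag;

    exitreq_label = gen_new_label();
    flag = tcg_temp_new_i32();
    /* load cpu->tcg_exit_req relative to the env pointer ... */
    tcg_gen_ld_i32(flag, cpu_env,
                   offsetof(CPUState, tcg_exit_req) - ENV_OFFSET);
    /* ... and branch to the exit-request epilogue if it is set */
    tcg_gen_brcondi_i32(TCG_COND_NE, flag, 0, exitreq_label);
    tcg_temp_free_i32(flag);
    ...
}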

This seems to me to be slightly insane as we now have three variables
that struggle to be kept in sync. Could all this not be rationalised
into a single variable, or am I missing a subtlety in their different
semantics?

One problem I can foresee once we get to the MTTCG world is a race when
signalling other CPUs to exit: we need to make sure a new request is not
dropped while we clear an old exit_request.
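
Presumably whoever drains the flag would then need an atomic swap rather
than a read-then-clear; a minimal sketch (the exact placement is made
up):

    /* hypothetical drain: swap the flag to 0 so a concurrent
     * cpu_exit() from another thread cannot be lost between the
     * test and the clear */
    if (atomic_xchg(&cpu->exit_request, 0)) {
        /* service the request (io, safe work, ...) */
    }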

The other complication is the main cpu_exec loop, which works hard to
avoid leaving the main loop when processing interrupts (which require
an exit_request to trigger). This means there are potentially multiple
places where exit_requests are drained.
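
For example, from memory the inner loop in cpu-exec.c has its own drain
point, roughly:

    if (unlikely(cpu->exit_request)) {
        /* one of the drain points: the flag is cleared here,
         * not back in cpus.c */
        cpu->exit_request = 0;
        cpu->exception_index = EXCP_INTERRUPT;
        cpu_loop_exit(cpu);
    }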

I don't know if there is clean-up that can happen in master or if this
all needs to be done in the mttcg work, but would it make sense just to
keep cpu->exit_request, make it visible to the TCG code, and make all
exits fall out to qemu_tcg_cpu_thread_fn, which would then be the only
place to clear the flag?
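
Very roughly, and purely as a hypothetical sketch (names and placement
made up), I am imagining something like this in the per-thread loop:

    while (...) {
        tcg_cpu_exec(cpu);        /* generated code and cpu_exec()
                                   * only ever *test* the flag */
        if (atomic_xchg(&cpu->exit_request, 0)) {
            /* the single drain point: handle io, safe work, etc. */
            qemu_tcg_wait_io_event(cpu);
        }
    }

i.e. everywhere else only ever sets or tests the flag, never clears it.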

I did have a brief look at the KVM side of the code and it only seems to
reference cpu->exit_request so I think the rest of this is a TCG
problem.

Thoughts?

--
Alex Bennée
