From: David Hildenbrand <david@redhat.com>
To: Peter Xu <peterx@redhat.com>, qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>, Richard Henderson <rth@twiddle.net>
Subject: Re: [PATCH v2 2/9] cpus: Move do_run_on_cpu into softmmu/cpus.c
Date: Tue, 27 Jul 2021 15:04:49 +0200
Message-ID: <6c15cf4f-6991-567f-2fea-a04596184ce7@redhat.com>
In-Reply-To: <20210723193444.133412-3-peterx@redhat.com>
On 23.07.21 21:34, Peter Xu wrote:
> It's only used by softmmu binaries, not linux-user ones. Make it static and
> drop the declaration in the header too.
>
> Signed-off-by: Peter Xu <peterx@redhat.com>
> ---
>  cpus-common.c         | 25 -------------------------
>  include/hw/core/cpu.h | 12 ------------
>  softmmu/cpus.c        | 26 ++++++++++++++++++++++++++
>  3 files changed, 26 insertions(+), 37 deletions(-)
>
> diff --git a/cpus-common.c b/cpus-common.c
> index d814b2439a..670826363f 100644
> --- a/cpus-common.c
> +++ b/cpus-common.c
> @@ -124,31 +124,6 @@ void queue_work_on_cpu(CPUState *cpu, struct qemu_work_item *wi)
>      qemu_cpu_kick(cpu);
>  }
>
> -void do_run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data,
> -                   QemuMutex *mutex)
> -{
> -    struct qemu_work_item wi;
> -
> -    if (qemu_cpu_is_self(cpu)) {
> -        func(cpu, data);
> -        return;
> -    }
> -
> -    wi.func = func;
> -    wi.data = data;
> -    wi.done = false;
> -    wi.free = false;
> -    wi.exclusive = false;
> -
> -    queue_work_on_cpu(cpu, &wi);
> -    while (!qatomic_mb_read(&wi.done)) {
> -        CPUState *self_cpu = current_cpu;
> -
> -        qemu_cond_wait(&qemu_work_cond, mutex);
> -        current_cpu = self_cpu;
> -    }
> -}
> -
>  void async_run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data)
>  {
>      struct qemu_work_item *wi;
> diff --git a/include/hw/core/cpu.h b/include/hw/core/cpu.h
> index f62ae88524..711ecad62f 100644
> --- a/include/hw/core/cpu.h
> +++ b/include/hw/core/cpu.h
> @@ -689,18 +689,6 @@ void qemu_cpu_kick(CPUState *cpu);
>   */
>  bool cpu_is_stopped(CPUState *cpu);
>
> -/**
> - * do_run_on_cpu:
> - * @cpu: The vCPU to run on.
> - * @func: The function to be executed.
> - * @data: Data to pass to the function.
> - * @mutex: Mutex to release while waiting for @func to run.
> - *
> - * Used internally in the implementation of run_on_cpu.
> - */
> -void do_run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data,
> -                   QemuMutex *mutex);
> -
>  /**
>   * run_on_cpu:
>   * @cpu: The vCPU to run on.
> diff --git a/softmmu/cpus.c b/softmmu/cpus.c
> index 071085f840..52adc98d39 100644
> --- a/softmmu/cpus.c
> +++ b/softmmu/cpus.c
> @@ -382,6 +382,32 @@ void qemu_init_cpu_loop(void)
>      qemu_thread_get_self(&io_thread);
>  }
>
> +static void
> +do_run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data,
> +              QemuMutex *mutex)
> +{
> +    struct qemu_work_item wi;
You could do

    struct qemu_work_item wi = {
        .func = func,
        .data = data,
    };

instead of the separate member assignments below (see the sketch after the
quoted function).
> +
> +    if (qemu_cpu_is_self(cpu)) {
> +        func(cpu, data);
> +        return;
> +    }
> +
> +    wi.func = func;
> +    wi.data = data;
> +    wi.done = false;
> +    wi.free = false;
> +    wi.exclusive = false;
> +
> +    queue_work_on_cpu(cpu, &wi);
> +    while (!qatomic_mb_read(&wi.done)) {
> +        CPUState *self_cpu = current_cpu;
> +
> +        qemu_cond_wait(&qemu_work_cond, mutex);
> +        current_cpu = self_cpu;
> +    }
> +}
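
FWIW, with that change the moved function could shrink to something like the
following minimal sketch (untested, purely illustrative; the members not
named in the initializer, i.e. done/free/exclusive, are implicitly
zero-initialized):

    static void
    do_run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data,
                  QemuMutex *mutex)
    {
        /* Members not listed here (done, free, exclusive) start out false. */
        struct qemu_work_item wi = {
            .func = func,
            .data = data,
        };

        if (qemu_cpu_is_self(cpu)) {
            /* Already on the target vCPU thread: run the function directly. */
            func(cpu, data);
            return;
        }

        /* Queue the work item and wait for the target vCPU to complete it,
         * releasing @mutex while sleeping on the condition variable. */
        queue_work_on_cpu(cpu, &wi);
        while (!qatomic_mb_read(&wi.done)) {
            CPUState *self_cpu = current_cpu;

            qemu_cond_wait(&qemu_work_cond, mutex);
            current_cpu = self_cpu;
        }
    }

Note this initializes wi before the qemu_cpu_is_self() check rather than
after it, which should be harmless.
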
Reviewed-by: David Hildenbrand <david@redhat.com>
--
Thanks,
David / dhildenb