From: "Alex Bennée" <alex.bennee@linaro.org>
To: mttcg@listserver.greensocs.com, qemu-devel@nongnu.org,
fred.konrad@greensocs.com, a.rigo@virtualopensystems.com,
serge.fdrv@gmail.com, cota@braap.org, bobby.prani@gmail.com
Cc: mark.burton@greensocs.com, pbonzini@redhat.com,
jan.kiszka@siemens.com, rth@twiddle.net,
peter.maydell@linaro.org, claudio.fontana@huawei.com,
Peter Crosthwaite <crosthwaite.peter@gmail.com>
Subject: Re: [Qemu-devel] [PATCH v5 13/13] cpu-exec: replace cpu->queued_work with GArray
Date: Tue, 02 Aug 2016 18:42:36 +0100
Message-ID: <877fbz6p9v.fsf@linaro.org>
In-Reply-To: <1470158864-17651-14-git-send-email-alex.bennee@linaro.org>
Alex Bennée <alex.bennee@linaro.org> writes:
> Under times of high memory stress the additional small mallocs by a
> linked list are a source of potential memory fragmentation. As we have
> worked hard to avoid mallocs elsewhere when queuing work we might as
> well do the same for the list. We convert the lists to an auto-resizing
> GArray which re-sizes in steps of powers of 2.
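
For anyone who hasn't used GLib arrays, here is a minimal standalone
sketch of the pattern (illustrative names only, not the patch's code):

    #include <glib.h>

    struct work_item { int payload; };  /* stand-in for qemu_work_item */

    int main(void)
    {
        /* zero-terminated, zeroed elements, room reserved for 16 items */
        GArray *q = g_array_sized_new(TRUE, TRUE,
                                      sizeof(struct work_item), 16);
        struct work_item wi = { .payload = 42 };

        g_array_append_val(q, wi);  /* copies wi by value: no per-item malloc */
        g_assert(g_array_index(q, struct work_item, 0).payload == 42);

        g_array_free(q, TRUE);      /* one free releases the whole backing store */
        return 0;
    }

The backing store doubles as it fills, so a burst of queued work costs a
handful of reallocs rather than one malloc per item.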
<snip>
> diff --git a/cpu-exec-common.c b/cpu-exec-common.c
> index 6d5da15..745d973 100644
> --- a/cpu-exec-common.c
> +++ b/cpu-exec-common.c
> @@ -113,17 +113,18 @@ void wait_safe_cpu_work(void)
>  static void queue_work_on_cpu(CPUState *cpu, struct qemu_work_item *wi)
>  {
>      qemu_mutex_lock(&cpu->work_mutex);
> -    if (cpu->queued_work_first == NULL) {
> -        cpu->queued_work_first = wi;
> -    } else {
> -        cpu->queued_work_last->next = wi;
> +
> +    if (!cpu->queued_work) {
> +        cpu->queued_work = g_array_sized_new(true, true,
> +                                             sizeof(struct qemu_work_item), 16);
>      }
> -    cpu->queued_work_last = wi;
> -    wi->next = NULL;
> -    wi->done = false;
> +    trace_queue_work_on_cpu(cpu->cpu_index, wi->safe,
> +                            cpu->queued_work->len);
Oops, this was left over from testing; I've posted an updated version of
the patch.
> +
> +    g_array_append_val(cpu->queued_work, *wi);
>      if (wi->safe) {
>          atomic_inc(&safe_work_pending);
>      }
> +
>      qemu_mutex_unlock(&cpu->work_mutex);
>
>      if (!wi->safe) {
> @@ -138,6 +139,7 @@ static void queue_work_on_cpu(CPUState *cpu, struct qemu_work_item *wi)
>  void run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data)
>  {
>      struct qemu_work_item wi;
> +    bool done = false;
>
>      if (qemu_cpu_is_self(cpu)) {
>          func(cpu, data);
> @@ -146,11 +148,11 @@ void run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data)
>
>      wi.func = func;
>      wi.data = data;
> -    wi.free = false;
>      wi.safe = false;
> +    wi.done = &done;
>
>      queue_work_on_cpu(cpu, &wi);
> -    while (!atomic_mb_read(&wi.done)) {
> +    while (!atomic_mb_read(&done)) {
>          CPUState *self_cpu = current_cpu;
>
>          qemu_cond_wait(&qemu_work_cond, qemu_get_cpu_work_mutex());
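
The done pointer is the subtle bit of this hunk: because
queue_work_on_cpu() now copies the item into the GArray, a done flag
inside the item would be set on the copy while the waiter polls its
stale stack original. So the flag lives on the caller's stack and the
item only carries a pointer to it, roughly (a sketch, not the full code):

    bool done = false;            /* waiter's stack, outlives the copy */
    wi.done = &done;              /* the array's copy points back here */
    queue_work_on_cpu(cpu, &wi);  /* wi itself is copied by value */
    while (!atomic_mb_read(&done)) {
        qemu_cond_wait(&qemu_work_cond, qemu_get_cpu_work_mutex());
    }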
> @@ -160,70 +162,75 @@ void run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data)
>
>  void async_run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data)
>  {
> -    struct qemu_work_item *wi;
> +    struct qemu_work_item wi;
>
>      if (qemu_cpu_is_self(cpu)) {
>          func(cpu, data);
>          return;
>      }
>
> -    wi = g_malloc0(sizeof(struct qemu_work_item));
> -    wi->func = func;
> -    wi->data = data;
> -    wi->free = true;
> -    wi->safe = false;
> +    wi.func = func;
> +    wi.data = data;
> +    wi.safe = false;
> +    wi.done = NULL;
>
> -    queue_work_on_cpu(cpu, wi);
> +    queue_work_on_cpu(cpu, &wi);
>  }
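
Same trick in the async variants: the stack-allocated wi is only safe
because queue_work_on_cpu() copies it into the array before returning,
which is why the wi->free/g_free() ownership dance can go away entirely.
In effect the sequence is (sketch only):

    struct qemu_work_item wi = { .func = func, .data = data };
    g_array_append_val(cpu->queued_work, wi);  /* value copied; wi can
                                                  die with this frame */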
>
>  void async_safe_run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data)
>  {
> -    struct qemu_work_item *wi;
> +    struct qemu_work_item wi;
>
> -    wi = g_malloc0(sizeof(struct qemu_work_item));
> -    wi->func = func;
> -    wi->data = data;
> -    wi->free = true;
> -    wi->safe = true;
> +    wi.func = func;
> +    wi.data = data;
> +    wi.safe = true;
> +    wi.done = NULL;
>
> -    queue_work_on_cpu(cpu, wi);
> +    queue_work_on_cpu(cpu, &wi);
>  }
>
>  void process_queued_cpu_work(CPUState *cpu)
>  {
>      struct qemu_work_item *wi;
> -
> -    if (cpu->queued_work_first == NULL) {
> -        return;
> -    }
> +    GArray *work_list = NULL;
> +    int i;
>
>      qemu_mutex_lock(&cpu->work_mutex);
> -    while (cpu->queued_work_first != NULL) {
> -        wi = cpu->queued_work_first;
> -        cpu->queued_work_first = wi->next;
> -        if (!cpu->queued_work_first) {
> -            cpu->queued_work_last = NULL;
> -        }
> -        if (wi->safe) {
> -            while (tcg_pending_threads) {
> -                qemu_cond_wait(&qemu_exclusive_cond,
> -                               qemu_get_cpu_work_mutex());
> +
> +    work_list = cpu->queued_work;
> +    cpu->queued_work = NULL;
> +
> +    qemu_mutex_unlock(&cpu->work_mutex);
> +
> +    if (work_list) {
> +
> +        g_assert(work_list->len > 0);
> +
> +        for (i = 0; i < work_list->len; i++) {
> +            wi = &g_array_index(work_list, struct qemu_work_item, i);
> +
> +            if (wi->safe) {
> +                while (tcg_pending_threads) {
> +                    qemu_cond_wait(&qemu_exclusive_cond,
> +                                   qemu_get_cpu_work_mutex());
> +                }
>              }
> -        }
> -        qemu_mutex_unlock(&cpu->work_mutex);
> -        wi->func(cpu, wi->data);
> -        qemu_mutex_lock(&cpu->work_mutex);
> -        if (wi->safe) {
> -            if (!atomic_dec_fetch(&safe_work_pending)) {
> -                qemu_cond_broadcast(&qemu_safe_work_cond);
> +
> +            wi->func(cpu, wi->data);
> +
> +            if (wi->safe) {
> +                if (!atomic_dec_fetch(&safe_work_pending)) {
> +                    qemu_cond_broadcast(&qemu_safe_work_cond);
> +                }
> +            }
> +
> +            if (wi->done) {
> +                atomic_mb_set(wi->done, true);
>              }
>          }
> -        if (wi->free) {
> -            g_free(wi);
> -        } else {
> -            atomic_mb_set(&wi->done, true);
> -        }
> +
> +        trace_process_queued_cpu_work(cpu->cpu_index,
> +                                      work_list->len);
And so was this.
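
For reviewers wondering about the locking change here: rather than
bouncing work_mutex around every callback as the old loop did, the whole
array is detached under the lock and processed outside it. A minimal
sketch of the pattern (the g_array_free() presumably follows in the part
of the hunk not quoted above):

    qemu_mutex_lock(&cpu->work_mutex);
    GArray *work_list = cpu->queued_work;  /* steal the whole queue */
    cpu->queued_work = NULL;               /* new work starts a fresh array */
    qemu_mutex_unlock(&cpu->work_mutex);

    if (work_list) {
        for (guint i = 0; i < work_list->len; i++) {
            struct qemu_work_item *item =
                &g_array_index(work_list, struct qemu_work_item, i);
            item->func(cpu, item->data);   /* run without the lock held */
        }
        g_array_free(work_list, TRUE);     /* one free for the whole batch */
    }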
--
Alex Bennée