qemu-devel.nongnu.org archive mirror
From: Pierrick Bouvier <pierrick.bouvier@linaro.org>
To: Matheus Tavares Bernardino <quic_mathbern@quicinc.com>,
	qemu-devel@nongnu.org
Cc: alex.bennee@linaro.org, philmd@linaro.org,
	qemu-trivial@nongnu.org,
	Richard Henderson <richard.henderson@linaro.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Eduardo Habkost <eduardo@habkost.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Yanan Wang <wangyanan55@huawei.com>
Subject: Re: [PATCH] cpu: fix memleak of 'halt_cond' and 'thread'
Date: Fri, 14 Jun 2024 10:49:05 -0700	[thread overview]
Message-ID: <baf59b50-c6b0-4904-839f-2e13565db92c@linaro.org> (raw)
In-Reply-To: <3ad18bc590ef28e1526e8053568086b453e7ffde.1718211878.git.quic_mathbern@quicinc.com>

On 6/12/24 10:04, Matheus Tavares Bernardino wrote:
> Since a4c2735f35 (cpu: move Qemu[Thread|Cond] setup into common code,
> 2024-05-30), these fields are allocated in cpu_common_initfn(), so
> make sure we also free them in cpu_common_finalize().
> 
> Furthermore, the round-robin code already frees these structures when
> vCPUs share the single TCG thread, but it missed 'halt_cond'.
> 
> Signed-off-by: Matheus Tavares Bernardino <quic_mathbern@quicinc.com>
> ---
>   accel/tcg/tcg-accel-ops-rr.c | 1 +
>   hw/core/cpu-common.c         | 3 +++
>   2 files changed, 4 insertions(+)
> 
> diff --git a/accel/tcg/tcg-accel-ops-rr.c b/accel/tcg/tcg-accel-ops-rr.c
> index 84c36c1450..48c38714bd 100644
> --- a/accel/tcg/tcg-accel-ops-rr.c
> +++ b/accel/tcg/tcg-accel-ops-rr.c
> @@ -329,6 +329,7 @@ void rr_start_vcpu_thread(CPUState *cpu)
>           /* we share the thread, dump spare data */
>           g_free(cpu->thread);
>           qemu_cond_destroy(cpu->halt_cond);
> +        g_free(cpu->halt_cond);
>           cpu->thread = single_tcg_cpu_thread;
>           cpu->halt_cond = single_tcg_halt_cond;
>   
> diff --git a/hw/core/cpu-common.c b/hw/core/cpu-common.c
> index bf1a7b8892..f131cde2c0 100644
> --- a/hw/core/cpu-common.c
> +++ b/hw/core/cpu-common.c
> @@ -286,6 +286,9 @@ static void cpu_common_finalize(Object *obj)
>       g_array_free(cpu->gdb_regs, TRUE);
>       qemu_lockcnt_destroy(&cpu->in_ioctl_lock);
>       qemu_mutex_destroy(&cpu->work_mutex);
> +    qemu_cond_destroy(cpu->halt_cond);
> +    g_free(cpu->halt_cond);
> +    g_free(cpu->thread);
>   }
>   
>   static int64_t cpu_common_get_arch_id(CPUState *cpu)

Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>



Thread overview: 4+ messages
2024-06-12 17:04 [PATCH] cpu: fix memleak of 'halt_cond' and 'thread' Matheus Tavares Bernardino
2024-06-13  9:39 ` Philippe Mathieu-Daudé
2024-06-14 17:49 ` Pierrick Bouvier [this message]
2024-06-17 12:48 ` Michael Tokarev
