From: "Philippe Mathieu-Daudé" <philippe.mathieu.daude@gmail.com>
To: Mark Kanda <mark.kanda@oracle.com>, qemu-devel@nongnu.org
Cc: pbonzini@redhat.com, richard.henderson@linaro.org, f4bug@amsat.org
Subject: Re: [PATCH v3 5/5] i386/cpu: Free env->xsave_buf in KVM and HVF destory_vcpu_thread routines
Date: Mon, 21 Mar 2022 23:08:37 +0100 [thread overview]
Message-ID: <38ac951e-011d-547b-21cf-347a2a06a2a6@gmail.com>
In-Reply-To: <1938c323-1737-479d-2e3b-baa6c746da4a@gmail.com>
On 21/3/22 23:04, Philippe Mathieu-Daudé wrote:
> On 21/3/22 15:14, Mark Kanda wrote:
>> Create KVM and HVF specific destory_vcpu_thread() routines to free
>
> Typo "destroy"
>
>> env->xsave_buf.
>>
>> vCPU hotunplug related leak reported by Valgrind:
>>
>> ==132362== 4,096 bytes in 1 blocks are definitely lost in loss record 8,440 of 8,549
>> ==132362== at 0x4C3B15F: memalign (vg_replace_malloc.c:1265)
>> ==132362== by 0x4C3B288: posix_memalign (vg_replace_malloc.c:1429)
>> ==132362== by 0xB41195: qemu_try_memalign (memalign.c:53)
>> ==132362== by 0xB41204: qemu_memalign (memalign.c:73)
>> ==132362== by 0x7131CB: kvm_init_xsave (kvm.c:1601)
>> ==132362== by 0x7148ED: kvm_arch_init_vcpu (kvm.c:2031)
>> ==132362== by 0x91D224: kvm_init_vcpu (kvm-all.c:516)
>> ==132362== by 0x9242C9: kvm_vcpu_thread_fn (kvm-accel-ops.c:40)
>> ==132362== by 0xB2EB26: qemu_thread_start (qemu-thread-posix.c:556)
>> ==132362== by 0x7EB2159: start_thread (in /usr/lib64/libpthread-2.28.so)
>> ==132362== by 0x9D45DD2: clone (in /usr/lib64/libc-2.28.so)
>>
>> Signed-off-by: Mark Kanda <mark.kanda@oracle.com>
>> ---
>> accel/hvf/hvf-accel-ops.c | 11 ++++++++++-
>> accel/kvm/kvm-accel-ops.c | 11 ++++++++++-
>> 2 files changed, 20 insertions(+), 2 deletions(-)
>
> Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
I meant:
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
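
For context, the patch body itself is not quoted above. A minimal sketch of the KVM-side hook the commit message describes could look like the code below; it is illustrative only, not the actual hunk. The generic_destroy_vcpu_thread() fallback and the exact wiring into the accelerator ops are assumptions based on the earlier patches in this series.

/*
 * Illustrative sketch (not the actual patch): an x86/KVM-specific
 * destroy_vcpu_thread hook that releases the buffer allocated with
 * qemu_memalign() in kvm_init_xsave(), then defers to the generic
 * teardown. Names other than env->xsave_buf are assumptions.
 */
#include "qemu/osdep.h"
#include "qemu/memalign.h"
#include "cpu.h"

static void kvm_destroy_vcpu_thread(CPUState *cpu)
{
    X86CPU *x86_cpu = X86_CPU(cpu);
    CPUX86State *env = &x86_cpu->env;

    /* Pair with the qemu_memalign() call seen in the Valgrind trace */
    qemu_vfree(env->xsave_buf);
    env->xsave_buf = NULL;

    /* Assumed generic hook from patches 2-3/5: frees cpu->thread etc. */
    generic_destroy_vcpu_thread(cpu);
}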
Thread overview: 16+ messages
2022-03-21 14:14 [PATCH v3 0/5] vCPU hotunplug related memory leaks Mark Kanda
2022-03-21 14:14 ` [PATCH v3 1/5] accel: Introduce AccelOpsClass::destroy_vcpu_thread() Mark Kanda
2022-03-21 14:14 ` [PATCH v3 2/5] softmmu/cpus: Free cpu->thread in generic_destroy_vcpu_thread() Mark Kanda
2022-03-21 22:08 ` Philippe Mathieu-Daudé
2022-03-23 14:43 ` Paolo Bonzini
2022-03-21 14:14 ` [PATCH v3 3/5] softmmu/cpus: Free cpu->halt_cond " Mark Kanda
2022-03-21 22:12 ` Philippe Mathieu-Daudé
2022-03-22 12:52 ` Mark Kanda
2022-03-22 13:32 ` Philippe Mathieu-Daudé
2022-03-21 14:14 ` [PATCH v3 4/5] cpu: Free cpu->cpu_ases in cpu_address_space_destroy() Mark Kanda
2022-03-21 22:03 ` Philippe Mathieu-Daudé
2022-03-21 22:08 ` Philippe Mathieu-Daudé
2022-03-21 14:14 ` [PATCH v3 5/5] i386/cpu: Free env->xsave_buf in KVM and HVF destory_vcpu_thread routines Mark Kanda
2022-03-21 22:04 ` Philippe Mathieu-Daudé
2022-03-21 22:08 ` Philippe Mathieu-Daudé [this message]
2022-03-22 12:01 ` Philippe Mathieu-Daudé