From: "Philippe Mathieu-Daudé" <philippe.mathieu.daude@gmail.com>
To: Igor Mammedov <imammedo@redhat.com>
Cc: kvm@vger.kernel.org, "Marcelo Tosatti" <mtosatti@redhat.com>,
qemu-devel@nongnu.org, "Philippe Mathieu-Daudé" <f4bug@amsat.org>,
"Paolo Bonzini" <pbonzini@redhat.com>
Subject: Re: [RFC PATCH-for-7.0 v4] target/i386/kvm: Free xsave_buf when destroying vCPU
Date: Tue, 22 Mar 2022 15:11:34 +0100
Message-ID: <4967c8c2-36be-fa58-d111-bf33342fe3cd@gmail.com>
In-Reply-To: <20220322145629.7e0b3b8c@redhat.com>
On 22/3/22 14:56, Igor Mammedov wrote:
> On Tue, 22 Mar 2022 13:05:22 +0100
> Philippe Mathieu-Daudé <philippe.mathieu.daude@gmail.com> wrote:
>
>> From: Philippe Mathieu-Daudé <f4bug@amsat.org>
>>
>> Fix vCPU hot-unplug related leak reported by Valgrind:
>>
>> ==132362== 4,096 bytes in 1 blocks are definitely lost in loss record 8,440 of 8,549
>> ==132362== at 0x4C3B15F: memalign (vg_replace_malloc.c:1265)
>> ==132362== by 0x4C3B288: posix_memalign (vg_replace_malloc.c:1429)
>> ==132362== by 0xB41195: qemu_try_memalign (memalign.c:53)
>> ==132362== by 0xB41204: qemu_memalign (memalign.c:73)
>> ==132362== by 0x7131CB: kvm_init_xsave (kvm.c:1601)
>> ==132362== by 0x7148ED: kvm_arch_init_vcpu (kvm.c:2031)
>> ==132362== by 0x91D224: kvm_init_vcpu (kvm-all.c:516)
>> ==132362== by 0x9242C9: kvm_vcpu_thread_fn (kvm-accel-ops.c:40)
>> ==132362== by 0xB2EB26: qemu_thread_start (qemu-thread-posix.c:556)
>> ==132362== by 0x7EB2159: start_thread (in /usr/lib64/libpthread-2.28.so)
>> ==132362== by 0x9D45DD2: clone (in /usr/lib64/libc-2.28.so)
>>
>> Reported-by: Mark Kanda <mark.kanda@oracle.com>
>> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
>> ---
>> Based on a series from Mark:
>> https://lore.kernel.org/qemu-devel/20220321141409.3112932-1-mark.kanda@oracle.com/
>>
>> RFC because I currently have no time to test this.
>> ---
>> target/i386/kvm/kvm.c | 2 ++
>> 1 file changed, 2 insertions(+)
>>
>> diff --git a/target/i386/kvm/kvm.c b/target/i386/kvm/kvm.c
>> index ef2c68a6f4..e93440e774 100644
>> --- a/target/i386/kvm/kvm.c
>> +++ b/target/i386/kvm/kvm.c
>> @@ -2072,6 +2072,8 @@ int kvm_arch_destroy_vcpu(CPUState *cs)
>>      X86CPU *cpu = X86_CPU(cs);
>>      CPUX86State *env = &cpu->env;
>>
>> +    g_free(env->xsave_buf);
>> +
>>      if (cpu->kvm_msr_buf) {
>>          g_free(cpu->kvm_msr_buf);
>>          cpu->kvm_msr_buf = NULL;
>
>
> shouldn't we do the same in hvf_arch_vcpu_destroy() ?
Yeah, HVF needs a similar patch (at least hvf_caps needs to be released
too, but I haven't had time to review it carefully yet).
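
Untested sketch of what the HVF counterpart might look like, assuming
hvf_arch_vcpu_destroy() follows the usual X86CPU/CPUX86State pattern and
that the HVF side also allocates env->xsave_buf with qemu_memalign() in
hvf_arch_init_vcpu() (IIRC it does):

  void hvf_arch_vcpu_destroy(CPUState *cpu)
  {
      X86CPU *x86_cpu = X86_CPU(cpu);
      CPUX86State *env = &x86_cpu->env;

      /* Counterpart of the KVM hunk above; mirrors its use of
       * g_free() on a qemu_memalign() allocation. */
      g_free(env->xsave_buf);

      /* Existing teardown would stay here; whether/where to release
       * hvf_caps still needs review, so it is deliberately omitted. */
  }

(Not even compile-tested; placement within the function is illustrative
only.)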
Thread overview: 6+ messages
2022-03-22 12:05 [RFC PATCH-for-7.0 v4] target/i386/kvm: Free xsave_buf when destroying vCPU Philippe Mathieu-Daudé
2022-03-22 13:29 ` Philippe Mathieu-Daudé
2022-03-22 14:01 ` Mark Kanda
2022-03-22 13:56 ` Igor Mammedov
2022-03-22 14:11 ` Philippe Mathieu-Daudé [this message]
2022-03-22 16:11 ` Paolo Bonzini