From: Linhaifeng <haifeng.lin@huawei.com>
To: "Daniel P. Berrange" <berrange@redhat.com>
Cc: mst@redhat.com, qemu-devel@nongnu.org, jerry.lilijun@huawei.com
Subject: Re: [Qemu-devel] [PATCH] fix the memory leak for share hugepage
Date: Sat, 18 Oct 2014 11:20:13 +0800
Message-ID: <5441DC6D.2060602@huawei.com>
In-Reply-To: <20141017132626.GA6628@redhat.com>
On 2014/10/17 21:26, Daniel P. Berrange wrote:
> On Fri, Oct 17, 2014 at 04:57:27PM +0800, Linhaifeng wrote:
>>
>>
>> On 2014/10/17 16:33, Daniel P. Berrange wrote:
>>> On Fri, Oct 17, 2014 at 04:27:17PM +0800, haifeng.lin@huawei.com wrote:
>>>> From: linhaifeng <haifeng.lin@huawei.com>
>>>>
>>>> A VM started with shared hugepages should close the hugepage file
>>>> descriptors when it exits, because the fds may have been sent to
>>>> another process, e.g. vhost-user. If QEMU does not close the fds,
>>>> the other process cannot free the hugepages unless it exits itself,
>>>> which is ugly, so QEMU should close all shared fds on exit.
>>>>
>>>> Signed-off-by: linhaifeng <haifeng.lin@huawei.com>
>>>
>>> Err, all file descriptors are closed automatically when a process
>>> exits. So manually calling close(fd) before exit can't have any
>>> functional effect on a resource leak.
>>>
>>> If QEMU has sent the FD to another process, that process has a
>>> completely separate copy of the FD. Closing the FD in QEMU will
>>> not close the FD in the other process. You need the other process
>>> to exit for the copy to be closed.
>>>
>>> Regards,
>>> Daniel
>>>
>> Hi, Daniel
>>
>> QEMU sends the fd over a unix domain socket. The unix domain socket just
>> installs the fd into the other process and increments f_count; if QEMU does
>> not close the fd, f_count is never decremented. Then even if the other
>> process closes its fd, the hugepages are not freed unless that process exits.
>
> The kernel always closes all FDs when a process exits. So if this FD is
> not being correctly closed then it is a kernel bug. There should never
> be any reason for an application to do close(fd) before exiting.
>
> Regards,
> Daniel
>
Hi, Daniel
I don't think this is a kernel bug; maybe it is a problem of usage.
If you open a file, you should close it too.
This is the Linux man page about how to release a file's resources:
http://linux.die.net/man/2/close
Let me try to describe my problem.
For example, suppose there are 2 VMs running with hugepages, and the hugepages are used only by QEMU.
Before the VMs run, the meminfo is:
HugePages_Total: 4096
HugePages_Free: 4096
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Run the two VMs. QEMU handles the hugepages in the following steps (a sketch of this flow in code follows after the example):
1. open
2. unlink
3. mmap
4. use the hugepage memory. After this step the meminfo is:
HugePages_Total: 4096
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
5. shut down the VMs with signal 15 (SIGTERM), without calling close(fd). After this step the meminfo is:
HugePages_Total: 4096
HugePages_Free: 4096
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Yes, it works well; as you said, the kernel reclaims all resources.
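For clarity, here is a minimal sketch of that flow. The hugetlbfs path, file name, and sizes are my assumptions for illustration; this is not QEMU's actual code:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define MAP_SIZE (4096UL * 1024 * 1024)   /* 2048 x 2MB hugepages per VM */

int main(void)
{
    /* 1. open a file on the hugetlbfs mount (path is an assumption) */
    int fd = open("/dev/hugepages/qemu_back_mem", O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("open"); return 1; }

    /* 2. unlink it so no name is left behind; the hugepages now stay
     *    allocated only while some fd or mapping still references them */
    unlink("/dev/hugepages/qemu_back_mem");

    if (ftruncate(fd, MAP_SIZE) < 0) { perror("ftruncate"); return 1; }

    /* 3. mmap the hugepage-backed region */
    char *mem = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED) { perror("mmap"); return 1; }

    /* 4. use the memory; HugePages_Free drops as pages are touched */
    mem[0] = 1;

    /* 5. exit without close(fd)/munmap(); the kernel drops both the fd
     *    and the mapping, so HugePages_Free goes back up */
    return 0;
}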
For another example, suppose there are 2 VMs running with hugepages, and they share the hugepages with vapp (a vhost-user application).
Before the VMs run, the meminfo is:
HugePages_Total: 4096
HugePages_Free: 4096
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Run the first VM. QEMU handles the hugepages in the following steps:
1. open
2. unlink
3. mmap
4. use the hugepage memory and send the fd to vapp over a unix domain socket (see the sketch after the meminfo below). After this step the meminfo is:
HugePages_Total: 4096
HugePages_Free: 2048
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
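Here is a minimal sketch of how such an fd is passed over a unix domain socket with SCM_RIGHTS (the real vhost-user protocol wraps this in its own message format; sock is assumed to be a connected AF_UNIX socket):

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

static int send_fd(int sock, int fd)
{
    char byte = 0;
    struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
    union {                       /* aligned control buffer for one fd */
        struct cmsghdr align;
        char buf[CMSG_SPACE(sizeof(int))];
    } u;
    struct msghdr msg = {
        .msg_iov = &iov, .msg_iovlen = 1,
        .msg_control = u.buf, .msg_controllen = sizeof(u.buf),
    };
    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;             /* pass a file descriptor */
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

    /* The kernel installs a copy of fd in the receiving process and
     * increments the underlying struct file's f_count: both processes
     * now hold a reference, and the hugepages can only be freed after
     * every reference is dropped. */
    return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
}

This is the f_count behaviour I described above: after send_fd(), two processes reference the same struct file.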
Run the second VM. After this step the meminfo is:
HugePages_Total: 4096
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Then I want to shut down the first VM and run another one. After shutting down the first VM and closing the fd in vapp, the meminfo is:
HugePages_Total: 4096
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
So running the third VM fails, because the first VM has not freed its hugepages. After applying this patch, the meminfo is:
HugePages_Total: 4096
HugePages_Free: 2048
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
So I can run the third VM successfully.
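Conceptually, what the patch does is drop QEMU's own references to the shared fds before exit, along the lines of this sketch (the helper and array names are hypothetical, not the actual patch code):

#include <unistd.h>

#define MAX_SHARED_FDS 64

static int shared_hugepage_fds[MAX_SHARED_FDS];  /* fds sent to vhost-user */
static int nr_shared_fds;

/* Called on the VM shutdown path: drop QEMU's references so that once
 * vapp also closes its copies, f_count reaches zero and the hugepages
 * return to HugePages_Free. */
static void close_shared_hugepage_fds(void)
{
    int i;

    for (i = 0; i < nr_shared_fds; i++) {
        close(shared_hugepage_fds[i]);
    }
    nr_shared_fds = 0;
}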
--
Regards,
Haifeng