Message-ID: <54213809.1020707@huawei.com>
Date: Tue, 23 Sep 2014 17:06:17 +0800
From: zhanghailiang
Subject: Re: [Qemu-devel] [PATCH] vl: Adjust the place of calling mlockall to speedup VM's startup
In-Reply-To: <20140923083555.GB31177@G08FNSTD100614.fnst.cn.fujitsu.com>
To: Hu Tao, "Michael S. Tsirkin"
Cc: imammedo@redhat.com, luonengjun@huawei.com, qemu-devel@nongnu.org, peter.huangpeng@huawei.com

On 2014/9/23 16:35, Hu Tao wrote:
> On Tue, Sep 23, 2014 at 11:30:26AM +0300, Michael S. Tsirkin wrote:
>> On Tue, Sep 23, 2014 at 03:57:47PM +0800, zhanghailiang wrote:
>>> If we configure mlock=on and memory policy=bind at the same time,
>>> the system spends a lot of time handling memory, especially when
>>> mbind is called after mlockall.
>>>
>>> Adjusting the place of the mlockall call, so that mbind is called
>>> before mlockall, remarkably reduces the VM's startup time.
>>>
>>> Signed-off-by: zhanghailiang
>>
>> The idea makes absolute sense to me:
>> bind after lock forces a data copy of
>> all pages.
>> Bind before lock gives us an
>> indication of where to put data on fault-in.
>
> Agreed.
>
>>
>> Acked-by: Michael S. Tsirkin
>>
>>
>>> ---
>>> Hi,
>>>
>>> Actually, for mbind and mlockall, I have made a test of the time
>>> consumed by the two different call sequences.
>>>
>>> The results are shown below. It is obvious that calling mlockall
>>> before mbind is more time-consuming.
>>>
>>> Besides, this patch is OK with memory hotplug.
>>>
>>> TEST CODE:
>>>     if (mbind_first) {
>>>         printf("mbind --> mlockall\n");
>>>         mbind(ptr, ram_size/2, MPOL_BIND, &node0mask, 2,
>>>               MPOL_MF_STRICT | MPOL_MF_MOVE);
>>>         mbind(ptr + ram_size/2, ram_size/2, MPOL_BIND, &node1mask, 2,
>>>               MPOL_MF_STRICT | MPOL_MF_MOVE);
>>>         mlockall(MCL_CURRENT | MCL_FUTURE);
>>>     } else {
>>>         printf("mlockall --> mbind\n");
>>>         mlockall(MCL_CURRENT | MCL_FUTURE);
>>>         mbind(ptr, ram_size/2, MPOL_BIND, &node0mask, 2,
>>>               MPOL_MF_STRICT | MPOL_MF_MOVE);
>>>         mbind(ptr + ram_size/2, ram_size/2, MPOL_BIND, &node1mask, 2,
>>>               MPOL_MF_STRICT | MPOL_MF_MOVE);
>>>     }
>>>
>>> RESULT 1:
>>> # time /home/test_mbind 10240 0
>>> memory size 10737418240
>>> mlockall --> mbind
>>>
>>> real    0m11.886s
>>> user    0m0.004s
>>> sys     0m11.865s
>>> # time /home/test_mbind 10240 1
>>> memory size 10737418240
>>> mbind --> mlockall
>>>
>>> real    0m5.334s
>>> user    0m0.000s
>>> sys     0m5.324s
>>>
>>> RESULT 2:
>>> # time /home/test_mbind 4096 0
>>> memory size 4294967296
>>> mlockall --> mbind
>>>
>>> real    0m5.503s
>>> user    0m0.000s
>>> sys     0m5.492s
>>> # time /home/test_mbind 4096 1
>>> memory size 4294967296
>>> mbind --> mlockall
>>>
>>> real    0m2.139s
>>> user    0m0.000s
>>> sys     0m2.132s
>>>
>>> Best Regards,
>>> zhanghailiang
>>> ---
>>>  vl.c | 11 +++++------
>>>  1 file changed, 5 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/vl.c b/vl.c
>>> index dc792fe..adf4770 100644
>>> --- a/vl.c
>>> +++ b/vl.c
>>> @@ -134,6 +134,7 @@ const char* keyboard_layout = NULL;
>>>  ram_addr_t ram_size;
>>>  const char *mem_path = NULL;
>>>  int mem_prealloc = 0; /* force preallocation of physical target memory */
>>> +int enable_mlock = false;
>
> Why not bool?
>

Er, that is my fault. Will fix it and submit V2, thanks ;)

> Regards,
> Hu