From: Izik Eidus
Subject: Re: [PATCH 0/4] Swapping
Date: Sun, 14 Oct 2007 08:10:27 +0200
Message-ID: <4711B2D3.3070504@qumranet.com>
References: <47102823.2000600@qumranet.com> <47115E75.1040203@codemonkey.ws> <47115F6A.7080800@codemonkey.ws>
In-Reply-To: <47115F6A.7080800-rdkfGonbjUSkNkDKm+mE6A@public.gmane.org>
Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org
List-Id: kvm.vger.kernel.org
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit

Anthony Liguori wrote:
> Anthony Liguori wrote:
>> Very nice!
>>
>> I've tested this series (with your new 3/4) with win2k, winxp, ubuntu
>> 7.10, and opensuse. Everything seemed to work just fine.
>
> Spoke too soon, found the following in dmesg:
>
> [35078.913071] BUG: scheduling while atomic: qemu-system-x86/0x10000001/21612
> [35078.913077]
> [35078.913079] Call Trace:
> [35078.913112]  [] thread_return+0x21e/0x6c9
> [35078.913129]  [] zone_statistics+0x7d/0x80
> [35078.913139]  [] get_page_from_freelist+0x441/0x5b0
> [35078.913168]  [] __cond_resched+0x1c/0x50
> [35078.913174]  [] cond_resched+0x32/0x40
> [35078.913181]  [] down_read+0x9/0x20
> [35078.913199]  [] :kvm:gfn_to_page+0x4c/0x130
> [35078.913207]  [] vm_normal_page+0x3d/0xc0
> [35078.913230]  [] :kvm:gpa_to_hpa+0x24/0x70
> [35078.913249]  [] :kvm:paging32_set_pte_common+0x9e/0x2b0
> [35078.913285]  [] :kvm:paging32_set_pte+0x49/0x50
> [35078.913308]  [] :kvm:kvm_mmu_pte_write+0x33d/0x3b0
> [35078.913350]  [] :kvm:paging32_walk_addr+0x292/0x310
> [35078.913383]  [] :kvm:paging32_page_fault+0xc0/0x300
> [35078.913399]  [] :kvm:x86_emulate_insn+0x11c/0x4190
> [35078.913448]  [] :kvm_intel:handle_exception+0x21b/0x2a0
> [35078.913474]  [] :kvm:kvm_vcpu_ioctl+0xddc/0x1130
> [35078.913488]  [] task_rq_lock+0x4c/0x90
> [35078.913494]  [] __activate_task+0x29/0x50
> [35078.913504]  [] try_to_wake_up+0x5c/0x3f0
> [35078.913511]  [] futex_wait+0x2df/0x3c0
> [35078.913521]  [] task_rq_lock+0x4c/0x90
> [35078.913528]  [] __activate_task+0x29/0x50
> [35078.913545]  [] __wake_up_common+0x47/0x80
> [35078.913561]  [] __wake_up+0x43/0x70
> [35078.913575]  [] __up_read+0x21/0xb0
> [35078.913585]  [] futex_wake+0xd0/0xf0
> [35078.913617]  [] __dequeue_signal+0x110/0x1d0
> [35078.913633]  [] recalc_sigpending+0xe/0x30
> [35078.913638]  [] dequeue_signal+0x5c/0x190
> [35078.913662]  [] do_ioctl+0x35/0xe0
> [35078.913675]  [] vfs_ioctl+0x74/0x2d0
> [35078.913680]  [] recalc_sigpending+0xe/0x30
> [35078.913684]  [] sigprocmask+0x67/0xf0
> [35078.913697]  [] sys_ioctl+0x95/0xb0
> [35078.913715]  [] system_call+0x7e/0x83
> [35078.913743]
>

This is funny, but the message is "ok" -- I mentioned it when I sent the
patch. It happens because kvm disables local interrupts in this path:
some emulator functions get called there and do gfn_to_page(), and we
have to split those functions. So it isn't really a bug in the swapping;
it is because get_user_pages() does cond_resched(). Once we split the
emulator functions, this message will go away :)
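Roughly, the pattern that triggers the warning looks like this (just a
conceptual sketch to show the problem, not the real kvm code -- only
gfn_to_page(), down_read()/get_user_pages() and cond_resched() come from
the trace above, the function name and the rest are made up):

    /*
     * Sketch only, not the actual kvm code: why "scheduling while
     * atomic" fires.  The emulator path runs with local interrupts
     * disabled, but the host-page lookup below may sleep.
     */
    static void emulator_path_sketch(struct kvm *kvm, gfn_t gfn)
    {
            struct page *page;

            local_irq_disable();            /* atomic context begins */

            /* ... decode and emulate the faulting instruction ... */

            page = gfn_to_page(kvm, gfn);   /* takes mmap_sem and calls
                                             * get_user_pages(), which may
                                             * cond_resched() -> schedule(),
                                             * i.e. sleep with irqs off */

            local_irq_enable();             /* atomic context ends */
    }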
> Regards,
>
> Anthony Liguori
>
>> I also was able to create four 1G VMs on my 2G laptop :-) That was
>> very neat.
>>
>> Regards,
>>
>> Anthony Liguori
>>
>> Izik Eidus wrote:
>>> These patches allow the guest's non-shadowed memory to be swapped out.
>>>
>>> To make it most effective you should run with -kvm-shadow-memory 1
>>> (which will make your machine slow).
>>> With -kvm-shadow-memory 1, a 3 GB guest can get down to just
>>> 32 MB on the physical host!
>>>
>>> When not using -kvm-shadow-memory, I saw a 4100 MB machine get as
>>> low as 168 MB on the physical host (not as bad as I thought it
>>> would be, and surely not as bad as it can be with 41 MB of shadow
>>> pages :))
>>>
>>>
>>> It seems to be very stable; it didn't crash on me once, and I was
>>> able to run:
>>> two Windows XP guests with 3 GB each + a 5 GB Linux guest
>>>
>>> and
>>> two Windows XP guests with 4.1 GB each and two Windows XP guests
>>> with 2 GB each.
>>>
>>> A few things to note:
>>> Ignore for now the ugly messages in dmesg; they are due to the fact
>>> that gfn_to_page tries to sleep while local interrupts are disabled
>>> (we have to split some emulator functions so it won't do that).
>>>
>>> I also saw an issue with the new rmap on the Fedora 7 live CD: for
>>> some reason, in nonpaging mode rmap_remove gets called about 50
>>> times less often than it should.
>>> It doesn't happen with other Linux guests; I need to check this...
>>> (for now it means you might leak about 200k of memory for each
>>> Fedora 7 live CD you are running).
>>>
>>> Also note that kvm now loads much faster, because no memset over
>>> all the memory is needed (because gfn_to_page gets called at run
>>> time).
>>>
>>> (Avi and Dor, note that these patches include a small fix to a bug
>>> in the patch that I sent you.)