* [PATCH 0/3] add support for dynamic allocation of mmu pages.
@ 2007-08-18 19:51 Izik Eidus
From: Izik Eidus @ 2007-08-18 19:51 UTC (permalink / raw)
To: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
This patch makes KVM dynamically allocate memory for its MMU pages buffer.

Until now, KVM allocated just 1024 pages (4MB) no matter what the guest RAM
size was.

Because the MMU pages buffer was so small, a lot of pages that held "correct"
information about the guest PTEs had to be released.

What I did here is the first step toward one or both of the options below:

1) adding support for KVM to grow and shrink its MMU pages buffer at runtime,
   based on how many times the mmu_free_some_pages function is called.

2) adding support for KVM to share the MMU buffers among all running VMs; in
   that case an idle VM would give some of its MMU buffer to a hard-working VM.

I wrote this patch with these two options in mind, and therefore I used lists
rather than an array, and made each entry of the list 1MB (holding a list of
256 pages). It is now very easy and inexpensive to delete/add/move or do
anything else we want with such a 1MB block.

An ugly "benchmark" I ran showed that when a 512MB guest used 1% of its RAM
for its MMU buffer and compiled the Linux kernel with -j 8, it had
21,100,000 "fixed" page-fault exits and took 8:10 (min:sec).

When the same guest with the same amount of RAM used 2% of the 512MB for its
MMU buffer, it compiled the Linux kernel with -j 8 in 7:48 and had just
17,500,000 "fixed" page-fault exits.

(The more RAM the guest has, the bigger the improvement over the unpatched
code should be.)

(This benchmark was really ugly; I didn't use a RAM drive or anything like
that for the compilation.)

Oh, I should add that I added a function to free the lists and all the pages
allocated for the MMU pages, but I didn't write a single line in it because
I want to ask Avi something first, so don't blame me for stealing your RAM :)

Anyway, enjoy.
-------------------------------------------------------------------------
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems? Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now >> http://get.splunk.com/
* [PATCH 1/3] add support for dynamic allocation of mmu pages.
@ 2007-08-18 20:06 ` Izik Eidus
From: Izik Eidus @ 2007-08-18 20:06 UTC (permalink / raw)
To: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f

This simply changes kvm.h to work with the new lists and with blocks of 1MB.
Please note that KVM_MMU_PAGES_DIVIDER is the magic: it controls what share
of memory is allocated to the MMU pages.
KVM_MMU_PAGES_DIVIDER = 100 means 1%
KVM_MMU_PAGES_DIVIDER = 50 means 2%

[kvm_h.patch]

--- kvm.h	2007-08-15 11:37:07.000000000 +0300
+++ new_kvm.h	2007-08-19 06:21:23.000000000 +0300
@@ -40,7 +40,8 @@
 #define KVM_MAX_VCPUS 4
 #define KVM_ALIAS_SLOTS 4
 #define KVM_MEMORY_SLOTS 4
-#define KVM_NUM_MMU_PAGES 1024
+#define KVM_NUM_MMU_PAGES_BLOCK 256
+#define KVM_MMU_PAGES_DIVIDER 50
 #define KVM_MIN_FREE_MMU_PAGES 5
 #define KVM_REFILL_PAGES 25
 #define KVM_MAX_CPUID_ENTRIES 40
@@ -136,6 +137,11 @@
 	};
 };
 
+struct kvm_mmu_hash_block {
+	struct list_head head;
+	struct hlist_head mmu_page_hash[KVM_NUM_MMU_PAGES_BLOCK];
+};
+
 struct kvm_vcpu;
 extern struct kmem_cache *kvm_vcpu_cache;
 
@@ -399,7 +405,8 @@
 	 */
 	struct list_head active_mmu_pages;
 	int n_free_mmu_pages;
-	struct hlist_head mmu_page_hash[KVM_NUM_MMU_PAGES];
+	int n_mmu_page_hash_blocks;
+	struct list_head mmu_page_hash_blocks;
 	struct kvm_vcpu *vcpus[KVM_MAX_VCPUS];
 	int memory_config_version;
 	int busy;

_______________________________________________
kvm-devel mailing list
kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org
https://lists.sourceforge.net/lists/listinfo/kvm-devel
* Re: [PATCH 1/3] add support for dynamic allocation of mmu pages.
@ 2007-08-19  8:14 ` Avi Kivity
From: Avi Kivity @ 2007-08-19 8:14 UTC (permalink / raw)
To: Izik Eidus; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f

Izik Eidus wrote:
> simply change kvm.h to work with the new lists and with blocks of 1MB
> please note that the KVM_MMU_PAGES_DIVIDER is the magic, it control on
> how much % memory will be allocated to the mmu pages,
> KVM_MMU_PAGES_DIVIDER = 100 mean 1%
> KVM_MMU_PAGES_DIVIDER = 50 mean 2%

Instead of a divider, a fraction is easier to use. For example:

    mmu_pages = ((u64)nr_pages * KVM_MMU_PAGES_FRAC) >> 16;

To get 1%, set KVM_MMU_PAGES_FRAC to (65536 / 100).

> 	struct list_head active_mmu_pages;
> 	int n_free_mmu_pages;
> -	struct hlist_head mmu_page_hash[KVM_NUM_MMU_PAGES];
> +	int n_mmu_page_hash_blocks;
> +	struct list_head mmu_page_hash_blocks;
> 	struct kvm_vcpu *vcpus[KVM_MAX_VCPUS];
> 	int memory_config_version;
> 	int busy;

Nothing works after applying this patch, right? Each patch should be
self-contained, and everything should compile and work after applying it.

-- 
error compiling committee.c: too many arguments to function
* [PATCH 2/3] add support for dynamic allocation of mmu pages.
@ 2007-08-18 20:08 ` Izik Eidus
From: Izik Eidus @ 2007-08-18 20:08 UTC (permalink / raw)
To: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f

This simply makes the slot memory allocation function count how much it
allocates and create blocks of MMU pages accordingly; in addition, it
initializes the lists.

[kvm_main_c.patch]

--- kvm_main.c	2007-08-15 11:37:07.000000000 +0300
+++ new_kvm_main.c	2007-08-19 06:21:45.000000000 +0300
@@ -297,6 +297,8 @@
 	kvm_io_bus_init(&kvm->pio_bus);
 	mutex_init(&kvm->lock);
 	INIT_LIST_HEAD(&kvm->active_mmu_pages);
+	INIT_LIST_HEAD(&kvm->mmu_page_hash_blocks);
+	kvm->n_mmu_page_hash_blocks = 0;
 	kvm_io_bus_init(&kvm->mmio_bus);
 	spin_lock(&kvm_lock);
 	list_add(&kvm->vm_list, &vm_list);
@@ -336,6 +338,10 @@
 		kvm_free_physmem_slot(&kvm->memslots[i], NULL);
 }
 
+static void kvm_free_mmu_hash(struct kvm *kvm)
+{
+}
+
 static void free_pio_guest_pages(struct kvm_vcpu *vcpu)
 {
 	int i;
@@ -382,6 +388,7 @@
 	kvm_io_bus_destroy(&kvm->mmio_bus);
 	kvm_free_vcpus(kvm);
 	kvm_free_physmem(kvm);
+	kvm_free_mmu_hash(kvm);
 	kfree(kvm);
 }
 
@@ -637,6 +644,7 @@
 	struct kvm_memory_slot *memslot;
 	struct kvm_memory_slot old, new;
 	int memory_config_version;
+	int total_mmu_page_hash_blocks;
 
 	r = -EINVAL;
 	/* General sanity checks */
@@ -740,6 +748,20 @@
 	if (mem->slot >= kvm->nmemslots)
 		kvm->nmemslots = mem->slot + 1;
 
+	total_mmu_page_hash_blocks = (npages + kvm->n_mmu_page_hash_blocks *
+		KVM_NUM_MMU_PAGES_BLOCK * KVM_MMU_PAGES_DIVIDER) /
+		KVM_NUM_MMU_PAGES_BLOCK / KVM_MMU_PAGES_DIVIDER;
+
+	for (; kvm->n_mmu_page_hash_blocks < total_mmu_page_hash_blocks;
+	     ++kvm->n_mmu_page_hash_blocks) {
+		struct kvm_mmu_hash_block *hash_block =
+			vmalloc(sizeof(struct kvm_mmu_hash_block));
+
+		if (!hash_block)
+			goto out_unlock;
+
+		memset(hash_block, 0, sizeof(struct kvm_mmu_hash_block));
+		list_add(&hash_block->head, &kvm->mmu_page_hash_blocks);
+	}
+
 	*memslot = new;
 	++kvm->memory_config_version;
* Re: [PATCH 2/3] add support for dynamic allocation of mmu pages.
@ 2007-08-19  8:19 ` Avi Kivity
From: Avi Kivity @ 2007-08-19 8:19 UTC (permalink / raw)
To: Izik Eidus; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f

Izik Eidus wrote:
> simply make the slot memory allocation function to count how much it
> allocate, and create blocks of mmu pages by this,
> in addition it initiate the lists.
>
> +++ new_kvm_main.c	2007-08-19 06:21:45.000000000 +0300
> @@ -297,6 +297,8 @@

Please use 'diff -p' (or simply 'git diff') so that patches have more
context information.

> 	kvm_io_bus_init(&kvm->pio_bus);
> 	mutex_init(&kvm->lock);
> 	INIT_LIST_HEAD(&kvm->active_mmu_pages);
> +	INIT_LIST_HEAD(&kvm->mmu_page_hash_blocks);
> +	kvm->n_mmu_page_hash_blocks = 0;
>
> @@ -336,6 +338,10 @@
> +static void kvm_free_mmu_hash(struct kvm *kvm)
> +{
> +}
>
> @@ -740,6 +748,20 @@
> +	total_mmu_page_hash_blocks = (npages + kvm->n_mmu_page_hash_blocks *
> +		KVM_NUM_MMU_PAGES_BLOCK * KVM_MMU_PAGES_DIVIDER) /
> +		KVM_NUM_MMU_PAGES_BLOCK / KVM_MMU_PAGES_DIVIDER;
> +
> +	for (; kvm->n_mmu_page_hash_blocks < total_mmu_page_hash_blocks; ++kvm->n_mmu_page_hash_blocks) {

80 column limit.

> +		struct kvm_mmu_hash_block *hash_block = vmalloc(sizeof(struct kvm_mmu_hash_block));
> +
> +		if (!hash_block)
> +			goto out_unlock;
> +
> +		memset(hash_block, 0, sizeof(struct kvm_mmu_hash_block));
> +		list_add(&hash_block->head, &kvm->mmu_page_hash_blocks);
> +	}
> +
> 	*memslot = new;
> 	++kvm->memory_config_version;

This looks very coarse. Why not just resize the hash table?

-- 
error compiling committee.c: too many arguments to function
* [PATCH 3/3] add support for dynamic allocation of mmu pages.
@ 2007-08-18 20:09 ` Izik Eidus
From: Izik Eidus @ 2007-08-18 20:09 UTC (permalink / raw)
To: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f

This patch touches all the functions that use the MMU pages and teaches
them how to use the new lists.

[mmu_c.patch]

--- mmu.c	2007-08-15 11:37:08.000000000 +0300
+++ new_mmu.c	2007-08-19 06:27:56.000000000 +0300
@@ -586,14 +586,25 @@
 static struct kvm_mmu_page *kvm_mmu_lookup_page(struct kvm_vcpu *vcpu,
 						gfn_t gfn)
 {
-	unsigned index;
+	unsigned index, index_div_blocks;
 	struct hlist_head *bucket;
 	struct kvm_mmu_page *page;
 	struct hlist_node *node;
+	struct list_head *block_node;
+	struct kvm_mmu_hash_block *hash_block;
+	int i;
 
 	pgprintk("%s: looking for gfn %lx\n", __FUNCTION__, gfn);
-	index = kvm_page_table_hashfn(gfn) % KVM_NUM_MMU_PAGES;
-	bucket = &vcpu->kvm->mmu_page_hash[index];
+	index = kvm_page_table_hashfn(gfn) % (KVM_NUM_MMU_PAGES_BLOCK *
+		vcpu->kvm->n_mmu_page_hash_blocks);
+	block_node = vcpu->kvm->mmu_page_hash_blocks.next;
+	index_div_blocks = index / KVM_NUM_MMU_PAGES_BLOCK;
+
+	for (i = 0; i < index_div_blocks; ++i)
+		block_node = block_node->next;
+
+	hash_block = list_entry(block_node, struct kvm_mmu_hash_block, head);
+	bucket = &hash_block->mmu_page_hash[index % KVM_NUM_MMU_PAGES_BLOCK];
 	hlist_for_each_entry(page, node, bucket, hash_link)
 		if (page->gfn == gfn && !page->role.metaphysical) {
 			pgprintk("%s: found role %x\n",
@@ -612,11 +623,14 @@
 					     u64 *parent_pte)
 {
 	union kvm_mmu_page_role role;
-	unsigned index;
+	unsigned index, index_div_blocks;
 	unsigned quadrant;
 	struct hlist_head *bucket;
 	struct kvm_mmu_page *page;
 	struct hlist_node *node;
+	struct list_head *block_node;
+	struct kvm_mmu_hash_block *hash_block;
+	int i;
 
 	role.word = 0;
 	role.glevels = vcpu->mmu.root_level;
@@ -630,8 +644,16 @@
 	}
 	pgprintk("%s: looking gfn %lx role %x\n", __FUNCTION__,
 		 gfn, role.word);
-	index = kvm_page_table_hashfn(gfn) % KVM_NUM_MMU_PAGES;
-	bucket = &vcpu->kvm->mmu_page_hash[index];
+	index = kvm_page_table_hashfn(gfn) % (KVM_NUM_MMU_PAGES_BLOCK *
+		vcpu->kvm->n_mmu_page_hash_blocks);
+	block_node = vcpu->kvm->mmu_page_hash_blocks.next;
+	index_div_blocks = index / KVM_NUM_MMU_PAGES_BLOCK;
+
+	for (i = 0; i < index_div_blocks; ++i)
+		block_node = block_node->next;
+
+	hash_block = list_entry(block_node, struct kvm_mmu_hash_block, head);
+	bucket = &hash_block->mmu_page_hash[index % KVM_NUM_MMU_PAGES_BLOCK];
 	hlist_for_each_entry(page, node, bucket, hash_link)
 		if (page->gfn == gfn && page->role.word == role.word) {
 			mmu_page_add_parent_pte(vcpu, page, parent_pte);
@@ -716,16 +738,26 @@
 
 static int kvm_mmu_unprotect_page(struct kvm_vcpu *vcpu, gfn_t gfn)
 {
-	unsigned index;
+	unsigned index, index_div_blocks;
 	struct hlist_head *bucket;
 	struct kvm_mmu_page *page;
 	struct hlist_node *node, *n;
-	int r;
+	struct list_head *block_node;
+	struct kvm_mmu_hash_block *hash_block;
+	int r, i;
 
 	pgprintk("%s: looking for gfn %lx\n", __FUNCTION__, gfn);
 	r = 0;
-	index = kvm_page_table_hashfn(gfn) % KVM_NUM_MMU_PAGES;
-	bucket = &vcpu->kvm->mmu_page_hash[index];
+	index = kvm_page_table_hashfn(gfn) % (KVM_NUM_MMU_PAGES_BLOCK *
+		vcpu->kvm->n_mmu_page_hash_blocks);
+	block_node = vcpu->kvm->mmu_page_hash_blocks.next;
+	index_div_blocks = index / KVM_NUM_MMU_PAGES_BLOCK;
+
+	for (i = 0; i < index_div_blocks; ++i)
+		block_node = block_node->next;
+
+	hash_block = list_entry(block_node, struct kvm_mmu_hash_block, head);
+	bucket = &hash_block->mmu_page_hash[index % KVM_NUM_MMU_PAGES_BLOCK];
 	hlist_for_each_entry_safe(page, node, n, bucket, hash_link)
 		if (page->gfn == gfn && !page->role.metaphysical) {
 			pgprintk("%s: gfn %lx role %x\n", __FUNCTION__, gfn,
@@ -1126,7 +1158,9 @@
 	struct kvm_mmu_page *page;
 	struct hlist_node *node, *n;
 	struct hlist_head *bucket;
-	unsigned index;
+	struct list_head *block_node;
+	struct kvm_mmu_hash_block *hash_block;
+	unsigned index, index_div_blocks;
 	u64 *spte;
 	unsigned offset = offset_in_page(gpa);
 	unsigned pte_size;
@@ -1136,6 +1170,7 @@
 	int level;
 	int flooded = 0;
 	int npte;
+	int i;
 
 	pgprintk("%s: gpa %llx bytes %d\n", __FUNCTION__, gpa, bytes);
 	if (gfn == vcpu->last_pt_write_gfn) {
@@ -1146,8 +1181,16 @@
 		vcpu->last_pt_write_gfn = gfn;
 		vcpu->last_pt_write_count = 1;
 	}
-	index = kvm_page_table_hashfn(gfn) % KVM_NUM_MMU_PAGES;
-	bucket = &vcpu->kvm->mmu_page_hash[index];
+	index = kvm_page_table_hashfn(gfn) % (KVM_NUM_MMU_PAGES_BLOCK *
+		vcpu->kvm->n_mmu_page_hash_blocks);
+	block_node = vcpu->kvm->mmu_page_hash_blocks.next;
+	index_div_blocks = index / KVM_NUM_MMU_PAGES_BLOCK;
+
+	for (i = 0; i < index_div_blocks; ++i)
+		block_node = block_node->next;
+
+	hash_block = list_entry(block_node, struct kvm_mmu_hash_block, head);
+	bucket = &hash_block->mmu_page_hash[index % KVM_NUM_MMU_PAGES_BLOCK];
 	hlist_for_each_entry_safe(page, node, n, bucket, hash_link) {
 		if (page->gfn != gfn || page->role.metaphysical)
 			continue;
@@ -1237,7 +1280,8 @@
 
 	ASSERT(vcpu);
 
-	vcpu->kvm->n_free_mmu_pages = KVM_NUM_MMU_PAGES;
+	vcpu->kvm->n_free_mmu_pages = KVM_NUM_MMU_PAGES_BLOCK *
+		vcpu->kvm->n_mmu_page_hash_blocks;
 
 	/*
 	 * When emulating 32-bit mode, cr3 is only 32 bits even on x86_64.
* Re: [PATCH 3/3] add support for dynamic allocation of mmu pages.
@ 2007-08-19  8:22 ` Avi Kivity
From: Avi Kivity @ 2007-08-19 8:22 UTC (permalink / raw)
To: Izik Eidus; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f

Izik Eidus wrote:
> this patch all the functions that use the mmu pages
> it teach them how to use the new lists.
>
> --- mmu.c	2007-08-15 11:37:08.000000000 +0300
> +++ new_mmu.c	2007-08-19 06:27:56.000000000 +0300
> @@ -586,14 +586,25 @@
>  static struct kvm_mmu_page *kvm_mmu_lookup_page(struct kvm_vcpu *vcpu,
>  						gfn_t gfn)
>  {
> -	unsigned index;
> +	unsigned index, index_div_blocks;
>  	struct hlist_head *bucket;
>  	struct kvm_mmu_page *page;
>  	struct hlist_node *node;
> +	struct list_head *block_node;
> +	struct kvm_mmu_hash_block *hash_block;
> +	int i;
>
>  	pgprintk("%s: looking for gfn %lx\n", __FUNCTION__, gfn);
> -	index = kvm_page_table_hashfn(gfn) % KVM_NUM_MMU_PAGES;
> -	bucket = &vcpu->kvm->mmu_page_hash[index];
> +	index = kvm_page_table_hashfn(gfn) % (KVM_NUM_MMU_PAGES_BLOCK *
> +		vcpu->kvm->n_mmu_page_hash_blocks);
> +	block_node = vcpu->kvm->mmu_page_hash_blocks.next;
> +	index_div_blocks = index / KVM_NUM_MMU_PAGES_BLOCK;
> +
> +	for (i = 0; i < index_div_blocks; ++i)
> +		block_node = block_node->next;
> +
> +	hash_block = list_entry(block_node, struct kvm_mmu_hash_block, head);
> +	bucket = &hash_block->mmu_page_hash[index % KVM_NUM_MMU_PAGES_BLOCK];

This is slowing down kvm_mmu_lookup_page() for large memory guests. This
is a frequently called function!

> @@ -630,8 +644,16 @@
> -	index = kvm_page_table_hashfn(gfn) % KVM_NUM_MMU_PAGES;
> -	bucket = &vcpu->kvm->mmu_page_hash[index];
> +	index = kvm_page_table_hashfn(gfn) % (KVM_NUM_MMU_PAGES_BLOCK *
> +		vcpu->kvm->n_mmu_page_hash_blocks);
> +	block_node = vcpu->kvm->mmu_page_hash_blocks.next;
> +	index_div_blocks = index / KVM_NUM_MMU_PAGES_BLOCK;
> +
> +	for (i = 0; i < index_div_blocks; ++i)
> +		block_node = block_node->next;
> +
> +	hash_block = list_entry(block_node, struct kvm_mmu_hash_block, head);
> +	bucket = &hash_block->mmu_page_hash[index % KVM_NUM_MMU_PAGES_BLOCK];

Code duplication, this is better extracted into a helper.

> @@ -716,16 +738,26 @@
> -	index = kvm_page_table_hashfn(gfn) % KVM_NUM_MMU_PAGES;
> -	bucket = &vcpu->kvm->mmu_page_hash[index];
> +	index = kvm_page_table_hashfn(gfn) % (KVM_NUM_MMU_PAGES_BLOCK *
> +		vcpu->kvm->n_mmu_page_hash_blocks);
> +	block_node = vcpu->kvm->mmu_page_hash_blocks.next;
> +	index_div_blocks = index / KVM_NUM_MMU_PAGES_BLOCK;
> +
> +	for (i = 0; i < index_div_blocks; ++i)
> +		block_node = block_node->next;
> +
> +	hash_block = list_entry(block_node, struct kvm_mmu_hash_block, head);
> +	bucket = &hash_block->mmu_page_hash[index % KVM_NUM_MMU_PAGES_BLOCK];

Likewise.

> @@ -1146,8 +1181,16 @@
> -	index = kvm_page_table_hashfn(gfn) % KVM_NUM_MMU_PAGES;
> -	bucket = &vcpu->kvm->mmu_page_hash[index];
> +	index = kvm_page_table_hashfn(gfn) % (KVM_NUM_MMU_PAGES_BLOCK *
> +		vcpu->kvm->n_mmu_page_hash_blocks);
> +	block_node = vcpu->kvm->mmu_page_hash_blocks.next;
> +	index_div_blocks = index / KVM_NUM_MMU_PAGES_BLOCK;
> +
> +	for (i = 0; i < index_div_blocks; ++i)
> +		block_node = block_node->next;
> +
> +	hash_block = list_entry(block_node, struct kvm_mmu_hash_block, head);
> +	bucket = &hash_block->mmu_page_hash[index % KVM_NUM_MMU_PAGES_BLOCK];

Ditto.

-- 
error compiling committee.c: too many arguments to function
* Re: [PATCH 0/3] add support for dynamic allocation of mmu pages.
@ 2007-08-18 20:11 ` Izik Eidus
From: Izik Eidus @ 2007-08-18 20:11 UTC (permalink / raw)
To: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f

I forgot the most important thing: this is a request for comments, nothing
more than that (as you can see, it doesn't have a cleanup function). So all
I am trying to get here are your ideas about what to do with it now. Add one
of the options below? Add both of them? Neither?

Anyway, thanks! :)

> On Sat, 2007-08-18 at 22:51 +0300, Izik Eidus wrote:
> > this patch make kvm dynamicly allocate memory to its mmu pages buffer.
> >
> > 1)adding support to kvm to increase and decrease at runtime its mmu
> > pages buffer by considering how much times the mmu_free_some_pages
> > function is called.
> >
> > 2)adding support to kvm to share the mmu buffers with all VMs that run,
> > in this case an idle vm will give some of it mmu buffer to "highly
> > working vm"
> [...]
* Re: [PATCH 0/3] add support for dynamicly allocation ofmmu pages. [not found] ` <1187467867.28221.57.camel-wV29XY6ncz+I84jL4+POOYeT0m0igiSA0E9HWUfgJXw@public.gmane.org> @ 2007-08-18 20:35 ` Dor Laor [not found] ` <64F9B87B6B770947A9F8391472E032160D46464F-yEcIvxbTEBqsx+V+t5oei8rau4O3wl8o3fe8/T/H7NteoWH0uzbU5w@public.gmane.org> 0 siblings, 1 reply; 11+ messages in thread From: Dor Laor @ 2007-08-18 20:35 UTC (permalink / raw) To: Izik Eidus, kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f The idea to shift/share mmu cache memory between guests is great. You do need to take care of proper inter vm locking (kvm_lock lock, don't mix the kvm->lock). Combining the idea with LRU replacment algorithm and a rebalance timer for the mmu cache can be a winning combination. -- Dor. Btw: Later on the static parameters should be set as a function of the host/guest memory size and usage. Another nice thing is to ask certain cache size as a parameter on vm creation time. >i forgot the most important thing >this is request for comment, not any more than this >(as you can see it doesnt have clean up function) >so all i am trying to get here, is your ideas to what to do with it now. >add one of the options below?, add both of them?, not at all? > > >anyway thanks! :) > >> On Sat, 2007-08-18 at 22:51 +0300, Izik Eidus wrote: >> > this patch make kvm dynamicly allocate memory to its mmu pages >buffer. >> > >> > untill now kvm used to allocate just 1024 pages ( 4MB ) no matter >what >> > was the guest ram size. >> > >> > beacuse the mmu pages buffer was very small alot of pages that had >> > "correct" information about the guest pte, had to be released. >> > >> > what i did here is the first step to get one or both of the below >> > options: >> > >> > 1)adding support to kvm to increase and decrease at runtime its mmu >> > pages buffer by considering how much times the mmu_free_some_pages >> > function is called. 
>> > >> > 2)adding support to kvm to share the mmu buffers with all VMs that >run, >> > in this case an idle vm will give some of it mmu buffer to "highly >> > working vm" >> > >> > i wrote this patch with this 2 options in mind, and therefor >> > i used lists and not arry, and created each entry of the list 1MB >> > (holding list of 256 pages). >> > it is now very easy and inexpensive to delete/add/move or doing >anything >> > we want with this 1MB block. >> > >> > ugly "benchmark" i ran showed that when the guest used 1% of 512mb >vm to >> > its mmu buffer and compiled the linux kernel with -j 8 it had number >of >> > 21,100,000 fix page_fault exits and it took 8:10 secs >> > >> > when the same guest with the same number of ram used 2% of the 512mb >bm >> > to its mmu buffer it compiled the linux kernel with -8 at 7:48 secs >and >> > had just 17,500,000 fix page_faults exits. >> > >> > (as far as the guest will have more ram the results should be much >> > faster than without this patch) >> > >> > (this benchmark was really ugly, i didnt use ram drive or anything >like >> > that to compiling it..) >> > >> > ohh, i must to add that i added a function to remove the lists and >all >> > the pages it allocated to the mmu pages, but i didnt write any line >in >> > it because i want to ask avi something first, so dont blame me for >> > stealing your ram :) >> > >> > anyway enjoy. >> > > > >----------------------------------------------------------------------- - >- >This SF.net email is sponsored by: Splunk Inc. >Still grepping through log files to find problems? Stop. >Now Search log events and configuration files using AJAX and a browser. 
-------------------------------------------------------------------------
This SF.net email is sponsored by: Splunk Inc.
Still grepping through log files to find problems?  Stop.
Now Search log events and configuration files using AJAX and a browser.
Download your FREE copy of Splunk now >> http://get.splunk.com/
_______________________________________________
kvm-devel mailing list
kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org
https://lists.sourceforge.net/lists/listinfo/kvm-devel

^ permalink raw reply	[flat|nested] 11+ messages in thread
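Izik's data-structure choice above — a linked list of 1MB entries, each tracking 256 pages, so whole blocks can be cheaply added, removed, or moved between VMs — can be sketched roughly as follows. This is an illustrative userspace sketch, not the patch itself: the struct and function names are invented, and plain malloc stands in for the kernel page allocator.

```c
#include <stdlib.h>

/* Hypothetical sketch of the "1MB block" idea: each list entry
 * tracks 256 pages of 4KB, i.e. 1MB of mmu-cache memory, and the
 * cache grows by pushing whole blocks onto a singly linked list. */
#define PAGES_PER_BLOCK 256
#define PAGE_SIZE 4096

struct mmu_page_block {
	struct mmu_page_block *next;
	void *pages[PAGES_PER_BLOCK];
	int used;	/* pages currently handed out from this block */
};

static struct mmu_page_block *block_alloc(void)
{
	struct mmu_page_block *b = calloc(1, sizeof(*b));
	if (!b)
		return NULL;
	for (int i = 0; i < PAGES_PER_BLOCK; i++) {
		b->pages[i] = malloc(PAGE_SIZE);
		if (!b->pages[i]) {	/* unwind partial allocation */
			while (i--)
				free(b->pages[i]);
			free(b);
			return NULL;
		}
	}
	return b;
}

/* Growing the cache is O(1): push one new 1MB block on the head.
 * Shrinking, or donating a block to another VM, is equally cheap --
 * which is the point of using a list instead of a fixed array. */
static void cache_grow(struct mmu_page_block **head)
{
	struct mmu_page_block *b = block_alloc();
	if (b) {
		b->next = *head;
		*head = b;
	}
}
```

Moving a block between two VMs' lists would be a pair of pointer updates, which is what makes the per-VM resizing in option 1 and the cross-VM sharing in option 2 inexpensive.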
[parent not found: <64F9B87B6B770947A9F8391472E032160D46464F-yEcIvxbTEBqsx+V+t5oei8rau4O3wl8o3fe8/T/H7NteoWH0uzbU5w@public.gmane.org>]
* Re: [PATCH 0/3] add support for dynamic allocation of mmu pages.
  [not found] ` <64F9B87B6B770947A9F8391472E032160D46464F-yEcIvxbTEBqsx+V+t5oei8rau4O3wl8o3fe8/T/H7NteoWH0uzbU5w@public.gmane.org>
@ 2007-08-19  8:09   ` Avi Kivity
  [not found]   ` <46C7FACA.3020706-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
  0 siblings, 1 reply; 11+ messages in thread
From: Avi Kivity @ 2007-08-19  8:09 UTC (permalink / raw)
  To: Dor Laor; +Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f

Dor Laor wrote:
> The idea to shift/share mmu cache memory between guests is great.
> You do need to take care of proper inter-VM locking (take kvm_lock;
> don't mix in kvm->lock).
>

No, we need to stay away from kvm_lock on anything resembling a hot
path.

--
error compiling committee.c: too many arguments to function

^ permalink raw reply	[flat|nested] 11+ messages in thread
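Avi's objection is about lock granularity: the shadow-page fault path is hot, so it should only ever take a per-VM lock, and the cross-VM lock must be confined to a cold, occasional rebalance path. A hedged userspace sketch of that split is below; all names are invented for illustration, and pthread mutexes stand in for kernel locking.

```c
#include <pthread.h>

/* Sketch, not KVM code: the fill path a guest hits on every shadow
 * page allocation takes only its own VM's lock; the shared lock is
 * taken only by an infrequent rebalance worker. */
static pthread_mutex_t global_pool_lock = PTHREAD_MUTEX_INITIALIZER;

struct vm_mmu_cache {
	pthread_mutex_t lock;	/* per-VM; the only lock on the hot path */
	int free_pages;		/* pages this VM may still hand out */
	int quota;		/* this VM's share of the global pool */
};

/* Hot path: per-VM lock only, never the global one. */
static int vm_take_page(struct vm_mmu_cache *vm)
{
	int ok = 0;
	pthread_mutex_lock(&vm->lock);
	if (vm->free_pages > 0) {
		vm->free_pages--;
		ok = 1;
	}
	pthread_mutex_unlock(&vm->lock);
	return ok;
}

/* Cold path: a periodic worker moves quota from an idle VM to a busy
 * one under the global lock; faulting guests never contend on it. */
static void rebalance(struct vm_mmu_cache *idle, struct vm_mmu_cache *busy,
		      int pages)
{
	pthread_mutex_lock(&global_pool_lock);
	pthread_mutex_lock(&idle->lock);
	pthread_mutex_lock(&busy->lock);
	if (idle->free_pages >= pages) {
		idle->free_pages -= pages;
		idle->quota -= pages;
		busy->free_pages += pages;
		busy->quota += pages;
	}
	pthread_mutex_unlock(&busy->lock);
	pthread_mutex_unlock(&idle->lock);
	pthread_mutex_unlock(&global_pool_lock);
}
```

With this structure the hot path stays uncontended across VMs, which is the property Avi is asking for; only the rebalancer ever serializes globally (and a fixed lock ordering there avoids deadlock).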
[parent not found: <46C7FACA.3020706-atKUWr5tajBWk0Htik3J/w@public.gmane.org>]
* Re: [PATCH 0/3] add support for dynamic allocation of mmu pages.
  [not found] ` <46C7FACA.3020706-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
@ 2007-08-19  8:25   ` Dor Laor
  0 siblings, 0 replies; 11+ messages in thread
From: Dor Laor @ 2007-08-19  8:25 UTC (permalink / raw)
  To: Avi Kivity; +Cc: Izik Eidus, kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f

>Dor Laor wrote:
>> The idea to shift/share mmu cache memory between guests is great.
>> You do need to take care of proper inter-VM locking (take kvm_lock;
>> don't mix in kvm->lock).
>>
>
>No, we need to stay away from kvm_lock on anything resembling a hot
>path.

If we share the mmu cache globally we need a global lock; it's either
kvm_lock or a new one. I think kvm_lock is sufficient.
Anyway, the basic intention is not to stall the guest when it needs a
new mmu cache page. In that case, just evict another page used by the
guest via LRU.
There will be a tasklet/timer waking up on certain occasions (maybe
after a high-water mark of evictions) that will re-adjust the mmu
cache usage among the guests.
--
Dor.

^ permalink raw reply	[flat|nested] 11+ messages in thread
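The eviction scheme Dor describes — never stall the faulting guest, just recycle its least-recently-used shadow page when the cache is full — is a classic LRU list. A minimal illustrative sketch follows; the names are invented, and real code would zap the shadow page's mappings rather than simply free() a node.

```c
#include <stdlib.h>

/* Sketch of per-VM LRU tracking for shadow pages (not KVM code).
 * Head = most recently used; tail = eviction victim. */
struct lru_page {
	struct lru_page *prev, *next;
	unsigned long gfn;	/* guest frame this shadow page covers */
};

struct mmu_cache {
	struct lru_page *head;
	struct lru_page *tail;
	int count, limit;
};

static void lru_unlink(struct mmu_cache *c, struct lru_page *p)
{
	if (p->prev) p->prev->next = p->next; else c->head = p->next;
	if (p->next) p->next->prev = p->prev; else c->tail = p->prev;
	c->count--;
}

static void lru_push_front(struct mmu_cache *c, struct lru_page *p)
{
	p->prev = NULL;
	p->next = c->head;
	if (c->head)
		c->head->prev = p;
	c->head = p;
	if (!c->tail)
		c->tail = p;
	c->count++;
}

/* Called whenever a shadow page is used, so hot pages migrate to the
 * front and cold pages drift toward the tail. */
static void cache_touch(struct mmu_cache *c, struct lru_page *p)
{
	lru_unlink(c, p);
	lru_push_front(c, p);
}

/* The guest never blocks: if the cache is full, the tail (coldest
 * page) is recycled immediately and the fault proceeds. */
static struct lru_page *cache_insert(struct mmu_cache *c, unsigned long gfn)
{
	if (c->count >= c->limit && c->tail) {
		struct lru_page *victim = c->tail;
		lru_unlink(c, victim);
		free(victim);	/* real code: zap the shadow page */
	}
	struct lru_page *p = calloc(1, sizeof(*p));
	if (!p)
		return NULL;
	p->gfn = gfn;
	lru_push_front(c, p);
	return p;
}
```

The timer/tasklet Dor mentions would then watch the eviction rate per VM and raise `limit` for VMs evicting heavily while lowering it for idle ones.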
end of thread, other threads:[~2007-08-19 8:25 UTC | newest]
Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2007-08-18 19:51 [PATCH 0/3] add support for dynamic allocation of mmu pages Izik Eidus
[not found] ` <1187466871.28221.39.camel@izike-desktop.qumranet.com>
[not found] ` <1187466871.28221.39.camel-wV29XY6ncz+I84jL4+POOYeT0m0igiSA0E9HWUfgJXw@public.gmane.org>
2007-08-18 20:06 ` [PATCH 1/3] " Izik Eidus
[not found] ` <1187467612.28221.50.camel-wV29XY6ncz+I84jL4+POOYeT0m0igiSA0E9HWUfgJXw@public.gmane.org>
2007-08-19 8:14 ` Avi Kivity
[not found] ` <1187467003.28221.42.camel@izike-desktop.qumranet.com>
[not found] ` <1187467003.28221.42.camel-wV29XY6ncz+I84jL4+POOYeT0m0igiSA0E9HWUfgJXw@public.gmane.org>
2007-08-18 20:08 ` [PATCH 2/3] " Izik Eidus
[not found] ` <1187467694.28221.52.camel-wV29XY6ncz+I84jL4+POOYeT0m0igiSA0E9HWUfgJXw@public.gmane.org>
2007-08-19 8:19 ` Avi Kivity
[not found] ` <1187467063.28221.44.camel@izike-desktop.qumranet.com>
[not found] ` <1187467063.28221.44.camel-wV29XY6ncz+I84jL4+POOYeT0m0igiSA0E9HWUfgJXw@public.gmane.org>
2007-08-18 20:09 ` [PATCH 3/3] " Izik Eidus
[not found] ` <1187467798.28221.55.camel-wV29XY6ncz+I84jL4+POOYeT0m0igiSA0E9HWUfgJXw@public.gmane.org>
2007-08-19 8:22 ` Avi Kivity
[not found] ` <1187467150.28221.47.camel@izike-desktop.qumranet.com>
[not found] ` <1187467150.28221.47.camel-wV29XY6ncz+I84jL4+POOYeT0m0igiSA0E9HWUfgJXw@public.gmane.org>
2007-08-18 20:11 ` [PATCH 0/3] " Izik Eidus
[not found] ` <1187467867.28221.57.camel-wV29XY6ncz+I84jL4+POOYeT0m0igiSA0E9HWUfgJXw@public.gmane.org>
2007-08-18 20:35     ` [PATCH 0/3] add support for dynamic allocation of mmu pages Dor Laor
[not found] ` <64F9B87B6B770947A9F8391472E032160D46464F-yEcIvxbTEBqsx+V+t5oei8rau4O3wl8o3fe8/T/H7NteoWH0uzbU5w@public.gmane.org>
2007-08-19 8:09 ` Avi Kivity
[not found] ` <46C7FACA.3020706-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2007-08-19 8:25 ` [PATCH 0/3] add support for dynamically " Dor Laor
This is a public inbox, see mirroring instructions for how to clone and mirror all data and code used for this inbox