kvm.vger.kernel.org archive mirror
* [PATCH v2 0/3] KVM: remove dummy pages
@ 2012-07-26  3:56 Xiao Guangrong
  2012-07-26  3:57 ` [PATCH v2 1/3] KVM: MMU: use kvm_release_pfn_clean to release pfn Xiao Guangrong
                   ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: Xiao Guangrong @ 2012-07-26  3:56 UTC (permalink / raw)
  To: Avi Kivity; +Cc: Marcelo Tosatti, LKML, KVM

Changelog:
  correct some typos in the title/changelog.

Currently, kvm allocates some pages (e.g. bad_page/fault_page) and uses them
as error indicators; this wastes memory and is not good for scalability.

Based on Avi's suggestion, this patchset introduces error codes
to indicate the error conditions instead of these dummy pages.

^ permalink raw reply	[flat|nested] 9+ messages in thread

* [PATCH v2 1/3] KVM: MMU: use kvm_release_pfn_clean to release pfn
  2012-07-26  3:56 [PATCH v2 0/3] KVM: remove dummy pages Xiao Guangrong
@ 2012-07-26  3:57 ` Xiao Guangrong
  2012-07-26  3:58 ` [PATCH v2 2/3] KVM: use kvm_release_page_clean to release the page Xiao Guangrong
  2012-07-26  3:58 ` [PATCH v2 3/3] KVM: remove dummy pages Xiao Guangrong
  2 siblings, 0 replies; 9+ messages in thread
From: Xiao Guangrong @ 2012-07-26  3:57 UTC (permalink / raw)
  To: Xiao Guangrong; +Cc: Avi Kivity, Marcelo Tosatti, LKML, KVM

The current code depends on fault_page being a normal page; however, a
later patch will use error codes instead of these dummy pages, so use
kvm_release_pfn_clean to release the pfn, which will handle the error
code properly.

Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
---
 arch/x86/kvm/mmu.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 2419934..a9a2052 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3275,7 +3275,7 @@ static bool try_async_pf(struct kvm_vcpu *vcpu, bool prefault, gfn_t gfn,
 	if (!async)
 		return false; /* *pfn has correct page already */

-	put_page(pfn_to_page(*pfn));
+	kvm_release_pfn_clean(*pfn);

 	if (!prefault && can_do_async_pf(vcpu)) {
 		trace_kvm_try_async_get_page(gva, gfn);
-- 
1.7.7.6



* [PATCH v2 2/3] KVM: use kvm_release_page_clean to release the page
  2012-07-26  3:56 [PATCH v2 0/3] KVM: remove dummy pages Xiao Guangrong
  2012-07-26  3:57 ` [PATCH v2 1/3] KVM: MMU: use kvm_release_pfn_clean to release pfn Xiao Guangrong
@ 2012-07-26  3:58 ` Xiao Guangrong
  2012-07-26  3:58 ` [PATCH v2 3/3] KVM: remove dummy pages Xiao Guangrong
  2 siblings, 0 replies; 9+ messages in thread
From: Xiao Guangrong @ 2012-07-26  3:58 UTC (permalink / raw)
  To: Xiao Guangrong; +Cc: Avi Kivity, Marcelo Tosatti, LKML, KVM

kvm_async_pf_wakeup_all uses bad_page to generate a broadcast wakeup,
and uses put_page to release bad_page; this depends on the fact that
bad_page is a normal page. But we will use an error code instead of
bad_page, so use kvm_release_page_clean to release the page, which will
handle the error code properly.

Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
---
 virt/kvm/async_pf.c |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/virt/kvm/async_pf.c b/virt/kvm/async_pf.c
index 74268b4..ebae24b 100644
--- a/virt/kvm/async_pf.c
+++ b/virt/kvm/async_pf.c
@@ -112,7 +112,7 @@ void kvm_clear_async_pf_completion_queue(struct kvm_vcpu *vcpu)
 				   typeof(*work), link);
 		list_del(&work->link);
 		if (work->page)
-			put_page(work->page);
+			kvm_release_page_clean(work->page);
 		kmem_cache_free(async_pf_cache, work);
 	}
 	spin_unlock(&vcpu->async_pf.lock);
@@ -139,7 +139,7 @@ void kvm_check_async_pf_completion(struct kvm_vcpu *vcpu)
 		list_del(&work->queue);
 		vcpu->async_pf.queued--;
 		if (work->page)
-			put_page(work->page);
+			kvm_release_page_clean(work->page);
 		kmem_cache_free(async_pf_cache, work);
 	}
 }
-- 
1.7.7.6


* [PATCH v2 3/3] KVM: remove dummy pages
  2012-07-26  3:56 [PATCH v2 0/3] KVM: remove dummy pages Xiao Guangrong
  2012-07-26  3:57 ` [PATCH v2 1/3] KVM: MMU: use kvm_release_pfn_clean to release pfn Xiao Guangrong
  2012-07-26  3:58 ` [PATCH v2 2/3] KVM: use kvm_release_page_clean to release the page Xiao Guangrong
@ 2012-07-26  3:58 ` Xiao Guangrong
  2012-07-26  8:56   ` Avi Kivity
  2 siblings, 1 reply; 9+ messages in thread
From: Xiao Guangrong @ 2012-07-26  3:58 UTC (permalink / raw)
  To: Xiao Guangrong; +Cc: Avi Kivity, Marcelo Tosatti, LKML, KVM

Currently, kvm allocates some pages and uses them as error indicators;
this wastes memory and is not good for scalability.

Based on Avi's suggestion, we use error codes instead of these pages
to indicate the error conditions.

Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
---
 include/linux/kvm_host.h |    3 +-
 virt/kvm/async_pf.c      |    3 +-
 virt/kvm/kvm_main.c      |  121 +++++++++++++++++++---------------------------
 3 files changed, 52 insertions(+), 75 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 6d1a51e..f4e132c 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -411,6 +411,7 @@ void kvm_arch_flush_shadow(struct kvm *kvm);
 int gfn_to_page_many_atomic(struct kvm *kvm, gfn_t gfn, struct page **pages,
 			    int nr_pages);

+struct page *get_bad_page(void);
 struct page *gfn_to_page(struct kvm *kvm, gfn_t gfn);
 unsigned long gfn_to_hva(struct kvm *kvm, gfn_t gfn);
 void kvm_release_page_clean(struct page *page);
@@ -564,7 +565,7 @@ void kvm_arch_sync_events(struct kvm *kvm);
 int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu);
 void kvm_vcpu_kick(struct kvm_vcpu *vcpu);

-int kvm_is_mmio_pfn(pfn_t pfn);
+bool kvm_is_mmio_pfn(pfn_t pfn);

 struct kvm_irq_ack_notifier {
 	struct hlist_node link;
diff --git a/virt/kvm/async_pf.c b/virt/kvm/async_pf.c
index ebae24b..7972278 100644
--- a/virt/kvm/async_pf.c
+++ b/virt/kvm/async_pf.c
@@ -203,8 +203,7 @@ int kvm_async_pf_wakeup_all(struct kvm_vcpu *vcpu)
 	if (!work)
 		return -ENOMEM;

-	work->page = bad_page;
-	get_page(bad_page);
+	work->page = get_bad_page();
 	INIT_LIST_HEAD(&work->queue); /* for list_del to work */

 	spin_lock(&vcpu->async_pf.lock);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index eb15833..92aae8b 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -100,17 +100,11 @@ EXPORT_SYMBOL_GPL(kvm_rebooting);

 static bool largepages_enabled = true;

-struct page *bad_page;
-static pfn_t bad_pfn;
-
-static struct page *hwpoison_page;
-static pfn_t hwpoison_pfn;
-
-static struct page *fault_page;
-static pfn_t fault_pfn;
-
-inline int kvm_is_mmio_pfn(pfn_t pfn)
+bool kvm_is_mmio_pfn(pfn_t pfn)
 {
+	if (is_error_pfn(pfn))
+		return false;
+
 	if (pfn_valid(pfn)) {
 		int reserved;
 		struct page *tail = pfn_to_page(pfn);
@@ -936,34 +930,55 @@ EXPORT_SYMBOL_GPL(kvm_disable_largepages);

 int is_error_page(struct page *page)
 {
-	return page == bad_page || page == hwpoison_page || page == fault_page;
+	return IS_ERR(page);
 }
 EXPORT_SYMBOL_GPL(is_error_page);

 int is_error_pfn(pfn_t pfn)
 {
-	return pfn == bad_pfn || pfn == hwpoison_pfn || pfn == fault_pfn;
+	return IS_ERR_VALUE(pfn);
 }
 EXPORT_SYMBOL_GPL(is_error_pfn);

+static pfn_t get_bad_pfn(void)
+{
+	return -ENOENT;
+}
+
+pfn_t get_fault_pfn(void)
+{
+	return -EFAULT;
+}
+EXPORT_SYMBOL_GPL(get_fault_pfn);
+
+static pfn_t get_hwpoison_pfn(void)
+{
+	return -EHWPOISON;
+}
+
 int is_hwpoison_pfn(pfn_t pfn)
 {
-	return pfn == hwpoison_pfn;
+	return pfn == -EHWPOISON;
 }
 EXPORT_SYMBOL_GPL(is_hwpoison_pfn);

 int is_noslot_pfn(pfn_t pfn)
 {
-	return pfn == bad_pfn;
+	return pfn == -ENOENT;
 }
 EXPORT_SYMBOL_GPL(is_noslot_pfn);

 int is_invalid_pfn(pfn_t pfn)
 {
-	return pfn == hwpoison_pfn || pfn == fault_pfn;
+	return !is_noslot_pfn(pfn) && is_error_pfn(pfn);
 }
 EXPORT_SYMBOL_GPL(is_invalid_pfn);

+struct page *get_bad_page(void)
+{
+	return ERR_PTR(-ENOENT);
+}
+
 static inline unsigned long bad_hva(void)
 {
 	return PAGE_OFFSET;
@@ -1035,13 +1050,6 @@ unsigned long gfn_to_hva(struct kvm *kvm, gfn_t gfn)
 }
 EXPORT_SYMBOL_GPL(gfn_to_hva);

-pfn_t get_fault_pfn(void)
-{
-	get_page(fault_page);
-	return fault_pfn;
-}
-EXPORT_SYMBOL_GPL(get_fault_pfn);
-
 int get_user_page_nowait(struct task_struct *tsk, struct mm_struct *mm,
 	unsigned long start, int write, struct page **page)
 {
@@ -1119,8 +1127,7 @@ static pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool *async,
 		if (npages == -EHWPOISON ||
 			(!async && check_user_page_hwpoison(addr))) {
 			up_read(&current->mm->mmap_sem);
-			get_page(hwpoison_page);
-			return page_to_pfn(hwpoison_page);
+			return get_hwpoison_pfn();
 		}

 		vma = find_vma_intersection(current->mm, addr, addr+1);
@@ -1158,10 +1165,8 @@ static pfn_t __gfn_to_pfn(struct kvm *kvm, gfn_t gfn, bool atomic, bool *async,
 		*async = false;

 	addr = gfn_to_hva(kvm, gfn);
-	if (kvm_is_error_hva(addr)) {
-		get_page(bad_page);
-		return page_to_pfn(bad_page);
-	}
+	if (kvm_is_error_hva(addr))
+		return get_bad_pfn();

 	return hva_to_pfn(addr, atomic, async, write_fault, writable);
 }
@@ -1215,37 +1220,45 @@ int gfn_to_page_many_atomic(struct kvm *kvm, gfn_t gfn, struct page **pages,
 }
 EXPORT_SYMBOL_GPL(gfn_to_page_many_atomic);

+static struct page *kvm_pfn_to_page(pfn_t pfn)
+{
+	WARN_ON(kvm_is_mmio_pfn(pfn));
+
+	if (is_error_pfn(pfn) || kvm_is_mmio_pfn(pfn))
+		return get_bad_page();
+
+	return pfn_to_page(pfn);
+}
+
 struct page *gfn_to_page(struct kvm *kvm, gfn_t gfn)
 {
 	pfn_t pfn;

 	pfn = gfn_to_pfn(kvm, gfn);
-	if (!kvm_is_mmio_pfn(pfn))
-		return pfn_to_page(pfn);
-
-	WARN_ON(kvm_is_mmio_pfn(pfn));

-	get_page(bad_page);
-	return bad_page;
+	return kvm_pfn_to_page(pfn);
 }

 EXPORT_SYMBOL_GPL(gfn_to_page);

 void kvm_release_page_clean(struct page *page)
 {
-	kvm_release_pfn_clean(page_to_pfn(page));
+	if (!is_error_page(page))
+		kvm_release_pfn_clean(page_to_pfn(page));
 }
 EXPORT_SYMBOL_GPL(kvm_release_page_clean);

 void kvm_release_pfn_clean(pfn_t pfn)
 {
-	if (!kvm_is_mmio_pfn(pfn))
+	if (!is_error_pfn(pfn) && !kvm_is_mmio_pfn(pfn))
 		put_page(pfn_to_page(pfn));
 }
 EXPORT_SYMBOL_GPL(kvm_release_pfn_clean);

 void kvm_release_page_dirty(struct page *page)
 {
+	WARN_ON(is_error_page(page));
+
 	kvm_release_pfn_dirty(page_to_pfn(page));
 }
 EXPORT_SYMBOL_GPL(kvm_release_page_dirty);
@@ -2724,33 +2737,6 @@ int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
 	if (r)
 		goto out_fail;

-	bad_page = alloc_page(GFP_KERNEL | __GFP_ZERO);
-
-	if (bad_page == NULL) {
-		r = -ENOMEM;
-		goto out;
-	}
-
-	bad_pfn = page_to_pfn(bad_page);
-
-	hwpoison_page = alloc_page(GFP_KERNEL | __GFP_ZERO);
-
-	if (hwpoison_page == NULL) {
-		r = -ENOMEM;
-		goto out_free_0;
-	}
-
-	hwpoison_pfn = page_to_pfn(hwpoison_page);
-
-	fault_page = alloc_page(GFP_KERNEL | __GFP_ZERO);
-
-	if (fault_page == NULL) {
-		r = -ENOMEM;
-		goto out_free_0;
-	}
-
-	fault_pfn = page_to_pfn(fault_page);
-
 	if (!zalloc_cpumask_var(&cpus_hardware_enabled, GFP_KERNEL)) {
 		r = -ENOMEM;
 		goto out_free_0;
@@ -2825,12 +2811,6 @@ out_free_1:
 out_free_0a:
 	free_cpumask_var(cpus_hardware_enabled);
 out_free_0:
-	if (fault_page)
-		__free_page(fault_page);
-	if (hwpoison_page)
-		__free_page(hwpoison_page);
-	__free_page(bad_page);
-out:
 	kvm_arch_exit();
 out_fail:
 	return r;
@@ -2850,8 +2830,5 @@ void kvm_exit(void)
 	kvm_arch_hardware_unsetup();
 	kvm_arch_exit();
 	free_cpumask_var(cpus_hardware_enabled);
-	__free_page(fault_page);
-	__free_page(hwpoison_page);
-	__free_page(bad_page);
 }
 EXPORT_SYMBOL_GPL(kvm_exit);
-- 
1.7.7.6



* Re: [PATCH v2 3/3] KVM: remove dummy pages
  2012-07-26  3:58 ` [PATCH v2 3/3] KVM: remove dummy pages Xiao Guangrong
@ 2012-07-26  8:56   ` Avi Kivity
  2012-07-26  9:20     ` Takuya Yoshikawa
  2012-07-26  9:25     ` Xiao Guangrong
  0 siblings, 2 replies; 9+ messages in thread
From: Avi Kivity @ 2012-07-26  8:56 UTC (permalink / raw)
  To: Xiao Guangrong; +Cc: Marcelo Tosatti, LKML, KVM

On 07/26/2012 06:58 AM, Xiao Guangrong wrote:
> Currently, kvm allocates some pages and uses them as error indicators;
> this wastes memory and is not good for scalability.
> 
> Based on Avi's suggestion, we use error codes instead of these pages
> to indicate the error conditions.
> 
> 
> +static pfn_t get_bad_pfn(void)
> +{
> +	return -ENOENT;
> +}
> +
> +pfn_t get_fault_pfn(void)
> +{
> +	return -EFAULT;
> +}
> +EXPORT_SYMBOL_GPL(get_fault_pfn);
> +
> +static pfn_t get_hwpoison_pfn(void)
> +{
> +	return -EHWPOISON;
> +}
> +

Would be better as #defines

>  int is_hwpoison_pfn(pfn_t pfn)
>  {
> -	return pfn == hwpoison_pfn;
> +	return pfn == -EHWPOISON;
>  }
>  EXPORT_SYMBOL_GPL(is_hwpoison_pfn);
> 
>  int is_noslot_pfn(pfn_t pfn)
>  {
> -	return pfn == bad_pfn;
> +	return pfn == -ENOENT;
>  }
>  EXPORT_SYMBOL_GPL(is_noslot_pfn);
> 
>  int is_invalid_pfn(pfn_t pfn)
>  {
> -	return pfn == hwpoison_pfn || pfn == fault_pfn;
> +	return !is_noslot_pfn(pfn) && is_error_pfn(pfn);
>  }
>  EXPORT_SYMBOL_GPL(is_invalid_pfn);
> 

So is_*_pfn() could go away and be replaced by ==.

> 
>  EXPORT_SYMBOL_GPL(gfn_to_page);
> 
>  void kvm_release_page_clean(struct page *page)
>  {
> -	kvm_release_pfn_clean(page_to_pfn(page));
> +	if (!is_error_page(page))
> +		kvm_release_pfn_clean(page_to_pfn(page));
>  }
>  EXPORT_SYMBOL_GPL(kvm_release_page_clean);

Note, we can remove calls to kvm_release_page_clean() from error paths
now, so in the future we can drop the test.

Since my comments are better done as a separate patch, I applied all
three patches.  Thanks!

-- 
error compiling committee.c: too many arguments to function


* Re: [PATCH v2 3/3] KVM: remove dummy pages
  2012-07-26  8:56   ` Avi Kivity
@ 2012-07-26  9:20     ` Takuya Yoshikawa
  2012-07-26  9:35       ` Xiao Guangrong
  2012-07-26  9:25     ` Xiao Guangrong
  1 sibling, 1 reply; 9+ messages in thread
From: Takuya Yoshikawa @ 2012-07-26  9:20 UTC (permalink / raw)
  To: Avi Kivity; +Cc: Xiao Guangrong, Marcelo Tosatti, LKML, KVM

On Thu, 26 Jul 2012 11:56:15 +0300
Avi Kivity <avi@redhat.com> wrote:

> Since my comments are better done as a separate patch, I applied all
> three patches.  Thanks!

Is this patch really safe for all architectures?

IS_ERR_VALUE() casts -MAX_ERRNO to unsigned long and then does comparison.
Isn't it possible to conflict with valid pfns?

What are the underlying assumptions?

	Takuya


* Re: [PATCH v2 3/3] KVM: remove dummy pages
  2012-07-26  8:56   ` Avi Kivity
  2012-07-26  9:20     ` Takuya Yoshikawa
@ 2012-07-26  9:25     ` Xiao Guangrong
  1 sibling, 0 replies; 9+ messages in thread
From: Xiao Guangrong @ 2012-07-26  9:25 UTC (permalink / raw)
  To: Avi Kivity; +Cc: Marcelo Tosatti, LKML, KVM

On 07/26/2012 04:56 PM, Avi Kivity wrote:
> On 07/26/2012 06:58 AM, Xiao Guangrong wrote:
>> Currently, kvm allocates some pages and uses them as error indicators;
>> this wastes memory and is not good for scalability.
>>
>> Based on Avi's suggestion, we use error codes instead of these pages
>> to indicate the error conditions.
>>
>>
>> +static pfn_t get_bad_pfn(void)
>> +{
>> +	return -ENOENT;
>> +}
>> +
>> +pfn_t get_fault_pfn(void)
>> +{
>> +	return -EFAULT;
>> +}
>> +EXPORT_SYMBOL_GPL(get_fault_pfn);
>> +
>> +static pfn_t get_hwpoison_pfn(void)
>> +{
>> +	return -EHWPOISON;
>> +}
>> +
> 
> Would be better as #defines

Okay.

> 
>>  int is_hwpoison_pfn(pfn_t pfn)
>>  {
>> -	return pfn == hwpoison_pfn;
>> +	return pfn == -EHWPOISON;
>>  }
>>  EXPORT_SYMBOL_GPL(is_hwpoison_pfn);
>>
>>  int is_noslot_pfn(pfn_t pfn)
>>  {
>> -	return pfn == bad_pfn;
>> +	return pfn == -ENOENT;
>>  }
>>  EXPORT_SYMBOL_GPL(is_noslot_pfn);
>>
>>  int is_invalid_pfn(pfn_t pfn)
>>  {
>> -	return pfn == hwpoison_pfn || pfn == fault_pfn;
>> +	return !is_noslot_pfn(pfn) && is_error_pfn(pfn);
>>  }
>>  EXPORT_SYMBOL_GPL(is_invalid_pfn);
>>
> 
> So is_*_pfn() could go away and be replaced by ==.
> 

Okay.

>>
>>  EXPORT_SYMBOL_GPL(gfn_to_page);
>>
>>  void kvm_release_page_clean(struct page *page)
>>  {
>> -	kvm_release_pfn_clean(page_to_pfn(page));
>> +	if (!is_error_page(page))
>> +		kvm_release_pfn_clean(page_to_pfn(page));
>>  }
>>  EXPORT_SYMBOL_GPL(kvm_release_page_clean);
> 
> Note, we can remove calls to kvm_release_page_clean() from error paths
> now, so in the future we can drop the test.
> 

Right, since the release path (kvm_release_page_clean) is used in many
places and on many architectures, I kept the change as small as possible
for ease of review.

> Since my comments are better done as a separate patch, 

Yes, I will make a patch applying all your comments. Thank you, Avi!


* Re: [PATCH v2 3/3] KVM: remove dummy pages
  2012-07-26  9:20     ` Takuya Yoshikawa
@ 2012-07-26  9:35       ` Xiao Guangrong
  2012-07-26  9:52         ` Takuya Yoshikawa
  0 siblings, 1 reply; 9+ messages in thread
From: Xiao Guangrong @ 2012-07-26  9:35 UTC (permalink / raw)
  To: Takuya Yoshikawa; +Cc: Avi Kivity, Marcelo Tosatti, LKML, KVM

On 07/26/2012 05:20 PM, Takuya Yoshikawa wrote:
> On Thu, 26 Jul 2012 11:56:15 +0300
> Avi Kivity <avi@redhat.com> wrote:
> 
>> Since my comments are better done as a separate patch, I applied all
>> three patches.  Thanks!
> 
> Is this patch really safe for all architectures?
> 
> IS_ERR_VALUE() casts -MAX_ERRNO to unsigned long and then does comparison.
> Isn't it possible to conflict with valid pfns?
> 

See IS_ERR_VALUE():

#define IS_ERR_VALUE(x) unlikely((x) >= (unsigned long)-MAX_ERRNO)

The minimum possible value of an error code is
0xfffff001 on 32-bit and 0xfffffffffffff001 on 64-bit;
it is far larger than any valid pfn (for a pfn, the top 12 bits
are always 0).

Note, PAE is a special case, but only 64GB of physical memory is
addressable there, so 0xfffff001 is still safe.



* Re: [PATCH v2 3/3] KVM: remove dummy pages
  2012-07-26  9:35       ` Xiao Guangrong
@ 2012-07-26  9:52         ` Takuya Yoshikawa
  0 siblings, 0 replies; 9+ messages in thread
From: Takuya Yoshikawa @ 2012-07-26  9:52 UTC (permalink / raw)
  To: Xiao Guangrong; +Cc: Avi Kivity, Marcelo Tosatti, LKML, KVM

On Thu, 26 Jul 2012 17:35:13 +0800
Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com> wrote:

> > Is this patch really safe for all architectures?
> > 
> > IS_ERR_VALUE() casts -MAX_ERRNO to unsigned long and then does comparison.
> > Isn't it possible to conflict with valid pfns?
> > 
> 
> See IS_ERR_VALUE():
> 
> #define IS_ERR_VALUE(x) unlikely((x) >= (unsigned long)-MAX_ERRNO)
> 
> The minimum possible value of an error code is
> 0xfffff001 on 32-bit and 0xfffffffffffff001 on 64-bit;
> it is far larger than any valid pfn (for a pfn, the top 12 bits
> are always 0).
> 
> Note, PAE is a special case, but only 64GB of physical memory is
> addressable there, so 0xfffff001 is still safe.

Ah, I see.  I misread the type pfn_t and was confused.
Thank you!

	Takuya

