* Re: slow xp hibernation revisited
2011-06-04 8:05 ` James Harper
@ 2011-06-04 8:30 ` Keir Fraser
2011-06-06 13:58 ` Stefano Stabellini
2011-06-04 8:33 ` Keir Fraser
2011-06-06 13:09 ` Stefano Stabellini
2 siblings, 1 reply; 19+ messages in thread
From: Keir Fraser @ 2011-06-04 8:30 UTC (permalink / raw)
To: James Harper, Tim Deegan; +Cc: xen-devel, Ian Jackson, Stefano Stabellini
On 04/06/2011 09:05, "James Harper" <james.harper@bendigoit.com.au> wrote:
>>
>> On 04/06/2011 08:38, "James Harper" <james.harper@bendigoit.com.au> wrote:
>>
>>> Looking past the test_bit call, the next statement does another test and
>>> sets last_address_index to 0 and returns NULL. Is this just to ensure
>>> that the next access isn't just trivially accepted?
>>
>> Yes, first test is on a potentially stale bucket. Second test is on a fresh
>> bucket.
>>
>
> How about the following patch? Is munmap the correct way to unmap or is
> an IOCTL required too?
>
> The exit condition is what would happen anyway after the remap is done
> and the page is still invalid.
Looks fine to me, although maybe the function needs a little refactoring. Cc'ing
Stefano and Ian, who are more involved in qemu maintenance.
Also, looking at qemu_map_cache() now, the early exit at the top of the
function looks a bit bogus to me. It exits successfully if we hit the same
address_index as last invocation, even though we might be hitting a
different pfn within the indexed range, and a possibly invalid/unmapped pfn
at that.
-- Keir
> diff --git a/hw/xen_machine_fv.c b/hw/xen_machine_fv.c
> index d02e23f..1ff80bb 100644
> --- a/hw/xen_machine_fv.c
> +++ b/hw/xen_machine_fv.c
> @@ -151,6 +151,24 @@ uint8_t *qemu_map_cache(target_phys_addr_t phys_addr, uint8_t lock)
> pentry->next = entry;
> qemu_remap_bucket(entry, address_index);
> } else if (!entry->lock) {
> + if (entry->vaddr_base && entry->paddr_index == address_index && !test_bit(address_offset>>XC_PAGE_SHIFT, entry->valid_mapping))
> + {
> + /* The page was invalid previously. Test if it is valid now and only remap if so */
> + xen_pfn_t pfn;
> + int err;
> + void *tmp_vaddr;
> +
> + pfn = phys_addr >> XC_PAGE_SHIFT;
> + tmp_vaddr = xc_map_foreign_bulk(xc_handle, domid, PROT_READ|PROT_WRITE, &pfn, &err, 1);
> + if (tmp_vaddr)
> + munmap(tmp_vaddr, PAGE_SIZE);
> +
> + if (!tmp_vaddr || err)
> + {
> + last_address_index = ~0UL;
> + return NULL;
> + }
> + }
> if (!entry->vaddr_base || entry->paddr_index != address_index || !test_bit(address_offset>>XC_PAGE_SHIFT, entry->valid_mapping))
> qemu_remap_bucket(entry, address_index);
> }
>
* Re: slow xp hibernation revisited
2011-06-04 8:30 ` Keir Fraser
@ 2011-06-06 13:58 ` Stefano Stabellini
2011-06-21 17:05 ` Ian Jackson
0 siblings, 1 reply; 19+ messages in thread
From: Stefano Stabellini @ 2011-06-06 13:58 UTC (permalink / raw)
To: Keir Fraser
Cc: Tim Deegan, James Harper, Ian Jackson,
xen-devel@lists.xensource.com, Stefano Stabellini
On Sat, 4 Jun 2011, Keir Fraser wrote:
> Also, looking at qemu_map_cache() now, the early exit at the top of the
> function looks a bit bogus to me. It exits successfully if we hit the same
> address_index as last invocation, even though we might be hitting a
> different pfn within the indexed range, and a possibly invalid/unmapped pfn
> at that.
>
Yes, I am afraid that you are right.
The mistake is memorizing the last address_index rather than
the last page address.
I'll submit a similar patch to upstream qemu.
---
mapcache: remember the last page address rather than the last address_index
A single address_index corresponds to multiple pages that might or
might not be mapped.
It is better to just remember the last page address for the sake of this
optimization, so that we are sure that it is mapped.
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
diff --git a/hw/xen_machine_fv.c b/hw/xen_machine_fv.c
index a353ee6..603a508 100644
--- a/hw/xen_machine_fv.c
+++ b/hw/xen_machine_fv.c
@@ -63,7 +63,7 @@ static unsigned long nr_buckets;
TAILQ_HEAD(map_cache_head, map_cache_rev) locked_entries = TAILQ_HEAD_INITIALIZER(locked_entries);
/* For most cases (>99.9%), the page address is the same. */
-static unsigned long last_address_index = ~0UL;
+static unsigned long last_address_page = ~0UL;
static uint8_t *last_address_vaddr;
static int qemu_map_cache_init(void)
@@ -138,7 +138,7 @@ uint8_t *qemu_map_cache(target_phys_addr_t phys_addr, uint8_t lock)
unsigned long address_index = phys_addr >> MCACHE_BUCKET_SHIFT;
unsigned long address_offset = phys_addr & (MCACHE_BUCKET_SIZE-1);
- if (address_index == last_address_index && !lock)
+ if ((phys_addr >> XC_PAGE_SHIFT) == last_address_page && !lock)
return last_address_vaddr + address_offset;
entry = &mapcache_entry[address_index % nr_buckets];
@@ -157,17 +157,17 @@ uint8_t *qemu_map_cache(target_phys_addr_t phys_addr, uint8_t lock)
}
if (!test_bit(address_offset>>XC_PAGE_SHIFT, entry->valid_mapping)) {
- last_address_index = ~0UL;
+ last_address_page = ~0UL;
return NULL;
}
- last_address_index = address_index;
+ last_address_page = phys_addr >> XC_PAGE_SHIFT;
last_address_vaddr = entry->vaddr_base;
if (lock) {
struct map_cache_rev *reventry = qemu_mallocz(sizeof(struct map_cache_rev));
entry->lock++;
reventry->vaddr_req = last_address_vaddr + address_offset;
- reventry->paddr_index = last_address_index;
+ reventry->paddr_index = address_index;
TAILQ_INSERT_TAIL(&locked_entries, reventry, next);
}
@@ -182,7 +182,7 @@ void qemu_invalidate_entry(uint8_t *buffer)
int found = 0;
if (last_address_vaddr == buffer)
- last_address_index = ~0UL;
+ last_address_page = ~0UL;
TAILQ_FOREACH(reventry, &locked_entries, next) {
if (reventry->vaddr_req == buffer) {
@@ -252,7 +252,7 @@ void qemu_invalidate_map_cache(void)
entry->vaddr_base = NULL;
}
- last_address_index = ~0UL;
+ last_address_page = ~0UL;
last_address_vaddr = NULL;
mapcache_unlock();
* Re: slow xp hibernation revisited
2011-06-06 13:58 ` Stefano Stabellini
@ 2011-06-21 17:05 ` Ian Jackson
2011-06-21 18:00 ` Stefano Stabellini
0 siblings, 1 reply; 19+ messages in thread
From: Ian Jackson @ 2011-06-21 17:05 UTC (permalink / raw)
To: Stefano Stabellini
Cc: Tim Deegan, James Harper, Keir Fraser,
xen-devel@lists.xensource.com
Stefano Stabellini writes ("Re: [Xen-devel] slow xp hibernation revisited"):
> mapcache: remember the last page address rather than the last address_index
Thanks, I have applied this. It should go into 4.1 too, I guess?
Ian.
* Re: slow xp hibernation revisited
2011-06-21 17:05 ` Ian Jackson
@ 2011-06-21 18:00 ` Stefano Stabellini
0 siblings, 0 replies; 19+ messages in thread
From: Stefano Stabellini @ 2011-06-21 18:00 UTC (permalink / raw)
To: Ian Jackson
Cc: Tim Deegan, James Harper, Keir Fraser,
xen-devel@lists.xensource.com, Stefano Stabellini
On Tue, 21 Jun 2011, Ian Jackson wrote:
> Stefano Stabellini writes ("Re: [Xen-devel] slow xp hibernation revisited"):
> > mapcache: remember the last page address rather than the last address_index
>
> Thanks, I have applied this. It should go into 4.1 too, I guess?
Yes indeed.
* Re: slow xp hibernation revisited
2011-06-04 8:05 ` James Harper
2011-06-04 8:30 ` Keir Fraser
@ 2011-06-04 8:33 ` Keir Fraser
2011-06-06 13:09 ` Stefano Stabellini
2 siblings, 0 replies; 19+ messages in thread
From: Keir Fraser @ 2011-06-04 8:33 UTC (permalink / raw)
To: James Harper, Tim Deegan; +Cc: xen-devel, Ian Jackson, Stefano Stabellini
On 04/06/2011 09:05, "James Harper" <james.harper@bendigoit.com.au> wrote:
>>
>> On 04/06/2011 08:38, "James Harper" <james.harper@bendigoit.com.au> wrote:
>>
>>> Looking past the test_bit call, the next statement does another test and
>>> sets last_address_index to 0 and returns NULL. Is this just to ensure
>>> that the next access isn't just trivially accepted?
>>
>> Yes, first test is on a potentially stale bucket. Second test is on a fresh
>> bucket.
>>
>
> How about the following patch? Is munmap the correct way to unmap or is
> an IOCTL required too?
By the way, depending on how this patch performs for you, another
alternative I thought of would be to fail this function if the address
passed in is the same as the address in an io-request we are currently
processing from Xen. After all, if Xen punted the memory access to qemu,
obviously there is no RAM there!
It could be an uglier patch than what you have below, however, and maybe the
patch below is good enough.
-- Keir
> The exit condition is what would happen anyway after the remap is done
> and the page is still invalid.
>
> diff --git a/hw/xen_machine_fv.c b/hw/xen_machine_fv.c
> index d02e23f..1ff80bb 100644
> --- a/hw/xen_machine_fv.c
> +++ b/hw/xen_machine_fv.c
> @@ -151,6 +151,24 @@ uint8_t *qemu_map_cache(target_phys_addr_t phys_addr, uint8_t lock)
> pentry->next = entry;
> qemu_remap_bucket(entry, address_index);
> } else if (!entry->lock) {
> + if (entry->vaddr_base && entry->paddr_index == address_index && !test_bit(address_offset>>XC_PAGE_SHIFT, entry->valid_mapping))
> + {
> + /* The page was invalid previously. Test if it is valid now and only remap if so */
> + xen_pfn_t pfn;
> + int err;
> + void *tmp_vaddr;
> +
> + pfn = phys_addr >> XC_PAGE_SHIFT;
> + tmp_vaddr = xc_map_foreign_bulk(xc_handle, domid, PROT_READ|PROT_WRITE, &pfn, &err, 1);
> + if (tmp_vaddr)
> + munmap(tmp_vaddr, PAGE_SIZE);
> +
> + if (!tmp_vaddr || err)
> + {
> + last_address_index = ~0UL;
> + return NULL;
> + }
> + }
> if (!entry->vaddr_base || entry->paddr_index != address_index || !test_bit(address_offset>>XC_PAGE_SHIFT, entry->valid_mapping))
> qemu_remap_bucket(entry, address_index);
> }
>
* RE: slow xp hibernation revisited
2011-06-04 8:05 ` James Harper
2011-06-04 8:30 ` Keir Fraser
2011-06-04 8:33 ` Keir Fraser
@ 2011-06-06 13:09 ` Stefano Stabellini
2011-06-06 13:26 ` James Harper
2 siblings, 1 reply; 19+ messages in thread
From: Stefano Stabellini @ 2011-06-06 13:09 UTC (permalink / raw)
To: James Harper; +Cc: Tim Deegan, Keir Fraser, xen-devel@lists.xensource.com
On Sat, 4 Jun 2011, James Harper wrote:
> >
> > On 04/06/2011 08:38, "James Harper" <james.harper@bendigoit.com.au> wrote:
> >
> > > Looking past the test_bit call, the next statement does another test and
> > > sets last_address_index to 0 and returns NULL. Is this just to ensure
> > > that the next access isn't just trivially accepted?
> >
> > Yes, first test is on a potentially stale bucket. Second test is on a fresh
> > bucket.
> >
>
> How about the following patch? Is munmap the correct way to unmap or is
> an IOCTL required too?
>
munmap is OK
> The exit condition is what would happen anyway after the remap is done
> and the page is still invalid.
>
> diff --git a/hw/xen_machine_fv.c b/hw/xen_machine_fv.c
> index d02e23f..1ff80bb 100644
> --- a/hw/xen_machine_fv.c
> +++ b/hw/xen_machine_fv.c
> @@ -151,6 +151,24 @@ uint8_t *qemu_map_cache(target_phys_addr_t phys_addr, uint8_t lock)
> pentry->next = entry;
> qemu_remap_bucket(entry, address_index);
> } else if (!entry->lock) {
> + if (entry->vaddr_base && entry->paddr_index == address_index && !test_bit(address_offset>>XC_PAGE_SHIFT, entry->valid_mapping))
> + {
> + /* The page was invalid previously. Test if it is valid now and only remap if so */
> + xen_pfn_t pfn;
> + int err;
> + void *tmp_vaddr;
> +
> + pfn = phys_addr >> XC_PAGE_SHIFT;
> + tmp_vaddr = xc_map_foreign_bulk(xc_handle, domid, PROT_READ|PROT_WRITE, &pfn, &err, 1);
> + if (tmp_vaddr)
> + munmap(tmp_vaddr, PAGE_SIZE);
> +
> + if (!tmp_vaddr || err)
> + {
> + last_address_index = ~0UL;
> + return NULL;
> + }
> + }
> if (!entry->vaddr_base || entry->paddr_index != address_index || !test_bit(address_offset>>XC_PAGE_SHIFT, entry->valid_mapping))
> qemu_remap_bucket(entry, address_index);
> }
This is just a matter of aesthetics, but probably something like the
following would be clearer?
if (entry->vaddr_base && entry->paddr_index == address_index) {
if (!test_bit(address_offset>>XC_PAGE_SHIFT, entry->valid_mapping)) {
/* your code + remap bucket */
}
} else {
qemu_remap_bucket(entry, address_index);
}
Could you also please send a similar patch for upstream qemu?
The code should be quite similar in this area, look at xen-mapcache.c.
* RE: slow xp hibernation revisited
2011-06-06 13:09 ` Stefano Stabellini
@ 2011-06-06 13:26 ` James Harper
2011-06-06 13:29 ` James Harper
2011-06-06 13:35 ` Stefano Stabellini
0 siblings, 2 replies; 19+ messages in thread
From: James Harper @ 2011-06-06 13:26 UTC (permalink / raw)
To: Stefano Stabellini; +Cc: Tim Deegan, Keir Fraser, xen-devel
> > The exit condition is what would happen anyway after the remap is done
> > and the page is still invalid.
> >
> > diff --git a/hw/xen_machine_fv.c b/hw/xen_machine_fv.c
> > index d02e23f..1ff80bb 100644
> > --- a/hw/xen_machine_fv.c
> > +++ b/hw/xen_machine_fv.c
> > @@ -151,6 +151,24 @@ uint8_t *qemu_map_cache(target_phys_addr_t phys_addr, uint8_t lock)
> > pentry->next = entry;
> > qemu_remap_bucket(entry, address_index);
> > } else if (!entry->lock) {
> > + if (entry->vaddr_base && entry->paddr_index == address_index && !test_bit(address_offset>>XC_PAGE_SHIFT, entry->valid_mapping))
> > + {
> > + /* The page was invalid previously. Test if it is valid now and only remap if so */
> > + xen_pfn_t pfn;
> > + int err;
> > + void *tmp_vaddr;
> > +
> > + pfn = phys_addr >> XC_PAGE_SHIFT;
> > + tmp_vaddr = xc_map_foreign_bulk(xc_handle, domid, PROT_READ|PROT_WRITE, &pfn, &err, 1);
> > + if (tmp_vaddr)
> > + munmap(tmp_vaddr, PAGE_SIZE);
> > +
> > + if (!tmp_vaddr || err)
> > + {
> > + last_address_index = ~0UL;
> > + return NULL;
> > + }
> > + }
> > if (!entry->vaddr_base || entry->paddr_index != address_index || !test_bit(address_offset>>XC_PAGE_SHIFT, entry->valid_mapping))
> > qemu_remap_bucket(entry, address_index);
> > }
>
> This is just a matter of aesthetics, but probably something like the
> following would be clearer?
>
> if (entry->vaddr_base && entry->paddr_index == address_index) {
> if (!test_bit(address_offset>>XC_PAGE_SHIFT, entry->valid_mapping)) {
> /* your code + remap bucket */
> }
> } else {
> qemu_remap_bucket(entry, address_index);
> }
>
Yes, that should work, although I think I'm too tired now for Boolean
algebra :)
Is my calculation of the pfn correct? I think I don't need to fuss
around with decoding from the bucket index and bucket offset if I'm just
doing a trial map of one page, so using phys_addr directly is correct,
right?
James
* RE: slow xp hibernation revisited
2011-06-06 13:26 ` James Harper
@ 2011-06-06 13:29 ` James Harper
2011-06-06 13:38 ` Stefano Stabellini
2011-06-06 13:35 ` Stefano Stabellini
1 sibling, 1 reply; 19+ messages in thread
From: James Harper @ 2011-06-06 13:29 UTC (permalink / raw)
To: James Harper, Stefano Stabellini; +Cc: Tim Deegan, Keir Fraser, xen-devel
> > This is just a matter of aesthetics, but probably something like the
> > following would be clearer?
> >
> > if (entry->vaddr_base && entry->paddr_index == address_index) {
> > if (!test_bit(address_offset>>XC_PAGE_SHIFT, entry->valid_mapping)) {
> > /* your code + remap bucket */
> > }
> > } else {
> > qemu_remap_bucket(entry, address_index);
> > }
> >
>
> Yes that should work, although I think I'm too tired now for Boolean
> algebra :)
>
> Is my calculation of the pfn correct? I think I don't need to fuss
> around with decoding from the bucket index and bucket offset if I'm just
> doing a trial map of one page, so using phys_addr directly is correct,
> right?
>
Is this what you had in mind?
diff --git a/hw/xen_machine_fv.c b/hw/xen_machine_fv.c
index d02e23f..3c9fc0f 100644
--- a/hw/xen_machine_fv.c
+++ b/hw/xen_machine_fv.c
@@ -151,8 +151,30 @@ uint8_t *qemu_map_cache(target_phys_addr_t phys_addr, uint8_t lock)
pentry->next = entry;
qemu_remap_bucket(entry, address_index);
} else if (!entry->lock) {
- if (!entry->vaddr_base || entry->paddr_index != address_index || !test_bit(address_offset>>XC_PAGE_SHIFT, entry->valid_mapping))
+ if (entry->vaddr_base && entry->paddr_index == address_index)
+ {
+ if (!test_bit(address_offset>>XC_PAGE_SHIFT, entry->valid_mapping))
+ {
+ /* The page was invalid previously. Test if it is valid now and only remap if so */
+ xen_pfn_t pfn;
+ int err;
+ void *tmp_vaddr;
+
+ pfn = phys_addr >> XC_PAGE_SHIFT;
+ tmp_vaddr = xc_map_foreign_bulk(xc_handle, domid, PROT_READ|PROT_WRITE, &pfn, &err, 1);
+ if (tmp_vaddr)
+ munmap(tmp_vaddr, PAGE_SIZE);
+
+ if (!tmp_vaddr || err)
+ {
+ last_address_index = ~0UL;
+ return NULL;
+ }
+ qemu_remap_bucket(entry, address_index);
+ }
+ } else {
qemu_remap_bucket(entry, address_index);
+ }
}
if (!test_bit(address_offset>>XC_PAGE_SHIFT, entry->valid_mapping)) {
* RE: slow xp hibernation revisited
2011-06-06 13:29 ` James Harper
@ 2011-06-06 13:38 ` Stefano Stabellini
0 siblings, 0 replies; 19+ messages in thread
From: Stefano Stabellini @ 2011-06-06 13:38 UTC (permalink / raw)
To: James Harper
Cc: Tim Deegan, Keir Fraser, xen-devel@lists.xensource.com,
Stefano Stabellini
On Mon, 6 Jun 2011, James Harper wrote:
> > > This is just a matter of aesthetics, but probably something like the
> > > following would be clearer?
> > >
> > > if (entry->vaddr_base && entry->paddr_index == address_index) {
> > > if (!test_bit(address_offset>>XC_PAGE_SHIFT, entry->valid_mapping)) {
> > > /* your code + remap bucket */
> > > }
> > > } else {
> > > qemu_remap_bucket(entry, address_index);
> > > }
> > >
> >
> > Yes that should work, although I think I'm too tired now for Boolean
> > algebra :)
> >
> > Is my calculation of the pfn correct? I think I don't need to fuss
> > around with decoding from the bucket index and bucket offset if I'm just
> > doing a trial map of one page, so using phys_addr directly is correct,
> > right?
> >
>
> Is this what you had in mind?
>
Yes, it is fine for me.
Could you please send another version for upstream?
Use this git tree, just to be sure you have the latest mapcache fixes:
git://xenbits.xen.org/people/sstabellini/qemu-dm.git stable
> diff --git a/hw/xen_machine_fv.c b/hw/xen_machine_fv.c
> index d02e23f..3c9fc0f 100644
> --- a/hw/xen_machine_fv.c
> +++ b/hw/xen_machine_fv.c
> @@ -151,8 +151,30 @@ uint8_t *qemu_map_cache(target_phys_addr_t phys_addr, uint8_t lock)
> pentry->next = entry;
> qemu_remap_bucket(entry, address_index);
> } else if (!entry->lock) {
> - if (!entry->vaddr_base || entry->paddr_index != address_index || !test_bit(address_offset>>XC_PAGE_SHIFT, entry->valid_mapping))
> + if (entry->vaddr_base && entry->paddr_index == address_index)
> + {
> + if (!test_bit(address_offset>>XC_PAGE_SHIFT, entry->valid_mapping))
> + {
> + /* The page was invalid previously. Test if it is valid now and only remap if so */
> + xen_pfn_t pfn;
> + int err;
> + void *tmp_vaddr;
> +
> + pfn = phys_addr >> XC_PAGE_SHIFT;
> + tmp_vaddr = xc_map_foreign_bulk(xc_handle, domid, PROT_READ|PROT_WRITE, &pfn, &err, 1);
> + if (tmp_vaddr)
> + munmap(tmp_vaddr, PAGE_SIZE);
> +
> + if (!tmp_vaddr || err)
> + {
> + last_address_index = ~0UL;
> + return NULL;
> + }
> + qemu_remap_bucket(entry, address_index);
> + }
> + } else {
> qemu_remap_bucket(entry, address_index);
> + }
> }
>
> if (!test_bit(address_offset>>XC_PAGE_SHIFT, entry->valid_mapping)) {
>
* RE: slow xp hibernation revisited
2011-06-06 13:26 ` James Harper
2011-06-06 13:29 ` James Harper
@ 2011-06-06 13:35 ` Stefano Stabellini
1 sibling, 0 replies; 19+ messages in thread
From: Stefano Stabellini @ 2011-06-06 13:35 UTC (permalink / raw)
To: James Harper
Cc: Tim Deegan, Keir Fraser, xen-devel@lists.xensource.com,
Stefano Stabellini
On Mon, 6 Jun 2011, James Harper wrote:
> Is my calculation of the pfn correct? I think I don't need to fuss
> around with decoding from the bucket index and bucket offset if I'm just
> doing a trial map of one page, so using phys_addr directly is correct,
> right?
Yes, I think it is correct.